AWS launches 5 ML services, deep learning camera

At the AWS Re:Invent conference in Las Vegas last week, Amazon Web Services announced five new machine learning services and a deep learning-enabled wireless video camera for developers.

Amazon SageMaker is a fully managed service for developers and data scientists to quickly build, train, deploy, and manage their own machine learning models. AWS also introduced AWS DeepLens, a deep learning-enabled wireless video camera that can run real-time computer vision models to give developers hands-on experience with machine learning. And, AWS announced four new application services that allow developers to build applications that emulate human-like cognition: Amazon Transcribe for converting speech to text; Amazon Translate for translating text between languages; Amazon Comprehend for understanding natural language; and, Amazon Rekognition Video, a new computer vision service for analyzing videos in batches and in real-time. To learn more about AWS’s machine learning services, visit: https://aws.amazon.com/machine-learning.

Amazon SageMaker and AWS DeepLens make machine learning accessible to all developers

Today, implementing machine learning is complex, involves a great deal of trial and error, and requires specialized skills. Developers and data scientists must first visualize, transform, and pre-process data to get it into a format that an algorithm can use to train a model. Even simple models can require massive amounts of compute power and a great deal of time to train, and companies may need to hire dedicated teams to manage training environments that span multiple GPU-enabled servers. All of the phases of training a model—from choosing and optimizing an algorithm, to tuning the millions of parameters that impact the model’s accuracy—involve a great deal of manual effort and guesswork. Then, deploying a trained model within an application requires a different set of specialized skills in application design and distributed systems. As data sets and variables grow, customers have to repeat this process again and again as models become outdated and need to be continuously retrained to learn and evolve from new information. All of this takes a lot of specialized expertise, access to massive amounts of compute power and storage, and a great deal of time. To date, machine learning has been out of reach for most developers.

Amazon SageMaker is a fully managed service that removes the heavy lifting and guesswork from each step of the machine learning process. Amazon SageMaker makes model building and training easier by providing pre-built development notebooks, popular machine learning algorithms optimized for petabyte-scale datasets, and automatic model tuning. Amazon SageMaker also dramatically simplifies and accelerates the training process, automatically provisioning and managing the infrastructure to both train models and run inference to make predictions using these models. AWS DeepLens was designed from the ground-up to help developers get hands-on experience in building, training, and deploying models by pairing a physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services to support learning and experimentation.

“Our original vision for AWS was to enable any individual in his or her dorm room or garage to have access to the same technology, tools, scale, and cost structure as the largest companies in the world. Our vision for machine learning is no different,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We want all developers to be able to use machine learning much more expansively and successfully, irrespective of their machine learning skill level. Amazon SageMaker removes a lot of the muck and complexity involved in machine learning to allow developers to easily get started and become competent in building, training, and deploying models.”

With Amazon SageMaker developers can:

  • Easily build machine learning models with performance-optimized algorithms: Amazon SageMaker provides a fully managed machine learning notebook environment that makes it easy for developers to explore and visualize data they have stored in Amazon Simple Storage Service (Amazon S3), and transform it using all of the popular libraries, frameworks, and interfaces. Amazon SageMaker includes ten of the most common machine learning algorithms (e.g. k-means clustering, factorization machines, linear regression, and principal component analysis), which AWS has optimized to run up to ten times faster than standard implementations. Developers simply choose an algorithm and specify their data source, and Amazon SageMaker installs and configures the underlying drivers and frameworks. Amazon SageMaker includes native integration with TensorFlow and Apache MXNet, with additional framework support coming soon. Developers can also specify any framework and algorithm they choose by uploading them into a container on the Amazon EC2 Container Registry.
  • Fast, fully managed training: Amazon SageMaker makes training easy. Developers simply select the type and quantity of Amazon EC2 instances and specify the location of their data. Amazon SageMaker sets up the distributed compute cluster, performs the training, outputs the result to Amazon S3, and tears down the cluster when complete. Amazon SageMaker can automatically tune models with hyper-parameter optimization, adjusting thousands of different combinations of algorithm parameters to arrive at the most accurate predictions.
  • Deploy models into production with one click: Amazon SageMaker takes care of launching instances, deploying the model, and setting up a secure HTTPS end-point for the application to achieve high throughput and low latency predictions, as well as auto-scaling Amazon EC2 instances across multiple availability zones (AZs). It also provides native support for A/B testing. Once in production, Amazon SageMaker eliminates the heavy lifting involved in managing machine learning infrastructure, performing health checks, applying security patches, and conducting other routine maintenance.
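The build, train, and deploy flow described in the list above maps closely onto the SageMaker Python SDK. The following is a minimal sketch of that workflow, not AWS’s reference code: the IAM role, S3 bucket, instance types, and the k-means example are illustrative assumptions, and parameter names vary slightly between SDK versions.

import numpy as np
from sagemaker import KMeans

# Placeholder IAM role and S3 output location -- illustrative assumptions only.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
output_path = "s3://example-bucket/kmeans-output"

# Toy data standing in for a dataset already staged in Amazon S3.
training_data = np.random.rand(10000, 50).astype("float32")

# One of the built-in, performance-optimized algorithms (k-means clustering).
kmeans = KMeans(
    role=role,
    train_instance_count=2,              # SageMaker provisions and later tears down the cluster
    train_instance_type="ml.c4.xlarge",
    output_path=output_path,
    k=10,
)

# Fully managed, distributed training; model artifacts land in the S3 output path.
kmeans.fit(kmeans.record_set(training_data))

# One-call deployment to a managed HTTPS endpoint for real-time predictions.
predictor = kmeans.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
print(predictor.predict(training_data[:5]))

In the same SDK, a custom algorithm is supplied by pointing a generic estimator at a container image in the Amazon EC2 Container Registry rather than at a built-in algorithm.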

With AWS DeepLens, developers can:

  • Get hands-on machine learning experience: AWS DeepLens is the first of its kind: a deep learning-enabled, fully programmable video camera, designed to put deep learning into the hands of any developer, literally. AWS DeepLens includes an HD video camera with on-board compute capable of running sophisticated deep learning computer vision models in real-time. The custom-designed hardware, capable of running over 100 billion deep learning operations per second, comes with sample projects, example code, and pre-trained models so even developers with no machine learning experience can run their first deep learning model in less than ten minutes. Developers can extend these tutorials to create their own custom, deep learning-powered projects with AWS Lambda functions. For example, AWS DeepLens could be programmed to recognize the numbers on a license plate and trigger a home automation system to open a garage door, or AWS DeepLens could recognize when the dog is on the couch and send a text to its owner.
  • Train models in the cloud and deploy them to AWS DeepLens: AWS DeepLens integrates with Amazon SageMaker so that developers can train their models in the cloud with Amazon SageMaker and then deploy them to AWS DeepLens with just a few clicks in the AWS Management Console. The camera runs the models in real time on the device.
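The license-plate example above implies a small piece of glue logic between the on-device model and the home automation system. The sketch below shows one hedged way that AWS Lambda handler might look in Python with boto3; the event shape, plate allow-list, and MQTT topic are hypothetical and not part of the DeepLens product.

import json
import boto3

# Hypothetical allow-list of plates permitted to open the garage.
ALLOWED_PLATES = {"CA123GP"}

# AWS IoT Data Plane client, used to publish an MQTT message that a
# home-automation controller could subscribe to.
iot = boto3.client("iot-data")

def handler(event, context):
    # Assumes the on-device inference code invokes this Lambda with a
    # payload like {"plate": "CA123GP"} once a plate has been recognized.
    plate = event.get("plate", "")
    if plate in ALLOWED_PLATES:
        iot.publish(
            topic="home/garage/open",   # hypothetical MQTT topic
            qos=1,
            payload=json.dumps({"plate": plate, "action": "open"}),
        )
        return {"opened": True, "plate": plate}
    return {"opened": False}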

“We’ve deepened our relationship with AWS, adding them as an Official Technology Provider of the NFL and are excited to use Amazon SageMaker for our next-generation stats initiative,” said Michelle McKenna-Doyle, SVP and CIO, National Football League. “With Amazon SageMaker in our toolkit, our developers can stop worrying about the undifferentiated heavy lifting of machine learning, and start adding new visualizations, stats, and experiences that our fans will adore.”

As the world’s leading provider of high-resolution Earth imagery, data and analysis, DigitalGlobe works with enormous amounts of data every day. “DigitalGlobe is making it easier for people to find, access, and run compute against our 100PB image library which is stored in the AWS cloud in order to apply deep learning to satellite imagery,” said Dr. Walter Scott, Chief Technology Officer of Maxar Technologies and founder of DigitalGlobe. “We plan to use Amazon SageMaker to train models against petabytes of earth observation imagery datasets using hosted Jupyter notebooks, so DigitalGlobe’s Geospatial Big Data Platform (GBDX) users can just push a button, create a model, and deploy it all within one scalable distributed environment at scale.”

Hotels.com is a leading global lodging brand operating 90 localized websites in 41 languages. “At Hotels.com, we are always interested in ways to move faster, to leverage the latest technologies and stay innovative,” says Matt Fryer, VP and Chief Data Science Officer of Hotels.com and Expedia Affiliate Network. “With Amazon SageMaker, the distributed training, optimized algorithms, and built-in hyperparameter features should allow my team to quickly build more accurate models on our largest data sets, reducing the considerable time it takes us to move a model to production. It is simply an API call. Amazon SageMaker will significantly reduce the complexity of machine learning, enabling us to create a better experience for our customers, fast.”

Intuit recognizes the enormous value and power of machine learning to help its customers make better decisions and streamline their work, every day. “With Amazon SageMaker, we can accelerate our artificial intelligence initiatives at scale by building and deploying our algorithms on the platform,” says Ashok Srivastava, Chief Data Officer at Intuit. “We will create novel large-scale machine learning and AI algorithms and deploy them on this platform to solve complex problems that can power prosperity for our customers.”

Thomson Reuters is the world’s leading source of news and information for professional markets. “For over 25 years we have been developing advanced machine learning capabilities to mine, connect, enhance, organize and deliver information to our customers, successfully allowing them to simplify and derive more value from their work,” said Khalid Al-Kofahi, who leads Thomson Reuters center for AI and Cognitive Computing. “Working with Amazon SageMaker enabled us to design a natural language processing capability in the context of a question-answering application. Our solution required several iterations of deep learning configurations at scale using the capabilities of Amazon SageMaker.”

“Deep learning is something that our students find really inspiring. It seems like every week now it is leading to new breakthroughs in robotics, language, and biology. What I like about AWS DeepLens is that it seems likely to democratize access to experimenting with machine learning,” said Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University. “Campuses like ours are going to be really excited to bring AWS DeepLens into our classrooms and labs to help accelerate the process of getting students into real-world deep learning.”

New speech, language, and vision services allow app developers to easily build intelligent applications

For those developers who are not experts in machine learning, but are interested in using these technologies to build a new class of apps that exhibit human-like intelligence, Amazon Transcribe, Amazon Translate, Amazon Comprehend, and Amazon Rekognition Video provide high-quality, high-accuracy machine learning services that are scalable and cost-effective.

“Today, customers are storing more data than ever before, using Amazon Simple Storage Service (Amazon S3) as their scalable, reliable, and secure data lake. These customers want to put this data to use for their organization and customers, and to do so they need easy-to-use tools and technologies to unlock the intelligence residing within this data,” said Swami Sivasubramanian, VP of Machine Learning, AWS. “We’re excited to deliver four new machine learning application services that will help developers immediately start creating a new generation of intelligent apps that can see, hear, speak, and interact with the world around them.”

  • Amazon Transcribe (available in preview) converts speech to text, allowing developers to turn audio files stored in Amazon S3 into accurate, fully punctuated text. Amazon Transcribe has been trained to handle even low fidelity audio, such as contact center recordings, with a high degree of accuracy. Amazon Transcribe can generate a time stamp for every word so that developers can precisely align the text with the source file. Today, Amazon Transcribe supports English and Spanish with more languages to follow. In the coming months, Amazon Transcribe will have the ability to recognize multiple speakers in an audio file, and will also allow developers to upload custom vocabulary for more accurate transcription for those words.
  • Amazon Translate (available in preview) uses state-of-the-art neural machine translation techniques to provide highly accurate translation of text from one language to another. Amazon Translate can translate short or long-form text and supports translation between English and six other languages (Arabic, French, German, Portuguese, Simplified Chinese, and Spanish), with many more to come in 2018.
  • Amazon Comprehend (available today) can understand natural language text from documents, social network posts, articles, or any other textual data stored in AWS. Amazon Comprehend uses deep learning techniques to identify text entities (e.g. people, places, dates, organizations), the language the text is written in, the sentiment expressed in the text, and key phrases with concepts and adjectives, such as ‘beautiful,’ ‘warm,’ or ‘sunny.’ Amazon Comprehend has been trained on a wide range of datasets, including product descriptions and customer reviews from Amazon.com, to build best-in-class language models that extract key insights from text. It also has a topic modeling capability that helps applications extract common topics from a corpus of documents. Amazon Comprehend integrates with AWS Glue to enable end-to-end analytics of text data stored in Amazon S3, Amazon Redshift, Amazon Relational Database Service (Amazon RDS), Amazon DynamoDB, or other popular Amazon data sources.
  • Amazon Rekognition Video (available today) can track people, detect activities, and recognize objects, faces, celebrities, and inappropriate content in millions of videos stored in Amazon S3. It also provides real-time facial recognition across millions of faces for live stream videos. Amazon Rekognition Video’s easy-to-use API is powered by computer vision models that are trained to accurately detect thousands of objects and activities, and extract motion-based context from both live video streams and video content stored in Amazon S3. Amazon Rekognition Video can automatically tag specific sections of video with labels and locations (e.g. beach, sun, child), detect activities (e.g. running, jumping, swimming), detect, recognize, and analyze faces, and track multiple people, even if they are partially hidden from view in the video.
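For developers, each of these services is exposed through the standard AWS SDKs. The sketch below uses boto3 from Python to call Amazon Comprehend and Amazon Translate on a sample sentence; the region, sample text, and language codes are illustrative, and Amazon Translate was still in preview at the time of this announcement.

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

text = "The hotel was beautiful and the staff were warm and helpful."

# Amazon Comprehend: entities, key phrases, and sentiment from raw text.
entities = comprehend.detect_entities(Text=text, LanguageCode="en")
key_phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Amazon Translate: English to one of the six initially supported languages.
translated = translate.translate_text(
    Text=text, SourceLanguageCode="en", TargetLanguageCode="fr"
)
print(translated["TranslatedText"])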

“At Isentia, we built our media intelligence software in a single language. To expand our capabilities and address the diverse language needs of our customers, we needed translation support to generate and deliver valuable insights from non-English media content. Having tried multiple machine translation services in the past, we are impressed with how easy it is to integrate Amazon Translate into our pipeline and its ability to scale to handle any volume we throw at it. The translations also came out more accurate and nuanced and met our high standards for clients,” says Andrea Walsh, CIO at Isentia.

“RingDNA is an end-to-end communications platform for sales teams. Hundreds of enterprise organizations use RingDNA to dramatically increase productivity, engage in smarter sales conversations, gain predictive sales insights, improve their win rate and coach reps to succeed faster than ever before. A critical component of RingDNA’s Conversation AI requires best of breed speech-to-text to deliver transcriptions of every phone call. RingDNA is excited about Amazon Transcribe since it provides high-quality speech recognition at scale, helping us to better transcribe every call to text,” said Howard Brown, CEO and Founder at RingDNA.

“The Post strives to give its nearly 100 million readers the best experience possible and relevant content recommendations are a key part of that mission,” said Dr. Sam Han (PhD), Director of Data Science at The Washington Post. “With Amazon Comprehend, we can leverage the continuously-trained NLP capabilities like Keyphrase and Topic APIs to potentially allow us to provide even better content personalization, SEO, and ad targeting capabilities.”

“Building intelligent applications to help customers drive their businesses is our entire focus,” said Manjunath Ganimasty, V.P. Software Development with Infor. “Amazon Comprehend allows us to analyze unstructured text within search, chat, and documents to understand intent and sentiment. This capability enables us to train our Coleman AI skillset, and also provide a truly focused and tailored search experience for our customers.”

“Natural language processing is hard. We’ve looked at everything from closed to open-source solutions to analyze and make sense of our data, but couldn’t find a practical solution that would allow us to stay agile, scalable, and cost effective. Amazon Comprehend provides a continuously-trained model allowing us to focus on our business and innovate in Supply Chain Management (SCM),” said Minh Chau, Head of Engineering at Elementum.

“The City of Orlando is excited to work with Amazon to pilot the latest in public safety software through a unique, first-of-its-kind public-private partnership,” said John Mina, Police Chief, City of Orlando. “Through the pilot, Orlando will utilize Amazon’s Rekognition Video and Acuity technology in a way that will use existing City resources to provide real-time detection and notification of persons-of-interest, further increasing public safety and operational efficiency opportunities for the City of Orlando and other cities across the nation.”

“The analytic features of Amazon Rekognition Video are impressive. They can, for example, help with search of historical and real-time video for persons-of-interest, providing efficiencies and awareness by automating this typically human task,” said Dan Law, Chief Data Scientist at Motorola.

Acer gaming beast escapes

Acer this week unveiled two notebooks that take portable gaming to new extremes.

Acer unveiled two new Predator Helios gaming notebooks this week at the next@acer global press conference in New York. They include the powerful Predator Helios 500, featuring up to 8th Gen Intel Core i9+ processors, and the Predator Helios 300 Special Edition that includes upgraded specs from its predecessor and a distinctive white chassis. Both feature VR-Ready performance, advanced thermal technologies, and blazing-fast connectivity.

“We’ve expanded our Predator Helios gaming notebook line in response to popular demand from gamers seeking extreme performance on the go,” said Jerry Kao, President of IT Products Business, Acer. “The Predator Helios 500 and Helios 300 gaming notebooks feature Acer’s proprietary thermal technologies and powerful components that, coupled with our award-winning software, deliver unparalleled gaming experiences.”

“The 8th Gen Intel Core i9+ processor for gaming and creation laptops is the highest performance Intel has ever delivered for this class of devices; purpose built for enthusiasts who demand premium gaming experiences whether at home or on the go,” said Steve Long, Vice President and General Manager, Client Computing Group Sales and Marketing, Intel. “Intel and Acer’s long relationship has produced amazing products over the years, and the new Acer Predator Helios gaming notebooks are powerful examples of what’s possible with this unprecedented level of performance.”

Predator Helios 500 is a gaming beast featuring overclocking, 4K 144 Hz panels

Designed for extreme gamers, the Predator Helios 500 is a gaming beast. It features up to overclockable 8th Gen Intel Core i9+ processors and overclockable GeForce GTX 1070 graphics. Intel Optane memory increases responsiveness and shortens load times, while ultra-fast NVMe PCIe SSDs, Killer DoubleShot Pro networking, and up to 64GB of memory keep the action going, making the Helios 500 the ideal gaming notebook for graphics-intensive AAA titles and live streaming.

Top-notch visuals are delivered on bright, vibrant 4K UHD or FHD IPS 17.3-inch displays with 144Hz refresh rates for blur- and tear-free gameplay. NVIDIA G-SYNC technology is supported on both the built-in display and external monitors, allowing for buttery-smooth imagery without tearing or stuttering. For those looking for maximum gaming immersion, dual Thunderbolt 3 ports, DisplayPort, and HDMI 2.0 ports support up to three external monitors. Two speakers, a subwoofer, and Acer TrueHarmony and Waves MAXXAudio technology deliver incredible sound and hyper-realistic 3D audio using Waves Nx.

The Helios 500 stays cool with two of Acer’s proprietary AeroBlade 3D metal fans, and five heat pipes that distribute cool air to the machine’s key components while simultaneously releasing hot air. Fan speed can be controlled and customized through the PredatorSense app.

A backlit RGB keyboard offers four lighting zones with support for up to 16.8 million colors. Anti-ghosting technology provides the ultimate control for executing complex commands and combos, which can be set up via five dedicated programmable keys.

Acer’s PredatorSense app can be used to control and monitor the notebook’s vitals from one central interface, including overclocking, lighting, hotkeys, temperature, and fan control.

Predator Helios 300 Special Edition brings a sophisticated design twist to gaming notebooks

Acer’s budget-friendly Helios 300 gaming line sees the addition of a Special Edition model featuring an all-white aluminum chassis accented with gold trim, an unusually chic design for gaming notebooks.

The Helios 300 Special Edition (PH315-51) allows for ultra-smooth gameplay via its 15.6-inch FHD IPS display with an upgraded 144Hz refresh rate. The rapid refresh rate shortens frame rendering time and lowers input lag to give gamers an excellent in-game experience. It’s powered by up to an 8th Gen Intel Core i7+ processor, overclockable GeForce GTX 1060 graphics, up to a 512 GB PCIe Gen 3 NVMe solid state drive, and up to a 2 TB hard disk drive.

The Helios 300 Special Edition also comes equipped with up to 16 GB of DDR4 memory, and is upgradable to 32GB. Intel Optane memory speeds up load times for games and applications, accelerates access to information, and improves overall system responsiveness. In addition, Gigabit Ethernet provides fast wired connections, while Gigabit Wi-Fi is provided by the latest Intel Wireless-AC 9560, which delivers up to 1.73Gbps throughput when using 160 MHz channels (2×2 802.11ac, dual-band 2.4GHz and 5GHz).

The Helios 300 Special Edition also includes two of Acer’s ultrathin (0.1 mm) all-metal AeroBlade 3D fans designed with advanced aerodynamics and superior airflow to keep the system cool. They can be controlled with Acer’s PredatorSense app, which offers three usage modes:

1. Coolboost mode: for heavy loading games, rendering, streaming, and extended video consumption
2. Normal mode: for productivity tools like Microsoft Office
3. Silent mode: for web browsing and online chatting

Price and Availability

Predator Helios 500 will be available in South Africa in June 2018, starting at R34 999.00.

Helios 300 Special Edition will be available in South Africa in August 2018. Exact pricing will be communicated closer to the time.

LG G7 arrives in SA

LG this week introduced South Africa to its latest premium smartphone, the LG G7 ThinQ, focused on bringing useful and convenient AI features to the smartphone experience.

Powered by the latest Qualcomm Snapdragon 845 Mobile Platform, the LG G7 ThinQ offers 4GB of RAM and 64GB of internal storage to run demanding tasks and apps. It is equipped with a 6.1-inch Super Bright Display, but the LG G7 ThinQ remains compact enough to use with one hand.

Sporting a new design aesthetic for the G series, the polished metal rim gives the LG G7 ThinQ a sleeker, more refined look, complemented by Gorilla Glass 5 on both the front and the back for enhanced durability. Rated IP68 for dust and water resistance, the LG G7 ThinQ has also been awarded MIL-STD 810G certification, having been subjected to a range of extreme temperature and environment tests designed by the United States military.

The LG G7 ThinQ has an 8MP camera up front, rendering clear and natural selfies, with two 16MP cameras at the back that deliver higher resolution photos with more detail, as well as a Super Wide Angle configuration.

As with other leading brands, LG has evolved its signature camera by including AI functionality. The AI CAM offers 19 shooting modes for intelligence-optimised shots. Users can also improve their photos by choosing from an additional three effect options should the AI CAM recommendation not suit their taste.

The new Super Bright Camera captures images that are up to four times brighter than typical photos shot in dim light. Through the combination of pixel binning and software processing, the AI algorithm adjusts the camera settings automatically when shooting in low light.

Live Photo Mode records one second before and after the shutter is pressed for snippets of unexpected moments or expressions that would normally be missed. Stickers uses face recognition to generate fun 2D and 3D overlays, such as sunglasses and headbands, that can be viewed directly on the display.

New to the G series is Portrait Mode, which generates professional-looking shots with out-of-focus backgrounds. This effect can be generated using both front and rear standard lenses as well as the rear Super Wide Angle lens.

LG G7 ThinQ offers further AI functionality with the inclusion of Google Lens features. Google Lens is a new way to search using AI and computer vision. Available within Google Assistant and Google Photos, it allows users to access more information on objects such as landmarks, plants, animals, and books. It can also identify text, letting users visit websites, add business cards to contacts, add events to the calendar, or look up an item on a restaurant menu.

A button just below the volume keys launches the AI functionality. A single tap of this button launches the Google Assistant, while two quick taps launch Google Lens. Users can also hold down the button to start talking to the Google Assistant without repeating the OK Google command.

With Super Far Field Voice Recognition (SFFVR) and the highly sensitive LG G7 ThinQ microphone, the Google Assistant can recognise voice commands from up to five meters away. SFFVR is able to separate commands from background noise, making the LG G7 ThinQ an alternative to a home AI speaker, even when the TV is on. Commands for the Google Assistant have been expanded in the LG G7 ThinQ so users can get more done with their voice alone.

“The LG G7 ThinQ is strongly focused on the fundamentals and its launch marks a new chapter for our company,” said Deon Prinsloo, General Manager for Mobile Communication, LG Electronics S.A Pty Ltd. “Through the combination of personalised and useful AI functionalities with meaningful smartphone features, this is LG’s most convenient and in-the-moment smartphone yet.”

Key Specifications

  • Mobile Platform: Qualcomm Snapdragon 845 Mobile Platform
  • Display: 6.1-inch QHD+ 19.5:9 FullVision Super Bright Display (3120 x 1440 / 564ppi)
  • Memory:
    • LG G7 ThinQ: 4GB LPDDR4x RAM / 64GB UFS 2.1 ROM / MicroSD (up to 2TB)
  • Camera:
    • Rear Dual: 16MP Super Wide Angle (F1.9 / 107°) / 16MP Standard Angle (F1.6 / 71°)
    • Front: 8MP Wide Angle (F1.9 / 80°)
  • Battery: 3000mAh
  • OS: Android 8.0 Oreo
  • Size: 153.2 x 71.9 x 7.9mm
  • Weight: 162g
  • Connectivity: Wi-Fi 802.11 a, b, g, n, ac / Bluetooth 5.0 BLE / NFC / USB Type-C 2.0 (3.1 compatible)
  • Colours: New Aurora Black
  • Others: Super Bright Display / New Second Screen / AI CAM / Super Bright Camera / Super Far Field Voice Recognition / Boombox Speaker / Google Lens / AI Haptic / Hi-Fi Quad DAC / DTS:X 3D Surround Sound / IP68 Water and Dust Resistance / HDR10 / Google Assistant Key / Face Recognition / Fingerprint Sensor / Qualcomm Quick Charge 3.0 Technology / Wireless Charging / MIL-STD 810G Compliant / FM Radio
Copyright © 2018 World Wide Worx