The engine behind Amazon Alexa is one of the machine learning technologies powering a new suite of artificial intelligence tools announced by AWS this week.
At the AWS re:Invent conference in Las Vegas this week, Amazon Web Services announced three Artificial Intelligence (AI) services that make it easy for developers to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes.
Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. AWS says that Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services, so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This promises to free developers to focus on defining and building a new generation of apps that can see, hear, speak, understand, and interact with the world around them.
Amazon Web Services provided the following information:
Until now, very few developers have been able to build, deploy, and broadly scale apps with AI capabilities because doing so required access to vast amounts of data, and specialized expertise in machine learning and neural networks. Effectively applying AI involves extensive manual effort to develop and tune many different types of machine learning and deep learning algorithms (e.g. automatic speech recognition, natural language understanding, image classification), collect and clean the training data, and train and tune the machine learning models. And this process must be repeated for every object, face, voice, and language feature in an application. Amazon AI services eliminate all of this heavy lifting, making AI broadly accessible to all app developers by offering Amazon’s powerful and proven deep learning algorithms and technologies as fully managed services that any developer can access through an API call or a few clicks in the AWS Management Console. Amazon AI services make the full power of Amazon’s natural language understanding, speech recognition, text-to-speech, and image analysis technologies available at any scale, for any app, on any device, anywhere.
“The combination of better algorithms and broad access to massive amounts of data and cost-effective computing power provided by the cloud is making AI a reality for application developers. AWS is home to some of the most innovative and creative AI applications in use today,” said Raju Gulabani, VP, Databases, Analytics, and AI, AWS. “Thousands of machine learning and deep learning experts across Amazon have been developing AI technologies for years to predict what customers might like to read, to drive efficiencies in our fulfillment centers through robotics and computer vision technologies, and to give customers our AI-powered virtual assistant, Alexa. Now, we are making the technology underlying these innovations available to any developer in the form of three fully managed Amazon AI services that are easy to use, powerful, and cost effective. We are excited to see how customers use Amazon Lex, Amazon Polly, and Amazon Rekognition to build a new generation of apps that have human-like intelligence and can see, hear, speak, and interact with people and their environments.”
Intelligent conversations with Amazon Lex
Amazon Lex is a new service for building conversational interfaces using voice and text that is built on the same automatic speech recognition (ASR) technology and natural language understanding (NLU) that powers Amazon Alexa. Amazon Lex makes it easy to bring sophisticated, natural language capabilities to virtually any app. Developers can build and test bots (conversational apps that perform automated tasks like checking the weather or booking flights) directly from the AWS Management Console by typing in a few sample phrases (e.g., “find a flight,” or “book a flight”) along with instructions for getting the required parameters to complete the task (e.g., travel date and destination) and the corresponding clarifying questions to ask the user (e.g., “when do you want to travel?” and “where do you want to go?”). Amazon Lex takes care of the rest, building the language model and asking the follow-up questions needed to complete the task. Because Amazon Lex is integrated with AWS Lambda, developers can configure Amazon Lex to invoke the appropriate backend service (e.g., the flight booking service) through an AWS Lambda function. Developers can also use pre-built enterprise connectors that execute AWS Lambda functions to answer questions like “what are my top 10 accounts in Salesforce.com,” by fetching data from enterprise systems like Salesforce, Microsoft Dynamics, Marketo, Zendesk, QuickBooks and HubSpot.
Bots built using Amazon Lex can be used anywhere: from web applications, to chat and messenger apps like Slack and Facebook Messenger, or through voice in apps on mobile or connected devices. Amazon Lex handles the authentication required by different platforms and simplifies the user interface design by not requiring developers to write custom code for each platform. Moreover, developers do not have to worry about scaling their infrastructure as Amazon Lex scales automatically as traffic to a bot increases, and developers pay only for the calls made to the Amazon Lex API.
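To give a sense of the developer experience described above, here is a minimal Python sketch of sending a user's utterance to a deployed Lex bot via the AWS SDK (boto3). The bot name and alias ("BookFlight", "prod") are hypothetical placeholders, not part of the announcement; a real call requires AWS credentials and a bot you have built in the console.

```python
# Minimal sketch: post one utterance to an Amazon Lex bot with boto3.
# Bot name and alias below are hypothetical examples.

def build_lex_request(user_id, text, bot_name="BookFlight", bot_alias="prod"):
    """Assemble the parameters for the Lex runtime PostText operation."""
    return {
        "botName": bot_name,
        "botAlias": bot_alias,
        "userId": user_id,    # identifies the conversation session
        "inputText": text,    # the user's utterance, e.g. "book a flight"
    }

def send_to_bot(user_id, text):
    """Send one utterance and return Lex's reply and dialog state."""
    import boto3  # deferred so the request builder above works offline
    client = boto3.client("lex-runtime")
    resp = client.post_text(**build_lex_request(user_id, text))
    # dialogState is e.g. "ElicitSlot" while Lex asks clarifying questions
    return resp.get("message"), resp.get("dialogState")

if __name__ == "__main__":
    print(send_to_bot("user-42", "book a flight"))
```

Because Lex drives the dialog, the app only needs to loop: relay each user reply back through PostText until the dialog state indicates the intent is fulfilled.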
Capital One offers a broad spectrum of financial products and services to consumers, small businesses, and commercial clients through a variety of channels. “As a heavy user of AWS, Amazon Lex’s seamless integration with other AWS services like AWS Lambda and Amazon DynamoDB is really appealing,” said Firoze Lafeer, Chief Technology Officer, Capital One Labs, Capital One. “A highly scalable solution, Amazon Lex also offers potential to speed time to market for a new generation of voice and text interactions, such as our recently launched Capital One skill for Alexa.”
OhioHealth is a nationally recognized healthcare organization with a network of 11+ hospitals in 47 counties. “We are excited about utilizing evolving speech recognition and natural language processing technology to enhance the lives of our customers. Amazon Lex represents a great opportunity for us to deliver a new experience to our patients,” said Michael Krouse, Senior Vice President Operational Support and Chief Information Officer, OhioHealth. “Everything we do at OhioHealth is ultimately about providing the right care to our patients at the right time and in the right place. Amazon Lex’s next generation technology and the innovative applications we are developing while using it will help provide an enhanced customer experience. We are just scratching the surface of what is possible.”
HubSpot is a marketing and sales software leader. “HubSpot’s GrowthBot is an all-in-one chatbot which helps marketers and sales people be more productive by providing access to relevant data and services using a conversational interface. With GrowthBot, marketers can get help creating content, researching competitors, and monitoring their analytics. Through Amazon Lex, we’re adding sophisticated natural language processing capabilities that help GrowthBot provide a more intuitive UI for our users,” said Dharmesh Shah, Chief Technology Officer and Founder, HubSpot. “Amazon Lex lets us take advantage of advanced AI and machine learning without having to code the algorithms ourselves.”
Twilio helps businesses make communications relevant and contextual by making it possible to easily embed real-time communication and authentication capabilities directly into software applications. “Developers and businesses use Twilio to build apps that can communicate with customers in virtually every corner of the world,” said Benjamin Stein, Director of Messaging Products, Twilio. “Amazon Lex will provide developers with an easy-to-use modular architecture and comprehensive APIs to enable building and deploying conversational bots on mobile platforms. We look forward to seeing what our customers build using Twilio and Amazon Lex.”
Intelligent Speech with Amazon Polly
Amazon Polly makes it easy for developers to add natural-sounding speech capabilities to existing applications like newsreaders and e-learning platforms, or create entirely new categories of speech-enabled products – from mobile apps to devices and appliances. Amazon Polly is easy to use; developers can send text to Amazon Polly using the SDK or from within the AWS Management Console and Polly immediately returns an audio stream that can be played directly or stored in a standard audio file format. With 47 lifelike voices and support for 24 languages, developers can choose from both male and female voices with a variety of accents to make applications for users around the globe. And Amazon Polly’s fluid pronunciation of text content means applications deliver high-quality voice output across a wide variety of text formats. Amazon Polly is scalable, returning high-quality speech fast, even when converting large volumes of text to speech. With Amazon Polly, developers pay only for the text they convert, and they can cache generated speech and replay it as many times as they like with no restrictions.
The Washington Post is a Pulitzer Prize-winning media and technology company that publishes more than 1200 stories a day. “We’ve long been interested in providing audio versions of our stories, but have found that existing text-to-speech solutions are not cost-effective for the speech quality they offer,” said Joseph Price, Senior Product Manager, The Washington Post. “With the arrival of Amazon Polly and its high-quality voices, we look forward to offering readers richer and more versatile ways to experience our content.”
GoAnimate is a cloud-based, animated video creation platform, designed to allow business people with no background in animation to quickly and easily create animated videos. “Amazon Polly gives GoAnimate users the ability to immediately give voice to the characters they animate using our platform. This is especially helpful in scenarios where live voiceover is either resource or time prohibitive, such as when developing a video in many languages, or within pre-production to speed the approval process,” said Alvin Hung, CEO and Founder, GoAnimate. “The speech from Amazon Polly is integrated seamlessly with our rich set of pre-animated assets, which reinforces GoAnimate’s ease of use and affords our customers both efficiency and speed to market.”
Intelligent Image Analysis with Amazon Rekognition
Amazon Rekognition enables developers to quickly and easily build applications that analyze images, and recognize faces, objects, and scenes. Amazon Rekognition uses deep learning technologies to automatically identify objects and scenes, such as vehicles, pets, or furniture, and provides a confidence score that lets developers tag images so that application users can search for specific images using key words. Amazon Rekognition can locate faces within images and detect attributes, such as whether or not the face is smiling or the eyes are open. Amazon Rekognition also supports advanced facial analysis functionality such as face comparison and facial search. Using Rekognition, developers can build an application that measures the likelihood that faces in two images are of the same person, thereby being able to verify a user against a reference photo in near real-time. Similarly, developers can create collections of millions of faces (detected in images) and can search for a face similar to their reference image in the collection. Amazon Rekognition removes the complexity and overhead required to develop and manage expensive image processing pipelines by making comprehensive image classification, detection, and management capabilities available in a simple, cost-effective, and reliable AWS service. There are no upfront costs for Amazon Rekognition; developers pay only for the images they analyze and the facial feature vectors they store.
Redfin is a full-service brokerage that uses modern technology to help people buy and sell houses. “Redfin users love to browse images of properties on our site and mobile apps, and we want to make it easier for our users to sift through hundreds of millions of listings and images,” says Yong Huang, Director of Big Data & Analytics, Redfin. “Amazon Rekognition generates a rich set of tags directly from images of properties. This makes it relatively simple to build a smart search feature that helps customers discover houses based on their specific needs, such as a fireplace, yard, or swimming pool. And since Rekognition accepts Amazon S3 URLs, it is a huge time-saver to detect objects, scenes, and faces without having to move images around.”
SmugMug is a safe and beautiful home for photos that stores billions of beautiful photos for millions of amazing customers every day. “SmugMug customers want to spend their time making more memories, not manually managing their photo collection,” said Don MacAskill, Co-Founder, Chief Executive Officer, and Chief Geek, SmugMug. “Amazon Rekognition will allow us to automatically identify the content in customers’ photos, unlocking a host of features that will allow them and their visitors to have more time to focus on enjoying life and celebrating their photos.”
Deep Learning and AI on AWS
Amazon Polly is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Dublin) Regions, and will expand to additional Regions in the coming months. Amazon Rekognition is available in US East (N. Virginia), US West (Oregon), and EU (Dublin) Regions, and will expand to additional Regions in the coming months. Customers can sign up for the Amazon Lex preview starting today.
In addition to these services, AWS recently announced it is investing significantly in MXNet, an open source distributed deep learning framework, initially developed by Carnegie Mellon University and other top universities, by contributing code and improving the developer experience. MXNet will enable machine learning scientists to build scalable deep learning models that can significantly reduce the training time for their applications. For more information on AWS support for MXNet, visit: http://www.allthingsdistributed.com/2016/11/mxnet-default-framework-deep-learning-aws.html.
AWS also makes it easy for developers to run their own deep learning and machine learning workloads to build their own AI platform on top of AWS. Amazon Elastic Compute Cloud (Amazon EC2), with its broad set of instance types and GPUs with large amounts of memory, is ideal for deep learning training. P2 instances, launched in September 2016, were designed for large-scale machine learning and deep learning with up to 8 NVIDIA Tesla K80 Accelerators, each running a pair of NVIDIA GK210 GPUs that have 12 GiB of memory and 2,496 parallel processing cores. And, customers can make use of AWS’s Deep Learning AMI, which contains six pre-configured and pre-tested deep learning frameworks including all dependencies, NVIDIA drivers, and data science tools like Jupyter and Anaconda. In addition, AWS CloudFormation templates are available for training deep neural networks at scale in just a few clicks.
Bring your network with you
At last week’s Critical Communications World, Motorola unveiled the LXN 500 LTE Ultra Portable Network Infrastructure. It allows rescue personnel to set up dedicated LTE networks for communication in an emergency, writes SEAN BACHER.
In the event of an emergency, communications are absolutely critical, but the availability of public phone networks is often limited by weather conditions or congestion.
Motorola realised that this caused a problem when trying to get rescue personnel to those in need, and so developed its LXN 500 LTE Ultra Portable Network Infrastructure. The product is the smallest and lightest fully powered broadband network to date and allows the first person on the scene to set up an LTE network in a matter of minutes, allowing other rescue team members to communicate with each other.
“The LXN 500 weighs six kilograms and comes in a backpack with two batteries. It offers a range of 1km and allows up to 100 connections at the same time. However, in many situations the disaster area may span more than 1km which is why they can be connected to each other in a mesh formation,” says Tunde Williams, Head of Field and Solutions Marketing EMEA, Motorola Solutions.
The LXN 500 solution offers communication through two-way radios, and includes mapping, messaging, push-to-talk, video and imaging features onboard, thus eliminating the need for any additional hardware.
Data collected on the device can then be sent through to a central control room where an operator can deploy additional rescue personnel where needed. Once video is streamed into the control room, realtime analytics and augmented reality can be applied to it to help predict where future problem points may arise. Video images and other multimedia can also be made available for rescuers on the ground.
“Although the LXN 500 was designed for seamless communications between on-the-ground rescue teams and their respective control rooms, it has made its way into the police force and into places where there is little or no cellular signal, such as oil rigs,” says Williams.
He gave a hostage scenario: “In the event of a hostage situation, it is important for the police to relay information in realtime to ensure no one is hurt. However, the perpetrators often use their mobile phones to try and foil any rescue attempts. Should the police have the correct partnerships in place, they are able to disable cellular towers in the vicinity, preventing any incoming or outgoing calls on a public network and allowing the police to get their job done quickly and more effectively.”
By disabling any public networks in the area, police are also able to prevent any cellular-detonated bombs from going off while still staying in touch with each other, he says.
The LXN 500 supports a wide range of mission-critical use cases and is sure to transform communications and improve safety for first responders and the people they are trying to protect.
Kaspersky moves to Switzerland
As part of its Global Transparency Initiative, Kaspersky Lab is adapting its infrastructure to move a number of core processes from Russia to Switzerland.
This includes customer data storage and processing for most regions, as well as software assembly, including threat detection updates. To ensure full transparency and integrity, Kaspersky Lab is arranging for this activity to be supervised by an independent third party, also based in Switzerland.
Global transparency and collaboration for an ultra-connected world
The Global Transparency Initiative, announced in October 2017, reflects Kaspersky Lab’s ongoing commitment to assuring the integrity and trustworthiness of its products. The new measures are the next steps in the development of the initiative, but they also reflect the company’s commitment to working with others to address the growing challenges of industry fragmentation and a breakdown of trust. Trust is essential in cybersecurity, and Kaspersky Lab understands that trust is not a given; it must be repeatedly earned through transparency and accountability.
The new measures comprise the move of data storage and processing for a number of regions, the relocation of software assembly and the opening of the first Transparency Center.
Relocation of customer data storage and processing
By the end of 2019, Kaspersky Lab will have established a data center in Zurich; in this facility, it will store and process all information for users in Europe, North America, Singapore, Australia, Japan and South Korea, with more countries to follow. This information is shared voluntarily by users with the Kaspersky Security Network (KSN), an advanced, cloud-based system that automatically processes cyberthreat-related data.
Relocation of software assembly
Kaspersky Lab will relocate to Zurich its ‘software build conveyer’ — a set of programming tools used to assemble ready-to-use software out of source code. Before the end of 2018, Kaspersky Lab products and threat detection rule databases (AV databases) will start to be assembled and signed with a digital signature in Switzerland, before being distributed to the endpoints of customers worldwide. The relocation will ensure that all newly assembled software can be verified by an independent organisation and show that software builds and updates received by customers match the source code provided for audit.
Establishment of the first Transparency Center
The source code of Kaspersky Lab products and software updates will be available for review by responsible stakeholders in a dedicated Transparency Center that will also be hosted in Switzerland and is expected to open this year. This approach will further show that generation after generation of Kaspersky Lab products were built and used for one purpose only: protecting the company’s customers from cyberthreats.
Independent supervision and review
Kaspersky Lab is arranging for the data storage and processing, software assembly, and source code to be independently supervised by a third party qualified to conduct technical software reviews. Since transparency and trust are becoming universal requirements across the cybersecurity industry, Kaspersky Lab supports the creation of a new, non-profit organisation to take on this responsibility, not just for the company, but for other partners and members who wish to join.