Voice changes the game

Voice is a powerful way to interact with an interface because it is spontaneous, intuitive, and lets us engage with technology in the most natural way possible, which is why Amazon is investing heavily in its voice services, writes WERNER VOGELS, CTO at Amazon.com.

At Amazon, we are heavily invested in machine learning (ML), and are developing new tools to help developers quickly and easily build, train, and deploy ML models. The power of ML is in its ability to unlock a new set of capabilities that create value for consumers and businesses. A great example of this is the way we are using ML to deal with one of the world’s biggest and most tangled datasets: human speech.

Voice-driven conversation has always been the most natural way for us to communicate. Conversations are personal and they convey context, which helps us to understand each other. Conversations continue over time, and develop history, which in turn builds richer context. The challenge was that technology wasn’t capable of processing real human conversation.

The interfaces to our digital systems have been dictated by the capabilities of our computers: keyboards, mice, graphical interfaces, remotes, and touch screens. Touch made things easier; it let us tap on screens to get the app that we wanted. But what if touch isn’t possible or practical? Even when it is, the proliferation of apps has created a sort of “app fatigue”. This essentially forces us to hunt for the app that we need, and often results in us not using many of the apps that we already have. None of these approaches is particularly natural. As a result, they fail to deliver a truly seamless and customer-centric experience that integrates our digital systems into our analog lives.

Voice becomes a game changer

Using your voice is powerful because it’s spontaneous, intuitive, and enables you to interact with technology in the most natural way possible. It may well be considered the universal user interface. When you use your voice, you don’t need to adapt and learn a new user interface. Voice interfaces don’t need to be application-centric, so you don’t have to find an app to accomplish the task that you want. All of these benefits make voice a game changer for interacting with all kinds of digital systems.

Until two or three years ago, we did not have the capability to process voice at scale and in real time. The availability of large-scale voice training data, advances in software such as the Caffe, MXNet and TensorFlow processing engines, and the rise of massively parallel compute engines with low-latency memory access, such as the Amazon EC2 P3 instances, have made voice processing at scale a reality.

Today, the power of voice is most commonly used in the home or in cars to do things like play music, shop, control smart home features, and get directions. A variety of digital assistants are playing a big role here. When we released Amazon Alexa, our intelligent, cloud-based voice service, we built its voice technology on the AWS Natural Language Processing platform powered by ML algorithms. Alexa is constantly learning, and she has tens of thousands of skills that extend beyond the consumer space. But given the stickiness of voice, we think even more scenarios can be unlocked at work.

Helping more people and organizations use voice

People interact with many different applications and systems at work. So why aren’t voice interfaces being used to enable these scenarios? One impediment has been the difficulty of managing voice-controlled interactions and devices at scale, and we are working to address this with Alexa for Business. Alexa for Business helps companies voice-enable their spaces, corporate applications, people, and customers.

To use voice in the workplace, you really need three things: a management layer, which is where Alexa for Business plays; a set of APIs to integrate with your IT apps and infrastructure; and voice-enabled devices everywhere.

Voice interfaces are a paradigm shift, and we’ve worked to remove the heavy lifting associated with integrating Alexa voice capabilities into more devices. For example, the Alexa Voice Service (AVS), a cloud-based service that provides APIs for interfacing with Alexa, gives products built with it access to Alexa’s capabilities and skills.

We’re also making it easy to build skills for the things you want to do. This is where the Alexa Skills Kit and the Alexa Skills Store can help both companies and developers. Some organizations may want to control who has access to the skills that they build. In those cases, Alexa for Business allows them to create private skills that can only be accessed by employees in their organization. In just a few months, our customers have built hundreds of private skills that help employees do everything from getting internal news briefings to asking what time their help desk closes.
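
To make that concrete, here is a minimal sketch of what the backend of such a private skill could look like: an AWS Lambda handler written against the Alexa Skills Kit request/response format, answering a question about help-desk hours. The intent name, the spoken answer and the closing time are hypothetical; a real skill also needs an interaction model defined with the Alexa Skills Kit and, for a private skill, distribution through Alexa for Business.

```python
# Hypothetical private skill: "what time does the help desk close?"
# The intent name and answer text are placeholders for illustration.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me when the help desk closes.",
                              end_session=False)

    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "HelpDeskHoursIntent":
            return build_response("The help desk closes at six p.m. today.")

    return build_response("Sorry, I didn't catch that.")
```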

Voice-enabled spaces

Just as Alexa is making smart homes easier to run, it can do the same in the workplace. Alexa can control the environment, help you find directions, book a room, report an issue, or find transportation. One of the biggest applications of voice in the enterprise is the conference room, and we’ve built some special skills in this area to allow people to be more productive.

For example, many meetings fail to start on time. It’s usually a struggle to find the dial-in information, punch in the numbers, and enter a passcode every time a meeting starts. With Alexa for Business, the administrator can configure the conference rooms and integrate calendars with the devices. When you walk into a meeting, all you have to say is “Alexa, start my meeting”. Alexa for Business automatically knows what the meeting is from the integrated calendar, pulls the dial-in information, dials into the conference provider, and starts the meeting. You can also configure Alexa for Business to automatically lower the projector screen, dim the lights, and more. People who work from home can take advantage of these capabilities too. By using Amazon Echo in their home office and asking Alexa to start the meeting, employees who have Alexa for Business in their workplace are automatically connected to the meeting on their calendar.
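
For administrators who prefer to script that setup, here is a rough sketch using the AWS SDK for Python (boto3) and its Alexa for Business client. The room name, calendar ID and ARNs are placeholders, and the exact parameters should be checked against the current API reference; this illustrates the flow rather than being a drop-in script.

```python
# Sketch: registering a conference room with Alexa for Business so that
# "Alexa, start my meeting" can find the booked meeting on its calendar.
# All names, ARNs and the calendar ID below are placeholders.
import boto3

a4b = boto3.client("alexaforbusiness")

# Create a room and link it to the calendar resource for that space.
room = a4b.create_room(
    RoomName="Boardroom-3A",
    Description="Third-floor boardroom",
    ProfileArn="arn:aws:a4b:region:account:profile/example",
    ProviderCalendarId="boardroom-3a@example.com",
)

# Assign a shared Echo device to the room.
a4b.associate_device_with_room(
    DeviceArn="arn:aws:a4b:region:account:device/example",
    RoomArn=room["RoomArn"],
)
```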

Voice-enabled applications

Voice interfaces will really hit their stride when we begin to see more voice-enabled applications. Today, Alexa can interact with many corporate applications, including Salesforce, Concur, ServiceNow, and more. IT developers who want to take advantage of voice interfaces can enable their custom apps using the Alexa Skills Kit, and make their skills available just for their organization. There are a number of agencies and systems integrators (SIs) that can help with this, and there are repositories with code examples for AWS services.

We’re seeing a lot of interesting use cases with Alexa for Business from a wide range of companies. Take WeWork, a provider of shared workspaces and services. WeWork has adopted Alexa, managed by Alexa for Business, in its everyday workflow. It has built private skills for Alexa that employees can use to reserve conference rooms, file help tickets for the community management team, and get important information on the status of meeting rooms. Alexa for Business makes it easy for WeWork to configure and deploy Alexa-enabled devices and the Alexa skills it needs to improve employee productivity.

The next generation of corporate systems and applications will be built using conversational interfaces, and we’re beginning to see this happen with customers using Alexa for Business in their workplace. Want to learn more? If you are attending Enterprise Connect in Orlando next week, I encourage you to attend the AWS keynote on March 13 given by Collin Davis. Collin’s team has focused on helping customers use voice to manage everyday tasks. He’ll have more to share about the advances we’re seeing in this space, and what we’re doing to help our customers be successful in a voice-enabled era.

When it comes to enabling voice capabilities at home and in the workplace, we’re here to help you build.

Prepare your cam to capture the Blood Moon

On 27 July 2018, South Africans can witness a total lunar eclipse, as the earth’s shadow completely covers the moon.

Also known as a blood moon or red moon, a total lunar eclipse is the most dramatic of all lunar eclipses and presents an exciting photographic opportunity for any aspiring photographer or would-be astronomer.

“A lunar eclipse is a rare cosmic sight. For centuries these events have inspired wonder, interest and sometimes fear amongst observers. Of course, if you are lucky to be around when one occurs, you would want to capture it all on camera,” says Dana Eitzen, Corporate and Marketing Communications Executive at Canon South Africa.

Canon ambassador and acclaimed landscape photographer David Noton has provided his top tips for photographing the occasion. In South Africa, the eclipse will be visible from about 19h14 on Friday, 27 July until 01h28 on the Saturday morning. During a lunar eclipse, the earth passes between the sun and the moon, blocking the direct sunlight that would normally reach the lunar surface. The moon turns red because of an effect known as Rayleigh scattering, in which the shorter green and violet wavelengths are filtered out by the earth’s atmosphere, leaving mainly red light to fall on the moon.

The partial eclipse will begin at 20h24, when the moon starts to turn red. The total eclipse begins at about 21h30, when the moon is completely red. The eclipse reaches its maximum at 22h21, when the moon is closest to the centre of the earth’s shadow.

David Noton advises:

  1. Download the right apps to be in-the-know

The sun’s position in the sky at any given time of day varies massively with latitude and season. That is not the case with the moon, as its passage through the heavens is governed by its complex elliptical orbit of the earth. That orbit results in monthly, rather than seasonal, variations as the moon moves through its lunar cycle. The result is big differences in the timing of its appearance and its trajectory through the sky. Luckily, we no longer need to rely on weighty tables to work out the behaviour of the moon; we can simply download an app on to our phones. The Photographer’s Ephemeris is useful for giving moonrise and moonset times, bearings and phases, while the PhotoPills app gives comprehensive information on the position of the moon in our sky. Armed with these two apps, I’m planning to shoot the Blood Moon rising in Dorset, England. I’m aiming to capture the moon within the first fifteen minutes of moonrise so I can catch it low in the sky and juxtapose it against an object on the horizon line for scale – this could be as simple as a tree on a hill.
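
Those apps do the planning for you, but the same numbers can be computed directly. As a rough illustration (not something Noton describes), here is a small Python sketch using the PyEphem library to get the moonrise time and illuminated fraction for a given location; the coordinates and date are placeholders.

```python
# Sketch: moonrise time and moon phase for a chosen spot, the kind of
# information The Photographer's Ephemeris and PhotoPills provide.
# Requires the PyEphem package (pip install ephem); values are placeholders.
import ephem

observer = ephem.Observer()
observer.lat = "-33.92"        # Cape Town latitude (placeholder location)
observer.lon = "18.42"         # Cape Town longitude
observer.date = "2018/07/27"   # date of the eclipse (UTC)

moon = ephem.Moon()
rise_utc = observer.next_rising(moon)

moon.compute(observer)
print("Moonrise (UTC):", rise_utc)
print("Illuminated fraction: %.0f%%" % moon.phase)  # close to 100% at full moon
```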

  2. Invest in a lens with optimal zoom

On 27 July, one of the key challenges we’ll face is shooting the moon large in the frame so we can see every crater on its asteroid-pockmarked surface. It’s a task normally reserved for astronomers with super-powerful telescopes, but if you’ve got a long telephoto lens on a full-frame DSLR with around 600 mm of focal length, it can be done, depending on the composition. I will be using the Canon EOS 5D Mark IV with an EF 200-400mm f/4L IS USM Extender 1.4x lens.
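
As a back-of-the-envelope check on why that much reach is needed: the moon’s angular diameter is only about 0.52 degrees, so its image on the sensor is roughly twice the focal length times the tangent of half that angle. The short Python sketch below works this out for a few focal lengths on a full-frame sensor; the figures are approximations.

```python
# Rough size of the moon's image on a full-frame sensor at various focal
# lengths. The 0.52-degree angular diameter and 24 mm sensor height are
# approximations.
import math

MOON_ANGULAR_DIAMETER_DEG = 0.52
SENSOR_HEIGHT_MM = 24.0  # short side of a full-frame sensor

def moon_image_size_mm(focal_length_mm):
    """Diameter of the moon's image on the sensor, in millimetres."""
    half_angle = math.radians(MOON_ANGULAR_DIAMETER_DEG) / 2
    return 2 * focal_length_mm * math.tan(half_angle)

for focal_length in (200, 400, 600):
    size = moon_image_size_mm(focal_length)
    print(f"{focal_length} mm: moon spans {size:.1f} mm, "
          f"about {100 * size / SENSOR_HEIGHT_MM:.0f}% of the frame height")
```

At around 600 mm the moon fills just under a quarter of the frame height, which is why Noton pairs the 200-400mm lens with its built-in 1.4x extender.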

  3. Use a tripod to capture the intimate details

As you frame up your shot, one thing will become immediately apparent: lunar tracking is incredibly challenging, as the moon moves through the sky surprisingly quickly. As you’ll be using a long lens for this shoot, it’s important to invest in a sturdy tripod to help capture the best possible image. Although it will be tempting to take the shot by hand, remember that your subject is over 384,000 km away, and even with a high shutter speed the slightest of movements will become exaggerated.

  4. Integrate the moon into your landscape

Whilst images of the moon large in the frame can be beautifully detailed, they are essentially astronomical in their appeal. Personally, I’m far more drawn to using the lunar allure as an element in my landscapes, or using the moonlight as a light source. The latter is difficult, as the amount of light the moon reflects is tiny, whilst the lunar surface itself is bright by comparison. Until now, night photography has meant long, long exposures, but with cameras such as the Canon EOS-1D X Mark II and the Canon EOS 5D Mark IV now capable of astonishing low-light performance, a whole new nocturnal world of opportunities has opened up to photographers.

  5. Master the shutter speed for your subject

The most evocative and genuine use of the moon in landscape photographs results from situations when the light on the moon balances with the twilight in the surrounding sky. Such images have a subtle appeal, mood and believability. By definition, any scene incorporating a medium or wide-angle view is going to render the moon as a tiny pinprick of light, but its presence will still be felt. Our eyes naturally gravitate to it, however insignificant it may seem. Of course, the issue of shutter speed is always there; too slow an exposure and all we’ll see is an unsightly lunar streak, even with a wide-angle lens.

On a clear night, mastering your camera’s shutter speed is integral to capturing the moon. Exposing at around 1/250 sec at f/8, ISO 100 (depending on focal length) is what you’ll need to stop the motion from blurring. And if you get the technique right, with the resolution of cameras such as the Canon EOS 5DS R you might even be able to see the twelve cameras that were left up there by NASA in the ’60s!
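
To put rough numbers on that, the moon’s apparent drift across the sky (dominated by the earth’s rotation) is about 15 arcseconds per second of time. The sketch below, using approximate figures for a full-frame body at 600 mm, estimates how far the image drifts during an exposure; at 1/250 sec the drift is a tiny fraction of a pixel, so the real enemies are camera shake and over-long exposures.

```python
# Rough motion-blur estimate for moon photography: apparent sky drift of
# ~15 arcseconds per second of time, a 600 mm lens, and ~5.3 micron pixels.
# All values are approximations for illustration.
import math

DRIFT_ARCSEC_PER_SEC = 15.0
FOCAL_LENGTH_MM = 600.0
PIXEL_PITCH_MM = 0.0053  # ~5.3 micron pixels on a full-frame body

def drift_in_pixels(shutter_seconds):
    """Apparent drift during the exposure, expressed in sensor pixels."""
    drift_deg = DRIFT_ARCSEC_PER_SEC * shutter_seconds / 3600.0
    drift_mm = FOCAL_LENGTH_MM * math.radians(drift_deg)
    return drift_mm / PIXEL_PITCH_MM

for shutter in (1 / 250, 1 / 30, 1.0):
    print(f"{shutter:.4f} s exposure -> ~{drift_in_pixels(shutter):.2f} px of drift")
```

A one-second exposure already smears the moon across several pixels, which is where the “unsightly lunar streak” Noton mentions comes from.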

How Africa can embrace AI

Currently, no African country is among the top 10 countries expected to benefit most from AI and automation. But the continent has the potential to catch up with the rest of the world if we act fast, says ZOAIB HOOSEN, Microsoft Managing Director.

To play catch up, we must take advantage of our best and most powerful resource – our human capital. According to a report by the World Economic Forum (WEF), more than 60 percent of the population in sub-Saharan Africa is under the age of 25.

These are the people who are poised to create a future where humans and AI can work together for the good of society. In fact, the most recent WEF Global Shapers survey found that almost 80 percent of youth believe technology like AI is creating jobs rather than destroying them.

Staying ahead of the trends to stay employed

AI developments are expected to impact existing jobs, as AI can replicate certain activities at greater speed and scale. In some areas, AI could learn faster than humans, if not yet as deeply.

According to Gartner, while AI will improve the productivity of many jobs and create millions of new positions, it could displace many others. The simpler and less creative the job, the sooner a bot, for example, could replace it.

It’s important to stay ahead of the trends and find opportunities to expand our knowledge and skills while learning how to work more closely and symbiotically with technology.

Another global study, by Accenture, found that the adoption of AI will create several new job categories requiring important yet surprising skills. These include trainers, who are tasked with teaching AI systems how to perform; explainers, who bridge the gap between technologists and business leaders; and sustainers, who ensure that AI systems are operating as designed.

It’s clear that successfully integrating human intelligence with AI, so they co-exist in a two-way learning relationship, will become more critical than ever.

Combining STEM with the arts

Young people have a leg up on those already in the working world because they can easily develop the necessary skills for these new roles. It’s therefore essential that our education system constantly evolves to equip youth with the right skills and way of thinking to be successful in jobs that may not even exist yet.

As the division of tasks between man and machine changes, we must re-evaluate the type of knowledge and skills imparted to future generations.

For example, technical skills will be required to design and implement AI systems, but interpersonal skills, creativity and emotional intelligence will also become crucial in giving humans an advantage over machines.

“At one level, AI will require that even more people specialise in digital skills and data science. But skilling-up for an AI-powered world involves more than science, technology, engineering and math. As computers behave more like humans, the social sciences and humanities will become even more important. Languages, art, history, economics, ethics, philosophy, psychology and human development courses can teach critical, philosophical and ethics-based skills that will be instrumental in the development and management of AI solutions.” This is according to Microsoft president Brad Smith and EVP of AI and Research Harry Shum, who recently co-authored the book “The Future Computed”, which deals primarily with AI and its role in society.

Interestingly, institutions like Stanford University are already implementing this forward-thinking approach. The university offers a programme called CS+X, which integrates its computer science degree with humanities degrees, resulting in a Bachelor of Arts and Science qualification.

Revisiting laws and regulation

For this type of evolution to happen, the onus is on policy makers to revisit current laws and even bring in new regulations. Policy makers need to identify the groups most at risk of losing their jobs and create strategies to reintegrate them into the economy.

At the same time, although AI could be hugely beneficial in areas such as improving access to healthcare and the accuracy of diagnoses, physicians may avoid using the technology for fear of malpractice claims. To avoid this, we need regulation that closes the gap between the pace of technological change and that of the regulatory response. It will also become essential to develop a code of ethics for this new ecosystem.

Preparing for the future

With the recent convergence of a transformative set of technologies, economies are entering a period in which AI has the potential to overcome physical limitations and open up new sources of value and growth.

To avoid missing out on this opportunity, policy makers and business leaders must prepare for, and work toward, a future with AI. We must do so not with the idea that AI is simply another productivity enhancer. Rather, we must see AI as the tool that can transform our thinking about how growth is created.

It comes down to a choice of our people and economies being part of the technological disruption, or being left behind.
