Featured

Why AI needs ethics

As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, says RUDEON SNELL, Leonardo Leader at SAP Africa.

There is little doubt that the pace of innovation is accelerating at unprecedented levels. Technology-enabled breakthroughs are happening with increasing frequency, enhancing the human lifespan, improving access to basic needs and leaving the general public with little time to adjust to and comprehend the magnitude of these advances.

Within the field of Artificial Intelligence (AI), this phenomenon is just as true: the accelerated pace of AI development has generated huge interest in moral AI and in how, as imperfect human beings, we are teaching AI the difference between right and wrong. As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, especially as these systems transition from being perceived as mere tools to operating as autonomous agents that make their own decisions.

The question of the ethics of the decisions made by AI systems must be addressed.

Ethical fundamentals of everyday life

The question of ethics finds some of its roots in the notion of fairness. What is fairness? How does one define it? Instinctively, human beings grasp what is fair and what is not. As an example, we commonly accept that “one for me, two for you” is not fair. We teach our children what it means to be fair, why we need to share, and what we believe the moral and ethical constructs of fairness and sharing to be. The concept of fairness also features prominently in the United Nations Sustainable Development Goals: Gender Equality (goal #5), Decent Work and Economic Growth (goal #8) and Reduced Inequalities (goal #10) are all arguably built on the concept of fairness.

But how do we teach AI systems about fairness in the same way we teach our children, especially when an AI system decides that achieving its goal optimally can be done through unfair advantage? Consider an AI system in charge of ambulance response with the goal of servicing as many patients as possible. It might well prioritise serving 10 people with small scratches and surface cuts above serving 2 people with severe internal injuries, because serving 10 people satisfies its goal better. Although this maximises the number of patients served, it fundamentally falls flat when one considers the intent of what was meant to be accomplished.
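A minimal, hypothetical sketch makes the point concrete. The patient records, severity weights and objective functions below are illustrative assumptions rather than any real dispatch system; the point is simply that a naive “count of patients served” goal and a value-aware goal rank the very same cases in opposite order.

```python
# Hypothetical illustration of the ambulance example above.
# All data, weights and objectives are assumptions for the sake of argument.

minor_cases = [{"id": f"minor-{i}", "severity": 1} for i in range(10)]   # scratches and cuts
critical_cases = [{"id": "critical-1", "severity": 9},
                  {"id": "critical-2", "severity": 10}]                  # internal injuries

def patients_served(cases):
    """Naive goal: maximise the number of patients served."""
    return len(cases)

def weighted_benefit(cases):
    """Value-aware goal: weight each patient by clinical severity."""
    return sum(case["severity"] for case in cases)

# With limited capacity, the naive objective prefers the ten scratches...
print(patients_served(minor_cases), ">", patients_served(critical_cases))    # 10 > 2
# ...while the severity-weighted objective prefers the two internal injuries.
print(weighted_benefit(minor_cases), "<", weighted_benefit(critical_cases))  # 10 < 19
```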

In business we have ethical and unethical behaviour, and we have strict codes of conduct regarding what we consider to be ethical and unethical business conduct. We accept that not everything that is legal is ethical and that not everything that is unethical is illegal, and as a society we frown upon unethical business conduct, especially from big corporates. How does this transfer to AI systems? Surely we wouldn’t want AI systems that stay within the bounds of the law but push as hard as they can against those boundaries to see what they can get away with, exploiting loopholes to fulfil their goals.

Data perpetuating embedded bias

AI systems feed off data. If AI is the new electricity, data is the grid it runs on. AI systems look at data, evaluate that data against their goals and then find the most optimal path towards achieving those goals. Data is absolutely critical for AI systems to be effective. Machine learning (ML) algorithms gain their experience from the data they are given, and if that data is biased or ethically or morally tainted, the ML algorithms will perpetuate this. What about factors that are not expressed in data, such as the value of another person, the value of connections, the value of a relationship? The biggest challenge, unfortunately, is that data quite simply does not give you ethics.
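A tiny sketch illustrates how tainted data carries its bias straight into a model. The “hiring” records and the rule learned from them are purely hypothetical; the mechanism, not the numbers, is the point.

```python
# Hypothetical illustration: a model that "learns" from biased historical data
# simply reproduces the bias. Records and groups are invented for illustration.

historical_hires = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": True},  {"group": "A", "hired": False},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "B", "hired": True},  {"group": "B", "hired": False},
]

def learn_hire_rate(records):
    """Estimate the probability of being hired for each group from past data."""
    rates = {}
    for group in {r["group"] for r in records}:
        group_records = [r for r in records if r["group"] == group]
        rates[group] = sum(r["hired"] for r in group_records) / len(group_records)
    return rates

# Output: {'A': 0.75, 'B': 0.25} -- group A is favoured three to one,
# not on merit, but because that is what the past looked like.
print(learn_hire_rate(historical_hires))
```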

Then there’s the issue of blame: who is to blame when AI makes mistakes? The manufacturer, the software supplier, the reseller, the data set, the owner, the user? The issue gets more complicated when we talk about the loss of life in an accident. Consider incidents involving AI systems in healthcare, and who would be held legally liable. What about autonomous vehicles disrupting the automotive industry and making their way into society sooner rather than later? If we extend this trend, what about AI systems making decisions, based on their programming, that lead to them committing crimes? Are they guilty? Can an AI system be guilty of a crime? Are their programmers to blame? Their data sets? What laws govern this eventuality?

Take our smartphones’ autocorrect function as a simple example. I’m positive many of us have had an incident where we’ve sent a text to a friend just after autocorrect changed one word to another, often a more embarrassing version, and then had to issue a grovelling apology. The point is this: if technology today struggles to understand the intent of a few lines of text, how can we count on it to understand and make life-and-death decisions?

Revisiting classic questions regarding ethics and morality

Researchers have explored how to resolve this situation in the past. The trolley problem has been around since 1967. First proposed by the philosopher Philippa Foot, it has subsequently proliferated into many variants. Generally, it is used to assess what action people would take when forced to choose between outcomes that would, for example, kill one person or ten. It is now being applied in the context of autonomous vehicles as a reference model to help AIs make life-or-death decisions, but it is not a foolproof solution.

Utilitarian principles could offer a framework to help with the ethical decisions AIs need to make. The focus would be on AIs making decisions that result in the greatest good for the greatest number of people. However, at what cost? How do we reconcile utilitarian calculations that violate individual rights? Ethics is often not about choosing one thing or the other, but about recognising that going down a particular road carries a particular set of ramifications, and that an alternative road carries different ones. This is what AIs currently struggle with and what humans instinctively understand.
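As a rough sketch of that tension, consider a purely utilitarian choice rule. The options and welfare numbers are invented for illustration; the rule maximises aggregate welfare and is blind to the rights question raised above.

```python
# Hypothetical illustration of a 'greatest good' calculation that cannot see rights.

options = {
    # option name: (total welfare produced, violates an individual's rights?)
    "reroute_resources": (100, True),   # best aggregate outcome, unfair to one person
    "respect_rights":    (80, False),   # slightly worse aggregate, no rights violated
}

def utilitarian_choice(options):
    """Pick whatever maximises total welfare, ignoring rights entirely."""
    return max(options, key=lambda name: options[name][0])

# The rule happily selects the rights-violating option.
print(utilitarian_choice(options))  # -> 'reroute_resources'
```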

AI systems have largely been built to achieve specific goals and specific outcomes. For humans to have any hope of creating ethical AI, AI systems should be programmed to be sensitive to achieving their goals within the construct of human values, because they could otherwise achieve those goals in rather bizarre fashions. Think of a machine deciding to protect humanity by enslaving it (the film I, Robot rings a bell). Soft governance, industry standards, professional codes of conduct and policies: these are the considerations that must be weighed if we are to understand how to engineer AI more safely and how to make our values part of the design process when implementing AI systems. Who decides how ethics are defined? Who decides which ethics are applied in AI?

Ethics ultimately is embodied in knowing the difference between what you have the right to do and what is right to do. We will all need to do our part in ensuring AI systems know that difference too.

Private and public-sector organisations with all their multifarious complexities; societies, from the family to the nation; economies, from the subsistence farmer to the giant multinational – all are inherently human undertakings fuelled by desires and ideas and made possible through collaboration, conversations and amazing technologies. That’s why SAP will be at the Singularity Summit in Johannesburg in October 2018. And that’s why we look forward to seeing you there to talk about how to help the world run better and improve people’s lives.

Cars

Motor Racing meets Machine Learning

The car of tomorrow, most of us imagine, is being built by the great automobile manufacturers of the world. More and more, however, we are seeing information technology companies joining the race to power the autonomous vehicle future.

Last year, chip-maker Intel paid $15.3-billion to acquire Israeli company Mobileye, a leader in computer vision for autonomous driving technology. Google’s autonomous taxi division, Waymo, has been valued at $45-billion.

Now there’s a new name to add to the roster of technology giants driving the future.

DeepRacer on the inside

Amazon Web Services, the world’s biggest cloud computing service and a subsidiary of Amazon.com, last month unveiled a scale-model autonomous racing car for developers to build new artificial intelligence applications. Almost in the same breath, at its annual re:Invent conference in Las Vegas, it showcased the work being done with machine learning in Formula 1 racing.

AWS DeepRacer is a 1/18th-scale fully autonomous race car, designed to incorporate the features and behaviour of a full-sized vehicle. It boasts all-wheel drive, monster-truck tyres, an HD video camera and on-board computing power. In short, everything a kid would want of a self-driving toy car.

But then, it also adds everything a developer would need to make the car autonomous in ways that, for now, can only be imagined. It uses a new form of machine learning (ML), the technology that allows computer systems to improve their functions progressively as they receive feedback from their activities. ML is at the heart of artificial intelligence (AI), and will be core to autonomous, self-driving vehicles.

AWS has taken ML a step further with an approach called reinforcement learning, which allows for quicker development of ML models and applications, and DeepRacer is designed to let developers experiment with and hone their skills in this area. It is built on top of another AWS platform, Amazon SageMaker, which enables developers and data scientists to build, train and deploy machine learning models quickly and easily.
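To give a flavour of what reinforcement learning looks like in this setting, here is a minimal reward-function sketch for a DeepRacer-style model. The parameter names follow AWS DeepRacer’s documented convention, but the thresholds and reward values are illustrative assumptions, not a recommended or production model.

```python
# Illustrative sketch of a reinforcement-learning reward function for a
# DeepRacer-style car: the agent is rewarded for staying near the track centre.

def reward_function(params):
    """Return a higher reward the closer the car is to the centre line."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    if distance_from_center <= 0.1 * track_width:
        return 1.0    # well centred: strong positive feedback
    if distance_from_center <= 0.25 * track_width:
        return 0.5    # drifting: weaker feedback
    if distance_from_center <= 0.5 * track_width:
        return 0.1    # near the edge: barely rewarded
    return 1e-3       # effectively off track: almost no reward

# Example: a car 5cm off-centre on a 1m-wide track earns the full reward.
print(reward_function({"track_width": 1.0, "distance_from_center": 0.05}))  # -> 1.0
```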

Along with DeepRacer, AWS also announced the DeepRacer League, the world’s first global autonomous racing league, open to anyone who orders the scale model from AWS.

DeepRacer on the outside

As if to prove that DeepRacer is not just a quirky entry into the world of motor racing, AWS also showcased the work it is doing with the Formula One Group. Ross Brawn, Formula 1’s managing director of Motor Sports, joined AWS CEO Andy Jassy during the keynote address at the re:Invent conference, to demonstrate how motor racing meets machine learning.

“More than a million data points a second are transmitted between car and team during a Formula 1 race,” he said. “From this data, we can make predictions about what we expect to happen in a wheel-to-wheel situation, overtaking advantage, and pit stop advantage. ML can help us apply a proper analysis of a situation, and also bring it to fans.

“Formula 1 is a complete team contest. If you look at a video of tyre-changing in a pit stop – it takes 1.6 seconds to change four wheels and tyres – blink and you will miss it. Imagine the training that goes into it? It’s also a contest of innovative minds.”

AWS CEO Andy Jassy unveils DeepRacer

Formula 1 racing has more than 500 million global fans and generated $1.8 billion in revenue in 2017. As a result, there are massive demands on performance, analysis and information. 

During a race, up to 120 sensors on each car generate up to 3GB of data and 1 500 data points – every second. It is impossible to analyse this data on the fly without an ML platform like Amazon SageMaker. It has a further advantage: the data scientists are able to incorporate 65 years of historical race data to compare performance, make predictions, and provide insights into the teams’ and drivers’ split-second decisions and strategies.

This means Formula 1 can pinpoint how a driver is performing and whether or not drivers have pushed themselves over the limit.
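As a hypothetical illustration of that kind of comparison, the sketch below scores a single live measurement against a pool of historical ones. The pit-stop times, the use of Python’s statistics module and the z-score approach are all assumptions for illustration, not Formula 1’s or AWS’s actual analytics.

```python
# Hypothetical illustration: flag a live pit stop as exceptional by comparing
# it against historical pit-stop times. All numbers are invented.

import statistics

historical_pit_stops = [2.4, 2.6, 2.3, 2.8, 2.5, 2.7, 2.4, 2.9]  # seconds
live_pit_stop = 1.9                                              # seconds

mean = statistics.mean(historical_pit_stops)
stdev = statistics.stdev(historical_pit_stops)
z_score = (live_pit_stop - mean) / stdev

# A strongly negative z-score marks an unusually fast stop worth surfacing
# to fans and strategists in real time.
print(f"pit stop z-score: {z_score:.2f}")
```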

“By leveraging Amazon SageMaker and AWS’s machine-learning services, we are able to deliver these powerful insights and predictions to fans in real time,” said Pete Samara, director of innovation and digital technology at Formula 1.

  • Arthur Goldstuck is founder of World Wide Worx and editor-in-chief of Gadget.co.za. Follow him on Twitter at @art2gee and on YouTube.

Featured

LG rethinks portable speakers

LG adds three sizes to its XBoom Go portable speaker line, writes BRYAN TURNER.

Portable Bluetooth speakers are fairly commonplace at a pool party because they’re battery-powered. The only issue is that louder speakers usually distort the music or break the bank. The LG XBoom aims to change this.

LG has partnered with Meridian Audio to produce great-sounding speakers that can go loud without distorting the audio. Meridian Audio is an expert in high-performance, high-fidelity audio experiences. The company is best known for producing the industry’s first audiophile-quality compact disc player and for providing audio equipment to McLaren and Jaguar Land Rover.

The Bluetooth software in the XBoom Go is Qualcomm aptX HD compatible, meaning that 24-bit, vinyl-quality audio can be played through the speaker over Bluetooth instead of standard-fidelity audio.

The major phone assistants feature on these speakers, with tethered Google Assistant or Apple Siri functionality from one’s smartphone. This makes it very convenient to use the voice assistant button to skip tracks and change music when one’s hands are wet.

Three models of the XBoom Go series – the PK3, PK5 and PK7 – offer different audio functions depending on the audio needs of the user. Best fits for these speakers are:

  • PK3 – The Pool Friendly Speaker: The PK3 is IPX7 water resistant (submersible up to 1 metre for 30 minutes), making this speaker accident-proof at pool parties. Boasting up to 12 hours of playback from its built-in battery, this speaker will last as long as the party.

  • PK5 – The Party Friendly Speaker: Even if the lunch braai turns into a midnight feast, this speaker will play throughout, as its battery lasts up to 18 hours. Clear Vocal technology on the PK5 reduces audio imperfections for a sharper sound. It is also water and splash resistant, and a handle allows it to be carried easily. Built-in LED lights that pulse with the beat of the music provide a light show for any song.

  • PK7 – The Audiophile’s Speaker: With a battery life of up to 22 hours, the PK7 also features LED lighting that pulses to the rhythm of the sound. A convenient handle grip allows the speaker to be transported securely. The powerful PK7 Bluetooth speaker also distributes its high frequencies across two separate tweeters for more precise sonic detail.

Overall, LG’s XBoom PK portable speakers are a phenomenal set of high-quality wireless speakers.


Copyright © 2018 World Wide Worx