
Why AI needs ethics

As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, says RUDEON SNELL, Leonardo Leader at SAP Africa.


There is little doubt that the pace of innovation is accelerating at unprecedented levels. Technology-enabled breakthroughs are happening with increased frequency, enhancing the human life span, improving access to basic needs, and leaving the general public with little time to adjust to and comprehend the magnitude of these advances.

Within the field of Artificial Intelligence (AI), this phenomenon is just as true: the accelerated pace of AI development is generating huge interest in moral AI and in how, as imperfect human beings, we teach AI the difference between right and wrong. As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, especially as these systems transition from being perceived as mere tools to operating as autonomous agents making autonomous decisions.

The ethics of the decisions made by AI systems must be addressed.

Ethical fundamentals of everyday life

The question of ethics finds some of its roots in the notion of fairness. What is fairness? How does one define it? Instinctively, human beings grasp the concept of what is fair and what is not. As an example, we commonly accept that “one for me, two for you” is not fair. We teach our children what it means to be fair, why we need to share, and what we believe the moral and ethical constructs of fairness and sharing to be. The concept of fairness also features prominently in the United Nations Sustainable Development Goals: Gender Equality (goal #5), Decent Work and Economic Growth (goal #8) and Reduced Inequalities (goal #10) are all arguably built on the concept of fairness.

But how do we teach AI systems about fairness in the same way we teach our children, especially when an AI system decides that its goal can be achieved optimally through unfair advantage? Consider an AI system in charge of ambulance response with the goal of servicing as many patients as possible. It might well prioritise serving 10 people with small scratches and surface cuts above serving 2 people with severe internal injuries, because serving 10 people scores better against its goal. Although this optimises the number of patients served, it fundamentally falls flat when one considers the intent of what was meant to be accomplished.
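To make this concrete, here is a minimal sketch in Python (with invented severity numbers and a hypothetical dispatch scenario, not any real system) showing how a naively specified objective can be optimised flawlessly and still miss the intent:

    # Hypothetical triage options: (label, patients served, total severity treated)
    options = [
        ("treat 10 minor scratches", 10, 10 * 1),   # severity 1 per patient
        ("treat 2 internal injuries", 2, 2 * 50),   # severity 50 per patient
    ]

    # Naive goal from the example above: service as many patients as possible.
    naive = max(options, key=lambda option: option[1])

    # Intent-aware goal: weight patients by medical severity, not head count.
    weighted = max(options, key=lambda option: option[2])

    print("Naive objective chooses:", naive[0])       # treat 10 minor scratches
    print("Severity-weighted chooses:", weighted[0])  # treat 2 internal injuries

Both objectives are optimised perfectly; only the second encodes what we actually meant. Specifying the goal, not optimising it, is where the ethical difficulty lives.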

In business we have ethical and unethical behaviour, and we have strict codes of conduct regarding what we consider ethical and unethical business conduct. We accept that not everything that is legal is ethical and not everything that is unethical is illegal, and as a society we frown upon unethical business conduct, especially from big corporates. How does this transfer to AI systems? Surely we wouldn’t want AI systems that stay within the bounds of the law but push as hard as they can against those boundaries to see what they can get away with, exploiting loopholes to fulfil their goals.

Data perpetuating embedded bias

AI systems feed off data. If AI is the new electricity, data is the grid it runs on. AI systems look at data, evaluate that data against their goals and then find the most optimal path towards achieving those goals. Data is absolutely critical for AI systems to be effective. Machine learning (ML) algorithms gain their experience from the data they are given, and if that data is biased or ethically or morally tainted, the ML algorithms will perpetuate this. What about factors that are not expressed in data, such as the value of another person, the value of connections, the value of a relationship? The biggest challenge with data, unfortunately, is that data quite simply does not give you ethics.
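As a toy illustration (with entirely invented data), consider how fitting even the simplest model to biased historical decisions hands the bias straight back as the “optimal” prediction:

    # Invented history of past approval decisions, skewed against group B.
    history = [
        ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: approved 75% of the time
        ("B", 0), ("B", 0), ("B", 0), ("B", 1),   # group B: approved 25% of the time
    ]

    def fit(records):
        # The best constant predictor per group is the historical approval rate.
        counts = {}
        for group, approved in records:
            total, yes = counts.get(group, (0, 0))
            counts[group] = (total + 1, yes + approved)
        return {group: yes / total for group, (total, yes) in counts.items()}

    print(fit(history))   # {'A': 0.75, 'B': 0.25} - the skew, faithfully learned

No step in the fitting is malicious; the bias arrives with the data and leaves with the predictions.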

Then there’s the issue of blame: who is to blame when AI makes mistakes? The manufacturer, the software supplier, the reseller, the data set, the owner, the user? The issue gets more complicated when we talk about the loss of life in an accident. Consider incidents with AI systems in healthcare, and who would be held legally liable. What about autonomous vehicles, which are disrupting the automotive industry and making their way into society sooner rather than later? If we expand on this trend, what about AI systems making decisions based on their programming that lead them to commit crimes? Are they guilty? Can an AI system be guilty of a crime? Are their programmers to blame? Their data sets? What laws govern this eventuality?

Take our smartphones’ autocorrect function as a simple example. I’m positive many of us have sent a text to a friend right after autocorrect changed one word to another, often a more embarrassing version, and then had to issue some grovelling apology. The point is this: if technology today struggles to understand the intent of a few lines of text, how can we count on it to understand and make life-and-death decisions?

Revisiting classic questions regarding ethics and morality

Researchers have explored how to resolve this situation before. The trolley problem has been around since 1967. First proposed by the philosopher Philippa Foot, it has since proliferated into many variants. Generally, it is used to assess what action people would take when asked to choose between outcomes that would, for example, kill 1 person versus 10 people. It is now being applied in the context of autonomous vehicles as a reference model to help AIs make life-or-death decisions, but it is not a foolproof solution.

Utilitarian principles could offer a framework for the ethical decisions AIs need to make: the focus would be on AIs making decisions that result in the greatest good for the greatest number of people. However, at what cost? How do utilitarian calculations that violate individual rights get reconciled? Ethics is often not about choosing one thing or the other, but about recognising that going down a particular road carries a particular set of ramifications, while an alternative road carries different ones. This is what AIs currently struggle with and what humans instinctively understand.

AI systems have largely been built to achieve specific goals and specific outcomes. For humans to have any semblance of creating ethical AI, AI systems should be programmed to be sensitive to achieving their goals within the construct of human values, because they could otherwise achieve those goals in rather bizarre fashion. Think about a machine deciding to protect humanity by enslaving it (the movie I, Robot rings a bell). Soft governance, industry standards, professional codes of conduct and policies: these are the considerations that must be weighed if we are to understand how to engineer AI more safely and how to make our values part of the design process when implementing AI systems. Who decides how ethics are defined? Who decides which ethics are applied in AI?

Ethics, ultimately, is embodied in knowing the difference between what you have the right to do and what is right to do. We will all need to do our part to ensure AI systems know this difference too.

Private and public-sector organisations with all their multifarious complexities; societies, from the family to the nation; economies, from the subsistence farmer to the giant multinational – all are inherently human undertakings fuelled by desires and ideas and made possible through collaboration, conversations and amazing technologies. That’s why SAP will be at the Singularity Summit in Johannesburg in October 2018. And that’s why we look forward to seeing you there, to talk about how to help the world run better and improve people’s lives.


Now download a bank account

Absa has introduced end-to-end account opening for new customers through the Absa Banking App, which can be downloaded from the Android and Apple app stores. This follows the launch of the world-first ChatBanking on WhatsApp service.


This “download your account” feature enables new Absa customers to open a cheque account, order their card and start transacting on the Absa Banking App, all within minutes, from anywhere and at any time, simply by downloading the app from the app stores.

“Overall, this new capability is not only expected to enhance the customer’s digital experience, but we expect to leverage this in our branches, bringing digital experiences to the branch environment and making it easier for our customers to join and bank with us regardless of where they may be,” says Aupa Monyatsi, Managing Executive for Virtual Channels at Absa Retail & Business Banking.

“With this innovation comes the need to ensure that the security of our customers is at the heart of our digital experience, this is why the digital onboarding experience for this feature includes a high-quality facial matching check with the Department of Home Affairs to verify the customer’s identity, ensuring that we have the most up to date information of our clients. Security is supremely important for us.”

The new version of the Absa Banking App is now available in the Apple and Android app stores, and anyone with a South African ID can become an Absa customer by following these simple steps:

  1. Download the Absa App
  2. Choose the account you would like to open
  3. Tell us who you are
  4. To keep you safe, we will verify your cell phone number
  5. Take a selfie, and we will do facial matching with the Department of Home Affairs to confirm you are who you say you are
  6. Tell us where you live
  7. Let us know what you do for a living and your income
  8. Click Apply.

 


How we use phones to avoid human contact

A recent study by Kaspersky Lab has found that 75% of people pick up their connected device to avoid conversing with another human being.


Connected devices are becoming essential to keeping people in contact with each other, but for many they are also a much-needed comfort blanket in a variety of social situations where they do not want to interact with others. A recent survey from Kaspersky Lab confirms this trend: three-quarters of people (75%) admitted they use a device to pretend to be busy when they don’t want to talk to someone else, underlining the importance of keeping connected devices protected under all circumstances.

Imagine you’ve arrived at a bar and you’re waiting for your date. The bar is busy, and people are chatting all around you. What do you do now? Strike up a conversation with someone you don’t know? Grab your phone from your pocket or handbag to keep yourself busy until your date arrives? Why talk to humans or even make eye contact with someone else when you can stare at your connected device instead?

The truth is, our use of devices is making it much easier to avoid small talk or even be polite to those around us, and new Kaspersky Lab research has found that 72% of people use one when they do not know what to do in a social situation. They are also the ‘go-to’ distraction for people even when they aren’t trying to look busy or avoid someone’s eye. 46% of people admit to using a device just to kill time every day and 44% use it as a daily distraction.

In addition to being a distraction, devices are also a lifeline for those who would rather not talk directly to another person when completing essential day-to-day tasks. In fact, nearly a third (31%) of people would prefer to carry out tasks such as ordering a taxi or finding directions via a website or an app, because they find it an easier experience than speaking with another person.

Whether they are helping us avoid direct contact or filling a void in our daily lives, our constant reliance on devices has become a cause for panic when they become unusable. A third (34%) of people worry that they will not be able to entertain themselves if they cannot access a connected device. 12% are even concerned that they won’t be able to pretend to be busy if their device is out of action.

Dmitry Aleshin, VP for Product Marketing, Kaspersky Lab said, “The reliance on connected devices is impacting us in more ways than we could have ever expected. There is no doubt that being connected gives us the freedom to make modern life easier, but devices are also vital to help people get through different and difficult social situations. No matter what your ‘connection crutch’ is, it is essential to make sure your device is online and available when you need it most.”

To ensure your device lifeline is always there and in top health – no matter what the reason or situation – Kaspersky Security Cloud keeps your connection safe and secure:

· I want to use my device while waiting for a friend – is it secure to access the bar’s Wi-Fi?

With Kaspersky Security Cloud, devices are protected against network threats even if the user needs to use an insecure public Wi-Fi hotspot. This is done by transferring data via an encrypted channel, so users’ personal data is safe on any connection.

· Oh no! I’m bored but my phone’s battery is getting low – what am I going to do?

Users can track their battery level thanks to a countdown, in the Kaspersky Security Cloud interface, of how many minutes are left until their device shuts down. There is also a wide range of portable power supplies available to keep device batteries charged on the go.

· I’ve lost my phone! How will I keep myself entertained now?

Should the unthinkable happen and your phone is lost or stolen, Kaspersky Security Cloud can track and protect your device from data breaches, for complete peace of mind. Remote lock-and-locate features ensure your device remains secure until you are reunited.

 
