There is little doubt that the pace of innovation is accelerating to unprecedented levels. Technology-enabled breakthroughs are happening with increasing frequency, extending the human lifespan, improving access to basic needs and leaving the general public with little time to adjust to and comprehend the magnitude of these advances.
Within the field of Artificial Intelligence (AI), this phenomenon is just as true. The accelerated pace of AI development has generated huge interest in moral AI and in how we, as imperfect human beings, teach AI the difference between right and wrong. As AI systems continue to evolve, humanity will place increasing levels of trust in them for decision making, especially as these systems transition from being perceived as mere tools to operating as autonomous agents making autonomous decisions.
The question of the ethics of the decisions made by AI systems must be addressed.
Ethical fundamentals of everyday life
The question of ethics finds some of its roots in the notion of fairness. What is fairness? How does one define it? Instinctively, human beings grasp what is fair and what is not. As an example, we commonly accept that "one for me, two for you" is not fair. We teach our children what it means to be fair, why we need to share, and what we believe the moral and ethical constructs around fairness and sharing to be. The concept of fairness also features prominently in the United Nations Sustainable Development Goals: Gender Equality (goal #5), Decent Work and Economic Growth (goal #8) and Reduced Inequalities (goal #10) are all arguably built on the concept of fairness.
But how do we teach AI systems about fairness in the same way we teach our children, especially when an AI system decides that its goal can be achieved optimally through unfair advantage? Consider an AI system in charge of ambulance response with the goal of servicing as many patients as possible. It might well prioritise serving 10 people with small scratches and surface cuts over serving 2 people with severe internal injuries, because serving 10 people satisfies its goal better. Although this optimises the number of patients served, it fundamentally falls flat when one considers the intent of what was meant to be accomplished.
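The ambulance scenario can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the patient names, times and budget are invented, not from any real dispatch system): a scheduler told only to "serve as many patients as possible" greedily picks the quickest cases and never reaches the critical ones.

```python
# A toy dispatcher whose only objective is "serve as many patients as possible"
# within a fixed time budget. All names and numbers here are hypothetical.

def dispatch(patients, time_budget):
    """Greedy scheduler: takes the quickest cases first to maximise the count served."""
    served = []
    remaining = time_budget
    for p in sorted(patients, key=lambda p: p["minutes"]):
        if p["minutes"] <= remaining:
            served.append(p["name"])
            remaining -= p["minutes"]
    return served

patients = (
    [{"name": f"scratch_{i}", "minutes": 10, "severity": 1} for i in range(10)]
    + [{"name": f"critical_{i}", "minutes": 60, "severity": 10} for i in range(2)]
)

served = dispatch(patients, time_budget=100)
# Ten quick, minor cases exhaust the entire budget; the two critical
# patients are never reached, even though severity data was available.
```

The objective function contains no notion of severity or intent, so the system is not "misbehaving": it is doing exactly what it was asked, which is precisely the problem.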
In business we have ethical and unethical behaviour, and we have strict codes of conduct regarding what we consider ethical and unethical business conduct. We accept that not everything that is legal is ethical and not everything that is unethical is illegal, and as a society we frown upon unethical business conduct, especially from big corporates. How does this transfer to AI systems? Surely we wouldn't want AI systems that stay within the bounds of the law but push as hard as they can against those boundaries to see what they can get away with, exploiting loopholes to fulfil their goals.
Data perpetuating embedded bias
AI systems feed off data. If AI is the new electricity, data is the grid it runs on. An AI system looks at data, evaluates that data against its goals and then finds the most optimal path towards achieving those goals. Data is absolutely critical for AI systems to be effective. Machine learning (ML) algorithms gain their experience from the data they are given, and if that data is biased or ethically or morally tainted, the ML algorithms will perpetuate this. What about factors that are not expressed in data, such as the value of another person, the value of connections, the value of a relationship? The biggest challenge, unfortunately, is that data quite simply does not give you ethics.
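How biased data gets perpetuated can be shown with a deliberately tiny example. The "model" below is just a majority-vote lookup, and the history, groups and outcomes are entirely invented, but the mechanism is the same one real ML systems exhibit: a model fitted to biased decisions reproduces those decisions.

```python
from collections import Counter, defaultdict

# Hypothetical historical decisions that encode a bias: group "A" was almost
# always approved and group "B" almost always rejected, regardless of merit.
history = (
    [("A", "approve")] * 90 + [("A", "reject")] * 10
    + [("B", "approve")] * 10 + [("B", "reject")] * 90
)

# A minimal "model": predict the majority historical outcome for each group.
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    return counts[group].most_common(1)[0][0]

# The model faithfully reproduces the bias it was trained on.
print(predict("A"))  # approve
print(predict("B"))  # reject
```

Nothing in the training step can tell the model that the historical pattern was unfair; the data carries the bias, and the model simply learns it.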
Then there’s the issue of blame: who is to blame when AI makes mistakes? The manufacturer, the software supplier, the reseller, the data set, the owner, the user? The issue gets more complicated when we talk about the loss of life in an accident. Consider incidents with AI systems in healthcare, and who would be held legally liable. What about autonomous vehicles, which are disrupting the automotive industry and making their way into society sooner rather than later? If we extend this trend, what about AI systems whose programming leads them to commit crimes? Are they guilty? Can an AI system be guilty of a crime? Are their programmers to blame? Their data sets? What laws govern this eventuality?
Take our smartphones’ autocorrect function as a simple example. I’m positive many of us have sent texts to friends right after autocorrect changed one word to another, often to a more embarrassing version, and then issued some grovelling apology. The point is this: if technology today struggles to understand the intent of a few lines of text, how can we count on it to understand and make life-and-death decisions?
Revisiting classic questions regarding ethics and morality
Researchers have explored how to resolve this situation in the past. Trolley problem tests have been around since 1967, when the problem was first proposed by the philosopher Philippa Foot, and the thought experiment has subsequently proliferated into many variants. Generally, it is used to assess what actions people would take when forced to choose between, for example, killing 1 person or 10 people. It is being applied specifically in the context of autonomous vehicles, as a reference model to help AIs make effective life-or-death decisions, but it is not a foolproof solution.
Utilitarian principles could offer a framework for the ethical decisions AIs need to make. The focus would be on AIs making decisions that result in the greatest good for the greatest number of people. However, at what cost? How do we reconcile utilitarian calculations that violate individual rights? Ethics is often not about choosing one thing or the other; it is more about recognising that if you go down a particular road, that road has a particular set of ramifications, and if you go down an alternative road, the implications could be different. This is what AIs currently struggle with and what humans instinctively understand.
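The tension between aggregate welfare and individual rights can be made concrete with a naive utilitarian decision rule. The options and welfare numbers below are invented for illustration: the rule maximises the sum of welfare across everyone affected, and so it happily picks an option that severely harms one individual.

```python
# A naive utilitarian decision rule: pick the option with the highest total
# welfare, ignoring how that welfare is distributed. Options are hypothetical;
# each list holds the welfare change for every affected individual.
options = {
    "share_fairly":        [2, 2, 2, 2, 2],   # total = 10, nobody harmed
    "exploit_one_person":  [4, 4, 4, 4, -5],  # total = 11, one person badly hurt
}

def utilitarian_choice(options):
    return max(options, key=lambda name: sum(options[name]))

# Total welfare is maximised by the option that sacrifices one individual.
print(utilitarian_choice(options))  # exploit_one_person
```

A sum over welfare values has no term for rights, consent or distribution, which is exactly the reconciliation problem the paragraph above raises.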
AI systems have largely been built to achieve specific goals and specific outcomes. For humans to have any semblance of creating ethical AI, AI systems should be programmed to pursue their goals within the construct of human values, because otherwise they could achieve those goals in rather bizarre fashions. Think about a machine deciding to protect humanity by enslaving it (the movie I, Robot rings a bell). Soft governance, industry standards, professional codes of conduct and policies are the considerations we must weigh in order to understand how we can engineer AI more safely and how we make our values part of the design process when implementing AI systems. Who decides how ethics are defined? Who decides which ethics are applied in AI?
Ethics ultimately is embodied in knowing the difference between what you have the right to do and what is right to do. We will all need to do our part in ensuring AI systems know the difference too.
Private and public-sector organisations with all their multifarious complexities; societies, from the family to the nation; economies, from the subsistence farmer to the giant multinational – all are inherently human undertakings fuelled by desires and ideas and made possible through collaboration, conversations and amazing technologies. That’s why SAP will be at the Singularity Summit in Johannesburg in October 2018. And that’s why we look forward to seeing you there to talk about how to help the world run better and improve people’s lives.
Opera launches built-in VPN on Android browser
Opera has released a new version of its mobile browser, Opera for Android 51, which features a built-in VPN (virtual private network) service.
A VPN allows users to create a secure connection over a public network, and is particularly useful if users are unsure of the security levels of the public networks they use often.
The new VPN in Opera for Android 51 is free, unlimited and easy to use. When enabled, it gives users greater control of their online privacy and improves online security, especially when connecting to public Wi-Fi hotspots such as those at coffee shops, airports and hotels. The VPN encrypts Internet traffic into and out of the mobile device, which reduces the risk of malicious third parties collecting sensitive information.
“There are already more than 650 million people using VPN services globally. With Opera, any Android user can now enjoy a free and no-log service that enhances online privacy and improves security,” said Peter Wallman, SVP Opera Browser for Android.
When users enable the VPN included in Opera for Android 51, they create a private and encrypted connection between their mobile device and a remote VPN server, using strong 256-bit encryption algorithms. When enabled, the VPN hides the user’s physical location, making it difficult to track their activities on the internet.
The browser VPN service is also a no-log service, which means that the VPN servers do not log or retain any activity data, all to protect users’ privacy.
“Users are exposed to so many security risks when they connect to public Wi-Fi hotspots without a VPN,” said Wallman. “Enabling Opera VPN makes it difficult for third parties to steal users’ information, and users can avoid being tracked. Users no longer need to question if or how they can protect their personal information in these situations.”
According to a report by the Global World Index in 2018, the use of VPNs on mobile devices is rising. More than 42 percent of VPN users on mobile devices use VPN on a daily basis, and 35 percent of VPN users on computers use VPN daily.
The report also shows that South African VPN users said that their main reason for using a VPN service is to remain anonymous while they are online.
“Young people in particular are concerned about their online privacy as they increasingly live their lives online,” said Wallman. “Opera for Android 51 makes it easy to benefit from the security and anonymity of a VPN, especially for those who may not be aware of how to set one up.”
Setting up the Opera VPN is simple. Users just tap on the browser settings, go to VPN and enable the feature according to their preference. They can also select the region of their choice.
The built-in VPN is free, which means that users don’t need to download additional apps on their smartphones or pay additional fees as they would for other private VPN services. With no sign-in process, users don’t need to log in every time they want to use it.
Opera for Android is available for download on Google Play. The new version of Opera for Android 51 will be rolled out gradually by region.
Future of the car is here
Three new cars, with vastly different price-tags, reveal the arrival of the future of wheels, writes ARTHUR GOLDSTUCK
Just a few months ago, it was easy to argue that the car of the future was still a long way off, at least in South Africa. But a series of recent car launches have brought the high-tech vehicle to the fore in startling ways.
The Jaguar i-Pace electric vehicle (EV), BMW 330i and the Datsun Go have little in common, aside from representing an almost complete spectrum of car prices on the local market. Their tags start, respectively, at R1.7-million, R650 000 and R150 000.
Such a widely disparate trio of vehicles does not exactly come together to point to a single future. Rather, they represent different futures for different segments of the market. But they also reveal what we can expect to become standard in most vehicles produced in the 2020s.
The i-Pace may be out of reach of most South Africans, but it ushers in two advances that will resonate throughout the EV market as it welcomes new and more affordable cars. It is the first electric vehicle in South Africa to beat the bugbear of range anxiety.
Unlike the pioneering “old” Nissan Leaf, which had a range of up to about 150km, and did not lend itself to long distance travel, the i-Pace has a 470km range, bringing it within shouting distance of fuel-powered vehicles. A trip from Johannesburg to Durban, for example, would need just one recharge along the way.
And that brings in the other major advance: the i-Pace is the first EV launched in South Africa together with a rapid public charging network on major routes. It also comes with a home charging kit, which means the end of filling up at petrol stations.
The Jaguar i-Pace dispels one further myth about EVs: that they don’t have much power under the hood. A test drive around Gauteng revealed not only a gutsy engine, but acceleration on a par with anything in its class, and enough horsepower to enhance the safety of almost any overtaking situation.
Specs for the Jaguar i-Pace include:
- All-wheel drive
- Twin motors with a combined 294kW and 696Nm
- 0-100km/h in 4.8s
- 90kWh Lithium-ion battery, delivering up to 470km range
- Eight-year/160 000km battery warranty
- Two-year/34 000km service intervals
Click here to read about BMW’s self-driving technology, and how Datsun makes smart technology affordable.