In simple terms, AI is a machine’s ability to reason and to make decisions based on that reasoning, much as human beings do. AI systems learn and evolve over time, so in theory AI can improve itself: the software becomes the software developer.
This creates the potential for exponential gains in analytical and automated processes through AI. Because of this, we stand at a critical juncture in the AI journey – a moment in time where we get to define not just what AI can do, but how it does it.
Boosting productivity and unlocking growth
As it goes mainstream – thanks to the cloud, deep learning, and big data – AI will boost productivity and unlock economic growth. It will transform the workplace and change the shape, look and feel of many industries, including health, transport, and manufacturing. But for some, the rise of AI conjures images from the Terminator films or the Westworld TV series. In these stories, humans are at the mercy of faster, stronger, smarter systems with no ethical hang-ups. These narratives are clear on the problem with AI as they imagine it: no humanity, no heart.
Exploring ethics within capabilities
The ethics of AI goes beyond just regulation and legislation. It’s fundamentally about creating an operating framework that limits and directs the priorities of an AI system.
A real-world example is how one might program a driverless motor vehicle to treat an imminent crash. Should the system act to save its own passenger or should it prioritise the life or safety of a pedestrian? We need to know where we stand on these kinds of issues, to tell learning, thinking machines how they should handle them. If AI can give us natural language interaction, what are the rules we put in place to manage its responses, or to ensure it doesn’t discriminate against non-native English speakers, for example? If an AI business analytics system can unlock new sales techniques or customer journeys, are these ethical and fair for customers? What does the system do with the private and personal data it collects before, during and after these interactions?
There are myriad concerns at play once you scratch beneath the surface. At Microsoft we take this responsibility extremely seriously. In fact, one of our three core pillars in this field is: “developing a trusted approach so that AI is developed and deployed in a responsible manner”. This relates directly to the principles of fairness, accountability, transparency and ethics (or FATE) that guide us in ensuring our AI systems are fair, reliable and safe, inclusive, transparent and accountable, and private and secure.
Of course, principles are only as good as the processes that flow from them. Take inclusivity, for example: we believe that to achieve AI that is inclusive, we must nurture inclusivity and diversity in the teams creating the systems – and ensure that the output is just as inclusive. These are the kinds of concerns that our internal advisory committee examines to help ensure our products adhere to these principles.
The bigger picture
We must also be aware that we are not the only player in the game – that AI advances will happen across companies, NGOs and countries. This is where the role of leadership, and the guidance of community, will be critical. We are an active participant in AI-related forums and organisations, such as the Partnership on AI, for this exact reason – and we encourage all AI players to get involved and help us develop the best practices for AI.
Our approach to AI is grounded in, and consistent with, our company mission to help every person and organisation on the planet to achieve more.
If we remain true to this – as we always strive to be – then we must also consider how to mitigate any potential downsides of technological advancement. One source of fear for many is the idea that AI will change our workplaces and, in certain cases, eliminate jobs. Mitigating this will mean nurturing new skills and preparing the workforce (and those who will soon join it) for the future of work. The transformative power of AI will also mean more regulation from governments across the globe – and across the progressive-conservative spectrum. This will bring the private and public sectors into closer collaboration, so AI providers must be prepared to engage, to train, to advocate, and to listen as we move towards a consensus on the values that we inculcate in AI systems.
Fear not – we’ve found the sweet spot
Some people will always fear the unknown, and others will always stride forward in pursuit of progress. The sweet spot lies between them – in the power of AI to unlock creativity, potential and insight while still behaving in an ethical and responsible manner. Put aside the scary chapters of a science-fiction future for a moment. There is another icon of pop culture that applies: Mary Shelley’s classic tale of Dr Victor Frankenstein and his monster. In the novel, the doctor is driven by ambition and ego to create a being assembled from parts and reanimated into life. But the doctor is horrified by the creature he creates and abandons it rather than guiding it into the new life it finds itself in – ultimately with deadly consequences.
The spectre of that ghoulish creature looms large in our minds, but – as the novel so wonderfully conveys – the real monster in Frankenstein is the doctor, the flawed man who creates a life without consideration of the chain of events he has set in motion. Similarly, those of us working in AI today need to be sure that we give our own “creation” firm rules and guidelines for operating in the world.
To avoid becoming the Doctor-monster of Shelley’s nightmare, we need to put the heart into the machine.
Revealing the real cost of ‘free’ online services
A free tool from Finnish cybersecurity provider F-Secure reveals the real cost of using “free” services from Google, Apple, Facebook, and Amazon, among others.
What do Google, Facebook, and Amazon have in common? Privacy and identity scandals. From Cambridge Analytica to Google’s vulnerability in Google+, the amount of personal data sitting on these platforms is enormous.
Cybersecurity provider F-Secure has released a free online tool that helps expose the true cost of using some of the web’s most popular free services. And that cost is the abundance of data that has been collected about users by Google, Apple, Facebook, Amazon Alexa, Twitter, and Snapchat. The good news is that you can take back your data “gold”.
F-Secure Data Discovery Portal sends users directly to the often hard-to-locate resources provided by each of these tech giants that allow users to review their data, securely and privately.
“What you do with the data collection is entirely between you and the service,” says Erka Koivunen, F-Secure Chief Information Security Officer. “We don’t see – and don’t want to see – your settings or your data. Our only goal is to help you find out how much of your information is out there.”
More than half of adult Facebook users, 54%, adjusted how they use the site in the wake of the scandal that revealed Cambridge Analytica had collected data without users’ permission.* But the biggest social network in the world continues to grow, reporting 2.3 billion monthly users at the end of 2018.**
“You often hear, ‘if you’re not paying, you’re the product.’ But your data is an asset to any company, whether you’re paying for a product or not,” says Koivunen. “Data enables tech companies to sell billions in ads and products, building some of the biggest businesses in the history of money.”
F-Secure is offering the tool as part of the company’s growing focus on identity protection that secures consumers before, during, and after data breaches. By spreading awareness of the potential costs of these “free” services, the Data Discovery Portal aims to show users that securing their data and identity is more important than ever.
A recent F-Secure survey found that 54% of internet users over 25 worry about someone hacking into their social media accounts.*** Data is only as secure as the networks of the companies that collect it, and the passwords and tactics used to protect our accounts. While the settings these sites offer are useful, they cannot eliminate the collection of data.
Koivunen says: “While consumers effectively volunteer this information, they should know the privacy and security implications of building accounts that hold more potential insight about our identities than we could possibly share with our family. All of that information could be available to a hacker through a breach or an account takeover.”
However, there is no silver bullet for users when it comes to permanently locking down their data or hiding it from the services they choose to use.
“Default privacy settings are typically quite loose, whether you’re using a social network, apps, browsers or any service,” says Koivunen. “Review your settings now, if you haven’t already, and periodically afterwards. And no matter what you can do, nothing stops these companies from knowing what you’re doing when you’re logged into their services.”
***Source: F-Secure Identity Protection Consumer (B2C) Survey, May 2019, conducted in cooperation with survey partner Toluna. Nine countries (USA, UK, Germany, Switzerland, the Netherlands, Brazil, Finland, Sweden, and Japan), 400 respondents per country = 3,600 respondents, aged 25+.
WhatsApp comes to KaiOS
By the end of September, WhatsApp will be pre-installed on all phones running the KaiOS operating system, which turns feature phones into smart feature phones. The announcement was made yesterday by KaiOS Technologies, maker of the KaiOS mobile operating system for smart feature phones, and Facebook. WhatsApp is also available for download in the KaiStore, on both 512MB and 256MB RAM devices.
“KaiOS has been a critical partner in helping us bring private messaging to smart feature phones around the world,” said Matt Idema, COO of WhatsApp. “Providing WhatsApp on KaiOS helps bridge the digital gap to connect friends and family in a simple, reliable and secure way.”
WhatsApp is a messaging tool used by more than 1.5 billion people worldwide who need a simple, reliable and secure way to communicate with friends and family. It offers calling and messaging with end-to-end encryption that keeps correspondence private and secure.
WhatsApp was first launched on the KaiOS-powered JioPhone in India in September of 2018. Now, with the broad release, the app is expected to reach millions of new users across Africa, Europe, North America, Southeast Asia, and Latin America.
“We’re thrilled to bring WhatsApp to the KaiOS platform and extend such an important means of communication to a brand new demographic,” said Sebastien Codeville, CEO of KaiOS Technologies. “We strive to make the internet and digital services accessible for everyone and offering WhatsApp on affordable smart feature phones is a giant leap towards this goal. We can’t wait to see the next billion users connect in meaningful ways with their loved ones, communities, and others across the globe.”
KaiOS-powered smart feature phones are a new category of mobile devices that combine the affordability of a feature phone with the essential features of a smartphone. They meet a growing demand for affordable devices from people living across Africa – and other emerging markets – who are not currently online.
WhatsApp is now available for download from KaiStore, an app store specifically designed for KaiOS-powered devices and home to the world’s most popular apps, including the Google Assistant, YouTube, Facebook, Google Maps and Twitter. Apps in the KaiStore are customised to minimise data usage and maximise user experience for smart feature phone users.
KaiOS currently powers more than 100 million devices shipped worldwide, in over 100 countries. The platform enables a new category of devices that require limited memory, while still offering a rich user experience.
* For more details, visit: Meet The Devices That Are Powered by KaiOS
* Also read Arthur Goldstuck’s story, Smart feature phones spell KaiOS