
Ethics: the heart of AI

Artificial Intelligence (AI) and machine learning (ML) are more than trendy or futuristic topics – they are real computing advances that are playing out in homes and businesses today, writes ZOAIB HOOSEN, Managing Director, Microsoft.


In simple terms, AI is a machine’s ability to reason, and make decisions based on that reasoning, in similar ways to human beings. AI systems learn and evolve over time, so in theory AI can improve itself; the software becomes the software developer.

This creates the potential for exponential gains in analytical and automated processes through AI. Because of this we stand at a critical juncture in the AI journey – a moment in time where we get to define not just what AI can do, but how it does it.

Boosting productivity and unlocking growth

As it goes mainstream – thanks to the cloud, deep learning, and big data – AI will boost productivity and unlock economic growth. It will transform the workplace and change the shape, look and feel of many industries, including health, transport, manufacturing and more.

But for some, the rise of AI conjures images from the Terminator films or the Westworld TV series. In these stories, humans are at the mercy of faster, stronger, smarter systems with no ethical hang-ups. These narratives are clear on the problem with AI as they imagine it: no humanity, no heart.

Exploring ethics within capabilities

The ethics of AI goes beyond just regulation and legislation. It’s fundamentally about creating an operating framework that limits and directs the priorities of an AI system.

A real-world example is how one might program a driverless motor vehicle to treat an imminent crash. Should the system act to save its own passenger, or should it prioritise the life or safety of a pedestrian? We need to know where we stand on these kinds of issues, to tell learning, thinking machines how they should handle them.

If AI can give us natural language interaction, what are the rules we put in place to manage its responses, or to ensure it doesn’t discriminate against non-native English speakers, for example?

If an AI business analytics system can unlock new sales techniques or customer journeys, are these ethical and fair for customers? What does the system do with the private and personal data it collects before, during and after these interactions?
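
To make the idea of an operating framework slightly more concrete, the sketch below shows, in plain Python, one way human-authored constraints could screen an AI system’s proposed actions before they are carried out. It is purely illustrative: the class names, thresholds and rules are hypothetical assumptions for the sake of the example, not part of any Microsoft framework or product.

```python
# Hypothetical sketch only: a toy "operating framework" that screens an AI
# system's proposed actions against explicit, human-authored constraints.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Tuple


@dataclass
class ProposedAction:
    description: str
    uses_personal_data: bool = False
    risk_to_humans: float = 0.0          # 0.0 (none) to 1.0 (severe)
    metadata: dict = field(default_factory=dict)


@dataclass
class EthicsPolicy:
    max_risk_to_humans: float = 0.1      # hard ceiling on acceptable risk
    allow_personal_data: bool = False    # privacy by default

    def review(self, action: ProposedAction) -> Tuple[bool, str]:
        """Return (approved, reason) for a proposed action."""
        if action.risk_to_humans > self.max_risk_to_humans:
            return False, "risk to humans exceeds the policy ceiling"
        if action.uses_personal_data and not self.allow_personal_data:
            return False, "personal data use is not permitted by this policy"
        return True, "approved"


if __name__ == "__main__":
    policy = EthicsPolicy()
    action = ProposedAction("target offer using browsing history",
                            uses_personal_data=True)
    approved, reason = policy.review(action)
    print(approved, "-", reason)
```

The point of the sketch is not the rules themselves but where they live: outside the learning system, written down explicitly, and applied before an action is taken rather than after harm is done.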

There is a myriad of concerns at play once you scratch beneath the surface.

At Microsoft, we take this responsibility extremely seriously. In fact, one of our three core pillars in this field is “developing a trusted approach so that AI is developed and deployed in a responsible manner”. This relates directly to the principles of fairness, accountability, transparency and ethics (or FATE) that guide us in ensuring our AI systems are fair, reliable and safe, inclusive, transparent and accountable, and private and secure.

Of course, principles are only as good as the processes that flow from them. Take inclusivity, for example: we believe that to achieve AI that is inclusive, we must nurture inclusivity and diversity in the teams creating the systems – and ensure that the output is just as inclusive. These are the kinds of concerns that our internal advisory committee examines, to help ensure our products adhere to these principles.

The bigger picture

We must also be aware that we are not the only player in the game – that AI advances will happen across companies, NGOs and countries. This is where the role of leadership, and the guidance of community, will be critical. We are an active participant in AI-related forums and organisations, such as the Partnership on AI, for this exact reason – and we encourage all AI players to get involved and help us develop the best practices for AI.

Our approach to AI is grounded in, and consistent with, our company mission to help every person and organisation on the planet to achieve more.

If we remain true to this – as we always strive to be – then we must also consider how to mitigate any of the potential downsides that might result from technological advancement. One source of fear for many is the idea that AI will change our workplaces and, in certain cases, eliminate jobs. Mitigating this will necessitate nurturing new skills and preparing the workforce (and those who will soon join it) for the future of work.

The transformative power of AI will also mean more regulation from governments across the globe – and across the progressive-conservative spectrum. This will bring private and public sectors into closer collaboration, so AI providers must be prepared to engage, to train, to advocate, and to listen, as we move towards a consensus on the values that we inculcate into AI systems.

Fear not – we’ve found the sweet spot

Some people will always fear the unknown, and others will always stride forward in pursuit of progress. The sweet spot lies between them – in the power of AI to unlock creativity, potential and insight, while still behaving in an ethical and responsible manner.

Put aside the scary chapters of a science fiction future for a moment. There is another icon of pop culture that applies: Mary Shelley’s classic tale of Dr Victor Frankenstein and his monster. In Frankenstein, the doctor is driven by ambition and ego to create a being assembled from parts and reanimated into life. But the doctor is horrified by the creature he creates and abandons it, rather than guiding it and helping it into the new life it finds itself in – ultimately leading to deadly consequences.

The spectre of that ghoulish creature looms large in our minds, but – as the novel so wonderfully conveys – the real monster in Frankenstein is the doctor, the flawed man who creates a life without consideration of the chain of events he has set in motion. Similarly, those of us working in AI today need to be sure that we give our own “creation” firm rules and guidelines for operating in the world.

To avoid becoming the Doctor-monster of Shelley’s nightmare, we need to put the heart into the machine.


Get your passwords in shape

New Year’s resolutions should extend to getting password protection sorted out, writes Carey van Vlaanderen, CEO at ESET Southern Africa.


Many of us have entered the new year with a boatload of New Year’s resolutions. Doing more exercise, fixing unhealthy eating habits and saving more money are all highly respectable goals, but could it be that they don’t go far enough in an era of countless apps and sites clamouring to help you reach your personal goals?

Now, you may want to add a few weightier and yet effortless habits on top of those well-worn choices. Here are a handful of tips for ‘exercises’ that will do your cyber-fitness good.

I won’t pass up on stubborn passwords

Passwords have a bad rap, and deservedly so: they suffer from weaknesses, both in terms of security and convenience, that make them a less-than-ideal method of authentication. However, much of what the internet offers depends on your signing up for this or that online service, and the available form of authentication is almost universally the username/password combination.

As the keys that open online accounts (not to speak of many devices), passwords are often rightly thought of as the first – alas, often only – line of defence that protects your virtual and real assets from intruders. However, passwords don’t offer much in the way of protection unless, in the first place, they’re strong and unique to each device and account.

But what constitutes a strong password? A passphrase! Done right, typical passphrases are generally both more secure and more user-friendly than typical passwords. The longer the passphrase and the more words it packs, the better, with seven words providing a solid start. With each extra character (not to mention each extra word), the number of possible combinations rises exponentially, which makes simple brute-force password-cracking attacks far less likely to succeed, if not well-nigh impossible (assuming, of course, that the service in question does not impose limitations on password input length – something that is, sadly, far too common).
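
To put rough numbers on that, the short Python sketch below counts the combinations available to a seven-word passphrase and generates one with the standard library’s secrets module. The 7 776-word list size is an assumption (the commonly cited size of the EFF “diceware” word list), and the tiny sample word list is a placeholder purely for demonstration; a real word list would of course be far larger.

```python
# Rough illustration of passphrase strength, assuming a word list of 7,776
# words (the commonly cited EFF diceware size). Swap in your own list.
import secrets

WORDLIST_SIZE = 7_776
WORDS = 7

# Each additional word multiplies the search space by the word-list size,
# so the number of possible passphrases grows exponentially with length.
combinations = WORDLIST_SIZE ** WORDS
print(f"{WORDS} words from a {WORDLIST_SIZE}-word list: "
      f"about {combinations:.2e} possible passphrases")

# Generating one, using a cryptographically secure random choice.
# (Placeholder word list for demonstration only.)
sample_words = ["correct", "horse", "battery", "staple",
                "orbit", "lantern", "pepper", "violet"]
passphrase = " ".join(secrets.choice(sample_words) for _ in range(WORDS))
print(passphrase)
```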

Click here to read about making secure passwords by not using dictionary words, using two-factor authentication, and how biometrics are coming to web browsers.



Code Week prepares 2.3m young Africans for future

By SUNIL GENESS, Director Government Relations & CSR, Global Digital Government, at SAP Africa.


On January 6th, 2019, news broke of South African President Cyril Ramaphosa’s plans to announce a new approach to education in his second State of the Nation address, including:

  • A universal roll-out of tablets for all pupils in the country’s 23 700 primary and secondary schools;
  • Computer coding and robotics classes for foundation-phase pupils from grades 1 to 3; and
  • Digitisation of the entire curriculum, including textbooks, workbooks and all teacher support material.

With this, the President has shown South Africa’s response to a global challenge: equipping our youth with the skills they’ll need to survive and thrive in the 21st century digital economy.

Africa’s working-age population will increase to 600 million in 2030 from a base of 370 million in 2010.

In South Africa, unemployment stands at 26.7 percent, but is much more pronounced among youths: 52.2 percent of the country’s 15-24-year-olds are looking for work.

As an organisation deeply invested in South Africa and its future, SAP has developed and implemented a range of initiatives aimed at fostering digital skills development among the country’s youth, including:

AFRICA CODE WEEK

Since its launch in 2015, Africa Code Week has introduced more than 4 million African youth to basic coding.

In 2018, more than 2.3 million youth across 37 countries took part in Africa Code Week.

The digital skills development initiative’s focus on building local capacity for sustainable learning resulted in close to 23 000 teachers being trained in the run-up to the October 2018 events.

Vital to the success of Africa Code Week is the close support it receives from a broad spectrum of public and private sector institutions, including UNESCO YouthMobile, Google, the German Federal Ministry for Economic Cooperation and Development (BMZ), the Cape Town Science Centre, the Camden Education Trust, 28 African governments, over 130 implementing partners and 120 ambassadors across the continent.

SAP’s efforts to drive digital skills development on the African continent form part of a broader organisational commitment to the UN Sustainable Development Goals, specifically Goal 4 (“Ensure quality and inclusive education for all”).

A core component of Africa Code Week is to encourage female participation in STEM-related skills development activities: in 2018, more than 46% of all Africa Code Week participants were female.

According to Africa Code Week Global Coordinator Sunil Geness, female representation in STEM-related fields among African businesses currently stands at 30%, “requiring powerful public-private partnerships to start turning the tide and creating more equitable opportunities for African youth to contribute to the continent’s economic development and success”.

Click here to read more about the Skills for Africa graduate training programme, and about the LEGO League.


