
Ethics: the heart of AI

Artificial Intelligence (AI) and machine learning (ML) are more than trendy or futuristic topics – they are real computing advances that are playing out in homes and businesses today, writes ZOAIB HOOSEN, Managing Director, Microsoft.


In simple terms, AI is a machine’s ability to reason and make decisions based on that reasoning, in ways similar to human beings. AI systems learn and evolve over time, so in theory AI can improve itself; the software becomes the software developer.

This creates the potential for exponential gains in analytical and automated processes through AI. Because of this, we stand at a critical juncture in the AI journey – a moment in time where we get to define not just what AI can do, but how it does it.

Boosting productivity and unlocking growth

As it goes mainstream – thanks to the cloud, deep learning, and big data – AI will boost productivity and unlock economic growth. It will transform the workplace, and change the shape, look and feel of many industries, including health, transport, manufacturing, and more. But for some, the rise of AI conjures images from the Terminator films or the Westworld TV series. In these stories, humans are at the mercy of faster, stronger, smarter systems with no ethical hang-ups. These narratives are clear on the problem with AI as they imagine it: no humanity, no heart.

Exploring ethics within capabilities

The ethics of AI goes beyond just regulation and legislation. It’s fundamentally about creating an operating framework that limits and directs the priorities of an AI system.

A real-world example is how one might program a driverless motor vehicle to treat an imminent crash. Should the system act to save its own passenger, or should it prioritise the life or safety of a pedestrian? We need to know where we stand on these kinds of issues, to tell learning, thinking machines how they should handle them.

If AI can give us natural language interaction, what are the rules we put in place to manage its responses, or to ensure it doesn’t discriminate against non-native English speakers, for example?

If an AI business analytics system can unlock new sales techniques or customer journeys, are these ethical and fair for customers? What does the system do with the private and personal data it collects before, during and after these interactions?

There is a myriad of concerns at play once you scratch beneath the surface.

At Microsoft we take this responsibility extremely seriously. In fact, one of our three core pillars in this field is: “developing a trusted approach so that AI is developed and deployed in a responsible manner”. This relates directly to the principles of fairness, accountability, transparency and ethics (or FATE) that guide us in ensuring our AI systems are fair, reliable and safe, inclusive, transparent and accountable, and private and secure.

Of course, principles are only as good as the processes that flow from them. Take inclusivity, for example: we believe that to achieve AI that is inclusive, we must nurture inclusivity and diversity in the teams creating the systems – and ensure that the output is just as inclusive. These are the kinds of concerns that our internal advisory committee examines, to help ensure our products adhere to these principles.

The bigger picture

We must also be aware that we are not the only player in the game – that AI advances will happen across companies, NGOs and countries. This is where the role of leadership, and the guidance of community, will be critical. We are an active participant in AI-related forums and organisations, such as the Partnership on AI, for this exact reason – and we encourage all AI players to get involved and help us develop the best practices for AI.

Our approach to AI is grounded in, and consistent with, our company mission to help every person and organisation on the planet achieve more.

If we remain true to this – as we always strive to be – then we must also consider how to mitigate any of the potential downsides that might result from technological advancement. One source of fear for many is the idea that AI will change our workplaces and – in certain cases – eliminate jobs. Mitigating this will necessitate nurturing new skills and preparing the workforce (and those who will soon join it) for the future of work.

The transformative power of AI will also mean more regulation from governments across the globe – and across the progressive-conservative spectrum. This will bring private and public sectors into closer collaboration, so AI providers must be prepared to engage, to train, to advocate, and to listen, as we move towards a consensus on the values that we inculcate into AI systems.

Fear not – we’ve found the sweet spot

Some people will always fear the unknown, and others will always stride forward in pursuit of progress. The sweet spot lies between them – in the power of AI to unlock creativity, potential and insight, while still behaving in an ethical and responsible manner.

Put aside the scary chapters of a science fiction future for a moment. There is another icon of pop culture that applies – Mary Shelley’s classic tale of Dr. Victor Frankenstein and his monster. In Frankenstein, the doctor is driven by ambition and ego to create a being assembled from parts and reanimated into life. But the doctor is horrified by the creature he creates and abandons it, rather than guiding it and helping it into the new life it finds itself in – ultimately leading to deadly consequences.

The spectre of that ghoulish creature looms large in our minds, but – as the novel so wonderfully conveys – the real monster in Frankenstein is the doctor, the flawed man who creates a life without consideration of the chain of events he has set in motion. Similarly, those of us working in AI today need to be sure that we give our own “creation” firm rules and guidelines for operating in the world.

To avoid becoming the Doctor-monster of Shelley’s nightmare, we need to put the heart into the machine.


Legion gets a pro makeover

Lenovo’s latest Legion gaming laptop, the Y530, pulls out all the stops to deliver a sleek-looking computer at a lower price point, writes BRYAN TURNER


Gaming laptops have become synonymous with thick bodies, loud fans, and rainbow lights. Lenovo’s latest gaming laptop is here to change that.

The unit we reviewed housed an Intel Core i7-8750H processor with an Nvidia GeForce GTX 1060 GPU. It featured dual storage: one bay fitted with a Samsung 256GB NVMe SSD and the other with a 1TB HDD.

The latest addition to the Legion lineup is far more professional-looking than the previous-generation Y520. This trend is becoming more prevalent in the gaming laptop market and appeals to those who want to use a single device for work and play. Instead of sporting flashy colours, Lenovo has opted for an all-black computer body and a monochromatic, white light scheme.

The laptop features an all-metal body with sharp edges and comes in at just under 24mm thick. Lenovo opted to make the Y530’s screen lid a little shorter than the bottom half of the laptop, which allowed for more goodies to be packed in the unit while still keeping it thin. The lid of the laptop features Legion branding that’s subtly engraved in the metal and aligned to the side. It also features a white light in the O of Legion that glows when the computer is in use.

The extra bit of the laptop body facilitates better cooling, and Lenovo has upgraded its Legion fan system from the previous generation. Passive cooling – a type of cooling that relies on the body’s build instead of the fans – handles regular office use without starting up the fans at all. A gaming laptop with good passive cooling is rare to find, and Lenovo has shown that it can be achieved with a good build.

The internal fans start when gaming, as one would expect. They are about as loud as other gaming laptops, but this won’t be a problem for gamers who use headsets.



Serious about security? Time to talk ISO 20000


By EDWARD CARBUTT, executive director at Marval Africa

The looming Protection of Personal Information (PoPI) Act in South Africa and the introduction of the General Data Protection Regulation (GDPR) in the European Union (EU) have brought information security to the fore for many organisations. This, in addition to the ISO 27001 standard that must be adhered to in order to protect information, has caused organisations to scramble to ensure their information security measures are in line with regulatory requirements.

However, few businesses know or realise that if they are already ISO 20000 certified and follow the Information Technology Infrastructure Library’s (ITIL) best practices, they are effectively positioning themselves to meet other standards such as ISO 27001. In doing so, organisations are able to decrease the effort and time it takes to adhere to the policies of this security standard.

ISO 20000, ITSM and ITIL – Where does ISO 27001 fit in?

ISO 20000 is the international standard for IT service management (ITSM) and reflects a business’s ability to adhere to best practice guidelines contained within the ITIL frameworks. 

ISO 20000 is process-based and tackles many of the same topics as ISO 27001, such as incident management, problem management, change control and risk management. It’s therefore clear that if security forms part of ITSM’s outcomes, it should already be taken care of. So why aren’t more businesses looking towards ISO 20000 to assist them in becoming ISO 27001 compliant?

The link to information security compliance

Information security management is a process that runs across the ITIL service life cycle interacting with all other processes in the framework. It is one of the key aspects of the ‘warranty of the service’, managed within the Service Level Agreement (SLA). The focus is ensuring that the quality of services produces the desired business value.

So, how are these standards different?

Even though ISO 20000 and ISO 27001 have many similarities and elements in common, there are still many differences. Organisations should take cognisance that ISO 20000 considers risk one of the building elements of ITSM, but the standard is still service-based. Conversely, ISO 27001 has risk management at its foundation, whereas ISO 20000 encompasses much more.

Why ISO 20000?

Organisations should ask themselves how they will derive value from ISO 20000. In short, ISO 20000 certification gives ITIL ‘teeth’. ITIL is not prescriptive – ISO 20000 is – and without adequate governance controls it is difficult to maintain momentum. ITIL does not insist on continual service improvement – ISO 20000 does. ITIL does not insist on evidence to prove quality and progress – ISO 20000 does. And ITIL itself is not being demanded by business – governance controls, auditability and agility are. The certification verifies an organisation’s ability to deliver ITSM within ITIL standards.

Ensuring ISO 20000 compliance provides peace of mind and shortens the journey to achieving other certifications, such as ISO 27001.

