Where next for humans?

With many people worried about their jobs being taken over by artificial intelligence and machines, SIMON CARPENTER, Chief Technology Adviser at SAP Africa, asks what the future has in store for humankind.

Perhaps it’s because I have less runway ahead of me than behind me, or perhaps it’s because of three and a half decades in the IT industry, but I find myself marvelling at, and sometimes bewildered by, the exponentially accelerating pace and scope of advances in science and technology. Recently, in response to alarming headlines about job destruction and Artificial Intelligence (AI) getting away from us, I’ve been wondering where it is all heading for the apex primate – humankind.

A brief history of mankind

When you look at the mammal that is Homo sapiens in the context of geological time, the 200,000 years we have been around is a vanishingly small slice – roughly 0.004% of Earth’s existence. And yet here we are, living in the Anthropocene, an epoch recently proposed for us by climatologists and geologists. It is so named because, for the first time in the 4.1-billion-year history of life on Earth, we humans, as a species, are changing what happens to and on the planet, rather than simply being the observers and subjects of natural forces.

In the 200,000 years since Homo sapiens first emerged in Africa and spread across the planet, we have evolved into a “reasonably smart” apex primate at the top of the food chain in a closed system called planet Earth (although we have already made our presence felt in other parts of the solar system).

You could argue that we’re only “reasonably smart” because, whilst we are sentient and have consciousness, self-awareness, intellectual capacity, language, moral reasoning and the ability to create, we haven’t yet figured out how to live without degrading our own environment through pollution, over-population, over-exploitation of natural resources and species extinction. Only “reasonably smart” because, whilst we create great art, music, literature, food, science, technologies and new industries, we haven’t yet figured out how to stop warring with each other, to transcend tribalism and racism, or to curb the greed and corruption whereby the few predate upon the many. Only “reasonably smart” because, although we’ve made great strides in medicine and healthcare, we have not yet figured out how to cure dread diseases, prevent obesity, build equitable, inclusive economies or provide universal healthcare.

The greatest show in the universe

These “reasonable smarts” come to us courtesy of arguably the most amazingly complex “thing” in the universe – the human brain. With an estimated 86 to 100 billion neurons and somewhere between 3.4 and 4 quadrillion synapses, it accounts for roughly 2% of our body mass yet consumes around 20% of the oxygen and energy we take in. It is this human brain and its astonishing capacities that keep us alive on a daily basis, that enable us to dominate other animals and that account for all human progress and the massive impact we have had on the planet, despite our having been here for only about 0.004% of Earth’s history. And it is this amazing brain that has helped us to develop and master the technologies that have brought us this far, from fire to fission and everything in between.

Yet, despite this awesomeness, the human brain may not be sufficient to ensure our survival as a species. Whilst there is still much to discover about how the brain works, we do know that it is trapped in a physically constrained space – the skull – and is subject to metabolic limitations. The prefrontal cortex, where we do most of our reasoning, appears able to process no more than five to seven discrete pieces of information at any one time, and the myth of multitasking is just that – a complete myth. Our individual brains, in other words, are ill-equipped to deal with either the size or the dynamism of some of the challenges we now face. That is one of the reasons we find it so hard to execute the dictum, commonly attributed to Einstein, that “No problem can be solved from the same level of consciousness that created it” – it is hard to change consciousness when its seat doesn’t change. The world in which our current brains evolved no longer exists and, according to evolutionary science, any significant change in our capabilities would take somewhere in the region of a million years. So we will need to look elsewhere for solutions to pressing problems such as how to feed an additional two billion people on shrinking amounts of arable land, manage traffic congestion and safety in rapidly urbanising societies, provide sufficient energy for economic development, contain epidemics, and maximise corporate profits without harming society.

The digital brain

Help is at hand. We now stand at the dawn of a Digital Revolution, one that promises socio-economic upheaval as profound as that which followed the agricultural and industrial revolutions before it. Whereas the plough, the steam engine and the production technologies of yesteryear augmented our physical capabilities, this new Digital Revolution, with its data-generation, information-processing and communication technologies, is about augmenting our mental capabilities. Pre-eminent among the multi-faceted technologies that underpin the Digital Revolution is Artificial Intelligence.

As we embed sensors in more and more things in the world (including ourselves), this cyber-physical world creates data at volumes and velocities our brains are simply not equipped to handle, and we must use AI to make sense of it.

Unlike our brains, and courtesy of Moore’s law, silicon-based capabilities can be scaled up in a largely unrestricted fashion – silicon is not bound by the physical limitations of the human skull or by the metabolic need for sleep – and this is enabling us to deliver AI capabilities that were the stuff of science fiction only a few years ago. There is much debate as to when (and whether) AI will exceed human intelligence, with futurists such as Ray Kurzweil (who claims an 86% accuracy rate for the 147 predictions he has made since the 1990s) positing 2045 as the year when the Singularity will occur – the point in time at which AI leads to machines that are smarter than human beings.

But, you may say, AI is not new, for every success there have been many failures, and it is nowhere close to matching human general intelligence. So what is different this time? Well, you would be right on all counts, but the answer to your question is “plenty”.

Advances in technology

The last few years have seen spectacular improvements in capability and affordability on several fronts that feed into Artificial Intelligence. The pace of this Digital Revolution has no historical precedent: advances in science and technology are combinatorial and exponential in nature, intersecting in ways we battle to anticipate and disrupting jobs, industries and societies in the process.

There have been improvements in computing technologies, especially Graphics Processing Units (GPUs), to the point where super-computing and massive memory are affordable and therefore widely available; significant improvements in algorithms, not least of which is Deep Learning; and massive sets of data on which to train new AI models using techniques like Machine Learning and Deep Learning. All of this is set to accelerate as the Internet of Things (IoT) takes hold, allowing us to create and “feed” real-time data into Digital Twins that will represent all sorts of artefacts from the real world (including humans), as the sketch below illustrates. The possibilities are endless and limited only by our imaginations and ethical considerations, the ramifications can be scary, and the process is unstoppable.
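To make the Digital Twin idea a little more concrete, here is a minimal, hypothetical sketch in Python: a software object that mirrors a single physical pump by ingesting (simulated) real-time sensor readings and reporting when they drift past an expected limit. The class name, fields and threshold are illustrative assumptions, not any particular vendor’s product.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpTwin:
    """A toy digital twin: a live software mirror of one physical pump."""
    asset_id: str
    max_temp_c: float = 80.0              # assumed safe operating limit
    readings: list = field(default_factory=list)

    def ingest(self, temp_c: float, vibration_mm_s: float) -> None:
        """Feed one real-time sensor reading into the twin."""
        self.readings.append((temp_c, vibration_mm_s))

    def status(self) -> str:
        """Summarise the twin's current state for operators or other systems."""
        if not self.readings:
            return f"{self.asset_id}: no data yet"
        latest_temp, latest_vib = self.readings[-1]
        avg_temp = mean(t for t, _ in self.readings)
        state = "ALERT" if latest_temp > self.max_temp_c else "OK"
        return (f"{self.asset_id}: {state} | latest {latest_temp:.1f} C, "
                f"vibration {latest_vib:.1f} mm/s, average {avg_temp:.1f} C")

# Simulated IoT feed standing in for real sensor messages.
twin = PumpTwin(asset_id="PUMP-001")
for reading in [(65.2, 1.1), (71.8, 1.4), (84.3, 2.9)]:
    twin.ingest(*reading)
print(twin.status())

In a real deployment the readings would arrive from IoT sensors over a messaging layer rather than a hard-coded list, but the principle is the same: a continuously updated software counterpart of a physical asset that other systems and people can query.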

It is now up to us as individuals, workers, parents, managers, leaders, companies, governments and societies to understand and evaluate these trends and technologies, and to ask: how can we ensure this new technology serves us? How will we apply it to help make the world run better and improve people’s lives?

The (narrow) usefulness of AI

The answer lies in understanding that today’s AIs are very narrow – they can do certain specific tasks, but only those tasks, astonishingly well. So it is about picking the most valuable use cases, and understanding that for tasks where efficiency, repeatability, neutrality, speed and big sets of data are the norm, the narrow intelligence of today’s AI is often superior to humans at getting the job done. It is also about having a mindset of embedding these new tools into existing and new business processes and models in such a way that we take the “work out of work” and free our people up to do the things that only humans can do: imagining, empathising, relating, creating and solving complex problems. This last point is worth emphasising: AI cannot envision, it cannot innovate, it does not empathise, and it cannot synthesise new solutions to complex problems. What it can do is tackle the routine, mundane and dangerous activities that make work a less than stellar experience for millions of people, so that those people can bring their talents to bear on the world’s challenges in a more engaging fashion.

It is about taking tools such as machine learning and applying them to the data you already have (or will generate through new capabilities such as social listening, IoT or visual processing) to generate new insights, to make life and work safer, easier and more productive, and to design innovative competitive capabilities.
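As a minimal sketch of what that can look like in practice – assuming a Python environment with scikit-learn available, and with synthetic numbers standing in for data you already hold – the snippet below trains a simple anomaly detector on historical daily transaction values and flags the unusual days worth a human’s attention.

import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for data you already have, e.g. daily transaction totals.
rng = np.random.default_rng(seed=42)
normal_days = rng.normal(loc=10_000, scale=1_500, size=(300, 1))
unusual_days = np.array([[55_000.0], [250.0]])       # injected oddities
history = np.vstack([normal_days, unusual_days])

# Fit a simple anomaly detector; the contamination rate is an assumption.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(history)                   # -1 = anomaly, 1 = normal

flagged = history[labels == -1].ravel()
print(f"Flagged {flagged.size} unusual days for review: {np.round(flagged, 2)}")

The point is not this particular algorithm: it is that generating insight from data you already hold becomes a repeatable, embedded step in a process rather than an occasional manual exercise.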

As we stand at the beginning of a new age for humanity, one where we can use Artificial Intelligence for good, it is up to us to explore ways to make sure technology serves us well. We don’t yet know where it will take us but we do know that we must get started.

Have you asked yourself how your organisation is using Artificial Intelligence today?

Win a Poster Heater with Gadget and Takealot.com

This winter Gadget and Takealot.com are giving away three Poster Heaters, which look like posters but become heaters when you plug them in.

Three Gadget readers will each win a unit valued at R550. To enter, follow @GadgetZA and @Takealot on Twitter and tell us on the @GadgetZA account how many watts the heater consumes.

What’s the big deal about these heaters? Many of us are struggling to balance soaring electricity costs against the need to keep warm this winter.

However, the recently launched Poster Heater, made by EasyHeat and distributed in South Africa by Takealot.com, is not only one of the most cost-effective electric heaters currently on the market, it is also easy to set up and use.

As the name indicates, it is a poster similar to one you would hang on a wall. But plug it in and it turns into a 300-watt heater. The Poster Heater isn’t designed to heat hallways or large rooms, but rather smaller spaces like a bedroom, a baby’s nursery or a dressing room.
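For a rough sense of running cost, here is a back-of-the-envelope calculation for a 300 W heater, assuming an electricity tariff of about R2 per kWh (your municipal rate will differ):

# Rough running-cost estimate for a 300 W heater.
power_kw = 0.3                # 300 W = 0.3 kW
tariff_rand_per_kwh = 2.00    # assumed tariff; check your own municipal rate
hours_per_evening = 4

cost_per_hour = power_kw * tariff_rand_per_kwh
cost_per_evening = cost_per_hour * hours_per_evening
print(f"About R{cost_per_hour:.2f} per hour, or roughly R{cost_per_evening:.2f} "
      f"for a {hours_per_evening}-hour evening.")

At that assumed tariff it works out to around 60 cents an hour, well below what a conventional 1,500 W to 2,000 W heater would cost to run over the same period.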

It uses radiant heating, which means that it heats up in a couple of minutes and the heat is directed at the objects or people around it, quickly taking the chill out of the air and providing a comfortable ambient temperature.

The other advantage of radiant heating is that it doesn’t dry out the air the way infrared or gas heaters do. Users also don’t have to worry about children or pets getting too close to it because, even though it gets hot, it is safe to touch.

To enter the competition, follow the steps below:

1. Follow @GadgetZA and @Takealot on Twitter. (We will ONLY be accepting entries via Twitter, so please don’t enter through the comments section of this article.)

2. Tell us on Twitter via @GadgetZA, mentioning @Takealot in your post, how many watts the Poster Heater consumes.

3. The competition closes on 31 July 2018.

4. Winners will be notified via Twitter on 1 August and Takealot.com will be in touch to organise delivery.

5. The competition is only open to South African residents.

Happy Emoji Day! Here are 10 reasons to be cheerful

First created by Shigetaka Kurita in 1999, the emoji has become a huge part of everyday communication. Whether you love them or hate them, flying dollar bills, applauding hands and rolling eyes are here to stay.

Scientists suggest that the use of emojis will help us gain the same satisfaction from digital interactions as we enjoy from personal contact.

Almost two decades later, we have over 2,600 unique emojis to express exactly what we feel. Thank you, Mr Kurita! Join HMD, the home of Nokia phones, as we celebrate World Emoji Day on 17 July with these interesting emoji facts:

1. The most popular emoji in use is “Person Shrugging”.

2. The Nokia 3310 was chosen as one of Finland’s first three “national” emojis… it represents “unbreakable”!

3. South Africa’s favourite emoji is the “Kiss and wink”… how sweet, SA!

4. French is the only language where a “smiley” does not top the list of most-used emojis.

5. On average, over 60 billion emojis are sent on Facebook every day.

6. For the first time ever, the Oxford Dictionaries Word of the Year was a pictograph: “Face with Tears of Joy” was crowned Word of the Year in 2015.

7. According to Emojipedia, some of the most requested emojis include an afro, a bagel and hands making a heart.

8. To include all races, a diversity pack was released in 2017.

9. Emojis have become so trendy that the Museum of Modern Art displays the original emoji collection on canvas.

10. In 2009, Herman Melville’s classic Moby Dick was completely translated into emoji.

 
