Artificial intelligence needs more than artificial trust

The technology that makes facial recognition possible is paving the way for machines to recognise feelings, writes ARTHUR GOLDSTUCK



The great irony of artificial intelligence (AI) and devices that recognise our voices, faces and fingerprints is that they are oblivious to our thoughts and feelings.

“We need to rethink our relationship with technology,” says Rana el Kaliouby, co-founder and CEO of an AI company called Affectiva. “Machines know a lot about us but are completely oblivious to our emotional and cognitive states. Yet AI is going to change not only the way we connect with our devices; it will fundamentally change the way we connect and communicate with other human beings.”

She is speaking in a packed session at Dell Technologies World in Las Vegas, where more than 15,000 paying delegates are receiving a deep dive into topics as diverse as cloud computing and sustainability of the oceans. Her concern is that, as much as machines need to win the trust of humans, so humans must also win the trust of machines. That sounds absurd for inanimate objects, but this form of artificial trust will be essential in a future where machines will be expected to assess both our identities and our moods, not to mention our needs.

El Kaliouby earned a PhD in machine learning at Cambridge University, and helped found Affectiva in Boston, USA, to put her research into practice.

“I spent the last 20 years working to build algorithms that understand people’s emotional and cognitive states, and applying them to the technology around us to make it more effective.”

The reality, she discovered, is that as we imbue machines with greater intelligence, we must also imbue ourselves with a greater ethical mission.

“We need a new social contract between humans and machines. It’s a two-way street. Can AI trust humans? And what will it take to have reciprocal trust? There are a lot of examples of where it goes wrong, like the chatbot on Twitter that became racist, a self-driving car that kills people, and a face recognition system that discriminates against people, especially women of colour.

“Sometimes trust is explicit, but most times it is implicit, manifested in subtle interactions like tone of voice and facial expression. The core of that is empathy. People who have higher empathy tend to be more likely to be trusted, and therefore more persuasive, and tend to be more successful in their personal lives.

“We can’t work with people we don’t trust, and I argue it is the same with AI. We have a lot of cognitive intelligence but not enough emotional intelligence. What if a computer could tell the difference between a smile and a smirk? Both involve the lower half of the face but have very different meanings.”

She gives the example of the contrast between physical healthcare and mental healthcare. When people walk into a doctor’s rooms, no one asks what their temperature or blood pressure is; these are simply measured. In mental healthcare, however, the practitioner must ask, typically on a scale of 1 to 10, how much the person is hurting.

“The science of emotions has been around for over 200 years, since Duchenne de Boulogne mapped out stimulation of human muscles, through to a modern facial action coding system. It takes 100 hours of training to become a professional facial analyser. It’s very time-intensive, and it’s not scalable. We use machine learning and big data and tons of computing power to speed up that process. Imagine when that becomes instant.”

The most immediate practical application of the technology is likely to be in the automotive sector, and long before self-driving cars become the norm. However, it is with cars that can switch between human control and self-driving that the technology will come into its own.

“Our system detects four levels of drowsiness. If you are able to detect that in real time, the car can intervene in a number of ways to make it a safer driving experience. It can tell if you are using your cellphone while driving. By detecting eye gaze direction and using object detectors, the system can tell that you’re not keeping your eyes on the road but looking at a smartphone.

“How can a car react if it senses you’re distracted or drowsy? It can start with an alert. If the vehicle is semi-autonomous, it can say ‘I can be a better driver than you’, and it can take over control.
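The escalation El Kaliouby describes — alert first, then take over if the car is semi-autonomous — can be sketched as a simple decision rule. This is a minimal illustration, not Affectiva's actual system; the drowsiness scale, thresholds, and function name are assumptions for the sake of the example.

```python
def decide_intervention(drowsiness_level: int, eyes_on_road: bool,
                        semi_autonomous: bool) -> str:
    """Map a detected driver state to an intervention.

    drowsiness_level runs from 0 (alert) to 3 (severely drowsy),
    echoing the article's four levels of drowsiness.
    """
    if drowsiness_level == 0 and eyes_on_road:
        return "none"        # driver attentive: no intervention
    if drowsiness_level >= 2 and semi_autonomous:
        return "take_over"   # car judges itself the safer driver
    return "alert"           # warn the driver first


print(decide_intervention(0, True, False))   # none
print(decide_intervention(1, False, False))  # alert
print(decide_intervention(3, True, True))    # take_over
```

In a real vehicle the inputs would come from camera-based gaze tracking and facial analysis rather than hand-set values, but the shape of the logic — graded state in, graded response out — is the same.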

“In a few years, with robo-taxis, the car will still need to understand how many people are in the vehicle, what the mood is in there, whether people are stressed or enjoying the ride and, if not, how we can craft the riding experience to make it more enjoyable.”

She points out that luxury car brands are under stress, because their marketing message revolves around the driving experience. Once the owner is no longer driving, the experience will still remain key.

That, however, does not address the subtle ethical concerns that are somewhat more nuanced than a car killing its passengers. Many supposedly cutting-edge systems use Caucasian faces to “train” the algorithms to become intelligent and distinguish between faces. The result is that they have difficulty identifying non-Caucasian faces. Even within this subset, however, there are cultural differences that affect expressions. Affectiva addressed the issue from the start.

“We have amassed 5-billion facial frames from around the world,” says El Kaliouby. “We collect spontaneous facial expressions as people go about their daily activity, and there are numerous cultural and gender differences. Women are more expressive than men, but it differs by culture. So in the UK there is no significant difference, but in the USA there is a 40% difference.

“Our data is diverse not only by gender, age, and race, but also by context, like wearing glasses, or blurry photos. It’s not perfect, but at least we are thinking about it and trying to avoid accidentally discriminating based on ethnicity.”

• Arthur Goldstuck is founder of World Wide Worx and editor-in-chief of Gadget. Follow him on Twitter and Instagram on @art2gee