How far is the future?

The customer service environment is being shaped by technologies like AI, machine learning and augmented reality, but how long will it take for these to become mainstream? MICHELLE OSMOND from 1Stream sheds some light.

In the movie Big Hero 6, the inflatable Baymax robot is a healthcare companion who can diagnose and suggest treatment based on the 10,000 medical procedures he has learnt, all within a two-second body scan. So, how far off are we from using such technology for customer service?

We are in fact already using artificial intelligence (AI) and machine learning on a daily basis when we use the Uber app to call for a taxi, or when Netflix suggests a series we may enjoy based on our viewing habits.

In the contact centre environment, this technology is being used to automate certain functions to enhance the customer experience, giving rise to the use of customer-facing chatbots and digital assistants that provide an initial layer of support that is accessible 24/7. The customer speaks to a machine and not a human agent.

Instead of going through long menus that force users to choose inadequate options and repeat their queries at every step, the chatbot uses automatic speech recognition (transcription) and text-to-speech (automated responses) to handle the initial contact and deal with basic interactions.
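
Conceptually, this first layer is a simple pipeline: transcribe the caller's speech, match it against an interaction the bot knows, and speak back an automated reply. The sketch below illustrates that flow in Python; speech_to_text and text_to_speech are placeholder functions standing in for whichever ASR and TTS engines a vendor actually uses, and the shuttle example is an illustrative assumption, not 1Stream's implementation.

```python
# Minimal sketch of the first-contact flow described above.
# speech_to_text and text_to_speech are placeholders, not real library calls.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for automatic speech recognition (transcription)."""
    return "I want to book an airport shuttle"


def text_to_speech(reply: str) -> bytes:
    """Stand-in for text-to-speech (the automated spoken response)."""
    return reply.encode("utf-8")


def handle_initial_contact(audio: bytes) -> bytes:
    """Handle the first contact without a human agent."""
    query = speech_to_text(audio)          # transcribe the caller
    if "shuttle" in query.lower():         # a basic interaction the bot knows
        reply = "Certainly. What date and time would you like the shuttle?"
    else:
        reply = "Let me put you through to an agent who can help."
    return text_to_speech(reply)           # speak the automated response


audio_from_caller = b"<audio frames>"
print(handle_initial_contact(audio_from_caller))
```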

Intelligent routing

A chatbot must be able to correctly identify the intentions of the customer, and will have hundreds of possible scenarios available to it. It knows the entities involved, and what kind of immediate help can be provided. Ongoing training of the chatbot enables it to expand the range of interactions it can manage. It must also be able to detect the emotional state of the customer, and based on the interaction, transfer the call to a human representative if necessary.

For instance, a chatbot can handle a basic interaction such as an airport shuttle booking, but it will transfer the call to a human agent if there is a query it cannot handle, such as whether or not the shuttle will be able to accommodate a bicycle.
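A rough sketch of that routing decision, assuming a simple keyword-based intent list and a word-list check of the customer's emotional state rather than any vendor's real models, might look like this:

```python
# Illustrative routing logic: match the query to a known scenario,
# gauge the emotional state, and escalate when there is no match.
# The intents, keywords and "negative words" are assumptions for the example.
from typing import Optional

KNOWN_INTENTS = {
    "book_shuttle": ["book", "booking", "reserve"],
    "cancel_booking": ["cancel", "refund"],
}

NEGATIVE_WORDS = {"angry", "frustrated", "ridiculous", "unacceptable"}


def detect_intent(text: str) -> Optional[str]:
    """Return the first scenario whose keywords appear in the text, if any."""
    words = text.lower().split()
    for intent, keywords in KNOWN_INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return None  # no matching scenario, e.g. the bicycle question


def route(text: str) -> str:
    """Send the interaction to a scenario the bot knows, or to a human."""
    words = set(text.lower().split())
    upset = bool(words & NEGATIVE_WORDS)   # crude emotional-state check
    intent = detect_intent(text)
    if upset or intent is None:
        return "human_agent"
    return intent


print(route("I want to book an airport shuttle"))          # book_shuttle
print(route("Will the shuttle take my bicycle as well?"))  # human_agent
```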

Machine learning

All the information and context from the contact is passed on to the human agent in order to swiftly answer the query and finalise the booking, and the chatbot will stay on the call to learn the correct response for future reference. This is machine learning being used to expand the chatbot’s knowledge.
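In code terms, that hand-off can be pictured as passing a small context object to the agent's desktop and then logging the agent's answer as a new training example. The data structures below are illustrative assumptions, not a real contact-centre API:

```python
# Illustrative hand-off: the whole conversation context travels with the
# escalation, and the agent's answer is stored for later retraining.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Interaction:
    customer_id: str
    transcript: List[str]   # everything said so far, for full context
    intent: str             # what the bot believed the customer wanted


training_examples: List[Tuple[str, str]] = []


def escalate(interaction: Interaction) -> None:
    """Give the human agent the full context of the conversation."""
    print(f"Escalating {interaction.customer_id} ({interaction.intent}):")
    for line in interaction.transcript:
        print("  ", line)


def record_resolution(interaction: Interaction, agent_reply: str) -> None:
    """Log the unanswered question with the agent's answer as training data."""
    training_examples.append((interaction.transcript[-1], agent_reply))


booking = Interaction(
    customer_id="CUST-042",
    transcript=["I want to book an airport shuttle",
                "Will the shuttle take my bicycle as well?"],
    intent="book_shuttle",
)
escalate(booking)
record_resolution(booking, "Yes, the shuttle has space for a bicycle.")
```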

Social media integration

When the power of AI and machine learning is combined with social media integration in the contact centre, powerful customer engagement becomes possible.

When a customer’s luggage does not arrive and they are frustrated, they may turn to the travel company’s Facebook Messenger to complain. A Messenger bot will be able to respond with a view of the full history of the customer journey. The chatbot will be able to detect the tone and urgency of the interaction and will transfer it to a human agent if it is unable to resolve the query effectively.

Augmented Reality

Together, these technologies enable us to link all the data we have for a customer and make it available to both virtual and human agents. This dialogue between the customer, the human agent and the chatbot gives agents real-time access to useful data that was previously out of reach, bringing augmented reality to the heart of the contact centre.

The contact centre agent of the future

This shift to the integration of all channels and the use of chatbots will not make agents redundant, but will rather allow them to focus on developing their communication skills and on managing the more nuanced interactions that chatbots cannot handle. Contact centre agents will become super agents, with sophisticated social interaction and people management skills.

This will make for a better customer experience with swifter responses on whichever channel suits that particular customer best.

CES: Most useless gadgets of all

Choosing the best of show is a popular pastime, but the worst gadgets of CES also deserve their moment of infamy, writes ARTHUR GOLDSTUCK.

It’s fairly easy to choose the best new gadgets launched at the Consumer Electronics Show (CES) in Las Vegas last week. Most lists – and there are many – highlight the LG roll-up TV, the Samsung modular TV, the Royole foldable phone, the Impossible Burger, and the walking car.

But what about the voice-assisted bed, the smart baby dining table, the self-driving suitcase and the robot that does nothing? In their current renditions, they sum up not only what is bad about technology, but how technology for its own sake quickly leads us down the rabbit hole of waste and futility.

The following pick of the worst of CES may well be a thinly veiled attempt at mockery, but it is also intended as a caution against getting caught up in the hype and justification of pointless technology.

1. DUX voice-assisted bed

The single most useless product launched at CES this year must surely be a bed with Alexa voice control built in. No, not to control the bed itself, but to manage the smart home features with which Alexa and other smart speakers are associated. Or that any smartphone with Siri or Google Assistant could handle. Swedish luxury bedmaker DUX thinks it’s a good idea to manage smart lights, TV, security and air conditioning through the bed itself. Just don’t say Alexa’s “wake word” in your sleep.

2. Smart Baby Dining Table 

Ironically, the runner-up comes from a brand that also makes smart beds: China’s 37 Degree Smart Home. Self-described as “the world’s first smart furniture brand that is transforming technology into furniture”, it outdid itself with a Smart Baby Dining Table. This is a baby feeding table with a removable dining chair that contains a weight detector and adjustable camera, to make children’s weight and temperature visible to parents via the brand’s app. Score one for hands-off parenting.

CES: Tech means no more “lost in translation”

Talking to strangers in foreign countries just got a lot easier with recent advancements in translation technology. Last week, major companies and small startups alike showed the CES technology expo in Las Vegas how well their products handled live translation.

Most existing translation apps, like Bixby and Siri Translate, are still in their infancy when it comes to live speech translation, creating a need for dedicated solutions like these:

Babel’s AIcorrect pocket translator

The AIcorrect Translator, developed by Beijing-based Babel Technology, attracted attention as the linguistic king of the show. An advanced application of AI in consumer technology, the pocket translator tackles problems in cross-linguistic communication.

It supports real-time mutual translation in multiple situations between Chinese/English and 30 other languages, including Japanese, Korean, Thai, French, Russian and Spanish. A significant differentiator is that major languages like English are further divided into accents. The translation quality reaches as high as 96%.

It has a touch screen, where transcription and audio translation are shown at the same time. Lei Guan, CEO of Babel Technology, said: “As a Chinese pathfinder in the field of AI, we designed the device in hoping that hundreds of millions of people can have access to it and carry out cross-linguistic communication all barrier-free.” 
