One of the questions we at International Data Corporation (IDC) are asked most often is what impact technologies like Artificial Intelligence (AI) will have on jobs. Where are there likely to be job opportunities in the future? Which jobs (or job functions) are most ripe for automation? What sectors are likely to be impacted first? The problem with these questions is that they underestimate the barriers standing in the way of system-wide automation: the question is not only about what is technically feasible. It is just as much a question of what is legally, ethically, financially and politically possible.
That said, some guidelines can be put in place. An obvious career path exists on the ‘other side of the code’, as it were – being the one who writes the code, who trains the machine, who cleans the data. But no serious commentator can leave the discussion there – too many people are simply unable to code, or have no desire to. Put another way: where do the legal, financial, ethical, political and technical constraints on AI leave the most opportunity?
Firstly, AI (driven by machine learning techniques) is getting better at accomplishing a whole range of things – from recognising (and even creating) images, to processing and communicating natural language, completing forms and automating processes, fighting parking tickets, beating the best Dota 2 players in the world and aiding in the diagnosis of diseases. Machines are exceptionally good at completing tasks in a repeatable manner, given enough data and/or enough training. Adding more tasks to the process, or attempting system-wide automation, requires more data and more training. This creates two constraints on the ability of machines to perform work:
- machine learning requires large amounts of (quality) data; and
- training machines requires a lot of time and effort (and therefore cost).
Let’s look at each of these in turn – and we’ll discuss how other considerations come into play along the way.
Speaking in the broadest possible terms, machines require large amounts of data to be trained to a level that meets or exceeds human performance in a given task. This data enables the machine to learn how best to perform that task. Essentially, the data pool determines the output.
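The data dependence can be illustrated with a toy experiment (purely illustrative – the task, the model and the numbers below are all our own assumptions, not IDC research): a simple nearest-neighbour classifier gets measurably better at its task as its training pool grows.

```python
import random

random.seed(0)

def true_label(x, y):
    # Hypothetical ground truth the machine must learn:
    # is the point inside a circle of area ~1.57?
    return 1 if x * x + y * y < 0.5 else 0

def sample(n):
    # n random points in the square [-1, 1] x [-1, 1].
    return [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

def nn_predict(train, point):
    # 1-nearest-neighbour: reuse the label of the closest training example.
    px, py = point
    (nx, ny), label = min(
        train, key=lambda t: (t[0][0] - px) ** 2 + (t[0][1] - py) ** 2
    )
    return label

# A fixed test set to measure performance against.
test_set = [(p, true_label(*p)) for p in sample(500)]

results = {}
for n in (10, 100, 1000):
    train = [(p, true_label(*p)) for p in sample(n)]
    correct = sum(nn_predict(train, p) == y for p, y in test_set)
    results[n] = correct / len(test_set)
    print(f"training examples: {n:4d}  ->  accuracy: {results[n]:.2f}")
```

With ten examples the classifier is little better than guessing the boundary; with a thousand it traces it closely – the output is determined by the data pool.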
However, there are certain job categories which require knowledge of, and then subversion of, the data set – jobs where producing the same ‘best’ outcome would not be optimal. In particular, these are the jobs typically referred to as creative pursuits – design, brand, look and feel. To use a simple example: if, pre-Apple, we had trained a machine to design a computer, we would not have arrived at the iMac, and the look and feel of iOS would never have become the predominant mobile interface.
This is not to say that machines cannot create things. We’ve recently seen several ML-trained machines on the internet that produce pictures of people (who don’t exist) – that is undoubtedly creation (of a particularly unnerving variety). The same is true of the AI that can produce music. But those models are trained to produce more of what we already recognise as good. Because art is not a science, a machine likely has no better chance of producing a masterpiece than a human does. And true innovation, in many instances, requires subverting the data set, not conforming to it.
Secondly, and perhaps more importantly, training AI requires time and money. Some actions are simply too expensive to automate. These tasks are either incredibly specialised, and therefore do not have enough data to support the development of a model, or so broad that they would require so much data as to render training the machine economically unviable. Other challenges arise as the scope of automation widens. At IDC, we refer to this as the Scope of AI-Based Automation. In this scope:
- A task is the smallest possible unit of work performed on behalf of an activity.
- An activity is a collection of related tasks to be completed to achieve the objective.
- A process is a series of related activities that produce a specific output.
- A system (or an ecosystem) is a set of connected processes.
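A minimal sketch of this hierarchy (the class names and the `automatable` flag are our own illustrative assumptions, not an IDC specification) shows why automation gets harder as we move up the stack: a unit is only fully automatable if everything beneath it is.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    automatable: bool  # assumed flag: enough data, at a viable cost?

@dataclass
class Activity:
    name: str
    tasks: list  # the related tasks that achieve the objective

    def fully_automatable(self) -> bool:
        return all(t.automatable for t in self.tasks)

@dataclass
class Process:
    name: str
    activities: list  # the related activities that produce the output

    def fully_automatable(self) -> bool:
        return all(a.fully_automatable() for a in self.activities)

# Hypothetical medical example: one automatable task is not enough
# to automate the whole process.
diagnosis = Process(
    "diagnose patient",
    [
        Activity("imaging", [Task("interpret scan", True)]),
        Activity("consultation", [Task("synthesize findings", False)]),
    ],
)
print(diagnosis.fully_automatable())  # -> False
```

One non-automatable task anywhere in the tree blocks automation of every level above it, which is exactly the dynamic the medical example below illustrates.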
As we move up the stack from task to system, we encounter different obstacles. Let’s use the medical industry as an example of how these constraints interact. Medical image interpretation bots, powered by neural networks, achieve exceptionally high accuracy in interpreting medical images. Their output is used to inform decisions that are ultimately made by a human – an outcome dictated by regulation. Yet even if we removed the regulation, those machines could not automate the entire process of treating the patient. Activity reminders (such as when a patient should return for a check-up, or prompts to follow a drug schedule) can in part be automated, with ML applications checking a patient’s past adherence patterns, but ultimate decision-making remains with a doctor. Diagnosis and treatment form a process that is still the purview of humans: doctors are expected to synthesize information from a variety of sources – from image interpretation machines to the patient’s adherence to the drug schedule – in order to deliver a diagnosis. This relationship is not merely a technicality; there are ethical, legal and trust reasons that dictate the outcome.
There is also an economic reason for this outcome. The investment required to train a bot to synthesize all the data needed for proper diagnosis and treatment is considerable. At the other end of the spectrum, when a patient’s circumstances require a largely new, highly specialised or experimental surgery, a bot is unlikely to have the data required to be sufficiently trained to perform the operation – and even then, it would certainly require human oversight.
The economic point is a particularly important one. To automate the activities in a mine, for example, would require massive investment in what would conceivably be an army of robots. While this may be technically feasible, the costs of such automation would likely outweigh the benefits, with the replacement costs of robots running into the billions. As such, these jobs are unlikely to disappear in the medium term.
Thus, based on technical feasibility alone, our medium-term jobs market seems to hold opportunity in the following areas: the hyper-specialised (for whom not enough data exists to automate), the jack-of-all-trades (for whom the data set is too large to economically automate), the true creative (who exists to subvert the data set) and, finally, those whose job it is to use the data. However, technical feasibility is not the only consideration. Too often, the rhetoric would have you believe that the only thing stopping large-scale automation is the sophistication of the models at our disposal, when in fact financial, regulatory, ethical, legal and political barriers are of equal, if not greater, importance. Understanding the interplay of these constraints for a given role in a company is the only way to divine the future of that role.
Nokia 7.2: The sweet-spot for mid-range smartphones
Nokia has hit one of the best quality-to-price ratios with the Nokia 7.2. BRYAN TURNER tested the device.
Cameras are often the main factor in selecting a smartphone today. Nokia is no stranger to the high-end camera smartphone market, and its legacy shows with the latest Nokia 7.2.
In many aspects, the device looks and feels like an expensive flagship, yet it carries a mid-range R6000 price tag. From its vivid PureDisplay technology to an ultra-wide camera lens, it’s quite something to experience this device – especially knowing the price.
Before powering it on, one notices the sleek design. The front features a large 6.3” screen with a 19.5:9 aspect ratio. Like many phones nowadays, it features a notch, but one smaller than the usual earpiece-and-camera cutout: a small notch for the front camera only, with the earpiece hidden in a slim slot just under the outer frame. While it doesn’t have the highest screen-to-body (STB) ratio, the bezels are pretty slim, at an 83.34% STB ratio. It loses some of this to an elegant chin at the bottom that carries the Nokia logo. All of this is protected by Gorilla Glass, which makes the screen a little more difficult to shatter on impact.
It’s encased in a polycarbonate composite outer frame, which looks metal-like but will withstand more knocks than an aluminium frame. On the right side, it features a volume rocker and a power button; on the left side, a Google Assistant button, which starts listening for commands when pressed. Above that button is the SIM and SD card tray. The top houses a very welcome 3.5mm headphone jack; the bottom, a speaker grille and a USB Type-C port. Overall, the positioning of the buttons takes some getting used to, because the Assistant and power buttons are similarly sized, and many smartphones place the power button on the opposite side to the volume rocker.
The back features a frosted Gorilla glass panel, like the front. The frosted design is quite understated and yet another elegant design feature of the device. A fingerprint sensor sits in the middle and, towards the top, the device has a circular camera bump, not too different from the Huawei Mate 30 series. The bump features two lenses, a depth sensor, and a flash. The camera system has been made in partnership with Zeiss optics to produce high-quality photography.
When powering on the device, one is greeted with the Android One logo, which is Nokia’s promise that its users will always be among the first to get the latest Android security and feature updates. This is one of the defining selling points for users considering this device, as it features the purest, unmodified version of Android available.
This, in turn, allows the device to run the latest software by Google, which enables the device to get better over time. It does this using Google’s Artificial Intelligence engine, which learns how one uses the device and optimises apps and services accordingly. That translates into the phone’s battery life actually improving over time, instead of deteriorating as on other smartphones weighed down by battery-hungry apps. The concept was pioneered by Huawei in the Mate 9.
The rear camera is excellent for snapping pictures and features a 48MP Sony sensor for accurate colour reproduction. This puts the device in the league of the Google Pixel and Apple iPhone devices, which also use Sony sensors. By default, the device takes pictures at 12MP – this is what makes the photos look great, as it combines four sensor pixels into one (a technique known as pixel binning) for a high level of sharpness and colour accuracy – but users can bump the resolution up to the full 48MP if they want to zoom in a bit more.
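Pixel binning itself is simple arithmetic: each 2×2 block of sensor pixels is averaged into one output pixel, which is how 48MP becomes 12MP. A rough sketch, with made-up sensor values for illustration:

```python
def bin_2x2(pixels):
    """Average each 2x2 block of sensor values into one output pixel."""
    h, w = len(pixels), len(pixels[0])
    return [
        [
            (pixels[r][c] + pixels[r][c + 1]
             + pixels[r + 1][c] + pixels[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

# A 4x4 "sensor" becomes a 2x2 image: resolution drops by a factor of four,
# while each output pixel averages away per-pixel noise.
sensor = [
    [10, 12, 200, 202],
    [14, 16, 204, 206],
    [50, 52, 90, 92],
    [54, 56, 94, 96],
]
print(bin_2x2(sensor))  # -> [[13.0, 203.0], [53.0, 93.0]]
```

The same factor-of-four reduction is what takes the 48MP sensor down to 12MP output.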
The 8MP wide-angle lens spans 118 degrees and proves extremely useful for getting everyone in the shot. It also offers great colour accuracy. The 5MP depth-sensing lens is purely for portrait mode, which adds a blur effect to the background of a photo. The front features a 20MP selfie camera, which also provides excellent sharpness and a portrait mode.
The most impressive part of this system is the Pro camera setting, which can help take photos from excellent to extraordinary. We managed to get some excellent low light photography by adjusting the shutter speed, ISO, and exposure. The setting is pretty easy to use and it’s worth it for users to learn how it works.
The PureDisplay also helps make photos and video look great. The 7.2’s PureDisplay has a 2280 x 1080 resolution, at roughly 401 pixels per inch (ppi). It also makes use of HDR10 and covers 96% of the DCI-P3 colour gamut, which makes the colours very vibrant. Some of these display features are not found even in some high-end phones on the market, so it’s surprising to see this tech in a mid-range device.
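The ppi figure follows directly from the panel’s resolution and size: the diagonal pixel count (via Pythagoras) divided by the diagonal in inches. A quick check, assuming a 2280 x 1080 panel (Nokia’s listed resolution for the 7.2) and the 6.3-inch diagonal:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    # Diagonal resolution in pixels divided by diagonal size in inches.
    return math.hypot(width_px, height_px) / diagonal_inches

ppi = pixels_per_inch(2280, 1080, 6.3)
print(round(ppi))  # -> 400, in line with the quoted ~401 ppi
```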
At this price, there is one drawback: the processor. It houses a Qualcomm Snapdragon 660, which is neither bad nor good. It performs well in many situations, but begins to stutter on heavier graphical applications like Fortnite and PUBG Mobile. That said, all other applications of the device work perfectly, and multi-tasking is very fluid between regular apps.
At a recommended selling price of R6,000, the Nokia 7.2 is one of the most feature-rich and aesthetically pleasing devices available in this price range.
Voice interface moves digital wars to ‘first mile’
By RICHARD MULLINS, Managing Director for EMEA at Acceleration
Anyone who often travels on the London tube will notice people around them – usually students and young professionals – speaking into their smartphones, even in sections of the underground without Wi-Fi or cellular coverage. They’re not sweet-talking their mobile devices, but queuing up a series of WhatsApp voice messages to be sent to their friends and colleagues as soon as they walk back into an area with an Internet connection.
This shift away from text-based and visual communication to multi-sensory (voice and visual) is one of the most significant trends to emerge from the next wave of artificial intelligence technologies. Many members of Generations X and Y abandoned voice calls for instant messaging once they got smartphones; now, the next generation are becoming more vocal in how they interact with – and through – machines.
We’re already seeing rising adoption of conversational voice interfaces, as young and imperfect as the technology still is. Research from comScore predicts that half of all searches will be performed via voice by 2020, while a study by Voicebot.ai indicates that nearly one in five US adults own a smart speaker or have access to one in their homes.
This trend is one reason the battle for the digital customer is moving rapidly away from the ‘last mile’ to the ‘first mile’. Now that the giants of ecommerce have largely solved the ‘last mile’ challenge of reliable logistics and rapid delivery, they are looking at ways to tighten their grip on the first digital mile, where customers discover and engage with content, products and services.
Raising the stakes
This race to own the customer interface is not new, but the stakes are rising. We already live in a world with two major smartphone platforms (Apple’s iOS and Google’s Android), and now a handful of companies (Google, Facebook, Microsoft, Apple and Amazon) are seeking to own the voice interface with smart devices like speakers, kitchen appliances and home security systems.
Most consumers are today using voice conversation interfaces for simple content requests – Alexa, give me the news headlines; Siri, play my party mix – and the experience can be somewhat clunky. However, technology is improving exponentially, as we saw earlier this year when Google demoed its assistant phoning a hairdresser to make an appointment on behalf of a user.
Such interfaces are likely to become the place where a high proportion of customers are converted and complete transactions in the next few years. In other words, the likes of Apple and Google will have even more power over what consumers see, hear and interact with than they do today. Brands should be thinking about how they will prepare themselves for this future.
One of the first considerations is how they can use voice to engage with customers in an increasingly natural and simple manner. Today, it is usually easy to tell when you are speaking to a virtual assistant or chatbot; in future, it will become harder to tell humans and machines apart, unless you are told.
This is an opportunity to offer personalised service in an automated manner – the human touch at machine scale. Brands that offer the best experiences through their conversational interfaces will have a competitive advantage. This will not just be about the AI driving the interaction, but also about how brands use data to personalise interactions and make them more relevant to customers.
How will you reach your customers?
Brands also need to decide how they will reach their customers in the first place – will they create services for platforms like Alexa and focus on mobile apps? Or will they try to take control of more of the digital first mile themselves? This will be a daunting challenge, but the rewards may be significant since the companies in the digital first mile will control the data and own the customer.
For this reason, we can expect to see those companies with the resources to do so focus on owning more of the customer interface and becoming the gateways to service and commerce for their client base. They will partner with other big brands to create platforms, experiences and digital destinations where customers can purchase a variety of goods and services.
Consider examples such as how Discovery’s Vitality weaves together healthcare, lifestyle brands and financial services, then think about how they might evolve in a digital world. Brands have long cooperated through strategies such as white label products, sponsorship agreements and distribution deals, but the next wave of digital change will take it to a new level.
As this shakes out in the years to come, brands will need to focus on building a technical architecture that enables them to rapidly partner with other brands to roll out innovative solutions and services. They will also need to consider how and where they will capture customer data and which touchpoints they can use to own the customer relationship.
The challenges will not be purely technical in nature. There is the human element of blending AI and people into ‘teams’ that deliver the best possible customer experience. Companies will also need to think about their business models and where they fit into the value chain. Those that align AI and data behind a coherent business strategy will be the ones who will win the first digital mile.