
Encryption will be key to compliance under new laws

With consumers required to divulge personal details to access many apps, ensuring the safety of data has become a collective responsibility. NEIL COSSER, Identity and Data Protection Manager for Africa at Gemalto, believes encryption is key to safeguarding data.

As technology continues to shape how we connect with each other and with brands, personal data has become a highly valuable commodity. With consumers required to divulge personal details to access most of the apps available, ensuring the safety of that data has become a collective responsibility, shared between service providers, app developers and individuals themselves. What does this mean for mobile providers, banks, government and brands, especially as South Africa starts grappling with the Protection of Personal Information Act (PoPIA)? And what does it mean for consumers and corporates doing business across our shores, many of whom are still blissfully unaware of the risks involved?

Driven by relentless news of security breaches and data loss, many governments around the world are considering or are in the process of introducing legislation to help protect the personal data of their citizens. The European Union, for example, adopted the General Data Protection Regulation (GDPR) in April 2016. Significant risks lie ahead for companies that do nothing to change how they protect data, because the new regulation has major implications for all the ways in which data is collected, stored, accessed and secured. Locally, certain sections of the Protection of Personal Information Act (PoPIA) have already commenced (under Proclamation No. R. 25 of 2014).

But what does compliance mean for local businesses?

Given the proliferation of technology and what it has come to mean for companies, businesses must now deploy suitable mechanisms for processing the personal information of employees, customers and other stakeholders, with a view to implementing organisation-wide privacy initiatives that comply with the conditions of the Act. Compliance will affect the processes, the technology and the manner in which stakeholders – particularly employers and employees – handle and process personal information.

According to Michalsons, a well-known provider of legal solutions, the GDPR’s grace period ends on 24 May 2018, making the regulation legally enforceable from 25 May 2018 onwards. Locally, we can expect PoPIA’s grace period to end soon after the GDPR’s. Organisations that have to comply with both PoPIA and the GDPR might focus on complying with the GDPR first and PoPIA second; lessons learned from GDPR compliance can then be applied to PoPIA.

The writing on the wall

The release of Gemalto’s 2016 Breach Level Index (BLI) report offers an intriguing backdrop to the issue of data management – particularly data protection – in the local context. A key takeout from the 2016 report is that there is no arguing away a growing data security crisis, evidenced by the almost 1.4 billion records compromised during 2016. The sad truth is that the real number is almost certainly higher, because most breaches worldwide go unreported. This is particularly worrying given the impact a data breach can have on an organisation’s reputation and, ultimately, its revenue.

The Ponemon Institute’s 2016 Cost of Data Breach Study indicates that the average cost of a data breach to a business now stands at $4 million (an average of $158 per record), with damage to reputation and the loss of customer loyalty hitting the bottom line hardest. In fact, our research revealed that two-thirds (66%) of consumers would be unlikely to do business with organisations responsible for exposing financial and other sensitive information.

It’s all about action

The debate around data protection versus its impact on reputation and revenue is not a new one, but many executives agree that data security is still taken for granted by businesses with large user bases. This was the sentiment shared by the panellists at our Gemalto BLI roundtable, hosted on 28 March 2017 in Johannesburg.

Justin Williams, Executive: Group Information Security at MTN, reiterated that consumer data is a prized commodity and cannot and should not be taken for granted. “There is a concerning lack of regulation in Africa. Beyond the strict requirements of the regulations, what companies really need is to shift to a new data security mindset,” he explained. He added that now is the right time for businesses to start taking steps to prepare for the implementation of the new rules.

Williams’ advice begs the question: what should organisations do to limit their risk of breaches and ensure that consumer data is protected against all odds? The answer is simple: securing the breach is the first port of call. Organisations should consider three factors when building a comprehensive data protection strategy. Firstly, where is data being stored – in a database, on file servers, in virtual environments or in the cloud? Secondly, how and where are encryption keys being secured? Finally, who is accessing the data and, more importantly, how is that access being controlled?

Once these three factors are understood, they can be converted into a three-step approach to data protection: encrypt all sensitive data, securely store and manage the encryption keys and, lastly, control access to the data.
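
Purely as an illustration of those three steps – a minimal sketch rather than Gemalto’s own tooling, assuming the open-source Python cryptography library is available, and with an in-memory key store and role list standing in for a proper key manager and access-control system – the approach could look like this:

    # Step 1: encrypt sensitive data; Step 2: keep the key separate from the data;
    # Step 3: control who may decrypt. Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # Simplified key store: in production this would be an HSM or key management
    # service, never a dictionary living alongside the data it protects.
    KEY_STORE = {"customer-records": Fernet.generate_key()}

    # Simplified access control: a placeholder set of roles allowed to decrypt.
    AUTHORISED_ROLES = {"compliance-officer", "data-protection-admin"}

    def encrypt_record(record: bytes, key_name: str = "customer-records") -> bytes:
        """Encrypt sensitive data before it is stored."""
        return Fernet(KEY_STORE[key_name]).encrypt(record)

    def decrypt_record(token: bytes, role: str, key_name: str = "customer-records") -> bytes:
        """Release plaintext only to authorised roles."""
        if role not in AUTHORISED_ROLES:
            raise PermissionError(f"role '{role}' may not access this data")
        return Fernet(KEY_STORE[key_name]).decrypt(token)

    ciphertext = encrypt_record(b"name=Jane Doe; account=000000000")
    print(decrypt_record(ciphertext, role="compliance-officer"))

The point of the sketch is the separation of duties: the data, the keys and the access decision are treated as three distinct concerns, which is exactly what an enterprise key manager or hardware security module formalises.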

Fail to prepare, prepare to fail

Today’s security strategies are dominated by a singular focus on breach prevention – firewalls, antivirus, threat detection and monitoring. But if history has taught us anything, it is that walls are eventually breached or rendered obsolete.

The next – and last – layers of defence need to surround both the data and the individuals who access it, with end-to-end encryption, authentication and access controls providing the additional measures necessary to protect customer data.

Security professionals will always need to perform specific risk analyses in order to implement the organisational and technical measures required to prevent, detect and block data breaches. Data encryption provides an essential basis for rendering data unintelligible to unauthorised parties. When encryption is combined with other measures, such as secure key management and access controls, these mechanisms provide a robust foundation for achieving compliance with applicable EU data protection laws.

The reality is that our world is quickly becoming an Internet of Things where every person, place, thing and organisation is connected to each other through the Internet. The proliferation of the cloud, digital content, mobile device usage, online banking, e-commerce, and social media means that we are creating, accessing and storing data and conducting transactions in more places than ever before. We simply have more to manage and more places of exposure.

For Joe Pindar, Research & Development Director: Identity & Data Protection at Gemalto, transparency is the best paved road to ensuring consumer trust. Security should be a key consideration for all businesses going forward. Telling customers about the security measures your organisation has put in place to protect their data can go a long way in cementing customer loyalty. “If you are doing something better than the rest of the industry, like encrypting data end-to-end, then you might be seen as a trusted innovator.”

In conclusion…

As we look towards the future of data management, and in order to be ready for upcoming legislative changes, companies need to start taking steps now and change their security mindset around protecting customer data. The signs are obvious: being breached is not a question of “if” but “when”. Companies should move away from the traditional strategy of focusing on breach prevention and towards a ‘secure breach’ approach – accepting that breaches happen and using best-practice data protection to ensure that data is effectively useless when it falls into unauthorised hands. Traditional approaches to data security no longer work, and if companies don’t wake up to this new reality soon, the consumer revolt will come.


What’s left after the machines take over?

KIERAN FROST, research manager for software in sub-Saharan Africa at International Data Corporation, discusses AI’s impact on the workforce.

One of the questions that we at the International Data Corporation are asked is what impact technologies like Artificial Intelligence (AI) will have on jobs. Where are there likely to be job opportunities in the future? Which jobs (or job functions) are most ripe for automation? What sectors are likely to be impacted first? The problem with these questions is that they misunderstand the size of the barriers in the way of system-wide automation: the question isn’t only about what’s technically feasible. It’s just as much a question of what’s legally, ethically, financially and politically possible.

That said, there are some guidelines that can be put in place. An obvious career path exists in being on the ‘other side of the code’, as it were – being the one who writes the code, who trains the machine, who cleans the data. But no serious commentator can leave the discussion there – too many people are simply unable to code, or have no desire to. Put another way: where do the legal, financial, ethical, political and technical constraints on AI leave the most opportunity?

Firstly, AI (driven by machine learning techniques) is getting better at accomplishing a whole range of things – from recognising (and even creating) images, to processing and communicating natural language, completing forms and automating processes, fighting parking tickets, beating the best Dota 2 players in the world and aiding in the diagnosis of diseases. Machines are exceptionally good at completing tasks in a repeatable manner, given enough data and/or enough training. Adding more tasks to the process, or attempting system-wide automation, requires more data and more training. This creates two constraints on the ability of machines to perform work:

  1. machine learning requires large amounts of (quality) data; and
  2. training machines requires a lot of time and effort (and therefore cost).

Let’s look at each of these in turn – and we’ll discuss how other considerations come into play along the way.

Speaking in the broadest possible terms, machines require large amounts of data to be trained to a level to meet or exceed human performance in a given task. This data enables the bot to learn how best to perform that task. Essentially, the data pool determines the output.

However, there are certain job categories which require knowledge of, and then subversion of, the data set – jobs where producing the same ‘best’ outcome would not be optimal. These are typically the jobs referred to as creative pursuits – design, brand, look and feel. To use a simple example: if, pre-Apple, we had trained a machine to design a computer, we would not have arrived at the iMac, and the look and feel of iOS would not have become the predominant mobile interface.

This is not to say that machines cannot create things. We’ve recently seen several ML-trained models on the internet that produce pictures of people who don’t exist – that is undoubtedly creation (of a particularly unnerving variety). The same is true of AI that can produce music. But those models are trained to produce more of what we already recognise as good. Because art is not a science, a machine would likely have no better chance of producing a masterpiece than a human. And true innovation, in many instances, requires subverting the data set, not conforming to it.

Secondly, and perhaps more importantly, training AI requires time and money. Some actions are simply too expensive to automate. These tasks are either incredibly specialised, and therefore do not have enough data to support the development of a model, or very broad, requiring so much data that training the machine becomes economically unviable. There are also other challenges which may arise. At IDC, we refer to the Scope of AI-Based Automation, illustrated in the short sketch after the list below. In this scope:

  • A task is the smallest possible unit of work performed on behalf of an activity.
  • An activity is a collection of related tasks to be completed to achieve the objective.
  • A process is a series of related activities that produce a specific output.
  • A system (or an ecosystem) is a set of connected processes.
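
To make the hierarchy concrete – purely as an illustrative sketch, with class names that are shorthand for this article rather than a formal IDC artefact – it can be modelled in a few lines of Python:

    # Tasks nest into activities, activities into processes, processes into systems.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Task:                 # the smallest unit of work
        name: str
        automatable: bool       # technically and economically feasible to automate?

    @dataclass
    class Activity:             # related tasks completed to achieve an objective
        name: str
        tasks: List[Task] = field(default_factory=list)

    @dataclass
    class Process:              # related activities producing a specific output
        name: str
        activities: List[Activity] = field(default_factory=list)

    @dataclass
    class System:               # a set of connected processes
        name: str
        processes: List[Process] = field(default_factory=list)

    # The medical example discussed below: image interpretation is automatable,
    # but the diagnosis process as a whole still rests with the doctor.
    diagnosis = Process("diagnosis and treatment", [
        Activity("interpret medical images", [Task("classify scan", True)]),
        Activity("synthesise findings and decide", [Task("final diagnosis", False)]),
    ])
    print(all(t.automatable for a in diagnosis.activities for t in a.tasks))  # False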

As we move up the stack from task to system, we find different obstacles. Let’s use the medical industry as an example of how these constraints interact. Medical image interpretation bots, powered by neural networks, exhibit exceptionally high levels of accuracy in interpreting medical images. Their output is used to inform decisions that are ultimately made by a human – an outcome dictated by regulation. Yet even if we removed the regulation, those machines could not automate the entire process of treating the patient. Activity reminders (such as when a patient should return for a check-up, or reminders to follow a drug schedule) can in part be automated, with ML applications checking the patient’s past adherence patterns, but with ultimate decision-making resting with a doctor. Diagnosis and treatment form a process that is still the purview of humans: doctors are expected to synthesise information from a variety of sources – from image interpretation machines to the patient’s adherence to the drug schedule – in order to deliver a diagnosis. This relationship is not only the result of a technicality – there are ethical, legal and trust reasons that dictate this outcome.

There is also an economic reason for this outcome. The investment required to train a bot to synthesise all the data needed for proper diagnosis and treatment is considerable. At the other end of the spectrum, when a patient’s circumstances require a largely new, highly specialised or experimental surgery, a bot is unlikely to have the data required to be sufficiently trained to perform the operation – and even then, it would certainly require human oversight.

The economic point is a particularly important one. To automate the activities in a mine, for example, would require massive investment in what would conceivably be an army of robots. While this may be technically feasible, the costs of such automation would likely outweigh the benefits, with the replacement costs of robots running into the billions. As such, these jobs are unlikely to disappear in the medium term.

Thus, based on technical feasibility alone, our medium-term jobs market seems to hold opportunity in the following areas: the hyper-specialised (for whom not enough data exists to automate), the jack-of-all-trades (for whom the data set is too large to automate economically), the true creative (who exists to subvert the data set) and, finally, those whose job it is to use the data. However, it is not only technical feasibility that we should consider. Too often, the rhetoric would have you believe that the only thing stopping large-scale automation is the sophistication of the models at our disposal, when in fact financial, regulatory, ethical, legal and political barriers are of equal if not greater importance. Understanding the interplay of these factors for a given role in a company is the only way to divine the future of that role.


LG unveils NanoCell TV range

At the recent LG Electronics annual Innofest innovation celebration in Seoul, Korea, the company unveiled its new NanoCell range: 14 TVs featuring ThinQ AI technology. It also showcased a new range of OLED units.

The new TV models deliver upgraded AI picture and sound quality, underpinned by the company’s second-generation α (Alpha) 9 Gen 2 intelligent processor and deep learning algorithm. As a result, the TVs promise optimised picture and sound by analysing source content and recognising ambient conditions.

LG’s premium range for the MEA market is headlined by the flagship OLED TV line-up, which offers a variety of screen sizes: W9 (model 77/65W9), E9 (model 65E9), C9 (model 77/65/55C9) and B9 (model 65/55B9).

NanoCell is LG’s new premier LED brand, the name intended to highlight outstanding picture quality enabled by NanoCell technology. Ensuring a wider colour gamut and enhanced contrast, says LG, “NanoColor employs a Full Array Local Dimming (FALD) backlight unit. NanoAccuracy guarantees precise colours and contrast over a wide viewing angle while NanoBezel helps to create the ultimate immersive experiences via ultra-thin bezels and the sleek, minimalist design of the TV.”

The NanoCell series comprises fourteen AI-enabled models, available in sizes ranging from 49 to 77 inches (models 65SM95, 75/65/55SM90, 65/55/49SM86 and 65/55/49SM81).

The LG C9 OLED TV and the company’s 86-inch 4K NanoCell TV model (model 86SM90) were recently honoured with CES 2019 Innovation Awards. The 65-inch E9 and C9 OLED TVs also picked up accolades from Dealerscope, Reviewed.com, and Engadget.

The α9 Gen 2 intelligent processor used in LG’s W9, E9 and C9 series OLED TVs elevates picture and sound quality via a deep learning algorithm (which leverages an extensive database of visual information), recognising content source quality and optimising visual output.

The α9 Gen 2 intelligent processor is able to understand how the human eye perceives images in different lighting and finely adjusts the tone mapping curve in accordance with ambient conditions to achieve the optimal level of screen brightness. The processor uses the TV’s ambient light sensor to measure external light, automatically changing brightness to compensate as required. With its advanced AI, the α9 Gen 2 intelligent processor can refine High Dynamic Range (HDR) content through altering brightness levels. In brightly lit settings, it can transform dark, shadow-filled scenes into easily discernible images, without sacrificing depth or making colours seem unnatural or oversaturated. LG’s 2019 TVs also leverage Dolby’s latest innovation, which intelligently adjusts Dolby Vision content to ensure an outstanding HDR experience, even in brightly lit conditions.
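
As a toy illustration of the general idea – emphatically not LG’s algorithm, and with made-up numbers – ambient-light-driven tone mapping can be sketched in a few lines of Python: a light-sensor reading lifts the gamma curve so that shadow detail stays visible in a bright room.

    # Toy sketch: scale a tone-mapping curve's brightness with an ambient-light
    # reading, so dark scenes remain discernible in brightly lit rooms.
    def tone_map(pixel_luminance: float, ambient_lux: float) -> float:
        """Map scene luminance (0..1) to display luminance (0..1)."""
        lift = min(ambient_lux / 1000.0, 1.0) * 0.3   # brighter room -> stronger lift
        gamma = 2.2 + lift                            # higher gamma exponent lifts shadows
        return pixel_luminance ** (1.0 / gamma)

    for lux in (50, 300, 900):                        # dim, normal and bright rooms
        print(lux, round(tone_map(0.05, lux), 3))     # a dark pixel is progressively lifted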

LG’s audio algorithm can up-mix two-channel stereo to replicate 5.1 surround sound. The α9 Gen 2 intelligent processor fine-tunes output according to content type, making voices easier to hear in movies and TV shows, and delivering crisp, clear vocals in songs. LG TVs intelligently set levels based on their positioning within a room, while users can also adjust sound settings manually if they choose. LG’s flagship TVs offer the realistic sound of Dolby Atmos for an immersive entertainment experience.

LG’s 2019 premium TV range comes with a new conversational voice recognition feature that makes it easier to take control and ask a range of questions. The TVs can understand context, which allows for more complex requests, meaning users won’t have to make a series of repetitive commands to get the desired results. Conversational voice recognition will be available on LG TVs with ThinQ AI in over a hundred countries.

LG’s 2019 AI TVs support HDMI 2.1 specifications, allowing the new 4K OLED and NanoCell TV models to display 4K content at a remarkable 120 frames per second. Select 2019 models offer 4K high frame rate (4K HFR), automatic low latency mode (ALLM), variable refresh rate (VRR) and enhanced audio return channel (eARC).

To find out more about LG’s latest TVs and home entertainment systems, visit https://www.lg.com/ae.
