As adoption of biometric authentication increases, it’s important to understand the security methods used to protect biometric data, writes GREG SARRAIL, VP of Solutions Business Development, Biometrics, at HID Global.
Biometric solutions are rapidly becoming the new standard for providing secure and convenient identity verification for consumers and corporations. In recent years, biometric technologies have been adopted to enhance security on mobile devices, secure access to facilities and even validate individual identity within the banking industry. When faced with new technology, many people question the security of the solution. Where does the biometric data reside? Is it protected? Can it be easily accessed? If the data is compromised, can it be used maliciously?
Protect and/or render useless
Biometric fingerprint data is the information obtained by capturing unique features from an individual fingerprint image. There are several ways to protect this information to ensure that it cannot be openly accessed and used for fraudulent purposes. During user authentication, the biometric data collected by the sensor must match the information that was captured during enrolment and stored on a back-end system. Most biometric systems use templates, mathematical representations of biometric data, rather than a raw image of a fingerprint. Templates are much smaller than full images, which decreases the time required to provide a match, minimizes storage requirements and protects user privacy because a fingerprint image cannot be reconstructed from a template. Some systems provide an additional layer of security by encrypting the transport tunnel and even the templates themselves to ensure the data is protected as it moves from the sensor to the back-end system.
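As a toy illustration of the template idea (real templates follow vendor-specific or standardised formats such as ISO/IEC 19794-2, and production matchers are far more sophisticated), a minutiae-style template can be thought of as a small set of feature points rather than an image. All names, coordinates and the threshold below are invented for the sketch:

```python
# Toy sketch of a minutiae-style fingerprint template: a handful of
# feature points (position + ridge angle), not the image itself, so the
# original fingerprint cannot be reconstructed from the stored record.
from dataclasses import dataclass

@dataclass(frozen=True)
class Minutia:
    x: int       # position on the sensor grid
    y: int
    angle: int   # ridge direction in degrees

def match_score(enrolled, presented, tolerance=5):
    """Fraction of enrolled minutiae with a nearby counterpart in the probe."""
    hits = 0
    for m in enrolled:
        for p in presented:
            if (abs(m.x - p.x) <= tolerance and
                abs(m.y - p.y) <= tolerance and
                abs(m.angle - p.angle) <= tolerance):
                hits += 1
                break
    return hits / len(enrolled)

enrolled = [Minutia(10, 12, 45), Minutia(40, 33, 90), Minutia(25, 50, 130)]
presented = [Minutia(11, 12, 44), Minutia(39, 34, 92), Minutia(70, 70, 10)]
score = match_score(enrolled, presented)
# A deployment would accept the claim only above a tuned threshold.
accepted = score >= 0.6
```

Because two captures of the same finger never align exactly, real matchers score similarity within tolerances, as sketched above, rather than testing for equality.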
Additional security methods can be deployed depending on the specific use case. For example, in an ATM setting, a user’s biometric information can be augmented with additional data in a uniform way before it is stored. This security practice is called “salting” and is done by combining the individual’s PIN with the fingerprint data prior to storage. When verifying the biometric information, the same PIN is used with the same salting algorithm to produce a match. The advantage of this approach is that the back-end database does not contain an image of a fingerprint or even a standard template, but rather the combined “salted” template. This approach increases both the security and privacy of a system.
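One way to sketch the salting idea described above is a keyed digest of the template, with the PIN as the key, so the database holds neither the image nor a standard template. This is a minimal illustration, not HID’s actual scheme; the function names and PIN are invented:

```python
# Sketch of PIN "salting": the stored record is an HMAC of the template
# keyed with the user's PIN, so the database holds neither the fingerprint
# image nor a standard template. Names and values are illustrative only.
import hmac
import hashlib

def salted_record(pin: str, template: bytes) -> bytes:
    # HMAC keyed with the PIN stands in for the salting algorithm.
    return hmac.new(pin.encode(), template, hashlib.sha256).digest()

stored = salted_record("4821", b"example-template-bytes")

def verify(pin: str, template: bytes, stored: bytes) -> bool:
    # Re-derive with the presented PIN and compare in constant time.
    return hmac.compare_digest(salted_record(pin, template), stored)

right_pin = verify("4821", b"example-template-bytes", stored)   # True
wrong_pin = verify("0000", b"example-template-bytes", stored)   # False
```

A caveat: because biometric captures vary between readings, production schemes cannot use an exact digest like this one; they rely on error-tolerant constructions (fuzzy extractors and similar biometric cryptosystems) that preserve the same property of storing only a combined, non-reversible record.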
An alternate approach is to eliminate the back-end database altogether by placing the secured biometric information on a card that is carried by the user. The new South African National ID, for example, is an identity card that securely stores an individual’s unique biometric fingerprint information, captured during the enrolment process and written to the card. This card is then presented at the time of verification. After the individual places a finger on the sensor, the information is matched locally against the data stored on the card. No database needs to be queried; the transaction simply confirms that the identity of the user matches the identity stored on the card. This approach reduces the reliance on the back-end database and on external transmission security.
Biometrics is the measurement of physiological characteristics; characteristics that are unique to each individual. Facial characteristics are plainly available — this is how people recognize each other, after all — and fingerprints are left behind at every restaurant, subway rail or door that we touch. A secure system must ensure that an individual, and only that individual, can use his or her own biometric data to authenticate. Thus, it is not enough to simply match biometric characteristics against enrolled data, since fingerprint data itself cannot be kept secret. A secure fingerprint system will evaluate whether the finger being presented is real or simply a falsified representation of actual fingerprint data. This capability is called liveness detection and it provides an important way to secure biometric information. Liveness detection makes it far harder for a fraudster to use a fake finger or replay stolen biometric data, since the data is useless without a live finger. Whichever combination of security methods is used to secure your identity, the ultimate goal is to render biometric data useless if a perpetrator were to access it.
Verify, not identify
In the non-criminal setting, biometrics is typically used to verify an individual and not to identify an individual. To verify a person’s identity the goal is to confirm with the highest level of assurance that the person is who he or she claims to be. Commercial applications often use demographic information, account numbers, card numbers or digital certificates in addition to the fingerprint data to determine a match.
Criminal systems typically don’t have any other information aside from the fingerprint, or partial fingerprint, and therefore must determine an identity with only the biometric data. This process utilizes a large back-end database to compare individual unique features of a fingerprint and to find probable matches among a stored database of fingerprint templates. This process is time intensive and expensive and is not often used in a commercial setting.
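The contrast between verification (a 1:1 check against a claimed identity) and identification (a 1:N search of an entire database) can be sketched in a few lines. The matcher, identities and threshold below are toy stand-ins, not any real system:

```python
# Illustrative contrast between verification (1:1) and identification (1:N).
# similarity() is a toy stand-in for a real template matcher.

def similarity(a: str, b: str) -> float:
    """Fraction of positions where two equal-length 'templates' agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

database = {"alice": "AABBA", "bob": "ABABB", "carol": "BBBAA"}
THRESHOLD = 0.8

def verify(claimed_id: str, probe: str) -> bool:
    # 1:1 -- compare only against the claimed identity's enrolled template.
    return similarity(database[claimed_id], probe) >= THRESHOLD

def identify(probe: str):
    # 1:N -- search the whole database; cost grows with its size.
    best = max(database, key=lambda uid: similarity(database[uid], probe))
    return best if similarity(database[best], probe) >= THRESHOLD else None
```

The asymmetry in cost is the point: verification touches one record, while identification must score the probe against every enrolled template, which is why large-scale 1:N search is the expensive, time-intensive process described above.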
Biometric security systems are as unique as fingerprints. Yet, good biometric systems combine the use of fingerprint templates with liveness detection to validate the identity of the right individual. Successful biometric systems are designed in accordance with the specific use case and with the desired results in mind: secure, convenient and reliable authentication that properly verifies the right individuals and rejects the wrong ones.
IoT at starting gate
South Africa is already past the Internet of Things (IoT) hype cycle and well into the mainstream, writes MARK WALKER, associate vice president of Sub-Saharan Africa at International Data Corporation (IDC).
Projects and pilots are already becoming a commercial reality, tying neatly into the 2017 IDC prediction that 2018 would be the year when the local market took IoT mainstream. Over the next 12-18 months, it is anticipated that IoT implementations will continue to rise in both scope and popularity. Already 23% are in full deployment, with 39% in the pilot phase. The value of IoT has been systematically proven, and yet its reputation remains tenuous – more than 5% of companies are reluctant to put their money where the trend is – thanks to the shifting sands of IoT perception and success rate.
There are several reasons why IoT implementations fail. The biggest is that organisations don’t know where to start. They know that IoT is something they can harness today and that it can be used to shift outdated modalities and operations. They are aware of the benefits and the case studies. What they don’t know is how to apply this knowledge to their own journey so their IoT story isn’t one of overbearing complexity and rising costs.
Another stumbling block is perception. Yes, there is the futuristic potential with the talking fridge and intelligent desk, but this is not where the real value lies. Organisations are overlooking the challenges that can be solved by realistic IoT, the banal and the boring solutions that leverage systems to deliver on business priorities. IoT’s potential sits within its ability to get the best out of assets and production efficiencies, solving problems in automation, security, and environment.
In addition to this, there is a lack of clarity around return on investment, uncertainty around the benefits, a lack of executive leadership, and concerns around security and the complexities of regulation. Because IoT is an emerging technology there remains a limited awareness of the true extent of its value proposition and yet 66% of organisations are confident that this value exists.
This percentage poses both a problem and an opportunity. On one hand, it showcases the local shift in thinking towards IoT as a technology worth investing in. On the other hand, many companies are seeing the competition invest and leaping blindly in the wrong direction. Stop. IoT is not the same for every business.
It is essential that every company makes its own case for IoT based on its needs and outcomes. Does agriculture have the same challenges as mining? Does one mining company have the same challenges as another? The answer is no. Organisations that want their IoT investment to succeed must reject the idea that they can pick up where another has left off. IoT must be relevant to the business outcome that it needs to achieve. While some use cases may apply to most industries based on specific circumstances, there are different realities and priorities that will demand a different approach and starting point.
Ask – what is the business problem right now and how can technology be leveraged to resolve it?
In the agriculture space, there is a need to improve crop yields and livestock management, improve farm productivity and implement environmental monitoring. In the construction and mining industry, safety and emergency response are a priority alongside workforce and production management. Education shifts the lens towards improving the delivery and quality of education, access to advanced learning methods and reducing the costs of learning. Smart cities want to improve traffic and efficiently deliver public services, while healthcare is focusing on wellness, reducing hospital admissions, and the security of assets and inventory management.
The technology and solutions selected must speak to these specific challenges.
If there are no insights used to create an IoT solution, it’s the equivalent of having the fastest Ferrari on Rivonia Road in peak traffic. It makes a fantastic noise, but it isn’t going to move any faster than the broken-down sedan in the next lane. Everyone will be impressed with the Ferrari, but the amount of power and the size of the investment mean nothing. It’s in the wrong place.
What differentiates the IoT successes is how a company leverages data to deliver meaningful value-added predictions and actions for personalised efficiencies, convenience, and improved industry processes. To move forward, organisations need to focus on the business outcomes and not just the technology. They need to localise and adapt by applying context to the problem that’s being solved, and explore innovation through partnerships and experimentation.
ERP underpins food tracking
The food traceability market is expected to reach almost $20 billion by 2022 as increased consumer awareness, strict governance requirements, and advances in technology result in growing standardisation of the segment, says STUART SCANLON, managing director of epic ERP.
As in any data-driven environment, one of the biggest enablers of this is an integrated enterprise resource planning (ERP) solution.
As the name suggests, traceability is the ability to track something through all stages of production, processing, and distribution. When it comes to the food industry, traceability must also enable stakeholders to identify the source of all food inputs that can include anything from raw materials, additives, ingredients, and packaging.
Considering the wealth of data that all these facets generate, it is hardly surprising that systems and processes need to be put in place to manage, analyse, and provide actionable insights. With traceability enabling corrective measures to be taken (think product recalls), having an efficient system is often the difference between life or death when it comes to public health risks.
Sceptics argue that traceability done correctly simply requires an extensive data warehouse, but the reality is quite different. Yes, there are standard data records to be managed, but the real value lies in how all these components are tied together.
ERP provides the digital glue to enable this. With each stakeholder audience requiring different aspects of traceability (and compliance), it is essential for the producer, distributor, and every other organisation in the supply chain, to manage this effectively in a standardised manner.
With so many different companies involved in the food cycle, many using their own proprietary systems, just consider the complexity of trying to manage traceability. Organisations must not only contend with local challenges, but with global ones as well, as the import and export of food are big business drivers.
So, even though traceability is vital to keep track of everything in this complex cycle, it is also imperative to monitor the ingredients and the factories where items are produced. Expansive solutions that track the entire process from ‘cradle to grave’ are an imperative. Not only is this vital from a safety perspective, but from cost and reputational management aspects as well. Just think of the recent listeriosis issue in South Africa and the impact it has had on all parties in that supply chain.
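Cradle-to-grave tracking ultimately comes down to recording which input lots went into which output lots. A toy model of that record makes both directions of a trace concrete: the lot IDs and structure below are invented for illustration, not a real ERP schema:

```python
# Toy lot-level traceability: each produced lot records the input lots it
# consumed. From that one record we can trace backward to sources
# ("cradle") or forward to affected products for a recall ("grave").
# Lot IDs are illustrative, not from any real system.
inputs_of = {
    "ham-L7":   ["pork-L3", "spice-L1", "pack-L9"],
    "pork-L3":  ["farm-L2"],
    "pizza-L4": ["ham-L7", "dough-L5"],
}

def trace_back(lot, seen=None):
    """All upstream lots that went into `lot` (cradle direction)."""
    seen = set() if seen is None else seen
    for src in inputs_of.get(lot, []):
        if src not in seen:
            seen.add(src)
            trace_back(src, seen)
    return seen

def recall_scope(bad_lot):
    """`bad_lot` plus all downstream lots that consumed it (grave direction)."""
    affected = {bad_lot}
    changed = True
    while changed:
        changed = False
        for lot, srcs in inputs_of.items():
            if lot not in affected and affected.intersection(srcs):
                affected.add(lot)
                changed = True
    return affected
```

In a recall scenario like the listeriosis case, `recall_scope` is the critical query: given one contaminated lot, it identifies every downstream product that must be pulled, which is only possible if every link in the chain recorded its inputs in a standardised way.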
Thanks to increasing digital transformation efforts by companies in the food industry, traceability is becoming a more effective process. It is no longer a case of using on-premise solutions that can be compromised, but of using hosted ones that provide more effective fail-safes.
In a market segment that requires strict compliance and regulatory requirements to be met, cloud-based solutions can provide everyone in the supply chain with a more secure (and tamper-resistant) solution than many of the legacy approaches of old.
This is not to say ERP requires one or the other. Instead, there needs to be a transition between the two scenarios that empowers those in the food supply chain to maximise the insights (and benefits) derived from traceability.
Now, more than ever, traceability is a business priority. Having the correct foundation through effective ERP is essential if a business is to manage its growth and meet legislative requirements into the future.