
Featured

Time to prioritise apps

App complexity has become even more confusing as businesses move to a multi-cloud world. But there are practical ways to tackle it, writes IAN JANSEN VAN RENSBURG, Senior Manager: Systems Engineering at VMware Southern Africa


When discussing the impact of technology on the organisation, we’ve typically done so in terms of platforms and infrastructure: on-premise, off-premise, cloud, data centres, networks, edge. And you might measure value and effectiveness in terms of cost optimisation, agility, speed to market, security, compliance, control and choice. What this focus overlooks is what’s actually driving business decisions today – something that, until a few years ago, most people outside of the IT department didn’t really think about – applications.

Everything changed when we, ‘the consumer’, got our hands on the iPhone and its App Store. Now, in only a handful of years, and with Application Marketplaces for every operating system, enterprises are thinking ‘app first’. But not all applications were created equal, and each app’s value must be measured in terms of how core it is to the business. 

So, what’s mission critical, what’s business critical and what’s customer facing? It’s this prioritisation of applications that is ultimately informing IT decisions, whether it’s a mission-critical app that must deliver complete security without compromising performance, or a consumer-facing service, such as a retailer’s mobile commerce offering, that needs the scalability to manage major spikes in use without constantly consuming vast amounts of resource. The type of application is also a major factor. If you have a bespoke app that has sat at the core of your business for many years, like an automated pricing tool for a logistics company, simply lifting and shifting it to the cloud will not work. With access to its data so critical, the decision may be made to keep it in its existing environment for the time being.
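The prioritisation described here can be sketched as a simple mapping from criticality tier to placement policy. This is a hypothetical illustration – the tier names, placement rules and `place` function are assumptions for the sake of the example, not a VMware product or API:

```python
# Hypothetical sketch: mapping application criticality to a placement policy.
# Tier names and rules are illustrative assumptions, not a real product API.

PLACEMENT_RULES = {
    "mission_critical": {"platform": "private_cloud", "max_latency_ms": 50},
    "business_critical": {"platform": "hybrid", "max_latency_ms": 200},
    "customer_facing": {"platform": "public_cloud", "autoscale": True},
}

def place(app_name: str, criticality: str) -> dict:
    """Return a placement decision for an app based on its criticality tier."""
    rule = PLACEMENT_RULES.get(criticality)
    if rule is None:
        raise ValueError(f"Unknown criticality tier: {criticality}")
    return {"app": app_name, **rule}

# A bespoke pricing engine that is core to the business lands on private cloud.
decision = place("pricing-engine", "mission_critical")
```

In practice the rules would weigh many more factors (data gravity, compliance, cost), but the shape of the decision – app attributes in, platform choice out – is the same.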

These are all factors that influence the criteria for choosing the right platform. The challenge is that with each application requiring different operating systems and platforms, and no one platform yet able to offer all benefits without being prohibitively expensive, many organisations find themselves with a multitude of infrastructures and platforms, and a complex application estate hosted in all sorts of places. Unfortunately, many of these applications are unable to move easily across platforms and clouds to where they would be best located and used. Respondents to a recent VMware survey highlighted the scale of the challenge: integrating legacy systems (57%) and understanding new technologies (54%) were two of the biggest obstacles organisations needed to overcome to get the best performance out of this mix of infrastructures. But is there a way of managing this complex landscape with more ease?

Delivering a better experience across multiple platforms

Having a clear strategy and defined approach is key. Take a retail bank, for example. With physical branches as well as mobile applications and online banking services, its infrastructure will mostly be a mix of on-premise and private cloud. With security, regulatory compliance and governance so critical, the unwieldy nature of these systems means that going with tried and trusted approaches is usually more straightforward. However, with new entrants and digital-native disruptors using public cloud providers, unencumbered by legacy systems, established players need to find a way of responding quickly. Banks such as Capital One and the World Bank are deploying public cloud computing for development and testing. In this way, they enjoy the benefits of flexibility, scalability and agility without significant investment, whilst experimenting or using applications that do not draw on legacy data.

For instance, trialling the use of blockchain to streamline letters of credit could require significant resource. As it is a pilot, however, the bank may be less keen to commit to the investment of a fully private cloud environment. Deploying to a public cloud becomes attractive: it provides the necessary infrastructure, the pilot can be run, and if it is deemed a success the decision can be made to move the application over to a private cloud environment. In doing so, the bank has been able to develop, deploy and test quickly, turning around results that allow a decision to be made and, potentially, a new product to be released to the market. If it has not been a success, investment in permanent resource has not been lost.

Another opportunity for a clearly defined approach and strategy is the opening up of banking. Driven by the likes of the Open Banking initiative in the UK and the EU’s Directive on Payment Services (PSD2), more financial institutions are giving API access to third-party developers to build applications and services that consumers or businesses can use to manage their finances across multiple providers. The aim is to provide greater transparency and flexibility to customers, ultimately delivering a better experience. What it means for banks and other financial service providers is having the infrastructure in place to share relevant data easily and securely – again, a mix of private and public cloud environments can support the development of third-party apps without exposing core data or mission-critical services to security risks or non-compliance.
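The core idea – sharing relevant data with third parties without exposing core data – amounts to consent-scoped filtering. As a rough sketch (the field names, consent scopes and `share_with` function below are illustrative assumptions, not any bank’s or standard’s actual API):

```python
# Hypothetical sketch of consent-scoped data sharing under an Open Banking-style
# model: the bank exposes only the fields the customer has consented to share
# with a given third-party app. All names here are illustrative.

ACCOUNT_RECORD = {
    "account_id": "ACC-001",
    "balance": 1523.40,
    "transactions": [{"amount": -45.00, "merchant": "Grocer"}],
    "internal_risk_score": 0.12,  # core data: never shared externally
}

# Which fields each registered third-party app is consented to see.
CONSENT_SCOPES = {
    "budgeting-app": {"balance", "transactions"},
}

def share_with(third_party: str, record: dict) -> dict:
    """Return only the fields the customer consented to share with this app."""
    allowed = CONSENT_SCOPES.get(third_party, set())
    return {k: v for k, v in record.items() if k in allowed}

view = share_with("budgeting-app", ACCOUNT_RECORD)
# The budgeting app sees balance and transactions, never the risk score.
```

A real deployment layers authentication, audit and regulatory controls on top, but the separation shown here is what lets public-cloud-hosted third-party apps coexist with mission-critical private systems.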

Managing talent and avoiding silos

But what does this mean for the bank’s technology team? For starters, it raises the possibility of requiring teams with multiple skillsets or, more likely, separate teams focused on separate platforms. That public cloud might be from AWS, for example, which requires a different type of skillset to the one needed to operate the private cloud, which again might not be relevant for the team managing the legacy infrastructure. IT has long been plagued by silos of teams working on individual, proprietary technology, and left unchecked, this issue will be exacerbated further by the demands of multi-platform infrastructure. The whole point of having a multi-cloud environment, of being able to securely move applications from one environment to another depending on requirements at that time, becomes much more complicated if siloed teams struggle to work together.

And these demands are only going to increase. As more and more enterprises accelerate their digital transformation agendas, they are faced with the challenge of repurposing their sprawling application estates to meet their digital requirements without compromising security. Many are already harnessing multi-cloud environments to enable transformation. The same VMware survey found that 80% of respondents believed that one of the benefits of multi-cloud was improved innovation – and it makes sense; being able to get the best out of multiple types of environment sounds like exactly what most enterprises need to do to unlock the opportunities of digitisation.

Understanding what you need to achieve 

For a multi-cloud deployment to work, enterprises need to understand what they fundamentally require and have the hybrid cloud infrastructure to run and manage those requirements across all environments and devices. The environments used are ultimately the support, the enabler, not the objective itself; that lies with the applications. 

Yet this should also be in a constant state of evolution. As enterprises continue to digitally transform, they need to be continually reviewing and reforming their application estate. It is the ongoing process of choosing which applications are redundant, which need to be retrofitted, which can be completely transformed into cloud-native apps, and which need to be kept in legacy environments for a bit longer, all whilst being able to manage and move workloads as required. By following this approach, and by working with partners with the experience and skills required to deliver infrastructure that can efficiently run different platforms, enterprises can deliver an effective app-first approach, across any number of environments, to drive their digital business goals forward. 
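The ongoing review process described above – retire, retrofit, rebuild or retain – can be sketched as a simple triage. The attribute names and rules here are hypothetical, chosen only to illustrate the four outcomes:

```python
# Hypothetical triage of an application estate into the four dispositions
# described in the text. Attributes and rules are illustrative assumptions.

from enum import Enum

class Disposition(Enum):
    RETIRE = "redundant - decommission"
    RETAIN = "keep in legacy environment for now"
    REBUILD = "transform into a cloud-native app"
    RETROFIT = "retrofit for cloud"

def triage(app: dict) -> Disposition:
    """Map simple attributes of an app to one of the four dispositions."""
    if not app.get("still_used", True):
        return Disposition.RETIRE
    if app.get("tightly_coupled_to_legacy_data"):
        return Disposition.RETAIN
    if app.get("worth_full_rewrite"):
        return Disposition.REBUILD
    return Disposition.RETROFIT

# The bespoke pricing tool from earlier: heavily used, data-bound, so retained.
outcome = triage({"still_used": True, "tightly_coupled_to_legacy_data": True})
```

Real estates need far richer criteria, but running every application through an explicit, repeatable decision like this is what keeps the review continuous rather than one-off.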


Huawei Mate 20 unveils ‘higher intelligence’

The new Mate 20 series, launching in South Africa today, includes a 7.2″ handset, and promises improved AI.


Huawei Consumer Business Group today launches the Huawei Mate 20 Series in South Africa.

The phones are powered by Huawei’s densest and highest performing system on chip (SoC) to date, the Kirin 980. Manufactured with the 7nm process, incorporating the Cortex-A76-based CPU and Mali-G76 GPU, the SoC offers improved performance and, according to Huawei, “an unprecedented smooth user experience”.

The new 40W Huawei SuperCharge, 15W Huawei Wireless Quick Charge, and large batteries work in tandem to provide users with improved battery life. A Matrix Camera System includes a Leica Ultra Wide Angle Lens that lets users see both wider and closer, with a new macro distance capability. The camera system adopts a Four-Point Design that gives the device a distinct visual identity.

The Mate 20 Series is available in 6.53-inch, 6.39-inch and 7.2-inch sizes, across four devices: Huawei Mate 20, Mate 20 Pro, Mate 20 X and Porsche Design Huawei Mate 20 RS. They ship with the customisable Android P-based EMUI 9 operating system.

“Smartphones are an important entrance to the digital world,” said Richard Yu, CEO of Huawei Consumer BG, at the global launch in London last week. “The Huawei Mate 20 Series is designed to be the best ‘mate’ of consumers, accompanying and empowering them to enjoy a richer, more fulfilled life with their higher intelligence, unparalleled battery lives and powerful camera performance.”

The SoC fits 6.9 billion transistors within a die the size of a fingernail. Compared to the Kirin 970, the latest chipset is equipped with a CPU that is claimed to be 75 percent more powerful, a GPU that is 46 percent more powerful and an NPU (neural processing unit) that is 226 percent more powerful. The efficiency of the components has also been elevated: the CPU is claimed to be 58 percent more efficient, the GPU 178 percent more efficient, and the NPU 182 percent more efficient. The Kirin 980 is the world’s first commercial SoC to use Cortex-A76-based cores.

Huawei has designed a three-tier architecture that consists of two ultra-large cores, two large cores and four small cores. This allows the CPU to allocate the optimal amount of resources to heavy, medium and light tasks for greater efficiency, improving the performance of the SoC while enhancing battery life. The Kirin 980 is also the industry’s first SoC to be equipped with Dual-NPU, giving it higher On-Device AI processing capability to support AI applications.



How Quantum computing will change … everything?

Research labs, government agencies (NASA) and tech giants like Microsoft, IBM and Google are all focused on developing quantum theories first put forward in the 1970s. What’s more, a growing start-up quantum computing ecosystem is attracting hundreds of millions of investor dollars. Given this scenario, Forrester believes it is time for IT leaders to pay attention.


“We expect CIOs in life sciences, energy, defence, and manufacturing to see a deluge of hype from vendors and the media in the coming months,” says Forrester’s Brian Hopkins, VP, principal analyst serving CIOs and lead author of a report: A First Look at Quantum Computing. “Financial services, supply-chain, and healthcare firms will feel some of this as well. We see a market emerging, media interest on the rise, and client interest trickling in. It’s time for CIOs to take notice.”

The Forrester report gives some practical applications for quantum computing which help contextualise its potential:

  • Security could massively benefit from quantum computing. Factoring very large integers could break RSA-encrypted data, but could also be used to protect systems against malicious attempts.
  • Supply chain managers could use quantum computing to gather and act on price information using minute-by-minute fluctuations in supply and demand.
  • Robotics engineers could determine the best parameters to use in deep-learning models that recognise and react to objects in computer vision.
  • Quantum computing could be used to discover revolutionary new molecules, making use of the petabytes of data that studies are now producing. This would significantly benefit many organisations in the material and life sciences verticals – particularly those trying to create more cost-effective electric car batteries, which still depend on expensive and rare materials.
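To see why the factoring point matters for RSA, consider the classical approach. Trial division takes time roughly proportional to the square root of the number being factored, which is hopeless for the 2048-bit moduli RSA uses, whereas Shor’s algorithm on a sufficiently large quantum computer would factor them in polynomial time. A toy-sized sketch (the tiny primes here are for illustration only):

```python
# Classical factoring by trial division: feasible for toy numbers, infeasible
# for real RSA moduli. This is what a quantum computer running Shor's
# algorithm would bypass.

def trial_division(n: int) -> int:
    """Return the smallest prime factor of n (n > 1)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# A toy "RSA modulus": the product of two small primes.
p, q = 101, 113
factor = trial_division(p * q)  # recovers 101, exposing the private key
```

Doubling the bit-length of the modulus squares the work for this loop, which is why classical key sizes stay safe today, and why a quantum speed-up would change the picture so dramatically.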



Copyright © 2018 World Wide Worx