Companies are now less concerned with data backup times than with the integrity of backups and the time taken to restore them. It is for this reason that recovery time and recovery point objectives are becoming more precise than ever, writes MARK BENTKOWER.
When it comes to modern data protection, not all data should be treated the same way. Long gone are the days of just dumping a bunch of files onto a tape overnight and sending it to the vault. Today’s organisations are less concerned about data backup times than they are about ensuring a quick and easy recovery of application data and business services after a natural or human-induced disaster. Recovery time and recovery point objectives are becoming more precise and demanding as Service Level Agreements (SLAs) begin to cover larger amounts of data.
A recent IDC survey of small and medium-sized business users revealed that 67 percent of these firms have a recovery time requirement of less than four hours, while 31 percent have a recovery time requirement of less than two hours. Recovering from multiple media, such as Storage Area Network (SAN) snapshots, hypervisor guests and virtualised applications, is critical to maintaining productivity and avoiding the legal risks and hefty financial penalties that come with broken SLAs. Rapid application recovery is fast becoming the only option, providing organisations with new levels of agility that are critical in today’s information era.
Recognising DR challenges
In a region where serious outages and natural disasters are not uncommon, the lack of a comprehensive Disaster Recovery (DR) plan has the very real potential of threatening the continued existence of some organisations. Many companies in Southeast Asia do not have a cohesive DR strategy, or have implemented DR strategies that cannot sufficiently safeguard them from these business-crippling risks. Below are some of the key DR challenges identified in the region.
Lack of automation: The manual management of information requires a significant investment of time and leaves technical teams simply managing backups and addressing issues as they arise. There is no time to take a nuanced approach based on mission criticality. Manual systems create greater risk of human error, confidential data exposure and information loss. With automated information lifecycle systems, today’s IT teams can focus more on individual SLAs, and should prioritise automation to free up administrators for more demanding tasks.
Use of tape: While tape is fine for slow archival storage, it is too inefficient and slow for the rapid pace of DR restores, especially at the application level. Think about the rapid pace of change at play here. In terms of global data growth, the world generated over 90 percent of extant data in the last two years alone. That’s a game changing statistic. Yet, many organisations in Asia Pacific still rely on tape as a key source of backup, which is hindering their ability to be agile, flexible and react quickly to both crises and market opportunities.
Redundant data: The proliferation of data silos within Asia Pacific organisations is hindering the ability of IT managers to make insight-based decisions and effectively manage large pools of data. This results in increased IT costs, hindered innovation and a segmented view of the business. A Commvault-commissioned survey by IDC found that 40 percent of IT decision makers across APAC report that backup, recovery, data protection and analytics strategies are still managed at a departmental level.
Network bottlenecks: Asia and the Pacific are among the world’s most natural-disaster-prone areas. Of the world’s reported natural disasters between 2004 and 2013, 41.2 percent, or 1,690 incidents, occurred in the Asia-Pacific region alone. Compounding this, Southeast Asia is made up of predominantly under-developed and developing economies with slow and unreliable network connections. In Thailand, for example, businesses lost US$297 million in revenue from network downtime over the past year.
Defining the new state of recovery
So how can companies move past these challenges and adopt a modern approach to DR? Organisations can consider using block-level methods with orchestrated snapshot and streaming recovery across backup data with incremental change capture. This technology captures regular snapshots of only the incremental changes in information (rather than the entire environment every time), which dramatically reduces network impact during data protection operations. Incremental change capture also provides downstream efficiencies in network and storage utilisation by reading and moving only the delta blocks, and storing only the unique changed blocks. This reduces bandwidth and storage requirements for ongoing recovery operations, and improves both the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO).
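The delta-block idea described above can be sketched in a few lines of Python. This is a simplified illustration only, not Commvault’s implementation: the class name, the tiny 4-byte block size and the hash-based deduplication are all assumptions made for the example (real products typically work at the storage or hypervisor layer with much larger blocks).

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real systems use far larger blocks


def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    """Cut the data stream into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]


class IncrementalStore:
    """Keeps one copy of each unique block; snapshots are just lists of block hashes."""

    def __init__(self):
        self.blocks = {}     # content hash -> block bytes (deduplicated store)
        self.snapshots = []  # each snapshot is an ordered list of block hashes

    def snapshot(self, data: bytes) -> int:
        """Record a snapshot; return how many blocks actually had to be transferred."""
        hashes, new_blocks = [], 0
        for block in split_blocks(data):
            h = hashlib.sha256(block).hexdigest()
            if h not in self.blocks:   # only the changed (delta) blocks are stored
                self.blocks[h] = block
                new_blocks += 1
            hashes.append(h)
        self.snapshots.append(hashes)
        return new_blocks

    def restore(self, index: int) -> bytes:
        """Rebuild the full data set for any point-in-time snapshot."""
        return b"".join(self.blocks[h] for h in self.snapshots[index])


store = IncrementalStore()
v1 = b"AAAABBBBCCCCDDDD"
print(store.snapshot(v1))              # first snapshot moves all 4 blocks
v2 = b"AAAAXXXXCCCCDDDD"               # one block changed
print(store.snapshot(v2))              # second snapshot moves only 1 block
print(store.restore(0) == v1)          # either point in time restores in full
```

The second snapshot transfers a single block rather than the whole data set, which is why incremental change capture cuts bandwidth and storage while still allowing a full restore from any recovery point.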
Additionally, organisations can realise the benefits below by including incremental change capture in their checklist as they seek to advance their data management strategy.
– Lower impact on the business as full backups are not required – as much as 90 percent less impact, compared with streaming backup
– Workload computing capacity typically required for backup will be available for other business needs
– An hourly recovery point minimises risk by reducing RPO
– Reduction of data storage space as a single copy of the data can be used for multiple purposes
– Faster data recovery as data is stored in an open format instead of a proprietary format
Innovating to address evolving needs
As mega trends like migration to the cloud, anywhere computing, and the explosive growth of data sweep across all industries, business expectations have also evolved. Businesses have become increasingly intolerant of data loss and services downtime. Redefining traditional DR strategies assures continued availability of information, which is fundamental to maintaining competitive edge and enabling innovation.
* Mark Bentkower, CISSP, Director of Systems Engineering, ASEAN, Commvault.
Samsung unfolds the future
At the #Unpacked launch, Samsung delivered the world’s first foldable phone from a major brand. ARTHUR GOLDSTUCK tried it out.
Everything that could be known about the new Samsung Galaxy S10 range, launched on Wednesday in San Francisco, seems to have been known before the event.
Most predictions were spot-on, including those in Gadget (see our preview here), thanks to a series of leaks so large, they competed with the hole an iceberg made in the Titanic.
The big surprise was that there was a big surprise. While it was widely expected that Samsung would announce a foldable phone, few predicted what would emerge from that announcement. About the only thing that was guessed right was the name: Galaxy Fold.
The real surprise was the versatility of the foldable phone, and the fact that units were available at the launch. During the Johannesburg event, at which the San Francisco launch was streamed live, small groups of media took turns to enter a private Fold viewing area where photos were banned, personal phones had to be handed in, and the Fold could be tried out under close supervision.
The first impression is of a compact smartphone with a relatively small screen on the front – it measures 4.6 inches – and a second layer of phone at the back. With a click of a button, the phone folds out to reveal a 7.3-inch inside screen – the equivalent of a mini tablet.
The fold itself is based on a sophisticated hinge design that probably took more engineering than the foldable display. The result is a large screen with no visible seam.
The device introduces the concept of “app continuity”, which means an app can be opened on the front and, in mid-use, if the handset is folded open, continue on the inside from where the user left off on the front. The difference is that the app will then have far more space for viewing or other activity.
Click here to read about the app experience on the inside of the Fold.
Password managers don’t protect you from hackers
Using a password manager to protect yourself online? Research reveals serious weaknesses…
Top password manager products have fundamental flaws that expose the data they are designed to protect, rendering them no more secure than saving passwords in a text file, according to a new study by researchers at Independent Security Evaluators (ISE).
“100 percent of the products that ISE analyzed failed to provide the security to safeguard a user’s passwords as advertised,” says ISE CEO Stephen Bono. “Although password managers provide some utility for storing login/passwords and limit password reuse, these applications are a vulnerable target for the mass collection of this data through malicious hacking campaigns.”
In the new report titled “Under the Hood of Secrets Management,” ISE researchers revealed serious weaknesses with top password managers: 1Password, Dashlane, KeePass and LastPass. ISE examined the underlying functionality of these products on Windows 10 to understand how users’ secrets are stored even when the password manager is locked. More than 60 million individuals and 93,000 businesses worldwide rely on password managers. Click here for a copy of the report.
Password managers are marketed as a solution to eliminate the security risks of storing passwords or secrets for applications and browsers in plain-text documents. Having previously examined these and other password managers, ISE researchers expected an improved level of security standards preventing malicious credential extraction. Instead, ISE found just the opposite.
Click here to read the findings from the report.