Disaster recovery must be at heart of business

Unless businesses ensure they also take people and processes into account when planning for disasters, they run the risk of not surviving them, writes SAKKIE BURGER, Managing Executive at Business Connexion.

Most companies prioritise restoring IT in the event of a system breakdown. What they do not focus on, however, is the processes in place to ensure the business can continue when automated or digital processes fail – and specifically the role that employees have to play, as they are ultimately the custodians of the processes that drive operations. It is easy, for example, to provide a company with 10 seats to restore its IT systems and get them up and running again, but how do you accommodate a company whose 100 employees have just lost their premises in a disaster? This poses a different challenge, and few companies providing disaster recovery in South Africa have the luxury of keeping that much space available, waiting to be occupied only when disaster strikes.

Although most companies are going the route of digitisation, manual processes still have a fundamental role to play. Take an airline, for example. If its electronic system for checking passengers onto the plane goes down, it has to have a manual back-office process in place to perform this function. It cannot simply ground the aircraft until the electronic system is restored. And herein lies the challenge: not many companies have these contingencies in place, and they are putting themselves, their businesses and, most importantly, their customers at risk.

While many organisations have these failover processes in place, they either do not test them regularly enough or their testing practices are inadequate. Many organisations have testing in place, but they perform only a paper-based test: they confirm that a manual process exists, that the configuration is there and that it is documented, but that is where it ends. There is no actual end-to-end testing – recovering onto real hardware, making sure it works, that the network is connected and that users can actually sign in and check the data. People tend to do disaster recovery tests to satisfy their auditors rather than to make sure the business can continue to run in the event of a disaster.

There are a number of challenges in adopting an adequate disaster recovery strategy. The biggest is cost: you know you have to have it, but also that you might never need it. The second is distance. What is the correct distance for your disaster recovery site, particularly when you take into account incidents that could affect a broader geographical area? Connectivity also comes into play here, because the further your disaster recovery site is from your main site, the more expensive the network connectivity becomes.

Possibly one of the biggest risks companies face is that, while they have disaster recovery processes in place, they tend to run them on equipment that has become redundant or obsolete. Typically, a company has had to upgrade its equipment, so it uses the new technology for production and then runs its disaster recovery on the old machines. The problem is that when it does need to recover, it finds the old equipment is no longer compatible or supported, which means core systems cannot be recovered within reasonable timeframes.

DR often does not get the attention it deserves because it is expenditure that is not directly productive. That is why there is a trend for companies to outsource disaster recovery to a third party, under an agreement that obliges the provider to keep the necessary equipment in place to run the disaster recovery effectively and efficiently.

Companies that are either reviewing their disaster recovery strategy or implementing one for the first time need, as a first step, to understand which of their applications are the most critical. Some applications do not need a disaster recovery contingency, and you can run your business without them. Interestingly, though, five to seven years ago email was not deemed a high-priority application. Today it is the first thing companies want recovered, because it has become mission critical to the running of their businesses.

Times have certainly changed.
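The first step described above – working out which applications are most critical – can be sketched as a simple prioritisation exercise. The sketch below is illustrative only: the application names and recovery time objectives (RTOs, the maximum tolerable downtime per application) are hypothetical, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    rto_hours: float  # recovery time objective: maximum tolerable downtime

def dr_priority(apps: list[App]) -> list[App]:
    """Order applications so the tightest recovery targets come first."""
    return sorted(apps, key=lambda a: a.rto_hours)

# Hypothetical application portfolio
apps = [
    App("payroll", 48),
    App("email", 2),            # mission critical today, as noted above
    App("intranet wiki", 72),   # the business can run without it for days
    App("order processing", 4),
]

for app in dr_priority(apps):
    print(f"{app.name}: recover within {app.rto_hours}h")
```

Ranking by RTO in this way gives the recovery sequence to rehearse during an end-to-end test, with email near the top of the list.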

Companies must also understand the technology involved. You cannot just move a workload from a Unix platform to a Microsoft platform. You must ensure that the work breakdown structures and the standard operating procedures and processes are documented, tested and updated at least twice a year. It is easy to write a process, file it away in a cupboard and do nothing further with it. It needs to be tested rigorously and on a regular basis. And it is not just about testing; it is about change management and fixing problems as and when you are presented with them.

Often change management is the biggest problem in disasters. A disaster happens because something changed, and a change request did not notify the disaster recovery process of that change. If your disaster recovery manual is not up to date, it could significantly increase the time it takes to fix the problem.

Which IoT horse should you back?

The emerging IoT is evolving at a rapid pace, with more companies entering the market. The development of new products and communication systems is likely to continue over the next few years, after which we could begin to see a few dominant players emerge, says DARREN OXLEE, CTO of Utility Systems.

But in the interim, many companies face a dilemma because, in such a new industry, there are so many unknowns about its trajectory. With the variety of options available (particularly regarding the medium of communication), there’s a question of which horse to back.

Many players also haven’t fully come to grips with the commercial models in IoT (specifically, how much it costs to run these systems).

Which communication protocol should you consider for your IoT application? That depends on what you’re looking for. Here’s a summary of the main low-power, wide area network (LPWAN) communications options currently available, along with their applicability:

SIGFOX 

SigFox has what is arguably the most traction in the LPWAN space, thanks to its successful marketing campaigns in Europe. It also has strong support from vendors including Texas Instruments, Silicon Labs, and Axom.

It’s a relatively simple ultra-narrowband (100 Hz) technology that sends very small amounts of data (12-byte payloads) very slowly (300 bps). It’s therefore perfect for applications where systems need to send small, infrequent bursts of data. Its lack of downlink capabilities, however, could make it unsuitable for applications that require two-way communication.
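Those figures give a feel for the scale involved. As a rough sketch, assuming the 12-byte payload and 300 bps rate quoted above and ignoring SigFox’s framing overhead (which adds to real on-air time):

```python
PAYLOAD_BYTES = 12   # maximum payload size quoted above
DATA_RATE_BPS = 300  # uplink data rate quoted above

def payload_airtime_s(payload_bytes: int = PAYLOAD_BYTES,
                      rate_bps: int = DATA_RATE_BPS) -> float:
    """Seconds needed just to clock the payload bits out at the given rate."""
    return payload_bytes * 8 / rate_bps

print(f"{payload_airtime_s():.2f} s per maximum-size payload")
```

Even at 300 bps, a maximum-size payload takes well under a second of payload time, which is why the technology suits small, infrequent bursts rather than streaming.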

LORA 

LoRaWAN is a standard governed by the LoRa Alliance. It’s not open because the underlying chipset is only available through Semtech – though this should change in future.

Its functionality is similar to SigFox’s: it’s primarily intended for uplink-only applications with multiple nodes, although downlink messages are possible. Unlike SigFox, however, LoRa uses multiple frequency channels and data rates with coded messages, which are less likely to interfere with one another, increasing the concentrator capacity.

RPMA 

Ingenu Technology Solutions has developed a proprietary technology called Random Phase Multiple Access (RPMA) in the 2.4 GHz band. Due to its architecture, it’s said to have superior uplink and downlink capacity compared to the other LPWAN options.

It’s also claimed to have better Doppler, scheduling and interference characteristics, as well as a better link budget: 177 dB, compared to LoRa’s 157 dB and SigFox’s 149 dB. Plus, it operates in the 2.4 GHz spectrum, which is globally available for Wi-Fi and Bluetooth, so no regional architecture changes are needed – unlike with SigFox and LoRa.
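The quoted link budgets can be put in perspective with a back-of-the-envelope calculation. In free space, path loss grows with 20·log10(distance), so every extra 6 dB of link budget roughly doubles range. The sketch below assumes free-space propagation at a common frequency – a simplification, since the three systems operate in different bands and real-world propagation is far harsher:

```python
# Link budgets quoted above, in dB
BUDGETS_DB = {"RPMA": 177, "LoRa": 157, "SigFox": 149}

def range_ratio(delta_db: float) -> float:
    """Free-space range multiplier implied by a link-budget difference in dB."""
    return 10 ** (delta_db / 20)

baseline = BUDGETS_DB["SigFox"]
for name, budget in BUDGETS_DB.items():
    print(f"{name}: ~{range_ratio(budget - baseline):.1f}x the SigFox free-space range")
```

Under those idealised assumptions, a 28 dB advantage implies roughly 25 times the reach; in practice the difference shows up as deeper building penetration and margin against interference rather than literal distance.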

LTE-M 

LTE-M (LTE Cat-M1) is a cellular technology that has gained traction in the United States and is specifically designed for IoT or machine‑to‑machine (M2M) communications.

It’s a low‑power wide‑area (LPWA) interface that connects IoT and M2M devices with medium data rate requirements (375 kb/s upload and download speeds in half duplex mode). It also enables longer battery life and greater in‑building range than standard cellular technologies like 2G, 3G or LTE Cat 1.

Key features include:

·       Voice functionality via VoLTE
·       Full mobility and in‑vehicle hand‑over
·       Low power consumption
·       Extended in‑building range

NB-IOT 

Narrowband IoT (NB‑IoT or LTE Cat NB1) is part of the same 3GPP Release 13 standard that defined LTE Cat M1 – both are LPWAN technologies that operate in licensed spectrum and work virtually anywhere. NB-IoT connects devices simply and efficiently on already established mobile networks, and handles small amounts of infrequent two‑way data securely and reliably.

NB‑IoT is well suited to applications like gas and water meters, which send regular, small data transmissions. Network coverage is a key issue in smart metering rollouts: meters tend to be in difficult locations like cellars, deep underground or in remote areas. NB‑IoT has excellent coverage and penetration to address this.
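To see why "small amounts of infrequent data" fits metering so well, consider a back-of-the-envelope data budget. The reading size and reporting frequency below are hypothetical illustrations, not NB-IoT specifications:

```python
READING_BYTES = 50     # assumed size of one meter reading plus headers
READINGS_PER_DAY = 24  # assumed hourly reporting

def monthly_uplink_bytes(reading_bytes: int = READING_BYTES,
                         per_day: int = READINGS_PER_DAY,
                         days: int = 30) -> int:
    """Total uplink volume for a month of regular, small transmissions."""
    return reading_bytes * per_day * days

print(f"{monthly_uplink_bytes() / 1024:.1f} KiB per meter per month")
```

Even hourly reporting keeps a meter in the tens of kilobytes per month, comfortably within what a narrowband link can carry.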

MY FORECAST

The LPWAN technology stack is fluid, so I foresee it evolving more over the coming years. During this time, I suspect that we’ll see:

1.     Different markets adopting different technologies based on factors like dominant technology players and local regulations

2.     The technologies diverging for a period and then converging with a few key players, which I think will be SigFox, LoRa, and the two LTE-based technologies

3.     A significant technological shift in 3-5 years, which will disrupt this space again

So, which horse should you back?

I don’t believe it’s prudent to pick a single technology now; lock-in could cause serious restrictions in the long term. A modular, agile approach to implementing the correct communications mechanism for your requirements carries less risk.

The commercial model is also hugely important. The cellular and telecommunications companies will understandably want to maximise their returns, and you’ll want to position yourself to secure an equitable share of the revenue.

So: do your homework. And good luck!

MS Office hack attacks up 4X

Exploits – software that takes advantage of a bug or vulnerability – for Microsoft Office topped the list of in-the-wild cyber headaches in Q1 2018. Overall, the number of users attacked with malicious Office documents rose more than four times compared with Q1 2017. In just three months, Office’s share of exploits used in attacks grew to almost 50% – double its average share across 2017. These are the main findings from Kaspersky Lab’s Q1 IT threat evolution report.

Attacks based on exploits are considered very powerful, as they require no additional interaction with the user and can deliver their dangerous code discreetly. They are therefore widely used, both by cybercriminals looking for profit and by more sophisticated nation-backed actors for their malicious purposes.

The first quarter of 2018 experienced a massive inflow of these exploits, targeting popular Microsoft Office software. According to Kaspersky Lab experts, this is likely to be the peak of a longer trend, as at least ten in-the-wild exploits for Microsoft Office software were identified in 2017-2018 – compared to two zero-day exploits for Adobe Flash player used in-the-wild during the same time period.

The share of the latter in the distribution of exploits used in attacks is decreasing as expected (accounting for slightly less than 3% in the first quarter) – Adobe and Microsoft have put a lot of effort into making it difficult to exploit Flash Player.

After cybercriminals find out about a vulnerability, they prepare a ready-to-go exploit. They then frequently use spear-phishing as the infection vector, compromising users and companies through emails with malicious attachments. Worse still, such spear-phishing attack vectors are usually discreet and very actively used in sophisticated targeted attacks – there were many examples of this in the last six months alone.

For instance, in late 2017, Kaspersky Lab’s advanced exploit prevention systems identified a new Adobe Flash zero-day exploit used in-the-wild against our customers. The exploit was delivered through a Microsoft Office document and the final payload was the latest version of FinSpy malware. Analysis of the payload enabled researchers to confidently link this attack to a sophisticated actor known as ‘BlackOasis’. The same month, Kaspersky Lab’s experts published a detailed analysis of CVE-2017-11826, a critical zero-day vulnerability used to launch targeted attacks in all versions of Microsoft Office. The exploit for this vulnerability is an RTF document containing a DOCX document that exploits CVE-2017-11826 in the Office Open XML parser. Finally, just a couple of days ago, information on Internet Explorer zero-day CVE-2018-8174 was published. This vulnerability was also used in targeted attacks.

“The threat landscape in the first quarter again shows us that a lack of attention to patch management is one of the most significant cyber-dangers. While vendors usually issue patches for the vulnerabilities, users often can’t update their products in time, which results in waves of discreet and highly effective attacks once the vulnerabilities have been exposed to the broad cybercriminal community,” notes Alexander Liskin, security expert at Kaspersky Lab.

Other online threat statistics from the Q1 2018 report include:

  • Kaspersky Lab solutions detected and repelled 796,806,112 malicious attacks from online resources located in 194 countries around the world.
  • 282,807,433 unique URLs were recognised as malicious by web antivirus components.
  • Attempted infections by malware that aims to steal money via online access to bank accounts were registered on 204,448 user computers.
  • Kaspersky Lab’s file antivirus detected a total of 187,597,494 unique malicious and potentially unwanted objects.
  • Kaspersky Lab mobile security products also detected:
    • 1,322,578 malicious installation packages.
    • 18,912 mobile banking Trojans (installation packages).

To reduce the risk of infection, users are advised to:

  • Keep the software installed on your PC up to date, and enable the auto-update feature if it is available.
  • Wherever possible, choose a software vendor that demonstrates a responsible approach to a vulnerability problem. Check if the software vendor has its own bug bounty program.

  • Use robust security solutions, which have special features to protect against exploits, such as Automatic Exploit Prevention.
  • Regularly run a system scan to check for possible infections and make sure you keep all software up to date.

  • Businesses should use a security solution that provides vulnerability and patch management, as well as exploit prevention components, such as Kaspersky Endpoint Security for Business. The patch management feature automatically eliminates vulnerabilities and proactively patches them. The exploit prevention component monitors suspicious actions by applications and blocks malicious file executions.
Copyright © 2018 World Wide Worx