
Monitoring is the heart of IoT


Connected devices are driving a sharp increase in the volume of data flowing into companies. But while companies are able to analyse that data, IT will struggle to maintain adequate application performance levels, writes WIMPIE VAN RENSBURG, Country Manager for Sub-Saharan Africa at Riverbed Technology.

We’re all becoming pretty familiar with the idea of the Internet of Things (IoT). Often, the first things that come to mind are a wearable fitness tracker or a smartphone app that can control a thermostat. Adoption is so widespread that Gartner predicts there will be more than 20 billion connected things in the world by 2020.

The IoT is not only having an effect on the consumer world; it is also driving rapid digital transformation in business. We’re already seeing the IoT power a wide range of applications across industries. For example, The William Tracey Group, one of the UK’s largest recycling management companies, uses the IoT to collect data from chipped wheelie bins, smart weighing arms on collection trucks and on-board computers. This data helps enterprises protect the environment while creating new business opportunities.

The growing business case for connected things means the volume of data flowing into companies is increasing. However, while companies can analyse that data to improve decision-making and efficiency, IT will struggle to maintain adequate application performance levels as enterprises bring more connected devices online.

Implementing application performance monitoring (APM) establishes the end-to-end visibility IT needs in order to immediately identify what’s causing an application to perform poorly, so that the issue can be fixed before it escalates.
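The logic behind such monitoring can be reduced to a very simple check. The sketch below (in Python; the URL and threshold are hypothetical examples, not any particular APM product’s API) times a request to an application endpoint and flags it when the response is slower than expected:

# A minimal sketch of the core APM idea: time each request to an
# application endpoint and alert when response times exceed a threshold.
# The URL and threshold here are hypothetical examples.
import time
import urllib.request

THRESHOLD_SECONDS = 2.0

def check_response_time(url: str) -> float:
    """Measure how long a single request to the application takes."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.monotonic() - start

elapsed = check_response_time("https://app.example.com/health")
if elapsed > THRESHOLD_SECONDS:
    print(f"ALERT: response took {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
else:
    print(f"OK: response took {elapsed:.2f}s")

Commercial APM suites run checks like this continuously across thousands of transactions and correlate the timings across every tier, but the underlying principle is the same.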

The challenges of IoT

There’s a lot that goes on behind the scenes in order to make the IoT come to life. While users may be launching a simple app on their smartphone, there are a number of factors that go into making that simple digital experience work.

Consider how a wearable fitness tracker works to appreciate the complexity behind the IoT. The user interface is simple, but the wristband is constantly sending and receiving information via Bluetooth to and from a smartphone, which uploads that information to a cloud-based application that analyses a range of metrics, including activity levels, nutrition, sleep quality and heart rate. The application then supplies that analysis to its dedicated smartphone app, and possibly also to other mobile and web-based applications.

Users expect all of this to occur in real-time. To meet these expectations, network communication and interdependent application processes running across a web of distributed environments need to perform to perfection. If just one piece of this chain fails, everything downstream fails with it.
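To illustrate that fragility, the toy sketch below (Python; the stage names are illustrative, not any vendor’s real architecture) models the tracker’s data path as a chain of dependent stages, where an exception at any stage means nothing downstream completes:

# A toy model of the data path described above: each hop depends on the
# previous one, so a failure anywhere breaks the end-to-end flow.
def bluetooth_sync(reading: dict) -> dict:
    """Wristband pushes a reading to the paired smartphone."""
    return {**reading, "synced": True}

def cloud_upload(reading: dict) -> dict:
    """Smartphone uploads the reading to the cloud service."""
    if not reading.get("synced"):
        raise RuntimeError("upload failed: no synced reading")
    return {**reading, "uploaded": True}

def analyse(reading: dict) -> dict:
    """Cloud service derives metrics such as activity and heart rate."""
    if not reading.get("uploaded"):
        raise RuntimeError("analysis failed: nothing uploaded")
    return {**reading, "analysed": True}

reading = {"heart_rate": 72}
for stage in (bluetooth_sync, cloud_upload, analyse):
    reading = stage(reading)  # if any stage raises, all later stages are lost
print("end-to-end result:", reading)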

The complexity is amplified further when we consider a company managing a fleet of delivery vehicles, such as UPS. UPS has installed a variety of connected devices in its vehicles to monitor mileage, optimum speed and overall engine health, all in real-time. This enables the company to ensure drivers are driving safely, automatically schedule maintenance, and provide immediate updates to customers. The operation becomes even more complex when scaled across an entire fleet of vehicles.

Achieving seamless app performance

Businesses now store information in the cloud as well as on local systems, creating what are known as hybrid environments, and they enable employees to access that data from an increasing number of connected devices, including smartphones, laptops and tablets. As a result, the number of things that can go wrong within applications, and within the network itself, increases.

Monitoring the performance of all the applications and systems that run across hybrid networks has become increasingly difficult, costly and time-consuming for IT. This is why many organisations are turning to technology for real-time visibility into the performance of massively distributed applications. By implementing specialised APM tools, companies can do the following (a simple sketch of the idea appears after the list):

1. Monitor distributed applications and the underlying networks: By achieving complete visibility over the organisation’s apps, IT can examine the type of information flowing through the network and map out how it is being collected and shared between devices, applications, cloud services and analytics systems. IT can then quickly identify any issues affecting the end-user experience.

2. Pinpoint the causes of bottlenecks or errors: IT can then identify the causes of information bottlenecks, determine which are affecting business-critical processes, and address those first.

3. Look for opportunities to improve performance: Because APM tools continuously monitor applications and information transactions, IT can amass a wealth of information that can be analysed for patterns, in order to identify minor bugs before they become severe, or to find opportunities for performance improvement.
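As a simple sketch of point 2 in particular, the Python below uses made-up per-tier timings to show how collected transaction data can point IT at the slowest link in a distributed application:

# A minimal sketch of bottleneck detection: average per-tier transaction
# timings and report the tier contributing most to end-to-end latency.
# The tier names and timings are invented for illustration.
from statistics import mean

samples = {
    "device -> gateway": [0.04, 0.05, 0.04],
    "gateway -> cloud app": [0.30, 0.28, 0.35],
    "cloud app -> database": [1.90, 2.10, 2.40],  # likely bottleneck
}

averages = {tier: mean(times) for tier, times in samples.items()}
bottleneck = max(averages, key=averages.get)
print(f"slowest tier: {bottleneck} ({averages[bottleneck]:.2f}s average)")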

What next?

Business-critical IoT applications now span physical, virtual and hybrid environments, and end-users’ expectations continue to grow. IDC predicts that within three years, 50 per cent of IT networks will transition from having excess capacity to handle additional IoT devices to being network-constrained, with nearly 10 per cent of sites overwhelmed.

With this in mind, it is now more important than ever to monitor the performance and availability of the business applications that employees and customers rely on. Companies need to pre-empt an inevitable rise in the flow of data and ensure they have adequate bandwidth to cope with the upsurge.

APM tools can provide the end-to-end visibility and diagnostics needed to identify issues with complex networks and distributed applications, and to take action before those issues escalate. The detailed analytics provided by APM also enable companies not only to take control of performance improvement, but to evaluate the business impact of every application in their network.


Bring your network with you

At last week’s Critical Communications World, Motorola unveiled the LXN 500 LTE Ultra Portable Network Infrastructure. It allows rescue personnel to set up dedicated LTE networks for communication in an emergency, writes SEAN BACHER.


In an emergency, communications are absolutely critical, but the availability of public phone networks is often limited by weather conditions or congestion.

Motorola realised this was a problem when trying to get rescue personnel to those in need, and so developed the LXN 500 LTE Ultra Portable Network Infrastructure. The product is the smallest and lightest fully powered broadband network infrastructure to date, and allows the first person on the scene to set up an LTE network in a matter of minutes, so that other rescue team members can communicate with each other.

“The LXN 500 weighs six kilograms and comes in a backpack with two batteries. It offers a range of 1km and allows up to 100 connections at the same time. However, in many situations the disaster area may span more than 1km which is why they can be connected to each other in a mesh formation,” says Tunde Williams, Head of Field and Solutions Marketing EMEA, Motorola Solutions.

The LXN 500 solution offers communication through two-way radios, and includes mapping, messaging, push-to-talk, video and imaging features onboard, thus eliminating the need for any additional hardware.

Data collected on the device can then be sent to a central control room, where an operator can deploy additional rescue personnel where needed. Once video is streamed into the control room, real-time analytics and augmented reality can be applied to it, helping predict where future problem points may arise. Video images and other multimedia can also be made available to rescuers on the ground.

“Although the LXN 500 was designed for seamless communications between on-the-ground rescue teams and their respective control rooms, it has made its way into police forces and into places where there is little or no cellular signal, such as oil rigs,” says Williams.

He gave a hostage scenario: “In the event of a hostage situation, it is important for the police to relay information in real-time to ensure no one is hurt. However, the perpetrators often use their mobile phones to try to foil any rescue attempts. Should the police have the correct partnerships in place, they are able to disable cellular towers in the vicinity, preventing any incoming or outgoing calls on a public network and allowing the police to get the job done quickly and more effectively.”

By disabling any public networks in the area, police are also able to prevent cellular-detonated bombs from going off while still staying in touch with each other, he says.

The LXN 500 supports a wide range of mission-critical use cases, and is sure to transform communications and improve safety for first responders and the people they are trying to protect.


Kaspersky moves to Switzerland

As part of its Global Transparency Initiative, Kaspersky Lab is adapting its infrastructure to move a number of core processes from Russia to Switzerland.


This includes customer data storage and processing for most regions, as well as software assembly, including threat detection updates. To ensure full transparency and integrity, Kaspersky Lab is arranging for this activity to be supervised by an independent third party, also based in Switzerland.

Global transparency and collaboration for an ultra-connected world

The Global Transparency Initiative, announced in October 2017, reflects Kaspersky Lab’s ongoing commitment to assuring the integrity and trustworthiness of its products. The new measures are the next steps in the development of the initiative, but they also reflect the company’s commitment to working with others to address the growing challenges of industry fragmentation and a breakdown of trust. Trust is essential in cybersecurity, and Kaspersky Lab understands that trust is not a given; it must be repeatedly earned through transparency and accountability.

The new measures comprise the move of data storage and processing for a number of regions, the relocation of software assembly and the opening of the first Transparency Center.

Relocation of customer data storage and processing

By the end of 2019, Kaspersky Lab will have established a data center in Zurich, where it will store and process all information for users in Europe, North America, Singapore, Australia, Japan and South Korea, with more countries to follow. This information is shared voluntarily by users with the Kaspersky Security Network (KSN), an advanced, cloud-based system that automatically processes cyberthreat-related data.

Relocation of software assembly

Kaspersky Lab will relocate to Zurich its ‘software build conveyer’, a set of programming tools used to assemble ready-to-use software out of source code. Before the end of 2018, Kaspersky Lab products and threat detection rule databases (AV databases) will start to be assembled and signed with a digital signature in Switzerland before being distributed to the endpoints of customers worldwide. The relocation will ensure that all newly assembled software can be verified by an independent organisation, and will show that software builds and updates received by customers match the source code provided for audit.
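For illustration, verifying a signed build boils down to checking the downloaded file against a detached signature using a trusted public key. The Python sketch below (using the third-party cryptography library; the file names and key are hypothetical, and this is not Kaspersky’s actual tooling) shows the principle:

# A minimal sketch of customer-side verification: confirm that a downloaded
# build matches its detached digital signature under a trusted public key.
# Paths and key material are hypothetical examples.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_build(build_path: str, signature_path: str, key_path: str) -> bool:
    """Return True if the build file matches its detached RSA signature."""
    with open(key_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(build_path, "rb") as f:
        build_bytes = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, build_bytes,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(verify_build("product.exe", "product.exe.sig", "vendor_pub.pem"))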

Establishment of the first Transparency Center

The source code of Kaspersky Lab products and software updates will be available for review by responsible stakeholders in a dedicated Transparency Center that will also be hosted in Switzerland and is expected to open this year. This approach will further show that generation after generation of Kaspersky Lab products were built and used for one purpose only: protecting the company’s customers from cyberthreats.

Independent supervision and review

Kaspersky Lab is arranging for the data storage and processing, software assembly, and source code to be independently supervised by a third party qualified to conduct technical software reviews. Since transparency and trust are becoming universal requirements across the cybersecurity industry, Kaspersky Lab supports the creation of a new, non-profit organisation to take on this responsibility, not just for the company, but for other partners and members who wish to join.
