How hackers are using stealth to evade detection

Hackers are evading traditional detection applications with an approach called application programming interface (API) hooking. LUKE JENNINGS, Chief Research Officer for Countercept at MWR InfoSecurity, takes a look at what API hooking is and how it can be thwarted.

Traditional malware detection and forensic investigation techniques typically focus on detecting malicious native executables on disk, and performing disk forensics to uncover evidence of historical actions on a system. In response, many threat actors have shifted their offensive techniques to avoid writing to disk, staying resident only in memory. Consequently, the ability to effectively analyse live memory for evidence of compromise and to gather additional forensic evidence has become increasingly important.

Application programming interface (API) hooking is one of the memory-resident techniques cyber criminals are increasingly using. The process involves intercepting function calls in order to monitor and/or change the information passing between the caller and the function being called. There are many reasons, both legitimate and malicious, why doing this might be desirable. In the case of malware, API hooking is commonly considered ‘rootkit’ functionality and is mostly used to hide evidence of the malware’s presence on the system from other processes, and to spy on sensitive data.
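To make the interception idea concrete, the sketch below shows the same pattern in Python: a function is replaced by a wrapper that observes its arguments before forwarding the call to the original. This is only an analogue for illustration; real API hooking patches native functions in a process’s memory (for example inside loaded Windows DLLs) rather than Python objects, and the function and callback names here are made up.

```python
# Illustration of the hooking pattern: wrap a function so every call can be
# observed (or altered) before the original implementation runs.
import functools
import os

def install_hook(module, name, on_call):
    """Replace module.name with a wrapper that reports each call."""
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        on_call(name, args, kwargs)        # the "spy" sees every argument
        return original(*args, **kwargs)   # then forwards to the real function

    setattr(module, name, wrapper)
    return original                        # keep a handle to restore later

# Example: observe every call to os.rename made by this process.
install_hook(os, "rename", lambda fn, args, kwargs: print(f"[hook] {fn}{args}"))
```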

How are the cyber criminals using API hooking?

There are two common malicious use cases for API hooking. The first is spying on sensitive information: intercepting data such as keyboard input in order to log keystrokes, including passwords typed by a user, or capturing sensitive network communications before they are transmitted. This includes the ability to intercept data that will be encrypted using protocols such as Transport Layer Security (TLS) before the point at which it is protected, capturing passwords and other sensitive data before transmission.

The second is modifying the results returned from certain API calls in order to hide the malware’s presence. This commonly involves hooking file-system or registry related API calls and stripping out the entries the malware uses, so that its presence is hidden from other processes. Not only can cyber criminals implement API hooking in a number of ways, the technique can also be deployed across a wide range of processes on a targeted system.
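As a hypothetical illustration of this second use case, the sketch below filters a directory listing so that one chosen entry never reaches the caller; rootkits apply the same filtering inside hooked native file-system and registry APIs. The hidden filename is invented for the example.

```python
# Hypothetical illustration of result filtering: a wrapped directory-listing
# function silently drops one entry, so callers never see it even though the
# file is still on disk.
import os

HIDDEN_NAME = "implant.dll"            # made-up name for illustration only
_original_listdir = os.listdir

def filtered_listdir(path="."):
    entries = _original_listdir(path)
    return [e for e in entries if e != HIDDEN_NAME]   # remove the hidden entry

os.listdir = filtered_listdir

# Any code in this process that now calls os.listdir() will not see
# "implant.dll" in its results.
```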

Tackling malicious API hooking

One way cyber security teams can detect the hidden traces of API hooking and other similar techniques is through memory analysis frameworks such as Volatility. Volatility is an open-source framework and the de facto standard toolset for performing memory analysis against raw system memory images, useful in forensic investigations and malware analysis. The framework is very valuable when performing an in-depth investigation of systems on which compromises have already been detected.

While memory analysis can be an incredibly powerful and useful technique, it does not come without its challenges. One hurdle to consider when deploying memory analysis is how labour-intensive it is. Memory analysis is a highly skilled and time-intensive technique typically performed on one image at a time. This can be very effective when performing a dedicated investigation of a serious compromise, where the systems involved are known and relatively small in number. However, the challenge arises when trying to use memory analysis at scale to detect compromises on a large enterprise network in the absence of any other evidence.

Another obstacle to be aware of when implementing memory analysis is legitimate ‘bad’ behaviour. There are plenty of examples of hooking techniques being used by malware for malicious purposes. Nevertheless, there are also many cases of these techniques being used for legitimate, above-board purposes. In particular, technologies such as data loss prevention and antivirus often target the same functions for hooking as malware does. Without the techniques and experience to quickly separate legitimate injection and hooking from malicious behaviour, a great deal of time can be wasted.

Successful attack detection and response 

As a first step in dealing with techniques like this, organisations need the capability in place to easily retrieve system memory images from suspect machines to allow rapid response and aid forensic investigation. However, this can generally only be used in a reactive manner.

To perform effective attack detection and response at scale against these techniques, organisations need the ability to conduct memory analysis proactively across an enterprise network. That means toolsets that continuously conduct live memory analysis and report on suspicious findings, enabling the proactive discovery of unknown memory-resident malware without any prior knowledge or signatures.

Good Endpoint Detection and Response (EDR) software that offers live memory analysis capabilities at scale is required to proactively detect the use of techniques such as API hooking. Additionally, when gathering results at scale, approaches such as anomaly detection can help greatly by drawing a dividing line between API hooking that is common across the network, probably due to security software in use, and anomalous API hooking that is present only in a few isolated cases. Traditional memory forensics using a tool such as Volatility can then be used to investigate, in detail, the systems exhibiting suspicious behaviour.
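As a rough sketch of what that anomaly detection might look like, the snippet below assumes each endpoint agent reports which functions it found hooked, counts how many hosts share each hook, and flags hooks seen on only a handful of machines for deeper investigation with a tool such as Volatility. The report format, field names and threshold are assumptions made for the example.

```python
# Sketch of prevalence-based anomaly detection over reported API hooks.
# Assumption: each endpoint agent submits a report of the form
#   {"host": "ws-01", "hooks": ["kernel32.dll!CreateFileW", ...]}
from collections import defaultdict

def rare_hooks(reports, max_hosts=3):
    """Return hooks seen on at most `max_hosts` hosts, with the hosts affected."""
    hosts_per_hook = defaultdict(set)
    for report in reports:
        for hook in report["hooks"]:
            hosts_per_hook[hook].add(report["host"])

    # Hooks present on most machines are probably security software;
    # hooks present on only a few machines deserve manual investigation.
    return {hook: sorted(hosts)
            for hook, hosts in hosts_per_hook.items()
            if len(hosts) <= max_hosts}

# Example with made-up data: the registry hook appears on one host only.
reports = [
    {"host": "ws-01", "hooks": ["kernel32.dll!CreateFileW"]},
    {"host": "ws-02", "hooks": ["kernel32.dll!CreateFileW"]},
    {"host": "ws-03", "hooks": ["kernel32.dll!CreateFileW",
                                "advapi32.dll!RegEnumValueW"]},
]
print(rare_hooks(reports))
```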

Conclusion

Many malware families have moved to using techniques such as API hooking in a stealthy attempt to avoid traditional security solutions and achieve certain end goals, such as spying on passwords. The 2015 Verizon Data Breach Investigations Report found that “malware is part of the event chain in virtually every security incident”. It also reported that “70-90% of malware samples are unique to an organisation” and that “organisations would need access to all threat intelligence indicators in order for the information to be helpful.” Given these findings, having an effective technique for discovering previously unseen malware on your network is clearly extremely important.

Overall, memory analysis can uncover some, though not all, of the stealth techniques used by modern malware families. It is, however, an important capability to have in order to detect compromises involving modern memory-resident malware.


IoT at tipping point

We have long been in the hype phase of IoT, but it is finally taking on a more concrete form illustrating its benefits to business and the public at large, says PAUL RUINAARD, Country Manager at Nutanix Sub-Saharan Africa.


People have become comfortable with talking to their smartphones and tasking these mini-computers to find the closest restaurants, schedule appointments, and even switch on their connected washing machines while they are stuck in traffic.

This is considerable progress from those expensive (and dated) robotic vacuum cleaners that drew some interest a few years ago. Yes, being able to automate cleaning the carpets held promise, but the reality failed to deliver on those expectations.

However, people’s growing comfort when it comes to talking to machines and letting them complete menial tasks is not what the long-anticipated Internet of Things (IoT) is about. It really entails taking connectedness a step further by getting machines to talk to one another in an increasingly digital world filled with smart cities, devices, and ways of doing things.

We have long been in the hype phase of IoT, but it is finally taking on a more concrete form illustrating its benefits to business and the public at large. The GSM Association predicts that Africa will account for nearly 60 percent of the anticipated 30 billion connected IoT devices by 2020.

Use cases across the continent hold much promise. In agriculture, for example, placing sensors in soil enables farmers to track acidity levels, temperature, and other variables to assist in improving crop yields. In some hotels, infrared sensors are being used to detect body heat so cleaning staff know when they can enter a room. In South Africa, connected cars (think telematics) are nothing new. Many local insurers use the data generated to reward good driving behaviour and penalise bad behaviour with higher premiums.

Data management

The proliferation of IoT also means huge opportunity for businesses. According to the IDC, the market opportunity for IoT in South Africa will grow to $1.7 billion by 2021. And with research from Statista showing that retail IoT spending in the country is expected to grow to $60 million by the end of this year (compared to the $41 million of 2016), there is significant potential for connected devices once organisations start to unlock the value of the data being generated.

But before we get a real sense of what our newly-connected world will look like and the full picture of the business opportunities IoT will create, we need to put the right resources in place to manage it. With IoT comes data, more than we can realistically imagine, and we are already creating more data than ever before.

Processing data is something usually left to ‘the IT person’. However, if business leaders want to join the IoT game, then it is something they must start thinking about. Sure, there are several ways to process data but they all link back to a data centre, that room or piece of equipment in the office, or the public data centre down the road. Most know it is there but little else, other than it has something to do with data and computers.

Data centres are the less interesting but essential tools behind all things technology. They run the show, and without them we would not be able to do something as simple as send an email, let alone create an intricate system of connected devices that constantly communicate with each other.

Traditionally, data centres have been large, expensive and clunky machines. But like everything in technology, they have been modernised over the years and have become smaller, more powerful, and more practical for the digital demands of today.

Computing on the edge

Imagine real-time face scanning being used at the Currie Cup final or the Chiefs and Pirates derby. Just imagine more than a thousand cameras in action, working in real time scanning tens of thousands of faces from different angles, creating data all along the way and integrating with other technology such as police radios and in-stadium services.

As South Africans, we know all too well that the bandwidth to process such a large amount of data through traditional networks is simply not good enough to work efficiently. And while it can be run through a large core or public data centre, the likelihood of one of those being close to the stadium is minimal. Delays, or ‘latency and lag time’, are not an option in this scenario; it must work in real time or not at all.

So, what can be done? The answer lies in edge computing. This is where computing is brought closer to the devices being used. The edge refers to the devices that communicate with each other: think of all those connected things the IoT has become known for, such as mobile devices, sensors, fitness trackers, laptops, and so on. Essentially, anything that is ‘remote’ and links to the Web or other devices falls under this umbrella. For the most part, edge computing refers to smaller data centres (those at the edge) that can process the data required for things like large-scale facial recognition.
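As a rough illustration of that division of labour, the sketch below shows a hypothetical edge node keeping raw sensor readings local and forwarding only a compact summary upstream; the class name, batch size and summary fields are assumptions made for the example.

```python
# Sketch of edge-side aggregation: raw readings stay near the devices, and
# only a small summary record is sent on to a central or cloud data centre.
from statistics import mean

class EdgeAggregator:
    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.readings = []

    def ingest(self, value):
        """Called for every raw sensor reading arriving at the edge node."""
        self.readings.append(value)
        if len(self.readings) >= self.batch_size:
            summary = {"count": len(self.readings),
                       "mean": mean(self.readings),
                       "max": max(self.readings)}
            self.readings.clear()
            return summary           # only this small record leaves the edge
        return None                  # raw data stays local to the edge node

# Example: 100 temperature readings produce a single summary record.
agg = EdgeAggregator()
summaries = [s for s in (agg.ingest(20 + i * 0.01) for i in range(100)) if s]
print(summaries)
```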

At some point in the future, there could be an edge data centre at Newlands or The Calabash that processes the data in real time. It would, of course, also be connected to other resources such as a public or private cloud environment, but the ‘heavy lifting’ is done where the action is taking place.

Unfortunately, there are not enough of these edge resources in place to match our grand IoT ambitions. Clearly, this must change if we are to continue much further down the IoT path.

Admittedly, edge computing is not the most exciting part of the IoT revolution, but it is perhaps the most necessary component of it if there is to be a revolution at all.


Don’t panic! Future of work is still human


The digital age, and the new technologies it’s brought with it – blockchain, artificial intelligence (AI), robotics, augmented reality and virtual reality – is seen by many as a threat to our way of life as we know it. What if my job gets automated? How will I stay relevant? How do we adapt to the need for new skills to manage customer expectations and the flood of data that’s washing over us?

The bad news is that the nature of work has already changed irrevocably. Everything that can be automated, will be. We already live in an age of “robot restaurants”, where you order on a touch screen, and machines cook and serve your food. Did you notice the difference? AmazonGo is providing shopping without checkout lines. In the US alone, there are an estimated 3.4 million drivers that could be replaced by self-driving vehicles in 10 years, including truck drivers, taxi drivers and bus drivers.

We’re not immune from this phenomenon in Africa. In fact, the World Economic Forum (WEF) predicts that 41% of all work activities in South Africa are susceptible to automation, compared to 44% in Ethiopia, 46% in Nigeria and 52% in Kenya. This doesn’t mean millions of jobs on the continent will be automated overnight, but it’s a clear indicator of the future direction we’re taking.

The good news is that we don’t need to panic. What’s important for us in South Africa, and the continent, is to realise that there is plenty of work that only humans can do. This is particularly relevant to the African context, as the working-age population rises to 600 million in 2030 from 370 million in 2010. We have a groundswell of young people who need jobs – and the digital age has the ability to provide them, if we start working now.

Make no mistake, there’s no doubt that this so-called “Fourth Industrial Revolution” is going to disrupt many occupations. This is perfectly natural: every Industrial Revolution has made some jobs redundant. At the same time, these Revolutions have created vast new opportunities that have taken us forward exponentially.

Between 2012 and 2017, for example, it’s estimated that the demand for data analysts globally grew by 372%, and the demand for data visualisation skills by more than 2000%. As businesses, this means we have to not only create new jobs in areas like data science and analytics, but reskill our existing workforces to deal with the digital revolution and its new demands.

So, while bus drivers and data clerks are looking over their shoulders nervously right now, we’re seeing a vast range of new jobs being created in fields such as STEM (Science, Technology, Engineering and Mathematics), data analysis, computer science and engineering.

This is a challenge for Sub-Saharan Africa, where our levels of STEM education are still not where they should be. That doesn’t mean there are no opportunities to be had. In the region, for example, we have a real opportunity to create a new generation of home-grown African digital creators, designers and makers, not just “digital deliverers”. People who understand African nuances and stories, and who not only speak local languages, but are fluent in digital.

This ability to bridge the digital and physical worlds, as it were, will be the new gold for Africa. We need more business operations data analysts, who combine deep knowledge of their industry with the latest analytical tools to adapt business strategies. There will also be more demand for user interface experts, who can facilitate seamless human-machine interaction.

Of course, in the longer term, we in Africa are going to have to make some fundamental decisions about how we educate people if we’re going to be a part of this brave new world. Governments, big business and civil society will all have roles to play in creating more future-ready education systems, including expanded access to early-childhood education, more skilled teachers, investments in digital fluency and ICT literacy skills, and providing robust technical and vocational education and training (TVET). This will take significant intent not only from a policy point of view, but also the financial means to fund this.

None of this will happen overnight. So what can we, as individuals and businesspeople, do in the meantime? A good start would be to realise that the old models of learning and work are broken. Jenny Dearborn, SAP’s Global Head of Learning, talks about how the old approach to learning and work was generally a three-stage life that consisted largely of learn-work-retire.

Today, we live in what Ms Dearborn calls the multi-stage life, which includes numerous phases of learn-work-change-learn-work. And where before, the learning was often by rote, because information was finite, learning now is all about critical thinking, complex problem-solving, creativity and innovation and even the ability to un-learn what you have learned before.

Helping instill this culture of lifelong learning, including the provision of adult training and upskilling infrastructure, is something that all companies can do, starting now. The research is clear: even if jobs are stable or growing, they are going through major changes to their skills profile. WEF’s Future of Jobs analysis found that, in South Africa alone, 39% of core skills required across all occupations will be different by 2020 compared to what was needed to perform those roles in 2015.

This is a huge wake-up call to companies to invest meaningfully in on-the-job training to keep their people – and themselves – relevant in this new digital age. There’s no doubt that more learning will need to take place in the workplace, and greater private sector involvement is needed. As employers, we should therefore start working closely with schools, universities and even non-formal education providers to offer learning opportunities to our workers.

We can also drive a far stronger focus on the so-called “soft skills”, a term that is often used slightly dismissively in the workplace. The core skills needed in today’s workplace are active listening, speaking, and critical thinking. A quick look at the WEF’s “21st Century Skills Required For The Future Of Work” chart bears this out: as much as we need literacy, numeracy and IT skills to make sense of the modern world of work, we also need innately human skills like communication and collaboration. The good news is that not only can these be taught, but they can be taught within the work environment.

It sounds almost counter-intuitive, but to be successful in the Digital Age, businesses are going to have to go back to what has always made them strong: their people. Everyone can buy AI, build data warehouses, and automate every process in sight. The companies that will stand out will be those that focus on the things that can’t be duplicated by AI or machine learning – uniquely human skills.

I have no doubt that the future will not be humans OR robots: it will be humans AND robots, working side by side. For us, as businesspeople and children of the African continent, we’re on the brink of a major opportunity. We just have to grasp it.
