Hackers are evading traditional detection tools with a technique called application programming interface (API) hooking. LUKE JENNINGS, Chief Research Officer for Countercept at MWR InfoSecurity, takes a look at what API hooking is and how it can be thwarted.
Traditional malware detection and forensic investigation techniques typically focus on detecting malicious native executables on disk, and performing disk forensics to uncover evidence of historical actions on a system. In response, many threat actors have shifted their offensive techniques to avoid writing to disk, staying resident only in memory. Consequently, the ability to effectively analyse live memory for evidence of compromise and to gather additional forensic evidence has become increasingly important.
Application programming interface (API) hooking is one of the memory-resident techniques cyber criminals are increasingly using. It involves intercepting function calls in order to monitor and/or change the information passing between a program and the functions it calls. There are many reasons, both legitimate and malicious, why this might be desirable. In the case of malware, API hooking is commonly considered to be ‘rootkit’ functionality and is mostly used to hide evidence of the malware’s presence on the system from other processes, and to spy on sensitive data.
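The core idea can be illustrated with a short sketch. Real-world API hooking on Windows typically patches a function’s machine code or its import address table; the Python monkey-patch below is only a conceptual stand-in (the hooked function and the “secret” string are illustrative), showing how an installed wrapper sees every call before passing it through to the real function:

```python
# Conceptual sketch of a hook: a wrapper is installed in place of the real
# function, so every call passes through the interceptor's code first.
import builtins

_original_print = builtins.print   # keep a reference to the real function
intercepted = []                   # the "spy" log

def hooked_print(*args, **kwargs):
    # Interception step: the hook sees every argument before the real call.
    intercepted.append(" ".join(str(a) for a in args))
    # Pass through, so the caller notices nothing unusual.
    return _original_print(*args, **kwargs)

builtins.print = hooked_print                # install the hook
print("secret password: hunter2")            # caller is unaware of the spy
builtins.print = _original_print             # remove the hook

assert intercepted == ["secret password: hunter2"]
```

The same control flow – intercept, record or modify, then forward to the genuine function – is what native hooking achieves at the machine-code level.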
How are the cyber criminals using API hooking?
There are two common use cases for the malicious use of API hooking. Firstly, it can be used to spy on sensitive information: for example, intercepting communications with the keyboard to log keystrokes, including passwords typed by a user, or capturing network communications before they are transmitted. This includes the ability to intercept data that will be encrypted using protocols such as Transport Layer Security (TLS), before the point at which it is protected, in order to capture passwords and other sensitive data.
Secondly, they modify the results returned from certain API calls in order to hide the presence of their malware. This commonly involves hooking file-system or registry related API calls and removing the entries used by the malware, hiding its presence from other processes. Not only can cyber criminals implement API hooking in a number of ways, the technique can also be deployed across a wide range of processes on a targeted system.
Tackling malicious API hooking
One way cyber security teams can detect the hidden traces of API hooking and other similar techniques is through memory analysis frameworks such as Volatility. Volatility is an open-source framework and the de facto standard toolset for performing memory analysis against raw system memory images, useful in forensic investigations and malware analysis. The framework is particularly valuable when performing an in-depth investigation of systems on which compromises have already been detected.
While memory analysis can be an incredibly powerful and useful technique, it does not come without its challenges. One hurdle to consider when deploying memory analysis is how labour-intensive it is. Memory analysis is a highly skilled and time-intensive technique typically performed on one image at a time. This can be very effective when performing a dedicated investigation of a serious compromise, where the systems involved are known and relatively small in number. However, the challenge arises when trying to use memory analysis at scale to detect compromises on a large enterprise network in the absence of any other evidence.
Another obstacle to be aware of when implementing memory analysis is legitimate ‘bad’ behaviour. There are plenty of examples of hooking techniques being used by malware for malicious purposes. Nevertheless, there are also many cases of these techniques being used for legitimate, above-board purposes. In particular, technologies such as data loss prevention and antivirus often target the same functions for hooking as malware does. Without the techniques and experience to quickly separate legitimate injection and hooking from malicious behaviour, a great deal of time can be wasted.
Successful attack detection and response
As a first step in dealing with techniques like this, organisations need the capability in place to easily retrieve system memory images from suspect machines to allow rapid response and aid forensic investigation. However, this can generally only be used in a reactive manner.
To perform effective attack detection and response at scale with regard to these techniques, organisations need the ability to conduct memory analysis proactively across an enterprise network. This is where toolsets that continuously conduct live memory analysis and report on suspicious findings come in, enabling the proactive discovery of unknown memory-resident malware without any prior knowledge or signatures.
Good Endpoint Detection and Response (EDR) software offering live memory analysis capabilities at scale is required to proactively detect the direct use of techniques such as API hooking. Additionally, when gathering results at scale, approaches such as anomaly detection can help greatly by drawing a dividing line between API hooking that is common across the network, probably due to security software in use, and anomalous API hooking that is present in only a few isolated cases. Traditional memory forensics using a tool such as Volatility can then be used to investigate, in detail, systems exhibiting suspicious behaviour.
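The anomaly-detection idea described above can be sketched simply: hooks reported by many hosts are probably legitimate security software, while hooks seen on only a few hosts deserve a closer look. The hook records and threshold below are hypothetical examples of fleet-wide EDR output, not output from any particular product:

```python
# Sketch: separate fleet-wide (probably benign) hooks from rare
# (suspicious) ones by counting how many distinct hosts report each hook.

def rare_hooks(reports, max_hosts=2):
    """reports: list of (hostname, hooked_function) pairs gathered fleet-wide.
    Returns hooked functions seen on at most `max_hosts` machines."""
    hosts_per_hook = {}
    for host, hook in reports:
        hosts_per_hook.setdefault(hook, set()).add(host)
    return sorted(h for h, hosts in hosts_per_hook.items()
                  if len(hosts) <= max_hosts)

reports = [
    ("pc01", "ws2_32!send"), ("pc02", "ws2_32!send"),   # AV hooks this
    ("pc03", "ws2_32!send"), ("pc04", "ws2_32!send"),   # on every host
    ("pc07", "user32!GetAsyncKeyState"),                # one isolated host
]
print(rare_hooks(reports))   # → ['user32!GetAsyncKeyState']
```

A hook on the keyboard-state API appearing on a single machine, while a network-send hook appears everywhere, is exactly the dividing line that tells an analyst where to point Volatility next.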
Many malware families have moved to using techniques such as API hooking in a stealthy attempt to avoid traditional security solutions and achieve certain end goals, such as spying on passwords. The 2015 Verizon Data Breach Investigations Report found that “malware is part of the event chain in virtually every security incident”. It also reported that “70-90% of malware samples are unique to an organisation” and that “organisations would need access to all threat intelligence indicators in order for the information to be helpful.” Given these findings, having an effective technique for discovering previously unseen malware on your network is clearly extremely important.
Overall, memory analysis can uncover some, though not all, of the stealth techniques used by modern malware families. It is nonetheless an important capability to have in order to detect compromises by modern memory-resident malware.
VoD cuts the cord in SA
Some 20% of South Africans who sign up for a subscription video on demand (SVOD) service such as Netflix or Showmax do so with the intention of cancelling their pay television subscription.
That’s according to GfK’s international ViewScape survey, which this year covers Africa (South Africa, Kenya and Nigeria) for the first time.
The study—which surveyed 1,250 people representative of urban South African adults with Internet access—shows that 90% of the country’s online adults today use at least one online video service and that just over half are paying to view digital online content. The average user spends around seven hours and two minutes a day consuming video content, with broadcast television accounting for just 42% of the time South Africans spend in front of a screen.
Consumers in South Africa spend nearly as much of their daily viewing time – 39% of the total – watching free digital video sources such as YouTube and Facebook as they do on linear television. People aged 18 to 24 spend more than eight hours a day watching video content, as they tend to spend more time with free digital video than older viewers.
Says Benjamin Ballensiefen, managing director for Sub Sahara Africa at GfK: “The media industry is experiencing a revolution as digital platforms transform viewers’ video consumption behaviour. The GfK ViewScape study is one of the first to not only examine broadcast television consumption in Kenya, Nigeria and South Africa, but also to quantify how linear and online forms of content distribution fit together in the dynamic world of video consumption.”
The study finds that just over a third of South African adults use subscription video on demand (SVOD) services, with only 16% of SVOD users subscribing to multiple services. Around 23% use pay-per-view platforms such as DSTV Box Office, while about 10% download pirated content from the Internet. Around 82% still sometimes watch content on disc-based media.
“Linear and non-linear television both play significant roles in South Africa’s video landscape, though disruption from digital players poses a growing threat to the incumbents,” says Molemo Moahloli, general manager for media research & regional business development at GfK Sub Sahara Africa. “Among most demographics, usage of paid online content is incremental to consumption of linear television, but there are signs that younger consumers are beginning to substitute SVOD for pay-television subscriptions.”
New data rules raise business trust challenges
When the General Data Protection Regulation comes into effect on May 25th, financial services firms will face a new potential threat to their ongoing challenges with building strong customer relationships, writes DARREL ORSMOND, Financial Services Industry Head at SAP Africa.
The regulation – dubbed GDPR for short – is aimed at giving European citizens control back over their personal data. Any firm that creates, stores, manages or transfers personal information of an EU citizen can be held liable under the new regulation. Non-compliance is not an option: the fines are steep, with a maximum penalty of €20-million – or nearly R300-million – for transgressors.
GDPR marks a step toward stronger individual rights over large corporates and states, preventing the latter from using and abusing personal information at their discretion. Considering the prevailing trust deficit – one global EY survey found that 60% of global consumers worry about hacking of bank accounts or bank cards, and 58% worry about the amount of personal and private data organisations hold about them – the new regulation comes at an opportune time. But it is almost certain to disrupt normal business practices when implemented, and therein lies both a threat and an opportunity.
The fundamentals of trust
GDPR is set to alter two fundamental factors underpinning the implicit trust between financial services providers and their customers. Firstly, customers will suddenly be challenged to validate that what they thought companies were already doing – storing and managing their personal data in a manner that is respectful of their privacy – is actually happening. Secondly, the stream of stories about companies mistreating customer data or exposing customers through security breaches will increase the chances that customers now seek tangible reassurance from their providers that their data is stored correctly.
The recent news of Facebook’s indiscriminate sharing of 50 million members’ personal data with an outside firm has not only led to public outcry but could cost the company up to $2-trillion in fines should the Federal Trade Commission choose to pursue the matter to its fullest extent. The matter of trust also extends beyond personal data: in EY’s 2016 Global Consumer Banking Survey, less than a third of respondents had complete trust that their banks were being transparent about fees and charges.
This is forcing companies to reconsider their role in building and maintaining trust with their customers. In any customer relationship, much is done on the basis of implicit trust. A personal banking customer will enjoy a measure of familiarity that often provides them with some latitude – for example when applying for access to a new service or an overdraft facility – that can save them a lot of time and energy. Under GDPR and South Africa’s POPI Act, this process is drastically complicated: banks may now be obliged to obtain permission to share customer data between different business units (for example because they are part of different legal entities and have not expressly received permission). A customer may now allow a bank to use their personal data in risk-scoring models, but prevent it from determining whether they qualify for private banking services.
What used to happen naturally within standard banking processes may be suddenly constrained by regulation, directly affecting the bank’s relationship with its customers, as well as its ability to upsell to existing customers.
The risk of compliance
Are we moving to an overly bureaucratic world where even the simplest action is subject to a string of onerous processes? Compliance officers are already embedded within every function in a typical financial services institution, as well as at management level. Often the reporting of risk processes sits outside formal line functions and ends up going straight to the board. This can have a stifling effect on innovation, with potentially negative consequences for customer service.
A typical banking environment is already creaking under the weight of close to 100 acts, which makes it difficult to take the calculated risks needed to develop and launch innovative new banking products. Entire new industries could now emerge, focusing purely on the matter of compliance and associated litigation. GDPR already requires the services of Data Protection Officers, but the growing complexity of regulatory compliance could add a swathe of new job functions and disciplines. None of this points to the type of innovation that the modern titans of business are renowned for.
A three-step plan of action
So how must banks and other financial services firms respond? I would argue there are three main elements to successfully navigating the immediate impact of the new regulations:
Firstly, ensuring that the technologies used to secure, manage and store personal data are sufficiently robust. Modern financial services providers have a wealth of customer data at their disposal, including unstructured data from non-traditional sources such as social media. The tools they use to process and safeguard this data need to be able to withstand the threats posed by potential data breaches and malicious attacks.
Secondly, rethinking the core organisational processes governing interactions with customers. This includes the internal measures for setting terms and conditions, how customers are informed of a firm’s intention to use their data, and how risk is assessed. A customer applying for medical insurance will disclose deeply personal information to the insurance provider: it is imperative the insurer provides reassurance that the customer’s data will be treated with discretion and used only with their express permission.
Thirdly, financial services firms need to define a core set of principles for how they treat customers and what constitutes fair treatment. This should be an extension of a broader organisational focus on treating customers fairly, and can go some way to repairing the trust deficit between the financial services industry and the customers they serve.