Vladislav Tushkanov addressing the Kaspersky cybersecurity conference in Almaty, Kazakhstan. (Photo by Arthur Goldstuck)


AI set to spark new cybersecurity arms race

Kaspersky’s lead data scientist spells out the impact of generative AI on scams and data breaches. ARTHUR GOLDSTUCK reports.

A new cybersecurity arms race has broken out as the world sees an explosion in the launch of generative artificial intelligence (AI) tools that can create original content from text prompts.

Readily available services like ChatGPT and Microsoft’s Bing Image Creator have enhanced the ability of scammers to create hoax news reports, images, sounds and videos. These allow cybercriminals to steal identities in order to commit financial fraud or create more convincing “phishing” campaigns aimed at getting users to click on malicious links that expose their own computers or their companies’ networks to malware, ransomware attacks, and data breaches.

At a conference hosted by global cybersecurity provider Kaspersky in Almaty, Kazakhstan, this week, the firm’s Global Research and Analysis Team revealed that South Africa had seen a 14% increase in users exposed to phishing attacks in the first three months of 2023, compared with the last quarter of 2022. Nigeria had seen a 17% increase and Egypt 53%.

While these attacks were not a result of generative AI, they indicated the potential escalation of cybercrime across the continent. Vladislav Tushkanov, lead data scientist at Kaspersky, told the conference that three separate categories of generative AI were already being used in scams:
• Diffusion networks, a type of AI that can “generate any kind of image from a text description”, by learning patterns from existing examples, and then using those patterns to generate similar material. The best-known examples are Midjourney and Dall-E. In recent weeks, Microsoft Bing Image Creator and Stable Diffusion have made the technology easier to use for the general public.
• Deepfakes, which “insert people’s faces into videos and animate still portraits”. DeepFaceLab is a leader in the field.
• Large Language Models, which “generate any kind of text and solve text-based problems”. ChatGPT is the best-known, while Bing Chat competes for attention and Google Bard is playing catch-up.

Said Tushkanov: “These technologies have a bright side but also a dark side. Each technology can bring value for business, but also introduce vulnerabilities and enable cybercriminals.”

He gave examples from well before the current generative AI explosion began: a British energy company breached using voice deepfakes to impersonate employees, and an official warning from the United States government that deepfakes were being used to apply for remote jobs.

More recently, in September 2022, deepfake videos were made of Elon Musk endorsing a cryptocurrency scam. It was so convincing that he had to announce on Twitter: “Defs not me”.

“It turned out, through our research, that deepfakes were being sold on the Darknet (a version of the Web accessible only through specialised browsers) for many use cases, from creating advertisements for crypto scams to harassment on social media,” said Tushkanov.

It is some consolation to potential victims that the technology is still rudimentary and fakes can be identified relatively simply.

“This technology is still very basic,” he told Business Times. “Right now, it only generates images over which you have almost no control. But they are getting better and better because we have much more computational resources, we have faster graphics processors and better hardware, so we can generate better images and videos. You will be able to generate a picture of any person in any time in any environment. It can be captivating, and it can draw attention, but then it can also be used by cyber criminals.”

Most significantly, the technologies are becoming simpler to use.

“A year ago, to create a nice picture wasn’t as simple. You couldn’t just log into a web interface and create. You had to have some basic computer skills. Now, as they become more accessible, more of these low level campaigns might employ them. It’s not there yet, but it’s coming.”

The warning echoed that of Microsoft chief economist Michael Schwarz, who told a World Economic Forum panel in Geneva on Wednesday that AI could help make humans more productive and revolutionise the way most businesses operate, but that “guardrails” had to be erected.

“I am confident AI will be used by bad actors, and yes it will cause real damage,” he said. “It can do a lot of damage in the hands of spammers with elections and so on.”

Craig Rosewarne, MD of South African cybersecurity consultancy Wolfpack Information Risk, confirmed that there was not yet any evidence of generative AI being used in attacks in South Africa.

“We’ve seen that criminals operating in South Africa are still not as tech-savvy,” he told Business Times. “Generally, we see it more coming from the Western African regions and some attacks coming in from Eastern Europe. We are starting to see more cybercrime-as-a-service being used, where you’ve got the whole underground economy operating together, where some people use a platform for launching ransomware or denial-of-service attacks that get rented out.

“But watch this space. We’re going to start seeing a lot more of it happening. We saw IBM this week announcing a layoff of about 30% of its non-client-facing workforce. So as companies are starting to adopt and use AI, obviously cybercriminals are going to start to use it more and more.

“ChatGPT still has safeguards built into it, so if you ask it to go find security vulnerabilities on the Wolfpack website, it will say that it cannot do it. But of course, there are ways of posing questions, for example, asking how would you go about doing it? If we compare it to the iPhone, it’s at the iPhone 1 stage. Of course, we’re now sitting on iPhone 15. With this, it’s going to just multiply dramatically, so big things are coming.”

That means there is still time for the business world to prepare.

Tushkanov said the answer, for now, was not technology, but awareness: “Understanding how AI changes the world and educating the public about AI is of utmost importance.”
