Gadget

Claude AI popularity inspires hackers

Anthropic’s chatbot Claude and its associated tools like Claude Code, along with other newly popular AI tools like OpenClaw, have seen rapidly growing demand – and inspired cybercriminals.

Kaspersky Threat Research this month (March 2026) identified a new malicious campaign targeting developers looking for installation instructions for Claude Code, a development agent created by Anthropic.

When users search for “Claude Code download”, sponsored advertisements appear at the top of the search results. One of these ads redirects users to a malicious webpage that closely imitates the official installation documentation for Claude Code.

As a result, users are tricked into installing malware that harvests sensitive information, including credentials, crypto wallet data, browser sessions, and other confidential files. Similar malicious campaigns mimic other popular AI tools, including OpenClaw.

A fraudulent ad example. Photo supplied.

The fake documentation page is visually identical to the legitimate one and is hosted on the website-building and hosting platform Squarespace. Because the page precisely copies the original instructions, users may not notice the difference when copying and executing installation commands.
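One simple habit that can catch this kind of lure is checking the page’s hostname against the vendor’s official documentation domain before copying any install command. The sketch below illustrates the idea; the allowlist entries and URLs are illustrative examples, not an exhaustive list of legitimate domains.

```python
from urllib.parse import urlparse

# Illustrative allowlist -- a real check should use the vendor's
# officially published documentation domains.
OFFICIAL_DOC_HOSTS = {"docs.anthropic.com"}

def is_official_docs(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches an allowlisted domain."""
    host = urlparse(url).hostname or ""
    return host.lower() in OFFICIAL_DOC_HOSTS

# The genuine docs pass; a look-alike page hosted elsewhere does not,
# no matter how closely its content copies the original instructions.
print(is_official_docs("https://docs.anthropic.com/en/docs/claude-code"))
print(is_official_docs("https://claude-code-setup.squarespace.com/install"))
```

An exact-match check like this matters because the fraudulent page copies the legitimate content pixel for pixel; the hostname is often the only signal that distinguishes it.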

A fraudulent Claude page. Photo supplied.

However, instead of installing the developer tool, the commands deliver malware to the victim’s system, deploying different infostealers depending on the victim’s operating system.

Kaspersky researchers also identified similar malicious campaigns targeting other popular AI tools, including OpenClaw and Doubao. Using the same approach, attackers registered multiple domains and distributed files containing the Amatera infostealer while disguising them as legitimate downloads for these tools.

“The campaign poses significant risks because AI development tools such as Claude Code and OpenClaw are widely used not only by hobbyists and automation enthusiasts but also by professional developers working in large organisations,” says Vladimir Gursky, cybersecurity expert at Kaspersky. “If infected, victims may unknowingly expose source code from active projects, confidential corporate data, authentication credentials, and private accounts. This makes such campaigns particularly dangerous for businesses whose developers rely on AI-assisted coding tools.”

In December 2025, Kaspersky detected attackers spreading a macOS infostealer using Google Ads. A specially generated chat interface designed to resemble a ChatGPT tutorial pretended to guide users through installing the Atlas Browser. The malicious instructions appeared to be hosted on a legitimate site associated with OpenAI, helping attackers gain users’ trust.

To stay protected, Kaspersky recommends: 
