Gadget

Report reveals rise of AI-powered cybercrime

Cyber criminals are weaponising artificial intelligence (AI), says cyber security firm Check Point Software Technologies in its inaugural AI Security Report, released at the RSA Conference 2025 in San Francisco, California, this week.

As AI reshapes industries, says Check Point, it has also erased the lines between truth and deception in the digital world. Cyber criminals now wield generative AI and large language models (LLMs) to obliterate trust in digital identity. Today, what you see, hear, or read online can no longer be believed at face value. AI-powered impersonation bypasses even the most sophisticated identity verification systems, making anyone a potential victim of deception at scale.

“The swift adoption of AI by cyber criminals is already reshaping the threat landscape,” said Lotem Finkelstein, director of Check Point Research. “While some underground services have become more advanced, all signs point toward an imminent shift – the rise of digital twins. These aren’t just lookalikes or soundalikes, but AI-driven replicas capable of mimicking human thought and behaviour. It’s not a distant future – it’s just around the corner.”

Key Threat Insights from the AI Security Report:

At the heart of these developments is AI’s ability to convincingly impersonate and manipulate digital identities, dissolving the boundary between authentic and fake. The report uncovers four core areas where this erosion of trust is most visible:

Defensive Strategies:

The report emphasises that defenders must now assume AI is embedded within adversarial campaigns. To counter this, organisations should adopt AI-aware cyber security frameworks, including:

“In this AI-driven era, cyber security teams need to match the pace of attackers by integrating AI into their defences,” said Finkelstein. “This report not only highlights the risks but provides the roadmap for securing AI environments safely and responsibly.”
