Check Point Software Technologies has launched an AI security control plane designed to help enterprises govern how AI is connected, deployed, and operated across the business.
According to the cybersecurity company, AI systems are moving from assistants to autonomous actors that access data, invoke tools, and take action. Check Point says the AI Defense Plane provides an intelligence layer to secure the agentic era.
“The enterprise is entering the agentic era,” says David Haber, Check Point VP for AI security. “AI is no longer limited to generating content. It is beginning to access systems, use tools, chain actions, and operate with increasing autonomy. That changes the security model. The challenge is no longer just what AI says, but what AI can do.
“Organisations need more than model safety. They need runtime control over how AI behaves inside real environments. The AI Defense Plane provides that control across employees, applications, and AI agents.”
As enterprises deploy AI systems in production, the scope of potential security risks extends beyond prompts and models to include agentic workflows, delegated actions, non-human access, and shadow agents operating within business environments.
The AI Defense Plane combines discovery, governance, observability, runtime control, and continuous validation across the AI execution lifecycle. The system is built on Check Point’s AI Security platform and draws on technologies from ThreatCloud AI and from its acquisitions, Lakera and Cyata.
The AI Defense Plane uses an AI-native security engine that makes real-time decisions based on analysis of millions of AI interactions, adversarial testing, and live threat intelligence. This approach is designed to adapt as AI systems evolve. According to Check Point, the platform delivers protection in under 50 milliseconds across more than 100 languages, sustaining prevention as automated attacks scale.
Check Point says that, while most approaches focus on model guardrails, the AI Defense Plane secures how AI behaves in production. The system is designed to enforce control where enterprise AI risk becomes real: at runtime, inside live environments, and across the workflows that connect AI to business operations.
According to Check Point, the AI Defense Plane includes three primary modules:
- Workforce AI Security: Provides visibility, governance, and runtime safeguards for how employees use AI-powered applications. The module enforces policy in real time, reduces the risk of sensitive data exposure, and enables safe productivity across sanctioned and unsanctioned AI tools.
- AI Application and Agent Security: Provides discovery, posture, and runtime control for AI applications and agentic systems embedded across the business. Organisations can identify where AI is present, understand what data and tools it can access, evaluate how it behaves, and govern the permissions and trust relationships that shape agentic execution.
- AI Red Teaming: Enables continuous adversarial testing of prompts, reasoning paths, workflows, tool use, and agent behaviour. It helps organisations uncover exploitable weaknesses early and strengthen resilience as AI systems move from prototype to production.
George Davis, product leader at Sierra, an AI company focused on enterprise AI agents and systems, says: “Red teaming has become essential for agentic systems. When AI can query infrastructure, trigger workflows, and interact with sensitive data, the risk is no longer theoretical. Organisations need continuous testing to understand how these systems can be manipulated, where controls break down, and how resilient they are in production.”
The AI Defense Plane is included in Check Point’s AI Security portfolio, with Workforce AI Security and AI Application and Agent Security available immediately, while AI Red Teaming is in limited release.
At RSA Conference 2026, Check Point presented the AI Defense Plane with live demonstrations and briefings. The company also debuted Gandalf: The Agent Gauntlet, an experiential showcase exploring how agentic systems can be attacked, manipulated, and validated through modern red teaming methodologies.
