Artificial Intelligence
Cisco puts the brakes on AI hype
Delegates at this week’s Cisco Live conference in Amsterdam were told that infrastructure limits now define how far AI deployments can go, writes ARTHUR GOLDSTUCK.
Artificial intelligence projects are failing at scale because the systems meant to carry them cannot keep up. That was the message delivered during the opening keynote of Cisco Live Europe 2026 in Amsterdam. Global networking equipment leader Cisco said bandwidth limits, data-centre capacity and security controls now decide how far AI deployments can go.
“AI is here, and it’s moving at breakneck speed,” said Gordon Thomson, president of Cisco EMEA, during the opening keynote session. “The decisions you’re making today will define your competitive advantage for years to come.”
At the event, the company announced a series of infrastructure and security advances aimed at supporting large-scale, agent-driven AI workloads that operate continuously across enterprise environments.
Thomson linked Cisco’s message to a widening readiness gap. He cited Cisco’s AI Readiness Index, which shows that only 11% of organisations in EMEA describe themselves as fully prepared for AI. Those organisations, he said, move pilots into production more often and report clearer returns.
“Preparedness can no longer be a long-term goal,” he said. “It’s an immediate imperative.”
From tools to agents
Jeetu Patel, Cisco president and chief product officer, used the opening keynote to describe a shift away from chat-based AI toward systems that operate with greater autonomy.
“We are moving from this era of chatbots that we lived in over the course of the past three years to this age of agents,” he said. “Agents are co-workers that are actually augmented team members.”
Patel said these agents differ from earlier AI tools because they plan, act and operate across systems rather than responding to individual prompts.
“Each one of us, at some point in time in the very near future, will be supervisors of these agents.”
That change reshapes enterprise infrastructure demands. Patel identified three constraints shaping AI deployment: physical capacity, trust and access to data.
“If you don’t trust these systems, you’re not going to use them. Safety and security accelerate adoption, which therefore becomes a prerequisite for productivity.”
He also pointed to pressure on training data, as publicly available sources reach limits and organisations turn to proprietary and machine-generated data to differentiate their models.

Silicon and scale
Much of Cisco’s product activity at the event focused on addressing capacity constraints. The company announced Silicon One G300 switch silicon, designed to support gigawatt-scale AI clusters for training, inference and real-time agentic workloads. Cisco said the platform improves network utilisation by 33% and cuts job completion time by 28%, compared with non-optimised traffic.
Cisco introduced G300-powered Nexus 9100 and 8000 systems aimed at hyperscalers, service providers, sovereign cloud operators and enterprises building AI-focused networks. The company positioned the hardware as a foundation for scaling GPU-intensive workloads without increasing operational complexity.
Alongside the silicon, Cisco unveiled Nexus One, a unified management plane designed to simplify operations across on-premises and cloud-based data centres. The platform aims to reduce fragmentation as AI workloads stretch across multiple environments.
Cisco also expanded its AgenticOps portfolio across networking, security and observability. The approach draws on telemetry from across Cisco platforms, including networking, Security Cloud Control, Nexus One and Splunk, to automate troubleshooting, optimisation and policy enforcement.
“These agents are going to be working seven by 24 on our behalf,” said Patel. “For every human, there might be ten, maybe 100, maybe 1,000 agents.”
Patel said that scale required a shift in operations, with systems handling routine response and humans stepping in for oversight and judgment.
Security under pressure
Security featured heavily in the opening keynote. Cisco announced its largest update to AI Defense, extending governance across the AI supply chain and adding runtime protections for agent-driven systems. The updates aim to reduce the risk of compromise or manipulation as agents interact with tools and data sources.
“With chatbots, you worry about what they say,” said Patel. “With agents, you worry about what they do.”
Cisco also detailed advances to its secure access service edge portfolio, including inspection of agentic traffic to assess how and why tools are used, alongside protections for model and agent integrity.
Sovereignty formed part of the broader platform message. Cisco said its customer experience organisation now supports air-gapped, on-premises and hybrid environments, and highlighted the rollout of Critical National Services Centres across Europe to support organisations with strict data-handling requirements.
Across the opening keynote, Cisco returned to a consistent position: AI capability advances faster than the systems designed to support it. As Thomson said, “The same infrastructure was not built for the scale and velocity of tomorrow’s workloads.”
Cisco’s response rests on tighter integration across networking, security and operations, with the aim of supporting AI workloads from the data centre through to the workplace and the edge.
For customers, the message was clear. As agent-driven systems move closer to production, infrastructure choices increasingly decide which AI initiatives scale and which stall.
“History is written by those who move with speed and conviction,” said Thomson. “You are in the driving seat.”



