Cisco has launched the Cisco N9100, an Nvidia partner-developed data centre switch based on Nvidia Spectrum-X Ethernet switch silicon. With this product, Cisco is offering an Nvidia Cloud Partner-compliant reference architecture for neocloud and sovereign cloud deployments.
For enterprise customers, the Cisco Secure AI Factory with Nvidia aims to strengthen protection and visibility across AI deployments with new security and observability integrations. Cisco, Nvidia, and their partners have unveiled what they say is the industry’s first AI-native wireless stack for 6G, designed to support next-generation telecom connectivity.
The offerings aim to provide neocloud, enterprise, and telecom customers with the flexibility and interoperability to efficiently build, manage, and secure AI infrastructure at scale.
“We’re at the beginning of the largest data centre build-out in history,” says Jeetu Patel, Cisco president and chief product officer. “The infrastructure that will power the agentic AI applications and innovation of the future requires new architectures designed to overcome today’s constraints in power, computing, and network performance.
“Together, Cisco and Nvidia are leading the way in defining the technologies that will power these AI-ready data centres in all their varieties, from emerging neoclouds, to global service providers, to enterprises, and beyond.”
Gilad Shainer, Nvidia SVP of networking, says: “Nvidia Spectrum-X Ethernet delivers the performance of accelerated networking for Ethernet. Working with Cisco’s Cloud Reference Architectures and Nvidia Cloud Partner design principles, customers can choose to deploy Spectrum-X Ethernet using the newest Cisco N9100 series or Cisco Silicon One based switches to build open, high-performance AI networks.”
Portfolio
Ethernet-based back-end and front-end networks must be flexible enough to support rapid AI innovation, integrate with existing infrastructure, and remain easy to deploy and manage.
The Cisco N9100 series switches, expected to be orderable before the end of the year, are available with either Cisco NX-OS or SONiC operating systems. They are designed to advance Ethernet for AI networks and provide greater flexibility for neocloud and sovereign cloud customers developing AI infrastructure.
Building on the N9100, Cisco plans to offer an Nvidia Cloud Partner-compliant reference architecture. The company’s Nexus data centre switching portfolio aims to deliver a unified operating model through Cisco Nexus Dashboard, spanning Silicon One, cloud-scale ASICs, and switches built on Spectrum-X Ethernet switch silicon.
For neocloud and sovereign cloud customers, the Cisco Cloud Reference Architecture is based on the design tenets of Nvidia’s Cloud Partner reference architecture and uses Cisco’s Silicon One and Cloud-scale ASIC offerings. The reference architecture will include the recently launched Cisco 8223 based on the Silicon One P200 for scale-across networks, Nvidia BlueField-4 DPUs, and Nvidia ConnectX-9 SuperNICs.
Cisco AI factory with Nvidia
The Cisco Secure AI Factory with Nvidia, unveiled at GTC in March 2025, aims to provide enterprises with an architecture for AI infrastructure that prioritises security and observability while maintaining performance.
Built on Cisco AI PODs and Nexus switching powered by Cisco Silicon One, the offering is now being expanded with new capabilities and features. According to the company, these include:
- Security and observability: Cisco AI Defense integrates with Nvidia NeMo Guardrails to deliver robust cybersecurity for AI applications. Cisco AI Defense is orderable for on-premises data-plane deployment, enabling security and AI teams to protect AI models and applications while limiting the sensitive data that leaves their organisation’s data centres. Also available, Splunk Observability Cloud helps teams monitor the performance, quality, security, and cost of their AI application stack – including real-time insights into AI infrastructure health with Cisco AI PODs – while Splunk Enterprise Security extends this visibility to protect AI workloads.
- Core AI infrastructure: Cisco Isovalent is now validated for inference workloads on AI PODs, enabling enterprise-grade, high-performance Kubernetes networking. Cisco Nexus Hyperfabric AI, with a new cloud-managed Cisco G200 Silicon One switch that delivers high-density 800G Ethernet, is now orderable as a deployment option in AI PODs. Cisco UCS 880A M8 rack servers with Nvidia HGX B300, and Cisco UCS X-Series modular servers with Nvidia RTX PRO 6000 Blackwell Server Edition GPUs, are also now orderable as part of AI PODs. This enables high-performance GPU support for a wide range of workloads, including generative AI fine-tuning and inference.
- Ecosystem expansion: Nvidia Run:ai software is available through Cisco and its partners, enabling intelligent AI workload and GPU orchestration capabilities. Nutanix Kubernetes Platform (NKP) solution is now a supported Kubernetes platform, and Nutanix Unified Storage (NUS) solution is now a supported storage option, with Nutanix Enterprise AI (NAI) solution as the interoperable software component that simplifies building and operating containerised inference services.
- Government readiness: Cisco is collaborating with Nvidia and aligning to the new Nvidia AI Factory for Government, a full-stack end-to-end reference design for AI workloads deployed in highly regulated environments.
AI-native wireless stack
As AI expands beyond smartphones to devices such as augmented reality glasses, connected vehicles, and robotics, wireless networks are under increasing pressure to handle billions of connections efficiently and at scale.
To address this, Cisco, Nvidia, and several telecom partners have developed what they say is the first American AI-RAN stack for mobile networks, integrating sensing and communication.
Demonstrated with multiple pre-6G applications at Nvidia GTC DC, the stack enables telecom providers to incorporate AI into mobile networks beginning with advanced 5G services and lays the foundation for future 6G capabilities. It combines Cisco’s user plane function and 5G core software with the Nvidia AI Aerial platform to support physical AI and integrated sensing with improved efficiency and security.
Xiaohe Hu, Infrawaves CEO, says: “The real challenge in AI infrastructure isn’t just performance – it’s maintaining operational sanity as you scale from dozens to thousands of GPUs. Cisco’s approach with NX-OS and Nexus Dashboard creates a single pane of glass across our entire AI fabric, whether we’re optimising inference latency in the front-end or maximising training throughput in the back-end. That operational simplicity translates directly to faster deployments and lower TCO.”
Yih Leong Sun, GMI Cloud head of Infra, says: “Cisco’s N9100 series, powered by Nvidia Spectrum-X Ethernet switch silicon, provides a solution for high-performance, open infrastructure to meet our AI cloud demands. The capability to run NX-OS or SONiC under a unified operating model on Nexus Dashboard delivers more flexibility to our customers with operational simplicity. It’s enterprise-grade networking with the scale and agility of the cloud – exactly what the next generation of AI workloads requires.”
