CES 2026: Nvidia sets a new course for AI
Jensen Huang used Nvidia’s CES 2026 keynote to lay out a clear direction for how AI will reshape computing, writes ARTHUR GOLDSTUCK.
Jensen Huang’s CES 2026 keynote served as a strategic reset for Nvidia. Instead of previewing individual products, the company’s founder and CEO laid out a direction for how artificial intelligence will evolve, where it will operate, and how Nvidia intends to shape that shift.
“Computing has been fundamentally reshaped as a result of accelerated computing, as a result of artificial intelligence,” he said. “What that means is some $10-trillion or so of the last decade of computing is now being modernised to this new way of doing computing.”
Nvidia’s emphasis, Huang said, now centres on building platforms that integrate intelligence into infrastructure.
“We are building the computing platform for the age of artificial intelligence,” he said.
At the core of that platform is Rubin, Nvidia’s next-generation architecture. Huang described Rubin as a full-stack system designed to handle the scale and complexity of modern AI workloads.
“Rubin is a complete platform,” he said. “It combines GPUs, CPUs, networking and software into one system.”
The significance of Rubin, Huang argued, lies in how it changes the economics of intelligence: lower cost determines how far and how fast AI spreads.
“The cost of generating tokens is coming down dramatically,” he said. “We are reducing the cost by about an order of magnitude.
“The faster you train AI models, the faster you can get the next frontier out to the world.”

Efficiency, in Nvidia’s view, sets the pace of innovation across industries that depend on large-scale models.
Huang used that context to broaden the conversation beyond data centres. AI, he said, is moving into systems that interact directly with the physical world.
“The next wave of artificial intelligence is physical AI,” he said. “It understands the laws of physics. It understands cause and effect.”
From that premise, Huang introduced Alpamayo, a family of reasoning models built for autonomous systems. He described them as models that connect perception and decision-making.
“These models take perception and turn it into action. They reason about what they see and decide what to do.”
Automotive technology provided a clear illustration. Huang demonstrated a Mercedes-Benz CLA, equipped with Nvidia’s autonomous driving stack, navigating complex traffic scenarios.
The emphasis on reasoning, rather than rule-based behaviour, was a recurring theme. Huang positioned autonomy as a problem of judgement rather than mapping.
“Autonomous driving requires understanding,” he said. “Understanding requires reasoning.”
Robotics formed another pillar of Nvidia’s direction. Huang described robots as physical AI systems that must learn through experience. “We train robots in simulation. Then we transfer that intelligence into the physical world. Simulation lets us create massive amounts of experience. That experience becomes the training data for physical machines.”
This approach extends Nvidia’s reach beyond hardware into software frameworks, simulation environments and development tools. Huang framed it as an ecosystem designed to shorten the distance between idea and deployment. “Everything we do is about accelerating time to intelligence,” he said.
Openness surfaced repeatedly throughout the keynote. Huang stressed that Nvidia’s platforms and models are designed to support broad participation. “We build it completely in the open. So every company, every industry, every country can be part of this AI revolution.”
Huang referenced applications spanning healthcare, climate science and manufacturing. Personal computing also featured, with AI agents running locally on powerful desktop systems.
Throughout the keynote, Huang returned to the importance of integration. Compute alone, he suggested, no longer defines progress. “The future of computing is systems. Systems that integrate compute, networking, software and intelligence.”
That systems-level thinking reflects Nvidia’s evolution. The company’s identity has expanded from graphics hardware to accelerated computing and now to full AI infrastructure.
What Rubin means
Rubin is Nvidia’s next-generation computing platform, and Jensen Huang positioned it as a system rather than a single chip. “Rubin is a complete platform,” he said. “It combines GPUs, CPUs, networking and software into one system.”
At its core, Rubin pairs new Nvidia GPUs with Vera CPUs designed to handle the heavy data movement that large AI models require. Huang described the aim as building infrastructure that supports reasoning, simulation and real-world interaction at scale, rather than chasing isolated performance gains.
A key focus of Rubin is efficiency. Huang highlighted a sharp drop in the cost of generating AI output. “The cost of generating tokens is coming down dramatically. We are reducing the cost by about an order of magnitude.”
That reduction determines how widely AI can be deployed. Lower cost means more training runs, faster iteration and broader access across industries.
Rubin is designed for data centres, autonomous systems and advanced simulation workloads, positioning it as the backbone for what Nvidia calls physical AI.
* Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of The Hitchhiker’s Guide to AI – The African Edge.