CES 2026, the world’s largest consumer technology expo, felt close to being overrun by robots in Las Vegas last week. They rolled through aisles, paused at intersections, carried crates, guarded entrances, and drew crowds wherever they stopped. Movement across the Las Vegas Convention Center turned into a shared exercise in human and machine awareness, with visitors adjusting their pace to accommodate devices that appeared increasingly comfortable in public space.
But there were clues, both subtle and obvious, that robots are not quite ready for prime time.
As I passed the stand of Chinese robot-maker Galbot, one of its robots collapsed in front of me. The fall came without warning, and its custodians froze in shock as it sank to the carpet. It was a poignant reminder that even in an era of “physical AI”, as the robotic revolution is being termed in 2026, the gap between a polished demo and real-world reliability remains a trip hazard. CES fatigue is real, even for automatons.
The robots were not confined to a single hall. Industrial machines demonstrated logistics tasks with robotic arms that felt more like factory fixtures than prototypes. Dozens of low-cost Chinese-made humanoid robots seemed to portend the imminent arrival of affordable home robots for all. Little wonder that estimates suggest a market worth US$50-billion annually at present, more than doubling by the end of the decade. Some of the more wild-eyed forecasts put the number in the trillions. And the boom is hardly confined to China.
Boston Dynamics’ electric Atlas made its public debut, appearing outside a lab for the first time. Freed of the clunky hydraulics of its predecessor, it moved with a haunting, non-human fluidity, standing up from the floor by swivelling its joints in ways a human cannot. And no, it wasn’t just dancing for a TikTok video; it was demonstrating the precision required for Hyundai’s manufacturing floors.
Humanoid robots, however, were the primary attraction.
UniX AI arrived at the show with its full-size humanoids, Wanda 2.0 and Wanda 3.0, positioned as ready-to-work service robots for hotels and managed facilities. In staged scenarios, they smoothly poured drinks and replenished amenities. Built on a foundation of 23 high-degree-of-freedom (DoF) joints and an 8-DoF bionic arm, the Wanda series is part of a new wave of mass-produced hardware. UniX AI claims a delivery capacity of 100 units per month, suggesting a shift from “research” to “revenue”.
Nearby, the focus shifted to the domestic sphere. Luka showcased its family-focused Luka Robot, designed for “Generation Alpha”. Unlike the industrial units, the Luka Robot is built for multimodal interaction with kids: reading stories, holding conversations, and responding to emotional cues. Its presence underlined how robotics now stretches beyond labour into educational and emotional engagement, provided the environment is controlled and safe.
The difference between robots built to work and robots built to engage shaped the experience of the show. It also framed how Paul Stafford read the exhibition. Stafford, CEO of robotics and physical-AI company Haply, spent CES watching how robots operated once the initial novelty faded.
“Robots tend to be used in very simple use cases,” he said. “They’re used in highly automated environments, where the robot moves from point A to point B. It does that, and it doesn’t know anything other than A to B.”
The moment robots entered shared, unpredictable spaces, the strain showed, he said. Some paused when people crossed their paths; others slowed noticeably. This is where the technical gap between vision and sensation becomes a chasm.
Many robots at the show, including the high-profile humanoids, rely heavily on vision systems—processing the world through cameras and AI models like Nvidia’s Project GR00T. They see a person, process the pixels, and then calculate a response. But this creates a subtle “stutter” in logic.
“Bridging the gap between how a human carries out a task and bringing that into a robotic world is extremely complex,” said Stafford. “A lot of what’s being done now uses vision. Cameras see at a relatively slow rate: typically 30 to 60 frames per second. But when you touch something, your biological sensors and the haptic loop are working much faster, often at 1,000Hz or more.”
This technical gap explains the stiffness visible in many humanoids. Because they lack instinctive tactile feedback, their motion follows pre-calculated scripts rather than real-time physical sensation. If a Wanda robot grasps a glass, it is largely “calculating” the grip based on what it sees. If the glass is slightly oily or heavier than expected, the vision system might not catch the slip until it is too late.
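The arithmetic behind that slip is easy to sketch. The toy Python snippet below is purely illustrative: the loop rates are the ones Stafford cites, while everything else, including the 5ms slip, is an assumption made for the example, not any vendor’s real control stack.

```python
# Toy timing model of the vision-vs-touch gap described above.
# Rates are those cited in the article; the scenario is invented.

VISION_HZ = 30      # typical camera frame rate
HAPTIC_HZ = 1000    # typical force-sensing loop rate

def detection_delay_ms(sample_hz: float, slip_start_ms: float) -> float:
    """Delay before a controller sampling at sample_hz observes a
    slip that begins at slip_start_ms after the grasp starts."""
    period_ms = 1000.0 / sample_hz
    # Index of the next sample strictly after the slip begins
    next_sample = int(slip_start_ms // period_ms) + 1
    return next_sample * period_ms - slip_start_ms

slip_at = 5.0  # the glass begins to slide 5ms into the grasp
print(f"vision loop notices after ~{detection_delay_ms(VISION_HZ, slip_at):.1f} ms")
print(f"haptic loop notices after ~{detection_delay_ms(HAPTIC_HZ, slip_at):.1f} ms")
```

In this sketch, nearly 30ms can pass before a camera-only controller even registers the movement, while a 1,000Hz force loop catches it within about a millisecond: roughly the difference between correcting a grip and watching the glass fall.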
Stafford’s work at Haply focuses on this precise “sensation” gap. The company develops systems that capture human movement through touch and force, recording motion at extremely high precision so robots can learn from how people actually perform tasks, rather than just how they look while doing them.
“We have a device that captures your movements in a very natural way,” he said. “You hold it and move as you would when doing a task. It captures movement at extremely high rates, with very high accuracy.”
That approach showed its value in robots trained through direct human guidance. They adapted more smoothly to variation, behaving less like machines following instructions and more like extensions of human intent. It suggests that the “ChatGPT moment” for robotics will not come from better vision models, but from better “feeling” models: systems that can sense pressure, weight, and friction in real-time.
The most credible robots at CES shared a common assumption: people remain involved. Warehouse robots worked under supervision; lab robots assisted technicians. Even the service humanoids operated inside defined scenarios. Care robots and companions spoke gently and reacted to faces, but the functional boundaries remained. If a person falls, can the robot pick them up? Usually, the answer in 2026 is still no.
* Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of The Hitchhiker’s Guide to AI – The African Edge.
