Shahram Izadi, vice president of engineering at Google, delivering a keynote address on Android XR. Photo courtesy Android XR.


Google I/O: Android XR glasses finally see the point

After a decade of smart glasses trying too hard to be clever, Google has gone for subtle, writes ARTHUR GOLDSTUCK.

Once upon a time, Google tried to change the world. It launched Google Glass, which was in effect a computer and a video camera in one’s spectacles. As expected, the specs were ugly, the computer ineffectual, and the camera an invitation to have your face smashed in.

I wrote back in 2013 that it “makes the user look like a 20th century watchmaker, and turns anyone in the vicinity of the user into a privacy-obsessed paranoiac.” That line aged far better than the product. It captured the unease of Glass in public. More than a decade later, Google appears to have absorbed the lesson: this time, the new Android XR glasses don’t announce themselves.

They also don’t look like they’re trying to change the world. And that may be the most promising thing about them.

The Android XR glasses were unveiled during the opening keynote of Google I/O 2025 at the Shoreline Amphitheatre in Mountain View last week. Shahram Izadi, vice president of engineering at Google, presented them as a direct application of Gemini, the company’s AI model. “This is Gemini running on a lightweight pair of glasses,” he said. “It gives you immediate access to information and understanding – right when you need it, and only when you ask.”

Izadi described it as ambient computing: AI that blends into your environment. 

“Technology should enhance your awareness, not replace it,” he said.

In the AI Sandbox demo zone at Google I/O, which showcased experimental Gemini-powered tools, I had the chance to try the glasses for myself.

I was handed a prototype pair of the glasses and invited to look around. In front of me was a reproduction of a painting. I tapped the side of the frame and asked Gemini what the painting represented.

The answer came instantly. The system identified the work, gave a brief background on the artist, and explained the symbolism. The response was clear, accurate and well structured, its sources cited, and it was projected softly into my field of view.

Then I asked a follow-up: broader, still related, designed to test whether Gemini could handle a layer of historical interpretation. It simply repeated the first answer. Before I could say “Reset”, I was guided to the next demo station with a smile and a thank-you. No triggering us with our own glitches, okay?

For the rest of the demo, the prototype responded well to short, unambiguous questions. Gemini could retrieve calendar events, summarise recent headlines, and give location-based answers. The overlay appeared briefly, then faded. 

This was in keeping with the device itself.

The design didn’t try to draw attention to itself. There was no camera in sight. The frames looked like standard glasses. The system required a tethered phone. I wasn’t blown away, but nor was I embarrassed – as another tester should have been when he arrived at the demo wearing 2013-vintage Google Glass to show he had always been invested. He didn’t mention that his Glass had also been disconnected for the past decade or so.

In his keynote, Izadi emphasised that the new design principles were based on minimalism. 

“You shouldn’t have to adapt to the interface,” he said. “The interface should adapt to you.” 

There was no mention of launch timing or final specifications. The focus was on showing what’s possible, not what’s finished.

Later in the day, Google co-founder Sergey Brin joined a session reflecting on the company’s earlier attempt at smart glasses. 

“We didn’t quite think through all the social implications of having a camera on your face,” he said in a rare acknowledgement of past naivety – or hubris.

“There were people who wore them into bars, bathrooms. And then we were surprised that people found it creepy.”

The new glasses avoid that problem by avoiding any sense of surveillance. There was no suggestion of recording, no indicator lights, no attempt to be anything more than a quiet assistant.

Other testers found the same. Android Police highlighted the clarity of responses. ZDNet described the interaction as smooth and the answers as well-formed. But then, the system wasn’t pushed beyond basic tasks. In that range, it performed with little friction.

My session bore that out: Gemini worked where it was meant to work. The moment a question stepped outside the demo script, the experience was paused and redirected. It wasn’t awkward. It was a reminder that this is still a controlled environment.

The glasses didn’t make a grand impression. They offered something smaller: a focused tool for lightweight tasks, framed around presence instead of performance.

The XR glasses offer a glimpse, not of the future, but of something that could fit into the present. There is a caveat, though: the system needs to keep learning what to do when the questions get more interesting.

Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of “The Hitchhiker’s Guide to AI”.
