Software
Google I/O: Gemini rolls out AI red carpet
The annual Google feast of innovation was aimed at developers, but its announcements will change users’ lives, writes ARTHUR GOLDSTUCK.
A riddle is solved live. The AI speaks its reasoning aloud, step by step. A phone scans a room and explains what just changed. A document rewrites itself in your own tone, without being prompted twice.
At Google I/O 2025, this wasn’t spectacle, even if a red carpet had been rolled out for AI.
This carpet stretched across the tools people already use: Search, Gmail, Docs, the camera in your phone.
The star wasn’t the system. It was the user.
Google I/O 2025 presented a clear vision of AI moving beyond theoretical promise into tangible, daily applications. The advances showcased were less about new devices than about enhancing the familiar tools people already use, making them more intuitive and responsive.
This signals a fundamental change in how users will interact with technology, with AI taking on a more proactive and integrated role. The emphasis was firmly on utility and seamless integration into existing digital ecosystems.
As Google CEO Sundar Pichai said in his keynote address, “We’re making AI more helpful for everyone, in bold and responsible ways.
“Every major computing platform shift has been propelled by a key technology breakthrough. AI is that next breakthrough.
“People are using search more naturally, asking even more complex questions. We’re beginning to redefine what it means to search.”
This platform shift wasn’t described in abstract terms; it was demonstrated through each new feature, consistently moving initiative closer to the user.
The core of this transformation lies in the pervasive integration of AI, particularly Google’s Gemini models, into widely used applications. The demonstrations at I/O illustrated a future where AI functions as an intelligent partner, enabling users to accomplish tasks with greater ease and efficiency.
AI Mode in Search
Google Search is no longer merely a conduit for information, but is becoming a reasoning engine. Users can pose complex, layered questions, and Search, powered by Gemini 2.5 Pro, will dissect the prompt, explain its logical steps aloud, and deliver comprehensive responses. This redefines the act of searching, shifting from a passive lookup to an active, guided exploration of information.
Gemini 2.5
Central to these advancements is the enhanced capability of Gemini 2.5. This model is now natively integrated into Google Workspace applications, including Gmail, Docs, Sheets, Slides, and Drive. It offers sophisticated functions such as rewriting, summarisation, and, critically, the ability to retain memory across different sessions.
Nobel Prize winner Demis Hassabis, CEO of Google DeepMind, said: “Gemini 2.5 Pro is our best-performing model yet. Students are learning new topics. Creators are exploring ideas faster.”
A significant technical achievement is Gemini 2.5 Pro’s expanded context window, which now encompasses one million tokens.
“That’s the longest of any large-scale foundation model,” he said.
This capacity allows the model to process extensive documents – full-length reports, legal documents, entire codebases, and videos – within a single session, significantly boosting its utility for demanding tasks.
Project Astra
Beyond text and data, Project Astra illustrates Google’s long-term vision for a truly universal AI agent. In a compelling demonstration at I/O, Astra displayed the ability to identify objects, recall changes in its environment, and answer questions about past moments within a visual context.
Said Hassabis: “We’ve been pursuing this vision for a while. A universal AI agent that can be truly helpful in everyday life.”
This represents a shift from AI that responds to prompts to AI that actively perceives and provides real-time feedback. The interaction becomes less about explicit commands and more about a dynamic of observation and response.
AI Premium
For professionals with intensive daily workloads, the Google One AI Premium Plan offers a dedicated tier of service. Priced at $19.99 per month in the US, this plan provides access to Gemini Advanced, unlocking enhanced Pro features across Workspace, including extended session durations, faster output generation, and the capacity to handle larger files. It is positioned as essential infrastructure for those whose work heavily relies on writing, development, or analysis, underscoring AI’s growing role as a core productivity tool rather than a supplementary feature.
Android XR
Looking to the future of immersive technology, the new Android XR platform was introduced, designed to support spatial computing across applications and wearables. Google is collaborating with Qualcomm and Samsung on compatible hardware.
On stage, executives previewed Xreal glasses integrated with Gemini, demonstrating environment-aware assistance. These glasses can identify signs, landmarks, and movements, with prompts appearing seamlessly in the corner of the user’s view.
Real-time translation also occurs as the user looks at text. This integration transforms the AI assistant into a visually contextual guide, providing information and assistance directly within the user’s perception of the world.
A new relationship with technology
Google I/O 2025 delivered not a singular, dramatic reveal, but a series of incremental yet significant advancements that collectively redefine user expectations.
Tools that once required explicit prompts now anticipate and respond uninvited, marking a move towards a more intuitive and proactive relationship with technology.
The red carpet rolled out for AI at this year’s I/O does not lead to a distant stage; it ends at the user’s desk, their device, and their daily to-do list. The focus is squarely on the user, empowered by AI that is becoming increasingly integrated and inherently helpful.
* Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of “The Hitchhiker’s Guide to AI”.
