Oz gets an AI makeover

Google Cloud, AI and studio magic restore scenes that never existed in the classic “Wizard of Oz” movie, for The Sphere in Las Vegas, writes ARTHUR GOLDSTUCK.

It begins with a tornado. Not the one that swept Dorothy out of Kansas in 1939, but a digital whirlwind that greeted an invitation-only audience for a preview inside the Sphere in Las Vegas — a 112-metre-tall dome where technology and storytelling collide with AI. The Wizard of Oz @ The Sphere is not so much a remake as a reinvention, where AI both enhances the movie and expands its reality.

Gadget was there to witness a world-first: a Hollywood classic transformed into a 270-degree immersive experience, using AI to go beyond the frame, literally and figuratively. The preview was the curtain-raiser to the Google Cloud Next conference held in Las Vegas this month.

“I’ve never felt so large and so small simultaneously – it’s like I’m in a quantum state,” said Google and Alphabet CEO Sundar Pichai, as he opened the event from a tiny stage beneath a giant screen.

The Sphere in Las Vegas carrying a Google advertisement for The Wizard of Oz @ The Sphere. Photo courtesy Google.

Pichai described the project as a culmination of creativity and computation: “There were a lot of hard, technical challenges to solve. Over the last year, we’ve been pushing the boundaries of our frontier models — including Gemini and our state-of-the-art video generation model, Veo.”

AI went beyond the standard approach of upscaling the original visuals, which would merely have converted low-resolution images into ultra-high resolution. Instead, it reimagined them. Every scene was reconstructed using Google DeepMind’s generative models, which filled in details that the original film never captured. Where the 1939 cameras cut off characters at the knees or at the edges of the frame, AI now paints them in full. When Dorothy walks down the Yellow Brick Road, one doesn’t see her from only one angle. The entire environment unfolds around the viewer.

Ben Grossman, CEO of Magnopus studio, one of the creative partners on the project, said it was never about stretching the film.

“The original movie is in a rectangle. Inside the Sphere, it would be the biggest rectangle in the world, but it would still be a rectangle,” he said. “We had to go beyond that. We needed to complete the characters that were cut off outside the field of view of the original cinema production. We had to generate performances that were never captured on camera.”

The process, known as outpainting, allowed AI models to generate missing body parts, facial features, and costume details by referencing the film itself.

“Using generative AI techniques, we fine-tuned the models on the original Wizard of Oz data. So if you wanted to see Dorothy’s shoes in a scene where her feet weren’t visible, the AI knew how to show them,” said a DeepMind researcher.
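Conceptually, outpainting starts by placing the original frame on a wider canvas and marking which pixels are authentic and which a generative model must invent. A minimal sketch of that masking step in plain NumPy (the function name and dimensions are illustrative, not the actual production pipeline):

```python
import numpy as np

def prepare_outpaint_canvas(frame: np.ndarray, pad_left: int, pad_right: int):
    """Place an original frame on a wider canvas and build a mask.

    Returns (canvas, mask): mask is 1 where a generative model must
    invent content, 0 where original pixels must be preserved.
    """
    h, w, c = frame.shape
    canvas = np.zeros((h, w + pad_left + pad_right, c), dtype=frame.dtype)
    canvas[:, pad_left:pad_left + w, :] = frame
    mask = np.ones((h, w + pad_left + pad_right), dtype=np.uint8)
    mask[:, pad_left:pad_left + w] = 0  # original pixels stay untouched
    return canvas, mask

# A 4:3 frame widened sideways toward a wraparound field of view (toy numbers).
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
canvas, mask = prepare_outpaint_canvas(frame, pad_left=320, pad_right=320)
```

The canvas-plus-mask pair is what an inpainting/outpainting model consumes: it generates pixels only where the mask is set, so Dorothy’s unseen shoes can be synthesised while every original pixel survives unchanged.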

Ralph Winter, head of physical production at Sphere, demonstrated how the system worked. “This is where the model is learning from training on The Wizard of Oz data. It takes a video input and extends it. We’re not just upscaling pixels – we’re asking the AI to imagine what should be there, based on the intent of the filmmakers.”

Maintaining that intent was paramount. “You can’t just take it and do anything with it,” said producer Jane Rosenthal. “This is part of our cultural history. The key was to maintain the integrity of the original filmmakers.”

What resulted can hardly be called an upgrade. If anything, it is a new cinematic category.

Sphere’s executive chairman Jim Dolan, who has shepherded a series of boundary-shifting productions there, put the venue in context: “This is not just a building. You go in, and you’re inside the content. It’s experiential. We needed a property that would take advantage of every capability inside the venue.”

The capabilities were put to the test. Every close-up of Judy Garland’s face had to be enhanced using super-resolution models, trained on footage and Technicolor references from archival sources.

“We even tracked down a Technicolor notebook from a cameraman on Gone With the Wind to work out how the lenses operated at the time,” said one researcher.

The AI “shoot” meant the entire film had to be broken down into individual shots, with new detail added and missing visuals invented – from eyelashes to background props. “It’s a little bit of an archaeological dig,” said Rosenthal. “But we had to do it with AI.”

Sundar Pichai described the outcome as “a glimmer into the future of what’s possible with AI in media and entertainment.” He pointed out that, even 12 months ago, such an undertaking would not have been feasible. “The capabilities just weren’t there.”

It was made possible by Google Cloud infrastructure, designed not just for scale but for speed.

Thomas Kurian, CEO of Google Cloud, described it during a media briefing at the conference: “We’ve optimised our infrastructure for both training and inferencing of models. It’s not just about the AI – it’s about what customers and partners can build with it.”

That includes Google’s custom Trillium chips and water-cooled systems, which allow massive models to run efficiently.

“Since January 2023, we’ve reduced the cost of inferencing by more than 20 times,” Kurian told Gadget. “And we use sustainable energy sources – wind, hydroelectric, solar – in regions that are now over 95% renewable.”

But infrastructure alone could not handle the complexity of projects on the scale of the new Oz. Such orchestration now also leans on Google’s Agent Development Kit, a system that allows individual AI agents to specialise in tasks such as lighting, facial tracking, costume texture and motion continuity. And the new Agentspace platform, announced during the conference, provides the ecosystem for such agents to “talk” to each other.
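The division of labour described here can be pictured as an orchestrator routing each shot through a set of specialists. The sketch below is a conceptual illustration in plain Python, not the actual Agent Development Kit API; all class and task names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A specialist that handles one restoration task for a shot."""
    speciality: str

    def run(self, shot: str) -> str:
        # A real agent would call a model; here we just record the work done.
        return f"{shot}:{self.speciality}-done"

@dataclass
class Orchestrator:
    """Routes each shot through every specialist in turn."""
    agents: list = field(default_factory=list)

    def process(self, shot: str) -> list:
        return [agent.run(shot) for agent in self.agents]

# The four specialisations mentioned in the article, applied to one shot.
pipeline = Orchestrator([
    Agent("lighting"), Agent("facial-tracking"),
    Agent("costume-texture"), Agent("motion-continuity"),
])
results = pipeline.process("shot_042")
```

The point of the pattern is that each agent stays narrow and testable, while the orchestrator (and, in Google’s stack, a platform like Agentspace) handles how their outputs are sequenced and shared.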

That orchestration is built into Google’s larger enterprise strategy, said Kurian: “Many organisations spend huge amounts of time trying to find information across siloed systems. Agentspace gives them a unified interface, where AI can do the research, summarise results, and take action.”

The implications go well beyond cinema. With Google Cloud’s new region now live in Johannesburg, African developers and content creators can begin using the same tools that brought Oz back to life.

“We’re not only installing infrastructure,” Kurian said. “We’re building with local partners, using Google Distributed Cloud to complement our regions. That lets us bring AI capabilities to areas where we don’t yet have full-scale data centres.”

As Sphere Studios continues to work on new immersive experiences – including one set inside the human body – The Wizard of Oz remains the most ambitious demonstration to date of what AI can do with memory, imagination, light and data.

* Arthur Goldstuck is CEO of World Wide Worx and editor-in-chief of Gadget.co.za. Follow him on Bluesky at @art2gee.bsky.social.
