
Donald Glover is making a movie with Google’s new video AI

Donald Glover sitting in a cabin with a movie crew. Google

Google unveiled its “most capable” model for generative AI video at Google I/O 2024, and it went beyond a few cherry-picked samples. Google is working with Donald Glover, aka Childish Gambino, to create a short film that leverages Google’s new AI model, which it calls Veo.

Details about Veo are light at the moment, but Google says it can turn a text prompt into a video, similar to OpenAI’s Sora. Google is currently testing the model with select creators but says it plans a wider release soon. You can sign up for the waitlist at labs.google.com.

“Everyone’s gonna become a director, and everyone should be a director,” said Glover in a short video clip showcasing some behind-the-scenes moments from the unnamed project. The project is a collaboration between Glover and Google DeepMind, and the company says it’s “coming soon.” You can see the video clip, along with several examples of the generated video, below.

We put our cutting-edge video generation model Veo in the hands of filmmaker @DonaldGlover and his creative studio, Gilga.

Let’s take a look. ↓ #GoogleIO pic.twitter.com/oNLDq1YlHC

— Google DeepMind (@GoogleDeepMind) May 14, 2024

Google says these clips are the “raw, unedited output” of Veo, and all of the clips look very impressive. Rather than simply creating a video with your specified subjects, Google says Veo is able to capture nuances like camera techniques and visual effects that you may want in your shot. This, according to Google, allows you to quickly iterate on ideas to get closer to the final shot you want.
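To make that iteration loop concrete, here's a minimal sketch that assumes nothing about Google's actual interface: Google had not published a public Veo API at the time of writing, so the generate_video() function below is a hypothetical placeholder. The point is the workflow Google describes, starting with a subject and then layering in camera-technique and visual-effect language between drafts.

# Illustrative only: Google has not published a public Veo API,
# so generate_video() is a hypothetical stand-in for a text-to-video call.
def generate_video(prompt: str) -> str:
    """Pretend to render a clip; returns a fake clip ID for the prompt."""
    return f"clip::{hash(prompt) & 0xFFFF:04x}"

# The workflow Google describes: iterate on a single shot by adding
# camera-technique and visual-effect language with each draft.
drafts = [
    "A lone cabin in a snowy forest at dusk",
    "A lone cabin in a snowy forest at dusk, slow aerial dolly-in",
    "A lone cabin in a snowy forest at dusk, slow aerial dolly-in, "
    "shallow depth of field, warm window light, drifting snow",
]

for prompt in drafts:
    print(generate_video(prompt))  # review each draft, refine, regenerate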

It all sounds a little too good to be true, but generative AI for video has developed rapidly over the past few months. Just two months ago, OpenAI showed several videos generated with Sora that looked like they were captured with a camera. Reports suggest OpenAI is shopping Sora around Hollywood and working with studios to bring its generative AI tool to movies. Adobe has also previewed text-to-video features for Premiere Pro, bringing generative AI to one of the most widely used video-editing tools.

Although Veo is exciting, it’s important to keep expectations grounded. Companies like OpenAI have kept access to their generative video tools very limited, and Google looks to be taking a similar approach. There’s a good reason for that: photorealistic AI video in the wrong hands can cause a lot of problems. Hopefully, Google will work out the kinks in Veo before it’s released to the general public. Otherwise, we might have a repeat of Bing Chat’s insanity on our hands.

Jacob Roach
Lead Reporter, PC Hardware
Zoom debuts its new customizable AI Companion 2.0
Overhead shot of a person taking a Zoom meeting at their desk.

Zoom unveiled its AI Companion 2.0 during the company's Zoomtopia 2024 event on Wednesday. The AI assistant is incorporated throughout the Zoom Workplace app suite and, the company promises, will "deliver an AI-first work platform for human connection."

While Zoom got its start as a videoconferencing app, the company has expanded its product ecosystem to become an "open collaboration platform" that includes a variety of communication, productivity, and business services, both online and in physical office spaces. The company's AI Companion, which debuted last September, is incorporated deeply throughout Zoom Workplace and, like Google's Gemini or Microsoft's Copilot, is designed to automate repetitive tasks, like transcribing notes and summarizing reports, that can take up as much as 62% of a person's workday.

Read more
Google expands its AI search function, incorporates ads into Overviews on mobile
A woman paints while talking on her Google Pixel 7 Pro.

Google announced on Thursday that it is "taking another big leap forward" with an expansive round of AI-empowered updates for Google Search and AI Overviews.

Earlier this year, Google incorporated generative AI into its existing Lens app, which lets users identify objects within a photograph and search the web for more information about them. Rather than returning a list of potentially relevant websites, the app now returns an AI Overview based on what it sees. At the I/O conference in May, Google promised to expand that capability to video clips.

With Thursday's update, "you can use Lens to search by taking a video, and asking questions about the moving objects that you see," Google's announcement reads. The company suggests that the app could, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.

Whether this works on more complex subjects, like analyzing your favorite NFL team's previous play, or on fast-moving objects, like identifying the makes and models of cars in traffic, remains to be seen. If you want to try the feature for yourself, it's available globally (though only in English) through the Google app on iOS and Android. Navigate to Search Labs and enroll in the "AI Overviews and more" experiment to get access.

You won't necessarily have to type out your question either. Lens now supports voice questions, which allows you to simply speak your query as you take a picture (or capture a video clip) rather than fumbling across your touchscreen in a dimly lit room. 
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying "dramatically more helpful results," per the announcement. Those include reviews of the specific product you're looking at, price comparisons from across the web, and information on where to buy the item. 

Read more
Meta and Google made AI news this week. Here were the biggest announcements
Ray-Ban Meta Smart Glasses will be available in clear frames.

From Meta's AI-empowered AR glasses to its new Natural Voice Interactions feature to Google's AlphaChip breakthrough and ChromaLock's chatbot-on-a-graphing-calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today's leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it was, at least, before Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours, rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.
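For a feel of how reinforcement learning maps onto placement, here's a deliberately tiny sketch, a toy stand-in rather than AlphaChip's actual method (which uses deep RL over a graph representation of the chip): an agent places four blocks on a 4x4 grid one at a time, receives the negative total Manhattan wirelength of a toy netlist as its reward, and improves with a tabular Monte Carlo update. The grid size, netlist, and hyperparameters are all invented for illustration.

import random

GRID = 4                                 # 4x4 placement grid
MACROS = 4                               # blocks placed in a fixed order
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # toy netlist: connected block pairs

def wirelength(placement):
    """Total Manhattan wirelength over all nets; lower is better."""
    total = 0
    for a, b in NETS:
        (r1, c1), (r2, c2) = placement[a], placement[b]
        total += abs(r1 - r2) + abs(c1 - c2)
    return total

Q = {}  # (state, action) -> running-average episode return
N = {}  # visit counts for the running average

def choose(placement, free, eps):
    """Epsilon-greedy action: pick a grid cell for the next block."""
    if random.random() < eps:
        return random.choice(free)
    s = tuple(placement)
    return max(free, key=lambda a: Q.get((s, a), 0.0))

def episode(eps):
    """Place all blocks, then update Q from the terminal reward."""
    placement, trajectory = [], []
    free = [(r, c) for r in range(GRID) for c in range(GRID)]
    for _ in range(MACROS):
        action = choose(placement, free, eps)
        trajectory.append((tuple(placement), action))
        placement.append(action)
        free.remove(action)
    reward = -wirelength(placement)          # single terminal reward
    for s, a in trajectory:                  # Monte Carlo value update
        N[(s, a)] = N.get((s, a), 0) + 1
        Q[(s, a)] = Q.get((s, a), 0.0) + (reward - Q.get((s, a), 0.0)) / N[(s, a)]
    return reward

random.seed(0)
for i in range(5000):
    episode(eps=max(0.05, 1.0 - i / 4000))   # decay exploration over time
print("greedy wirelength after training:", -episode(eps=0.0))

The real system tackles thousands of components, richer objectives (congestion, density, power), and a learned policy network, but the loop is the same shape: propose a placement, score it, and let the score push the policy toward better layouts.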

Read more