
OpenAI’s new AI-made videos are blowing people’s minds

[Image: An AI-generated still of two woolly mammoths walking through snow, with mountains and a forest in the background. Credit: OpenAI]

OpenAI’s latest venture into AI might be its most impressive one to date. Dubbed “Sora,” the new text-to-video AI model has just been opened up to a limited number of testers. The company launched it by sharing several videos made entirely by AI, and the results are shockingly realistic.

OpenAI introduces Sora by saying that it can create realistic scenes based on text prompts, and the videos shared on its website prove it. The prompts are descriptive, but short; I’ve personally used longer prompts just chatting with ChatGPT. For instance, generating the video of woolly mammoths pictured above took Sora a 67-word prompt describing the animals, the surroundings, and the camera placement.


Introducing Sora, our text-to-video model.

Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/7j2JN27M3W

Prompt: “Beautiful, snowy… pic.twitter.com/ruTEWn87vf

— OpenAI (@OpenAI) February 15, 2024

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” OpenAI said in its announcement. The AI can generate complex scenes with multiple characters, detailed scenery, and accurate motion. To pull that off, OpenAI says, Sora infers details and reads between the lines of a prompt as needed.

“The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world,” OpenAI said. The model doesn’t just tackle characters, clothing, or backgrounds, but also creates “compelling characters that express vibrant emotions.”

Sora can also fill in the gaps in an existing video or make it longer, as well as generate a video based on an image, so it’s not all just text prompts.

While the videos look good as screenshotted stills, they’re borderline mind-blowing in motion. OpenAI served up a wide range of videos to show off the new tech, including cyberpunk-esque Tokyo streets and “historical footage” of California during the Gold Rush. There’s more, too, including an extreme close-up of a human eye. The prompts cover everything from cartoons to wildlife photography.

Sora still makes some mistakes, though. Look closer and you’ll see that, for instance, some figures in the crowd don’t have heads or move strangely. In a few samples, the awkward motion stood out at first glance, but most of the weirdness took multiple viewings to spot.

It might be a while before OpenAI opens Sora to the general public. Right now, the model is being tested by red teamers who will assess potential risks, and a select group of creators is also getting early access while Sora is still in the early stages of development.

AI is still imperfect, so I went in expecting something quite messy. Whether it’s the low expectations or Sora’s capabilities, I’m walking away impressed, but also mildly worried. We’re already living in a world where it’s hard to tell a fake from the real thing, and now it’s not just images that are in jeopardy; videos are, too. That said, Sora is hardly the first text-to-video model we’ve seen; tools like Pika came before it.

Others are raising red flags as well, including popular tech YouTuber Marques Brownlee, who tweeted that “if this doesn’t concern you at least a little bit, nothing will” in response to the Sora videos.

Every single one of these videos is AI-generated, and if this doesn't concern you at least a little bit, nothing will

The newest model: https://t.co/zkDWU8Be9S

(Remember Will Smith eating spaghetti? I have so many questions) pic.twitter.com/TQ44wvNlQw

— Marques Brownlee (@MKBHD) February 15, 2024

If OpenAI’s Sora is this good now, it’s hard to imagine what it’ll be capable of after a few years of further development and testing. This is the kind of tech that has the potential to displace many jobs — but, hopefully, like ChatGPT, it will instead coexist alongside human professionals.
