

This new AI animation tool is blowing people’s minds

New AI tools keep appearing that extend the capabilities of the popular generators already on the market, and the latest one is blowing people's minds.

AI research company Runway has recently introduced the second generation of its Motion Brush tool, which animates selected parts of AI-generated images, such as those created in Midjourney. Bringing an image to life with a simple brush stroke feels like magic, as is often the case when AI is done right. The video below, posted by AI enthusiast Rory Flynn, shows the new tool in action.


Full Video: https://t.co/iyJr8TEyyt

— Rory Flynn (@Ror_Fly) November 25, 2023

Many creators are already having fun with the Motion Brush tool, bringing still images to life: trucks driving down a dirt road, panning nature shots, people and animals in motion, leaves twisting in the wind, and drifting clouds. Runway also showcased examples of animating waterfalls, fish in a tank, fire, and the smoke from a burning cigarette.

To use Motion Brush, upload an image into the service by choosing Start with Image, then select the Motion Brush tool and draw a highlight over the area of the image you would like to animate. (You can also generate an image within Runway using a text prompt before applying Motion Brush.) Adjust the horizontal, vertical, and proximity controls at the bottom of the screen, and then press Save. Once saved, you can generate the video by selecting the Extend 4s button, and click Extend 4s again to expand the length of the video up to 16 seconds. Generated videos can be downloaded, shared, and used in other editors, among other options.

Motion Brush can also be combined with Camera Controls, letting you set parts of the image in motion while the camera pans or zooms at the same time. Other updated features include Gen-2 Style Presets, which let you apply a style to content without writing prompts, and Director Mode updates that allow camera moves to be adjusted down to fractions of a second.

Runway Motion Brush preview.
RunwayML

The interface resembles that of most image and video editors. The functions and limits you have access to depend on your price tier: Basic, Standard, Pro, Unlimited, or Enterprise. The Motion Brush tool is currently in beta and available to all Runway members.

In addition to the Motion Brush update, Runway recently introduced new Gen-2 Style Presets, and updated Camera Controls, among other features, the company said on its X (formerly Twitter) profile.

Signing up for Runway is free, and you can use a Google or Apple account to do so. Single Sign-On (SSO) is available only for the Enterprise tier.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends.