Firefly can now generate videos from image and text prompts, as well as extend existing clips, Adobe announced on Monday. The new features are rolling out now, starting with Premiere Pro subscribers.
The video generation feature makes its debut in a number of new tools for Premiere Pro and the Firefly web app. Premiere Pro's Generative Extend, for example, can add up to two seconds of AI-generated footage to the beginning or end of a clip, as well as make mid-shot adjustments to the camera position, tracking, and even the shot subjects themselves.
The generated video is available in either 720p or 1080p resolution at 24 frames per second (fps). The tool can also extend the clip’s sound effects and ambient noise by up to 10 seconds, though it cannot do the same with spoken dialog or musical scores.
The Firefly web app is receiving two new AI tools of its own: Text-to-Video and Image-to-Video, both rolling out in a limited public beta (you can apply for the waitlist here). They do what their names suggest. Text-to-Video generates short clips in a variety of artistic styles and lets creators iteratively fine-tune the output using the web app's camera controls.
Image-to-Video, similarly, uses both a text prompt and reference images to get the model closer to what the creator has in mind, in fewer iterations. Both web features take around a minute and a half to generate videos up to five seconds long at 720p resolution and 24 fps.
While none of these new video generation features are particularly groundbreaking (Runway's Gen-3, Meta's Movie Gen, and OpenAI's upcoming Sora all boast nearly identical capabilities), Firefly does offer its users one advantage over rival models: its outputs are "commercially safe."
Adobe trained its Firefly model on Adobe Stock images, openly licensed material, and public-domain content, meaning its outputs are unlikely to trigger copyright infringement claims. If only the same could be said for rivals like Runway, Meta, and Nvidia.