A month ago, Adobe announced Firefly, its entry into the generative AI game. Initially, Firefly’s focus was on generating commercially safe images, but the company is now pushing its technology beyond still images. As the company announced today, it will soon bring Firefly to its Creative Cloud video and audio applications. To be clear, you won’t (yet) be able to use Firefly to create custom videos. Instead, the emphasis here is on making it easier for anyone to edit videos, color grade using just a few words, add music and sound effects, and create title cards with animated fonts, graphics and logos. However, Firefly also promises to automatically turn scripts into storyboards and pre-visualizations — and it will recommend b-roll to liven up videos.
Maybe the highlight of these promised new features is the ability to color grade a video simply by describing, in a few words, what it should look like (think “golden hour” or “brighten face”).
It’s no secret that color grading is an art — and not one that comes easily to most people. Now, anyone will be able to describe the desired mood and tone of a scene and Adobe’s video tools will follow suit. In many ways, it’s this democratization of skill that’s at the heart of what Adobe is doing with Firefly in its creative tools.
Other new AI-based features include the ability to generate custom sounds and music. Firefly will also help editors create subtitles, logos and title cards by letting them describe what these should look like. Those, too, are somewhat specialized skills that take some familiarity with the likes of After Effects and Premiere.
The real game changer, though, is that Adobe also plans to use Firefly to read scripts and automatically generate storyboards and pre-visualizations. That could be a massive time saver — and I wouldn’t be surprised if you saw those videos pop up on TikTok.
It’s worth noting that for now, we’ve only seen Adobe’s own demos of these features. It remains to be seen how well they will work in practice.
Adobe’s aim is to ensure that all of its generative AI tools are safe to use in a commercial environment. With its generative image creator, this meant that it could only train the model on a limited set of images that were either in the public domain or part of its Adobe Stock service. However, this also means it’s quite a bit more limited when compared to the likes of Midjourney or Stable Diffusion.