After launching a redesigned Firefly app in April, Adobe has been releasing major updates to its generative AI hub at a near-monthly clip. Today, the company is introducing a handful of new features, mostly aimed at helping users get more out of Firefly's video capabilities.
To start, Adobe is making it easy to add sound effects to AI-generated clips. Right now, the majority of video models produce footage without any audio. Adobe is addressing this with a clever new feature that lets users first describe the sound effect they want to generate and then record themselves attempting to make it. The second part isn't so that Adobe's model can mimic the sound directly. Rather, it's so the system can get a better sense of the intensity and timing the user has in mind.
In a demo I was shown, an Adobe employee used the feature to add the sound of a zipper being unzipped. He made a "zzzttttt" sound, which Adobe's model faithfully translated into the effect at the appropriate volume. The translation was less convincing when the employee used the tool to add the sound of footsteps on concrete, though that may not matter much if you're using the feature as Adobe intends. To make it easier to time the audio properly, there's a timeline editor along the bottom of the interface.
The other new features Adobe is adding today are Composition Reference, Keyframe Cropping and Video Presets. The first of these lets you upload a video or photo you've captured to guide the generation process. With Video Presets, you can define the style of the final output. Some of the options Adobe is offering at launch allow you to create clips with anime, black-and-white or vector art styles. Finally, with Keyframe Cropping you can upload the first and last frames of a video and select an aspect ratio; Firefly will then generate a video that stays within your desired format.
In June, Adobe added support for additional third-party models, and it's doing the same this month. Most notable is the inclusion of Veo 3, which Google premiered at its I/O 2025 conference in May. As of the time of writing, Veo 3 is one of the only AI models that can generate video with sound. Like the providers of all the other partner models Adobe has brought to Firefly, Google has agreed not to use data from Adobe users to train future models. Additionally, every image and video generated through Firefly is digitally signed to indicate the model that was used to create it. This is one of the safeguards Adobe has put in place so that Firefly users don't mistakenly ship an asset that infringes on copyrighted content.
According to Zeke Koch, Adobe's vice president of product management for Firefly, users can expect the rapid pace of updates to continue. "We're shipping stuff as fast as we can," he said. Koch added that Adobe will continue to bring on additional third-party models, as long as their providers agree to the company's data privacy terms.


