Let’s start with a rundown of new tools and features in this post-Adobe MAX update! As of late October 2025, Adobe released new AI-powered features across Creative Cloud, including Generative Fill and Generative Upscale in Photoshop, AI Object Mask and Fast Vector Mask in Premiere Pro (beta), and AI Assisted Culling in Lightroom (beta). New beta apps and features include Premiere on iPhone, Adobe Express with a new AI Assistant and Prompt, and Firefly’s Image Model 5, Generate Soundtrack, and Generate Speech (public beta).
Released features
- Photoshop: Generative composition with Harmonize, next-level Generative Fill, and AI-powered object selection.
- Premiere Pro: Fast-track masking and an AI Object Mask for video editing.
- Adobe Express: Includes a new AI Assistant, Prompt, and Edit features.
- Lightroom: New AI Assisted Culling to help select the best photos faster.
- Firefly: Firefly Boards for moodboarding, and integration with partner models from Google, OpenAI, and Luma AI.
Beta features and apps
- Premiere Pro: AI Object Mask, Rectangle, Ellipse, and Pen Masking, and a redesigned Fast Vector Mask.
- Adobe Express: A new AI Assistant.
- After Effects: Enhanced 3D and vector workflows, plus new audio effects like Gate, Compressor, and Distortion.
- Lightroom: AI Assisted Culling feature.
- Firefly: Image Model 5, Generate Soundtrack, and Generate Speech are in public beta.
- Illustrator on the web (Beta): A web-based version of Illustrator.
Other updates
- Adobe’s new Creative Cloud is its fastest yet, with phone-to-desktop editing flows and an enhanced Media Intelligence search function.
- New subscription tiers are available, with unlimited image generations for subscribers.
Adobe Firefly Tools online
Adobe has brought together all of its AI and Express tools into an easy-to-navigate online hub at https://firefly.adobe.com/, where you can find the tools you need.
Here’s a good overview of where the new tools are for Adobe Firefly online and how to access them, from @jasongandy on YouTube.
I’ll be digging into more of these tools in coming articles, but I touched on a few of them here, playing with some projects to see how they respond. As I go along, I’ll tell you whether each one is useful and helpful or just not ready for prime time. As you know, there are other AI tools out there, but none of them do EVERYTHING – or even the same things well every time.
A big thing to consider is cost. Almost all AI tools limit how many renders you get with your subscription and offer to sell you more credits, but beware the subscription trap. It ALL adds up quickly, and as you might guess, Adobe credits aren’t cheap!
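To give you a feel for how fast credits burn, here’s some back-of-the-napkin math in a quick TypeScript sketch. Every number in it is a placeholder, not Adobe’s actual pricing, so plug in your own plan’s allowance and per-feature costs.

```typescript
// Back-of-the-napkin credit math. Every number here is a PLACEHOLDER, not
// Adobe's actual pricing -- check your own plan's allowance and costs.
const monthlyCredits = 2000;   // hypothetical monthly plan allowance
const creditsPerVideo = 100;   // hypothetical cost per video generation
const creditsPerImage = 5;     // hypothetical cost per image generation

// A "simple" 30-second spot: six 5-second clips at ~3 tries each,
// plus a dozen image generations for styling and backgrounds.
const videoRenders = 6 * 3;
const imageRenders = 12;

const spent = videoRenders * creditsPerVideo + imageRenders * creditsPerImage;
console.log(`Credits spent: ${spent} of ${monthlyCredits}`); // 1860 of 2000
```

Point being: a few retries per shot and you’ve burned nearly a month’s allowance on one short deliverable.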
Unfortunately, Adobe’s offer for free render credits will expire on December 1st, 2025 – likely right around the time this article is out.
Obviously, I can’t cover everything that just dropped this week, but I’ll touch on a few things I’ve played with and share the good and bad as usual. This is going to take a while to unfold, so I’ll be testing various tools and features both in the Firefly tools online (web/phone) and in updates to the Adobe desktop apps, and looking at how they can help video producers as well as content creators.
Hands-on Experiences
As I’ve done in all my articles, I’ll break down my workflow, which typically involves several different tools to get the job done.
So to start things off, we’ll begin with an AI-generated host avatar that I originally created in Midjourney, then did a lot of clothing and styling work on in Photoshop 2025 (beta). Starting with this base image, which has dramatic studio lighting, I needed to even out the lighting on the model without changing anything else about her look.
This is where the new Harmonize feature has truly sped up my AI production workflow and produced much higher quality results! I extract the model first from the original background using Select Subject, then Select & Mask to refine the hair and edges. Having the model on her own layer provides a lot of flexibility for making changes and reproducing this effect with any kind of background I choose. In this case, I picked a well-lit home interior (also AI-generated with Adobe Firefly) to match the lighting and brighten her up.
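If you do this extraction step over and over like I do, it can be scripted. Here’s a minimal sketch using Photoshop’s UXP scripting API and batchPlay; the "autoCutout" event is what’s behind Select > Subject. The Select & Mask refinement and Harmonize itself I still do by hand, and the descriptor details are assumptions worth verifying against your Photoshop build.

```typescript
// Minimal UXP sketch: run Select Subject, then float the result onto its
// own layer -- the prep step before refining edges and running Harmonize.
// Assumes this runs inside a Photoshop UXP script/plugin context.
declare const require: (id: string) => any;

const photoshop = require("photoshop");
const { app, core } = photoshop;
const { batchPlay } = photoshop.action;

async function isolateSubject(): Promise<void> {
  // Document changes must happen inside a modal scope in UXP.
  await core.executeAsModal(
    async () => {
      // "autoCutout" is the event behind Select > Subject.
      await batchPlay([{ _obj: "autoCutout", sampleAllLayers: false }], {});
      // "copyToLayer" is Layer Via Copy: floats the selection onto its own layer.
      await batchPlay([{ _obj: "copyToLayer" }], {});
      app.activeDocument.activeLayers[0].name = "Model (isolated)";
    },
    { commandName: "Isolate subject to layer" }
  );
}
```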
You can see in the first composite image that she’s not yet integrated into the background scene: she still has her original lighting and color, isolated to a new layer and floating above the green screen background.
Next, I hide the green screen layer to expose the interior shot in the background, then select the Harmonize button.
As Adobe Firefly typically does in Photoshop, it provides three results to choose from.
The finished result is really clean now and ready to animate.
Using the original isolated layer, I select the layer’s transparency to clean up the mask edges of the new Harmonized layer, then float it over the green screen layer so the model can be animated or redressed in other clothing for various looks.
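That "select the layer’s transparency" step is also scriptable. Here’s a sketch of the same move via batchPlay, with a slight contract and feather to tighten the matte edge; the descriptor shapes below are my assumptions from typical batchPlay logging, so verify them in your build.

```typescript
// UXP sketch: load the active layer's transparency as a selection, then
// contract and feather slightly to tighten the matte edge.
declare const require: (id: string) => any;
const ps = require("photoshop");

async function loadTransparencyAndTighten(): Promise<void> {
  await ps.core.executeAsModal(
    async () => {
      await ps.action.batchPlay(
        [
          // Set the current selection to the layer's transparency channel.
          {
            _obj: "set",
            _target: [{ _ref: "channel", _property: "selection" }],
            to: { _ref: "channel", _enum: "channel", _value: "transparencyEnum" },
          },
          // Choke the matte in by a pixel, then soften the edge.
          { _obj: "contract", by: { _unit: "pixelsUnit", _value: 1 } },
          { _obj: "feather", radius: { _unit: "pixelsUnit", _value: 0.5 } },
        ],
        {}
      );
    },
    { commandName: "Load transparency as selection" }
  );
}
```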
While this is a great workflow for utilizing the same characters in various scenarios and scenes, we can still have some fun in Adobe Firefly giving her a completely new wardrobe. I start by submitting the image of the model on the green screen background and simply prompt what I want her to be wearing. Using Gemini 2.5 Flash Image (Nano Banana), the results are the best I’ve seen to date in any app.
It was easy to prompt and generate several new looks, from business attire to full character!
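If you’d rather drive this kind of prompt-to-image step programmatically than through the web app, Adobe’s Firefly Services exposes a REST text-to-image API. Here’s a sketch from memory of the v3-era endpoint; the path, auth headers, body fields, and response shape are all assumptions to verify against the current docs, and note that partner models like Nano Banana are (as far as I know) a web-app feature, so this calls Firefly’s own image model.

```typescript
// Hedged sketch of a Firefly Services text-to-image call. Endpoint, body
// fields, and response shape are from memory of the v3-era API -- verify
// against Adobe's current docs. Credentials come from your own Adobe
// developer project (FIREFLY_CLIENT_ID / FIREFLY_TOKEN are placeholders).
async function generateImage(prompt: string): Promise<string> {
  const res = await fetch("https://firefly-api.adobe.io/v3/images/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.FIREFLY_CLIENT_ID ?? "",
      Authorization: `Bearer ${process.env.FIREFLY_TOKEN ?? ""}`,
    },
    body: JSON.stringify({ prompt, numVariations: 1 }),
  });
  if (!res.ok) throw new Error(`Firefly API error: ${res.status}`);
  const data = await res.json();
  // Assumed response shape: { outputs: [{ image: { url } }] }
  return data.outputs[0].image.url;
}
```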
Check out my previous article AI Tools: HeyGen Avatar IV Gets Real to see how these various models can be animated and then composited easily in After Effects or Premiere. I’ll be doing a detailed write-up on this process soon, as well as a free workshop I’ll be hosting after the New Year.
Character Animation & Sound
I needed a quick animation for a client website, based on a character I made up in Midjourney on my iPhone during a meeting, and I decided to animate it for an introduction and several marketing promotions.
Starting with the selected character pose from Midjourney, I uploaded it to Adobe Firefly’s Generate Video tool, along with a blank background I extracted in Photoshop using the Remove tool. That gave me a start frame (the blank background) and a last frame with the character posed in front of the camera.
I then wrote in my prompt for the action I wanted, and it worked almost immediately, within the first couple of tries. This was after an hour of failures in Midjourney and other AI image-to-video tools.
THE GOOD: The details maintained in this animation, generated with the VEO 3.1 model, were superior to any other tool I’ve used to date. It faithfully maintained the character’s face, eyes, and personality, and more impressively, the lettering on the nametag.
THE BAD: Currently, Adobe Firefly will only produce renders up to 5 seconds and only at 1080p. It won’t extend your videos either, so you have to grab a final frame, re-run the generation, and match-edit the segments, or extend the clip in Premiere Pro with the AI-powered Generative Extend tool, as I did a couple of times.
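The grab-a-final-frame workaround is easy to script outside of Premiere, too. Here’s a small Node/TypeScript sketch that shells out to ffmpeg (assuming ffmpeg is installed and on your PATH): pull the last frame of one render to feed Firefly as the next clip’s reference frame, then stitch the finished 5-second segments together losslessly. The filenames are placeholders.

```typescript
// Node sketch of the 5-second-limit workaround: extract the last frame of a
// render (to seed the next generation), then stitch the finished segments.
// Assumes ffmpeg is installed and on PATH; filenames are placeholders.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

// 1) Grab a frame from roughly the last tenth of a second of clip01.mp4.
execFileSync("ffmpeg", [
  "-sseof", "-0.1",    // seek relative to the end of the file
  "-i", "clip01.mp4",
  "-frames:v", "1",    // output a single frame
  "-update", "1",      // write a single image file
  "last_frame.png",
]);

// 2) Concat the generated segments losslessly with the concat demuxer.
//    "-c copy" avoids re-encoding, but the clips must share codec settings.
writeFileSync(
  "segments.txt",
  ["clip01.mp4", "clip02.mp4"].map((f) => `file '${f}'`).join("\n")
);
execFileSync("ffmpeg", [
  "-f", "concat", "-safe", "0",
  "-i", "segments.txt",
  "-c", "copy",
  "combined.mp4",
]);
```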
Adding AI Sound
The next step was to try out the new Adobe Firefly Generate Sound Effects tool. The cool thing about it is the ability to use a microphone to record the timing and impact of sound effects: you can record while playing back your animation and use your voice to trigger the effects. It’s not quite as intuitive as you’d think, and I had to redo it a bunch of times. I actually think it may have been easier to just record and modify mouth sounds directly in Audition, but it was fun to test the tool.
Finally, I had all of my components generated and brought everything together in Premiere to mix it down. I seriously could have done this project in about an hour if my workflow had been streamlined enough that the steps were repeatable. But since the extensions added by Premiere didn’t utilize VEO 3.1, the nametag was botched in those renders, so I had to bounce back and forth between After Effects and Photoshop to motion track a clean nametag over the rendered one so it stayed legible.
Here’s the end result of this process:
So there is still a long way to go with AI-generated content, but seeing all this integration and the much-improved results does give one hope for better tools and content generation that edits cleanly.
AI Masking in Premiere Pro
My friends at Adobe know that I’ve been complaining about the Roto Brush tool in After Effects for over a decade. It’s OK for quick-and-dirty high-contrast selections, but it has been really unreliable in actual production, requiring a lot of futzing with settings, re-selections, and so on.
So I did a super quick test with a clip provided by Adobe, creating a simple mask layer over text using the new AI Object Mask tool in Premiere. This seriously took me like 10 minutes to complete from start to finish.
I imported my test footage, duplicated the layer, and sandwiched a simple text layer between the two copies. Using the new Object Mask tool, you draw around the area you want to mask and track.
Once the object is selected, you track the scene so the tool can find the object’s edges throughout the shot.
In this clip, a tree branch enters the frame about halfway through, so I added a second mask on the same top layer to track it as well.
While I’m impressed with the speed and overall accuracy of the quick masking this tool provides, it still lacks refinements beyond choking and feathering the matte. In a real-world scenario, I’d still have to bounce between masks and paint in areas that didn’t hold to get a clean composite, but for quick-and-dirty marketing videos or social titles, this will probably cover most people’s needs. It’s just not ready for broadcast or streaming feature quality yet.
Here’s the resulting video test – both at the original clip’s slo-mo speed and sped up to normal speed:
Stay tuned for more deep dives shortly!