ProVideo Coalition

Adobe MAX 2023 sneak peeks

Generative AI is expanding access to powerful new workflows and unleashing imaginative ideas, albeit on its own terms (there’s no such thing as a free lunch). Adobe’s Sneaks offer a “sneak peek” into the future. At this year’s Adobe MAX 2023 (roundup), Adobe researchers demonstrated several cutting-edge technologies that might someday become features in Adobe products.

So, what did the Adobe MAX 2023 Sneaks session show? How about: generative fill for video, scene and camera change integration, animatable surfaces, video upscaling, auto-dubbing, next-gen generative fill, reflection removal, customizable vector lettering, Illustrator 3D modeling, AI vector images from doodles, and AI 3D for cartoons and storyboards.

Some of this tech is already available elsewhere in different forms in the ever-shifting landscape of AI applications, as seen at Futuretools.io or right here in roundups at PVC, for example in Jeff Foster’s series. The pace of development is amazing, but it’s easier to adopt new approaches when they’re integrated into established toolsets.

You can watch the full Sneaks session (about an hour and a half), or browse the individual sessions below.

If you’re looking for a succinct roundup, AI news hounds Matt Wolfe and Theoretically Media summarized AI news at MAX 2023 in Adobe Goes ALL IN with AI: Huge Updates, Adobe Firefly 2: A No Hype Look, and Adobe Just Stunned The AI Creative World. Theoretically Media didn’t miss that the new Illustrator 3D objects can be used in After Effects.

Also, Matt “WhoisMatt” Johnson (who noisily switched to Resolve this year) had some comments for video people in Adobe Firefly Video AI Is Going To Change Premiere Pro Forever.

 

Project Fast Fill

Project Fast Fill harnesses Generative Fill, powered by Adobe Firefly, to bring generative AI technology into video editing applications. It lets users perform texture replacement in videos with simple text prompts, even for complex surfaces and varying light conditions. Users can edit an object on a single frame, and that edit will automatically propagate to the rest of the video’s frames, saving video editors a significant amount of texture editing time.

 

Project Scene Change

Project Scene Change makes it easy to composite a subject and a scene from two separate videos — captured with different camera trajectories — into one scene with synchronized camera motion.

Artificial intelligence renders a 3D representation of the background scene from a prerecorded image as if it were captured by a free-moving camera, then composites the separately filmed subject, with proper shadows, into a new scene with compatible motion. This removes limitations imposed by the camera motion of existing video assets, and allows video editors to place a subject into a new environment with realistic camera motion.

 

Project Primrose

Project Primrose makes it possible to bring designs to life in real objects. It was displayed at MAX as an interactive dress made with wearable textiles that allow an entire surface to display content. Designers can layer this technology into clothing, furniture and other surfaces to unlock infinite style possibilities — such as the ability to download and wear the latest design from a favorite designer.

 

Project Res Up

Project Res Up is a video upscaling tool that uses diffusion-based artificial intelligence to convert low-resolution videos to high resolution. Users can also zoom in and crop videos, then upscale them to full resolution with high-fidelity visual details and temporal consistency. This is great for bringing new life to older videos, or for preventing blurry results when playing scaled-up versions on HD screens.
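For context on what a diffusion-based upscaler improves on: simple interpolation just stretches the pixels that are already there, which is exactly what produces blocky or blurry playback on HD screens. This toy sketch (pure Python, no libraries; a “frame” is just a 2D grid of pixel values, and the function name is our own, not Adobe’s) shows that naive baseline:

```python
# Toy nearest-neighbor upscaling: each source pixel simply becomes a
# factor x factor block in the output. No new detail is invented, which
# is why naive scaling looks blocky or blurry compared to a generative
# upscaler like Project Res Up.

def upscale_nearest(frame, factor):
    """Upscale a 2D grid of pixel values by an integer factor."""
    out = []
    for row in frame:
        # Repeat each pixel horizontally...
        stretched = [px for px in row for _ in range(factor)]
        # ...then repeat the stretched row vertically.
        out.extend([stretched[:] for _ in range(factor)])
    return out

frame = [[0, 255],
         [255, 0]]  # a tiny 2x2 checkerboard "frame"
up = upscale_nearest(frame, 2)
# Each source pixel is now a 2x2 block: a 4x4 checkerboard of blocks.
```

A diffusion model instead hallucinates plausible high-frequency detail for each block, constrained across frames for temporal consistency.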

 

Project Dub Dub Dub

Project Dub Dub Dub uses generative AI to auto-dub videos or audio clips in more than 70 languages and over 140 dialects. It uses speech-to-speech translation to automatically translate and match the speakers’ voice, tone, cadence and the acoustics of the original video, whether the video or audio clip is brand new or one from a user’s video archives. All users have to do is press a button to auto-dub content, transforming this historically labor- and cost-intensive process into one that can be completed in minutes.

 

Project Stardust

Have you ever taken a photo or created content with Adobe Firefly and wanted to quickly modify specific objects in the image?

Project Stardust is an object-aware editing engine that uses artificial intelligence and generative AI to revolutionize image editing. This technology automates sometimes time-consuming parts of the image editing process — filling in backgrounds, cutting out outlines for selection, blending lighting and color, and more. In addition, the generative AI features let you add objects and make creative transformations. Stardust makes image editing more intuitive, accessible and time efficient for any user, regardless of skill level.

 

Project See Through

It’s difficult and often impossible to manually remove reflections. Project See Through simplifies the process of cleaning up reflections by using artificial intelligence. Reflections are automatically removed, and optionally saved as separate images for editing purposes. This gives users more control over when and how reflections appear in their photos.

 

Project Draw & Delight

With Project Draw & Delight, creators can use generative AI to guide them, helping transform initial ideas (rough doodles or scribbles) into polished and refined sketches.

This technology goes beyond text-to-image by providing users with the ability to augment text-based instructions with visual hints, such as rough sketches and paint strokes. Draw & Delight then uses the power of Adobe Firefly to generate high-quality vectors of illustrations or animations in various color palettes, style variations, poses and backgrounds.

 

Project Neo

Incorporating 3D elements into 2D designs (infographics, posters, logos or even websites) can be difficult to master, and often requires designers to learn new workflows or technical skills.

Project Neo enables designers to create 2D content using 3D shapes, without having to learn traditional 3D creation tools and methods. This technology leverages the best of 3D principles so designers can quickly and easily create 2D shapes with one-, two- or three-point perspective. Designers using this technology can also collaborate with their stakeholders and make edits to mockups at the vector level, so changes to projects are quick.
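The “3D principles” behind drawing in perspective boil down to one piece of math: points farther from the camera project closer to the vanishing point. This toy one-point projection (our own illustrative sketch, not Adobe’s implementation; the function name and focal length are assumptions) shows the idea:

```python
# Toy one-point perspective projection onto an image plane at z = focal.
# A shape twice as far from the camera projects at half the size, which
# is the foreshortening effect Project Neo handles for the designer.

def project(point, focal=1.0):
    """Project a 3D point (x, y, z) onto the z = focal image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    scale = focal / z
    return (x * scale, y * scale)

near = project((1.0, 1.0, 1.0))  # corner of a near square face
far = project((1.0, 1.0, 2.0))   # same corner, twice as far away
# The far corner lands at half the distance from the image center.
```

Two- and three-point perspective add the same scaling along additional axes; tools like Neo let designers get this geometry right without doing any of the math.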

 

Project Glyph Ease

When creating flyers or posters, designers often need to manually create each individual letter to maintain a consistent style. Project Glyph Ease uses generative AI to create stylized, customized letters in vector format, which can later be used and edited. All a designer needs to do is create three reference letters in a chosen style — from existing vector shapes or ones they hand-draw on paper — and this technology automatically creates the remaining letters in a consistent style. Once created, designers have the flexibility to edit the new font, since the letters appear as live text that can be scaled, rotated or moved in the project.

 

Project Poseable

Project Poseable makes it easy for anyone to quickly design 3D prototypes, comics, and storyboards with generative AI.

Instead of having to spend time editing the details of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes.

 
