
Making audio more accessible to editors

It’s always fun to talk about how new tools and innovations can create efficiencies, but many sound and picture editors don’t take advantage of the capabilities built into the solutions they use on a daily basis. A prime example is the audio toolset built into Premiere Pro, which editors either don’t use or, in some cases, don’t realize is right there. What does it mean for an editor to take full advantage of the tools right in front of them?

To explore what these capabilities look like and how they should evolve and be leveraged, PVC experts Scott Simmons and Woody Woodhall connected with Durin Gleaves from Adobe. Scott is an editor in Nashville, Tennessee, and often ends up doing all the audio mixing for a given project himself, which isn’t ideal but is the reality of lower production budgets. Woody is the owner of Allied Post Audio in Santa Monica, California, where he serves as the main rerecording mixer, supervising sound editor, and sound designer. He posts feature films, feature documentaries, and a lot of television programming, and he also runs the Los Angeles Post Production Group (LAPPG), a thriving networking community for post-production. Durin Gleaves is the Product Manager of Audio at Adobe. He arrived at the company shortly after Adobe acquired the application that became Audition, which became his main focus, and over the last few years he has helped define audio workflows across multiple applications in Adobe’s digital video and audio segment.

Their conversation was so wide-ranging that we broke it up into three parts. They touch on everything from the logistics of working with AAF and OMF files, to how Audition and Premiere work together, to the potential of automated audio extraction, to defining what it truly means for editors to “edit your way.”

 


 

Scott Simmons: Woody, you’ve built a career in audio in a way that editors like myself have not, because we treat audio as one of the many other things we have to do. So I’m wondering, as someone who works almost exclusively in audio, how do you use the sound you’re given from an editor like myself to help build a more powerful story? Are you relying on the editor to give you everything you need, or are you able to add more with your expertise to help make that story better?

Woody Woodhall: What I’ve found is that we need to add quite a bit of additional sound, both to fill out the program and to help the filmmaker tell the story. Audio is a very subliminal part of the film experience for audiences. It helps establish pacing, the time of day and location, and the mood of a scene. Sound designers understand the impact that a properly placed door slam or shuffled footstep can add to a scene. Audiences just take for granted that it was all there, recorded with the dialog at the time.

The Quality Control process for a lot of programming has really changed too, and deliverables now require filled-out music and effects (M&E) tracks. For anyone who doesn’t know what an M&E is: if you have someone speaking in a kitchen while they’re rummaging around and opening the refrigerator, and you then mute their track, all of the activity they’re doing is muted along with the dialog. In that case, we have to go ahead and fill in all of that sound so that when the music-and-effects-only track is delivered, it passes quality control (QC). These M&E tracks are used for all sorts of additional purposes, including foreign language dubbing. All of which is to say we typically do a whole lot of sound design and editing in addition to what the editor provides.
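To make the QC idea concrete, an M&E check can be thought of as a coverage problem: every moment where muting the dialog track would otherwise leave a hole has to be filled with effects, Foley, or backgrounds. Below is a minimal sketch of that idea in Python, with invented time ranges; real M&E QC is done by listening, not just interval math.

```python
# Minimal sketch: find moments a muted dialog track leaves uncovered in an M&E pass.
# Times are in seconds; the ranges are invented for illustration.

def uncovered_gaps(scene, fx_clips):
    """Return parts of the scene not covered by any effects/Foley clip."""
    start, end = scene
    gaps, cursor = [], start
    for clip_start, clip_end in sorted(fx_clips):
        if clip_start > cursor:
            gaps.append((cursor, clip_start))
        cursor = max(cursor, clip_end)
    if cursor < end:
        gaps.append((cursor, end))
    return gaps

scene = (0.0, 30.0)                    # the kitchen scene
fx = [(0.0, 12.0), (14.5, 30.0)]       # backgrounds, fridge door, Foley, etc.
print(uncovered_gaps(scene, fx))       # -> [(12.0, 14.5)]  needs filling
```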

I will say that one of the things I’ve found over the years is that when I open up either an OMF or, typically nowadays, an AAF file in Pro Tools, I can tell immediately, by the way the audio is laid out, whether or not the picture edit is going to be any good. If the timeline is meticulously created, they’re probably a very experienced, good picture editor. We basically use everything the editor has put into their timeline. Particularly in television, these tracks have been through many people and approvals, and we are not in a position to change those choices. That includes the two-track mix they’ve generated as the temp mix to go along with their movie file, which we use as our guide track for the edit and the mixing. We do that because sometimes things just don’t come across properly in the AAF, so we need to understand why certain music cues may be overlapped, or what a random sound effect may be, or whatever else we see that doesn’t seem to make sense. We can always go back to that two-track stereo mix that was married to the movie and say, “oh, this is what they were doing there,” or “that should have been muted,” and so on.

Usually, when looking at the sound design of a project, and it really doesn’t matter whether it’s a documentary film, a television program, or a narrative feature, we are pretty much going to be adding backgrounds and atmospheres to reflect each scene. We’re going to be doing hand pats and footsteps. We’re going to be doing hard effects like car passes and door slams. As for the location recordings the editor has left in the timeline, we will trim all of that out, separate the dialog from what we call production effects (PFX), and use all of that work.

Scott Simmons: You mentioned OMFs and AAFs, and both of those file types can be exported out of Premiere Pro. They’re packaged files that contain all of the audio used in your timeline, usually with some amount of handles designated by the editor at export, and that packaged file is sent over to the sound mixer. The mixer can then open it in their digital audio workstation and basically recreate the entire timeline. The audio engineer then has that timeline to work with to do the mix, and they obviously need all the media to do it.
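A quick aside on handles for anyone new to the concept: a handle is extra source media kept on either side of each clip’s edit points so the mixer has room to extend fades or smooth edits. As a minimal sketch, assuming frame-based in and out points and a user-chosen handle length (the function and field names here are illustrative, not Premiere Pro’s actual export logic), the media range included for each clip works out like this:

```python
# Minimal sketch of how "handles" extend the media exported with each clip.
# Names and numbers are illustrative, not Premiere Pro's internal behavior.

def exported_range(clip_in, clip_out, handle_frames, media_start, media_end):
    """Return the (start, end) source frames to include in an AAF/OMF export:
    the clip's used range padded by the handle length, clamped to the media
    that actually exists on disk."""
    start = max(media_start, clip_in - handle_frames)
    end = min(media_end, clip_out + handle_frames)
    return start, end

# A clip using frames 1000-1240 of a take, exported with 48-frame handles
# (2 seconds at 24 fps), brings along source frames 952-1288.
print(exported_range(1000, 1240, 48, 0, 100_000))  # -> (952, 1288)
```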

When time is taken to carefully and properly lay out a timeline, it makes your job as a mixer a whole lot easier. If you’ve got all the interviews next to each other on specific audio tracks, that’s much preferable to having them spread over ten different tracks. How important is that sort of organization for you?

Woody Woodhall: There are definite challenges when we have to work with less experienced picture editors, or even someone who is lazy or working too quickly.

For instance, let’s say the location recordist used a boom mic and a lav mic, and the picture editor has checkerboarded those recordings. The boom is on track one, but then it’s on track three, and then it’s on track two. Meanwhile, the lav is on track two, then on track one, and then wherever else. As you can see, it quickly becomes difficult to sort and keep track of the mics through the scenes, and depending on the program length, untangling it can be quite an undertaking.

It’s a real challenge because you can’t look at a waveform and know what it is, and oftentimes the nomenclature or the title of a track doesn’t clearly indicate what it was either. So you are literally checking with headphones from one short bite to the next to sort out what is what. For proper mixing, we have to rearrange the entire timeline so that we know track one is the boom, track two is the lav, and so on.

From my point of view, as a mixer, I require consistency. Even if it’s only scene-by-scene, that boom and that lav have to be on dedicated tracks because I might need noise reduction on the boom that I don’t need on the lav. Or I might need a compressor on the lav that I don’t need on the boom or any number of things like that.
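That regrouping is essentially bookkeeping, which is why track consistency matters so much. Here is a minimal sketch in Python of sorting checkerboarded clips back onto one dedicated track per microphone; the clip dictionaries and “mic” labels are hypothetical stand-ins for whatever clip names or metadata a real project carries.

```python
# Minimal sketch: regroup checkerboarded clips onto one dedicated track per mic.
# The "mic" labels would come from clip names or metadata in a real project.

from collections import defaultdict

clips = [
    {"mic": "boom", "track": 1, "start": 0.0, "end": 4.2},
    {"mic": "lav",  "track": 2, "start": 0.0, "end": 4.2},
    {"mic": "boom", "track": 3, "start": 4.2, "end": 9.8},
    {"mic": "lav",  "track": 1, "start": 4.2, "end": 9.8},
]

def regroup_by_mic(clips):
    """Assign every clip from the same mic to one dedicated track, so per-track
    processing (noise reduction on the boom, compression on the lav) applies cleanly."""
    track_for_mic = {}
    regrouped = defaultdict(list)
    for clip in sorted(clips, key=lambda c: c["start"]):
        track = track_for_mic.setdefault(clip["mic"], len(track_for_mic) + 1)
        regrouped[track].append({**clip, "track": track})
    return dict(regrouped)

for track, track_clips in regroup_by_mic(clips).items():
    print(track, [(c["mic"], c["start"], c["end"]) for c in track_clips])
```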

Scott Simmons: Let’s talk for a second about AAF and OMF exports out of Premiere.

On the one hand, as Premiere adds more audio capabilities into the application itself, the need to export an AAF or OMF for an audio mixer seems to lessen. Obviously, though, a dedicated audio professional like Woody can do a better job mixing the show than I can. So can we talk about the interoperability that’s been designed into Premiere and how that impacts OMF and AAF workflows? Is it an old thing that’s no longer updated? Is it just in there as a legacy feature? How does that little piece of the puzzle fit into the overall Adobe picture?

Durin Gleaves: At this point, AAF is around twenty years old, and there haven’t been a lot of changes to the specification behind it. It’s a very lossy interchange format. You lose a lot of the information, metadata, and parameter settings that the picture editor put in place when that sequence or timeline is converted to an AAF. However, the industry workflows around using AAF as that interchange mechanism are attuned to those conditions by now.

While it’s not a standard that Adobe developed or had any input on, it’s certainly one that we want to maintain for our customers who need to send projects out of house for audio post. OMF is supported in Audition but not in Premiere. It’s really not something we’ve spent a lot of time on in Premiere, but AAF certainly has had a lot of work done over the last several years. We’ve addressed a lot of the issues that some post editors had with the AAFs that were coming in from Premiere. There are always ways we can continue to improve, and you’ll see those come through here and there over the next year and the next couple of years. So it’s very much part of what we’re focused on moving forward.

Scott Simmons: Woody, I know you do some picture editing for LAPPG YouTube videos, so what can you tell us about that workflow? Do you mix those videos in an inherently different way because you’re using Premiere rather than Pro Tools? Why would you not always go out to Pro Tools?

Woody Woodhall: I do it exactly the way my clients do it: I do my picture edit in Premiere, then export an AAF and a movie file and bring them into Pro Tools.

That’s not because Premiere couldn’t do it. It certainly has the sophistication to do the mixes required for the LAPPG meeting videos, which are presentations with typically just one presenter. It’s honestly just muscle memory, and that’s the way I’m used to doing things. But Premiere could certainly do it. Most of the plug-ins I would be using in Pro Tools are available to me in Premiere as VSTs. So it’s really just more of a habit.

Scott Simmons: You mentioned plug-ins, and I think it’s worth noting that there are a few industry-standard architectures for audio plug-ins. If you go out and buy a plug-in from Waves, which makes a lot of really high-end plug-ins, it adheres to a standard, so once it’s installed, many different applications can read it. You could use that same plug-in with Pro Tools, Audition, or Premiere, which means you can get some pretty high-end audio plug-ins and use them in multiple places.

Is VST the standard these days?

Woody Woodhall: I’m on a Mac, so typically when I buy new plug-ins and install them, I’ll get an AU, which is Apple’s Audio Unit format, a VST, and also an AAX version for Pro Tools. With that variety of formats in the installers, they’re mostly available in any program, be it Premiere Pro, Avid, Fairlight, etc.
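As a practical aside, those formats usually land in well-known folders on macOS, so you can get a quick inventory of what’s installed per format. The sketch below assumes the common system-wide default locations; individual installers can also use the per-user ~/Library equivalents, so treat the paths as typical rather than guaranteed.

```python
# Minimal sketch: list installed audio plug-ins by format on macOS.
# These are the common system-wide install locations; some installers
# also place copies in the per-user ~/Library equivalents.

from pathlib import Path

PLUGIN_DIRS = {
    "AU (Audio Units)": "/Library/Audio/Plug-Ins/Components",
    "VST":              "/Library/Audio/Plug-Ins/VST",
    "VST3":             "/Library/Audio/Plug-Ins/VST3",
    "AAX (Pro Tools)":  "/Library/Application Support/Avid/Audio/Plug-Ins",
}

for fmt, folder in PLUGIN_DIRS.items():
    path = Path(folder)
    names = sorted(p.name for p in path.iterdir()) if path.is_dir() else []
    print(f"{fmt}: {len(names)} installed")
    for name in names[:5]:  # show the first few entries per format
        print("   ", name)
```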

Scott Simmons: Speaking of the multiple systems and tools that editors have at their disposal, Durin, you’re really the perfect person to talk about Audition and Premiere together.

Over the years, it feels like Premiere has been taking some of the tools out of Audition and integrating them right into Premiere. I think of some of the audio effects available in the application, but also the Essential Sound panel. Can you talk about that a little bit? Is that about bringing more audio tools into Premiere without having to go to Audition, even though Audition is sitting right there as part of Creative Cloud? What’s the thinking behind that integration and process?

Durin Gleaves: If you look back ten or more years, the standard workflow for broadcast and for film was very specialized and segmented. A big part of that was getting to “picture lock,” the point when the video editor wasn’t going to make any more changes, everything was done, and there was plenty of time and budget for audio, color, graphics, and everything else. At that point, the project would be handed off to audio post to mix and master in whatever application they wanted. That final deliverable would then come back to Premiere or whatever NLE someone was using.

In reality, that was never really a thing. Picture lock was a shared lie that we all played along with. One of the biggest hurdles with this was in audio post, which is something Woody can probably detail: dealing with changes to the locked picture, conforms, and managing EDLs, and spending a lot of time making sure that when those changes came through you didn’t lose the work you had already done.

So at the time, when we were building up what was then the Creative Suite, the idea was to support that sort of industry workflow that was very segmented. But as the tools grew and the market changed, with a big influx of smaller teams, individuals, and social media creators, we saw that many users weren’t really in that specialist role anymore. They were coming in and doing most of the production themselves.

As part of that, what we started to hear was that they didn’t want to leave Premiere. It didn’t matter how good another application was at a particular stage in that workflow. They wanted to keep everything together so they could iterate back and forth, with a little bit of sound and a little bit of picture. That spurred us to improve the workflows inside Premiere Pro as much as we could, not only to make them richer and deeper, but also to make them more accessible to people who maybe didn’t go to college for an audio engineering degree, or didn’t want to spend a lot of time learning all the parameters and nuts and bolts behind RMS thresholds and compression, for example. These users wanted to get good results quickly within the NLE environment they were comfortable working in.
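For context on the “nuts and bolts” Durin mentions: a compressor’s threshold is typically compared against a signal level such as RMS, expressed in dBFS. Here is a minimal sketch of that measurement for a mono buffer of floating-point samples; it only illustrates the underlying math, not how Premiere’s Essential Sound panel actually analyzes audio.

```python
# Minimal sketch: RMS level of an audio buffer in dBFS, the kind of value a
# compressor threshold is compared against.

import math

def rms_dbfs(samples):
    """RMS of float samples (full scale = 1.0), expressed in dBFS."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a 1 kHz tone at half amplitude sits around -9 dBFS RMS.
tone = [0.5 * math.sin(2 * math.pi * 1000 * n / 48000) for n in range(48000)]
print(round(rms_dbfs(tone), 1))  # ≈ -9.0
```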

 


 
