
Behind the Workflow and Post-Production Process of Gone Girl


David Fincher’s latest feature film, Gone Girl, was shot in 6K and reviewed in a full 4K pipeline. It’s the first feature film to be cut on Adobe Premiere and needed to employ an incredibly advanced technical workflow to handle all of the data and ensure there weren’t any bottlenecks.

Two people who were essential to that effort were post-production engineer Jeff Brue and assistant editor Tyler Nelson. Jeff is the founder of Open Drives Inc. and has extensive experience as a technical advisor for commercials, television, and feature film projects, while Tyler is known for developing cutting-edge technical workflows that enable directors to make changes until the very last minute. Both played an essential role in the development and implementation of the Gone Girl workflow, all the way from pre-production to distribution.

We talked with Jeff and Tyler about how the workflow for Gone Girl came together, the challenges they encountered along the way, and the logistics of working with 75.5 MB frames.

 

How did you get involved in Gone Girl as the post engineer? At what stage of production was the movie at?

Jeff Brue: I got involved with the Fincher crew on House of Cards. I accepted Tyler’s invitation to design a storage system so that they could still edit in Final Cut 7 on OS X 10.6.8, back when Apple had pretty much deprecated Xsan on that release. From there, it was really a seamless transition to preparing for Gone Girl, which was still a couple of months away from shooting.

Tyler and I talked about what it would look like to switch to Premiere and started doing rough designs of the system hardware and network architecture that would be needed to handle both the data demands of Premiere as well as the task of keeping the entire feature film in an online state.

 

Specifically, what does a post engineer do on a film of this size? What was your primary responsibility?

Jeff Brue: I think the primary responsibility is to make sure the workflow that we envisioned and helped design actually worked. The key concept of that was the storage architecture underneath the entire system and making sure that everyone could have concurrent access to all of the media, which on this feature was not an insubstantial amount of footage.

For Gone Girl, much of my focus was around getting Premiere to interface correctly into a feature film style workflow. I needed to test every aspect of what we thought would be needed in editorial and what features would need to be developed in order to successfully finish the film.

 

Were you using Macs or PCs for the editorial workflow? Do you see one or the other dominating this space?

Jeff Brue: We were using both. It was essentially a hybrid environment. For the main feature editorial, that was done on Mac. Most of the After Effects and final compositing was done on PC, but it was truly an integrated environment.

From the Open Drives side, we’re definitely seeing this hybrid approach in a lot of places rather than one clear front-runner, because everyone wants the best of both worlds. There are definite advantages to Mac and definite advantages to PC, and many creators want and need to utilize both.

Tyler Nelson: It’s hard to make everyone switch 100% one way or another. A lot of users who have been working on NLEs are used to a Mac environment, so it’s difficult to convince them to switch over to PC entirely. Creating a hybrid environment helps people slowly get past their muscle memory and their hang-ups about working on a PC station, which not many editors and assistant editors are used to.

So we took the approach of figuring out the best way to keep our editors and the rest of the editorial team comfortable while also getting a lot of the power from our Z820s, which could easily integrate with a visual effects timeline. We had a lot of subtle visual effects and a lot of Dynamic Link interaction between Premiere and After Effects, so we wanted to make sure that any time we did jump into After Effects we had a lot of power behind that.

 

Let’s talk about the specifics of the workflow that was used on the film. How was it established and realized?

Tyler Nelson: This was a couple of years in the making. The first time we actually attempted to use Premiere was when we worked on a Calvin Klein commercial back in 2012. We jumped in using what I believe was a beta of CS7, which of course is now called Creative Cloud. It was the first time we used Dynamic Link, and it showed us that we could get to a virtually online scenario while we were still working offline.

When we jumped into this commercial we tested Dynamic Link, saw how much we used it for the cut, and wanted to continue using it. So we used that as a template for further exploration to make our offline editorial environment for Gone Girl as good as it could be. From that point forward, we knew that we were starting with something better, more powerful, and more stable that would work for a feature film of this size.

For Gone Girl, our editor Kirk Baxter would be working in his offline and doing the placement of a composite, but it was usually the hacksawed version of it. He would send what he was working on to an After Effects project, which an assistant would pick up to do all the fine-tuning, or it would be sent to a VFX artist for even more complicated work. While that’s happening, Kirk can continue his offline editorial. He can focus on making the edit as good as it can be, working on the timing and the nuances of each performance, while the glossy compositing happens in the background. These two things can happen simultaneously because of that Dynamic Link between Premiere and After Effects.

So we had a template for the workflow, but we needed to make sure the stability was there, as well as the ability to handle a certain number of clips per project. Our interaction with Adobe was forward-thinking, and we were always trying to make sure it was performing in the best way possible.

That’s something Jeff helped out with quite a bit. Since he was creating our storage system, he basically saw every single piece of media that was going through the system and being accessed on a daily basis, and he can share some details about that process.

Jeff Brue: From the Open Drives side, we did a really deep analysis into how the use of such extensive Dynamic Linking in After Effects projects affected the storage system, which was a fascinating thing to see. We also heard Adobe’s reasoning for how they accessed storage.

What they were actually doing was essentially guaranteeing a level of concurrency, but to do so they needed to constantly access all of the attributes of the file system. One of our main challenges when one of these very large projects hit the storage was figuring out how to accelerate that process from the storage side. We got some very interesting benchmarks from that process.

Over the course of production to the final timelines, we were able to accelerate the opening of an Adobe Premiere project by 8x to 10x. We designed systems to specifically accelerate how Adobe accesses data. We were eventually able to get that eight- or ten-minute project opening time down to a minute or a minute and a half, and that was essentially because we looked at the data. When we saw how we were being accessed, we were able to build structures for these guys that drastically sped them up.

 

Were there any concerns you had about utilizing this workflow that relied so much on Premiere?

Tyler Nelson: We knew this was going to be the first feature film done on Adobe Premiere, so it was an educated risk. We knew there might be pitfalls with the application at the beginning, but we were working with incredibly committed people, and we had amazing assistants who were able to help figure out workarounds and troubleshoot whatever we encountered.

It was scary, but what production isn’t? Any time you’re sitting on the bleeding edge there are risks involved, but we’ve been working together on such advanced workflows for so long that we aren’t scared of that. We knew we might encounter difficulties, but we also knew we had enough talent to resolve any issue that might come up.

Jeff Brue: We made sure we had backup plans, to backup plans, to backup plans. That way we knew we could protect the production, and as long as we knew that, we could go out and learn.

 

How did the DIT transfer and back up on set? What about conversion and color grade? What tools were used?

Tyler Nelson: We don’t really use DITs; we just have an assistant editor near set, and I was the person who was there in Missouri for the first two months. When we came back to LA I passed that responsibility on to another assistant editor.

Basically though, we used FotoKem’s nextLAB mobile unit, with 30 TB of raw storage, as an ingest station. The RED mags were ingested to the RAID and then went through a verification process with MD5 checksums to make sure all the 1s and 0s were identical between the source and the destination. From there we transcoded all of our dailies, both edit media and viewing QuickTimes, and the viewing QuickTimes were automatically uploaded to PIX with every piece of metadata associated with every piece of footage.
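
For illustration, here is a minimal Python sketch of that kind of source-to-destination verification. The chunked hashing and the mount points are assumptions for the example, not FotoKem's actual nextLAB implementation:

```python
import hashlib
from pathlib import Path

def md5sum(path, chunk_size=8 * 1024 * 1024):
    # Hash in chunks so multi-gigabyte R3D clips never have to fit in RAM.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(source_dir, dest_dir):
    # Compare every file on the mag against its copy on the ingest RAID.
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            dst = Path(dest_dir) / src.relative_to(source_dir)
            if md5sum(src) != md5sum(dst):
                mismatches.append(str(src))
    return mismatches

# Hypothetical mount points for a RED mag and the ingest RAID.
bad = verify_copy("/Volumes/RED_MAG_A001", "/Volumes/INGEST_RAID/A001")
print("verified OK" if not bad else "checksum mismatches: %s" % bad)
```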

Also, we archived right then and there to primary and secondary copies on LTO5 tape. We shipped out tapes on a daily basis to make sure we had redundancy. It was your standard near-set workflow.

We’ve relied on the nextLAB system for our last two shows, and it’s an easy-to-use unit that’s very powerful.

Jeff Brue: Back in the office, the last step was to upload all of the footage, the original R3Ds, to an Open Drives system, where we kept it all in an online state, accessible throughout the entire process.

 

Speaking of the native R3D, what were your offline specs as far as timeline settings and resolution? Were you monitoring at larger than HD or scaling down for client monitor viewing?

Tyler Nelson: We didn’t cut in native R3D. We’ve always used a proxy and probably will until the latency of on-the-fly debayering is pretty much nothing. We transcoded everything to an atypical frame size, 2304×1152 ProRes 422 (LT) QuickTime. The reason we chose that frame size is that on set we were shooting 6K. Those dimensions were 6144×3072, and we were extracting a 5K center from that, which was 5120×2133, a 2.40:1 center extraction from a 2:1 image. If you do the math and scale the 6144×3072 down to our offline editorial specs (2304×1152), that 5K center extraction scales to 1920×800, which is a 2.40:1 center extraction from the offline.
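
To make that arithmetic easier to follow, here is a small, purely illustrative Python snippet that reproduces the scaling Tyler describes:

```python
source = (6144, 3072)   # 6K 2:1 capture
proxy = (2304, 1152)    # offline ProRes 422 (LT) frame
scale = proxy[0] / source[0]          # 0.375, same factor for height

extraction_5k = (5120, 2133)          # on-set 2.40:1 center extraction
offline_extraction = (round(extraction_5k[0] * scale),
                      round(extraction_5k[1] * scale))
print(offline_extraction)             # (1920, 800)
print(round(offline_extraction[0] / offline_extraction[1], 2))  # 2.4, i.e. 2.40:1
```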

That’s a whole bunch of numbers, but what that meant was that our offline media was a representation of 6K, but our viewing area and timeline was representative of the 5K center extraction that we were framing for on set. So when all of our dailies were actually in a timeline, Kirk was looking at what David was looking at through the lens. But we still had the real estate to move the image up, down, left or right if we wanted. We had a pixel for pixel representation of our raw R3D file. Nothing was cropped, it was just a real time center extraction.

 

Let’s talk about the off-line to on-line process. Specifically, what were the various on-line tools (conform, color, audio, special effects) and how was the transition accomplished? 

Jeff Brue: The offline tool was obviously Adobe Premiere, and the entire feature film was cut in that along with multiple levels of integration with After Effects.

The whole film was actually completely debayered for final using the new GPU options inside REDline to generate 6K DPX. That was entirely generated through CUDA and debayered on NVIDIA Quadro K6000s and Quadro K5200s for final delivery to Light Iron, which used the Pablo Rio to do the final color grading in 4K.

 

In the case study that was recently published, you mentioned how huge GPU debayering was for you. In practical terms, what kind of an impact did this ability have on the project?

Jeff Brue: Just in terms of sheer logistics, it meant that machines that were free could be used to debayer footage. Essentially, by not having to have specialized hardware and instead being able to rely on what are effectively commodity graphics cards, it meant that multiple machines could be turned on to handle this debayering. Since we were dealing with frames that are 75.5 MB, just having the ability to crunch that through on multiple machines was a big deal.
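
As a rough check on that figure, a 6K 10-bit RGB DPX frame, assuming the usual packing of three 10-bit channels into 32 bits per pixel and ignoring the small file header, works out to almost exactly 75.5 MB:

```python
width, height = 6144, 3072        # 6K DPX frame
bytes_per_pixel = 4               # 10-bit RGB packed into 32 bits per pixel
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes / 1e6)          # ~75.5 MB per frame (header ignored)
```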

During digital effects review, we utilized an AJA Io 4K hooked up to an HP Z820 to essentially play back 6K with a 4K center extraction. So the fact that the Quadro K6000 was able to play back 6K for review was also crucial.

 

I see lots of talk about the use of After Effects and Dynamic Link in the process. After Effects isn’t a particularly realtime playback tool, so how would you utilize extensive AE work in Premiere Pro? Did Dynamic Linking require a lot of rendering for playback? Did you export self-contained compositions from AE and bring those back into Premiere Pro? What file format and codecs did you use when bringing effected media back into Premiere Pro?

Tyler Nelson: We debayer everything to 10-bit DPX sequences, so we’ve never needed to rewrite from the raw R3D file. When we were using our offline After Effects projects, we would always render out within the timeline of Premiere, and it would hold that render most of the time. We ran into some render holding issues, but most of those have since been resolved by a new feature Adobe has just announced.

Jeff Brue: It’s a feature we specifically requested, the After Effects Dynamic Link render and replace, and it’s in the 2014.1 release that just came out. What it does is render out a QuickTime movie that stays static until you decide to manually update it to the latest version of the After Effects comp. Essentially, that ability allows you to create a VFX versioning methodology, which is really quite nice.

 

As mentioned above, After Effects isn’t a realtime playback tool, so how did the production deal with the simple but important task of just watching work that was completed in After Effects? Even a large amount of RAM can only get you so much playback from a RAM cache.

Jeff Brue: The magic trick is throughput. It turns out that most modern computers have the processing capability to handle this imagery very quickly. The real demand comes when you want to do a split-level comp from multiple layers or use stabilization, but even then it’s not so much a question of processing power; the NVIDIA GPUs definitely give us enough of that.

The way that we did it for the HP Z820s at 6K was we utilized Fusion-io cards that were able to consistently deliver over 2.6 GB a second to all the visual effects workstations. So every single VFX workstation had a device inside of it that could play back 6K in realtime. We also utilized the HP Z Turbo drives, and I believe our playback speeds there were in excess of 2 GB a second.
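
As a back-of-the-envelope check, assuming the 75.5 MB DPX frames mentioned earlier and 24 fps playback, that throughput leaves comfortable headroom for realtime 6K:

```python
frame_mb = 75.5                    # 6K 10-bit DPX frame, as computed above
fps = 24                           # playback rate
required_gb_s = frame_mb * fps / 1000
print(required_gb_s)               # ~1.81 GB/s needed for realtime 6K playback
# A sustained 2.6 GB/s clears that with room to spare for filesystem overhead.
```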

 

How was everything archived? What were the failsafes and backup systems?

Tyler Nelson: We had three versions of everything in pretty much every phase of production. As I mentioned, all of our files were written to LTO5 tapes and were stored in multiple locations. In addition to that, we had Jeff’s Open Drives backend to house all of our R3D files as well. We tried to keep a live copy, and if anything happened to that copy, we’d be able to restore it from the primary or secondary copy of the LTO, and we also did an MD5 checksum on each of those tapes when they were written to LTO. We made sure we had redundancy, and we also made sure that redundancy was correct.

Jeff Brue: One added feature that we utilized was snapshotting. Essentially, we always had a time window so we could also roll back the entire file system and central storage.

 

What kind of interesting deliverables were there? Did you have to account for a 4:3 delivery? Or PAL?

Tyler Nelson: No 4:3, no PAL. 16:9 was the only one we had to deal with, and it was like any other 16:9 deliverable. However, Fincher reframes according to his preferred framing, which sometimes meant removing the letterbox crop and reframing accordingly, or zooming in, depending on how complicated the underlying visual effects shot was. It’s all circumstantial, based on what the shot is as well as the composition itself.

 

Any major take-aways from the experience that are going to help guide you in your next endeavor?

Tyler Nelson: Every movie is a learning experience, and each person is a product of what they worked on in the past, whether it’s a good experience or a bad one. You can only move onward and upward from there. We took a lot away from this project as far as what the pitfalls were and how to avoid them in the future. Adobe is really invested in making Premiere as amazing a product as we want it to be, and that’s something we were happy to see and experience, and something we’re excited to see developed and pushed further.

Everybody who works on Fincher’s films wants to make them as good as they can possibly be. There’s something about stepping inside this building that makes everybody want to be amazing at their job. Jeff has said many times that we force him to be a better engineer. Fincher forces me to be a better assistant and he forces Kirk to be a better editor. Each moment of every job that we work on is something we know is going to help us in the future, and that’s universal in this industry.

Jeff Brue: It’s really an evolution. After an idea gets implemented, there are ten more that follow. The thing about an environment like Fincher’s, and being able to have a truly unique collaboration opportunity with Adobe, is that whenever we go down one of these rabbit holes we know these same tools are eventually going to be made available to everybody. These tools aren’t developed or used for a singular production, so we know we’re building on something that’s going to be useful to a lot of people. The same methodologies that we’ve always wanted and have now created are going to be made available to everyone.

That’s one of the best parts about the experience and about this film. Every idea, every late night, every process…they all get codified and integrated into a collective whole instead of being limited to one specific production. These are things that will be available to everyone and will hopefully help them create their own stunning projects.

 

 

 
