Making “One Man, One Vote”

Making "One Man, One Vote" 16

Making "One Man, One Vote" 1

Ten years ago a fellow named Marshall Spight posted a challenge on DV-L called “Throwing Down the DV Gauntlet”, in which he said, “everyone talks about shooting serious dramatic films with DV, but does anyone actually do it?” I responded, and we wound up making a 20-minute short called “The Beautiful Thing” using Sony DCR-VX1000s, the first 1/3″ 3-CCD DV camcorders. It came out so well (it was for a time the top-rated dramatic film on iFilm.com, an early and long-defunct predecessor to YouTube) that we set about making a short political drama/comedy (?), “One Man, One Vote”. This one gave us a few more challenges.

The Plan

We were happy with the pictures from the VX1000s, but wanted to explore the brave new world of widescreen 16×9 video and a more filmic shallow depth of field. Marshall obtained a Sony DSR-500WSL, a 2/3″ 3-CCD DVCAM camcorder with a 15x Canon lens. It shot 60i, as all NTSC video cameras did back then (the DVX100 was still three years in the future), but we didn’t see this as an issue; several times viewers of “The Beautiful Thing” asked us what film stock we used, so clearly 60i wasn’t stopping us from telling our stories.

Most of “The Beautiful Thing” was shot with a Tiffen Black Pro-Mist #1 diffusion filter and we really liked the look. I wanted “One Man, One Vote” to have a warm, soft, dreamy, magic-hour look, “like a Cadillac commercial, or those GE bring-good-things-to-life ads”, so we ordered up a Pro-Mist #2 filter for more noticeable diffusion (“The Beautiful Thing”’s Black Pro-Mist was on the edge of being too subtle at times). We also ordered up a couple of brand-new wireless mics.

I read up on the DSR-500 and practiced with it a bit; Marshall got permits for our shooting locations and found actors through a casting agency, and we were good to go. Or so we thought.

Stuff We Did Wrong

The blind DoP, part 1: I wanted warm, magic-hour sunlight, and we actually got it—on one day only. The rest of the time we had overcast skies. I’m not good enough in color correction to make flat, overcast lighting look like a warm sunny day, but even worse, I didn’t see that the light was wrong. So the Pro-Mist stayed on and we kept on shooting… and we wound up with flat, soft images, not crisp, warm images with a dreamy glow.

And by flat, I mean flat: lots of shots have blacks at 20% and peak whites at 70%, half the available brightness range (less if you allow for peak whites at 108%). We were shooting 8-bit DV25, so effectively we were getting 7-bit images. Oops.

A typical uncorrected image and its waveform.
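The arithmetic, for the curious, in a back-of-the-envelope Python sketch. (Assumptions flagged: the 16–235 luma code range is the nominal Rec. 601 mapping for 8-bit video, and the 20%/70% levels are rough figures off our waveforms.)

```python
import math

# Nominal 8-bit video luma: code 16 = 0%, code 235 = 100% (Rec. 601).
black, white = 0.20, 0.70                  # rough levels from our waveforms
codes_full = 235 - 16                      # 219 codes span 0%..100%
codes_used = (white - black) * codes_full  # ~110 codes actually exercised

print(f"codes used: {codes_used:.0f} of {codes_full}")
print(f"effective depth: {math.log2(codes_used):.1f} bits")
# -> about 110 codes, roughly 6.8 bits: hence "7-bit images"
```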

No Testing: The wireless mics and the Pro-Mist filter arrived just before the shoot was supposed to start, so we just threw them into production without running tests.

One of the two wireless mics had input switching noise in its diversity receiver and was unusable, but we didn’t discover this until we were on location, with the largest crowd of actors in the picture standing around waiting. We wound up putting the one working wireless lavalier on the side of the primary actor closest to the secondary actor, so it would pick up both people’s lines. Surprisingly, this worked.

Fortunately, this only affected one shot (it’s the shot where Allan and Emilio walk away from the rally, discussing how they’re going to collect votes); everything else in the film was recorded using practical mics (when Driscoll is addressing the crowd), or my Sennheiser ME80 on a boom.

I had noticed when shooting “The Beautiful Thing” that the effect of the Black Pro-Mist varied with focal length and ambient light level, but hadn’t really internalized that lesson. I also didn’t realize that a Pro-Mist #2 on a 2/3″ camera wasn’t going to give simply twice the diffusion of a Pro-Mist #1 on a 1/3″ camera: in fact, its effect was considerably stronger. Indeed, the effect of a Pro-Mist depends on the entire optical system it’s attached to: the focal length of the lens, the ambient light hitting the filter, the existing scene contrast, maybe even the phase of the moon (though probably not during the daytime… much). In short, saying, “we’ll shoot this show with a Pro-Mist #2” was a dumb move: we should have said, “we’ll shoot this show with varying diffusion from not much to a whole lot, depending on the shot, so as to get a consistent, diffused look”.

No worries, though; we wanted a soft, dreamy look, so with not much more than a peek through the viewfinder with and without the filter, we agreed that the Pro-Mist looked good. This decision would come back to haunt us later—and by “us”, I mean “me”.

The blind DoP, part 2: We had carefully and cleverly blocked the camera during Driscoll’s speech: it started off on his left, and circled in front of him through a series of cuts to wind up shooting the crowd from over his right shoulder.

The Canon 15x lens on the DSR-500, like many 2/3″ lenses, had a separate macro ring with a locking mechanism. Unlike previous Canon ENG lenses I’d used, this lens used a soft-touch button instead of a pull-up knob to lock the macro ring in its normal position. At some point while setting up the first of these shots (the pull-back from the ruined ceiling at the Palace of Fine Arts, with one of Driscoll’s aides testing the mics), I bumped the macro ring’s button and managed to turn the ring—not enough for it to be obviously in macro mode, but enough to throw back-focus off just a bit. As a result, I’d zoom in tight to focus, then pull out, and the image went soft—not soft enough to see in the finder, really, but soft enough to see in post! Most of the setups in our cleverly-blocked sequence were thus rendered unusable.

Lack of Coverage: We had a detailed shot list, so we knew what we needed to shoot. By extension, we knew what we didn’t need to shoot, like a wide master for an entire scene. After all, if we’re starting wide for an establishing shot and then progressively cutting to tighter and tighter close-ups, why waste tape and time shooting the whole thing in a master?

I’m an editor; I should have known better. When a scene cut together as planned, it wasn’t a problem, but when I needed a cutaway to cover an inconsistency or to change the rhythm of the scene? Oops.

I was saved by pure dumb luck. We did enough different takes that I could usually pull an unrelated bit of video from another take and stuff it into a sequence to cover a hiccup.

For example, when Allan meets Mr. Kern, we had only the opening two or three lines in a wide shot, but we had half a dozen takes. When, later in the scene, I needed coverage for Kern coming back to the door with votes in hand, I had to resort to an alternate take of the opening shot, where Kern comes to the door the first time.

For Driscoll’s speech, where most of the angles had to be tossed due to excessive softness, I was saved only by the fact that Driscoll had to run through his speech in its entirety on each take. Coverage by default, not by planning.

The blind DoP, part 3: To avoid faffing around with white balance on location, we left the camera on preset WB and used the filter wheel to choose between tungsten and daylight imaging. Most of the scene with Mr. and Mrs. Stinson was shot with the camera balanced for tungsten (3200K) instead of daylight (5600K), even though the lighting was daytime exterior shade (thus, its color temperature was even higher than normal daylight). The DSR-500’s EVF used a monochrome CRT, and we had no external monitoring, so it wasn’t obvious, but even so: properly checking the camera’s settings before each shot would have caught this at the start, rather than very late in the game. D’oh!

The purple pictures could be roughly corrected, but with the excessive image-flattening diffusion and the limits of 8-bit YUV-encoded video, it simply wasn’t possible to get all the colors in the scene back where they belonged. Also, getting skintones and white areas (like the door frame) back where they belonged meant that the washed-out highlights and the sky turned bright yellow.

Before and after color correction: dig those crazy yellow skies!

Basic Cutting

We wound up with about 2.5 hours of DV25 footage, consolidated on one standard-sized (a.k.a. large) DVCAM cassette. I captured it into 15 different clips (due to timecode breaks) using Final Cut Pro 5.0.2 on a dual G5 in June of 2006; it only took up 33 GBytes of disk space.

Captures started out in the usual FCP place: HD > Users > [username] > Documents > Final Cut Pro Documents > Capture Scratch > [projectname]. For simplicity of backup and portability, I created a 1m1v folder on the root of a new disk, and moved the Capture Scratch directory into it, relinking the clips the next time I opened up FCP. All collateral material—stills, sound effects, music tracks, intermediate renders, graphics elements, and the project files themselves—went into the /1m1v folder or subfolders, so all I had to do to migrate the project to another disk was to drag the /1m1v folder to the new disk. Eventually there were five copies of this folder structure (two on different internal disks in the G5; one on a FW800 portable drive, one on a FW800 RAID 0 array, and one on an eSATA RAID 1 array), and the drag’n’drop scheme kept things simple.

For backup, I either dragged’n’dropped the entire folder, overwriting previous backups (this normally took tens of minutes to complete), or I used the command-line utility rsync to do incremental backups (normally taking under five seconds). Rsync is very powerful but the syntax can be daunting; if I weren’t such a lazy geek I’d have found or written a nice GUI wrapper for it (and may yet do so).
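In the meantime, the heart of any such wrapper is a single rsync call; here’s a minimal Python sketch (the volume paths are hypothetical examples, not my real ones):

```python
#!/usr/bin/env python
"""Minimal rsync wrapper: incrementally mirror the project folder
to a backup disk. Paths below are placeholders."""
import subprocess
import sys

SRC = "/Volumes/Work/1m1v/"      # trailing slash: copy the folder's contents
DST = "/Volumes/Backup/1m1v/"

# -a preserves timestamps and permissions, so unchanged files are
# skipped on the next run; --delete removes files from the backup
# that no longer exist in the source.
sys.exit(subprocess.call(["rsync", "-a", "--delete", SRC, DST]))
```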

I used FCP’s DV start/stop detection to break the clips into individual scenes, which I made into subclips, and then added scene numbers, take numbers, and logging comments. By sorting the bin by Scene, all the subclips were arranged in rough editing order.

Subclips sorted by scene.

I broke the show into seven “chapters” ranging from nine seconds to nearly two minutes in length:

  1. open / Driscoll rally
  2. interstitial (Allan running through the streets)
  3. Stinson
  4. interstitial / “you promise” (running; abbreviated vote-getting)
  5. Kern
  6. interstitial evening
  7. election day

These bite-sized pieces made it easier to work on the timelines, and also made experimentation easier: for example, I swapped chapters 4 and 5 to see what would happen (what happened was: it made the film worse). The chapters were nested into a “1m1v” sequence to make the whole show, and then that sequence of seven chapters was nested into a “ColorGrade” sequence, so that I could apply a single color-correction filter to impart a look to the show without having to cut-‘n’-paste the same filter into multiple subsequences or individual clips.

I then pulled the titles out of the 1m1v sequence itself, and put them in track 2 of the ColorGrade sequence, so that they weren’t affected by the ColorGrade “look”.

For a quick-and-dirty 4×3 master needed for a festival, I nested ColorGrade into a 4×3 sequence in FCP. It worked, but it wasn’t elegant, because the interlaced material didn’t rescale very cleanly.

When it came time to create progressive-scan web and iPod / Apple TV versions, I tried various exports through Compressor, but I wasn’t satisfied: the live action looked good with adaptive deinterlacing, but the edges of the text had a tendency to quiver and jump depending on how the underlying image affected the deinterlacing process. Also, the nice, crisp title text became less crisp for being put in an interlaced sequence, even without the variations caused by adaptive deinterlacing.

I turned off the title track in the ColorGrade sequence, exported the title-free sequence to Compressor for adaptive deinterlacing (but otherwise leaving the format unchanged), re-imported the resulting progressive-scan clip, put the titles on top of it, and then sent that sequence into Compressor for rescaling and reformatting, both as 16×9 and as 4×3 letterboxed. This turned out to be the best of both worlds: Compressor did a great job of deinterlacing the video material, while the titles stayed in their full-resolution, progressive format to begin with.

Recipe for Deinterlacing:
In Compressor, I tried Fast (Line averaging), Better (Motion adaptive), and Best (Motion Compensated) deinterlacing. Fast effectively blurs the two fields together; it works, and it’s fast, but the results were unacceptably soft (especially given our excessively Pro-Misted source clips). Better and Best gave almost identical results on the motion in our clips, but Best took orders of magnitude more processing time, so I stuck with Better.
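Conceptually, line averaging amounts to something like the following numpy sketch; this is the idea only, not Compressor’s actual implementation.

```python
import numpy as np

def line_average(frame: np.ndarray) -> np.ndarray:
    """Blur the two fields together: each output line becomes the
    mean of two vertically adjacent source lines, one from each
    field. Motion "combing" vanishes, but so does half the vertical
    detail - which is why Fast looked so soft on our diffused clips."""
    f = frame.astype(np.float32)
    out = f.copy()
    out[1:] = 0.5 * (f[:-1] + f[1:])  # average each line with the one above
    return out.astype(frame.dtype)
```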

The show started off in FCP 5.0.2 on OS X 10.3 with a DV25-native timeline, but migrated over time through FCP 5.0.3, 6.0.1, 6.0.2, 6.0.3, and 6.0.4. In each case the “new” FCP was installed on a different partition and the project was loaded, rendered, and played to ensure that it would work in the “new” version, before the old partition was upgraded or erased.

When I moved the edit to a MacBook Pro, I needed to upgrade my copy of Color Finesse to an Intel-compatible version. That upgrade worked flawlessly and all the existing Color Finesse tweaks worked as before.

Overall there were no upgrade problems, and the project file now bounces between an Intel MacBook Pro and a PowerPC G5, both running OS X 10.4.11, without any glitches at all.

(FWIW, the big, fire-breathing, liquid-cooled dual 2.5 GHz G5 machine takes slightly longer to render the show than the 2.3 GHz Core2 Duo MacBook Pro. Progress happens.)

We switched to ProRes422 HQ native timelines and intermediate renders in the final stages of the project; the quality is noticeably higher for multigeneration work (VFX) and for titles, and the entire project folder (including FCP and Motion projects, Photoshop PSD and JPG files, even a couple of uncompressed 10-bit VFX clips) still takes only 56 GBytes of disk space.



Fixing the Mistakes

Pickups were needed for several important bits, both things we always knew we’d have to re-create as VFX and things we’d just not thought to shoot at the time.

We shot motion pickups using a Sony HVR-Z1 and a Panasonic HVX200, both in 16×9, 60i DV25 mode—without a Pro-Mist of any sort (I found that stretching the tonal scale in post on the Pro-Misted clips shot in low-contrast, overcast conditions pretty much erased the obvious diffusion effects, and I wanted to create as little hassle for myself as possible). I wasn’t worried much about the differing looks of the cameras as all images were to be highly manipulated in post. Furthermore, 720×480 4:1:1 DV25 was a limiting factor; when squeezed through this narrow pipe, the obvious differences between these cameras were minimized.

Stills (mostly cloudy skies, but also elements for the deserted freeway) were shot on a Sony DSC-T1 pocket camera.

One image (the ripped-up car on the deserted highway) was shot using the camera on a Treo 650 smartphone, because I saw that car in the practice area of a local firefighter’s training yard and grabbed it while I could with what was available. It’s such a small element of the final composite, and it’s been so heavily retouched in Photoshop, that the limits of the Treo’s camera aren’t apparent.

Driscoll’s speech was carefully blocked to be edited together from multiple angles, most of which were too out of focus to be rescuable (yes, we tried multiple built-in and third-party sharpening filters; I even tried sending frame sequences into Photoshop to use various sharpeners there. Nothing fixed the softness without making the images look worse). In the end, I had to use short clips from some of the bad shots, and concentrate on the sharper angles. The original blocking is completely gone, but what remains isn’t too horrible. Yes, the out-of-focusness is still apparent (especially in the full-resolution program on a big screen), but the gauzy effect of the Pro-Mist #2, which generally degraded sharpness throughout the picture, wound up helping here, by making the contrast between sharp and unsharp less apparent.

Fixing the sunlight became an issue when the few sunlit shots stood out in an otherwise overcast production. For the slashes of sunlight that fall on the doorframe at Kern’s house, I used FCP’s Proc Amp filter to pull down those bright highlights, making them less noticeable.

For the shots of Allan leaning out of his window, harsher measures were called for.

The original shot, and the final composite.

The shot was a lockdown, so I could export a still frame to Photoshop and paint it into a high-contrast matte to block out the nice blue sky. I filled the hole with a still image of clouds (at 2500×2000 resolution) and drifted it slowly to the right as the shot progressed to add some realism.
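Under the hood, that composite is just a matte blend plus a per-frame offset for the drift; here’s a rough numpy sketch of the idea (the frame, matte, and oversized cloud plate are stand-ins):

```python
import numpy as np

def sky_replace(fg, clouds, matte, frame_num, drift_px=0.5):
    """Where the painted matte is 1.0, keep the foreground; where
    it's 0.0, show the cloud plate; soft matte edges blend the two.
    The cloud still (wider than the frame) drifts slowly rightward
    as the shot progresses."""
    x = int(frame_num * drift_px)             # horizontal drift offset
    h, w = fg.shape[:2]
    plate = clouds[:h, x:x + w]               # moving crop window
    m = matte.astype(np.float32)[..., None]   # H x W -> H x W x 1
    return (m * fg + (1.0 - m) * plate).astype(fg.dtype)
```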

Three 3-way color corrector filters married the foreground to the clouds: A primary correction to fix blacks, whites, and gamma; a secondary to desaturate the blue sky’s reflections in the windows; and a secondary to bring down the sunlit highlights on the roof and gutters.

Similarly, I added clouds to one of the intro shots:

The original shot, and the final composite.

Here the camera executes a pan and tilt, and the trees blow in the wind; no painted matte will work here. I added my H. Chroma Blur filter to clean up the chroma edges (I could have used FCP’s 4:1:1 Chroma Smoother, too, but the H. Chroma Blur gave me better results on this image), then used FCP’s Chroma Keyer to knock out the sky. I dropped in another cloud pic, this time squashed vertically to make the aspect ratio of the clouds (which had been shot at about a 45 degree up-angle) a better match for the viewing angle of the shot.
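(H. Chroma Blur boils down to smoothing only the color channels horizontally while leaving luma sharp; a rough numpy equivalent of the idea, not the actual plugin code:)

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def h_chroma_blur(ycbcr: np.ndarray, width: int = 4) -> np.ndarray:
    """Horizontally smooth the Cb/Cr planes to soften 4:1:1 DV's
    blocky chroma edges before keying; the Y (luma) plane is left
    untouched, so apparent sharpness doesn't suffer."""
    out = ycbcr.astype(np.float32)
    for ch in (1, 2):  # Cb and Cr planes
        out[..., ch] = uniform_filter1d(out[..., ch], size=width, axis=1)
    return out.astype(ycbcr.dtype)
```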

To match-move the clouds, I simply kept the lower left corner of the cloud image (easily seen by turning on image overlays, and selecting the cloud layer as my current clip) affixed to a ground shadow midway between the campaign sign and a tree trunk. It only took nine keyframes to make the motion work. I added a slight gradient to the clouds using the “Overlay” composite mode to better marry them into the left-to-right shot lighting:

The overlaid gradient; tracking the corner of the clouds.

The gradient only needed two keyframes to sufficiently sell its movement.

If you look at the comp, it’s hard to miss the hard shadows and slanting sunlight in the foreground shot, but the feel of the scene matches the shots around it well enough that it doesn’t call attention to itself.

In other shots, I simply used secondary correction to desaturate and darken the sky. Only one shot remains entirely unfixed: the shot from Allan’s POV as Emilio shouts up to him on the morning of the election.

The look of the film was supposed to be a richly saturated, warm, happy-happy-joy-joy look, as an ironic contrast to the bleakness of the story. Fortunately, the bulk of our scenes were shot on bleakly overcast days, which served handily to protect us from irony poisoning: while I could re-color and re-saturate the low-con images, I couldn’t re-create the warm glow of directional sunlight in overcast scenes (and no, I don’t want to hear about your prowess with Power Windows, thanks just the same).

Additionally, the Stinson footage shot with the wrong color filter showed a considerable loss of color range and fidelity when color-corrected back to normality; this is to be expected with 8-bit YUV-encoded video, but even so it was distressing. Clearly, any look that boosted saturation levels would only show these defects more clearly.

We chose instead to go with a faded, desaturated, somewhat sepia-toned look. It’s a more “standard” look for future-dystopia programs (though I stayed away from the trendy but overused greenish tint), it de-emphasized the differences in color rendition between our various source clips, and it didn’t stress the limits of 8-bit 4:1:1 DV material.

Besides, it looked good, and it fit the material.

In the base sequences I simply tried to get all the clips to look the same. Almost every clip has its black levels pulled down and its whites boosted, and most have some midtone / gamma corrections. Whites were white-balanced, blacks were black-balanced, and color saturations were made consistent; I simply strove to undo the damage done by overuse of the Pro-Mist filter and/or incorrect white balancing.

Most of the shots were corrected using FCP’s 3-way color corrector, but I used Synthetic Aperture’s Color Finesse plugin for all the unbalanced Stinson footage and a few other difficult shots. Color Finesse (CF) gave me a “traditional” colorist’s workspace with multiple ways of manipulating both primary and secondary corrections; I could have done the work with multiple layers of FCP’s 3-way corrector (using “Limit Effect” to pick off colors for secondary correction), but I found after some fiddling about that I was able to get the desired results a lot more quickly and precisely using CF, mostly (I think) due to CF’s more powerful user interface.

One of two secondaries used in CF to fix yellow highlights caused by a radical primary correction.

(Why didn’t I use Color, which comes in the box with Final Cut Studio 2? I started this work before FCS 2 came out, and besides, I could use CF as an inline plugin, instead of round-tripping sequences between FCP and Color. Different needs, different tools.)

The “look” was applied to a nest of the edited show within a ColorGrade sequence, so I could play with a single set of color-correction parameters and see how it affected the entire show. I probably wouldn’t try this on a longer show where the look evolves as the show progresses, but it worked here as we wanted the same look throughout. The look itself turned out to be almost disappointingly simple: desaturate, and push the midtones to orange. It was easily done in the 3-way color corrector.

The FCP 3-way correction used for “the look”.
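In numpy terms, the whole look is only a few lines; a sketch with invented amounts (the real values live in the 3-way corrector):

```python
import numpy as np

def the_look(rgb, desat=0.35, warmth=(1.10, 1.00, 0.88)):
    """Desaturate, then push the midtones toward orange: scale R up
    and B down, weighted by a curve that peaks at mid-gray so blacks
    and whites stay put. RGB values are 0..1; amounts are guesses."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])      # Rec. 601 weights
    flat = (1.0 - desat) * rgb + desat * luma[..., None]
    mids = 4.0 * luma * (1.0 - luma)                  # 0 at black/white, 1 at mid-gray
    gain = 1.0 + (np.array(warmth) - 1.0) * mids[..., None]
    return np.clip(flat * gain, 0.0, 1.0)
```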

VFX

We initially planned to have many more shots of a post-apocalyptic San Francisco—aerials of deserted freeways, a bum feeding books to the fire in an apartment halfway up a derelict skyscraper, etc.—but as the edit came together I felt that too much of that would distract from the central story and simply increase the running time (which is on the long side as it is). We cut things back to a couple of establishing shots and some distressing of the location footage. Several people contributed to the VFX, each with his own toolkit; I’ll just discuss a few of the effects I worked on.

The deserted highway is a Photoshop composite of four shots: two of the freeway itself, taken a few seconds apart; the trashed car; and the ever-present clouds.

In Photoshop Extended CS4, all I’d have to do to make a deserted highway is take several shots with cars in different positions and stack them in Median mode to eliminate the cars. As I was using Photoshop 7, it was hand work: superimpose and align two handheld pix, selectively paint out cars on the top layer to reveal blank road on the underlying shot, use the Clone Stamp tool to paint over areas where there were cars in both shots, and perform minor retouching.
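(The CS4 Median trick is just a per-pixel vote across aligned frames; in numpy it would look something like this sketch, with a hypothetical list of aligned frames:)

```python
import numpy as np

def depopulate(frames):
    """Per-pixel median across several aligned shots of the same
    scene: a car that occupies a given pixel in only a minority of
    the shots gets voted out, leaving empty roadway."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.median(stack, axis=0).astype(frames[0].dtype)
```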

Then I took the mobile phone snap of the mashed-up car, erased its background, dropped it onto the freeway, and resized it to fit. I also used the Free Transform tool to distort it a bit, so it would better match the perspective and angle of the background. I touched up the paint (where the Treo’s simple cam blew out on the highlights) with the usual Photoshop tools, and adjusted brightness and saturation to make it fit in better.

I duplicated the car, put the dupe layer below the original, and used the Levels control to make it black. I added a blur filter and reduced the layer’s opacity, offsetting it down and to the right: instant ground shadow. A similar dupe layer was used for the shadow on the jersey wall. Both shadows got trimmed a bit with the erase brush, and hey presto! Abandoned car!

The original; car distorted, repainted, and color corrected; lower shadow; lower and rear shadows; car and shadows composited with background.
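The shadow recipe translates directly, if you’re curious; a numpy sketch with invented offsets and amounts:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ground_shadow(car_alpha, offset=(10, 14), blur=5.0, opacity=0.6):
    """Instant shadow from the car's own silhouette: shift the alpha
    matte down and to the right, soften it, and return a darkening
    mask; multiply the background by this before compositing the car
    on top."""
    s = np.roll(car_alpha.astype(np.float32), offset, axis=(0, 1))
    s = gaussian_filter(s, sigma=blur)
    return 1.0 - opacity * np.clip(s, 0.0, 1.0)
```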

For two nighttime shots, cars and lights had to be removed. Both shots were lockdowns, so these were Photoshop parties.

The “Driscoll For Decency” shot was an on-location focus pull between poster and street, but there was too much flickering light from TVs in apartments and lights reflected off the cars—and most of the cars had to go.

Driscoll’s part of town, before and after.

I took one freeze-frame with the background focused, and one with the poster focused, and dissolved between them. It’s cheap and sleazy, but it gets the idea across.

I then painted out stuff I didn’t want to see. I’m far from the world’s greatest painter, so Photoshop’s Clone Stamp tool came in handy. I threw in a couple of Driscoll posters on the far walls, extracted from a frame in the rally scene, and I was good to go. For the most part, I doctored the in-focus background still, then took the reworked chunks, dropped them onto the frame with the out-of-focus background, and blurred them to match. I took advantage of Photoshop to give my 1/3″ chip camera shots more filmic depth-of-field: I blurred the out-of-focus part of each frame a bit more. As I still had FCP open behind Photoshop, and it was set to automatically relink changed media, I could pop into FCP and preview my changes in context, fine-tuning as I went.

Finally, I took the cross-dissolving stills, nested them in a sequence, and applied the Add Noise filter:

Type: Gaussian (Flim Grain) [yes, it really says, “Flim Grain”!]
Blend Mode: Add
Autoanimate: Yes
Mix: 100%

(I should add that most of the VFX using stills had some manner of “Add Noise” applied to them to better match them to the noise inherent in the rest of the show.)

The “Gary Garvitch: Safety, Order, Stability” (S.O.S., get it? Naah, nobody else got it, either) shot was a lockdown without a focus pull, but with a flapping poster.

Garvich’s neighborhood, before and after.

I exported a freeze frame, created a soft-edged matte to mask off the part of the frame where motion occurred, and depopulated the still frame through the good offices of Photoshop. Again, I ain’t da woild’s greatest ottist, but it’s a brief take on a dark night, and I threw in a bit more filmic background blur to cover my many sins.

Stack ’em up in FCP—original shot, matte, Photoshopped still with an Add Noise filter applied—and again we have a vision of a San Francisco so deserted, so desolated, that parking could actually be found in the Marina district.

I never said it was a realistic film.

Allan runs past posters several times, though no posters were present in any of the original running clips. Several folks worked on these comps; I’ll describe the one I did.

First, I needed posters: I grabbed still frames of the Driscoll and Garvich face posters from the political rally footage, composited the Garvich poster from two different shots, and flattened out the perspective with Photoshop’s Free Transform tool. The resolution stunk, but hey: it wound up even smaller in the frame than it originally was, so it was plenty good enough.

The text poster was a PDF from the pick-up shoot, so I just used it as it was.

I pulled a still frame of the locked-down scene into Photoshop and dropped my three posters on top of it. I eroded the posters’ edges with an Eraser brush, scribbled a couple of tears across them (selecting the torn areas and Free Transforming them with slight rotations, to mimic a sloppy paste-up job), and scribbled some “wrinkle” lines on separate “distress” layers, which then were embossed to look like proper wrinkles, and composited using the Overlay transfer mode to look more natural. In the final composite the wrinkles weren’t visible enough, so I doubled them up: they look overdone in Photoshop, but in the scene they’re just strong enough to “sell” the image.

I duped the background image and used the Emboss filter on it, converting it to a gray, bas-relief image, and set its opacity to 17%. By compositing this image onto the posters, using the Linear Light transfer mode, I gave the flat posters the texture of being pasted onto the underlying brick wall (the choice of opacity and transfer mode were done by trial and error, just like everything else!).
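(Linear Light is commonly modeled as base + 2 * blend - 1, clamped, which is why a mostly-mid-gray emboss layer works so well here: only the relief edges move the result. A numpy sketch:)

```python
import numpy as np

def linear_light(base, blend, opacity=0.17):
    """Linear Light blend, values 0..1: mid-gray (0.5) in the blend
    layer leaves the base unchanged; lighter pixels dodge it, darker
    pixels burn it. With the emboss layer mostly mid-gray, only the
    brick-relief edges print texture through the posters."""
    lit = np.clip(base + 2.0 * blend - 1.0, 0.0, 1.0)
    return (1.0 - opacity) * base + opacity * lit
```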

The poster layers were duped, blurred slightly, and made all black with the levels control; slipped under the originals at 29% opacity, they work as shadows, slightly separating each poster from its surroundings.

The finished Photoshop file, with extra layers turned off but present in case of need.

The Photoshop file and the original clip were opened in Apple Motion for the fun stuff: rotoscoping the posters in behind Allan as he runs past. I found it easiest to make a three-layer stack: the original shot on the bottom, then the posters, then the original shot (again) on top. It’s that topmost shot I rotoscoped, simply building Bezier masks for the bits of Allan that pass in front of the posters. Because Allan is a complex fellow, I masked individual components separately: body and backpack, arm, hand, head, left leg, right leg, top strap, middle strap, and bottom strap. I found it simpler to separately animate a bunch of simple shapes than to try to make a mask that tracked all of Allan’s motion in a single shape.

I made and animated the masks starting in the middle of the shot, since that was easier (Allan is in front of contrasting posters), and ran into a gotcha: Motion was happy to maintain a mask after the last keyframe I assigned to it, all the way to the end of the shot, but for some reason it wouldn’t leave the mask parameters before the first keyframe alone. When I went back toward the head of my shot, I found my control points drifting at random, like errant sheep wandering across the frame! It took a bit of sheepdog work to corral them back into some semblance of my original shapes; fortunately I only had to backtrack as far as the first frame where Allan starts to occlude a poster. While the shot runs almost eight seconds (including the fade-to-night and the dissolve to the skyline), I only had to cover the 2/3 second where Allan is in front of the posters.

The posters themselves had dynamic noise added, as well as contrast, desaturate, and color balance filters, and some gaussian blur to soften them to match the background.

Motion, showing masks used for Allan, as well as the filters used on the posters. For clarity, the background plate has been turned off.

The comp was rendered in Motion using ProRes422 HQ, then inserted into FCP atop the original shot.

Losing the light: The nighttime skylines had too many skyscraper lights for a desolate city. I took a still frame from each shot into Photoshop, and painted a simple mask image over every light I wanted to extinguish. In FCP, I used that mask to superimpose a duplicate of each shot, shifted up or down by six or ten scanlines, so that the unwanted lights were replaced with unlighted elements just above or below. Since the blowing clouds and “film grain” were the same, no further work was needed: a cheap and simple way to turn out the lights.
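Mechanically, that’s a vertical shift under a painted mask; roughly, in numpy (mask and shift values are illustrative):

```python
import numpy as np

def douse_lights(frame, mask, shift=8):
    """Turn out the lights: where the painted mask is 1, show the
    same frame shifted down a few scanlines (dark building replaces
    lit window); elsewhere, pass the original frame through."""
    shifted = np.roll(frame, shift, axis=0)
    m = mask.astype(np.float32)[..., None]
    return (m * shifted + (1.0 - m) * frame).astype(frame.dtype)
```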

Similarly, a focus-pull from a distant cityscape to Allan running towards the camera showed a busy stretch of freeway. I used FCP’s 8-point garbage matte to cut out the freeway and reveal a chunk of the same shot, replicated on another layer, shifted to hide the offending cars. With a bit of edge softening, the dirty deed was done—and it held up well enough even with the shot’s camera shake that no motion tracking was needed.

Titles were created in Motion, using both edge and fill gradients, and the main title had an animated glow (using Motion’s Highlight text behavior) and a reflection added (a simple clone layer, flipped).

Initially I had the .motn project files embedded in my FCP timeline, to allow for edit-in-place simplicity. However, most of the previews, and several of the renders, showed field-rendering glitches, as if a fast move lasting one frame had taken place. Usually this occurred at the start of a title, but once it happened in the middle of one. The glitches didn’t appear in Motion, and the same projects rendered out of Motion itself were fine, so I simply rendered them out using the Animation codec and imported them into FCP as clips. I never figured out why the embedded projects had problems (I asked Motion guru Mark Spencer about it at a MacFilmmakers meeting, and he agreed that there were still unexplained and unexplainable bugs in the FCP/Motion interchange).

So there you have it: despite screwing everything up in production, we managed to rescue it in post. Perfect? No. Watchable? I think so.

Mind you, it would have been a lot simpler to have simply done things right in the first place… but I guess there are some lessons we’ll never really learn…

http://www.meetstheeye.com/films/1m1v/1m1v.php


