It’s one thing to shoot video. It’s another to shoot for post. That is, to manage the way you shoot so that your footage arrives in a form convenient to the workflow of an editor or post production supervisor. Believe it or not, you stand to benefit in a residual way: not only will you become a more professional videographer because of this, you will become a better videographer, period.
The reason is simple. Eventually, what you capture “in camera” will go through an additional process, resulting in a format viewable across different mediums. And the final product may surprise you. You will discover that your footage is just one aspect of a context that goes beyond your initial participation. You have, in effect, handed over your contribution to be deconstructed, recontextualized and reinterpreted by someone else. That handoff demands a certain amount of attention to detail from the videographer. Without it, the process that follows could slow down or shut down altogether, and you’ll find yourself in the awkward position of having to start over from scratch.
Your involvement as a videographer is not all that different from being a still photographer on a commercial shoot. You provide images, yes. But the “vision” is more or less pre-determined and art directed by a creative team representing the advertising firm or magazine that has hired you. With video it can amount to the same thing. You may be hired to direct that shoot. But you do so under the provision that it meets the needs of the client you are working for. That same provision applies to those working in post production as well. So a form of communication needs to be set up between the end client, the producer of the project, the post production people and you.
Now sometimes you’ll find yourself filling the roles of producer, editor, director and camera operator all at once. Sometimes you’ll take on two out of the four. But if someone else has been hired to edit your footage, you need to be on the same page when it comes to the final result. Post production is the last phase of the project; the people working in it are completing the client’s vision you initiated.
Let’s start off by understanding what exactly is involved in post production and what happens after your video recording has been delivered to an editor. Chances are the editor receives a number of clips uploaded from your CF or SD card, plus the audio tracks recorded separately during the shoot. (A “clip” is a single take of a moment in an interview or scene recorded on camera. Some last seconds, some last minutes, and some are separated into close-up, medium and wide shots.) Usually a process called “Log and Capture” occurs, where the footage is brought into an editing application, then tagged and labeled electronically.
A Non Linear Editor (NLE for short) refers to a digital editing application used in post production. Popular programs are Final Cut Pro 7 (pictured), Final Cut Pro X, Adobe Premiere and Avid.
Those clips are then organized into “bins,” or folders, and a “sequence” is created. Think of a sequence as a page in a document: it lets the editor bring your footage into a timeline, which begins the editing process. That sequence is set to a specific format, including resolution, frame rate and file type, which more or less establishes the intended final output of the completed piece. The rest is pretty obvious: the editor syncs the audio to the visuals, moves clips around, trims them, adds text and possibly motion graphics until he or she achieves an initial cut. It is then exported to a viewer-friendly file format so that the client can take a look and give their two cents.
The first step is to understand what the final output of the completed project is intended to be. Will it be streamed on the internet? Will your footage be edited into a full ad or promo that will air on an HD TV? Will it be projected on a large screen? Establishing that detail gives you an idea of what you are shooting for. Generally, in my experience, you’re better off recording video at the highest resolution available. For example, if you’re working with an HDSLR, chances are you can shoot up to 1080p HD, so go for broke with that. It’s the editor’s job to scale it down. Unless you are shooting in 4K (more on that later).
Consistency from shot to shot is an obvious factor: you need to strive for continuity. Your subject should not be wearing a hat in one shot and appear bareheaded in another (especially if it’s the editor’s job to assemble your footage so the overall piece looks like it took place in a matter of minutes). The video should also be evenly lit from shot to shot. If you choose a frame rate, stick to it. One clip should not be more or less exposed than another. Again: consistency. Because it is not only the editor’s job to shape all of your footage into a cohesive whole, it’s also up to them to color correct it. The less consistent your exposure or white balance, the more challenging it becomes to match each clip color-wise.
(By the way: color grading is not the same as color correcting. Color grading is the more creative part of the process, where you establish an aesthetic look for the overall video, such as emphasizing specific colors while desaturating everything else, or giving the video a deliberate hue or visual style. Color correcting is simply the science of making the color and tone of all the video clips match up.)
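To make the exposure-matching point concrete, here is a minimal sketch of how you might flag clips whose brightness drifts from the rest of a batch. The clip names and luma values are hypothetical; in practice an editor would read average luma off a waveform monitor or a media analysis tool, not a hand-typed dictionary.

```python
# Sketch: flag clips whose average luminance drifts from the batch mean.
# Values are hypothetical 0-255 average luma readings per clip.

def flag_inconsistent_clips(clip_lumas, tolerance=10.0):
    """Return clip names whose average luma deviates from the
    mean of all clips by more than `tolerance`."""
    mean = sum(clip_lumas.values()) / len(clip_lumas)
    return sorted(name for name, luma in clip_lumas.items()
                  if abs(luma - mean) > tolerance)

clips = {"take01.mov": 118.0, "take02.mov": 121.5, "take03.mov": 96.0}
print(flag_inconsistent_clips(clips))  # ['take03.mov'] - noticeably darker
```

The flagged clip is the one the colorist will spend the most time wrestling back toward the others, which is exactly the time you save by exposing consistently on set.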
In other words, you want to achieve as much as you can in camera without depending on the editor to fix your footage in post. Not everything is fixable in post, and what is fixable is often time consuming to deal with. Maintaining consistency with your footage should be an obvious point to make. Now let’s get into the not so obvious.
Should you be hired as one of several videographers working on the same project – it happens, especially when the project involves acquiring footage from different locations – your shooting format has to be consistent with everyone else’s. So make sure you are all shooting at the same frame rate and resolution, and go from there. That way the editor does not have to convert some footage formats to match others. For example, 24 FPS footage can be tricky to work with when dropped into a 30 FPS timeline, and time will be wasted converting it to a comparable format.
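That consistency check can be automated before anything is uploaded. The sketch below assumes each clip’s metadata has already been read (with a tool such as ffprobe) into a simple name-to-format mapping; the file names and formats are made up for illustration.

```python
# Sketch: verify every delivered clip shares one frame rate and resolution,
# so the editor never has to convert footage mid-project.

def find_format_mismatches(clips):
    """clips maps clip name -> (width, height, frames_per_second).
    The first clip sets the reference format; any clip that differs
    is returned so it can be flagged or re-shot before delivery."""
    reference = next(iter(clips.values()))
    return sorted(name for name, fmt in clips.items() if fmt != reference)

batch = {
    "locationA_01.mov": (1920, 1080, 30),
    "locationB_01.mov": (1920, 1080, 30),
    "locationC_01.mov": (1920, 1080, 24),  # shot at the wrong frame rate
}
print(find_format_mismatches(batch))  # ['locationC_01.mov']
```

Catching the odd clip out at this stage costs seconds; catching it in the timeline costs a conversion pass.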
Let’s briefly go off topic and discuss shooting in 4K vs. 2K (or less). Those lucky enough to shoot at 4K resolution should check with their producer before taking that leap. A 4K image can be very sexy indeed, but if the project will output to 1080p, there may be no need to shoot in 4K. It can actually be a hassle for those working in post, because only recently released versions and updates of digital editing systems can handle the 4K format efficiently. There is, however, one advantage to shooting at this resolution even if the final project will be delivered at less than 4K: the editor can scale up the image in post to simulate a close-up without losing any sharpness or clarity. If the editor is willing to make those adjustments, that saves you the trouble of using more than one camera (one for the close-up, another for the wide shot) or readjusting your zoom. Now let’s get back to the topic at hand.
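The math behind that 4K “punch-in” is worth seeing once. Cropping a centered window out of a 3840×2160 frame and delivering at 1920×1080 means the image is only ever scaled down (or not at all), never up, so no sharpness is lost. The resolutions and zoom factors below are assumptions for illustration, not a prescription.

```python
# Sketch of the 4K punch-in: a centered crop that simulates a close-up
# in a 1080p timeline without requiring any upscaling.

def punch_in_crop(src_w, src_h, out_w, out_h, zoom):
    """Return (x, y, w, h) of a centered crop simulating a `zoom`x close-up.
    Raises ValueError if the crop would fall below the output resolution,
    i.e. if the close-up would force the editor to upscale."""
    crop_w, crop_h = int(src_w / zoom), int(src_h / zoom)
    if crop_w < out_w or crop_h < out_h:
        raise ValueError("zoom too strong: crop would need upscaling")
    return ((src_w - crop_w) // 2, (src_h - crop_h) // 2, crop_w, crop_h)

# A 2x punch-in from UHD 4K to 1080p uses the full pixel budget exactly:
print(punch_in_crop(3840, 2160, 1920, 1080, 2.0))  # (960, 540, 1920, 1080)
```

In other words, UHD 4K buys you up to a 2x close-up on a 1080p delivery; push past that and you are upscaling, which defeats the purpose.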
In the second part of this series we covered audio recording. If you’re not recording audio in camera – and you shouldn’t rely on it, as most DSLRs aren’t equipped to record audio with any precision – then you’ll be supplying that audio track as a separate file. While most editors have access to a plug-in or application that will sync your audio to the visuals automatically, you might want to use a clapboard or “slate” just in case. Simply hold it up to the camera lens right in front of your subject, then “clap” the clapboard. The editor will take the frame where the clapboard makes contact and match it to the “clap” on the audio track.
A typical clapboard or “slate” with a digital readout.
Recently, I edited an interview where the videographer attempted to make use of a digital clapboard. Yet throughout the majority of the footage I received, the clapboard failed to make an appearance; the “clap” would sound on the audio track off camera. Which kind of defeats the purpose, right? Fortunately, I have access to PluralEyes, one of those automatic syncing apps I mentioned earlier. (An editor can also display the waveforms on your separate audio track; it’s fairly easy to match those up to the waveforms of the audio recorded along with your video footage. Unless the footage was shot with the internal mic turned off, in which case nothing will help sync that audio to the visuals and, basically, the editor is up the creek without a paddle because you didn’t use your clapboard properly. In other words, always keep the camera mic on, even if that audio will not be used in the final product.)
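For the curious, here is a toy sketch of the idea behind those automatic syncing tools: slide one waveform against the other and keep the offset where they line up best (the highest cross-correlation). Real applications work on actual audio, with far more sophistication; the spike signals below stand in for the “clap” and are purely illustrative.

```python
# Sketch: find the offset that best aligns an external recording with the
# camera's scratch track, by brute-force cross-correlation.

def best_offset(camera, external, max_shift=50):
    """Return the shift (in samples) of `external` relative to `camera`
    that maximizes their overlap correlation."""
    def score(shift):
        pairs = ((camera[i], external[i - shift])
                 for i in range(len(camera))
                 if 0 <= i - shift < len(external))
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

# A sharp "clap" at sample 30 in camera audio, at sample 10 externally:
cam = [0.0] * 100
cam[30] = 1.0
ext = [0.0] * 100
ext[10] = 1.0
print(best_offset(cam, ext))  # 20: slide the external track 20 samples later
```

Notice that the whole scheme depends on both tracks containing the same sound. Kill the camera mic and there is nothing to correlate against, which is exactly why the internal mic should stay on.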
Another issue I encountered with this same videographer was how overzealous he or she was about stopping and restarting the camera after every take. This was just an interview, but every five to six seconds the camera would stop. Then restart. Clapboard (off camera most of the time – geesh), five or ten more seconds. Stop. It turned out he or she did this forty times, resulting in forty video and forty audio files – for a project that would ultimately run no more than two-and-a-half minutes. That, my friends, was unnecessary and burdensome. Since these files were delivered via a cloud storage service – each file in its own respective folder – I had to download eighty individual files separately. It’s time consuming. So a word of advice: don’t stop the camera at the end of every take. Keep it rolling as long as possible, even if the subject stumbles or pauses and wants to start over. By reducing the times you stop and restart your camera, you reduce the number of files that need to be handled on the post production end. Leave it to the editor to break the footage down into individual clips.
If you plan to shoot more than one subject: there is plenty of material out there on the visual science of juxtaposing shots. Yes, there is a rule, and sometimes rules are made to be broken – but I wouldn’t tempt fate on a professional gig. So here is what you need to know. If, say, you have one person conducting an interview with another, you’ll need more than one camera: Camera “A” faces the interviewer while Camera “B” faces the interview subject. The rule I mentioned refers to keeping both cameras within a 180-degree arc. Think of your on-camera talent as sitting across from each other on an axis line running through a circle. Both cameras should be placed anywhere along the 180-degree circumference (one half of the circle) facing this axis, but neither camera should cross the axis or move outside that half of the circle. By sticking to this rule, you preserve the psychological effect of cutting from one person talking to the other: it’s easy to take in visually, whereas the alternative can be too jarring to process. Just watch a two-camera scene from any movie featuring two characters and you’ll see what I mean.
Example A: two juxtaposing shots following the 180-degree rule. If this were viewed on a video timeline, you would cut from the pink object talking, to the green object responding.
Example B: this does NOT follow the 180-degree rule. Instead of cutting to an angle that registers in a natural way, the second shot instead appears as if the pink object was switched out for the green. This is too jarring for the viewer.
Since this is common practice, the editor is also aware of the rule and will organize your shots accordingly – provided you have followed it. If you haven’t, the editor will have a harder time getting that scene to work in post.
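The 180-degree rule is really just plane geometry: the axis of action is the line between your two subjects, and both cameras must sit on the same side of it. The sign of a 2D cross product tells you which side a point is on. The floor coordinates below are hypothetical, purely to illustrate the check.

```python
# Sketch: check the 180-degree rule by testing which side of the axis
# of action (the line between the two subjects) each camera sits on.

def side_of_axis(subject_a, subject_b, camera):
    """Return +1/-1 for the two sides of line A->B, 0 if on the line."""
    ax, ay = subject_a
    bx, by = subject_b
    cx, cy = camera
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (cross > 0) - (cross < 0)

def obeys_180_rule(subject_a, subject_b, cam1, cam2):
    s1 = side_of_axis(subject_a, subject_b, cam1)
    s2 = side_of_axis(subject_a, subject_b, cam2)
    return s1 != 0 and s1 == s2  # same side, neither camera on the axis

# Subjects face each other along the x-axis; both cameras below the line:
print(obeys_180_rule((0, 0), (4, 0), (1, -2), (3, -2)))  # True
print(obeys_180_rule((0, 0), (4, 0), (1, -2), (3, 2)))   # False: crossed
```

The second call is Example B above: one camera has crossed the axis, and that is the setup that reads as the subjects swapping places on screen.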
Let’s recap by listing the basic bullet points regarding shooting for post.
• Always be on the same page with your producer as to what the intended output for the video will be (internet streaming, HDTV viewing, large screen presentation, etc.). Otherwise, always shoot in the highest resolution your camera can manage (if that happens to be 4K, check with your producer first).
• Always be on the same page with your production team when it comes to frame rate and resolution.
• Shooting with sound: when using a clapboard, always make sure it appears on camera. And even if you’re recording audio on a separate device, always keep the internal microphone of your camera on.
• Always white balance.
• Be mindful of continuity throughout.
• Be mindful of the number of files you are creating when stopping and restarting your camera. It adds up and can be a headache for an editor to deal with on the post production end.
• Shooting more than one person: be aware of the 180-degree rule.
Also: if you plan to use any in-camera effects such as a picture profile (black & white mode, neutral mode, etc.), please check with your production team first. And if you plan to work in a specific profile that preserves more dynamic range in your image (such as Neutral or Technicolor’s CineStyle profile), please inform the post production team so they can color correct accordingly.
One more tip (this is it, I promise): should you be shooting an interview – be it for a documentary or a promotional piece – cover your bases by shooting some “B-roll” footage. That is, record video of your subject’s surroundings. Heck, take footage of your subject walking around and interacting with his or her environment. This gives the editor options, whether to break the monotony of the interview by inserting a complementary visual or to cover up any jump cuts that occur between edits.