
From Photography To Video Part 3: Frame Rate, Shutter Speed and the Moving Image

Thus far in this series on making the transition from photography to video, I have offered an overview of what to expect beyond the tools and knowledge one has gained as a photographer. The first installment focused on the gear and anticipated accessories. The second focused on recording audio. In this article, I felt it was important to explain the differences between shooting for motion as opposed to stills, since videography – filmmaking, if you will – requires a change of mindset when it comes to setting aperture and exposure, especially when dealing with the added element of frame rate.

It’s not really all that tricky. It’s actually an easy adjustment to make. But before we get into the how, I need to address the why. And this requires a bit of a history walk-through when it comes to the development of motion picture technology.

When the Lumière brothers premiered “Arrival of a Train” back in 1896, it elicited the kind of “oh wow” reaction one would get from virtual reality or a 4K UHDTV today. It blew the minds of pre-20th century audiences. While this represented the first attempt to project a motion visual onto a screen, the practice of creating the illusion of moving images had already existed within the confines of flip books and zoetropes. It wasn’t until 1878, when photographer Eadweard Muybridge shot a series of sequential images of a running horse, that the idea of applying movement to a series of still photographs came into play. As both the movie camera and the film projector continued their development into the early part of the 20th century, so did the frame rate.

The frame rate refers to the frequency with which sequential images are either recorded through a camera or projected onto a screen per second. Because cameras were hand-cranked right up to the late 1920s, the frame rate could vary from 12 fps to sometimes 26 fps. However, with the arrival of sound, a standard had to be set so that the audio could be consistently matched to the visual. Hence, the 24 fps model remains the standard to this day.

As mentioned in the first installment of this series, 24 fps is the reason why film looks like “film” as opposed to video or a live broadcast. An increase in fps results in a sharper image and less motion blur. In fact, film technology experts have warned against projecting anything faster than 72 fps because the human mind cannot process that much information. An “uncanny valley” response would go into effect – the image would look too realistic to the point where mere mortals couldn’t handle it.

The key word in the above paragraph is “projecting,” by the way. It is one thing to project or transmit a visual at anything faster than 60 fps; it is a somewhat different thing to shoot higher than 30 fps. With analog cameras, shooting at a frame rate of 60 fps or higher would result in slow motion when played back at the standard rate, whereas shooting at less than 24 fps would result in fast motion (hence the sped-up feel of silent movies, as the majority were shot at approximately 16 fps). Let me try to simplify the relationship between the projection/transmission of a visual and the actual shooting thereof:

1) If you shoot at 48 fps or higher with the intention of projecting or transmitting that visual at 24 fps or less, then your image will appear in slow motion.

2) If you shoot at 48 fps or higher with the intention of projecting or transmitting that visual at the same frame rate (a 48 fps visual projected/transmitted at 48 fps), then it will not appear in slow motion but will appear sharper due to the reduction in motion blur.
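If it helps to see that relationship as simple arithmetic, here is a minimal sketch in Python. The “playback_speed” helper is my own illustration, not anything pulled from a camera menu or an editing tool:

```python
# A minimal sketch of the capture-versus-playback arithmetic described above.
# "playback_speed" is a hypothetical helper, not part of any camera or NLE API.

def playback_speed(capture_fps: float, playback_fps: float) -> float:
    """Apparent speed of motion on screen: 1.0 is real time,
    below 1.0 is slow motion, above 1.0 is sped up."""
    return playback_fps / capture_fps

print(playback_speed(48, 24))  # 0.5 -> half-speed slow motion (rule 1 above)
print(playback_speed(48, 48))  # 1.0 -> real time, just sharper (rule 2 above)
print(playback_speed(16, 24))  # 1.5 -> the sped-up feel of silent films
```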

Ergo, remember all that hype surrounding the HFR technology developed for the “Hobbit” series of movies directed by Peter Jackson? HFR meaning “High Frame Rate,” aka 48 fps? To view that footage as Jackson intended would require a projector – digital or analog – that could transmit said footage at the same frame rate at which it was shot. (This isn’t new technology, by the way. Special effects legend Douglas Trumbull developed a 60 fps system called Showscan in the early 1980s. Apparently it was so sharp in its clarity it was almost like experiencing 3-D without the glasses.)

Most HDSLR cameras released within the past four to five years are capable of shooting at 24p, 30p, 60p, and sometimes 120p. (“P” is short for “progressive scan.” The only difference between “fps” and “p” is that “p” is applied to the digital recording of video while “fps” is generally applied to the shooting of analog film. The digital version still approximates a “per second” standard.) On some cameras you may come across an additional 50i or 60i frame rate. That “i” is short for “interlaced,” which refers to the setting you’d use when recording for broadcast. But you’ll never have to worry about that unless you plan to shoot video that will be transmitted into a live feed. Even if you wanted to move in that direction, you’d most likely have to upgrade to an altogether different kind of camera anyway.

 

A repost of the image I supplied with the first part of this series: the difference between progressive scan and interlaced technology. “Progressive” (left) creates the illusion of a moving image through the sequential display of complete still frames. “Interlaced” (right) refers to moving images broadcast as alternating sets of lines drawn across a monitor or screen, like an old-school television set.

 

So let’s stick with what you have: a DSLR that shoots 1080p HD. And let’s say you are about to shoot some video and you choose to work with a 24p (or fps) frame rate because, well, you like that “film look.” So where do you go from here?

This is where the traditional mindset of still photographers differs from that of videographers. To achieve a certain look, the still photographer can adjust ISO, aperture and shutter speed however they want. Ideally, they would keep the ISO as low as possible to avoid that pesky grain or noise. Yet, depending on the lens, the more light you let into your camera by slowing down the shutter speed and widening the aperture, the blurrier your image can become. This will also depend on the focal length of your lens. In fact, shallow depth of field is traditionally accomplished by standing back from a subject with a longer lens and focusing tightly on that subject. That’s not necessarily how it works when shooting video.

Allow me to digress a bit. Only recently has one been able to apply f-stop, aperture and ISO settings to videography. Part of this is due to the quality of HD video made available in DSLRs like the Canon 5D Mark II. Although this way of working already existed within cine cams like the Arri Alexa and RED models, that kind of gear was prohibitively expensive for the budget-minded cameraman. Also, the technology required to capture video was not dependent on the mechanics of aperture, ISO, etc. This is because (up until very, very recently) video technology was not progressive but interlaced. This negated the use of terminology associated with DSLRs, SLRs or analog movie cameras at the time. For example, a videographer would not adjust exposure according to “f-stop” but to “gain.” And the shutter would be set in degrees as opposed to fractions of a second.

So let’s get back to that mindset while shooting video. First: set your frame rate. We’re already using 24p as an example. Now set your shutter speed. The general rule of thumb when shooting for motion is that the shutter speed should be double the frame rate. So if your frame rate is 24p, the shutter speed should be 1/48. I’ll explain why in a moment, but I’d like to point out that not all DSLRs have that 1/48 option. Which is fine. Don’t worry. Most DSLRs do have a 1/50 option. That’s the next best thing, so go with that if you don’t have the other. And then adjust aperture and ISO accordingly.

 

Most DSLRs do not offer a 1/48 option. One of the rare exceptions is Panasonic’s Lumix DMC-GH4 (right). Most cameras, like the Canon 5D Mark III (left), allow you to set the shutter speed to 1/50, which is a close enough approximation.
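For those who think in code, here is a hedged sketch of that rule of thumb and the “next best thing” fallback. The list of available shutter speeds is hypothetical, not any specific camera’s menu; consult your own body for the real options:

```python
# Rule of thumb: shutter denominator = 2 x frame rate (so 24p -> 1/48).
# The list below is an illustrative set of options, not a real camera's menu.
AVAILABLE_SHUTTER_DENOMINATORS = [30, 40, 50, 60, 80, 100, 125]

def nearest_available_shutter(frame_rate: float,
                              options=AVAILABLE_SHUTTER_DENOMINATORS) -> int:
    """Return the shutter denominator closest to the ideal 2 x frame rate."""
    ideal = 2 * frame_rate
    return min(options, key=lambda d: abs(d - ideal))

print(nearest_available_shutter(24))  # 50 -> shoot 24p at 1/50 if 1/48 is missing
print(nearest_available_shutter(30))  # 60 -> shoot 30p at 1/60
```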

 

Should you be using a very fast lens, the wider the aperture, the greater the potential for shallow depth of field. This depends on how bright your lighting is and on your ISO setting, of course. But the shutter speed should never change. Unless…

And here is where we get into the relationship between frame rate and shutter speed, an elaboration promised in the first entry of this series. By keeping the shutter speed at double the frame rate, you avoid both excess blurring in the visual (which creeps in should you go less than double) and a look that comes off as sped up or “flickery” (which happens when you triple the shutter speed or more). That’s not to say an interesting effect could not come about by readjusting the shutter speed. For example, most of you have seen Danny Boyle’s horror classic “28 Days Later.” You know that weird, almost sped-up, jittery look applied whenever the rage-virus antagonists appeared on screen? That was most likely achieved by increasing the shutter speed well beyond double the frame rate. (Just for the record, those misunderstood creatures were not “zombies,” as they are not undead. They were virally infected humans. I thought it bore mentioning because, well, never mind.)
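If you want to see how far a given shutter speed strays from that double-the-frame-rate sweet spot, here is a small sketch using the shutter-angle way of thinking mentioned in the digression above. The look descriptions and thresholds are my own rough labels, not hard rules:

```python
# Shutter angle in degrees: 360 x frame_rate / shutter_denominator.
# 1/48 at 24 fps gives the classic 180 degrees; faster shutters close the angle.

def shutter_angle(frame_rate: float, shutter_denominator: float) -> float:
    return 360.0 * frame_rate / shutter_denominator

for denom in (24, 48, 96, 192):
    angle = shutter_angle(24, denom)
    if angle >= 270:
        look = "heavy motion blur (shutter slower than double the frame rate)"
    elif angle >= 135:
        look = "the familiar filmic blur (the 180-degree sweet spot)"
    else:
        look = "crisp, staccato, '28 Days Later'-style jitter"
    print(f"24 fps at 1/{denom}: {angle:.0f} degrees -> {look}")
```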

So there you have it. A dissertation on the history of the motion picture frame rate as it relates to shooting video today. Now that you are equipped with a certain amount of knowledge, I encourage you to go out and experiment with different settings. When it comes to everything else, the same photography principles apply. For example, if it’s an incredibly bright day but you want to achieve a look that requires letting in more light without overexposing certain elements, then you would use an ND filter just as you would in still photography (whether you want to capture more detail in the sky or, again, go for that shallow depth of field). Try playing around with different lenses as well.
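And since ND filters work in stops on video just as they do in stills, here is one last hypothetical sketch, with made-up f-numbers, of how much ND it takes to open up the aperture without touching that locked shutter speed:

```python
import math

# Each stop of ND halves the light, so opening the aperture by N stops
# needs roughly N stops of ND to hold exposure at a fixed shutter and ISO.
# The f-numbers below are illustrative, not a recommendation.

def nd_stops_needed(current_f: float, desired_f: float) -> float:
    # Light gathered varies with 1/f^2, so stops = 2 x log2(current_f / desired_f).
    return 2 * math.log2(current_f / desired_f)

stops = nd_stops_needed(11, 2.8)
print(f"f/11 -> f/2.8 needs about {stops:.1f} stops of ND "
      f"(optical density roughly {0.3 * stops:.1f})")
```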

Next item in the series: shooting for post-production. While you may never have to edit the footage you shoot professionally, you will have to be cognizant of how that footage, and the way you shoot it, can impact both the workflow and the creative approach to video editing.
