I used to be frustrated by my inability to use light meters in HD the way I used to in film. A while back I figured it all out, and now I’m going to tell you how to do it too. In particular I’m going to tell you how to use the Zone System with an incident meter. That’s right, not a spot meter, but an incident meter. Read on…
The Zone System in Film
Back in my film days I grew to trust my spot meter almost completely. Occasionally I pulled out my incident meter but, honestly, I didn’t trust it: if I didn’t know how bright the object was that I was metering, how could I know how bright it was in reality?
Ah, I was so young and naive.
Back in those days the Internet didn’t exist. Well, okay, it did, but it was a military research tool. There were local BBSs in Hollywood but none dedicated to cinematography, and DPs who were willing to share their knowledge with a complete stranger were hard to find. (And those DPs who were willing to share couldn’t always explain why they did the things they did, or why those things worked!)
Knowledge was hard to come by. When I heard an explanation as to why something worked I grabbed on to it for dear life if it resulted in somewhat consistent results. Then, when I heard another competing explanation that also resulted in consistent results I had no idea what to believe. (I run into this all the time, as do my local rental houses. Occasionally I’ll get a call from a local rental house asking to subrent some IRND filters for a RED shoot. “They know those don’t work on REDs, right? They have to use hot mirrors instead,” I say. “We’ve stopped telling them that,” says the rental house. “Someone told them they have to use IRNDs and we can’t make them wrong.”)
The best modern resource I’ve found for cinematography-related information, the Cinematography Mailing List, is right at least half the time, and that’s really remarkable. Before the Internet became publicly accessible the ratio of completely incorrect information to correct and useful information was much, much higher, and instead of lots of people learning through blog posts like this one, the way you are now, I had to talk to one person at a time. I could only hope that this person was (1) knowledgeable, and (2) coherent. Most people turned out to be (3) knowledgeable sounding, which is very different.
For guidance I turned to the masters, in particular Ansel Adams. Uncle Ansel (not a relative, sadly) was not only a smart guy but a good teacher who enjoyed writing books, and I devoured “The Negative” eagerly because it gave me a system by which I could produce predictable exposures. I tried it and it seemed to work, although I had to shrink his system from a ten stop range to about eight for color negative.
If you don’t know what The Zone System is you should take a moment and read up on it. This is a good start.
A Google search for “The Zone System” will point you to a lot of great resources.
My biggest problem was trying to place brightness values on the appropriate zones. Uncle Ansel gave hints: Zone VII was bright flesh tone, Zone III was dark rock, etc. None of these were very helpful as I had a hard time envisioning just how dark a dark rock was. Instead, I came up with word games: I’d ask myself where in this scale something fell:
Zone IX: White.
Zone VIII: Textured white.
Zone VII: Light light gray.
Zone VI: Light gray.
Zone V: Middle gray. Not light gray, not dark gray.
Zone IV: Dark gray.
Zone III: Dark dark gray.
Zone II: Black with texture.
Zone I: Pretty much black.
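The word game above amounts to a lookup table. Here’s a minimal sketch in Python, assuming each Zone spans exactly one stop with Zone V pinned to an 18% gray reading (the `stops_to_open` helper is my own illustration, not part of any metering standard):

```python
# Each Zone is one stop; Zone V is middle gray (an 18% gray card).
# Offsets are in stops relative to a spot reading placed on Zone V.
ZONES = {
    "IX":   (4,  "White"),
    "VIII": (3,  "Textured white"),
    "VII":  (2,  "Light light gray"),
    "VI":   (1,  "Light gray"),
    "V":    (0,  "Middle gray"),
    "IV":   (-1, "Dark gray"),
    "III":  (-2, "Dark dark gray"),
    "II":   (-3, "Black with texture"),
    "I":    (-4, "Pretty much black"),
}

def stops_to_open(zone):
    """A spot meter suggests an exposure that renders its target as middle
    gray (Zone V); to place the tone on `zone` instead, open up this many
    stops (negative means close down)."""
    return ZONES[zone][0]
```

So a wall judged “light gray” means opening up one stop from the spot reading, and “light light gray” means two, exactly the kind of correction the word game produces.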
Uncle Ansel’s original system encompassed ten stops, but color negative at the time I shot it had a bit less range, particularly when it was printed. (Most of the film that I shot was for print release, and print stock has a lot more contrast than negative stock.)
So I’d ask myself: “Is that back wall light gray or light light gray?” Depending on the response I’d expose accordingly, opening up either one or two stops from what my spot meter told me. This trick worked most of the time, but sometimes I had difficulty determining what Zone something fell on, particularly if that object had strong color. Saturated color throws The Zone System right out the window.
At such times I’d turn to my incident meter, but I hated doing that as I didn’t feel it was as reliable a tool. It didn’t tell me what was actually happening in the shot, and forced me to trust my eyes–and I REALLY didn’t trust my eyes, especially at the end of a long day. When I was a camera assistant I used to watch DPs meter almost exclusively with incident meters and get good results and I had no idea how they did it, especially when they aimed the lumisphere (the half ping-pong ball on the top of the meter) toward the camera and let it be bright on one side and dark on the other.
The point of this kind of meter is that it averages light, but how does it average it? As I didn’t understand exactly how it averaged light, I didn’t trust it. I once lit half the lumisphere of my incident meter with a hard light from one side and used zero fill on the other to see what it read, and the reading came up at T5.6. Aiming the meter at the light gave me T8, and aiming it the other way read “error.” It wanted me to open up a stop for a “perfect” exposure, yet that would simply make the light side of an object twice as bright and leave the dark side in pitch-black shadow!
(Oddly enough, when metering in situations where the fill doesn’t read “error,” opening up one stop from the key works surprisingly well for sidelight and three-quarter backlight. I haven’t figured out why yet.)
Translating the Zone System to Video: The First Attempt
At the time video cameras were pretty amazing… compared to what came before, which is not saying much. Overexposure latitude above 18% gray was about 2.5 stops on a good day with a great camera. Black was a couple of stops below middle gray. This made my carefully memorized Zone System “word + imagery” trick useless.
I remember a camera assistant talking about how she worked with a DP on a video project and the guy never used a meter: he completely trusted the monitor. “Sure,” I thought, “How else would you shoot video on location? Meters don’t work!”
In college I took a “Lighting for Television” class from a guy who’d been a still photographer on Gone With the Wind and went on to shoot television from its earliest days right up through his last credit on Family Feud. He had a formula: 100fc key, 125fc backlight, 50fc fill. That worked great for studio cameras at the time, and it allowed him to work very quickly with a meter, but that was useless on location–and 99% of my work was on location.
I needed a better way. It took a long time to find it.
A Return to Metering
Most of my early video work was run-and-gun doc-style work, doing the best I could on location with what I had. That was a great education, especially as my eye for film lighting tended to be a bit more sophisticated than those I was shooting for in the doc and corporate world. I could make mistakes and nobody knew better, so I experimented a lot.
As I started shooting more narrative projects and commercials, though, I needed consistency. I tried using a spot meter but the gamma curves were so different between cameras that I’d need to do copious testing and notating to make any sense of my readings at all. (At the time I wasn’t as into camera testing as I am now.) When I tried using my incident meter I discovered that it was really difficult to get it to agree with the manufacturer-stated ISO of the cameras I was using. Part of this was not really understanding where 18% gray fell on a waveform monitor, and the other part was that manufacturers didn’t seem to take the whole ISO thing seriously. (This seems to have changed.)
I ended up lighting by eye and then using my meter to document light levels. The stops I read on the meter had nothing to do with how the camera was set up, but rather gave me basic instructions as to how to reproduce the setup. I just kept the same T/stop on the lens and used my notes to keep the lighting levels consistent.
This worked for a while. Then I started consulting for DSC Labs, maker of fine video charts.
I learned that the brightest matte white object known to man was barium sulfate, a white powder used not only in white paint but as a base reference for reflectivity. It reflected about 90% of the light hitting it–maybe a little more–and that was considered the best “matte white” reflective reference around.
Not long after that I noticed that log curves (these were quite new at the time for video) published by camera manufacturers showed values for both 90% white and 2% black. The black chip on a DSC Labs Chroma Du Monde–not the one in the center but the darkest chip in the gray scale–looked to be about 2% black as measured by my spot meter, so that would appear to be the standard measurement for “video black.”
It’s not quite this simple, though. There’s a difference between matte black and glossy black. DSC Labs owner David Corley described it to me this way:
Imagine that a matte surface consists of a series of microscopic mountain ranges. As light strikes them they bounce light not only back at the viewer but also off of each other. This has the effect of creating flare, or unwanted rays of light that interfere optically with the true color of the object being viewed.
A glossy surface is perfectly smooth. The bad news is that smooth surfaces are more obviously reflective than matte surfaces; matte surfaces are still reflective even though they don’t appear to be, so they look to be more accurate even though they aren’t. The good news is that if you can see the reflection in a glossy surface you can eliminate it, and if you do then you’ll see the color of that surface as clearly as possible.
A printed matte black, such as black printed by an average inkjet printer, has a reflectance value of only about 4.5%. Glossy blacks dig a little deeper and give us 2% or so.
Knowing this gave me an idea, and that idea became my new version of The Zone System…
The New Zone System
Trying to keep a mental image of 10 or 11 gray patches in my head was a bit difficult, to the point where I had to play word games to keep them all straight. I needed to simplify, and looking at log curves gave me that opportunity.
The standard reflectance for the white patch in a video chart is about 90%. That’s about 2.3 stops brighter than 18% gray, and about the same brightness as a really bright white piece of copy paper.
4.5% gray, or the darkest black an average inkjet printer can print, is two stops darker than 18% gray. It’s not very black. The dark patch on a Macbeth ColorChecker falls into this range.
2% gray is a little more than three stops darker than middle gray. It’s about the darkest black you can print on glossy paper. Look at some black lettering on a postcard and you’ll see something that falls between 4.5% and 2% black.
To sum up: just about everything in the world that isn’t shiny is going to fall into a range of reflection between three stops below and two and a half stops above 18% gray. If I remember what 18% gray looks like (a perfectly average middle gray, not light gray nor dark gray), how dark an ink jet printer can print a black square, and how bright a piece of really bright copy paper is, I have enough references to be really dangerous with The Zone System when shooting with a camera in log mode… or even WYSIWYG mode.
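The stop figures above are just base-2 logarithms of reflectance ratios against 18% gray. A quick check, using the reflectance values quoted in the text:

```python
import math

def stops_from_middle_gray(reflectance):
    """Stops above (+) or below (-) an 18% gray card for a diffuse surface."""
    return math.log2(reflectance / 0.18)

print(round(stops_from_middle_gray(0.90), 1))   # 90% matte white:   2.3
print(round(stops_from_middle_gray(0.045), 1))  # 4.5% inkjet black: -2.0
print(round(stops_from_middle_gray(0.02), 1))   # 2% glossy black:   -3.2
```

That’s the whole range of non-shiny surfaces: roughly -3.2 to +2.3 stops around middle gray.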
Here’s the best part: I now know how to use The Zone System with my incident meter.
How to Use the Art Adams (as opposed to the Ansel Adams) Zone System
I shot this image with an F55 in SGamut3.cine/SLog3 while doing some color science tests for Sony:
The shadowy part of the building looked to be about 18% gray in reflectance, so I took a spot meter reading and set that on the lens. The sky was two stops brighter and looked fairly light, so that’s what I expected to see on the monitor. And I did.
What concerned me were the birds. I couldn’t spot meter them, but I knew they were in direct sunlight and my meter said that was two stops brighter than the exposure I’d set for the building. Knowing that bird feathers aren’t terribly shiny, and that the most light a non-specular surface could reflect was about 2.4 stops brighter than middle gray, I did a quick calculation: the sunlight was two stops (incident) brighter than the exposure I’d set for the building, and white bird feathers were two stops (reflected) brighter than that, so the bird feathers had to be around four stops brighter than the exposure I’d set on the lens. The F55 has 6+ stops of overexposure latitude so I’d still hold detail in the highlights (if anyone bothered to look–you can’t tell at this resolution but I was shooting 4K XAVC).
The white birds held nicely at 4-4.5 stops over 18% gray.
To sum up:
I decided the shadows on the front of the building would look great exposed at 18% gray. I read those shadows, set the exposure, and then checked the sky behind the building. It was only two stops brighter than my shooting stop so I knew that it would be no brighter than a piece of copy paper held in the sun.
An incident light reading was two stops more than my shooting stop, and the birds had white feathers that couldn’t be any brighter than 18% gray plus two stops, so I knew they’d be around four stops brighter than 18% gray. They’d be bright white but with plenty of texture as it would take another two stops of brightness before the camera clipped.
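The bird estimate is nothing more than stop addition. Here’s a sketch using the figures from this example (the roughly six stops of log highlight latitude is the article’s figure for the F55; the helper name is mine):

```python
def stops_over_shooting_stop(incident_delta, reflectance_delta):
    """Where a surface renders relative to the set exposure, in stops.

    incident_delta: stops of light on the surface relative to the light
    implied by the shooting stop.
    reflectance_delta: the surface's diffuse reflectance in stops relative
    to 18% gray.
    """
    return incident_delta + reflectance_delta

# Sunlight read +2 stops (incident) over the shooting stop; white feathers
# are about +2 stops (reflected) over middle gray.
birds = stops_over_shooting_stop(2, 2)   # 4 stops over 18% gray
holds_detail = birds < 6                 # within ~6 stops of log headroom
```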
Here’s another example:
In this case I wanted to hold some detail in the darkest shadows, but I still wanted to preserve detail in the lit beach in the background as best I could. I looked at the shadow under the trees and imagined it as a printed image: what value would that be? It appeared by eye to be about as dark as a print could be and still hold detail, which–using an exceptional printer with glossy paper–is three stops below 18% gray, so I read that value with my spot meter and closed the lens down three stops from the meter reading. The beach fell two or three stops brighter than what I’d set on the lens so I knew I’d have plenty of detail, as a piece of white copy paper is only two stops brighter than 18% gray. I rolled, and this is the image that resulted.
A little later I stuck my meter out into a shaft of sunlight and discovered that the incident reading was the same as I’d used on the lens for this shot.
To sum up:
I visualized the dark shade as having just enough detail that I’d see it in a glossy print, which I know can be printed no darker than 2% black or three stops under middle gray. I closed down three stops from my reflected meter reading of the shadows and called that my shooting stop. Then I read the beach: as it fell between two and three stops brighter than my shooting stop, and I know that diffuse white falls between 2-2.4 stops brighter than middle gray, and the camera can handle six stops of overexposure latitude, I knew I’d have plenty of detail in the sand and it would look bright but nowhere near crazy blown-out white.
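The same bookkeeping, run for the beach shot: place the darkest shadow at the glossy-black floor and check that the lit sand stays inside the highlight headroom. The -3.2 stop floor and 6-stop ceiling are the article’s figures; the `scene_check` helper is my own illustration:

```python
def scene_check(placements, headroom=6.0, floor=-3.2):
    """Flag tones by their placement in stops relative to the shooting stop
    (0 renders as middle gray)."""
    return {name: "clips" if s > headroom else "crushes" if s < floor else "ok"
            for name, s in placements.items()}

# Shadows placed three stops under middle gray; the lit sand read up to
# three stops over the shooting stop.
report = scene_check({"tree shadows": -3.0, "sand": 3.0})
```

Both tones come back “ok,” which is why the shot needed no exposure compromise.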
I could have underexposed the shadows more, or exposed the sand brighter, and I’d probably get great results if a colorist was working with this footage. I could have made an artistic decision to skew the exposure one direction or the other. In this case I liked what I saw by eye, so I sought to match it.
The only grading that occurred here, by the way, is the addition of the LC-709 Type A LUT in Resolve 10. That’s it.
I’ve found this is the best way to use my spot meter: look at one or two tones, set a stop and shoot. Otherwise it’s quite easy to get lost measuring tones and trying to figure out how they’ll map to HD in your head. A famous cinematographer once told me that he’d light by eye, measure a highlight, mid-tone and shadow with his spot meter, set a stop and shoot. I like that idea as it forces me to actually look at the scene as it is rather than turn every value in the shot into a number, which distracts me from the other hundred things I have to pay attention to on set.
These days, however, I prefer using my incident meter as it’s a bit more right brain than left: rather than map every tone with my spot meter and use a lot of brain power visualizing the results, I can use my eyes and simply try to expose the scene the way it looks. Knowing what common matte/diffuse brightness values are makes using an incident meter nearly as exact as using a spot meter. For example:
I really liked the value of the shadow side of the white boat in the center of the frame. White can be deceptive because our brain reads it as white in certain contexts even when it’s darker than white. From experience I knew that the shady side of the boat was probably 18% gray in reflectance compared to the lit side, so I took an incident reading holding the meter on the shadow side of the boat, facing left, so it was struck only by skylight and bounce light off the ground.
As I know the brightest printed or painted white is generally about two stops brighter than 18% gray I subtracted two stops from the shadow side incident reading, set that stop on the lens and took a look. The shot looked great, so I rolled.
To sum up:
I read the amount of light falling on the shadow side of the boat by holding my incident meter in such a way that I only read the light that was illuminating the shadows. Knowing the boat was painted matte white, which is two stops brighter than 18% gray, I subtracted two stops from that reading to make it 18% gray.
I did take a look at the bright side of white boat with my spot meter, but it was only 2-3 stops brighter than 18% gray (it had a little bit of glare to it, making it appear slightly brighter than it was) and as I knew that was well within range of what the camera could capture I didn’t worry about compensating in any way.
I could have simply held my meter facing the camera, took a reading, and shot… but then I have no idea what’s really going on in the scene. I hate that, especially if I do something I like and can’t figure out how to replicate it.
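The boat correction in whole-stop terms, reading “subtracting two stops” as a two-stop exposure reduction so the matte-white paint lands on middle gray. The stop table and helper are illustrative only:

```python
# Whole-stop T-stop series for counting stops.
T_STOPS = ["T2", "T2.8", "T4", "T5.6", "T8", "T11", "T16"]

def place_at_middle_gray(incident_stop, surface_stops_over_gray):
    """Stop to set so a surface lit by the metered light renders as middle
    gray: an incident reading places 18% gray at middle, so a brighter
    surface must be closed down by its reflectance offset."""
    return T_STOPS[T_STOPS.index(incident_stop) + surface_stops_over_gray]

# Say the incident reading on the shadow side of the boat was T4; matte
# white paint is about +2 stops over 18% gray.
stop = place_at_middle_gray("T4", 2)   # "T8"
```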
None of these images has been color corrected beyond being run through Sony’s LC-709 Type A LUT in Resolve 10. No gain, gamma, lift or offset adjustments have been applied. I shot with the F55 in SGamut3.cine/SLog3 mode, metered the scene, set the exposure and rolled.
When This Doesn’t Work
Any time you’re dealing with a shiny surface that is actively reflecting light your incident meter will be useless.
Spot meters can’t accurately read specular highlights if they’re too small. Often you have to eyeball those things and base your exposure off something else in the scene.
Remember, this 5.4 stop range, from 2% to 90%, is only valid if everything is lit by the same light. The reason log curves have so much overexposure latitude is that you’ll see much brighter highlights when shooting out windows, or punching streaks of sunlight across a set, or leaving practical lights in the shot. Still, if I read the talent at T4 with an incident meter and read what looks to be a white or cream-colored wall behind them and come up with T2, I know the wall will appear middle gray: the paint can’t be brighter than 18% gray plus two stops, so it’s an easy calculation to make.
Using this method takes advantage of what your brain is trying to do anyway, which is normalize your surroundings. Even if a white wall is dimly lit it still appears white. A dark surface that’s lit by more light than the rest of the scene still looks like a dark surface. If you can determine how dark that surface is in relation to a glossy black postcard (-3 stops), black printed on an inkjet printer (-2 stops), 18% gray, or a piece of copy paper (+2 stops) held in the same light, you can quickly determine an exposure that fits in with the rest of the scene.
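The wall calculation reduces to one addition: where a surface renders is the light difference plus the reflectance difference, both in stops. A sketch using the reference surfaces and the T4/T2 figures from the text:

```python
# Reference surfaces and their reflectance offsets from 18% gray, in stops.
REFERENCES = {
    "glossy black postcard": -3,
    "inkjet matte black":    -2,
    "18% gray card":          0,
    "white copy paper":       2,
}

def rendered_offset(light_delta, surface):
    """Where a reference surface renders, in stops from middle gray, when it
    sits in light `light_delta` stops away from the key."""
    return light_delta + REFERENCES[surface]

# Talent keyed at T4; the wall's incident reading is T2, two stops less
# light. White paint tops out around copy-paper brightness.
wall = rendered_offset(-2, "white copy paper")   # 0: renders as middle gray
```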
What the Zone System Really Does in HD
The sweet spot for HD cinematography is a roughly five stop range that falls between 10% and 80% on a waveform monitor. I’ve created a log curve that’s a rough composite of a number of different curves just to show where those points might fall when recording in log:
Anything beyond about 60 IRE in a log curve is going to be dedicated to specular highlights or bright highlights on matte surfaces, such as when you’re in a shady room and a shaft of sunlight rakes across a back wall. Most log curves place 18% gray somewhere between 32 IRE and 40 IRE (the actual Rec 709 WYSIWYG value is 40 IRE), and 2% black ends up around 11-12 IRE.
When this curve is stretched out in post it looks a little more like this:
I think of the exposure sweet spot as being between 20 IRE and 80 IRE, as that’s where the middle four stops of dynamic range are going to be stretched out. Shadows will still have plenty of texture down to 10 IRE, however. Humans see mid-tones better than highlights or shadows, so it’s important that this range have lots of contrast. (HD has historically handled shadows better than highlights, which is the opposite of film. This is why I find there’s almost always more shadow detail than highlight detail, although several modern cameras have met or surpassed what film could achieve.)
2% black falls at around 11-12 IRE, and this is about the darkest textured black you’ll reliably see in a video image. The camera captures a LOT more than that, but if you’re shooting for television the ends of the S-curve are going to become pretty steep so tonal values will be compressed. Originally video was only designed to capture 2% black to 90%-100% white, period! We’re still stuck with a display medium that was only designed to contain that much information but we have cameras that can capture twice that much and more, so in order to make 14 stops fit in a six stop container shadows and highlights must be rolled off aggressively. That’s okay, that’s what our eyes do anyway–but the more we compress those areas the less contrast there will be between tones within them, which means the amount of detail we can see in highlights drops. You’ll still see textures in there, but the closer you get to black or white the harder they are to see.
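To see how those IRE placements arise, here’s a toy straight-line log curve–emphatically not any manufacturer’s curve–with middle gray pinned at 38 IRE and a fixed number of IRE per stop. Both constants are assumptions chosen only to land near the figures quoted above:

```python
import math

def toy_log_ire(reflectance, mid_gray_ire=38.0, ire_per_stop=8.7):
    """Map diffuse reflectance to IRE with a purely illustrative log curve:
    a constant slope in IRE per stop, anchored at 18% gray."""
    return mid_gray_ire + ire_per_stop * math.log2(reflectance / 0.18)

print(round(toy_log_ire(0.18)))  # 18% gray  -> 38 IRE
print(round(toy_log_ire(0.90)))  # 90% white -> 58 IRE
print(round(toy_log_ire(0.02)))  # 2% black  -> 10 IRE
```

Real log curves bend at the extremes rather than staying straight, but the middle of the curve behaves much like this: each stop claims a roughly constant slice of IRE, which is why diffuse white lands below 60 IRE and everything above that is left for speculars.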
I’ve written before about the difference between what I call “gravy stops” and “paycheck stops”:
“Gravy stops” are nice-to-haves. They’re a place to put shadows and highlights that should have some detail but aren’t crucial.
“Paycheck stops” are where the most crucial details are placed: right in the middle of the tonal curve. If those details aren’t there, your paycheck stops.
What I’ve figured out over time is that I can use my eye and my incident meter to very quickly figure out how to expose a scene and be sure of where the reflected light values will fall without ever taking a single spot reading. Or, if I do take a spot reading, I can take one or maybe two readings and know exactly how things will fall. As long as I can mentally recall images of a sheet of copy paper, 18% gray, matte black printed on an inkjet printer and deep glossy black like I might see on a postcard, I can place objects and people in a scene on their proper Zones with only an incident meter and be accurate to within about ⅓ stop. That’s pretty cool.
Disclaimer: I’ve worked as a paid consultant to both Sony and DSC Labs. The camera used to shoot the tests above has been loaned to me by Sony. I assisted Sony in developing the LC-709 Type A LUT/MLUT.