Art Adams – ProVideo Coalition, A Moviola Company

Defining “The Cinematic Look”
Tue, 09 May 2017

The post Defining “The Cinematic Look” appeared first on ProVideo Coalition.

Many of us talk about “the cinematic look” without defining what it is. Here’s my take.

For a Mother’s Day outing, my spouse, in-laws and I went to Filoli, one of the last country estates of the early 20th century and now a private park. The main attraction is the gardens, which are particularly lovely, but the old manor house is open for self-guided tours.

I’m not terribly interested in flowers, or manor houses, or self-guided tours, but I love creating images, so I use such trips as an opportunity to transform the sights into images that appeal to me. On this day I must have been in a bit of a dark mood, as I spent my time transforming this lovely estate into images of a creepy haunted house and a lost bygone era.

This is the image that triggered this theme:

I snapped this shot spontaneously as I walked out of one of the nicer of the manor’s rooms. I felt the germ of something as I looked up the staircase, and it took me a second to figure out what it brought to mind. It was, of course, this fairly famous image:

The Brown Lady.

This is The Brown Lady of Raynham Hall. When I was a kid, this was one of the most famous “ghost photographs” ever taken. It’s clearly a fake, but most of the images I make professionally are exactly that: fake. The trick is that fake images serve a purpose: they tell a deliberate story and evoke a specific mood. I love the mood of this image: it’s mysterious and otherworldly, and the primitive photographic technology adds a layer of abstraction that makes me think, “You know, that could have happened, and the image is far enough removed from perfection that it almost feels more real.” If the base image had been shot on an iPhone it would hardly have felt as creepy… at least not without a lot of manipulation.

Have I mentioned yet that I’m a huge fan of Hipstamatic? No? Well, as soon as I looked up this staircase I knew what image I wanted to create, and Hipstamatic delivered in spades:

My image doesn’t have a ghost in it, but the mood almost invites one to appear. Everything about this—the softly blown-out window, the contrast, the lack of color, the “film grain,” and the sloppy border—adds a level of abstraction that becomes a story in itself. Out of sheer boredom, I transformed the image of a historical monument’s backlit staircase into a foreboding image of a haunted moment in time.

I love that. This is what I get paid to do: tell stories through images, and transform real places into fantasies that sell products. I rarely get to create anything so creepy in my commercial work, as there aren’t many products that are branded as “creepy ghostware,” but I love to transform places into images that reflect something that they aren’t. I do that at work all the time, and I also do it for fun.

There’s a power to the aged photograph that’s hard to communicate. It’s truly a moment lost to time, and there are questions surrounding it that will never be answered. I love stories that dwell on such things. One of my favorite TV miniseries is called The Lost Room, and while it is fairly low budget the concept is powerful: one sunny morning in 1961, a hotel room near Gallup, New Mexico is torn from reality, and everything within it reappears possessing extraordinary powers. Some of the objects, such as the wristwatch, are benign: put an egg in the watchband and it will be instantly hard-boiled. Some, though, are more interesting: touch the bus ticket and you will be instantly teleported to a deserted highway near Gallup, no matter where in the world you happen to be.

Why? No reason. Or whatever the reason, it’s beyond understanding.

In this scene, our heroes take one of these objects—a Polaroid picture—to the site of the lost hotel room, and discover that, through it, they can see how the room appeared just before it went missing in 1961:

I don’t find this series to be shot in a terribly cinematic way, but that scene gives me chills. Over the course of the show the mystery of the room is slowly revealed, and the fact that we can only glimpse what it once looked like through an old, wrinkled Polaroid picture adds enough abstraction that we are drawn into this image, willing to accept as real whatever it happens to show us. The very act of putting a frame around an image makes it abstract and draws us in. In this case, the frame within a frame is what grabs us.

For me, part of the process of making something “cinematic” is transformation: taking one thing and turning it into an abstract image that pulls the viewer away from reality and puts them into a visually constructed story with a strong point of view.

There’s been a trend away from this kind of abstraction, in the form of using documentary techniques to tell dramatic stories in commercials, movies and TV alike. This has been an interesting experiment, and I’ve shot quite a lot of this kind of material, but my hope is that we can return to crafted storytelling sometime soon. Historically, audiences have responded more to abstract imagery than to realistic imagery, and while doc-style production is fast and easy I feel as if it is slowly losing its power. Even in its abstraction, audiences are seeing it more and more as realistic, and it’s becoming boring. Audiences, especially younger audiences, want to see images that are compelling—not necessarily real.

This is why we have Hipstamatic and Instagram. It’s also why I used lenses that were between 50 and 80 years old to shoot a recent commercial project. The cleaner the images we can capture technologically, the more we yearn for images that have more character. (And when people say “character,” I usually translate that as “strong point of view.”)

Film was abstract by its very nature: you had to look past the grain, the softness, and the edges of the frame to see the story within. We still have the frame edges to work with, but digital is very clean and sharp. Sometimes that’s the right look, but other times… it’s nice to make a statement by affecting the image optically, in a way that you’d never see if you were viewing the same scene by eye. Reality is interesting when you’re experiencing it, but when an audience is watching a story through a frame, they want to see that story through a strong viewpoint that isn’t their own. That means making it appear visually less real while telling a story that feels more real as a result.

After I created a ghost story out of a staircase, I set out to show the house as it would have appeared in snapshots of the time in which it was built. I wanted to see how abstraction could make modern images tell a distant story. Here are some more pictures, and then three videos that illustrate an aspect of “cinematic” imagery.

This snapshot felt like it could be interesting, but as captured it was too “real.” I knew, though, that with a little tweaking I could do something special with it.

Transforming it into an image from another time made it much more compelling:

There’s a story here, and it is told both through its precision (the framing, the window panes pushing the viewer’s attention toward the stone head, the layers of depth within the image) and its abstraction (the sloppy border, the textured sepia “paper,” the “film grain”). By adding abstraction and embracing past limitations, I turned a humdrum image into a compelling historical snapshot.

I snapped this shot as we approached the front of the mansion. It’s a very pleasant-looking building, but I found that boring. I did a little interpreting:

This feels a bit like a historical photograph that pushed the edge of the film’s technical boundaries.

This image tells a different story. The image above just feels old, but this one feels as if it’s a moment in time that’s actually in the process of decaying, to be lost forever. The mood is very different. Also, the image above feels a little like a cheap postcard due to its soft contrast, whereas the image below has some “snap” and feels like a much more professionally crafted image.

Both are more interesting than the underexposed color photo I snapped with my iPhone on full auto, and yet I knew roughly what I wanted the final shots to look like when I captured the original image.

I wanted this image to communicate the feeling that someone from a distant time was just about to step into view of the mirror:

My iPhone is great for capturing quick shots on the fly, but not so good at interpretation. I did the rest in Hipstamatic:

Now I’ve got a story. The fact that I’m showing only part of the mirror, while it is clearly the focal point, and the mirror is only showing part of the room, makes the image mysterious. The image doesn’t tell a story in itself, but it feels as if it’s about to tell one. That’s even more interesting.

This is not so much about showing what’s possible when shooting snapshots with an iPhone as it is about showing that one can impart mood by carefully removing an image from reality, through a combination of lensing, framing, exposure and coloring. What’s more, one can decide what one wants to do in advance and create photographs that will work even better for that particular look. For those of us who create compelling images to order, this is great fun.

Here are some other ordinary images that became interesting snapshots from another time:


This last image was one of my favorites:

I couldn’t get the exact shot I wanted using the iPhone’s wide lens, but I knew that I could crop in a fair bit while dodging the branches I didn’t want to see at the top:

I was inspired to take this picture, and in a very roundabout way:

This is a frame from a film that, when I saw it in film school, had a profound impact on me: Last Year at Marienbad. This is a film that people either love or hate. I loved it, even though—ultimately—it is completely style over substance. Or, it’s whatever you want it to be: when interviewed about the film shortly after its release, the director and writer gave deliberately contradictory answers about the film’s plot, and never willingly explained what the film was about.

It’s extraordinarily pretentious, stunningly beautiful, abstract, and precise, all at the same time.

This is a beautiful cut down showing off this film’s imagery while eliminating much of its pretentiousness:

If you ever watch the movie, you’ll discover that there’s not much story there. Perhaps that’s deliberate. One theory about this film is that you’re meant to make of it whatever you will. My point, though, is that this is about as far from reality as you can get, and that just makes it more beautiful and compelling.

For me, this is cinematic. It’s very precise, very symmetrical, very constructed… and very wrong. It’s not at all like reality, but it’s an interpretation of reality where every frame tells the story in a cohesive way. This is not the only way to shoot cinematic images, but rather an example that shows that the most cinematic images come about when you figure out what story you’re trying to tell, distill it to its essence, and then create images from that essence.

While researching this article, I came across a music video that was inspired by this movie. Some creative soul then took that song and recut the music video using footage from the actual movie.

Here’s the music video:

And here’s the recut version using actual images from the film:

The music video is fun, but it’s not as good as the movie. Clearly it’s hard to follow in the footsteps of a modern work of art, but I tried to figure out what it was that drew me out of it. I think the answer is “precision.” The movie is very precise in its compositions and movements, to the point where the camera is disconnected from the “reality” it is capturing, and that makes the imagery extremely compelling to me. It’s as if the actors are on a stage, constrained to the movements of the camera, instead of the camera being constrained to the movements of the actors.

The music video doesn’t quite succeed in the same way. It is 85% of the way there, but the camerawork isn’t as rigidly precise. There are wide angle shots from head height, for example, where the foreshortening feels wrong, as if placing the camera at the same height as a person’s eyes for a wide shot was a bit too real and accommodating.

Cinematic images are artistic images. They are not reality, they are interpretations of reality. And, if you think about it, there’s no such thing as a “real” image: as soon as you put a frame around reality, you are imparting a layer of abstraction upon it that represents your point of view as a storyteller. I don’t see a good reason not to take that as far as possible, because that’s where the truly amazing visual stories live. That can mean distorting the image, or making it sharper, or moving the camera precisely, or shaking it a lot. What’s important is figuring out what this visual storytelling language is in advance and sticking to it.

Recently I shot a spot for a company where we captured images distorted by wine glasses held right up against the lens. It worked for the product and it looked great… and the abstract imagery makes the spot one that draws viewers right in. The distorted image tells a visual story that’s different and interesting, yet totally appropriate.

No matter whether we’re shooting spots, movies or TV, the question to ask ourselves is: why make this look “real”? It’s never going to be real, so why not push the limits? And, rather than shoot the story and find it later, why not find its essence and let that permeate every shot? That’s when “cinematic” truly happens.

“Cinematic” imagery is intentional imagery. Viewers don’t want to see reality. They want to see your reality.

Art Adams
Director of Photography


Lenses: My Likes, Dislikes, and the Return of the Cooke Speed Panchro
Thu, 04 May 2017

The post Lenses: My Likes, Dislikes, and the Return of the Cooke Speed Panchro appeared first on ProVideo Coalition.

Every lens has a feel and a purpose. Some reproduce the world perfectly, some imperfectly. They all have their place, but I have come to appreciate “imperfect” over “perfect.”

There is a sense in cinematography circles that digital cameras are too “clean.” Common complaints center around the fact that film’s analog nature introduces certain characteristics to an image that sensors don’t. The most obvious is that randomized grain, visible in film, is replaced in the electronic world by noise, which is less random and has a different “feel” altogether. Less obvious is the quirky way the three color layers that make up film (from the top down: blue, green, and finally red) reproduce images. Blue, being the top layer, exposes much more easily than red, because light has to travel through several layers of filtration before it reaches that bottom red layer. The red image tends to be just a little bit softer than blue or green, perhaps because red light has to travel through more layers before exposing, or possibly because its wavelength is longer so it focuses differently. (This is well known: the scenes that camera assistants dreaded the most in film took place in red-lit darkrooms, as lens focus marks became useless.)

It is these intangibles that gave film an abstract beauty, and abstraction is what audiences crave. Audiences don’t want to see realistically captured images: they want to see interpretations of reality that are more interesting than the reality they see daily. The more unique and compelling an image is, the more attractive it is to an audience… within reason, of course.

There’s only so far a color grade can be pushed, and even then the look tends not to be very “analog,” or randomly distorted. That’s why old glass is seeing a resurgence. Such lenses work their magic before the light strikes the sensor, and—unlike a color grade which works at the color bit depth of the source or intermediate digital files—they work at the full bit depth of the sensor. And old lenses do some wonderfully unpredictable things.

Lenses have always been a bit of a kluge. Zooms, in particular, are compromises in every way, and often show all sorts of defects, from the distortion of horizontal and vertical lines to color fringing around contrasty objects. Primes are generally less prone to annoying distortions, but the distortions they do possess tend to be the positive kind that gives them “character.”

One overlooked quality of lenses is lens color. A while back, while shooting a travel job, I had to mix Cooke S4 primes with an Angenieux 24-290 zoom due to limited equipment availability. It wasn’t long before I noticed that the Cookes, which are nice and warm, contrasted sharply with the Angenieux zoom, which appeared cool and cyan. While this is relatively easily fixed in a color grade, it does add time and expense to that process, and a less experienced colorist might not fix that issue at all.

When I got home I set out to compare and contrast lens colors so I didn’t end up in this situation again. I knew the director wanted to work with either Leica or Cooke primes, and I needed a zoom that would match either choice. This article illustrates how I compared both Cooke and Leica primes to Angenieux and Canon zooms in order to determine which pairing was the best match for color. In the end I found that the warm Cooke and Canon lenses matched fairly well, whereas the cool, slightly green Leica and Angenieux lenses were an even better match for each other.

What’s interesting is that the lenses weren’t simply an overall hue, but showed variations in how they transitioned from warm to cool. For example, in a range from blue to red, the purple hues transitioned to red faster on the Canons than on the Cookes but their overall warmth was the same.

I’ve come to think of all the common lens choices in the following ways:

Cooke S4 primes. They don’t always play well with horizontal and vertical lines, but those distortions work magic on faces and figures. They render skin tone as wonderfully “creamy,” and the quality of the out-of-focus image (known as “bokeh”) is painterly and slightly dreamy. They tend to pop warm hues and downplay cold ones, which—once again—is wonderful on faces. Their low contrast doesn’t work well with white limbo backgrounds, but otherwise they are my go-to favorites.

Zeiss Ultra Primes. These are my workhorse lenses, as they tend to be a little cheaper than Cooke S4s and are more often available when rental houses are busy. They are very clean and the wider lenses show gentle barrel distortion, which I find pleasing in an old-fashioned way. They don’t flare easily, and in general are just good, decent, all-around lenses. Where Cooke S4 lenses don’t work so well on white limbo backgrounds, Ultra Primes are much more contrasty and work perfectly.

Zeiss Master Primes. I’m not a fan. They are very, very good lenses, but they are so free of distortion that I find them a little boring. Still, they are probably the sharpest low light lenses around, and some of the sharpest lenses available overall. They show a little barrel distortion, like Ultra Primes do, but unlike Ultra Primes I find the quality of that distortion a little disturbing. For some reason the distortion never feels symmetrical to me, as if the lens is slightly offset in its mount, even though it is really the world that is slightly offset in relation to the lens. I can spot this effect fairly easily.

If I’m shooting a project where I know the production company wants to resize the image significantly in post, these are my go-to lenses as I know there aren’t many lenses that are sharper. Like Ultra Primes, they show a lot of contrast and are very resistant to flares.

Leica Summicron-C primes. I’ve used these, and they are nice lenses, but like the Master Primes they are almost too perfect. They are also a little cool and green, which is easily corrected but not my favorite look on faces. Still, they are razor sharp and free of distortion, and sometimes that’s the look you want. In a recent video comparing these lenses to Ultra Primes, I noticed that the Leicas didn’t cause highlights to bloom as much as the Ultra Primes but the field of view was completely flat, where the Ultra Primes added a bit of roundness to the image. I found the Ultra Primes to be technically lower in quality but artistically more pleasing.

Zeiss Super Speeds. These old lenses have a lot of character. Their bokeh is distinctive: highlights are hot in the center and bleed off into a gentle glow. They show more flare than their modern counterparts (Ultra Primes are basically updated Super Speeds) and a bit more barrel distortion, and they aren’t as sharp as the other lenses on this list, but they still do a surprisingly good job. In the early Alexa days I was often able to get that camera on jobs by compromising on lens choice to keep costs down, and I came to enjoy matching such an awesome camera to a set of old and worn Super Speed lenses.

Speaking of old and worn lenses… this is the part where I talk about my new favorite lens. A company in the U.K. called TLS (True Lens Services) has been rehousing old Cooke Speed Panchro lenses, from the 1920s through the 1960s, for modern motion picture use. They have all the functionality of modern Cooke primes, but the funkiness of old and worn lenses from an era when lens technology was a lot less forgiving. (Recently, while shooting handheld with these lenses and pulling my own focus, I found myself spinning the focus knob and wondering why it wasn’t engaged—only to notice that focus actually was changing. That’s how insanely smooth the focus mechanism is on these rehoused lenses!)

A Cooke Speed Panchro lens in a new TLS housing. The focus ring is so smooth that it often feels as if the follow focus isn’t actually engaged!

These rehoused old lenses are so popular that Cooke Optics dug the designs out of a filing cabinet somewhere and is re-releasing them as a brand new product, the Cooke Panchro/i Classic, in all their funky glory.

This decades-old Cooke Speed Panchro lens has a beautiful and unique look that works well for our digital age, but the physical design isn’t conducive to working as quickly and precisely as we must on modern productions.
The new Cooke Panchro/i Classic updates the classic Speed Panchro design while giving camera assistants the kind of focus mark precision they need when working quickly and precisely at low light levels. It also makes the lens compatible with standard accessories, as does the TLS housing.

Cooke Optics recently released this promotional video showing off the “new” Cooke Speed Panchro, now known as the Cooke Classic:

These are the most imperfect lenses I’ve ever used…  and I absolutely LOVE THEM.

These lenses flare like mad. I remember a time when that was a bad thing, but modern aesthetics have changed. In this article I showed how lens flares introduce depth by laying a wash of light across the image at the point closest to the viewer, creating a barrier between them and the image. A setting sun gives one a sense of infinity within the frame, but a lens flare does the opposite by appearing closer to the viewer than everything else.

Flare can be used for dramatic purposes by forcing the viewer to look harder to see what’s going on. Eliminating that flare is even more dramatic, as it suddenly drops the viewer right into the shot.

Flare adds a layer of depth between the viewer and the subject.
Eliminating that flare drops the viewer right into the shot.

I used TLS-rehoused Cooke Speed Panchro lenses on a recent commercial, where we played “peek-a-boo” with the sun by flaring the lens and then shading it, and this added a lot of drama to what would otherwise have been a fairly boring shot.

One of the things I love about these lenses is that the out-of-focus image is very smeary and painterly. It also seems to have layers. In this shot, which moves slightly from side to side, the out-of-focus branches in the foreground distort the out-of-focus branches in the background, like ripples in a pond. The layers of branches seem to interact with each other.

I exploited this characteristic in a recent commercial where we put wine glasses directly in front of a 100mm TLS Cooke Speed Panchro, with the aperture wide open; the glasses in the foreground created wonderful ripple effects in the background as they moved through the frame.

It feels as if the light from the soft branches in the distant background has to bend around the closer branches in order to reach the lens.

This shot shows another element of the Cooke Speed Panchro look: when used with the aperture wide open, the front of the lens cuts into the image somewhat, creating a soft vignette around the frame that also distorts the bokeh into ovals. The highlights at the top left of this frame should be circles, but they are cut in half by the lens housing itself. The result is a swirling effect that pushes the eye into the center of the image. All lenses do this to some extent, but these lenses do it in the most interesting way I’ve ever seen.

This “swirling” effect can be eliminated by stopping modern lenses down one stop from wide open, although older lens designs (like the Cooke Speed Panchros and the new Classics) often require 2 ⅔ stops before this effect dissipates. This effect is easy to see: look through the back of any lens, with the aperture wide open, and hold it such that the front end of the lens becomes visible, cutting off part of the image circle. Then close the f/stop down a stop or two. Now that the hole is smaller, you can’t see the edge of the lens cutting into the image anymore.

Cooke lenses have a creaminess to them that makes people look great. What’s also interesting about this image is that the highlights have a very slight edge to them, where the outside of the highlight is a little brighter than the rest of it. “Perfect” bokeh means a highlight should be exactly the same brightness all the way across, but that can often feel a bit too “clean.” Zeiss Super Speeds make the center hotter than the outside edge, but Cookes make the outside edge hotter than the center.

Too much of this can be unpleasant. Many still lenses will turn out-of-focus background highlights into donuts, with a very dark center, and those don’t blend together very well. The effect is very distracting as the background highlights end up appearing almost as sharp as the foreground. These Cooke lenses add just the right amount of edge, giving the bokeh the feel of a lens that’s not quite perfect, but in a pleasing way. The highlights feel less like donuts and more like diamonds.

Throwing everything out of focus shows just how smeary and almost ghostly these lenses render out-of-focus backgrounds.

These lenses aren’t the answer to every production, but when they do work for the story or the product they really are amazing. At the moment it’s possible to find TLS-rehoused Cooke Speed Panchros at a lot of rental houses (I’ve found them in my native San Francisco Bay Area, and I also used a local set on a recent project in Phoenix). This summer you’ll see these lenses re-released as brand new Cooke Classics, and I suspect you’ll be able to find them everywhere.

At least I hope so… I’m going to be asking for them quite a lot.

(You can read my article on Cooke’s anamorphic lenses here.)

Art Adams
Director of Photography


A Guide to Shooting HDR TV: Day 6, “How the Audience Will See Your Work”
Sun, 23 Apr 2017

The post A Guide to Shooting HDR TV: Day 6, “How the Audience Will See Your Work” appeared first on ProVideo Coalition.

This is the sixth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 5 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR 
6. How the Audience Will See Your Work < You are here

There are several competing standards for HDR distribution. Each has consequences for the cinematographer and their work.


Dolby Vision

The Dolby Vision standard utilizes the PQ curve (see appendix) for image encoding. It also specifies 12-bit encoding with a peak brightness of 10,000 nits as defined in the ST2084 specification, although Dolby currently recommends a target peak white value of 4,000 nits.
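Both Dolby Vision and HDR10 build on this PQ curve, and it is compact enough to show in full. Below is a sketch of the standard ST 2084 encoding function (the inverse EOTF), which maps absolute luminance against the 10,000-nit peak; the helper name `pq_encode` is my own:

```python
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance in nits -> signal [0..1].
# The constants are the exact rational values given in the specification.
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.8438
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.8516
C3 = 2392 / 4096 * 32      # ~18.6875

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0..10,000 nits) to a normalized PQ code value."""
    y = max(nits, 0.0) / 10000.0       # normalize against the PQ peak
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2
```

Note how diffuse SDR white (100 nits) lands near the middle of the signal range (roughly 0.51), leaving almost the entire upper half of the curve for highlights, which is why highlight handling dominates the differences between these standards.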

Dolby Vision content is currently mastered within the P3 color gamut, although it is capable of reproducing imagery in any gamut from Rec 709 to Rec 2020 depending on the capabilities of the display. Rec 2020 color is far beyond what modern displays can produce but leaves room for future growth.

Dolby’s key strength is that it provides dynamic metadata that instructs a proprietary decoder chip, built into a television, how to best adjust imagery to fit within the constraints of a consumer display on a frame-by-frame or shot-by-shot basis. If a program was mastered on a monitor that exceeds the specs of the television on which it is being viewed, either in dynamic range or color gamut, then the dynamic metadata that travels with the program tells the decoder chip how to expand or contract each shot’s color gamut and dynamic range to fit within that display’s abilities.


HDR10

The HDR10 standard, which is championed by a number of television manufacturers who don’t want to license technology from Dolby, also uses the PQ curve. It aims for 10-bit encoding, encompasses the same Rec 2020 color gamut, and has been adopted as the Ultra HD Blu-ray encoding standard. Most online streaming services offer both it and Dolby Vision as options. (HDR10 can be implemented in HDR TVs as a software upgrade, whereas Dolby Vision TVs require a built-in chip.)

Where Dolby Vision’s dynamic metadata aids in adjusting color gamut and peak brightness to match a television’s capabilities on a shot-by-shot basis, HDR10 incorporates only one instruction that applies to the program overall. There is some discussion about adopting a shot-by-shot metadata scheme similar to Dolby’s, but this has not been finalized.

The biggest difference is that there is no specification for what happens if a program’s peak white exceeds the capabilities of a consumer TV. It is up to each manufacturer to develop a roll-off scheme—likely some sort of highlight-only gamma curve—to compress highlights in a pleasing way that will retain some of the artistic integrity of the image.

This is of some concern to the discerning cinematographer.
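Because HDR10 leaves highlight handling to each TV maker, every manufacturer's curve will differ. As a purely illustrative sketch (all names and numbers here are hypothetical), a soft-knee roll-off might pass tones below a "knee" through unchanged and bend everything above it into the display's remaining headroom:

```python
# Illustrative only: HDR10 does not specify this; each TV maker rolls
# their own. Tones below the knee are reproduced faithfully; tones
# between the knee and the content's peak are compressed to fit.

def rolloff(nits, knee=500.0, display_peak=1000.0, content_peak=4000.0):
    if nits <= knee:
        return nits  # midtones and shadows pass through untouched
    # How far this value sits between the knee and the content's peak
    t = (nits - knee) / (content_peak - knee)
    # A power < 1 eases the curve over rather than clipping abruptly
    return knee + (display_peak - knee) * t ** 0.6

# 4,000 nit content lands exactly at the display's 1,000 nit peak
# instead of clipping; a 400 nit midtone is untouched.
```

The cinematographer's concern is exactly this: the shape of that curve, and therefore the look of your highlights, varies from one living room to the next.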


The third impending standard is HLG. It is backwards compatible across a wide range of TVs as it employs a gamma curve that becomes progressively flatter as brightness increases, much like a log curve. A consumer TV will reproduce brightness levels as high up the curve as it can and roll off the rest.

In theory, the same content is viewable on both an old SDR TV and a new 1,000 nit HDR TV, although with dramatically different results.

HLG’s ever-flattening brightness curve doesn’t allow for the same size color gamut as other formats, so highlight saturation is reduced compared to Dolby Vision and HDR10.

Currently, HLG is only being considered for broadcast television, where legacy sets will be an issue for years to come.
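The "progressively flatter" behavior described above is visible in the HLG transfer function itself, whose constants are published in ITU-R BT.2100. The lower portion is a square root, which behaves like conventional gamma (hence the SDR backward compatibility); the upper portion flattens logarithmically to hold onto highlights:

```python
import math

# OETF constants from ITU-R BT.2100 for Hybrid Log-Gamma
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(scene_light):
    """Map normalized scene light (0..1) to an HLG signal value (0..1).
    Below 1/12 the curve is a pure square root (gamma-like); above
    that it flattens logarithmically, like a log curve."""
    if scene_light <= 1 / 12:
        return math.sqrt(3 * scene_light)
    return A * math.log(12 * scene_light - B) + C
```

An SDR set simply reads the lower, gamma-like part of this curve as a normal picture, while an HDR set decodes the logarithmic upper region into extended highlights.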


  • Dolby currently offers the best scheme for adapting imagery to HDR televisions.
  • Dolby and HDR10 are both currently available as video streaming options.
  • HDR10 is the de facto Blu-ray standard.
  • HLG is meant to be an over-the-air broadcast standard only.
  • You have no control over which of these technologies will preserve or distort your creative vision.

Wrapping it all Up

There are a lot of unanswered questions about HDR origination and broadcast, but there are certain on-set practices that should make the transition easier. Key among these is to use an HDR monitor to train your eye to recognize when technical issues will arise regarding highlights, shadows and camera movement. There aren’t a lot of cheap or bright on-set monitors available right now, but at least one—the Canon DP-V2410—seems to be bright enough, dark enough, light enough and affordable enough to be a good on-set reference monitor.

Beyond that, it’s important to remember that bit depth matters, especially when the camera’s entire dynamic range will be reproduced on a display with little or no tonal compression. When broadcast standards call for 10-bit and 12-bit deliverables, it pays to shoot at a higher bit depth to leave room for grading in post while keeping your tonal scale farther from the Barten Ramp boundary. 10-bit RGB is the bare minimum and won’t leave you much room in post; 12-bit is better, and 16-bit is best.

Grading will be a new experience. One colorist told me that there weren’t any limits as to what he could do, given well-exposed material at a high bit depth. This is exciting news for any cinematographer who is included in the grading process. Sadly, at least in short form work, this is not always the case.

As display dynamic range, color depth and resolution increase, our margin for error on set decreases. At the same time, HDR gives us license to push imagery to perceptual realms never before possible in either film or video; the dynamic range of a projected film print can’t compete. As with film, it will take some time to learn to evaluate a scene’s visual impact strictly by eye and light meter. Fortunately, such training can happen in real time thanks to the availability of set-friendly HDR monitors.

The End


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR 
6. How the Audience Will See Your Work < You are here

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

The post A Guide to Shooting HDR TV: Day 6, “How the Audience Will See Your Work” appeared first on ProVideo Coalition.

A Guide to Shooting HDR TV: Day 5, “The Technical Side of HDR” Sat, 22 Apr 2017 17:00:07 +0000


This is the fifth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 4 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR < You are here
6. How the Audience Will See Your Work

As a result of the research that went into writing this article, I’ve developed some working rules for shooting HDR content. Not all of it has been field tested, but as many of us will likely be put in the position of shooting HDR content without having the resources available for advance testing, we could all do worse than to follow this advice.


A lit candle in SDR is a white blob on a stick, whereas HDR can show striking contrast between the brightly-glowing-yet-saturated wick and the top of the candle itself. Such points of light are not just accents but can be attention stealers as well. The emotional impact of a scene lit by many brightly colored bulbs placed within the frame could range anywhere from dazzling to emotionally overwhelming in HDR, depending on their saturation and brightness, whereas the same scene in SDR will appear less vibrant and could result in considerably less emotional impact.

The trick will be to learn how to work in both SDR and HDR at the same time. As HDR’s emotional impact is stronger than SDR’s, my preference is to monitor HDR but periodically check the image in SDR. Ideally we’d utilize two adjacent monitors—one set for HDR, and the other for SDR—but few production companies are likely to spend that kind of money, at least until monitor prices fall considerably. One monitor that displays both formats should be enough, especially as HDR monitors tend to be a bit large and cumbersome for the moment.

We saw this comparison earlier:

Compare the highlights on the C-stand post, and then look at this diagram:

(above) SDR compresses a wide range of brightness values into a narrow range. HDR captures the full range, which can then be reproduced with little to no contrast compression on an HDR monitor.

Specular highlights appear larger and less saturated in SDR than in HDR. They are also much less distracting in SDR. Highlight placement and evaluation will be critical. Most importantly, highlights should be preserved as they will retain saturation and contrast right up to the point of clipping.

Shadows respond in much the same way:

SDR sees only the broadest strokes. HDR sees the subtleties and the shape.

In the absence of a monitor, a spot meter and knowledge of The Zone System become a DP’s best friends. A monitor will be helpful, however, as it takes time to learn how to deploy highlights artistically and learn to work safely at the very edges of exposure. There are technical considerations as well: large areas of extreme brightness within the frame can cause monitors to reduce their own brightness on a shot-by-shot basis (see Part 3, “Monitor Considerations”) and the end result can be difficult to evaluate with a meter alone. It’s also difficult to meter for both HDR and SDR at the same time, and much easier to light/expose for one and visually verify that the other works as well.

It might be possible to create an on-set monitoring LUT that preserves your artistic intent in SDR.

My suspicion is that monitoring SDR on set will become positively painful, as cinematographers will inevitably be disappointed at seeing how their beautiful HDR images will play out on flat, desaturated SDR televisions. Nevertheless, this will likely be necessary for a few more years. HDR is coming quickly, but SDR is not going away at the same pace. “Protecting for SDR” will be the new “Protecting for 4:3.”

Canon monitors are able to display both SDR and HDR. Canon will shortly release the DP-V2420 1,000 nit monitor for Dolby-level mastering, but the 2410’s price and weight may make it the better option for on-set use. Even with a maximum brightness of 400 nits, HDR’s extended highlight range and color gamut are clearly apparent, and the monitor’s size is still very manageable. The user can toggle between SDR and HDR via a function button.


Every step in the imaging process has an impact on the HDR image. This is true of SDR as well, but the fact that SDR severely compresses 50% or more of a typical camera’s dynamic range hides a lot of flaws. There’s no such compression in HDR, so optical flaws and filtration affect the image in ways we haven’t experienced before.

Lens and sensor combinations should be tested extensively in advance. Some lenses work better with some sensors than others, and this has a demonstrable impact on image quality. Not all good lenses and good sensors pair optimally, and occasionally cheaper lenses will yield better results with a given sensor.

Lens flares and veiling glare can be very distracting in HDR, and will in some cases compromise the HDR experience. Testing the flare characteristics of lenses is good practice, especially when used in combination with any kind of filtration. Zoom lenses may prove less desirable than prime lenses in many circumstances.

Additionally, strong scene highlights—especially those that fall beyond the ability of even HDR to capture—can create offensive optical artifacts.

It is strongly suggested that the cinematographer monitor the image to ensure that highlights, lenses and filters serve their purpose without being artistically distracting or technically degrading.


This is something I do habitually, as the native ISO of many cameras tends to be a bit too optimistic for my taste, and I’ve learned that Bill Bennett, ASC, considers this a must when shooting for HDR. Shadow contrast is so high that normal amounts of noise become both distinctly visible and enormously distracting.

Excess noise can significantly degrade the HDR experience. The best looking HDR retains some detail near black, and noise “movement” can cause loss of shadow detail in the darkest tones. Rating the camera slower, or using a viewing LUT that does the same, allows the colorist to push the noise floor down until it becomes black, while retaining detail and texture in tones just above black.
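The benefit of rating the camera slower can be shown with toy arithmetic (the numbers below are invented, and the sketch assumes the noise floor is dominated by read noise, which stays roughly constant as exposure changes). Exposing a shadow tone one stop brighter and pulling it back down in the grade halves the recorded noise along with the signal:

```python
# Toy numbers, not a camera model: a shadow tone records at level 40
# over a constant noise floor of 4 when the camera is rated normally.

noise_floor = 4.0
normal_snr = 40.0 / noise_floor          # rated at native ISO: 10:1

# Rated one stop slower, the same tone is exposed twice as bright (80),
# then pulled back down one stop in post. The pull-down halves the
# recorded noise along with the signal, doubling the ratio.
pulled_snr = (80.0 / 2) / (noise_floor / 2)   # 20:1
```

That doubled signal-to-noise ratio is what lets the colorist push the noise floor down to black while keeping texture in the tones just above it.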


Most recording codecs fall into one of two categories: RGB or Y’CbCr. Neither of these is a color space. Rather, they are color models that tell a system how to reconstruct a hue based on three numbers.

In RGB those three numbers represent values for red, green and blue. Y’CbCr is a luma-chroma color model that stores its three values differently:

Color and luma are completely detached. Y’ is luma, which is not shown above because it stands (mostly) separate from chroma; all colors are coded on one of two axes: yellow/blue (warm/cool) or red/green. This mirrors how human vision works: blue and yellow are opposites, as are red and green, and each pair makes a natural anchor between which we can define other colors. Nearly any hue can be accurately represented by mixing values from specific points that fall between blue/yellow and red/green.

The real reason this model is used, though, is chroma subsampling (4:4:4, 4:2:2, 4:2:0, etc.). Chroma subsampling exists because our visual system is more sensitive to variations in brightness than variations in color. Subsampling stores a luma value for every pixel but discards chroma information in alternating pixels or alternating rows of pixels. Less chroma data equates to smaller file sizes.
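The savings are easy to quantify. A 4:4:4 image stores three samples per pixel (one luma, two chroma); 4:2:2 halves the chroma, and 4:2:0 quarters it:

```python
# Back-of-the-envelope frame sizes for common subsampling schemes.
# Luma (Y') keeps one sample per pixel; the two chroma planes
# (Cb and Cr) are reduced by the scheme's sampling factor.

CHROMA_FRACTION = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}

def frame_samples(width, height, scheme):
    luma = width * height
    chroma = 2 * width * height * CHROMA_FRACTION[scheme]
    return luma + chroma

full = frame_samples(1920, 1080, "4:4:4")   # 3 samples per pixel
half = frame_samples(1920, 1080, "4:2:0")   # 1.5 samples per pixel
# 4:2:0 carries exactly half the data of 4:4:4
```

Halving the data with (in Rec 709) little visible penalty is why Y’CbCr dominates distribution codecs.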

The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts.

My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.


The 10-bit PQ curve in SMPTE ST.2084 has been shown to work well for broadcasting, at least when targeting 1,000 nit displays. Dolby currently masters content on 4,000 nit displays and delivers 12-bit color in Dolby Vision.

When we capture 14 stops of dynamic range and display them on a monitor designed to display only six (SDR), we have a lot of room to push mid-tones around in the grade. When we capture 14 stops for a monitor that displays 14 stops (HDR), we have a lot less room for manipulation, especially when trying to grade around large evenly-lit areas that take up a lot of the frame, such as blue sky.

The brighter the target monitor, the more bits we need to capture to be safe. It’s important to know how your program will be mastered. If it will be mastered for 1,000 nit delivery, 12 bits is ideal but 10 might suffice. Material shot for 4,000 nit HDR mastering should probably be captured at 12 bits or higher. (It’s safe to assume that footage mastered at 1,000 nits now will likely be remastered for brighter televisions later.)

When in doubt, aim high. The colorist who eventually regrades your material on a 10,000 nit display will thank you.

I’ve spoken to colorists who say they’ve been able to make suitable HDR images out of material shot at lower bit depths, and while they say it’s possible they also emphasize that there’s very little leeway for grading. More bits mean more creativity in post.

Dolby colorist Shane Mario Ruggieri informs me that he always prefers 16-bit material. 10-bit footage can work, but he feels technically constrained when forced to work with it. ProRes 4444 and ProRes 4444 XQ both work well at 4,000 nits.

Linear-encoded raw and log-encoded raw both seem to work equally well.

One should NEVER record to the Rec 709 color gamut, or use any kind of Rec 709 gamma encoding. ALWAYS capture in raw or log, at the highest bit depth possible, using a color gamut that is no smaller than P3. When given the option of Rec 2020 capture over P3, take it.

In some cases a camera’s native color gamut may exceed Rec 2020, making the native gamut an option as well.


There are three main HDR standards in the works right now: Dolby Vision, HDR10 and HLG. They are similar but different.

As broadcasting bandwidth is always at a premium, HDR developers sought a method of encoding tonal values in a manner that took advantage of the fact that human vision will tolerate greater steps between tonal values in shadows than in highlights.

This is a simplified version of a graph called a Barten Ramp. It illustrates how the human eye’s response to contrast steps (how far apart tones are from each other) varies with brightness. We visually tolerate wide steps in lowlights, but require much finer steps in highlights. Banding appears when contrast steps aren’t fine enough for the brightness level of that portion of the image.

Gamma curves (also known as power functions) capture more steps than necessary in the highlights, so more data is transmitted than is needed. Gamma also scales poorly across wide dynamic ranges: while traditional gamma works well for 100 nit displays, it fails miserably for 1,000 nit displays. Banding and other artifacts become serious problems in HDR shadows and highlights.

Log curves fare better, but they waste too many bits on shadow areas, making them an inefficient solution.

For maximum efficiency, Dolby Laboratories developed a tone curve that varies its step sizes. It employs broader steps in the shadow regions, where our eyes are least sensitive to contrast changes, and finer steps in highlight regions, where we’re very sensitive to banding and other image artifacts. This specialized curve is called PQ, for “Perceptual Quantization,” and was adopted as SMPTE standard ST.2084. It’s the basis for both Dolby Vision and HDR10.

This curve is only meant to reduce data throughput in distribution or broadcast, where severe limitations exist on the amount of data that can be pushed down a cable or compressed onto a Blu-ray disc. One should never need to record PQ on set.
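For the curious, the PQ curve is fully specified by the constants published in SMPTE ST.2084, and maps absolute luminance (up to 10,000 nits) to a normalized signal:

```python
# SMPTE ST.2084 (PQ) transfer functions, from the published constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Absolute luminance (0..10,000 nits) to a normalized PQ signal."""
    y = nits / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def pq_decode(signal):
    """Normalized PQ signal back to absolute luminance in nits."""
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
```

Note that SDR peak white (100 nits) lands near the middle of the PQ signal range, which is why PQ keeps so much highlight headroom above it.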


Canon recommends working with Canon Raw or log footage up to the point where the final project is mastered to HDR. Canon HDR monitors have numerous built-in color gamut and gamma curve choices, and they are all meant to convert Canon Raw or log footage into HDR without entering the realm of PQ.

PQ is the final step in the broadcast chain. The only circumstance in which you might use it on set is when you’re working with a camera that is not natively supported by your HDR monitor. For example, if a Canon monitor does not natively support a camera’s output, it can still display an HDR image as long as the camera outputs a standard PQ (ST.2084) signal.

Alternately, an intermediate dailies system will often translate log signals through a LUT for output via PQ to on-set or near-set displays.

Currently Canon HDR monitors work natively with both Canon and Arri cameras. Future monitor software upgrades will add native support for image data from additional vendors, enabling Canon HDR monitors to work with a wide variety of non-Canon cameras.

NOTE: PQ is meant for broadcast and monitor/camera compatibility only. It should never be used as a master recording format.


Canon HDR monitors contain the core of ACES in their software. The image can be graded through the use of an attached Tangent Element TK Control Panel, or via ASC CDL controls located on the front of the monitor itself.

(above) Function key mappings for DP-V2410 in-monitor CDL control.


In the SDR world, log images are not meant to be viewed directly. Exposure is much more easily evaluated by looking at a LUT-corrected image as tonal values are displayed correctly instead of arbitrarily.

In HDR, though, a log waveform may be the best universal method of ensuring that highlights don’t clip. Most log curves place a normally exposed matte white at around 60% on a luma waveform, so any peak that falls between 60% and white clip will end up in HDR territory. This “highlight stretch” makes for more critical evaluation of highlight peaks.

It’s important to know, however, that log curves don’t always clip at 109%. Log curves clip where their particular formula mathematically runs out of room, and this varies based on the curve’s formula and the camera ISO. Rather than look for clipping at maximum waveform values, it may be more prudent to look for the point where the waveform trace flattens, indicating clip and loss of detail. This can be found anywhere from 90% to 109%, depending on the camera, log curve, LUT and ISO setting.

A tiny bit of clipping may be okay, but this can only truly be evaluated by looking at a monitor on set. In general, any highlight clipping should be avoided.

A better option for examining highlights, and one found in both the Canon DP-V2410 and the upcoming DP-V2420 monitors, is a SMPTE ST.2084 (PQ) waveform that reads in nits. It’s divided into quarters: 0-10 nits, 10-100 nits, 100-1,000 nits and 1,000-10,000 nits. The log scale exaggerates the shadow and highlight ranges and shows exactly when detail is hitting the limits of either range.


The following is an excerpt from a December 1, 2015 paper released by Dolby Laboratories on the subject of original capture source formats.

For mastering first run movies in Dolby Vision, before other standard dynamic range grades are completed, wide-bit depth camera raw files or original scans are best.

Here is a list of Original (raw camera or film scan) Source Formats from best at top to not so good for Dolby Vision at the bottom (the first three in the list are of equal quality):

  • Digital camera raw*
  • De-bayered digital camera images – 16-bit log or OpenEXR (un-color-corrected)
  • Negative scans – scanned at 16-bit log or ADX OpenEXR (un-color-corrected)
  • Negative scans – scanned at 10-bit log (un-color-corrected)
  • IP scans – scanned at 16-bit log or ADX OpenEXR (un-color-corrected)
  • IP scans – scanned at 10-bit log (un-color-corrected)
  • Alexa ProRes (12-bit 4:4:4) – under some circumstances we’ve gotten good results from this format
  • ProRes 444: This can provide a better looking image than you’d get in standard dynamic range but it can be limited. Results may vary.

*”Raw” means image pixels which come straight out of a digital motion picture camera before any de-bayer operations. [Art’s note: Photo sites on a sensor are not “pixels.” A pixel is a point source of RGB information (a “picture element”) that is derived from processing raw data. “Raw” is best defined as a format that preserves individual photo site data, before that data is processed into pixels.] For the current digital cameras, this format has 13-15 stops of dynamic range – depending on the camera make and model.

You can consider raw format images the same as original camera negative scans, which also have lots of dynamic range. Either of those formats, if available, will give good Dolby Vision performance because they have wide dynamic range.

Anything less than the above will not give good Dolby Vision performance. HDCAM SR will not provide acceptable quality Dolby Vision. A good Dolby Vision test scene is one that has deep shadows and bright highlights with saturated colors. If a compromise on these characteristics must be made, use a scene that has at least two of those attributes. For Dolby Vision tests and re-mastering projects, content owners should provide a master color reference (DCP, Blu-ray master or broadcast master) for use as a color guide.


  •  Your HDR imagery will also be seen in SDR. Be sure to check your images periodically to make sure they work for both formats. It might be helpful to craft a separate LUT for SDR to try to preserve your creative intent.
  • Test every part of your optical path to ensure that lenses, filters and sensors complement each other. Verify these combinations using a 4K HDR monitor.
  • Rate the camera at half its native ISO, or create a viewing LUT that will do the same. Noise is your enemy.
  • Always record in raw or log at the highest bit depth possible. Never use a WYSIWYG or Rec 709 gamma curve.
  • 16-bit capture is the best option (currently). 12 bits will work. 10 bits can work but will leave less leeway for significant post correction. 8-bit capture is never advisable.
  • Record to the largest color gamut possible, which is either the camera’s native color gamut (such as Canon’s Cinema Gamut) or Rec 2020. Never record to Rec 709.
  • Always record to an RGB 4:4:4 codec. Avoid Y’CbCr. Any codec that is subsampled (less than 4:4:4) is a Y’CbCr codec.
  • It’s a good idea to work with camera original data all the way through post. PQ transcoding should only happen as a final step when producing deliverables.
  • Support for a broad selection of camera formats should appear shortly in on-set monitors. If a monitor is incompatible with a camera’s native signal (such as Canon Cinema Gamut/Canon Log 2) then a PQ feed from the camera should bridge the gap. (Not all cameras will output PQ.)
  • PQ should never be recorded on-set.
  • Waveforms are critical in avoiding clipped highlights. Some monitors, like the DP-V2410, will display a logarithmic waveform in nits, which greatly expands the highlight range for high precision monitoring. If this isn’t available, viewing a log signal will dedicate nearly half of a standard waveform’s trace (above 60%) to highlight information.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR < You are here
6. How the Audience Will See Your Work < Next in series

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri

Jimmy Matlosz
Bill Bennett, ASC

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography


The post A Guide to Shooting HDR TV: Day 5, “The Technical Side of HDR” appeared first on ProVideo Coalition.

A Guide to Shooting HDR TV: Day 4, “Artistic Considerations” Fri, 21 Apr 2017 17:00:13 +0000

The post A Guide to Shooting HDR TV: Day 4, “Artistic Considerations” appeared first on ProVideo Coalition.

This is the fourth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 3 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations < You are here
5. The Technical Side of HDR
6. How the Audience Will See Your Work


Specific code values in HDR data are mapped to specific nit/brightness values on the monitor. Choose a 10-bit code value at random, and that code value will cause an HDR monitor to emit a specific, repeatable amount of light. (This is true of both Dolby Vision and HDR10, but not technically true of HLG, which is an HDR format meant primarily for over-the-air broadcasting. It’s extremely unlikely that you’ll ever monitor in HLG on set. These formats are covered in Part 6.)

Without grading, an object exposed at five stops (reflected) above middle gray will generate a brightness level that corresponds to five stops above middle gray on a consumer HDR TV (if the consumer TV is capable of emitting 800+ nits). The dynamic range captured during photography will map very closely to the dynamic range displayed on the end viewer’s TV: if two objects differ in brightness by one stop on set, they will also differ in brightness by one stop on a consumer HDR television.
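That one-to-one stop mapping is just the doubling rule applied to display luminance. In the sketch below, the 25 nit anchor for middle gray is purely an assumption chosen so the numbers line up with the 800+ nit figure above; where middle gray actually lands depends entirely on the grade:

```python
# The anchor is an assumption for illustration only, not a standard.
MIDDLE_GRAY_NITS = 25.0

def nits_at(stops_above_middle_gray):
    """One stop is a doubling of light, so in an ungraded HDR pipeline
    scene stops map directly to doublings of display luminance."""
    return MIDDLE_GRAY_NITS * 2 ** stops_above_middle_gray

# Two objects one stop apart on set stay exactly 2x apart on screen;
# five stops above middle gray reaches the TV's 800 nit territory.
```

Contrast this with SDR, where a gamma curve and highlight roll-off squeeze those same stops into a far smaller, non-linear range.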

Also, whereas we’re used to seeing reduced color saturation in SDR as exposure increases, this is less likely to happen in HDR.

For this reason it’s important to understand your camera’s dynamic range and color space, from brightest highlight to darkest black and most saturated hue to least saturated hue. There is no hiding at the extremes of exposure. We’re no longer exposing for middle gray and letting the rest “roll off.” Every stop of dynamic range counts. Highlights should always be protected, and shadows lit to the point that some detail is visible.


There are two theories about shooting for HDR:

  • Don’t shoot for delivery; capture the most data possible for post manipulation.

This method dictates capturing imagery with less dynamic range than might be optimal for delivery, so the colorist has more room to push bits around in post. Rather than play highlights or shadows on the edge, you’d light and expose a little flatter so there’s plenty to manipulate later. The overall “shape” of the look still has to be created on set, so “layers” within the image (foreground, mid-ground, background) should be separated in color, contrast or brightness such that the colorist can enhance their separation, rather than try to create separation from scratch.

  • Shoot for delivery; make the image appear exactly the way you want.

This is a little trickier. It helps to know what peak brightness level to shoot for (4,000 nits, 1,000 nits, 400 nits, etc.) in order to properly monitor the image. This can be difficult, as there are a limited number of monitors on the market that are set-ready, affordable and capable of displaying higher nit levels.

The answer seems to be to shoot with a “fat digital negative.” (In film terms, this refers to overexposing film slightly, robbing from the highlights in order to make the shadow density deeper and darker, which results in reduced graininess.) In general, it is a good idea to rate the camera slower in order to crush noise, while recording to an RGB log or raw codec at the highest bit depth possible. (Recording codecs are covered in Part 5, “The Technical Side of HDR.”)

Regardless of the technique chosen, one has to be aware that the colorist has more control than ever before while also being severely constrained. There is very little shoulder or toe to the exposure curve that can hide clipped highlights or conceal noise in shadows. HDR’s constant contrast means that there’s little roll-off, or compression, at either end of the exposure curve from which more information can be pulled, or uncompressed. A 14-stop image on a six stop SDR monitor allows for a lot of leeway, but a 14-stop image on a 14-stop monitor offers considerably less.

At the same time, HDR isn’t HDR without taking full advantage of its capabilities. Director of photography Jimmy Matlosz, who has photographed several HDR test projects for Dolby Laboratories, told me he prefers to use the entire dynamic range of the camera when possible. “I try to make sure that every f/stop of dynamic range is represented.”

This won’t work for all HDR material—I doubt showrunners for the typical sitcom would appreciate this approach—but, in general, this technique should produce the most stunning images.

I anticipate the greatest challenge will be convincing our bosses that darkness is our friend. More often than not my clients speak of dark areas as if they are holes in the image, as to them black symbolizes a lack of information. “If it’s dark, something is missing. I want to see it!”

Rather than talking about black as a gap in the image, I try to talk about it as a color. Dark areas aren’t missing detail, they are accents—just like red, or yellow, or green. “I’m going to jazz this up a bit by adding some black.” It’s exactly like painting, where black is just another artistic choice on my palette.


Large highlight areas may have an adverse effect on viewers. At the very least they may cause the rest of the shot to appear considerably darker than a light meter might indicate, due to the principle of simultaneous contrast (where bright objects make adjacent dark objects appear darker, and vice versa). An on-set monitor helps in this situation, as it concentrates our attention on the image in a manner similar to how the audience will see it. Simultaneous contrast works differently when looking at a physical set than when viewing a small, bright image surrounded by black in a dark room.

It’s important to think critically about what objects or surfaces in the frame will be brighter than two stops above middle gray (or brighter than diffuse white) as that’s the realm of highlights in HDR. They should almost never be clipped as their slightest detail—or lack thereof—will be evident to the consumer. (Tiny areas of clipping may be okay.)

There is so much range for correction in HDR that underexposing overall may be preferable to clipping highlights as long as shadow detail isn’t completely lost.

Solid HDR blacks can be disconcerting and, in many cases, are less pleasing than seeing some detail in the darkest shadows. Back in the days of film it wasn’t unusual to “light for black”: rather than let shadows fall off into darkness, a DP might set a small light to impart just a hint of illumination. This gave film emulsion “something to do” and resulted in richer and more interesting shadows. HDR is similar: often it is more pleasing to see some small amount of detail in the shadows than to see nothing at all.

Dolby’s Dolby Vision Source Format document (a portion of which is reproduced in the appendix) notes that an ASC cinematographer found that plus or minus four stops from middle gray is the “sweet spot.” A subject can walk from sunlight into shadow and no stop change is necessary if the dynamic range of the scene stays within that eight-stop range. At plus or minus six stops one can see easily into the toe or shoulder, but highlight and shadow detail will start to roll off, depending on the final grade.
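Those stop offsets translate directly to linear scene values, since each stop is a doubling or halving of light relative to 18% middle gray. Here is a quick sketch in Python (illustrative only; `stops_from_gray` is a made-up helper name, not an industry term):

```python
# Linear scene value n stops away from 18% middle gray.
# Each stop is a doubling (or halving) of light.
MIDDLE_GRAY = 0.18

def stops_from_gray(n):
    """Linear value n stops above (+) or below (-) middle gray."""
    return MIDDLE_GRAY * (2.0 ** n)

# The "sweet spot" cited in the Dolby document: plus or minus four stops.
sweet_spot = (stops_from_gray(-4), stops_from_gray(+4))
# (0.01125, 2.88) -- an eight-stop range centered on middle gray

# At plus or minus six stops, detail starts to roll off:
outer = (stops_from_gray(-6), stops_from_gray(+6))
# (0.0028125, 11.52)
```

The asymmetry of the linear numbers is a useful reminder of why stops, not linear values, are the working unit: the range is symmetric only on a logarithmic scale.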

I spoke with Bill Villarreal, Dolby’s Senior Director of Content Development, who specializes in remastering feature films for HDR. He tells me that most interior scenes don’t exhibit so much contrast that they automatically fall into HDR territory. Often the brightest thing in the frame is a wall sconce, practical lamp or window. His team spends a lot of time reducing the intensity of interior highlights, as they tend to be more distracting in HDR than when projected from film.

Daytime work is more dramatic: glints on chrome bumpers and tree leaves tend to “pop” and make the image feel sharper, as small points of bright light stand out well against dark backgrounds without overwhelming them. Light streaming through windows and backlit dust in air look amazing.

His biggest pet peeve: his team spends a lot of time painting out movie lights and other equipment framed in windows that blow out to white in SDR but are perfectly exposed, and revealing, in HDR.

He echoed a sentiment communicated by several other colorists: HDR shadows can be very disturbing if they are completely without detail. His feeling is that we don’t see deep, inky blacks in daily life, and they can feel wrong in a medium that reproduces (and mimics) the dynamic range of human vision.

I view this as an artistic choice: yes, most scenes work better if there’s some detail in the darkest shadows, as HDR reproduces those with amazing depth and clarity… but this is not a hard and fast rule, and can be broken for dramatic effect. An on-set monitor can be helpful in placing detail just at the very edge of blackness, or judging the psychological effect of featureless black.

Mr. Villarreal mentioned a grading session with a famous director who looked at an actor’s closeup and commented that he’d prefer to see softer lighting in HDR, as tonal transitions appear sharper due to higher contrast. In particular, he felt lighting on faces should be much softer. He commented that the specular highlight on an actress’s forehead looked fine in SDR but was distractingly bright and contrasty in HDR, and would have looked better had it been more diffuse and a bit dimmer.

At the same time, it’s not unusual for HDR colorists to “stretch,” or add contrast, to flesh tones—making highlights a little brighter and shadows a little darker—as this makes faces appear richer.


 HDR is not simply about monitors that are brighter and darker than ever before. New technologies naturally create new storytelling opportunities for creative minds. Dolby® colorist Shane Mario Ruggieri shared his thoughts on what he calls “temporal HDR,” which is the use of HDR to create complex emotional responses in an audience over time through changes in contrast and brightness.

He posits four types of temporal HDR:

INTRA-FRAME. This is what we think of as “traditional” HDR, where a camera’s full dynamic range is accurately reproduced on a consumer television. Highlights are brighter than ever before, and shadows are deeper. Imagery takes on an almost three-dimensional feel.

INTER-FRAME. This type of HDR takes advantage of the human visual system’s ability to adapt to large changes in brightness over time. Imagine a scene in a western where the camera follows a character from a bright street, lit by noon-day sun, into a dark saloon illuminated only by indirect light filtering through dusty windows. Normally we’d hide a change in f/stop during that transition, as SDR televisions don’t have enough contrast to produce a usable image otherwise. HDR, however, does allow for this, and it’s possible to expose the image such that no stop pull is necessary and the audience adapts naturally to the brightness change. (Use of an on-set monitor might be wise to ensure that critical action takes place after the visual adaptation is complete.)

INTRA-SCENE. Dramatically different brightness levels in adjacent shots communicate emotion over time. The transitions can take place slowly, where some element of the scene (overall exposure, highlights alone, shadows alone, or overall contrast) changes incrementally across cuts, for a subtle effect, or quickly, through jarring smash cuts. Once again, the audience adapts naturally to the changes in brightness, and it’s up to the creative team to determine the forcefulness of the transition.

INTER-SCENE. Brightness levels change across scenes. Often color is used to communicate location cues to the audience, but now it’s possible to use extreme changes in brightness as well. For example, scenes set in a major metropolis might be underexposed to emotionally communicate the shady environment that exists between tall buildings, whereas a scene set in a scorching hot desert might be consistently overexposed by one, two or three stops—which is possible in HDR without losing highlight detail that might otherwise be crushed in Rec 709.

VARIABLES: Through all of these techniques, it’s important to recognize that our old friend and exposure aid, middle (18%) gray, will be of limited usefulness, as middle gray will shift with the audience’s adaptation to brightness changes. It may be possible for a cinematographer to “chase” middle gray with a light meter, but they’ll likely need to find their new middle gray value by sitting in front of an HDR monitor long enough that their vision adapts to the brighter or darker image in the same manner as the intended audience. The cinematographer can then visually identify a new middle gray value in the scene and adjust their light meter accordingly.

While HDR content can be exposed solely by light meter, an on-set monitor is the best way to evaluate whether such temporal changes produce the desired emotional effect, and to help ensure that critical action occurs only after the audience has time to adapt to large overall changes in brightness.


  • HDR’s strength is high dynamic range and extended color saturation, but this strength reveals weaknesses in optics, filtration, and how those interact with a camera’s sensor design. Test in advance to check that your particular combination of these will work to your satisfaction.
  • Shooting for the grade—by capturing lower contrast images that don’t push the extremes of exposure—gives a colorist more control in post. Shooting for delivery may limit the colorist’s choices at the extremes of dynamic range (mostly in the deepest, darkest shadows just above the noise floor). Middle gray plus/minus four stops is the “sweet spot” of exposure, although HDR can be pushed much further.
  • Clipped highlights should be avoided. Underexposure may be preferable to clipped highlights.
  • Completely black shadows can be disturbing as we never see them in nature. Shadows might need additional illumination to bring out textures near black.
  • Don’t hide mistakes behind overexposure or underexposure. This works for SDR and film, but does not work in HDR.
  • HDR’s higher contrast makes highlight/shadow transitions look harder than they do in SDR. Softer light may be preferable in HDR, particularly on faces.
  • HDR can be exposed solely by meter, but it helps to have a monitor nearby to see the image the way consumers will in order to better judge physiological and emotional responses to the picture.
  • HDR within the frame is the most basic implementation, but HDR can also be used to impart emotion through changes in contrast, brightness, darkness and saturation across frames, shots and scenes. It helps to have a monitor on set to assess how these transitions play against each other.



1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations < You are here
5. The Technical Side of HDR < Next in series
6. How the Audience Will See Your Work

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri

Jimmy Matlosz
Bill Bennett, ASC

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

The post A Guide to Shooting HDR TV: Day 4, “Artistic Considerations” appeared first on ProVideo Coalition.

A Guide to Shooting HDR TV, Day 3: “Monitor Considerations” Thu, 20 Apr 2017 17:00:34 +0000

The post A Guide to Shooting HDR TV, Day 3: “Monitor Considerations” appeared first on ProVideo Coalition.

This is the third installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 2 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations < You are here
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work

There are several crucial considerations when choosing an on-set monitor:

  • How does the monitor process color information?
  • How does it handle out-of-range highlights?
  • Will the monitor be viewed in total darkness or under some form of ambient illumination?
  • What happens to the display when large areas of the screen approach maximum intensity?


The camera used on the Mole-Richardson shoot fed an unprocessed 10-bit raw signal to the DP-V2410 monitor, where it was demosaiced, passed through ACES® and mapped to HDR. The color gamut setting was Canon Cinema Gamut > Rec 2020, which scales the camera’s large native color gamut to fit within the Rec 2020 color gamut. That result is then scaled once again to fit within the display’s color gamut, which is approximately that of P3.

There are exact settings for P3 and Rec 709 as well, although Rec 709 is not considered suitable for HDR.

HDR monitors can only, at present, reach P3 saturation levels or slightly beyond, but data should be captured in Rec 2020—or a camera’s native color gamut, if it is larger—for future-proofing. As we saw in Part 2, consumer monitors are generally considered to be HDR-ready if they can reproduce 90% of P3’s gamut, and indeed most programs are currently mastered at P3 saturation levels—but this will likely not be the case for long.

The term “color gamut” brings to mind the traditional scalloped CIE 1931 color chart, but that doesn’t tell the entire story. That chart represents only a thin slice of the full color gamut, cut from the middle of its tonal scale. The full range of a color gamut is a 3D shape that bends and narrows at its extremes, as certain hues can’t be fully saturated at the limits of luminance. Dolby® Laboratories has coined the term “color volume” to better describe this shape.

(above) A comparison between the “flat” CIE representation of SDR’s color gamut, using a slice through the color volume at middle gray, versus two different 3D representations of the same gamut that show how colors saturate and desaturate across the luminance range. (©2016 Dolby® Laboratories.)

(above) This representation of SDR’s color volume, shown within HDR’s, illustrates Dolby VisionTM‘s increased overall saturation as well as improved saturation in highlights and shadows. (©2016 Dolby® Laboratories.)

“Color volume” is a wonderful description as it communicates the concept of a three dimensional color space in a more intuitive manner than does the traditional 2D CIE chart. It reminds us that changes in saturation occur across the full luminance range: highlights and lowlights may not be as saturated as mid-tone hues, and some hues may appear more saturated than others at different brightness levels. The traditional CIE chart shows none of this as it represents only a narrow slice of mid-tone hues.

I will, however, use the more commonly accepted term “color gamut” throughout the remainder of this article, as that’s the term used in the various HDR specification papers.


Canon’s HDR range scales out-of-range highlights to fit within the display limitations of the monitor, and can be found in all Canon HDR monitors. Setting HDR range to 4,000 scales the dynamic range of a 4,000 nit highlight to fit within the DP-V2410’s 400 nit container. Image contrast is distorted—mid-tones and blacks will darken—but highlight detail, contrast and saturation are easily evaluated.

As HDR range renders the image unusable except for examining highlights, it can be toggled via a function key for intermittent use. Simply program the maximum nit level desired (from 400 to 4,000) and push a button.
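As a rough illustration of what such a remap does, here is a simple linear luminance scale in Python. This is a sketch of the behavior described above, not Canon's actual algorithm, and `scale_to_range` is a hypothetical helper name:

```python
def scale_to_range(nits, source_peak=4000.0, display_peak=400.0):
    """Hypothetical helper: linearly scale luminance so that source_peak
    lands exactly at display_peak. Ratios between tones (contrast in
    stops) are preserved; every tone below the peak simply gets darker."""
    return nits * (display_peak / source_peak)

# A 4,000 nit specular highlight now fits the 400 nit panel...
highlight = scale_to_range(4000.0)  # 400.0
# ...but middle tones darken by the same factor of ten:
mid_tone = scale_to_range(100.0)    # 10.0
```

Because a linear scale preserves ratios, highlight contrast and detail remain judgeable even though the overall image goes dark, which matches the behavior described above.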

(above) What the camera captures may not always fall within the capability of a monitor to display.


(above) Code values that exceed what the monitor can display will be truncated, resulting in clipped highlights.


(above) Temporary use of the HDR range feature allows the user to remap the camera’s dynamic range to fit within the monitor’s dynamic range. Mid-tones and shadows will darken, but highlight contrast, detail and saturation can be visually assessed.

 At present, given HDR’s rigid encoding scheme, this is the only method available for viewing out-of-range highlights short of using a custom LUT. The good news is that this feature is built-in and ready to go, so users will never be left without a way to visually check highlights that fall out of the range of their monitor. (At present, unless one has a very large, very expensive and very power hungry 4,000 nit monitor on set, there is no other way to evaluate highlights.)


Tests show that OLEDs and LCDs thrive in very different ambient lighting conditions.


(above) 1,000 nit OLED display in total darkness. This is a DSC Labs XYLA 20 stop dynamic range chart.

(above) 1,000 nit LCD display.

In perfect darkness the OLED display shows more contrast in the shadows when compared to the LCD display. The steps drop off more quickly due to OLED’s deeper black levels. Because of backlight leakage, which slightly elevates black levels, the LCD screen can never be as dark as the OLED screen.

(above) 1,000 nit OLED in uncontrolled lighting conditions.

(above) 400 nit LCD (Canon DP-2410) in uncontrolled lighting conditions.

 On the other hand, any ambient light striking the OLED monitor results in a visual loss of detail near black, as its shiny black surface readily reflects its surroundings. A shiny surface can appear infinitely black in an environment with no ambient light, but few shooting environments are that dark.

The slightly lifted blacks of the LCD display accurately portray shadow detail in situations where ambient light cannot be controlled.


It takes a lot of power to drive an HDR display, and with power comes heat. OLED displays dissipate heat horizontally, across pixels, so the display will automatically dim large highlight areas in order to protect the monitor. LCD monitors are subject to similar heat issues, but to a lesser degree. Both contain ABL circuits—for “Automatic Brightness Limiting”—that will dim the screen overall to prevent heat from destroying the monitor.

Here a 1,000 nit OLED monitor is compared to the 400 nit Canon DP-V2410 LCD monitor:

Left: OLED (1,000 nit). Right: LCD (Canon DP-V2410 400 nit display). Large areas of white cause the OLED to dim dramatically, while the LCD display, although dimmer initially, remains fairly consistent in brightness.

The white square in the first image is designed to clip on both displays. As the box, which requires maximum power to display, increases in size, both displays reduce brightness overall to control heat dissipation. This is a feature not only of professional displays, but of consumer displays as well. It is wise to take this into account while shooting.

There are two post terms that will weigh heavily on how your material is seen by consumers: MaxCLL and MaxFALL. They are defined in the CTA-861.3 specification (which builds on SMPTE ST 2086 mastering display metadata) and apply to HDR10 encoding (covered in part 6, “How the Audience Will See Your Work”).

Maximum Content Light Level (MaxCLL) is a metadata value that records the nit level of the brightest pixel in the entire program. If a consumer television can’t produce that level of brightness then it will attempt to compensate in some way.

Maximum Frame Average Light Level (MaxFALL) is a metadata value that records the average brightness of every pixel in the brightest frame of a program. This is another piece of information used by a consumer television to modify a program whose brightest scenes may cause its ABL circuits to activate.

As every HDR monitor to date suffers from heat dissipation issues, and very bright pixels generate a lot of heat, every monitor has a MaxFALL value. When the image exceeds MaxFALL, the display clamps down on brightness so it isn’t damaged by excess heat. That’s what we’re seeing above: the white square is coded to be the brightest white possible, and the monitor is happy to reproduce that intensity… to a point. Once the image’s MaxFALL value exceeds the television’s MaxFALL value, the television will attempt to save itself by reducing overall image brightness. And every television and on-set monitor has a different MaxFALL limit, governed by its maximum brightness level and how well its manufacturer thinks it can dissipate heat.

My rule of thumb is that ABL circuits tend to activate when more than 10% of the screen reaches maximum intensity, but larger highlight areas of lesser intensity can also trigger ABL protection.
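These definitions are easy to express in code. The sketch below (NumPy, with toy data) computes MaxCLL and MaxFALL from per-pixel luminance in nits; `abl_risk` is my own illustrative check for the 10% rule of thumb, not part of any specification:

```python
import numpy as np

def max_cll(frames):
    """MaxCLL: the brightest single pixel anywhere in the program."""
    return max(float(f.max()) for f in frames)

def max_fall(frames):
    """MaxFALL: the average luminance of the brightest frame."""
    return max(float(f.mean()) for f in frames)

def abl_risk(frame, peak_nits, threshold=0.10):
    """Rule-of-thumb check: does more than ~10% of the screen sit at
    maximum intensity? If so, ABL dimming is likely."""
    return float((frame >= peak_nits).mean()) > threshold

# Two toy 2x2 "frames", values in nits:
frames = [np.array([[100.0, 50.0], [50.0, 50.0]]),
          np.array([[1000.0, 10.0], [10.0, 10.0]])]
print(max_cll(frames))   # 1000.0
print(max_fall(frames))  # 257.5 (the second frame's average)
```

Note that a single hot pixel drives MaxCLL but barely moves MaxFALL, which is why small specular highlights are safe while large bright areas are the ones that trigger dimming.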

It’s important to keep this in mind when shooting HDR. Large, bright highlight areas may cause ABL dimming to occur; if those areas are important for artistic effect, it may be necessary to raise ambient light levels to compensate. Otherwise, exposing those areas to be less intense will retain detail in the rest of the scene.

It should rarely be necessary to drive large screen areas to maximum intensity. The brightest highlights should generally be the smallest, as they tend to stand out better against a wide variety of backgrounds.

Remember, any neutral tone that is two stops brighter than middle gray is going to appear white. Beyond that, you’re painting with intensity and creating an artistic statement that physically impacts viewers while battling the limitations of display technology.

Care should be taken to prevent large bright backgrounds, such as windows, from reaching the maximum intensity of the average consumer monitor. If they are a small part of your wide shot, plan ahead so they don’t become a large part of a closeup. Failing to plan for this may cause your colorist to make some compromises across both shots. (Some level of brightness mismatch is okay as we often see this happen optically: out-of-focus specular highlights drop by about one stop in brightness as they diffuse across a dark background, and of course we cheat closeup lighting all the time.)

 Watching for MaxFALL excursions is only one of the many reasons it helps to have an HDR monitor on location. HDR is prone to motion judder at fast panning speeds due to its increased contrast, especially at frame rates as slow as 24fps (120fps has tested well for broadcasting 4K sports). Movement at slow frame rates also reduces resolution in finely detailed images due to motion blur: this can be seen in 4K sports programming shot at 24, 25 or 30fps, where richly textured grass dissolves into a blur as soon as the camera moves. HDR monitors are especially useful for critically evaluating and/or eliminating lens flares, which are brighter, more saturated, and potentially more distracting in HDR than in SDR. Veiling glare can also eliminate the HDR effect in dark shadows, and should generally be avoided.


Rec 2100, which defines modern HDR, does not specify resolution. It is entirely possible that we could see 1920×1080 HDR televisions come to market. 4K televisions have become so cheap, though, that it’s imperative that we monitor on set in 4K when shooting HDR. The difference in resolution, in conjunction with enhanced dynamic range and perceived constant contrast, does not allow for mistakes: everything in the frame will be visible in 4K.

The Canon DP-V2410 4K monitor is positioned well as a low-cost and physically manageable on-set monitor, as brighter monitors tend to have a larger footprint and require more power. (The next generation monitor, the DP-V2420, qualifies as a Dolby Vision mastering monitor and complies with the ITU-R BT.2100-0 HDR standard, which specifies a peak luminance of 1,000 nits and a minimum luminance of 0.005 nits. It’s basically the post version of the V2410, although it could be used on location as well.)

As part of my research I asked whether it was possible to use a consumer HDR television as a cheap on-set reference monitor. I learned that this is not currently possible as consumer televisions receive HDR signals over HDMI, which carries Dolby® or HDR10 metadata—created in a color grade—that is crucial to shaping the HDR image to fit the television’s capabilities. Cameras don’t output this data, and without it consumer televisions will default to SDR.

There is a possibility that tools will be developed in the future to overcome this problem.


  • The color gamut of a professional monitor will likely approximate P3. In spite of this, always shoot to the largest color gamut available for future proofing.
  • Color gamuts are three dimensional. Colors will saturate differently across varying luminance levels.
  • Out-of-range highlights can be assessed by toggling Canon’s HDR range feature.
  • OLED monitors work well in perfect darkness. LCD monitors work well in low ambient light.
  • Large bright areas that push a monitor beyond its Maximum Frame Average Light Level (MaxFALL) will cause it to dim in order to avoid heat damage. This limit varies from monitor to monitor. It can be avoided by limiting maximum brightness to small highlights, or by raising fill light levels to compensate for possible monitor dimming. The 10% rule (ABL activates when 10% of the screen reaches maximum intensity) is a good general guideline, but not an absolute rule.
  • HDR is unforgiving. Always monitor in 4K in a color gamut not less than P3.


1. What is HDR?
2. On Set with HDR
3. Monitor Considerations < You are here
4. Artistic Considerations < Next in series
5. The Technical Side of HDR
6. How the Audience Will See Your Work

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography


A Guide to Shooting HDR TV: Day 2, “On Set with HDR” Wed, 19 Apr 2017 17:00:40 +0000

The post A Guide to Shooting HDR TV: Day 2, “On Set with HDR” appeared first on ProVideo Coalition.

This is the second installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 1 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR?
2. On Set with HDR < You are here
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work

I shot my tests at the Mole-Richardson stage in downtown Pacoima. My goal: illustrate the differences between working in SDR and HDR on set.

As there’s no way to view HDR in print or on the web using standard displays, I opted to compare the SDR and HDR monitors by photographing them side-by-side with a Canon 1DX Mk2, exposing for HDR’s highlights. All image pairs were captured simultaneously, and minimally corrected for keystone distortion in Adobe® Lightroom® 6.

One of the first things I photographed was a color chart, as I wanted to ensure the best color possible when I processed the Canon 1D’s raw images at a later date. This confirmed what I’d learned elsewhere: HDR is not simply a brighter image.

Left: DP-V2410 monitor set up for SDR. Right: DP-V2410 monitor set up for HDR.

When I viewed my DSC Labs OneShotTM color chart in both SDR and HDR, the white patch appeared nearly the same brightness on both monitors. That patch reflects about 90% of the light striking it and is known as “reference white.” It is a standard in the television industry, and it falls at 100 IRE on a waveform monitor.

The white patch appears dimmer in HDR because we have the HDR Level feature set to 800 nits, to better evaluate the highlights on the C-stand. Scaling the dynamic range of the captured image to fit within that of the monitor causes mid-tones and shadows to darken, but it opens up the highlights for critical evaluation of detail and color.

The specular highlight on the C-stand arm and knuckle is flat in SDR, while the same highlight in HDR, viewed through HDR range, shows much more contrast and shape. HDR doesn’t “roll off” highlights; rather, it gives them room to breathe.

In SDR, the white and black chips on this chart are where highlight dynamic range ends. In HDR, they are where dynamic range begins.

Mid-tones are key in SDR as they retain the most contrast. Nearly every WYSIWYG gamma curve focuses on making the middle four stops of dynamic range as contrasty as possible, while compressing the highlights and shadows so that viewers still see some detail, but without much contrast. These regions “flatten out,” although we almost never notice, because the human visual system does the same.

For example, a variation of one stop in the middle of the SDR tonal range is the difference between middle gray and light flesh tone, but the difference of a stop between five and six stops above middle gray is the difference between barely different shades of white. Detail—which is primarily revealed through contrast—is retained, but just barely.

The same happens in SDR shadows, but to a lesser extent. The difference between five and six stops below middle gray is often noticeable in SDR but this is because modern SDR LED displays aren’t very dark, which results in natural lowlight compression.
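That roll-off is easy to see numerically. The sketch below uses a Reinhard-style curve as a stand-in for a typical WYSIWYG knee (no real camera uses exactly this function); note how each additional stop above middle gray buys a smaller and smaller slice of the output signal:

```python
# Toy highlight roll-off (Reinhard-style), standing in for the knee of
# a WYSIWYG SDR curve. Not any real camera's transfer function.
def sdr_tone_map(linear):
    return linear / (linear + 1.0)

MIDDLE_GRAY = 0.18

for n in range(7):
    lo = MIDDLE_GRAY * 2 ** n        # n stops above middle gray
    hi = MIDDLE_GRAY * 2 ** (n + 1)  # one stop higher
    delta = sdr_tone_map(hi) - sdr_tone_map(lo)
    print(f"stop +{n} to +{n + 1}: signal delta = {delta:.3f}")
# The deltas shrink rapidly in the top stops: by +5 to +6 stops, a
# full stop of scene light barely changes the output signal at all.
```

In HDR delivery there is far less of this compression at capture, which is exactly why a one-stop difference high in the range still reads as a visible difference on screen.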

(above) Tones at the exposure limits are compressed and lose contrast in SDR, while in HDR they retain much the same contrast that the camera saw at the time they were captured.

As the HDR display’s dynamic range and color gamut better match what the camera saw in the first place, it reproduces tones and hues that are more true to life than anything seen before on a television display, and certainly with more contrast and vividness than can be seen in traditional digital cinema theaters.

I’ve coined a phrase for this: “constant contrast.” Contrast occurs in equal steps across the grayscale. There is no compression in shadows or highlights, as in SDR—or even film, where highlight roll-off is part of the look. Every stop of dynamic range is equally important in HDR. (This applies primarily to image capture. Highlights and shadows may be rolled-off in the color grade or when broadcast, but it’s good practice to assume this won’t happen.)

Update: Nick Shaw, of Antler Post, points out that true “constant contrast” would not reproduce the scene in a visually pleasing way due to two visual quirks:

The Hunt Effect: Colorfulness increases with luminance. At high luminance, an object will look much more colorful than the same object at low luminance.

The Stevens Effect: Contrast increases with luminance. At high luminance, whites appear whiter and blacks appear blacker.

Because of this, there is always some rendering intent applied to an HDR image, as a scene without any perceptual modification will not produce the desired result.

The core of ACES can be found at the heart of Canon HDR monitors, so there is always an output transform applied to the image to account for these characteristics, among others.

Rather than “constant contrast,” the better phrasing might be “perceived constant contrast.”

Consumer HDR specifications call for displays to have a dynamic range of at least 13 stops to be considered HDR capable, which is more than twice that of SDR. If two tones on set differ in brightness by one stop, they will likely appear to differ by one stop on a consumer display at a later date, no matter where they fall within the camera’s dynamic range. In SDR this is true of only the middle two to four stops of dynamic range.
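That one-stop-stays-one-stop behavior is supported by the delivery encoding itself. The PQ curve (SMPTE ST 2084), which underlies most HDR distribution, spends its code values at broadly similar per-stop increments across the normal viewing range, unlike a gamma curve. A sketch of the PQ inverse EOTF (constants are from the standard; the commented output values are approximate):

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> [0, 1]
# signal. Constants come from the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# 100 nits encodes to roughly half signal (~0.51). A one-stop doubling
# consumes a broadly similar slice of signal whether it happens at
# 100 nits or at 1,000 nits:
d_mid = pq_encode(200) - pq_encode(100)     # ~0.07
d_high = pq_encode(2000) - pq_encode(1000)  # ~0.08
```

This is what lets a consumer display reconstruct the brightness relationships the camera captured, rather than compressing the top stops into near-identical code values.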

(above) This wide shot, captured in Canon Raw/Canon Log 2 and exported from DaVinci Resolve® 12.5, was lit almost entirely by two 5Ks and a nine-light Maxi Brute bounced off the backdrop outside the window. Some of that light bounced off the ceiling and added some fill to the set, but the majority of the light came through the window.

The C300 Mark II camera captured nearly the entire dynamic range of the Mole-Richardson scene, from the Translight background at six stops above middle gray (reflected) to the foreground at F0.5 (incident). When shooting for SDR I’d be happy to see any detail outside of that window, regardless of how flat or desaturated it might be. I can also see detail in nearly every shadow. This image looks much like what I saw by eye.

(above) This frame was captured by DSLR from a Canon DP-V2410 monitor screen displaying an SDR image. The exposure is set to capture highlights, as the still camera’s dynamic range cannot capture the full contrast range of an HDR display.

The image above is typical of what happens when we shoot a high contrast scene with a wide dynamic range camera and view it on a limited contrast screen: we’re so thrilled that we can see ANY detail outside that window that we don’t mind that it’s low contrast and desaturated. We’re just happy that it’s not “clipped video white.”

(above) This next frame was captured simultaneously from an adjacent DP-V2410 monitor set up for 400-nit HDR. The increased contrast and color gamut reveal the golden hue of the lightbulb’s filament against the window, a detail that is completely lost in SDR, while also revealing the color of the sky and the complex textures of the building exterior.

HDR shadows are much darker than SDR’s: the top of the globe, lit by bounce light off the ceiling, is a darker tone in HDR, as is the side of the globe lit by the smaller lightbulb in the foreground. The DP-V2410’s blacks are significantly deeper than those of an SDR monitor, and they appear as significantly crushed blacks in a DSLR photograph, whose dynamic range is inferior.

Left: SDR. Right: HDR.

It’s impressive to see so much detail in a backdrop that was lit bright enough that it illuminated an interior set. Also, notice how much darker the shadows are in HDR. The eye can see this, although a still camera can’t.

(above) Frame grab exported from Resolve.
This is a flattened version showing all the image data that’s available for grading.

(above) This is a still photograph taken of the DP-V2410’s screen while in SDR mode. The still camera captures nearly the entire dynamic range of the image.

(above) The same image, photographed with the DP-V2410 set to HDR mode. Highlights are much brighter and more saturated than in SDR, and the increased shadow contrast exceeds the dynamic range of the still camera.

The differences between SDR and HDR are striking. The SDR bulb looks bland by comparison. The filament is bright but not saturated, there’s no detail in the curtains in the background, and there’s very little color saturation in the highlights. The Rec 709 color gamut is so small, and the dynamic range is so low, that any saturated hue that falls too far outside the mid-tone range (plus or minus two stops) will flatten out and desaturate.

The HDR bulb’s filaments, however, are golden yellow. Detail is visible in the curtains behind the background lightbulb, and the warmth of that bulb’s filament clearly separates it from the background. In the SDR image, there’s no difference in hue between the color of the curtains viewed directly and their color as seen through the bulb’s tinted glass, but the HDR image reveals a significant difference in both hue and brightness. HDR’s deeper shadows make the bulb appear more three dimensional due to the increased contrast between the light and dark reflections in its surface.

The DP-V2410 monitor, in its native mode, adds two stops of dynamic range to the typical SDR set. That, and the increased color gamut, allow highlights to retain their saturated hues to the point that the monitor screen feels more like a window than a display. HDR’s blacks are much sharper and richer to the point where they feel “chiseled,” as if they’ve been carved out of the display.

Throwing focus to the background is revealing:

(above) SDR

(above) HDR

The soft but warm foreground filament almost disappears in the SDR image, as reduced contrast and saturation cause it to blend with the reflection of the key light in the glass bulb. The HDR image, with its increased color gamut and highlight contrast, shows clear separation between the reflection in the bulb and the filament itself. This illustrates not only HDR’s broader color gamut (currently defined as reproducing at least 90% of the P3 color gamut), but also what happens when highlights are given room to “breathe.” Highlight roll-off preserves some detail when mapping a high dynamic range image to a low dynamic range display, but much more is lost than retained.

The increased highlight contrast in HDR is startlingly obvious even when photographing something as simple as a sheet of paper. Here we lit a product manual such that the vertical surface of the rolled-back page received several stops more light than that of the flat page:

(above) SDR

(above) HDR

SDR’s highlight compression is obvious, whereas the HDR image looks much the same as what I remember seeing by eye. Highlight detail and contrast are retained, and HDR’s increased shadow contrast makes the transition between highlight and shadow look harder and sharper than in SDR.

The typewriter (below) was backlit by a 4’x4’ frame of heavy diffusion with full CTO gel clipped across the bottom and full CTB clipped across the top.

(above) SDR

(above) HDR

The increased color saturation and contrast in the HDR highlights is stunning, and the depth of the shadows makes the keys feel three dimensional. The SDR keys feel flat and dull by comparison.

(above) HDR waveform via DSLR, exposed for highlights. The foreground lightbulb trace is buried in that of the background bulb.

Adjacent light sources don’t always stand apart on waveform monitors, and it can be difficult to get a sense of how bright either of these sources is simply by looking at waveform peaks. The HDR monitor visually reveals the brightness and saturation differences between the two filaments, which appear nearly identical in SDR, and it is the only way to assess whether the brightness and saturation of an element within the frame is artistically appropriate.

In the image above, the HDR monitor’s waveform is looking at the input signal. Occasionally the dynamic range of the camera will exceed that of the monitor, and highlights will appear to be clipped on screen. If it isn’t expedient to toggle the HDR range function to verify whether they are truly clipped, highlight integrity can be checked by looking at a log representation of the input signal.

The Canon DP-V2410 monitor can be switched into SDR mode at the touch of a function key, which is handy given that SDR is not going to disappear any time soon. My parents watch TV on an old NTSC tube television that cuts the image off at title safe. No matter how stunning HDR is, they aren’t going to spend money on an HDR set—and yet they still want to see a decent image on their ancient CRT. We’ll have to keep them in mind for a bit longer.


  • HDR displays are capable of reproducing at least twice the contrast and dynamic range of SDR.
  • It is no longer possible to “hide” flaws in highlights and shadows, as HDR rolls those tones off, or compresses them, far less than SDR does.
  • “Constant contrast” means that the camera’s dynamic range is mapped to the HDR display in a 1:1 fashion. This may change in color grading and distribution, but this is likely how you will monitor the image on set and may represent a “best case” scenario for consumer viewing.
  • In general, current HDR monitors are capable of displaying P3 color. Consumer monitors must be capable of reproducing 90% of P3’s color gamut to be rated suitable for HDR.
  • Color gamuts are three dimensional shapes (“color volumes”). Saturation changes with luminance. HDR highlights and lowlights are much more saturated than SDR due to expanded color gamut, dynamic range and lack of roll-off.
  • Waveforms don’t tell the whole story. Display contrast is so high that an on-set monitor helps considerably in judging the physiological and emotional impact of both highlights and shadows.
  • Programming will be released in both HDR and SDR for the foreseeable future. The two are so visually different that evaluating both kinds of images on set may aid in quality control.


1. What is HDR?
2. On Set with HDR < You are here
3. Monitor Considerations < Next in series
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Mole Richardson
Larry Parker

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

The post A Guide to Shooting HDR TV: Day 2, “On Set with HDR” appeared first on ProVideo Coalition.

A Guide to Shooting HDR TV: Day 1, “What is HDR TV?” Tue, 18 Apr 2017 23:43:16 +0000

The post A Guide to Shooting HDR TV: Day 1, “What is HDR TV?” appeared first on ProVideo Coalition.

This is the first installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.


1. What is HDR? < You are here
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work

There was no shortage of HDR displays on the NAB 2016 show floor. At each booth I sought out someone of authority and asked, “What do I—as a cinematographer—need to know about shooting for HDR release?” I received only one response, from a colorist demonstrating HDR grading techniques:

“Leave it to me, I’ll fix it in post.”

This is not something one says to a cinematographer. We consider ourselves to be the “authors of the look,” and while we don’t mind colorists enhancing our artistic vision, we certainly don’t want them to replace it.

Only one manufacturer took my question seriously, and while they had no answer for me at the time they vowed to help me find one. Several months later I found myself standing on a soundstage near Hollywood with a Canon C300 Mark II camera, two Canon DP-V2410 monitors, and a small group of Canon technicians.

Note: While Canon commissioned this article series and helped facilitate this test, the opinions in this series of articles are solely my own, and are based on my honest opinions of what I’ve seen.

My goal: evaluate the differences between SDR (standard dynamic range) and HDR (high dynamic range) displays in real time, with an eye toward teaching cinematographers how best to light and expose for this new medium. That research, in combination with knowledge gleaned from several cinematographers and colorists who currently work in HDR, led to the article you are reading.

There are a lot of questions still to be answered about HDR grading and delivery, but by the end of this article you should know enough to shoot an HDR project and thrive, rather than simply survive.


Modern cameras capture a dynamic range of 14-16 stops. Modern televisions display six linear (uncompressed) stops. When the “Rec 709” HDTV standard (known from here on as SDR) came into being, six stops of linear dynamic range seemed enough. This captures the dynamic range of non-shiny surfaces in the real world: matte white is about two stops brighter than middle (18%) gray, while matte black is two to three stops darker.

The X-Rite® color checker illustrates how narrow a range this is, and how critical those few stops can be. The difference between the white and black patches is only about four stops, and yet that covers a range of tones that appears black, white, and every shade of gray.
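Those stop values follow from a base-2 logarithm of reflectance relative to 18% middle gray. A minimal sketch, assuming typical reflectances of about 90% for matte white and 3% for matte black (illustrative figures, not measurements from the chart):

```python
import math

def stops_from_middle_gray(reflectance):
    """Stops above (+) or below (-) an 18% middle-gray reference,
    given a surface's diffuse reflectance (0.0 to 1.0). Each
    doubling or halving of reflectance is one stop."""
    return math.log2(reflectance / 0.18)

print(round(stops_from_middle_gray(0.90), 1))  # matte white: about +2.3 stops
print(round(stops_from_middle_gray(0.03), 1))  # matte black: about -2.6 stops
```

The spread between those two values is roughly five stops, which squares with the four-to-six-stop matte range described above.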

SDR doesn’t account for specular highlights, deep shadows, or uncontrolled lighting conditions in general. Now, 20 years after SDR was defined, the worst professional camera captures twice the dynamic range that SDR can display. New monitor technologies (OLED and local dimming LED) make it possible to display a camera’s full dynamic range with minimal tonal compression and more saturated colors than ever before. Where SDR was content with six stops of dynamic range, a modern consumer television can’t be certified as HDR ready without being able to display at least 13.

HDR, however, is not simply brighter television. A good HDR display will reproduce nearly the entire dynamic range of any professional camera with the same contrast that existed when the image was captured. No longer can we hide cables in the shadows and light fixtures in the highlights, as both now display as much contrast, saturation and detail as mid-tones.


(above) This graphic best illustrates the differences between SDR and HDR. At the top, the full contrast range of a scene is reproduced by the human visual system. SDR, in the middle, reduces the original contrast of a scene in order to display it on an SDR television, with dramatically less dynamic range than can be perceived by the human eye. At the bottom, HDR preserves most of the dynamic range of the original scene, displaying it on a consumer television with contrast that closely matches what the eye would have seen had it been present during image capture. (Illustration provided by ©2016 Dolby® Laboratories.)

Display brightness is measured in candelas per square meter, or “nits.” A calibrated studio SDR monitor will emit a maximum of 100 nits. Current HDR displays are capable of emitting anywhere from 400 to 4,000 nits, and the Dolby Vision™ specification stipulates that HDR image data can be as bright as 10,000 nits. (Currently most programs are mastered with a peak luminance of 1,000 to 4,000 nits.)

Depth of shadows is as important to HDR as intensity of highlights. Where a normal SDR studio monitor may reproduce black between 1 and 2 nits, the Canon DP-V2410 HDR monitor is capable of reproducing black at 0.025 nits.

The Canon DP-V2410 400 nit HDR display.

SDR’s six stop mid-range, from matte black to matte white, will look the same on both an SDR and HDR set. The difference is that HDR goes way beyond, giving us shades of white and black that are brighter and darker than anything we can see in SDR. And, because we aren’t cramming 14 stops of captured dynamic range into a six stop bucket, every stop is displayed with roughly the same contrast. This results in an increase in highlight and shadow detail as well as color saturation.

Displays come in various flavors of HDR, and it’s helpful to have a reference as to how bright or dark they are in relation to each other. Doubling or halving a light’s intensity results in an exposure change of one f/stop, so it’s fairly simple to determine how many additional f/stops of highlight dynamic range a 1,000 nit HDR display can reproduce over a 100 nit SDR display:

(above) Nits vs. stops, as reproduced by a typical HDR display. Thanks to Nick Shaw of Antler Post for this data.
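The conversion is simple to reproduce: headroom in stops is just the base-2 log of the luminance ratio. A quick sketch (function names are mine; the 1.5-nit SDR black level is a representative assumption):

```python
import math

def stops_above_sdr_white(peak_nits, sdr_peak=100.0):
    """Highlight headroom over a 100-nit SDR reference white.
    Each doubling of luminance adds one f/stop."""
    return math.log2(peak_nits / sdr_peak)

def total_display_stops(peak_nits, black_nits):
    """Total displayable dynamic range: the base-2 log of the
    display's peak-to-black contrast ratio."""
    return math.log2(peak_nits / black_nits)

for peak in (400, 1000, 4000, 10000):
    print(f"{peak:>6} nits: +{stops_above_sdr_white(peak):.1f} stops over SDR white")

# A 400-nit display with 0.025-nit blacks spans ~14 stops;
# a 100-nit SDR monitor with ~1.5-nit blacks spans only ~6.
print(round(total_display_stops(400, 0.025), 1))
print(round(total_display_stops(100, 1.5), 1))
```

Note that most of the DP-V2410’s extra range comes from its deep blacks rather than its 400-nit peak, which is why shadow depth matters as much as highlight intensity.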

What’s tricky is that HDR code values are absolute: if a monitor can only display five stops above middle gray, and a camera captures six, that monitor will not automatically display that last stop. Highlights that fall into that last stop won’t be clipped in the recording, but they’ll be clipped on the monitor.

There are several schemes that allow for the reproduction of highlights that fall beyond the capabilities of a consumer television, but none of these are appropriate for professional use. They will be covered later. Canon professional HDR monitors offer a feature called HDR range, which scales the HDR signal to fit the constraints of the monitor. This is covered in part 3, “Monitor Considerations.”
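In code terms, absolute mapping behaves something like the following. This is a deliberate simplification (real monitors may tone-map rather than hard-clip, and the function names are mine), but it captures the difference between clipping and a scale-to-fit mode:

```python
def displayed_nits(signal_nits, monitor_peak):
    """HDR code values are absolute: a code value that decodes to more
    nits than the panel can emit is simply clipped on screen, even
    though the recording still holds the detail."""
    return min(signal_nits, monitor_peak)

def range_scaled_nits(signal_nits, signal_peak, monitor_peak):
    """Sketch of a 'scale to fit' mode: the whole signal is scaled so
    its brightest value lands at the monitor's peak, trading absolute
    accuracy for highlight visibility."""
    return signal_nits * (monitor_peak / signal_peak)

print(displayed_nits(4000, 1000))           # 1000: the top two stops clip
print(range_scaled_nits(4000, 4000, 1000))  # 1000.0: peak preserved by scaling
print(range_scaled_nits(1000, 4000, 1000))  # 250.0: everything else darkens
```

The trade-off is visible in the last line: scaling rescues clipped highlights at the cost of darkening the rest of the image, which is why it’s a monitoring tool rather than a mastering target.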


  • HDR is not simply increased brightness, but increased dynamic range in the monitor itself. SDR’s dynamic range is only six stops from brightest white to darkest black. HDR boasts 13 stops or more.
  • Matte white is two stops brighter than middle gray, while matte black is two to three stops darker. Display brightness values beyond those limits fall into the realm of HDR.
  • HDR code values map to specific nit values. Nit values beyond the display capabilities of a professional monitor will be clipped unless viewed in a different mode that scales the full signal to fit within the monitor’s dynamic range.


1. What is HDR? < You are here
2. On Set with HDR < Next in series
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work

The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

Cooke Anamorphic Lenses Bring Class and Character to a Clean Digital World Fri, 31 Mar 2017 16:54:30 +0000

The post Cooke Anamorphic Lenses Bring Class and Character to a Clean Digital World appeared first on ProVideo Coalition.

I remember the first time I worked in anamorphic. I’d landed a job as “A” camera operator on a feature film’s additional photography unit. Looking through the Panaflex viewfinder, I saw a very wide but not very tall frame. I couldn’t judge much about the image at all as it was so small. That made me nervous, as it was a toss up in the film days as to who got fired first for out-of-focus dailies: the assistant, for not nailing focus, or the operator, for not seeing the image was soft.

At lunch I placed a phone call to a union business agent whom I knew to be a former operator. “How,” I asked him, “do you judge anamorphic focus through a Panaflex viewfinder?”

“Switch off the de-anamorphoser in the viewfinder,” he told me. “Look only at the unsqueezed image.”

I did, and saw a bright, clear, square image that allowed me to judge focus perfectly. I had to retrain my brain to compose a wide frame without seeing its width, but I survived a month of shooting where, for budget reasons, dailies were only viewed once a week.

That’s only one of the ways anamorphic forces us to think differently about how we shoot images, especially in the digital world. The lenses are rarely optically perfect, and show imperfections that we don’t see in many other kinds of lenses: horizontal lens flares, oval bokeh, inconsistent sharpness across the field of view, pincushion and barrel distortion… all the things that manufacturers try to eliminate from spherical lenses are what give anamorphic lenses their appeal.

I’ve long been convinced that audiences want to see interpretations of reality, rather than reality itself, so a clean, pristine, perfectly realistic image is often unsatisfying. The very act of telling a story is bending reality to fit a narrative, so why should cinematography be any different?

Instagram seems to be living proof of this. The point is to take a digital picture and then mess it up. The results are often abstract, unrealistic, and beautiful. The flaws help to tell the story.

Digital capture is very clean compared to film, and—in retrospect—one of the things that made film so attractive was its abstraction. Its grain gave it texture. Different stocks rendered color and contrast in dramatically different ways, so much of the look depended on whose negative stock one ran through the camera. It was malleable, but not as much as digital, and creating a “look” often meant learning chemistry rather than invoking power windows in a DI suite. The imperfections were what made it special.

Lens choice has always made a difference, but never more so than now. Without film’s texture and chemical “funkiness,” we’ve lost a layer of abstraction that—if Instagram is to be believed—audiences appreciate and expect. There are ways to reintroduce that feel in capture and post, through the introduction of noise or the manipulation of contrast and color, but often creativity comes from the happy accident or the unexpected.

The way that a glass lens warps reality and presents it to a sensor, where the look is captured at that sensor’s full bit depth instead of being applied later to 10-bit compressed digital footage, is one of the few ways we have left to “bake” a look into an image. It’s also one of the best ways to introduce happy accidents to the filmmaking process. There’s a reason so many DPs are opting to shoot new projects with old glass or with anamorphic lenses: they distort the world in ways that we find pleasing, that suit the narrative process, and that we’d never think of while sitting in a dark DI suite at a desk lined with assorted snack foods.

In the same way that font choice influences our perception of the written word, or choice of brush affects the texture of a painting, the choice of glass through which we tell stories is itself telling part of the story.

Recently I decided to ply Carey Duffey, Cooke’s European Director of Sales, with questions about Cooke Optics’ approach to developing anamorphic lenses. Mostly I wanted an excuse to learn more about Cooke’s new SF anamorphic line, which will be on display at NAB, but I didn’t tell him that.

Art: Thanks for talking with me, Carey. Can you tell me about Cooke’s design goals in creating Cooke anamorphic lenses?

Carey: Having only started at Cooke in January 2016, I was not involved in any of the original ideas, concepts or discussions about the aims and objectives for the Cooke Anamorphic lenses. That is all down to our chairman, Les Zellan, and our senior optical lens designer, Ian Neil. However, once I accepted my position as European Sales Director, it was more than apparent to me that it was extremely important to educate myself as to what Cooke Anamorphics delivered.

So to answer the question directly and simply, the idea was to design a set of lenses based on historical anamorphic lens principles: rear spherical elements with anamorphic front cylinders.

Art: I seem to remember that this is key to the anamorphic look. When I worked on features in Hollywood I saw anamorphic zooms that were converted by putting an anamorphic cylinder on the back of the lens, but while that cobbled a wide screen look out of a spherical lens, it didn’t have that beautiful oval bokeh or the horizontal streak flares that we love so much.

Carey: Cooke sought to produce a range of anamorphic lenses that took on the characteristics of what we consider “good things” about the look of anamorphic lenses: oval bokeh, reduced depth of field, field curvature, and pin-cushioning or saddle effect, depending on the focal length. We worked aggressively to avoid bad things such as the “anamorphic mumps”, where the closer a face is to the lens the wider it becomes. Also, our original anamorphic lenses steered away from exaggerated horizontal flares. This was a conscious decision so that the lenses better matched the flare characteristics of the extremely clean S4/i and 5/i lenses.

Art: If I recall correctly, “anamorphic mumps” was a big issue when anamorphic lenses were first introduced in the 1950s, and on into the 1960s. Some stars dreaded working in anamorphic as they felt their closeups made them look fat.

How would you describe the appeal of anamorphic imagery?

Carey: Most non-technical people generally agree that anamorphic images encapsulate the “motion picture” look. They say that 2.4:1 anamorphic looks “just like the movies!” Directors of photography simply tend to say anamorphic lenses have personality.

Also, many DPs feel that the digital look is too clean and lacks the texture and character of film.

Art: There’s been a resurgence of interest in old lenses as a way to give digital images more character. I know that TLS rehouses old Cooke Speed Panchros for modern use, and they are very popular precisely because of their lack of perfection (and, of course, because they still possess the famous “Cooke look”). I’ve long felt that audiences don’t want to see reality, but rather an interesting interpretation of reality. Sometimes digital feels a little too real for abstract storytelling.

So how did Cooke preserve and reinvent its distinctive “look” in a set of anamorphic lenses?

Carey: Wide-angle anamorphic lenses inherently display distortion, so we intentionally retained elements of that distortion so as not to make a flat image. You will find that the 25mm, 32mm and 40mm do have some distortion top to bottom and side to side. However this has been controlled as to not make the viewer feel uncomfortable when the camera pans or tilts. Too much distortion can make the viewer feel as if they are rolling or swimming in the image. This also results in wavelike distortion when the camera is locked off and a person or object moves across the frame.

Some distortion must be present to add “personality” but should not render the viewing experience unpleasant or uncomfortable.

Also, I think that as we view the world in our natural state of looking around, we perceive multiple focal planes in the real world due to our quick focusing reflex. The look of front anamorphic lenses adds a “roundness” to the focal planes, which results in an additional sense of dimensionality.

This is why anamorphic lenses look great when photographing landscapes and exterior natural environments. Architectural photography might require special framing to avoid these characteristics, or they can simply be embraced as part of the anamorphic look. We leave that choice to the DP.

After the wider focal lengths we move onto the 50mm lens, which has sharper sides and softer corners than the wider-angle lenses.  The designers felt this complemented this focal length.

Of particular interest is our newer 65mm macro. Anamorphic lenses have not historically been great at focusing close, but this lens is a 4:1 macro, which also reaches effortlessly to infinity.

Art: I remember older anamorphic lenses focusing no closer than 4′, and the only way to get around this was to use diopters. Those change the focus markings on the lens, so during prep camera assistants would measure out every focus mark on every lens and create new follow focus disks. If the DP called for a close-up on a wide lens, the assistant would add a diopter and then change out the follow focus disk so they still had reliable focus marks.

The first time I looked at a Cooke Anamorphic lens I was startled to see that the wider ones focus closer than 3′, and they look sharp wide open. My understanding is that anamorphic lenses can be thought of as two lenses combined—a primary focal length in the vertical axis and a second focal length that’s 50% wider in the horizontal axis—and their depths of field must overlap in order for an image to look sharp. Many early anamorphic lenses didn’t focus unless stopped down to f/4, but your lenses look tack sharp at T2.3!

I heard one story, about an anamorphic feature film that ran through two or three assistants for focus issues, until they hired an assistant with anamorphic experience. He pointed out that the DP was shooting with the lenses wide open and they simply wouldn’t focus.

I don’t see this being a problem with your lenses.
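As an aside, the framing arithmetic behind that “two lenses in one” model is easy to sketch. A minimal example, assuming a 2x squeeze (the sensor dimensions below are illustrative, not from any specific camera discussed here):

```python
def desqueezed_aspect(sensor_w, sensor_h, squeeze=2.0):
    """A 2x anamorphic squeezes twice the horizontal field of view
    onto the sensor, so the unsqueezed image's aspect ratio is the
    sensor aspect multiplied by the squeeze factor."""
    return (sensor_w / sensor_h) * squeeze

# 35mm 4-perf anamorphic gate (~21.95mm x 18.6mm): ~2.36:1,
# trimmed to the 2.39:1 projection standard
print(round(desqueezed_aspect(21.95, 18.6), 2))
# A 4:3 digital sensor with a 2x squeeze yields 2.67:1
print(round(desqueezed_aspect(4, 3), 2))
```

This is why true 2x anamorphic wants a tall (4:3-ish) imaging area: a 16:9 sensor desqueezed by 2x yields an aspect far wider than 2.4:1 unless the sides are cropped.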

Carey: The longer focal lengths—75mm & 100mm—see the field curvature change slightly into what I can only describe as a delicate saddle effect, as opposed to the pincushion of the wide angles.


Art: I’m not familiar with “saddle effect.” How would you describe it?

Carey: By “saddle” we mean that the centre and corners are sharp and the edges are softer, without distorting the center of the frame. Remember that when we discuss fall off on anamorphic lenses—as well as optical artifacts such as pin cushioning, saddle effect and barrel distortion—the sweet spot of an anamorphic lens is based on a centrally located horizontal oval, not a circle. The setting of this fall off from the oval sweet spot, and how large it is, determines the final look of the lens.

Our anamorphic lenses are based around an elliptical shape that covers 80% to 90% of the vertical and horizontal frame. Saddle effect doesn’t cause issues such as “anamorphic mumps”, but instead subtly enhances the viewing experience. It’s part of the character that DPs expect out of an anamorphic lens.

The pincushion and saddle focal plane effects are what determine key aspects of an anamorphic lens’s focal length characteristic. The saddle effect on longer focal length lenses draws the image out toward the viewer, and pin cushioning on wide-angle lenses has the effect of drawing the viewer in.

We use these effects sparingly and carefully. Using too much can make the image overbearing and difficult to watch!

Finally, the longer focal lengths—135mm, 180mm and 300mm (yes, 300mm!)—have a flatter image due to their length, but have minimal color fringing by comparison to other anamorphic lenses of these focal lengths.

Art: What’s the greatest issue you must overcome when selling anamorphic lenses in the digital era?

Carey: Some of my customers tell me, “It must be easy to sell Cooke anamorphic lenses in the digital era because the digital format on its own is so boring and sterile. Adding a lens with personality can only help to achieve more interesting images!” That said, the biggest problem that we face in anamorphic cinematography today is that some people think that letter-boxing the image is all they have to do to create an anamorphic look. They have all heard of anamorphic, but they don’t really know what it means to shoot true anamorphic. Lenses which have rear anamorphic cylinders, as you have mentioned, go a long way toward confusing the issue because they don’t produce a historically “correct” anamorphic image.

There needs to be a greater distinction between shooting 2.4:1 on spherical lenses and doing the same with front anamorphic lenses or rear anamorphic lenses. I think this will help everyone in the end, as DPs can be more specific with directors and producers about the look they are trying to create.

Art: I understand that you have introduced some new anamorphic lenses that have even more “character” than your existing lenses. What can you tell me about those?

Carey: Yes, we’ll be showing off our new SF (“special flare”) anamorphic lenses at NAB. One of the most asked for “personality traits” of anamorphic is the famous horizontal blue streak flare. This can be cheated in spherical formats by letter-boxing the image and adding a streak filter. Presto, job done. (Well, not really… you can’t create anamorphic “personality” so easily!)

Our SF anamorphic lenses are mechanically the same as our current anamorphic lenses, except that many of the optics have been recoated to produce this distinctive anamorphic flare. This is not a retrofit, a filter or an attachment: the actual glass elements of the lens have been specially treated. This creates an extremely organic flare that is enhanced by the unique ways light can bounce around inside an anamorphic lens. There is no hiding when the light hits the lens.

No matter which style of lens a DP chooses, they will always have the classic “Cooke look.”

The following projects were shot through Cooke Anamorphic /i lenses. Look for footage showing off the new SF line of lenses at NAB.

See Cooke /i and SF anamorphic lenses at NAB 2017 in Cooke’s booth, C5414. That’s adjacent to the DSC Labs booth, where Adam Wilt and I will be demoing color, resolution and dynamic range charts.

Art Adams
Director of Photography

Disclaimer: I have worked as a paid consultant to DSC Labs.

Three Variations on Lighting a Single Shot Commercial Sun, 19 Mar 2017 19:39:53 +0000

The post Three Variations on Lighting a Single Shot Commercial appeared first on ProVideo Coalition.

I love a challenge, as that’s when I get to do my best work. One of these spots was easy, one was moderately difficult, and one was hard. All turned out perfectly.

Before I go any further, take a look at the finished pieces:

Each of these three spots has a name. In order, they are: “Lightbulbs,” “Napkins” and “Typewriter.”

“Typewriter” was the simplest of the three. It’s just a straight pull back. One big wall, lots of texture, easy to light.

Texture makes a difference. Our brains love variety. I often use dappled lighting to break up flat, even surfaces. Textures built into the set allow me to light more simply and quickly.

Based on the boards, this spot looked easy to light. The camera move could be executed on a dolly.

The move in “Napkins,” though, meant pulling straight back a good distance above the set, and that prompted me to call my line producer, Christopher Knox, and ask for a Technocrane.

“Hmmm,” he said. “Can you do this with a jib instead? We’re not really budgeted for a Technocrane.”

“Maybe,” I said. “The jib operator is going to have to jockey the base around a bit, as the arm pivots in space as it swings up. If they say they can boom up while pushing the crane across the floor to maintain the composition, I’m happy to use a jib.”

“Okay, let me call the crane guy,” he said. About an hour later he called me back. “We’re using a Technocrane.”

As a compromise, I offered to shoot the project on a classic Arri Alexa. That camera is about six years old now but looks as phenomenal as an Amira or Mini. The disadvantage is it won’t shoot resolutions higher than HD. The good news is that it’s a lot cheaper than an Amira or a Mini.

Shuffling money around to make the project work is a key part of modern commercial production, and while I certainly love to work with the latest and greatest tools, I also know exactly what kind of camera gear I can get away with for any given job. Shooting green screen with a Canon C300 is a disastrous mistake. Putting a heavy zoom on a Sony FS7 will result in soft-focus post-production tears. A RED camera captures critical color with one particular OLPF, and one only. Things were easier in the film days. Now… it helps to be a serious geek when figuring all this out.

Fortunately, I am a world-class geek.

Often I’m able to give production choices: “I know you want to use camera “X,” but I can give you a comparable look using camera “Y” so you can put the savings into something else that we need to make the job a success…” Like a Technocrane.

For my part, I told Knox that I would use the Technocrane for every setup, as it’s much faster to work with than a traditional dolly. We only needed it for one shot, but there’s a lengthy camera move in each of these spots, so it was going to reduce setup times across all three.

I could have shot this last spot using a dolly, but the end of the Technocrane is very narrow (the camera is wider than the arm) and the crane eliminated the need for dolly track. As we’d be shooting lots of reflective glass very near to the lens, moving the camera on a narrow black arm made more sense than on a large shiny dolly mounted on track. If we hadn’t needed the crane for “Napkins” I’d have made a dolly work, likely by draping it, myself and the dolly grip in black duvetyne. That’s not my favorite way to work, but it would have been a cost-effective choice.

“Lightbulbs” was clearly the hardest setup from a rigging perspective, as we had to hang dozens of lightbulbs in an attractive cloud formation. I outlined the following specs:

  • Each bulb must be easily and quickly adjustable, both in height and location.
  • Each bulb required its own dimmer.
  • The bulbs in the center of the rig had to be built on an adjustable channel system. I wanted the camera to just barely clear them as it pulled back, so the nearest bulbs swept dramatically past the lens.

This sounds simple. In reality it required a small truss setup with a hundred pounds or more of cabling, no floor stands, and a portable 30-channel dimmer system.


Originally, when I spoke to art director Bret Lama about how to arrange the sets on the smallish stage we had to work in (such are the realities of living in a secondary market), we spoke of putting them side-by-side. After doing some measuring, though, I suggested lining them up one behind the other.

The crane specs showed that its collapsed length was about 14′. That left 28′ in which to build the set and move the camera, which was *probably* enough, but I dislike working at the limits of what I have available. I knew I wanted to use a large light source to reveal the lightbulbs’ reflective glass surfaces, and it occurred to me that rotating the set 90° away from the stage’s white cove would allow me to turn that cove into a bounce source.

Further, as the “Lightbulbs” and “Typewriter” sets consisted of single walls, we could build them parallel to each other and dress both at the same time on the pre-light day. Upon finishing photography on the first set we’d take that wall away to reveal the next wall, move in a new desk and props, make a couple of lighting tweaks and be ready to shoot without having to shift the crane base.

This arrangement worked for a number of reasons. We could set up the Technocrane operating station at the beginning of the day and never move it again. The crane itself only had to move once, to get to the “Napkins” set. We used the same shooting space and overhead rigging for our two single-wall sets. And the first two sets could share some lighting while giving us room to pre-light the third set.


Knox and I had a discussion very early on about the “Lightbulbs” set. We’d already planned to rig most of the lighting, with full grip and electric crews, on the day before the shoot, but the art department had an additional day of set construction. He put some money aside so my gaffer and I could come in on that first day and work out the mechanics of the cloud rig without our minions standing idle. (We overachieved: by the end of the day we had all three sets figured out, and “Napkins” was 75% pre-lit!)

We settled on a speed rail truss system that we’d build on the floor, supported initially by floor stands. Once the rig was built, the grip crew would run speed rail down from the grid, grab the truss, and pull the floor stands away.

I wanted the lightbulbs to pass as close to the camera as possible, so I asked that the middle bulbs be rigged from two movable pipes. This gave us the ability to quickly set a channel width for the middle rows of lightbulbs.

We ran out of speed rail, so we had to improvise with wooden boards. To the left of the board there are two pieces of pipe that run parallel to each other. (You can see the ends sticking off the back end of the rig, and the clamps that hold them in place.) Those are the channel pipes. Once the camera was in place we were able to change the width of those pipes such that the camera passed as close to those bulbs as possible over its move.

I told my gaffer, Andy Olson, to bring every stinger he had. He laughed. The next day, during the pre-light, he told me, “I thought you were joking, but I brought them all anyway. We’re using almost all of them!”

The “channel” pipes are easily visible here as they form a “V” that extends across the length of the truss. All lights are hung using A-clips, for speedy adjustment.

Andy brought a 30-channel dimming system, and we rigged 32 lights in total. (Four lights shared two circuits, but we put those at the end of the move and assumed the audience would never notice them coming on simultaneously amongst all the other cloud lights.)

As for lighting the set, I’d initially wanted to use soft side light to create broad linear highlights in the bulb surfaces. One of the reasons I’d asked for the sets to be oriented at a right angle to one of the stage’s white cyc walls was so I could use it as a large bounce source. My key grip, Gordon McIver, went even further and turned it into a massive book light.

The light source to the right was three Arri M18s aimed into a cyc wall, with the bounce further diffused by a 12′x12′ frame of grid cloth. The reflection of this source on one lightbulb was pretty, but seeing it on 30 lightbulbs was amazing.

The big light source was the easy part. It took the better part of a day to rig and power our “truss o’ bulbs.” Once we had everything roughed in, I snapped some photos and texted them to Greg Rowan, our director, who then talked me in on the overall shape of the bulb cloud. We were 95% ready at call time the next day, with only some final bulb adjustments once the camera was in place. Every light was held in place by a spring clip on its cord, so changing the height of each bulb was easy. Adjusting the placement was a little more difficult as we could only slide the bulbs left, right, forward or back on the bar to which they were attached, but we made do by adding crossbars, and then crossbars between crossbars.

Unfortunately, on the shoot day, the big soft source lighting scheme had to be scrapped as it revealed seams in the set walls that the client didn’t want to see. The seams in non-textured flats can be hidden by taping them over and painting them, but this doesn’t work on textured flats. I’d hoped that the seams could be explained away as possible grout lines in a brick facade, but early that morning some concerns were expressed and I initiated a backup plan. Once the final decision was made to hide the seams, we switched off the big source and turned on some smaller ones.

A Source 4 Leko, hung from the end of the truss, lights up the desk blotter, creating a soft bounce source. A 4′x4 Kino Flo peeks over the back of the set as a backlight. We later added a splash of light from a tweenie on the back wall to bring out its texture and prevent it from fading into blackness.

The bulbs lit themselves, but I needed a quick and elegant way to light the actress. I find low bounced light sources to be especially interesting as they feel “ambient” to me, as if they naturally belong no matter the environment. Sunlight striking a floor creates much the same look, and indeed there are many situations where much of the light in an environment is light radiating upwards from flat surfaces. I had the electrical crew hang a Source 4 from the truss and aim it at the desk blotter, which was a large enough source to wrap nicely around the actress’s face.

Alexa’s wonderful dynamic range and highlight handling allowed me to do this without worrying that the blotter would blow out to an awful, clipped video white. It truly lets me light as I would when shooting film.

The original Stanford University Shopping Center Apple Store design incorporated huge, milky plexiglass ceilings lit from behind by fluorescents that made the ceiling a single large, flat and perfectly diffuse light source. The floor was made of a very bright white material that caught this light and reflected it upward, resulting in nearly equal amounts of light from both the floor and the ceiling. The store interior felt as if it had an internal glow, and everyone inside was lit as if they were a fashion model. Sadly, the floor scuffed easily and was eventually removed, and while the ceiling is much the same, the feel of the store is very different without the radiant white floor.

When in doubt, I’ll often light from below. I started doing this in my low budget corporate video years, when I frequently found myself lighting people at conference tables. The most interesting—and quickest—solution was to hang a light overhead and smack it into the table, where scattered pieces of note paper bounced the light back on faces. I was able to shoot in any direction, and the soft upward shadows felt both interesting and real.

We scheduled “Lightbulbs” first as Knox and I decided to get the hardest setup out of the way early in the day. Although Andy and I tried to create a preplanned bulb illumination sequence, fading up specific bulbs as they entered the frame, this proved too time consuming. In the end we gave Andy his own reference monitor near the dimmer board and let him feel his way through the shot.

We set an overall bulb brightness level on the dimmer board using the master fader, so he could focus on bringing the individual faders up at the proper times without having to hit the same brightness level 30 times.

The foreground was very warm, so I lit the background to be a little cool. Cinematography is about creating depth and contrast, and it’s well known in design circles that “warm colors advance and cool colors recede.” Making the background a little cool created a pleasant color contrast between foreground and background while enhancing the depth of what is a not-very-deep shot.

One thing I love about the Technocrane is that I get to operate wheels again. I miss gear heads. They’re so smooth, precise, and just plain fun.

It’s clear that I’m doing some operating at the beginning of this camera move, but once I’d finished tilting up I had to immediately start tilting down to keep the top of the frame level across the rest of the move. The fact that I’m spinning wheels throughout this shot is impossible to see in the final spot, but it would have been obvious if I hadn’t. I love that.

The same thing happened on “Typewriter.” The crane arm wasn’t perfectly dead on to the set, but I was easily able to keep the actor perfectly centered across the entire move. The camera doesn’t appear to be panning or tilting at all.


Once we had “Lightbulbs” in the digital can, we swapped out the desk, knocked down the set wall, and quickly lit for “Typewriter.” We’d already rigged two 4’x2 Kino Flos on the “Typewriter” set wall to act as large, downward washes. We’d left the large soft source for “Lightbulbs” built and ready to go, so we turned it back on for ambient fill. We also taped a piece of typing paper to the keys of the typewriter and aimed our Source 4 at it for a little upward-facing glow.

This small bounce wasn’t a big enough source to be flattering to the actor’s face on its own. We added a small Chimera, fitted with a directional grid and rigged to our overhead truss system.

In order to speed things up, we didn’t bother removing the light bulb rig until we’d finished this setup. I had the electrical crew pull all the lightbulbs up to the truss, and we waited to disassemble it until we moved to the final “Napkins” setup.

I would have preferred to light this with the table lamp only, but that would have required cutting the shade and using a bigger bulb. Art direction decisions of this type often happen on the day of the shoot, and I opted not to spend time tearing apart a lamp on a moment’s notice. Half of my job is getting the look right, and the other half is getting it done on time.

There’s not much else to say about this spot. It was very straightforward. The Technocrane worked perfectly and the lighting setup was fairly simple. While shooting this setup I had the electrical crew work ahead and turn on the lighting for the “Napkins” spot, which we’d already put in place.


Initially this setup was a bit of a quandary. I knew I had to light someone lying on the ground and then pull straight back for a good distance without seeing lights or casting camera shadows. I had a full set of Cooke S4 primes on set, but I’d decided to use only the 32mm when possible. The 32mm is wide enough to capture a good-sized set in a small space, but isn’t so wide that it distorts faces in unpleasant ways. As all three spots started in closeup and ended wide, the 32mm seemed like the best choice for each setup. (32mm and 35mm primes are great all-around lenses. If I’m handheld and shooting quickly, I’ll often put one of those on and never take it off.) In theory I could’ve rented only the 32mm lens, but that’s asking for trouble if anything changed at all on the day of the shoot.

The trick in preproduction was to figure out whether I could get a head-to-toe shot on a 32mm lens at our maximum camera height, which was 18′ to the stage’s grid. I also had to find out what my angle of view was at that height, so I could tell the art department how much floor space to dress with napkins. It’s times like these that PCam is invaluable. Thanks to its various calculators I determined that it was possible for me to get a head-to-toe shot on a 32mm lens at 18′, and also that I needed 12’x12′ of “napkin space” to fill the final frame.
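The similar-triangles arithmetic behind a field-of-view calculator like PCam is easy to sketch. This is a rough check, not PCam itself: the Alexa 16:9 sensor dimensions below are my assumed figures, and the function name is mine.

```python
def field_of_view_ft(focal_mm, sensor_mm, distance_ft):
    """Linear field of view at a subject distance (thin-lens similar triangles).

    The captured field scales with distance: field = distance * sensor / focal.
    """
    return distance_ft * sensor_mm / focal_mm

# Assumed ARRI Alexa 16:9 active sensor area in mm -- verify against the camera's specs.
SENSOR_W, SENSOR_H = 23.76, 13.37

wide = field_of_view_ft(32, SENSOR_W, 18)  # horizontal coverage at 18 ft
tall = field_of_view_ft(32, SENSOR_H, 18)  # vertical coverage at 18 ft
print(f"{wide:.1f} ft x {tall:.1f} ft")    # roughly 13.4 ft x 7.5 ft
```

With these assumed sensor numbers, the short dimension covers about 7.5′ at 18′, so a head-to-toe figure fits comfortably, and since the frame rotates during the move, dressing a square region rather than a 16:9 rectangle is the safe ask.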

During our “pre pre-light” day, Andy and I talked about ways to make this sea of napkins interesting while also lighting an actress’s closeup. I’d toyed with bouncing light off the cyc wall, but it felt like that might be too broad for what was essentially a person lying on a flat white surface. Even confining the bounce to the lower part of the wall, such that it skimmed the napkins, seemed like it might be too much of a flat wash, and it left the issue of getting light around the front of the actress’s face from a very high angle. I broached the subject of hanging a Chimera in the grid, but Andy talked me out of it as he hates hanging lights in case he has to move them quickly. The napkin floor would have kept us from bringing in a ladder or a scissor lift unless the light was hung well off center.

In the end I embraced my “inner hard light.” We put a 2K fresnel on a stand as high as we could get it, and the angle turned out to be perfect for a classic, 1950s Hollywood key light. One of the tricks with hard light is getting the fill light right, as hard light casts shadows that emphasize skin imperfections. Lowering the contrast of those shadows, by placing a soft light near the lens—and preferably under it, so it reaches into the actor’s eyes—fixes a lot of issues. We had a small portable LED light ready to mount under the lens in the event I needed it, but the actress had perfect skin. She was a dream to light.

I wanted to create a pool of light around the actress, but not in a perfect circle as if it were a spot light. As part of our “pre pre-light,” Andy and I experimented with distorting our key light in interesting and visually random ways. We started off spotting the light all the way in, to create a pool of light with a hot center that grew darker at the edges. Then we added a cucaloris (“cookie”) for some breakup. Normally you’d flood a light all the way out so the cookie cast hard shadows, but I wanted to see how the shadows softened at full spot.

And… it was too soft. The pattern was mush.

I knew from experience, though, that adding another pattern would cause the two patterns to interact, so I asked Andy for a second cookie. He didn’t have one (almost no one carries them anymore; they’re considered a bit dated), but he had a piece of foam core with a round hole cut in it. We set that inches in front of the spotted fresnel to see what happened.

It was magic. The result is a little hard to see in the final shot, but the interaction of the small aperture in the foam core, the spotted fresnel, and the cookie resulted in a wonderfully random pool of light. It looked amazing on the Sony A170 OLED monitor we used on set, although it loses a little something in a highly compressed 8-bit file. Still, it looks fairly nice: not too theatrical, not too perfect.

This is a trick I use often. Stacking patterns causes the holes in the front pattern to act as apertures through which the rear pattern is projected. The result is a wonderfully random combination of hard and soft shadows that interact in unpredictable ways. I wrote an article about this effect here.

At each corner of the napkin region we placed a 4’x8′ piece of foam core, and bounced a Kino Flo into each one. Ideally I’d have filled from behind the camera, but no matter how big the source I’d still end up shadowing the actress with the camera at the beginning of the shot. Placing large bounces around the perimeter of the shot gave me the same effect without putting any lights behind the camera.

We placed four Source 4 Lekos on the ground, two on either side of the napkin field, to skim across the napkin edges and prevent them from appearing too flat.

We may have added a light CTO gel to the 2K fresnel to make it feel a bit like warm sunshine.

All that was left was to finesse the camera move. The crane operator, Robert Barcelona, placed the camera in a direct-down position on the head, and then joined me in operating it. My job was simply to pan, while he retracted the arm over the course of the shot so that the camera started centered on the actress’s face and ended up centered on her body.

This pool of light doesn’t look like it took a lot of work to create, but natural light can be surprisingly hard to reproduce at times—because it’s never perfect.

It took a couple of rehearsals to choreograph our dance, but we figured it out fairly quickly. Robert had the hardest job, as precisely retracting the arm while watching a spinning image couldn’t have been easy.

Alexa presets tend to look a little green. Adding CC-4 to the color temperature (top right, “3200 -4”) tends to fix this issue. I use a center “dot” instead of crosshairs when working with Alexa, as I like to see the center of the frame without showing the director and clients a crosshair. (One of my assistants likes to make the frame lines and crosshair red, to emulate the old “Panaglow” illuminated markings that helped film operators frame shots at night. Red worked well in that case because it didn’t affect night vision, but I prefer white because—in every other context—red indicates an error.)


We shot this on an Arri Alexa Classic, and while the savings in rental price over an Amira or Mini didn’t buy the Technocrane, it certainly defrayed the cost somewhat.

For quite a long time I’ve been very picky about noise, and I will normally rate a camera at half its stated ISO, as most aren’t as noise-free as I’d like. I’m starting to ease up on that practice, but at the time I shot this I was still deep in my anti-noise phase, so I rated the camera at ISO 400. Reducing noise also helps with compression for the web, which is where a lot of my recent projects have landed.

I’d wanted to shoot the project on TLS-rehoused Cooke Speed Panchros, which are beautiful old lenses with all sorts of funky anomalies. Sadly, my assistant found that the 32mm was out of collimation at the prep and there was no one in the rental house on the prep day to fix it. I ended up with a set of Cooke S4s, and I really can’t complain as those are phenomenal lenses for shooting closeups, but the funkiness of the TLS Cooke Speed Panchros would have added an extra something to the project. The bokeh on “Lightbulbs” would have been wonderfully random, and the natural vignetting in those lenses would have added some character to the fairly flat fields of both “Typewriter” and “Napkins.”

“Lightbulbs” and “Typewriter” were shot at T2 as I wanted the backgrounds to go a bit soft, at least at the beginning of the camera move. “Napkins” was shot at T4 1/2 as reduced depth of field didn’t make any difference to the flat set, and I decided to give my camera assistant—the excellent John Gazdik—a break at the end of the day. He didn’t need it, but I remember my days as a camera assistant and I try to be kind to mine whenever possible.
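To put rough numbers on that stop choice, the classic depth-of-field formulas show how much the field deepens between T2 and T4 1/2. This is only a sketch: the subject distance and the 0.025 mm Super 35 circle of confusion are illustrative assumptions, and it treats the T-stop as an f-number, which is close enough for a ballpark.

```python
def dof_limits(focal_mm, t_stop, subject_m, coc_mm=0.025):
    """Near/far limits of acceptable focus via the standard hyperfocal formulas.

    coc_mm is the circle of confusion; 0.025 mm is a common Super 35 assumption.
    Uses the T-stop in place of the f-number, a small approximation.
    """
    f = focal_mm
    s = subject_m * 1000.0                    # work in millimeters
    hyper = f * f / (t_stop * coc_mm) + f     # hyperfocal distance
    near = hyper * s / (hyper + (s - f))
    far = hyper * s / (hyper - (s - f)) if hyper > (s - f) else float("inf")
    return near / 1000.0, far / 1000.0        # back to meters

# 32mm lens focused at an assumed 2 m:
print(dof_limits(32, 2.0, 2.0))   # T2:     roughly 1.8 m to 2.2 m in focus
print(dof_limits(32, 4.5, 2.0))   # T4 1/2: roughly 1.6 m to 2.5 m
```

At T4 1/2 the acceptable-focus zone nearly doubles, which is why stopping down on a flat overhead shot costs nothing visually while making the focus puller's job easier.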

I nearly always shoot with a 144° shutter (1/60th second exposure at 23.98fps) as I’m extremely paranoid about flicker. Fluorescents flicker, LEDs flicker, and recently I’ve even seen tungsten halogen bulbs flicker.

My theory is that energy-efficient halogen filaments are thinner, so they cool—and dim—faster when the AC current changes direction. I’ve had issues when shooting on-set tungsten practicals at 48fps/180°, and I’ve seen the same practicals flicker at 24fps/180° when dimmed. 144° puts me directly in the middle of the 60 Hz flicker-free window, and allows me to worry about more important things than whether an errant lightbulb might be misbehaving in the background… or, in this case, 30 dimmed lightbulbs in the foreground.
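The 1/60th figure falls straight out of the rotary-shutter formula: exposure time is the open fraction of the shutter times the frame interval. A quick sketch (the function name is mine):

```python
def exposure_seconds(shutter_angle_deg, fps):
    """Rotary shutter: the gate sees light for (angle/360) of each frame interval."""
    return (shutter_angle_deg / 360.0) / fps

t = exposure_seconds(144, 23.98)
print(t)  # about 0.01668 s, i.e. 1/59.95 s
```

On 60 Hz mains the light output pulses at 120 Hz, so a 1/59.95 s exposure integrates almost exactly two full pulses and every frame collects the same light. By contrast, 180° at 48fps gives 1/96 s (1.25 pulses) and 180° at 24fps gives 1/48 s (2.5 pulses), which is why deeply dimmed tungsten can shimmer at those settings.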

All spots were shot at 23.98fps in LogC ProRes 4444.

Me (left) on the “Lightbulbs” set with line producer Christopher Knox. Photo by Bret Lama. I love this picture.

Production Company: Teak SF
Director: Greg Rowan
Line Producer: Christopher Knox
Executive Producer: Greg Martinez

Director of Photography: Art Adams
Art Director: Bret Lama
Gaffer: Andy Olson
Key Grip: Gordon McIver
First Camera Assistant: John Gazdik
Crane Owner/Operator: Robert Barcelona

Art Adams
Director of Photography


The post Three Variations on Lighting a Single Shot Commercial appeared first on ProVideo Coalition.
