A Guide to Shooting HDR TV: Day 6, “How the Audience Will See Your Work”

This is the sixth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 5 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR 
6. How the Audience Will See Your Work < You are here


There are several competing standards for HDR distribution. Each has consequences for the cinematographer and their work.

DOLBY VISION™

The Dolby Vision standard utilizes the PQ curve (see appendix) for image encoding. It also specifies 12-bit encoding with a peak brightness of 10,000 nits as defined in the ST2084 specification, although Dolby currently recommends a target peak white value of 4,000 nits.

Dolby Vision content is currently mastered within the P3 color gamut, although it is capable of reproducing imagery in any gamut from Rec 709 to Rec 2020 depending on the capabilities of the display. Rec 2020 color is far beyond what modern displays can produce but leaves room for future growth.

Dolby’s key strength is that it provides dynamic metadata that instructs a proprietary decoder chip, built into a television, how to best adjust imagery to fit within the constraints of a consumer display on a frame-by-frame or shot-by-shot basis. If a program was mastered on a monitor that exceeds the specs of the television on which it is being viewed, either in dynamic range or color gamut, then the dynamic metadata that travels with the program tells the decoder chip how to expand or contract each shot’s color gamut and dynamic range to fit within that display’s abilities.

HDR10

The HDR10 standard, which is championed by a number of television manufacturers who don’t want to license technology from Dolby, also uses the PQ curve. It aims for 10-bit encoding, encompasses the same Rec 2020 color gamut, and has been adopted as the Ultra HD Blu-ray encoding standard. Most online streaming services offer both it and Dolby Vision as options. (HDR10 can be implemented in HDR TVs as a software upgrade, whereas Dolby Vision TVs require a built-in chip.)

Where Dolby Vision’s dynamic metadata aids in adjusting color gamut and peak brightness to match a television’s capabilities on a shot-by-shot basis, HDR10 carries only static metadata: a single set of values that applies to the program as a whole. There is some discussion about adopting a shot-by-shot metadata scheme similar to Dolby’s, but this has not been finalized.

More important, there is no specification for what happens if a program’s peak white exceeds the capabilities of a consumer TV. It is up to each manufacturer to develop a roll-off scheme—likely some sort of highlight-only gamma curve—to compress highlights in a pleasing way that will retain some of the artistic integrity of the image.

This is of some concern to the discerning cinematographer.

HLG

The third competing standard is HLG (Hybrid Log-Gamma). It is backwards compatible across a wide range of TVs because it employs a gamma curve that becomes progressively flatter as brightness increases, much like a log curve. A consumer TV will reproduce brightness levels as high up the curve as it can and roll off the rest.
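For reference, here is a minimal sketch of that curve, using the HLG transfer function and constants published in ITU-R BT.2100 (the sample values printed at the end are arbitrary; nothing here comes from this article):

    import math

    # ITU-R BT.2100 HLG OETF constants
    A = 0.17883277
    B = 1.0 - 4.0 * A                 # 0.28466892
    C = 0.5 - A * math.log(4.0 * A)   # 0.55991073

    def hlg_oetf(scene_linear):
        """Map normalized scene-linear light (0..1) to an HLG signal value (0..1)."""
        if scene_linear <= 1.0 / 12.0:
            return math.sqrt(3.0 * scene_linear)              # gamma-like toe
        return A * math.log(12.0 * scene_linear - B) + C      # log-like upper range

    # The square-root segment spends half the signal range on the darkest 1/12th of
    # scene light; everything brighter is squeezed into the flattening log portion,
    # which an SDR set displays directly and an HDR set stretches back into highlights.
    for e in (0.01, 1.0 / 12.0, 0.25, 0.5, 1.0):
        print(f"scene linear {e:0.3f} -> HLG signal {hlg_oetf(e):0.3f}")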

In theory, the same content is viewable on both an old SDR TV and a new 1,000 nit HDR TV, although with dramatically different results.

HLG’s ever-flattening brightness curve doesn’t allow for the same color volume as the other formats, so highlight saturation is reduced compared to Dolby Vision and HDR10.

Currently, HLG is only being considered for broadcast television, where legacy sets will be an issue for years to come.

THINGS TO REMEMBER

  • Dolby currently offers the best scheme for adapting imagery to HDR televisions.
  • Dolby and HDR10 are both currently available as video streaming options.
  • HDR10 is the de facto Blu-ray standard.
  • HLG is meant to be an over-the-air broadcast standard only.
  • You have no control over which of these technologies will preserve or distort your creative vision.

WRAPPING IT ALL UP

There are a lot of unanswered questions about HDR origination and broadcast, but there are certain on-set practices that should make the transition easier. Key among these is to use an HDR monitor to train your eye to recognize when technical issues will arise regarding highlights, shadows and camera movement. There aren’t a lot of cheap or bright on-set monitors available right now, but at least one—the Canon DP-V2410—seems to be bright enough, dark enough, light enough and affordable enough to be a good on-set reference monitor.

Beyond that, it’s important to remember that bit depth matters, especially when the camera’s entire dynamic range will be reproduced on a display with little or no tonal compression. When broadcast standards call for 10-bit and 12-bit deliverables, it pays to shoot at a higher bit depth to leave room for grading in post, while pushing your tonal scale farther from the Barten Ramp boundary. 10 bit RGB is the bare minimum and won’t leave you much room in post. 12-bit is better, and 16-bit is best.

Grading will be a new experience. One colorist told me that there weren’t any limits as to what he could do, given well-exposed material at a high bit depth. This is exciting news for any cinematographer who is included in the grading process. Sadly, at least in short form work, this is not always the case.

As display dynamic range, color depth and resolution increase, our margin for error on set decreases. At the same time, HDR gives us a license to push imagery to perceptual realms never before possible in either film or video. The dynamic range of a projected film print can’t compete with HDR. As with film, it will take some time to learn to evaluate a scene’s visual impact strictly by eye and light meter. Fortunately, such training can happen in real time thanks to the availability of set-friendly HDR monitors.

The End

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR 
6. How the Audience Will See Your Work < You are here


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

A Guide to Shooting HDR TV: Day 5, “The Technical Side of HDR”

This is the fifth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 4 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR < You are here
6. How the Audience Will See Your Work


As a result of the research that went into writing this article, I’ve developed some working rules for shooting HDR content. Not all of it has been field tested, but as many of us will likely be put in the position of shooting HDR content without having the resources available for advance testing, we could all do worse than to follow this advice.

SDR VS. HDR: THE NEW “PROTECT FOR 4:3”

A lit candle in SDR is a white blob on a stick, whereas HDR can show striking contrast between the brightly-glowing-yet-saturated wick and the top of the candle itself. Such points of light are not just accents but can be attention stealers as well. The emotional impact of a scene lit by many brightly colored bulbs placed within the frame could range anywhere from dazzling to emotionally overwhelming in HDR, depending on their saturation and brightness, whereas the same scene in SDR will appear less vibrant and could result in considerably less emotional impact.

The trick will be to learn how to work in both SDR and HDR at the same time. As HDR’s emotional impact is stronger than SDR’s, my preference is to monitor HDR but periodically check the image in SDR. Ideally we’d utilize two adjacent monitors—one set for HDR, and the other for SDR—but few production companies are likely to spend that kind of money, at least until monitor prices fall considerably. One monitor that displays both formats should be enough, especially as HDR monitors tend to be a bit large and cumbersome for the moment.

We saw this comparison earlier:

Compare the highlights on the C-stand post, and then look at this diagram:

(above) SDR compresses a wide range of brightness values into a narrow range. HDR captures the full range, which can then be reproduced with little to no contrast compression on an HDR monitor.

Specular highlights appear larger and less saturated in SDR than in HDR. They are also much less distracting in SDR. Highlight placement and evaluation will be critical. Most importantly, highlights should be preserved as they will retain saturation and contrast right up to the point of clipping.

Shadows respond in much the same way:

SDR sees only the broadest strokes. HDR sees the subtleties and the shape.

In the absence of a monitor, a spot meter and knowledge of The Zone System become a DP’s best friends. A monitor will be helpful, however, as it takes time to learn how to deploy highlights artistically and learn to work safely at the very edges of exposure. There are technical considerations as well: large areas of extreme brightness within the frame can cause monitors to reduce their own brightness on a shot-by-shot basis (see Part 3, “Monitor Considerations”) and the end result can be difficult to evaluate with a meter alone. It’s also difficult to meter for both HDR and SDR at the same time, and much easier to light/expose for one and visually verify that the other works as well.

It might be possible to create an on-set monitoring LUT that preserves your artistic intent in SDR.

My suspicion is that monitoring SDR on set will become positively painful, as cinematographers will inevitably be disappointed at seeing how their beautiful HDR images will play out on flat, desaturated SDR televisions. Nevertheless, this will likely be necessary for a few more years. HDR is coming quickly, but SDR is not going away at the same pace. “Protecting for SDR” will be the new “Protecting for 4:3.”

Canon monitors are able to display both SDR and HDR. Canon will shortly release the DP-V2420 1,000 nit monitor for Dolby-level mastering, but the 2410’s price and weight may make it the better option for on-set use. Even with a maximum brightness of 400 nits, HDR’s extended highlight range and color gamut are clearly apparent, and the monitor’s size is still very manageable. The user can toggle between SDR and HDR via a function button.

TEST, TEST, TEST

Every step in the imaging process has an impact on the HDR image. This is true of SDR as well, but the fact that SDR severely compresses 50% or more of a typical camera’s dynamic range hides a lot of flaws. There’s no such compression in HDR, so optical flaws and filtration affect the image in ways we haven’t had to contend with before.

Lens and sensor combinations should be tested extensively in advance. Some lenses work better with some sensors than others, and this has a demonstrable impact on image quality. Not all good lenses and good sensors pair optimally, and occasionally cheaper lenses will yield better results with a given sensor.

Lens flares and veiling glare can be very distracting in HDR, and will in some cases compromise the HDR experience. Testing the flare characteristics of lenses is good practice, especially when used in combination with any kind of filtration. Zoom lenses may prove less desirable than prime lenses in many circumstances.

Additionally, strong scene highlights—especially those that fall beyond the ability of even HDR to capture—can create offensive optical artifacts.

It is strongly suggested that the cinematographer monitor the image to ensure that highlights, lenses and filters serve their purpose without being artistically distracting or technically degrading.

CUT THE CAMERA’S ISO IN HALF

This is something I do habitually, as the native ISO of many cameras tends to be a bit too optimistic for my taste, and I’ve learned that Bill Bennett, ASC, considers this a must when shooting for HDR. Shadow contrast is so high that normal amounts of noise become both distinctly visible and enormously distracting.

Excess noise can significantly degrade the HDR experience. The best looking HDR retains some detail near black, and noise “movement” can cause loss of shadow detail in the darkest tones. Rating the camera slower, or using a viewing LUT that does the same, allows the colorist to push the noise floor down until it becomes black, while retaining detail and texture in tones just above black.
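As a rough illustration of why this works, the sketch below uses a toy noise model (the signal and noise figures are assumptions I picked for the example, not measured camera data): a deep-shadow patch gets one extra stop of exposure, then is pulled back down one stop in linear light, which halves the noise along with the signal.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy values: a deep-shadow patch at a linear level of 0.002,
    # sitting over read noise with a fixed standard deviation of 0.001.
    true_signal, read_noise, samples = 0.002, 0.001, 100_000

    # Exposed at the camera's native ISO the shadow signal-to-noise is only 2:1.
    native = true_signal + rng.normal(0.0, read_noise, samples)

    # Rating the camera at half its native ISO gives the same patch one stop
    # more light over the same read noise...
    one_stop_over = 2.0 * true_signal + rng.normal(0.0, read_noise, samples)

    # ...and pulling it back down one stop in linear light (in the grade or in a
    # viewing LUT) drags the noise floor down with it.
    pulled_down = one_stop_over * 0.5

    for name, patch in (("native ISO", native), ("half ISO, pulled down", pulled_down)):
        print(f"{name:>22}: mean {patch.mean():.5f}, noise sigma {patch.std():.5f}")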

SHOOT IN RGB 444 OR RAW

Most recording codecs fall into one of two categories: RGB or YCbCr. Neither of these is a color space. Rather, they are color models that tell a system how to reconstruct a hue based on three numbers.

In RGB those three numbers represent values for red, green and blue. Y’CbCr is different in that it is a luma-chroma color model that stores three values but in a different way:

Color and luma are treated separately. Y’ is luma, which stands (mostly) apart from chroma, while all colors are coded on one of two axes: yellow/blue (warm/cool), or red/green. This is how human vision works: blue and yellow are opposites, and red and green are opposites, and each pair makes a natural set of anchors between which we can define other colors. Nearly any hue can be accurately represented by mixing values from specific points that fall between blue/yellow and red/green.

The real reason this model is used, though, is chroma subsampling (4:4:4, 4:2:2, 4:2:0, etc.). Chroma subsampling exists because our visual system is more sensitive to variations in brightness than to variations in color. Subsampling stores a luma value for every pixel but discards chroma information in alternating pixels or alternating rows of pixels. Less chroma data equates to smaller file sizes.
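The sketch below shows the idea using the BT.709 luma coefficients and a simple 2x2 average for 4:2:0 chroma; the averaging is one common approach I’ve assumed for illustration, not a description of any particular codec.

    import numpy as np

    def rgb_to_ycbcr_709(rgb):
        """Convert gamma-encoded R'G'B' (0..1) to Y'CbCr using BT.709 coefficients.

        Because the conversion runs on gamma-corrected values, luma and chroma
        are only approximately separated ("non-constant luminance").
        """
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        cb = (b - y) / 1.8556
        cr = (r - y) / 1.5748
        return y, cb, cr

    def subsample_420(cb, cr):
        """4:2:0 subsampling: keep one chroma sample per 2x2 block of pixels."""
        h, w = cb.shape
        cb2 = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        cr2 = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        return cb2, cr2

    # A small test patch: luma keeps full resolution, chroma drops to a quarter.
    rgb = np.random.default_rng(1).random((4, 4, 3))
    y, cb, cr = rgb_to_ycbcr_709(rgb)
    cb2, cr2 = subsample_420(cb, cr)
    print("luma samples:", y.size, "| chroma samples per channel:", cb2.size)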

The Y’CbCr encoding model is popular because it conceals subsampling artifacts vastly better than does RGB encoding. Sadly, while Y’CbCr works well in Rec 709, it doesn’t work very well for HDR. Because the Y’CbCr values are created from RGB values that have been gamma corrected, the luma and chroma values are not perfectly separate: subsampling causes minor shifts in both. This isn’t noticeable in Rec 709’s smaller color gamut, but it matters quite a lot in a large color gamut. Every process for scaling a wide color gamut image to fit into a smaller color gamut utilizes desaturation, and it’s not possible to desaturate Y’CbCr footage to that extent without seeing unwanted hue shifts.

My recommendation: always use RGB 4:4:4 codecs or capture raw when shooting for HDR, and avoid Y’CbCr 4:2:2 codecs. If a codec doesn’t specify that it is “4:4:4” then it uses Y’CbCr encoding, and should be avoided.

BIT DEPTH

The 10-bit PQ curve in SMPTE ST.2084 has been shown to work well for broadcasting, at least when targeting 1,000 nit displays. Dolby® currently masters content on 4,000 nit displays and delivers 12-bit color in Dolby Vision™.

When we capture 14 stops of dynamic range and display them on a monitor designed to display only six (SDR), we have a lot of room to push mid-tones around in the grade. When we capture 14 stops for a monitor that displays 14 stops (HDR), we have a lot less room for manipulation, especially when trying to grade around large evenly-lit areas that take up a lot of the frame, such as blue sky.

The brighter the target monitor, the more bits we need to capture to be safe. It’s important to know how your program will be mastered. If it will be mastered for 1,000 nit delivery, 12 bits is ideal but 10 might suffice. Material shot for 4,000 nit HDR mastering should probably be captured at 12 bits or higher. (It’s safe to assume that footage mastered at 1,000 nits now will likely be remastered for brighter televisions later.)

When in doubt, aim high. The colorist who eventually regrades your material on a 10,000 nit display will thank you.

I’ve spoken to colorists who have been able to make suitable HDR images out of material shot at lower bit depths, but they emphasize that there’s very little leeway for grading. More bits mean more creativity in post.
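One way to put numbers on this is to count how many integer code values a single stop of highlight range receives at each bit depth, using the published SMPTE ST.2084 (PQ) encoding constants; the 1,000 to 2,000 nit range below is an arbitrary example I chose, not a figure from this article.

    import numpy as np

    # SMPTE ST.2084 (PQ) encoding constants
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_encode(nits):
        """Map absolute luminance in nits (0..10,000) to a normalized PQ signal (0..1)."""
        y = np.asarray(nits, dtype=float) / 10000.0
        return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

    def codes_per_stop(low_nits, bits):
        """Integer code values separating low_nits from a level one stop brighter."""
        return (pq_encode(2.0 * low_nits) - pq_encode(low_nits)) * (2**bits - 1)

    # The same highlight stop gets roughly four times as many code values at
    # 12-bit as at 10-bit, and sixteen times as many again at 16-bit.
    for bits in (10, 12, 16):
        print(f"{bits}-bit: ~{codes_per_stop(1000.0, bits):.0f} codes between 1,000 and 2,000 nits")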

Dolby® colorist Shane Mario Ruggieri informs me that he always prefers 16-bit material. 10-bit footage can work, but he feels technically constrained when forced to work with it. ProRes 4444 and ProRes 4444 XQ both work well at 4,000 nits.

Linear-encoded raw and log-encoded raw both seem to work equally well.

One should NEVER record to the Rec 709 color gamut, or use any kind of Rec 709 gamma encoding. ALWAYS capture in raw or log, at the highest bit depth possible, using a color gamut that is no smaller than P3. When given the option of Rec 2020 capture over P3, take it.

In some cases, a camera’s native color gamut may exceed Rec 2020, so that becomes an option as well.

THE ULTIMATE TONAL CURVE: SMPTE ST.2084

There are three main HDR standards in the works right now: Dolby Vision™, HDR10 and HLG. They are similar but different.

As broadcasting bandwidth is always at a premium, HDR developers sought a method of encoding tonal values in a manner that took advantage of the fact that human vision will tolerate greater steps between tonal values in shadows than in highlights.

This is a simplified version of a graph called a Barten Ramp. It illustrates how the human eye’s response to contrast steps (how far apart tones are from each other) varies with brightness. We visually tolerate wide steps in lowlights, but require much finer steps in highlights. Banding appears when contrast steps aren’t fine enough for the brightness level of that portion of the image.

Gamma curves (also known as power functions) allocate more steps than necessary to the highlights, so more data is transmitted than is needed. Gamma also breaks down across broad dynamic ranges: while traditional gamma works well for 100 nit displays, it fails miserably for 1,000 nit displays. Banding and other artifacts become serious problems in HDR shadows and highlights.

Log curves fare better, but they waste too many bits on shadow areas, making them an inefficient solution.

For maximum efficiency, Dolby® Laboratories developed a tone curve that varies its step sizes. It employs broader steps in the shadow regions, where our eyes are least sensitive to contrast changes, and finer steps in highlight regions, where we’re very sensitive to banding and other image artifacts. This specialized curve is called PQ, for “Perceptual Quantization,” and was adopted as SMPTE standard ST.2084. It’s the basis for both Dolby Vision™ and HDR10.

This curve is only meant to reduce data throughput in distribution or broadcast, where severe limitations exist on the amount of data that can be pushed down a cable or compressed onto a BluRay disk. One should never need to record PQ on set.
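To make that varying step size concrete, here is a minimal sketch of the PQ decode using the published ST.2084 constants; the sample code values printed at the end are arbitrary, and the point is simply that the luminance jump between adjacent 10-bit codes shrinks, as a percentage, from shadows to highlights.

    import numpy as np

    # SMPTE ST.2084 (PQ) constants
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_decode(signal):
        """Map a normalized PQ signal (0..1) back to absolute luminance in nits."""
        e = np.asarray(signal, dtype=float) ** (1.0 / M2)
        return 10000.0 * (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)

    def relative_step(code, bits=10):
        """Luminance at a code value, and the percentage jump to the next code."""
        peak = 2**bits - 1
        lo, hi = pq_decode(code / peak), pq_decode((code + 1) / peak)
        return lo, 100.0 * (hi - lo) / lo

    # Steps are coarse (as a percentage of luminance) down in the shadows, where
    # the eye tolerates them, and progressively finer up in the highlights, where
    # it does not: the Barten Ramp idea expressed as a quantizer.
    for code in (100, 300, 500, 700, 900):
        nits, pct = relative_step(code)
        print(f"10-bit code {code:3d}: {nits:9.3f} nits, next code is +{pct:.2f}%")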

WORK WITH CAMERA NATIVE FOOTAGE FROM SHOOT THROUGH POST

Canon recommends working with Canon Raw or log footage up to the point where the final project is mastered to HDR. Canon HDR monitors have numerous built-in color gamut and gamma curve choices, and they are all meant to convert Canon Raw or log footage into HDR without entering the realm of PQ.

PQ is the final step in the broadcast chain. The only circumstance in which you might use it on set is when you’re working with a camera that is not natively supported by your HDR monitor. For example, if a Canon monitor does not natively support a camera’s output, it can still display an HDR image as long as the camera outputs a standard PQ (ST.2084) signal.

Alternatively, an intermediate dailies system will often translate log signals through a LUT for output via PQ to on-set or near-set displays.

Currently Canon HDR monitors work natively with both Canon and Arri® cameras. Future monitor software upgrades will add native support for image data from additional vendors, enabling Canon HDR monitors to work with a wide variety of non-Canon cameras.

NOTE: PQ is meant for broadcast and monitor/camera compatibility only. It should never be used as a master recording format.

CREATE YOUR LOOK IN THE MONITOR

Canon HDR monitors contain the core of ACES in their software. The image can be graded through the use of an attached Tangent Element TK Control Panel, or via ASC CDL controls located on the front of the monitor itself.

(above) Function key mappings for DP-V2410 in-monitor CDL control.

WAVEFORMS

In the SDR world, log images are not meant to be viewed directly. Exposure is much more easily evaluated by looking at a LUT-corrected image as tonal values are displayed correctly instead of arbitrarily.

In HDR, though, a log waveform may be the best universal method of ensuring that highlights don’t clip. Most log curves place a normally exposed matte white at around 60% on a luma waveform, so any peak that falls between 60% and white clip will end up in HDR territory. This “highlight stretch” makes for more critical evaluation of highlight peaks.

It’s important to know, however, that log curves don’t always clip at 109%. Log curves clip where their particular formula mathematically runs out of room, and that point varies with the curve and the camera ISO. Rather than look for clipping at maximum waveform values, it may be more prudent to look for the point where the waveform trace flattens, indicating clip and loss of detail. This can be found anywhere from 90% to 109%, depending on the camera, log curve, LUT and ISO setting.
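In software, one simple heuristic for finding that flat spot (an assumption of my own for illustration, not a method described in this article or offered by any vendor) is to look for a pile-up of identical values near the top of the frame’s histogram:

    import numpy as np

    def estimate_clip_level(frame, pile_up_fraction=0.005):
        """Guess where a normalized (0..1) log-encoded frame clips.

        Instead of assuming clip sits at 109%, walk down from the brightest
        recorded value and report the first value holding a disproportionate
        share of pixels; that pile-up is the flat spot in the waveform trace.
        """
        values, counts = np.unique(np.round(frame, 4), return_counts=True)
        threshold = pile_up_fraction * frame.size
        for value, count in zip(values[::-1], counts[::-1]):
            if count >= threshold:
                return float(value)
        return float(values[-1])   # no obvious pile-up: report the true peak

    # Hypothetical frame: random mid-tones plus a sky region stuck at one value.
    rng = np.random.default_rng(2)
    frame = rng.uniform(0.2, 0.90, (1080, 1920))
    frame[:200, :] = 0.93          # clipped region, well below 109%
    print(f"estimated clip level: {estimate_clip_level(frame):.2f} of full scale")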

A tiny bit of clipping may be okay, but this can only truly be evaluated by looking at a monitor on set. In general, any highlight clipping should be avoided.

A better option for examining highlights, and one found in both the Canon DP-V2410 and the upcoming DP-V2420 monitors, is a SMPTE ST.2084 (PQ) waveform that reads in nits. It’s divided into quarters: 0-10 nits, 10-100 nits, 100-1,000 nits and 1,000-10,000 nits. The log scale exaggerates the shadow and highlight ranges and shows exactly when detail is hitting the limits of either range.

DOLBY VISION® SOURCE FORMATS

The following is an excerpt from a December 1, 2015 paper released by Dolby Laboratories on the subject of original capture source formats.

For mastering first run movies in Dolby Vision, before other standard dynamic range grades are completed, wide-bit depth camera raw files or original scans are best.

Here is a list of Original (raw camera or film scan) Source Formats from best at top to not so good for Dolby Vision at the bottom (the first three in the list are of equal quality):

  • Digital Camera raw*
  • De-bayered digital camera images – 16bit log or OpenEXR (un-color-corrected)
  • Negative scans – scanned at 16bit log or ADX OpenEXR (un-color-corrected)
  • Negative scans – scanned at 10bit log (un-color-corrected)
  • IP scans – scanned at 16bit log or ADX OpenEXR (un-color-corrected)
  • IP scans – scanned at 10bit log (un-color-corrected)
  • Alexa-ProRes (12bit444) – under some circumstances we’ve gotten good results from this format
  • ProRes-444: This can provide a better looking image than you’d get in standard dynamic range but it can be limited. Results may vary.

*”Raw” means image pixels which come straight out of a digital motion picture camera before any de-Bayer operations. [Art’s note: Photo sites on a sensor are not “pixels.” A pixel is a point source of RGB information (a “picture element”) that is derived from processing raw data. “Raw” is best defined as a format that preserves individual photo site data, before that data is processed into pixels.] For the current digital cameras, this format has 13-15 stops of dynamic range – depending on the camera make and model.

You can consider raw format images the same as original camera negative scans, which also have lots of dynamic range. Either of those formats, if available, will give good Dolby Vision performance because they have wide dynamic range.

Anything less than the above will not give good Dolby Vision performance. HDCAM SR will not provide acceptable quality Dolby Vision. A good Dolby Vision test scene is one that has deep shadows and bright highlights with saturated colors. If a compromise on these characteristics must be made, use a scene that has at least two of those attributes. For Dolby Vision tests and re-mastering projects, content owners should provide a master color reference (DCP, Blu-ray master or broadcast master) for use as a color guide.

THINGS TO REMEMBER

  •  Your HDR imagery will also be seen in SDR. Be sure to check your images periodically to make sure they work for both formats. It might be helpful to craft a separate LUT for SDR to try to preserve your creative intent.
  • Test every part of your optical path to ensure that lenses, filters and sensors complement each other. Verify these combinations using a 4K HDR monitor.
  • Rate the camera at half its native ISO, or create a viewing LUT that will do the same. Noise is your enemy.
  • Always record in raw or log at the highest bit depth possible. Never use a WYSIWYG or Rec 709 gamma curve.
  • 16-bit capture is the best option (currently). 12 bits will work. 10 bits can work but will leave less leeway for significant post correction. 8-bit capture is never advisable.
  • Record to the largest color gamut possible, which is either the camera’s native color gamut (such as Canon’s Cinema Gamut) or Rec 2020. Never record to Rec 709.
  • Always record to an RGB 4:4:4 codec. Avoid Y’CbCr. Any codec that is subsampled (less than 4:4:4) is a Y’CbCr codec.
  • It’s a good idea to work with camera original data all the way through post. PQ transcoding should only happen as a final step when producing deliverables.
  • Support for a broad selection of multiple camera formats should appear shortly in on-set monitors. If a monitor is incompatible with a camera native signal (such as Canon Cinema Gamut/Canon Log 2) then a PQ feed from the camera should bridge the gap. (Not all cameras will output PQ.)
  • PQ should never be recorded on-set.
  • Waveforms are critical in avoiding clipped highlights. Some monitors, like the DP-V2410, will display a logarithmic waveform in nits, which greatly expands the highlight range for high-precision monitoring. If this isn’t available, viewing a log signal will dedicate nearly half of a standard waveform’s trace (above 60%) to highlight information.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR < You are here
6. How the Audience Will See Your Work < Next in series


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri

Cinematographers
Jimmy Matlosz
Bill Bennett, ASC


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

 

A Guide to Shooting HDR TV: Day 4, “Artistic Considerations”

This is the fourth installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 3 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations < You are here
5. The Technical Side of HDR
6. How the Audience Will See Your Work


HIGH CONTRAST IS HDR’S KEY STRENGTH

Specific code values in HDR data are mapped to specific nit/brightness values on the monitor. Choose a 10-bit code value at random, and that code value will cause an HDR monitor to emit a specific, repeatable amount of light. (This is true of both Dolby Vision and HDR10, but not technically true of HLG, which is an HDR format meant primarily for over-the-air broadcasting. It’s extremely unlikely that you’ll ever monitor in HLG on set. These formats are covered in Part 6.)

Without grading, an object exposed at five stops (reflected) above middle gray will generate a brightness level that corresponds to five stops above middle gray on a consumer HDR TV (if the consumer TV is capable of emitting 800+ nits). The dynamic range captured during photography will map very closely to the dynamic range displayed on the end viewer’s TV: if two objects differ in brightness by one stop on set, they will also differ in brightness by one stop on a consumer HDR television.

Also, whereas we’re used to seeing reduced color saturation in SDR as exposure increases, this is less likely to happen in HDR.

For this reason it’s important to understand your camera’s dynamic range and color space, from brightest highlight to darkest black and most saturated hue to least saturated hue. There is no hiding at the extremes of exposure. We’re no longer exposing for middle gray and letting the rest “roll off.” Every stop of dynamic range counts. Highlights should always be protected, and shadows lit to the point that some detail is visible.

SHOOT FOR DELIVERY, OR SHOOT FOR GRADING?

There are two theories about shooting for HDR:

  • Don’t shoot for delivery; capture the most data possible for post manipulation.

This method dictates capturing imagery with less dynamic range than might be optimal for delivery, so the colorist has more room to push bits around in post. Rather than play highlights or shadows on the edge, you’d light and expose a little flatter so there’s plenty to manipulate later. The overall “shape” of the look still has to be created on set, so “layers” within the image (foreground, mid-ground, background) should be separated in color, contrast or brightness such that the colorist can enhance their separation, rather than try to create separation from scratch.

  • Shoot for delivery; make the image appear exactly the way you want.

This is a little trickier. It helps to know what peak brightness level to shoot for (4,000 nits, 1,000 nits, 400 nits, etc.) in order to properly monitor the image. This can be difficult, as only a limited number of monitors on the market are set-ready, affordable and capable of displaying those higher nit levels.

The answer seems to be to shoot with a “fat digital negative.” (In film terms, this refers to overexposing film slightly, robbing from the highlights in order to make the shadow density deeper and darker, which results in reduced graininess.) In general, it is a good idea to rate the camera slower in order to crush noise, while recording to an RGB log or raw codec at the highest bit depth possible. (Recording codecs will be covered in Part 5, “The Technical Side of Shooting HDR.”)

Regardless of the technique chosen, one has to be aware that the colorist has more control than ever before while also being severely constrained. There is very little shoulder or toe to the exposure curve that can hide clipped highlights or conceal noise in shadows. HDR’s constant contrast means that there’s little roll-off, or compression, at either end of the exposure curve from which more information can be pulled, or uncompressed. A 14-stop image on a six stop SDR monitor allows for a lot of leeway, but a 14-stop image on a 14-stop monitor offers considerably less.

At the same time, HDR isn’t HDR without taking full advantage of its capabilities. Director of photography Jimmy Matlosz, who has photographed several HDR test projects for Dolby® Laboratories, told me he prefers to use the entire dynamic range of the camera when possible. “I try to make sure that every f/stop of dynamic range is represented.”

This won’t work for all HDR material—I doubt showrunners for the typical sitcom would appreciate this approach—but, in general, this technique should produce the most stunning images.

I anticipate the greatest challenge will be convincing our bosses that darkness is our friend. More often than not my clients speak of dark areas as if they are holes in the image, as to them black symbolizes a lack of information. “If it’s dark, something is missing. I want to see it!”

Rather than talking about black as a gap in the image, I try to talk about it as a color. Dark areas aren’t missing detail, they are accents—just like red, or yellow, or green. “I’m going to jazz this up a bit by adding some black.” It’s exactly like painting, where black is just another artistic choice on my palette.

HIGHLIGHTS AND SHADOWS TAKE ON NEW IMPORTANCE

Large highlight areas may have an adverse effect on viewers. At the very least they may cause the rest of the shot to appear considerably darker than a light meter might indicate due to the principle of simultaneous contrast (where bright objects make adjacent dark objects appear darker, and vice versa). An on-set monitor helps in this situation, as it concentrates our attention on the image in a manner similar to how the audience will see it. Simultaneous contrast works differently when standing on a set than when viewing a small, bright image surrounded by black in a dark room.

It’s important to think critically about what objects or surfaces in the frame will be brighter than two stops above middle gray (or brighter than diffuse white) as that’s the realm of highlights in HDR. They should almost never be clipped as their slightest detail—or lack thereof—will be evident to the consumer. (Tiny areas of clipping may be okay.)

There is so much range for correction in HDR that underexposing overall may be preferable to clipping highlights as long as shadow detail isn’t completely lost.

Solid HDR blacks can be disconcerting and, in many cases, are less pleasing than seeing some detail in the darkest shadows. Back in the days of film it wasn’t unusual to “light for black”: rather than let shadows fall off into darkness, a DP might set a small light to impart just a hint of illumination. This gave film emulsion “something to do” and resulted in richer and more interesting shadows. HDR is similar: often it is more pleasing to see some small amount of detail in the shadows than to see nothing at all.

Dolby’s Dolby Vision Source Format document (a portion of which is reproduced in the appendix) notes that an ASC cinematographer found that plus or minus four stops from middle gray is the “sweet spot.” A subject can walk from sunlight into shadow and no stop change is necessary if the dynamic range of the scene stays within that eight stop range. At plus or minus six stops one can see easily into the toe or shoulder but highlight and shadow detail will start to roll off, depending on the final grade.

I spoke with Bill Villarreal, Dolby’s Senior Director of Content Development, who specializes in remastering feature films for HDR. He tells me that most interior scenes don’t exhibit so much contrast that they automatically fall into HDR territory. Often the brightest thing in the frame is a wall sconce, practical lamp or window. His team spends a lot of time reducing the intensity of interior highlights, as they tend to be more distracting in HDR than when projected from film.

Daytime work is more dramatic: glints on chrome bumpers and tree leaves tend to “pop” and make the image feel sharper, as small points of bright light stand out well against dark backgrounds without overwhelming them. Light streaming through windows and backlit dust in air look amazing.

His biggest pet peeve is that they spend a lot of time removing movie lights and other equipment from windows that appear overexposed in SDR but are perfectly exposed in HDR.

He echoed a sentiment communicated by several other colorists: HDR shadows can be very disturbing if they are completely without detail. His feeling is that we don’t see deep, inky blacks in daily life, and they can feel wrong in a medium that reproduces (and mimics) the dynamic range of human vision.

I view this as an artistic choice: yes, most scenes work better if there’s some detail in the darkest shadows, as HDR reproduces those with amazing depth and clarity… but this is not a hard and fast rule, and can be broken for dramatic effect. An on-set monitor can be helpful in placing detail just at the very edge of blackness, or judging the psychological effect of featureless black.

Mr. Villarreal mentioned a grading session with a famous director who looked at an actor’s closeup and commented that he’d prefer to see softer lighting in HDR, as tonal transitions appear sharper due to higher contrast. In particular, he felt lighting on faces should be much softer. He commented that the specular highlight on an actress’s forehead looked fine in SDR but was distractingly bright and contrasty in HDR, and would have looked better had it been more diffuse and a bit dimmer.

At the same time, it’s not unusual for HDR colorists to “stretch,” or add contrast, to flesh tones—making highlights a little brighter and shadows a little darker—as this makes faces appear richer.

TAKE ADVANTAGE OF TEMPORAL STORYTELLING TECHNIQUES

 HDR is not simply about monitors that are brighter and darker than ever before. New technologies naturally create new storytelling opportunities for creative minds. Dolby® colorist Shane Mario Ruggieri shared his thoughts on what he calls “temporal HDR,” which is the use of HDR to create complex emotional responses in an audience over time through changes in contrast and brightness.

He posits four types of temporal HDR:

INTRA-FRAME. This is what we think of as “traditional” HDR, where a camera’s full dynamic range is accurately reproduced on a consumer television. Highlights are brighter than ever before, and shadows are deeper. Imagery takes on an almost three-dimensional feel.

INTER-FRAME. This type of HDR takes advantage of the human visual system’s ability to adapt to large changes in brightness over time. Imagine a scene in a western where the camera follows a character from a bright street, lit by noon-day sun, into a dark saloon illuminated only by indirect light filtering through dusty windows. Normally we’d hide a change in f/stop during that transition, as SDR televisions don’t have enough contrast to produce a usable image otherwise. HDR, however, does allow for this, and it’s possible to expose the image such that no stop pull is necessary and the audience adapts naturally to the brightness change. (Use of an on-set monitor might be wise to ensure that critical action takes place after the visual adaptation is complete.)

INTRA-SCENE. Dramatically different brightness levels in adjacent shots communicate emotion over time. The transitions can take place slowly, where some element of the scene (overall exposure, highlights alone, shadows alone, or overall contrast) changes incrementally across cuts, for a subtle effect, or quickly, through jarring smash cuts. Once again, the audience adapts naturally to the changes in brightness, and it’s up to the creative team to determine the forcefulness of the transition.

INTER-SCENE. Brightness levels change across scenes. Often color is used to communicate location cues to the audience, but now it’s possible to use extreme changes in brightness as well. For example, scenes set in a major metropolis might be underexposed to emotionally communicate the shady environment that exists between tall buildings, whereas a scene set in a scorching hot desert might be consistently overexposed by one, two or three stops—which is possible in HDR without losing highlight detail that might otherwise be crushed in Rec 709.

VARIABLES: Through all of these techniques, it’s important to recognize that our old friend and exposure aid, middle (18%) gray, will be of limited usefulness, as middle gray will shift with the audience’s adaptation to brightness changes. It may be possible for a cinematographer to “chase” middle gray with a light meter, but they’ll likely need to find their new middle gray value by sitting in front of an HDR monitor long enough that their vision adapts to the brighter or darker image in the same manner as the intended audience. The cinematographer can then visually identify a new middle gray value in the scene and adjust their light meter accordingly.

While HDR content can be exposed solely by light meter, an on-set monitor is the best way to evaluate whether such temporal changes produce the desired emotional effect, and to help ensure that critical action occurs only after the audience has time to adapt to large overall changes in brightness.

THINGS TO REMEMBER

  • HDR’s strength is high dynamic range and extended color saturation, but this strength reveals weaknesses in optics, filtration, and how those interact with a camera’s sensor design. Test in advance to check that your particular combination of these will work to your satisfaction.
  • Shooting for the grade—by capturing lower contrast images that don’t push the extremes of exposure—gives a colorist more control in post. Shooting for delivery may limit the colorist’s choices at the extremes of dynamic range (mostly in the deepest darkest shadows just above the noise floor). Middle gray plus/minus four stops is the “sweet spot” of exposure, although HDR can be pushed much further.
  • Clipped highlights should be avoided. Underexposure may be preferable to clipped highlights.
  • Completely black shadows can be disturbing as we never see them in nature. Shadows might need additional illumination to bring out textures near black.
  • Don’t hide mistakes behind overexposure or underexposure. This works for SDR and film, but does not work in HDR.
  • HDR’s higher contrast makes highlight/shadow transitions look harder than they do in SDR. Softer light may be preferable in HDR, particularly on faces.
  • HDR can be exposed solely by meter, but it helps to have a monitor nearby to see the image the way consumers will in order to better judge physiological and emotional responses to the picture.
  • HDR within the frame is the most basic implementation, but HDR can also be used to impart emotion through changes in contrast, brightness, darkness and saturation across frames, shots and scenes. It helps to have a monitor on set to assess how these transitions play against each other.

 

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations < You are here
5. The Technical Side of HDR < Next in series
6. How the Audience Will See Your Work


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Dolby Laboratories
Bill Villarreal
Shane Mario Ruggieri

Cinematographers
Jimmy Matlosz
Bill Bennett, ASC


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

A Guide to Shooting HDR TV, Day 3: “Monitor Considerations”

This is the third installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 2 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations < You are here
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work


There are several crucial considerations when choosing an on-set monitor:

  • How does the monitor process color information?
  • How does it handle out-of-range highlights?
  • Will the monitor be viewed in total darkness or under some form of ambient illumination?
  • What happens to the display when large areas of the screen approach maximum intensity?

MONITOR COLOR GAMUT AND PROCESSING

The camera used on the Mole-Richardson shoot fed an unprocessed 10-bit raw signal to the DP-V2410 monitor, which was de-mosaic’d within the monitor, passed through ACES® and mapped to HDR. The color gamut setting was Canon Cinema Gamut > Rec 2020, which scales the camera’s large native color gamut to fit within the Rec 2020 color gamut. That result is then scaled once again to fit within the display’s color gamut, which is approximately that of P3.

There are exact settings for P3 and Rec 709 as well, although Rec 709 is not considered suitable for HDR.

HDR monitors can only, at present, reach P3 saturation levels or slightly beyond, but data should be captured in Rec 2020—or a camera’s native color gamut, if it is larger—for future-proofing. As we saw in Part 2, consumer monitors are generally considered to be HDR-ready if they can reproduce 90% of P3’s gamut, and indeed most programs are currently mastered at P3 saturation levels—but this will likely not be the case for long.

The term “color gamut” brings to mind the traditional scalloped CIE 1931 color chart, but that doesn’t tell the entire story. That chart represents only a thin slice of the full color gamut, cut from the middle of its tonal scale. The full range of a color gamut is a 3D shape that bends and narrows at its extremes, as certain hues can’t be fully saturated at the limits of luminance. Dolby® Laboratories has coined the term “color volume” to better describe this shape.

(above) A comparison between the “flat” CIE representation of SDR’s color gamut, using a slice through the color volume at middle gray, versus two different 3D representations of the same gamut that show how colors saturate and desaturate across the luminance range. (©2016 Dolby® Laboratories.)

(above) This representation of SDR’s color volume, shown within HDR’s, illustrates Dolby Vision™’s increased overall saturation as well as improved saturation in highlights and shadows. (©2016 Dolby® Laboratories.)

“Color volume” is a wonderful description as it communicates the concept of a three dimensional color space in a more intuitive manner than does the traditional 2D CIE chart. It reminds us that changes in saturation occur across the full luminance range: highlights and lowlights may not be as saturated as mid-tone hues, and some hues may appear more saturated than others at different brightness levels. The traditional CIE chart shows none of this as it represents only a narrow slice of mid-tone hues.

I will, however, use the more commonly accepted term “color gamut” throughout the remainder of this article, as that’s the term used in the various HDR specification papers.

OUT-OF-RANGE HIGHLIGHTS

Canon’s HDR range feature scales out-of-range highlights to fit within the display limitations of the monitor, and can be found in all Canon HDR monitors. Setting HDR range to 4,000 scales the dynamic range of a 4,000 nit highlight to fit within the DP-V2410’s 400 nit container. Image contrast is distorted—mid-tones and blacks will darken—but highlight detail, contrast and saturation are easily evaluated.

As HDR range renders the image unusable except for examining highlights, it can be toggled via a function key for intermittent use. Simply program the maximum nit level desired (from 400 to 4,000) and push a button.

(above) What the camera captures may not always fall within the capability of a monitor to display.

 

(above) Code values that exceed what the monitor can display will be truncated, resulting in clipped highlights.

 

(above) Temporary use of the HDR range feature allows the user to remap the camera’s dynamic range to fit within the monitor’s dynamic range. Mid-tones and shadows will darken, but highlight contrast, detail and saturation can be visually assessed.

 At present, given HDR’s rigid encoding scheme, this is the only method available for viewing out-of-range highlights short of using a custom LUT. The good news is that this feature is built-in and ready to go, so users will never be left without a way to visually check highlights that fall out of the range of their monitor. (At present, unless one has a very large, very expensive and very power hungry 4,000 nit monitor on set, there is no other way to evaluate highlights.)
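For those curious what such a remap amounts to, the sketch below is my own crude stand-in (a plain linear gain in light-linear space; Canon has not published how the actual feature is implemented): it scales the whole image down so the selected peak lands at the display’s 400 nit ceiling, darkening mid-tones and shadows but making highlight texture visible instead of clipped.

    import numpy as np

    def hdr_range_preview(scene_nits, selected_range=4000.0, display_peak=400.0):
        """Crude stand-in for an 'HDR range' style highlight check.

        Assumption for illustration only: a single linear gain that maps the
        selected peak (4,000 nits here) onto the display's 400 nit ceiling,
        roughly 3.3 stops down. Real monitor processing may differ.
        """
        gain = display_peak / selected_range
        return np.minimum(np.asarray(scene_nits, dtype=float) * gain, display_peak)

    # A 3,000 nit specular highlight clips on a 400 nit panel when shown directly,
    # but survives the temporary remap at 300 nits, where its texture can be judged.
    highlights = np.array([200.0, 1000.0, 3000.0])
    print("direct display:", np.minimum(highlights, 400.0))
    print("HDR range view:", hdr_range_preview(highlights))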

AMBIENT LIGHT VS. OLED AND LCD

Tests show that OLEDs and LCDs thrive in very different ambient lighting conditions.

 

(above) 1,000 nit OLED display in total darkness. This is a DSC Labs XYLA 20 stop dynamic range chart.

(above) 1,000 nit LCD display.

In perfect darkness the OLED display shows more contrast in the shadows when compared to the LCD display. The steps drop off more quickly thanks to OLED’s deeper black levels. Backlight leakage slightly elevates the LCD’s black levels, so its screen can never be quite as dark as the OLED’s.

(above) 1,000 nit OLED in uncontrolled lighting conditions.

(above) 400 nit LCD (Canon DP-2410) in uncontrolled lighting conditions.

On the other hand, any ambient light striking the OLED monitor results in a visual loss of detail near black, as the shiny black surface quickly reflects its surroundings. Shiny surfaces can be made infinitely black in environments that lack ambient lighting, but few shooting environments are that dark.

The slightly lifted blacks of the LCD display accurately portray shadow detail in situations where ambient light cannot be controlled.

ABL, MAXFALL AND THE WISDOM OF CONTROLLING HIGHLIGHTS

It takes a lot of power to drive an HDR display, and with power comes heat. OLED displays dissipate heat horizontally, across pixels, so the display will automatically dim large highlight areas in order to protect the monitor. LCD monitors are subject to similar heat issues, but to a lesser degree. Both contain ABL circuits—for “Automatic Brightness Limiting”—that will dim the screen overall to prevent heat from destroying the monitor.

Here a 1,000 nit OLED monitor is compared to the 400 nit Canon DP-V2410 LCD monitor:

Left: OLED (1,000 nit). Right: LCD (Canon DP-V2410 400 nit display). Large areas of white cause the OLED to dim dramatically, while the LCD display, although dimmer initially, remains fairly consistent in brightness.

The white square in the first image is designed to clip on both displays. As the box, which requires maximum power to display, increases in size, both displays reduce brightness overall to control heat dissipation. This is a feature not only of professional displays, but of consumer displays as well. It is wise to take this into account while shooting.

There are two post-production terms that will weigh heavily on how your material is seen by consumers: MaxCLL and MaxFALL. They travel alongside the SMPTE ST.2086 mastering display metadata and apply to HDR10 encoding (covered in Part 6, “How the Audience Will See Your Work”).

Maximum Content Light Level (MaxCLL) is a metadata value that records the nit level of the brightest pixel in any frame of the program. If a consumer television can’t produce that level of brightness then it will attempt to compensate in some way.

Maximum Frame Average Light Level (MaxFALL) is a metadata value that records the highest frame-average brightness in the program (that is, the average brightness of every pixel in the program’s brightest frame). This is another piece of information used by a consumer television to modify a program whose brightest scenes may cause its ABL circuits to activate.
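For those who want to see how these numbers are derived, here is a minimal sketch of the standard calculation (a pixel’s light level is taken as the maximum of its R, G and B components, per the CTA-861.3 definition; the toy frames at the end are invented for illustration):

    import numpy as np

    def maxcll_maxfall(frames_nits):
        """Compute MaxCLL and MaxFALL static metadata from graded frames.

        frames_nits: iterable of HxWx3 arrays of per-channel light levels in nits.
        """
        max_cll = 0.0    # brightest single pixel anywhere in the program
        max_fall = 0.0   # highest frame-average light level in the program
        for frame in frames_nits:
            pixel_levels = frame.max(axis=-1)            # per-pixel max of R, G, B
            max_cll = max(max_cll, float(pixel_levels.max()))
            max_fall = max(max_fall, float(pixel_levels.mean()))
        return max_cll, max_fall

    # Toy program: a dim interior, then a frame where a window filling ~20% of the
    # screen sits at 900 nits; that second frame sets both MaxCLL and MaxFALL.
    rng = np.random.default_rng(3)
    dim = rng.uniform(0.1, 80.0, (1080, 1920, 3))
    bright = dim.copy()
    bright[:, :400, :] = 900.0
    cll, fall = maxcll_maxfall([dim, bright])
    print(f"MaxCLL {cll:.0f} nits, MaxFALL {fall:.0f} nits")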

As every HDR monitor to date suffers from heat dissipation issues, and very bright pixels generate a lot of heat, every monitor has a MaxFALL limit. When the image hits that limit it’ll start to clamp down so the display isn’t damaged by excess heat. That’s what we’re seeing above: the white square is coded to be the brightest white possible, and the monitor is happy to reproduce that intensity… to a point. Once the image’s MaxFALL value exceeds the television’s MaxFALL limit, the television will attempt to save itself by reducing overall image brightness. And every television and on-set monitor will have a different MaxFALL limit that is governed by its maximum brightness level and how well its manufacturer thinks it can dissipate heat.

My rule of thumb is that ABL circuits tend to activate when more than 10% of the screen reaches maximum intensity, but larger highlight areas of lesser intensity can also trigger ABL protection.

It’s important to keep this in mind when shooting HDR. Large, bright highlight areas may cause ABL dimming to occur; if those areas are important for artistic effect, it may be necessary to increase ambient light levels to compensate. Otherwise, exposing those areas to be less intense will retain detail in the rest of the scene.

It should rarely be necessary to drive large screen areas to maximum intensity. The brightest highlights should generally be the smallest, as they tend to stand out better against a wide variety of backgrounds.

Remember, any neutral tone that is two stops brighter than middle gray is going to appear white. Beyond that, you’re painting with intensity and creating an artistic statement that physically impacts viewers while battling the limitations of display technology.

Care should be taken to prevent large bright backgrounds, such as windows, from reaching the maximum intensity of the average consumer monitor. If they are a small part of your wide shot, plan ahead so they don’t become a large part of a closeup. Failing to plan for this may cause your colorist to make some compromises across both shots. (Some level of brightness mismatch is okay as we often see this happen optically: out-of-focus specular highlights drop by about one stop in brightness as they diffuse across a dark background, and of course we cheat closeup lighting all the time.)

Watching for MaxFALL excursions is only one of the many reasons it helps to have an HDR monitor on location. HDR is prone to motion judder at fast panning speeds due to its increased contrast, especially at frame rates as slow as 24fps (120fps has tested well for broadcasting 4K sports). Movement at slow frame rates also reduces resolution in finely detailed images due to motion blur: this can be seen in 4K sports programming shot at 24, 25 or 30fps, where richly textured grass dissolves into a blur as soon as the camera moves. HDR monitors are especially useful for critically evaluating and/or eliminating lens flares, which are brighter, more saturated, and potentially more distracting in HDR than in SDR. Veiling glare can also eliminate the HDR effect in dark shadows, and should generally be avoided.

NOT FOR 4K ONLY

Rec 2100, which defines modern HDR, does not specify resolution. It is entirely possible that we could see 1920×1080 HDR televisions come to market. 4K televisions have become so cheap, though, that it’s imperative that we monitor on set in 4K when shooting HDR. The difference in resolution, in conjunction with enhanced dynamic range and perceived constant contrast, does not allow for mistakes: everything in the frame will be visible in 4K.

The Canon DP-V2410 4K monitor is positioned well as a low-cost and physically manageable on-set monitor, as brighter monitors tend to have a larger footprint and require more power. (The next-generation monitor, the DP-V2420, qualifies as a Dolby Vision mastering monitor and complies with the ITU-R BT.2100-0 HDR standard, which specifies a peak luminance of 1,000 nits and a minimum luminance of 0.005 nits. It’s basically the post version of the V2410, although it could be used on location as well.)

As part of my research I asked whether it was possible to use a consumer HDR television as a cheap on-set reference monitor. I learned that this is not currently possible as consumer televisions receive HDR signals over HDMI, which carries Dolby® or HDR10 metadata—created in a color grade—that is crucial to shaping the HDR image to fit the television’s capabilities. Cameras don’t output this data, and without it consumer televisions will default to SDR.

There is a possibility that tools will be developed in the future to overcome this problem.

THINGS TO REMEMBER

  • The color gamut of a professional monitor will likely approximate P3. In spite of this, always shoot to the largest color gamut available for future proofing.
  • Color gamuts are three dimensional. Colors will saturate differently across varying luminance levels.
  • Out-of-range highlights can be assessed by toggling Canon’s HDR range function.
  • OLED monitors work well in perfect darkness. LCD monitors work well in low ambient light.
  • Large bright areas that push a monitor beyond its Maximum Frame Average Light Level (MaxFALL) will cause it to dim in order to avoid heat damage. This limit varies from monitor to monitor. It can be avoided by limiting maximum brightness to small highlights, or by raising fill light levels to compensate for possible monitor dimming. The 10% rule (ABL activates when 10% of the screen reaches maximum intensity) is a good general guideline, but not an absolute rule.
  • HDR is unforgiving. Always monitor in 4K in a color gamut not less than P3.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR
3. Monitor Considerations < You are here
4. Artistic Considerations < Next in series
5. The Technical Side of HDR
6. How the Audience Will See Your Work


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

A Guide to Shooting HDR TV: Day 2, “On Set with HDR”
https://www.provideocoalition.com/a-guide-to-shooting-hdr-day-2-on-set-with-hdr/
Wed, 19 Apr 2017

This is the second installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end. You can find part 1 here.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR < You are here
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work


I shot my tests at the Mole-Richardson stage in downtown Pacoima. My goal: illustrate the differences between working in SDR and HDR on set.

As there’s no way to view HDR in print or on the web using standard displays, I opted to compare the SDR and HDR monitors by photographing them side-by-side with a Canon 1DX Mk2, exposing for HDR’s highlights. All image pairs were captured simultaneously, and minimally corrected for keystone distortion in Adobe® Lightroom® 6.

One of the first things I photographed was a color chart, as I wanted to ensure the best color possible when I processed the Canon 1D’s raw images at a later date. This confirmed what I’d learned elsewhere: HDR is not simply a brighter image.

Left: DP-V2410 monitor set up for SDR. Right: DP-V2410 monitor set up for HDR.

When I viewed my DSC Labs OneShot™ color chart in both SDR and HDR, the white patch appeared nearly the same brightness on both monitors. That patch reflects about 90% of the light striking it and is known as “reference white.” It is a standard in the television industry, and it falls at 100 IRE on a waveform monitor.

Here, though, the white patch appears dimmer in HDR because we have the HDR Level feature set to 800 nits, to better evaluate the highlights on the C-stand. Scaling the dynamic range of the captured image to fit within that of the monitor causes mid-tones and shadows to darken, but it opens up the highlights for critical evaluation of detail and color.

The specular highlight on the C-stand arm and knuckle is flat in SDR, while the same highlight in HDR, viewed through HDR range, shows much more contrast and shape. HDR doesn’t “roll off” highlights; rather, it gives them room to breathe.

In SDR, the white and black chips on this chart are where highlight dynamic range ends. In HDR, they are where dynamic range begins.

Mid-tones are key in SDR as they retain the most contrast. Nearly every WYSIWYG gamma curve focuses on making the middle four stops of dynamic range as contrasty as possible, while compressing the highlights and shadows so that viewers still see some detail but without much contrast. They “flatten out,” although we almost never notice because the human visual system does as well.

For example, a variation of one stop in the middle of the SDR tonal range is the difference between middle gray and light flesh tone, but the difference of a stop between five and six stops above middle gray is the difference between barely different shades of white. Detail—which is primarily revealed through contrast—is retained, but just barely.
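
To put numbers on that highlight example (middle gray taken as 18% and reference white as 90%; both are conventional figures, not measurements):

    gray = 0.18
    for stops in (0, 1, 5, 6):
        print(stops, round(gray * 2 ** stops, 2))
    # 0 -> 0.18, 1 -> 0.36, 5 -> 5.76, 6 -> 11.52

One stop around middle gray doubles a value that sits comfortably within the displayable range, but five and six stops above gray are roughly 6x and 13x brighter than 90% reference white, so an SDR gamma curve has no choice but to park them almost on top of each other near the top of the signal.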

The same happens in SDR shadows, but to a lesser extent. The difference between five and six stops below middle gray is often noticeable in SDR but this is because modern SDR LED displays aren’t very dark, which results in natural lowlight compression.

(above) Tones at the exposure limits are compressed and lose contrast in SDR, while in HDR they retain much the same contrast that the camera saw at the time they were captured.

As the HDR display’s dynamic range and color gamut better match what the camera saw in the first place, it reproduces tones and hues that are truer to life than anything previously seen on a television display, and certainly with more contrast and vividness than can be seen in traditional digital cinema theaters.

I’ve coined a phrase for this: “constant contrast.” Contrast occurs in equal steps across the grayscale. There is no compression in shadows or highlights, as in SDR—or even film, where highlight roll-off is part of the look. Every stop of dynamic range is equally important in HDR. (This applies primarily to image capture. Highlights and shadows may be rolled-off in the color grade or when broadcast, but it’s good practice to assume this won’t happen.)

Update: Nick Shaw, of Antler Post, points out that true “constant contrast” would not reproduce the scene in a visually pleasing way due to two visual quirks:

The Hunt Effect: Colorfulness increases with luminance. At high luminance, an object will look much more colorful than the same object at low luminance.

The Stevens Effect: Contrast increases with luminance. At high luminance, whites appear whiter and blacks appear blacker.

Because of this, there is always some rendering intent applied to an HDR image, as a scene without any perceptual modification will not produce the desired result.

The core of ACES can be found at the heart of Canon HDR monitors, so there is always an output transform applied to the image to account for these characteristics, among others.

Rather than “constant contrast,” the better phrasing might be “perceived constant contrast.”

Consumer HDR specifications call for displays to have a dynamic range of at least 13 stops to be considered HDR capable, which is more than twice that of SDR. If two tones on set differ in brightness by one stop, they will likely appear to differ by one stop on a consumer display at a later date, no matter where they fall within the camera’s dynamic range. In SDR this is true of only the middle two to four stops of dynamic range.

(above) This wide shot, captured in Canon Raw/Canon Log 2 and exported from DaVinci Resolve® 12.5, was lit almost entirely by two 5Ks and a nine-light Maxi Brute bounced off the backdrop outside the window. Some of that light bounced off the ceiling and added some fill to the set, but the majority of the light came through the window.

The C300 MarkII camera captured nearly the entire dynamic range of the Mole-Richardson scene, from the Translight background at six stops above middle gray (reflected) to the foreground at F0.5 (incident). When shooting for SDR I’d be happy to see any detail outside of that window, regardless of how flat or desaturated it might be. I can also see detail in nearly every shadow. This image looks much like what I saw by eye.

(above) This frame was captured by DSLR from a Canon DP-V2410 monitor screen displaying an SDR image. The exposure is set to capture highlights, as the still camera’s dynamic range cannot capture the full contrast range of an HDR display.

The image above is typical of what happens when we shoot a high contrast scene with a wide dynamic range camera and view it on a limited contrast screen: we’re so thrilled that we can see ANY detail outside that window that we don’t mind that it’s low contrast and desaturated. We’re just happy that it’s not “clipped video white.”

(above) This next frame was captured simultaneously from an adjacent DP-V2410 monitor set up for 400-nit HDR. The increased contrast and color gamut reveal the golden hue of the lightbulb’s filament against the window, a detail that is completely lost in SDR, while also revealing the color of the sky and the complex textures of the building exterior.

HDR shadows are much darker than SDR: the top of the globe, lit by bounce light off the ceiling, is a darker tone in HDR, as is the side of the globe lit by the smaller lightbulb in the foreground. The DP-V2410’s blacks are significantly deeper than those of an SDR monitor, and they appear as significantly crushed blacks on a DSLR with inferior dynamic range.

Left: SDR. Right: HDR.

It’s impressive to see so much detail in a backdrop that was lit bright enough that it illuminated an interior set. Also, notice how much darker the shadows are in HDR. The eye can see this, although a still camera can’t.

(above) Frame grab exported from Resolve.
This is a flattened version showing all the image data that’s available for grading.


(above) This is a still photograph taken of the DP-V2410’s screen while in SDR mode. The still camera captures nearly the entire dynamic range of the image.

(above) The same image, photographed with the DP-V2410 set to HDR mode. Highlights are much brighter and more saturated than in SDR, and the increased shadow contrast exceeds the dynamic range of the still camera.

The differences between SDR and HDR are striking. The SDR bulb looks bland by comparison. The filament is bright but not saturated, there’s no detail in the curtains in the background, and there’s very little color saturation in the highlights. The Rec 709 color gamut is so small, and the dynamic range is so low, that any saturated hue that falls too far outside the mid-tone range (plus or minus two stops) will flatten out and desaturate.

The HDR bulb’s filaments, however, are golden yellow. Detail is visible in the curtains behind the background lightbulb, and the warmth of that bulb’s filament clearly separates it from the background. In the SDR image, there’s no difference in hue between the color of the curtains viewed directly and their color as seen through the bulb’s tinted glass, but the HDR image reveals a significant difference in both hue and brightness. HDR’s deeper shadows make the bulb appear more three dimensional due to the increased contrast between the light and dark reflections in its surface.

The DP-V2410 monitor, in its native mode, adds two stops of dynamic range to the typical SDR set. That, and the increased color gamut, allow highlights to retain their saturated hues to the point that the monitor screen feels more like a window than a display. HDR’s blacks are much sharper and richer to the point where they feel “chiseled,” as if they’ve been carved out of the display.

Throwing focus to the background is revealing:

(above) SDR

(above) HDR

The soft but warm foreground filament almost disappears in the SDR image, as reduced contrast and saturation cause it to blend with the reflection of the key light in the glass bulb. The HDR image, with its increased color gamut and highlight contrast, shows clear separation between the reflection in the bulb and the filament itself. This illustrates not only HDR’s broader color gamut (currently defined as reproducing at least 90% of the P3 color gamut), but also what happens when highlights are given room to “breathe.” Highlight roll-off preserves some detail when mapping a high dynamic range image to a low dynamic range display, but much more is lost than retained.

The increased highlight contrast in HDR is startlingly obvious even when photographing something as simple as a sheet of paper. Here we lit a product manual such that the vertical surface of the rolled-back page received several stops more light than that of the flat page:

(above) SDR

(above) HDR

SDR’s highlight compression is obvious, whereas the HDR image looks much the same as what I remember seeing by eye. Highlight detail and contrast are retained, and HDR’s increased shadow contrast makes the transition between highlight and shadow look harder and sharper than in SDR.

The typewriter (below) was backlit by a 4’x4’ frame of heavy diffusion with full CTO gel clipped across the bottom and full CTB clipped across the top.

(above) SDR

(above) HDR

The increased color saturation and contrast in the HDR highlights is stunning, and the depth of the shadows makes the keys feel three dimensional. The SDR keys feel flat and dull by comparison.

(above) HDR waveform via DSLR, exposed for highlights. The foreground lightbulb trace is buried in that of the background bulb.

Adjacent light sources don’t always stand apart on waveform monitors. It can be difficult to get a sense of how bright either of these sources is simply by looking at waveform peaks. The HDR monitor not only reveals the brightness and saturation differences between the two filaments (differences that largely vanish in SDR), but is also the only way to assess whether the brightness and saturation of an element within the frame is artistically appropriate.

In the image above, the HDR monitor’s waveform is looking at the input signal. Occasionally the dynamic range of the camera will exceed that of the monitor and highlights will appear visually to be clipped. In the event that it is not expedient to toggle the HDR range function to verify whether they are truly clipped, highlight integrity can be verified by looking at a log representation of the input signal.

The Canon DP-V2410 monitor can be switched into SDR mode at the touch of a function key. This is a very useful function given that SDR is not going to disappear any time soon. My parents watch TV on an old NTSC tube television that cuts the image off at title safe. No matter how stunning HDR is, they aren’t going to spend money on an HDR set—and yet they still want to see a decent image on their ancient CRT. We’ll have to keep them in mind for a bit longer.

THINGS TO REMEMBER

  • HDR displays are capable of reproducing at least twice the contrast and dynamic range of SDR.
  • It is not possible to “hide” flaws in highlights and shadows anymore as they will not be rolled-off, or compressed, nearly as much in HDR as they would in SDR.
  • “Constant contrast” means that the camera’s dynamic range is mapped to the HDR display in a 1:1 fashion. This may change in color grading and distribution, but this is likely how you will monitor the image on set and may represent a “best case” scenario for consumer viewing.
  • In general, current HDR monitors are capable of displaying P3 color. Consumer monitors must be capable of reproducing 90% of P3’s color gamut to be rated suitable for HDR.
  • Color gamuts are three dimensional shapes (“color volumes”). Saturation changes with luminance. HDR highlights and lowlights are much more saturated than SDR due to expanded color gamut, dynamic range and lack of roll-off.
  • Waveforms don’t tell the whole story. Display contrast is so high that an on-set monitor helps considerably in judging the physiological and emotional impact of both highlights and shadows.
  • Programming will be released in both HDR and SDR for the foreseeable future. The two are so visually different that evaluating both kinds of images on set may aid in quality control.

SERIES TABLE OF CONTENTS

1. What is HDR?
2. On Set with HDR < You are here
3. Monitor Considerations < Next in series
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe

Mole Richardson
Larry Parker


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

A Guide to Shooting HDR TV: Day 1, “What is HDR TV?”
https://www.provideocoalition.com/a-guide-to-shooting-hdr-day-1-what-is-hdr/
Tue, 18 Apr 2017

This is the first installment of a six-part HDR “survival guide.” Over the course of this series, I hope to impart enough wisdom to help you navigate your first HDR project successfully. Each day I’ll talk about a different aspect of HDR, leaving the highly technical stuff for the end.

Thanks much to Canon USA, who responded to my questions about shooting HDR by sponsoring this series.

SERIES TABLE OF CONTENTS

1. What is HDR? <You are here
2. On Set with HDR
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work


There was no shortage of HDR displays on the NAB 2016 show floor. At each booth I sought out someone of authority and asked, “What do I—as a cinematographer—need to know about shooting for HDR release?” I received only one response, from a colorist demonstrating HDR grading techniques:

“Leave it to me, I’ll fix it in post.”

This is not something one says to a cinematographer. We consider ourselves to be the “authors of the look,” and while we don’t mind colorists enhancing our artistic vision, we certainly don’t want them to replace it.

Only one manufacturer took my question seriously, and while they had no answer for me at the time they vowed to help me find one. Several months later I found myself standing on a soundstage near Hollywood with a Canon C300 MarkII camera, two Canon DP-V2410 monitors, and a small group of Canon technicians.

Note: While Canon commissioned this article series and helped facilitate this test, the opinions in this series of articles are solely my own, and are based on my honest opinions of what I’ve seen.

My goal: evaluate the differences between SDR (standard dynamic range) and HDR (high dynamic range) displays in real time, with an eye toward teaching cinematographers how best to light and expose for this new medium. That research, in combination with knowledge gleaned from several cinematographers and colorists who currently work in HDR, led to the article you are reading.

There are a lot of questions still to be answered about HDR grading and delivery, but by the end of this article you should know enough to shoot an HDR project and thrive, rather than simply survive.

WHAT HDR IS, AND WHAT IT IS NOT

Modern cameras capture a dynamic range of 14-16 stops. Modern televisions display six linear (uncompressed) stops. When the “Rec 709” HDTV standard (known from here on as SDR) came into being, six stops of linear dynamic range seemed enough. This captures the dynamic range of non-shiny surfaces in the real world: matte white is about two stops brighter than middle (18%) gray, while matte black is two to three stops darker.

The X-Rite® color checker illustrates how narrow a range this is, and how critical those few stops can be. The difference between the white and black patches is only about four stops, and yet that covers a range of tones that appears black, white, and every shade of gray.
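
For those who like to see the arithmetic, those stop relationships are just base-2 logarithms of reflectance ratios. The patch reflectances below are typical values, not measurements of any specific chart:

    from math import log2

    middle_gray = 0.18   # 18% gray
    matte_white = 0.90   # "reference white"
    matte_black = 0.03   # a typical matte black patch (assumed)

    print(round(log2(matte_white / middle_gray), 1))  # ~2.3 stops above gray
    print(round(log2(middle_gray / matte_black), 1))  # ~2.6 stops below gray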

SDR doesn’t account for specular highlights, deep shadows, or uncontrolled lighting conditions in general. Now, 20 years after SDR was defined, the worst professional camera captures twice the dynamic range that SDR can display. New monitor technologies (OLED and local dimming LED) make it possible to display a camera’s full dynamic range with minimal tonal compression and more saturated colors than ever before. Where SDR was content with six stops of dynamic range, a modern consumer television can’t be certified as HDR ready without being able to display at least 13.

HDR, however, is not simply brighter television. A good HDR display will reproduce nearly the entire dynamic range of any professional camera with the same contrast that existed when the image was captured. No longer can we hide cables in the shadows and light fixtures in the highlights, as both now display as much contrast, saturation and detail as mid-tones.

 

(above) This graphic best illustrates the differences between SDR and HDR. At the top, the full contrast range of a scene is reproduced by the human visual system. SDR, in the middle, reduces the original contrast of a scene in order to display it on an SDR television, with dramatically less dynamic range than can be perceived by the human eye. At the bottom, HDR preserves most of the dynamic range of the original scene, displaying it on a consumer television with contrast that closely matches what the eye would have seen had it been present during image capture. (Illustration provided by ©2016 Dolby® Laboratories.)

Display brightness is measured in candelas per square meter, or “nits.” A calibrated studio SDR monitor will emit a maximum of 100 nits. Current HDR displays are capable of emitting anywhere from 400 to 4,000 nits, and the Dolby Vision™ specification stipulates that HDR image data can be as bright as 10,000 nits. (Currently most programs are mastered with a peak luminance of 1,000 to 4,000 nits.)

Depth of shadows is as important to HDR as intensity of highlights. Where a normal SDR studio monitor may reproduce black between 1 and 2 nits, the Canon DP-V2410 HDR monitor is capable of reproducing black at 0.025 nits.

The Canon DP-V2410 400 nit HDR display.

SDR’s six stop mid-range, from matte black to matte white, will look the same on both an SDR and HDR set. The difference is that HDR goes way beyond, giving us shades of white and black that are brighter and darker than anything we can see in SDR. And, because we aren’t cramming 14 stops of captured dynamic range into a six stop bucket, every stop is displayed with roughly the same contrast. This results in an increase in highlight and shadow detail as well as color saturation.

Displays come in various flavors of HDR, and it’s helpful to have a reference as to how bright or dark they are in relation to each other. Doubling or halving a light’s intensity results in an exposure change of one f/stop, so it’s fairly simple to determine how many additional f/stops of highlight dynamic range a 1,000 nit HDR display can reproduce over a 100 nit SDR display:

(above) Nits vs. stops, as reproduced by a typical HDR display. Thanks to Nick Shaw of Antler Post for this data.
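
The same relationship can be sanity-checked in a couple of lines. The display figures below are the illustrative values quoted in this series, not measurements:

    from math import log2

    sdr_white = 100.0   # calibrated SDR peak, in nits
    for peak in (400, 1000, 4000, 10000):
        print(peak, round(log2(peak / sdr_white), 1), "stops above SDR white")
    # 400 -> 2.0, 1000 -> 3.3, 4000 -> 5.3, 10000 -> 6.6

    # Black-to-peak range using the DP-V2410 figures quoted above:
    print(round(log2(400 / 0.025), 1), "stops")   # ~14 stops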

What’s tricky is that HDR code values are absolute: if a monitor can only display five stops above middle gray, and a camera captures six, that monitor will not automatically display that last stop. Highlights that fall into that last stop won’t be clipped in the recording, but they’ll be clipped on the monitor.
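
To make “absolute” concrete, here is a small sketch of the PQ (SMPTE ST 2084) encoding used by HDR10 and Dolby Vision: a given nit level always maps to the same code value, whatever the display can actually show. Treat this as an illustration rather than production code, and note that these are full-range 10-bit values; narrow-range video shifts them slightly:

    def nits_to_pq_10bit(nits):
        # ST 2084 inverse EOTF: absolute luminance (nits) -> 10-bit code value
        m1, m2 = 2610 / 16384, 2523 / 4096 * 128
        c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
        y = min(max(nits, 0.0), 10000.0) / 10000.0
        n = ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2
        return round(n * 1023)

    print(nits_to_pq_10bit(100))    # SDR reference white, ~520
    print(nits_to_pq_10bit(1000))   # ~769, the same on every PQ display

If the display can’t reach 1,000 nits, that code value doesn’t slide down to fit; the highlight simply clips unless something like the scaling schemes described below intervenes.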

There are several schemes that allow for the reproduction of highlights that fall beyond the capabilities of a consumer television, but none of these are appropriate for professional use. They will be covered later. Canon professional HDR monitors offer a feature called HDR range, which scales the HDR signal to fit the constraints of the monitor. This is covered in part 3, “Monitoring Considerations.”

WHAT TO REMEMBER

  • HDR is not simply increased brightness, but increased dynamic range in the monitor itself. SDR’s dynamic range is only six stops from brightest white to darkest black. HDR boasts 13 stops or more.
  • Matte white is two stops brighter than middle gray, while matte black is two to three stops darker. Display brightness values beyond those limits fall into the realm of HDR.
  • HDR code values map to specific nit values. Nit values beyond the display capabilities of a professional monitor will be clipped unless viewed in a different mode that scales the full signal to fit within the monitor’s dynamic range.

SERIES TABLE OF CONTENTS

1. What is HDR? <You are here
2. On Set with HDR <Next in series
3. Monitor Considerations
4. Artistic Considerations
5. The Technical Side of HDR
6. How the Audience Will See Your Work


The author wishes to thank the following for their assistance in the creation of this article.

Canon USA
David Hoon Doko
Larry Thorpe


Disclosure: I was paid by Canon USA to research and write this article.

Art Adams
Director of Photography

Cooke Anamorphic Lenses Bring Class and Character to a Clean Digital World
https://www.provideocoalition.com/cooke-anamorphic-lenses-bring-class-and-character-to-a-clean-digital-world/
Fri, 31 Mar 2017

I remember the first time I worked in anamorphic. I’d landed a job as “A” camera operator on a feature film’s additional photography unit. Looking through the Panaflex viewfinder, I saw a very wide but not very tall frame. I couldn’t judge much about the image at all as it was so small. That made me nervous, as it was a toss up in the film days as to who got fired first for out-of-focus dailies: the assistant, for not nailing focus, or the operator, for not seeing the image was soft.

At lunch I placed a phone call to a union business agent whom I knew to be a former operator. “How,” I asked him, “do you judge anamorphic focus through a Panaflex viewfinder?”

“Switch off the de-anamorphoser in the viewfinder,” he told me. “Look only at the unsqueezed image.”

I did, and saw a bright, clear, square image that allowed me to judge focus perfectly. I had to retrain my brain to compose a wide frame without seeing its width, but I survived a month of shooting where, for budget reasons, dailies were only viewed once a week.

That’s only one of the ways anamorphic forces us to think differently about how we shoot images, especially in the digital world. The lenses are rarely optically perfect, and show imperfection that we don’t see in many other kinds of lenses: horizontal lens flares, oval bokeh, inconsistent sharpness across the field of view, pin cushion and barrel distortion… all the things that manufacturers try to eliminate from spherical lenses are what give anamorphic lenses their appeal.

I’ve long been convinced that audiences want to see interpretations of reality, rather than reality itself, so a clean, pristine, perfectly realistic image is often unsatisfying. The very act of telling a story is bending reality to fit a narrative, so why should cinematography be any different?

Instagram seems to be living proof of this. The point is to take a digital picture and then mess it up. The results are often abstract, unrealistic, and beautiful. The flaws help to tell the story.

Digital capture is very clean compared to film, and—in retrospect—one of the things that made film so attractive was its abstraction. Its grain gave it texture. Different stocks rendered color and contrast in dramatically different ways, so much of the look depended on whose negative stock one ran through the camera. It was malleable, but not as much as digital, and creating a “look” often meant learning chemistry rather than invoking power windows in a DI suite. The imperfections were what made it special.

Lens choice has always made a difference, but never more so than now. Without film’s texture and chemical “funkiness,” we’ve lost a layer of abstraction that—if Instagram is to be believed—audiences appreciate and expect. There are ways to reintroduce that feel in capture and post, through the introduction of noise or the manipulation of contrast and color, but often creativity comes from the happy accident or the unexpected.

The way that a glass lens warps reality and presents it to a sensor, where the look is captured at that sensor’s full bit depth instead of being applied later to 10-bit compressed digital footage, is one of the few ways we have left to “bake” a look into an image. It’s also one of the best ways to introduce happy accidents to the filmmaking process. There’s a reason so many DPs are opting to shoot new projects with old glass or with anamorphic lenses: they distort the world in ways that we find pleasing, that suit the narrative process, and that we’d never think of while sitting in a dark DI suite at a desk lined with assorted snack foods.

In the same way that font choice influences our perception of the written word, or choice of brush affects the texture of a painting, the choice of glass through which we tell stories is itself telling part of the story.

Recently I decided to ply Carey Duffey, Cooke’s European Director of Sales, with questions about Cooke Optics’ approach to developing anamorphic lenses. Mostly I wanted an excuse to learn more about Cooke’s new SF anamorphic line, which will be on display at NAB, but I didn’t tell him that.

Art: Thanks for talking with me, Carey. Can you tell me about Cooke’s design goals in creating Cooke anamorphic lenses?

Carey: Having only started at Cooke in January 2016, I was not involved in any of the original ideas, concepts or discussions about the aims and objectives for the Cooke Anamorphic lenses. That is all down to our chairman, Les Zellan, and our senior optical lens designer, Ian Neil. However, once I accepted my position as European Sales Director, it was more than apparent to me that it was extremely important to educate myself as to what Cooke Anamorphics delivered.

So to answer the question directly and simply, the idea was to design a set of lenses based on historical anamorphic lens principles: rear spherical elements with anamorphic front cylinders.

Art: I seem to remember that this is key to the anamorphic look. When I worked on features in Hollywood I saw anamorphic zooms that were converted by putting an anamorphic cylinder on the back of the lens, but while that cobbled a wide screen look out of a spherical lens, it didn’t have that beautiful oval bokeh or the horizontal streak flares that we love so much.

Carey: Cooke sought to produce a range of anamorphic lenses that took on the characteristics of what we consider “good things” about the look of Anamorphic lenses: optical bokeh, reduced depth of field, field curvature, and pin-cushioning or saddle effect, depending on the focal length. We worked aggressively to avoid bad things such as the “anamorphic mumps”, where the closer a face is to the lens the wider it becomes. Also, our original anamorphic lenses steered away from exaggerated horizontal flares. This was a conscious decision as the lenses better matched the flare characteristics of the extremely clean S4/i and 5/i lenses.

Art: If I recall correctly, “anamorphic mumps” was a big issue when anamorphic lenses were first introduced in the 1950s, and on into the 1960s. Some stars dreaded working in anamorphic as they felt their closeups made them look fat.

How would you describe the appeal of anamorphic imagery?

Carey: Most non-technical people generally agree that anamorphic images encapsulate the “motion picture” look. They say that 2.4:1 anamorphic looks “just like the movies!” Directors of photography simply tend to say anamorphic lenses have personality.

Also, many DPs feel that the digital look is too clean and lacks the texture and character of film.

Art: There’s been a resurgence of interest in old lenses as a way to give digital images more character. I know that TLS rehouses old Cooke Speed Panchros for modern use, and they are very popular precisely because of their lack of perfection (and, of course, because they still possess the famous “Cooke look”). I’ve long felt that audiences don’t want to see reality, but rather an interesting interpretation of reality. Sometimes digital feels a little too real for abstract storytelling.

So how did Cooke preserve and reinvent its distinctive “look” in a set of anamorphic lenses?

Carey: Wide-angle anamorphic lenses inherently display distortion, so we intentionally retained elements of that distortion so as not to make a flat image. You will find that the 25mm, 32mm and 40mm do have some distortion top to bottom and side to side. However this has been controlled as to not make the viewer feel uncomfortable when the camera pans or tilts. Too much distortion can make the viewer feel as if they are rolling or swimming in the image. This also results in wavelike distortion when the camera is locked off and a person or object moves across the frame.

Some distortion must be present to add “personality” but should not render the viewing experience unpleasant or uncomfortable.

Also, I think that as we view the world in our natural state of looking around, we perceive multiple focal planes in the real world due to our quick focusing reflex. The look of front anamorphic lenses adds a “roundness” to the focal planes, which results in an additional sense of dimensionality.

This is why anamorphic lenses look great when photographing landscapes and exterior natural environments. Architectural photography might require special framing to avoid these characteristics, or they can simply be embraced as part of the anamorphic look. We leave that choice to the DP.

After the wider focal lengths we move on to the 50mm lens, which has sharper sides and softer corners than the wider-angle lenses. The designers felt this complemented this focal length.

Of particular interest is our newer 65mm macro. Anamorphic lenses have not historically been great at focusing close, but this lens is a 4:1 macro, which also reaches effortlessly to infinity.

Art: I remember older anamorphic lenses focusing no closer than 4′, and the only way to get around this was to use diopters. Those change the focus markings on the lens, so during prep camera assistants would measure out every focus mark on every lens and create new follow focus disks. If the DP called for a close-up on a wide lens, the assistant would add a diopter and then change out the follow focus disk so they still had reliable focus marks.

The first time I looked at a Cooke Anamorphic lens I was startled to see that the wider ones focus closer than 3′, and they look sharp wide open. My understanding is that anamorphic lenses can be thought of as two lenses combined—a primary focal length in the vertical axis and a second focal length that’s 50% wider in the horizontal axis—and their depths of field must overlap in order for an image to look sharp. Many early anamorphic lenses didn’t focus unless stopped down to f/4, but your lenses look tack sharp at T2.3!

I heard one story, about an anamorphic feature film that ran through two or three assistants for focus issues, until they hired an assistant with anamorphic experience. He pointed out that the DP was shooting with the lenses wide open and they simply wouldn’t focus.

I don’t see this being a problem with your lenses.

Carey: The longer focal lengths—75mm & 100mm—see the field curvature change slightly into what I can only describe as a delicate saddle effect, as opposed to the pincushion of the wide angles.

 

Art: I’m not familiar with “saddle effect.” How would you describe it?

Carey: By “saddle” we mean that the centre and corners are sharp and the edges are softer, without distorting the center of the frame. Remember that when we discuss fall off on anamorphic lenses—as well as optical artifacts such as pin cushioning, saddle effect and barrel distortion—the sweet spot of an anamorphic lens is based on a centrally located horizontal oval, not a circle. The setting of this fall off from the oval sweet spot, and how large it is, determines the final look of the lens.

Our anamorphic lenses are based around an elliptical shape that covers 80% to 90% of the vertical and horizontal frame. Saddle effect doesn’t cause issues such as “anamorphic mumps”, but instead subtly enhances the viewing experience. It’s part of the character that DPs expect out of an anamorphic lens.

The pincushion and saddle focal plane effects are what determine key aspects of an anamorphic lens’s focal length characteristic. The saddle effect on longer focal length lenses draws the image out toward the viewer, and pin cushioning on wide-angle lenses has the effect of drawing the viewer in.

We use these effects sparingly and carefully. Using too much can make the image overbearing and difficult to watch!

Finally, the longer focal lengths—135mm, 180mm and 300mm (yes, 300mm!)—have a flatter image due to their length, but have minimal color fringing by comparison to other anamorphic lenses of these focal lengths.

Art: What’s the greatest issue you must overcome when selling anamorphic lenses in the digital era?

Carey: Some of my customers tell me, “It must be easy to sell Cooke anamorphic lenses in the digital era because the digital format on its own is so boring and sterile. Adding a lens with personality can only help to achieve more interesting images!” That said, the biggest problem that we face in anamorphic cinematography today is that some people think that letter-boxing the image is all they have to do to create an anamorphic look. They have all heard of anamorphic, but they don’t really know what it means to shoot true anamorphic. Lenses which have rear anamorphic cylinders, as you have mentioned, go a long way toward confusing the issue because they don’t produce a historically “correct” anamorphic image.

There needs to be a greater distinction between shooting 2.4:1 on spherical lenses and doing the same with front anamorphic lenses or rear anamorphic lenses. I think this will help everyone in the end, as DPs can be more specific with directors and producers about the look they are trying to create.

Art: I understand that you have introduced some new anamorphic lenses that have even more “character” than your existing lenses. What can you tell me about those?

Carey: Yes, we’ll be showing off our new SF (“special flare”) anamorphic lenses at NAB. One of the most asked for “personality traits” of anamorphic is the famous horizontal blue streak flare. This can be cheated in spherical formats by letter-boxing the image and adding a streak filter. Presto, job done. (Well, not really… you can’t create anamorphic “personality” so easily!)

Our SF anamorphic lenses are mechanically the same as our current anamorphic lenses, except that many of the optics have been recoated to produce this distinctive anamorphic flare. This is not a retrofit, a filter or an attachment: the actual glass elements of the lens have been specially treated. This creates an extremely organic flare that is enhanced by the unique ways light can bounce around inside an anamorphic lens. There is no hiding when the light hits the lens.

No matter which style of lens a DP chooses, they will always have the classic “Cooke look.”

The following projects were shot through Cooke Anamorphic /i lenses. Look for footage showing off the new SF line of lenses at NAB.

See Cooke /i and SF anamorphic lenses at NAB 2017 in Cooke’s booth, C5414. That’s adjacent to the DSC Labs booth, where Adam Wilt and I will be demoing color, resolution and dynamic range charts.

Art Adams
Director of Photography

Disclaimer: I have worked as a paid consultant to DSC Labs.

Three Variations on Lighting a Single Shot Commercial
https://www.provideocoalition.com/lighting-the-single-shot-commercial
Sun, 19 Mar 2017

I love a challenge, as that’s when I get to do my best work. One of these spots was easy, one was moderately difficult, and one was hard. All turned out perfectly.

Before I go any further, take a look at the finished pieces:

Each of these three spots has a name. In order, they are: “Lightbulbs,” “Napkins” and “Typewriter.”

“Typewriter” was the simplest of the three. It’s just a straight pull back. One big wall, lots of texture, easy to light.

Texture makes a difference. Our brains love variety. I often use dappled lighting to break up flat, even surfaces. Textures built into the set allow me to light more simply and quickly.

Based on the boards, this spot looked easy to light. The camera move could be executed on a dolly.

This prompted me to call my line producer, Christopher Knox, and ask for a Technocrane.

“Hmmm,” he said. “Can you do this with a jib instead? We’re not really budgeted for a Technocrane.”

“Maybe,” I said. “The jib operator is going to have to jockey the base around a bit, as the arm pivots in space as it swings up. If they say they can boom up while pushing the crane across the floor to maintain the composition, I’m happy to use a jib.”

“Okay, let me call the crane guy,” he said. About an hour later he called me back. “We’re using a Technocrane.”

As a compromise, I offered to shoot the project on a classic Arri Alexa. That camera is about six years old now but looks as phenomenal as an Amira or Mini. The disadvantage is it won’t shoot resolutions higher than HD. The good news is that it’s a lot cheaper than an Amira or a Mini.

Shuffling money around to make the project work is a key part of commercial production in the modern world, and while I certainly love to work with the latest and greatest tools, I also know exactly what kind of camera gear I can get away with for any given job. Shooting green screen with a Canon C300 is a disastrous mistake. Putting a heavy zoom on a Sony FS7 will result in soft focus post-production tears. A RED camera captures critical color with one particular OLPF filter, and one only. Things were easier in the film days. Now… it helps to be a serious geek when figuring all this out.

Fortunately, I am a world-class geek.

Often I’m able to give production choices: “I know you want to use camera “X,” but I can give you a comparable look using camera “Y” so you can put the savings into something else that we need to make the job a success…” Like a Technocrane.

For my part, I told Knox that I would use the Technocrane for every setup, as it’s much faster to use than a traditional dolly. There’s a lengthy camera move in each of these spots, so a Technocrane could really pay off for all three. We only needed it for one shot, but it was going to reduce setup times on every shot.

I could have shot this last spot using a dolly, but the end of the Technocrane is very narrow (the camera is wider than the arm) and the crane eliminated the need for dolly track. As we’d be shooting lots of reflective glass very near to the lens, moving the camera on a narrow black arm made more sense than on a large shiny dolly mounted on track. If we hadn’t needed the crane for “Napkins” I’d have made a dolly work, likely by draping it, myself and the dolly grip in black duvetine. That’s not my favorite way to work, but it would have been a cost effective choice.

“Lightbulbs” was clearly the hardest setup from a rigging perspective, as we had to hang dozens of lightbulbs in an attractive cloud formation. I outlined the following specs:

  • Each bulb must be easily and quickly adjustable, both in height and location.
  • Each bulb required its own dimmer.
  • The bulbs in the center of the rig had to be built on an adjustable channel system. I wanted the camera to just barely clear them as it pulled back, so the nearest bulbs swept dramatically past the lens.

This sounds simple. In reality it required a small truss setup with a hundred pounds or more of cabling, no floor stands, and a portable 30-channel dimmer system.

STAGING THE SETS FOR EFFICIENCY

Originally, when I spoke to art director Bret Lama about how to arrange the sets on the smallish stage we had to work in (such are the realities of living in a secondary market), we spoke of putting them side-by-side. After doing some measuring, though, I suggested stacking them up in a line.

The crane specs showed that its collapsed length was about 14′. That left 28′ in which to build the set and move the camera, which was *probably* enough, but I dislike working at the limits of what I have available. I knew I wanted to use a large light source to reveal the lightbulbs’ reflective glass surfaces, and it occurred to me that rotating the set 90° away from the stage’s white cove would allow me to turn that cove into a bounce source.

Further, as the “Lightbulbs” and “Typewriter” sets consisted of single walls, we could build them parallel to each other and dress both at the same time on the pre-light day. Upon finishing photography on the first set we’d take that wall away to reveal the next wall, move in a new desk and props, make a couple of lighting tweaks and be ready to shoot without having to shift the crane base.

This arrangement worked for a number of reasons. We could set up the Technocrane operating station at the beginning of the day and never move it again. The crane itself only had to move once, to get to the “Napkins” set. We used the same shooting space and overhead rigging for our two single-wall sets. And the first two sets could share some lighting while giving us room to pre-light the third set.

“LIGHTBULBS”

Knox and I had a discussion very early on about the “Lightbulbs” set. We’d already planned to rig most of the lighting, with full grip and electric crews, on the day before the shoot, but the art department had an additional day of set construction. He put some money aside so my gaffer and I could come in on that first day and work out the mechanics of the cloud rig without our minions standing idle. (We overachieved: by the end of the day we had all three sets figured out, and “Napkins” was 75% pre-lit!)

We settled on a speed rail truss system that we’d build on the floor, supported initially by floor stands. Once the rig was built, the grip crew would run speed rail down from the grid, grab the truss, and pull the floor stands away.

I wanted the lightbulbs to pass as close to the camera as possible, so I asked that the middle bulbs be rigged from two movable pipes. This gave us the ability to quickly set a channel width for the middle rows of lightbulbs.

We ran out of speed rail, so we had to improvise with wooden boards. To the left of the board there are two pieces of pipe that run parallel to each other. (You can see the ends sticking off the back end of the rig, and the clamps that hold them in place.) Those are the channel pipes. Once the camera was in place we were able to change the width of those pipes such that the camera passed as close to those bulbs as possible over its move.

I told my gaffer, Andy Olson, to bring every stinger he had. He laughed. The next day, during the pre-light, he told me, “I thought you were joking, but I brought them all anyway. We’re using almost all of them!”

The “channel” pipes are easily visible here as they form a “V” that extends across the length of the truss. All lights are hung using A-clips, for speedy adjustment.

Andy brought a 30-channel dimming system, and we rigged 32 lights in total. (Four lights shared two circuits, but we put those at the end of the move and assumed the audience would never notice them coming on simultaneously amongst all the other cloud lights.)

As far as lighting the set, I’d initially wanted to use soft side light to create broad linear highlights in the bulb surfaces. One of the reasons I’d asked the sets to be oriented at a right angle to one of the stage’s white cyc walls was so I could use it as a large bounce source. My key grip, Gordon McIver, went even further and turned it into a massive book light.

The light source to the right was three Arri M18s aimed into a cyc wall, which was then further diffused by a frame of 12’x12′ grid cloth. The reflection of this source on one lightbulb was pretty, but seeing it on 30 lightbulbs was amazing.

The big light source was the easy part. It took the better part of a day to rig and power our “truss o’ bulbs.” Once we had everything roughed in, I snapped some photos and texted them to Greg Rowan, our director, who then talked me in on the overall shape of the bulb cloud. We were 95% ready at call time the next day, with only some final bulb adjustments once the camera was in place. Every light was held in place by a spring clip on its cord, so changing the height of each bulb was easy. Adjusting the placement was a little more difficult as we could only slide the bulbs left, right, forward or back on the bar to which they were attached, but we made do by adding crossbars, and then crossbars between crossbars.

Unfortunately, on the shoot day, the big soft source lighting scheme had to be scrapped as it revealed seams in the set walls that the client didn’t want to see. The seams in non-textured flats can be hidden by taping them over and painting them, but this doesn’t work on textured flats. I’d hoped that the seams could be explained away as possible grout lines in a brick facade, but early that morning some concerns were expressed and I initiated a backup plan. Once the final decision was made to hide the seams, we switched off the big source and turned on some smaller ones.

A Source 4 Leko, hung from the end of the truss, lights up the desk blotter, creating a soft bounce source. A 4’x4 Kino Flo peeks over the back of the set as a backlight. We later added a splash of light from a tweenie on the back wall to bring out its texture and prevent it from fading into blackness.

The bulbs lit themselves, but I needed a quick and elegant way to light the actress. I find low bounced light sources to be especially interesting as they feel “ambient” to me, as if they naturally belong no matter the environment. Sunlight striking a floor creates much the same look, and indeed there are many situations where much of the light in an environment is light radiating upwards from flat surfaces. I had the electrical crew hang a Source 4 from the truss and aim it at the desk blotter, which was a large enough source to wrap nicely around the actress’s face.

Alexa’s wonderful dynamic range and highlight handling allowed me to do this without worrying that the blotter would blow out to an awful, clipped video white. It truly lets me light as I would when shooting film.

The original Stanford University Shopping Center Apple Store design incorporated huge, milky plexiglass ceilings lit from behind by fluorescents that made the ceiling a single large, flat and perfectly diffuse light source. The floor was made of a very bright white material that caught this light and reflected it upward, resulting in nearly equal amounts of light from both the floor and the ceiling. The store interior felt as if it had an internal glow, and everyone inside was lit as if they were a fashion model. Sadly, the floor scuffed easily and was eventually removed, and while the ceiling is much the same, the feel of the store is very different without the radiant white floor.

When in doubt, I’ll often light from below. I started doing this in my low budget corporate video years, when I frequently found myself lighting people at conference tables. The most interesting—and quickest—solution was to hang a light overhead and smack it into the table, where scattered pieces of note paper bounced the light back on faces. I was able to shoot in any direction, and the soft upward shadows felt both interesting and real.

We scheduled “Lightbulbs” first as Knox and I decided to get the hardest setup out of the way early in the day. Although Andy and I tried to create a preplanned bulb illumination sequence, fading up specific bulbs as they entered the frame, this proved too time consuming. In the end we gave Andy his own reference monitor near the dimmer board and let him feel his way through the shot.

We set an overall bulb brightness level on the dimmer board using the master fader, so he could focus on bringing the individual faders up at the proper times without having to hit the same brightness level 30 times.

The foreground was very warm, so I lit the background to be a little cool. Cinematography is about creating depth and contrast, and it’s well known in design circles that “warm colors advance and cool colors recede.” Making the background a little cool created a pleasant color contrast between foreground and background while enhancing the depth of what is a not-very-deep shot.

One thing I love about the Technocrane is that I get to operate wheels again. I miss gear heads. They’re so smooth, precise, and just plain fun.

It’s clear that I’m doing some operating at the beginning of this camera move, but once I’d finished tilting up I had to immediately start tilting down to keep the top of the frame level across the rest of the move. The fact that I’m spinning wheels throughout this shot is impossible to see in the final spot, but it would have been obvious if I hadn’t. I love that.

The same thing happened on “Typewriter.” The crane arm wasn’t perfectly dead on to the set, but I was easily able to keep the actor perfectly centered across the entire move. The camera doesn’t appear to be panning or tilting at all.

“TYPEWRITER”

Once we had “Lightbulbs” in the digital can, we swapped out the desk, knocked down the set wall, and quickly lit for “Typewriter.” We’d already rigged two 4'x2' Kino Flos on the “Typewriter” set wall to act as large, downward washes. We’d left the large soft source for “Lightbulbs” built and ready to go, so we turned it back on for ambient fill. We also taped a piece of typing paper to the keys of the typewriter and aimed our Source 4 at it for a little upward-facing glow.

This small bounce wasn’t a big enough source to be flattering to the actor’s face on its own. We added a small Chimera, fitted with a directional grid and rigged to our overhead truss system.

In order to speed things up, we didn’t bother removing the light bulb rig until we’d finished this setup. I had the electrical crew pull all the lightbulbs up to the truss, and we waited to disassemble it until we moved to the final “Napkins” setup.

I would have preferred to light this with the table lamp only, but that would have required cutting the shade and using a bigger bulb. Art direction decisions of this type often happen on the day of the shoot, and I opted not to spend time tearing apart a lamp on a moment’s notice. Half of my job is getting the look right, and the other half is getting it done on time.

There’s not much else to say about this spot. It was very straightforward. The Technocrane worked perfectly and the lighting setup was fairly simple. While shooting this setup I had the electrical crew work ahead and turn on the lighting for the “Napkins” spot, which we’d already put in place.

“NAPKINS”

Initially this setup was a bit of a quandary. I knew I had to light someone lying on the ground and then pull straight back for a good distance without seeing lights or casting camera shadows. I had a full set of Cooke S4 primes on set, but I’d decided to only use the 32mm when possible. The 32mm is wide enough to capture a good-sized set in a small space, but isn’t so wide that it distorts faces in unpleasant ways. As all three spots started in closeup and ended wide, the 32mm seemed like the best choice for each setup. (32mm and 35mm primes are great all-around lenses. If I’m handheld and shooting quickly, I’ll often put one of those on and never take it off.) In theory I could have rented only the 32mm, but that’s asking for trouble if anything changed at all on the day of the shoot.

The trick in preproduction was to figure out whether I could get a head-to-toe shot on a 32mm lens at our maximum camera height, which was 18′ to the stage’s grid. I also had to find out what my angle of view was at that height, so I could tell the art department how much floor space to dress with napkins. It’s times like these that PCam is invaluable. Thanks to its various calculators I determined that it was possible for me to get a head-to-toe shot on a 32mm lens at 18′, and also that I needed 12’x12′ of “napkin space” to fill the final frame.
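
For anyone who wants to sanity-check this kind of framing math without PCam, the underlying arithmetic is simple similar triangles: coverage at the subject equals distance times sensor dimension divided by focal length. Below is a minimal Python sketch of that calculation; the sensor dimensions are assumed, approximate values for an Alexa recording 16:9 (about 23.8mm x 13.4mm), not figures from the production.

def coverage_ft(focal_mm, distance_ft, sensor_dim_mm):
    """Field-of-view coverage, in feet, at a given distance.
    Simple similar-triangles approximation: coverage = distance * sensor / focal."""
    return distance_ft * sensor_dim_mm / focal_mm

# Assumed Alexa 16:9 sensor dimensions (approximate).
sensor_w, sensor_h = 23.8, 13.4
focal, height = 32.0, 18.0  # 32mm lens, 18 feet to the grid

print(f"horizontal: {coverage_ft(focal, height, sensor_w):.1f} ft")  # ~13.4 ft
print(f"vertical:   {coverage_ft(focal, height, sensor_h):.1f} ft")  # ~7.5 ft
# Plenty for a head-to-toe frame of someone lying on the floor, and in the same
# ballpark as the 12'x12' of dressed napkin space. The exact numbers shift with
# the recording format and with how far below the grid the lens actually sits,
# which is why a calculator like PCam beats guessing.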

During our “pre pre-light” day, Andy and I talked about ways to make this sea of napkins interesting while also lighting an actress’s closeup. I’d toyed with bouncing light off the cyc wall, but it felt like that might be too broad for what was essentially a person lying on a flat white surface. Even confining the bounce to the lower part of the wall, such that it skimmed the napkins, seemed like it might be too much of a flat wash, and it left the issue of getting light around the front of the actress’s face from a very high angle. I broached the subject of hanging a Chimera in the grid, but Andy talked me out of it, as he hates hanging lights in case he has to move them quickly. The napkin floor would have kept us from bringing in a ladder or a scissor lift unless the light was hung well off center.

In the end I embraced my “inner hard light.” We put a 2K fresnel on a stand as high as we could get it, and the angle turned out to be perfect for a classic, 1950s Hollywood key light. One of the tricks with hard light is getting the fill light right, as hard light casts shadows that emphasize skin imperfections. Lowering the contrast of those shadows, by placing a soft light near the lens—and preferably under it, so it reaches into the actor’s eyes—fixes a lot of issues. We had a small portable LED light ready to mount under the lens in the event I needed it, but the actress had perfect skin. She was a dream to light.

I wanted to create a pool of light around the actress, but not in a perfect circle as if it were a spot light. As part of our “pre pre-light,” Andy and I experimented with distorting our key light in interesting and visually random ways. We started off spotting the light all the way in, to create a pool of light with a hot center that grew darker at the edges. Then we added a cucaloris (“cookie”) for some breakup. Normally you’d flood a light all the way out so the cookie cast hard shadows, but I wanted to see how the shadows softened at full spot.

And… it was too soft. The pattern was mush.

I knew from experience, though, that adding another pattern would cause the two patterns to interact, so I asked Andy for a second cookie. He didn’t have one (almost no one carries them anymore; they’re considered a bit dated), but he had a piece of foam core with a round hole cut in it. We set that inches in front of the spotted fresnel to see what happened.

It was magic. The result is a little hard to see in the final shot, but the interaction of the small aperture in the foam core, the spotted fresnel, and the cookie resulted in a wonderfully random pool of light. It looked amazing on the Sony A170 OLED monitor we used on set, although it loses a little something in a highly compressed 8-bit file. Still, it looks fairly nice: not too theatrical, not too perfect.

This is a trick I use often. Stacking patterns causes the holes in the front pattern to act as apertures through which the rear pattern is projected. The result is a wonderfully random combination of hard and soft shadows that interact in unpredictable ways. I wrote an article about this effect here.

At each corner of the napkin region we placed a 4’x8′ piece of foam core, and bounced a Kino Flo into each one. Ideally I’d have filled from behind the camera, but no matter how big the source I’d still end up shadowing the actress with the camera at the beginning of the shot. Placing large bounces around the perimeter of the shot gave me the same effect without putting any lights behind the camera.

We placed four Source 4 Lekos on the ground, two on either side of the napkin field, to skim across the napkin edges and prevent them from appearing too flat.

We may have added a light CTO gel to the 2K fresnel to make it feel a bit like warm sunshine.

All that was left was to finesse the camera move. The crane operator, Robert Barcelona, placed the camera in a direct-down position on the head, and then joined me in operating it. My job was simply to pan, while he retracted the arm over the course of the shot so that the camera started centered on the actress’s face and ended up centered on her body.

This pool of light doesn’t look like it took a lot of work to create, but natural light can be surprisingly hard to reproduce at times—because it’s never perfect.

It took a couple of rehearsals to choreograph our dance, but we figured it out fairly quickly. Robert had the hardest job, as precisely retracting the arm while watching a spinning image couldn’t have been easy.

Alexa presets tend to look a little green. Adding CC-4 to the color temperature (top right, “3200 -4”) tends to fix this issue. I use a center “dot” instead of crosshairs when working with Alexa, as I like to see the center of the frame without showing the director and clients a crosshair. (One of my assistants likes to make the frame lines and crosshair red, to emulate the old “Panaglow” illuminated markings that helped film operators frame shots at night. Red worked well in that case because it didn’t affect night vision, but I prefer white because—in every other context—red indicates an error.)

TECHNICAL SPECS

We shot this on an Arri Alexa Classic, and while the savings in rental price over an Amira or Mini didn’t buy the Technocrane, it certainly defrayed the cost somewhat.

For quite a long time I’ve been very picky about noise, and I will normally rate a camera at half its stated ISO as most aren’t as noise free as I’d like. I’m starting to ease up on that practice, but at the time I shot this I was still deep in my anti-noise phase, so I rated the camera at ISO 400. Reducing noise does help in compression for the web, which is where a lot of my recent projects have landed.

I’d wanted to shoot the project on TLS-rehoused Cooke Speed Panchros, which are beautiful old lenses with all sorts of funky anomalies. Sadly, my assistant found that the 32mm was out of collimation at the prep and there was no one in the rental house on the prep day to fix it. I ended up with a set of Cooke S4s, and I really can’t complain as those are phenomenal lenses for shooting closeups, but the funkiness of the TLS Cooke Speed Panchros would have added an extra something to the project. The bokeh on “Lightbulbs” would have been wonderfully random, and the natural vignetting in those lenses would have added some character to the fairly flat fields of both “Typewriter” and “Napkins.”

“Lightbulbs” and “Typewriter” were shot at T2 as I wanted the backgrounds to go a bit soft, at least at the beginning of the camera move. “Napkins” was shot at T4 1/2 as reduced depth of field didn’t make any difference to the flat set, and I decided to give my camera assistant—the excellent John Gazdik—a break at the end of the day. He didn’t need it, but I remember my days as a camera assistant and I try to be kind to mine whenever possible.

I nearly always shoot with a 144° shutter (1/60th second exposure at 23.98fps) as I’m extremely paranoid about flicker. Fluorescents flicker, LEDs flicker, and recently I’ve even seen tungsten halogen bulbs flicker.

My theory is that energy efficient halogen filaments are thinner, so they cool—and dim—faster when the AC current changes directions. I’ve had issues when shooting on-set tungsten practicals at 48fps/180°, and I’ve seen the same practicals flicker at 24fps/180° when dimmed. 144° puts me directly in the middle of the 60hz flicker-free window, and allows me to worry about more important things than whether an errant lightbulb might be misbehaving in the background… or, in this case, 30 dimmed lightbulbs in the foreground.
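
The arithmetic behind that choice is simple enough to keep in your head, but here is a small illustrative sketch (not anything from the production) of why 144 degrees sits in the middle of the window: a light on 60hz mains pulses 120 times a second, and an exposure that spans a whole number of those pulses stays constant from frame to frame.

def exposure_time(shutter_angle_deg, fps):
    """Exposure time in seconds: the fraction of each frame the shutter is open."""
    return (shutter_angle_deg / 360.0) / fps

t = exposure_time(144, 23.98)
print(t)            # ~0.0167s, i.e. about 1/60th of a second
print(t * 120)      # ~2.0 -- whole pulses of a 60hz source captured per frame
print(exposure_time(180, 23.98) * 120)  # ~2.5 -- the edge-of-the-window case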

All spots were shot at 23.98fps in LogC, recording to ProRes 4444.

Me (left) on the “Lightbulbs” set with line producer Christopher Knox. Photo by Bret Lama. I love this picture.

Production Company: Teak SF
Director: Greg Rowan
Line Producer: Christopher Knox
Executive Producer: Greg Martinez

Director of Photography: Art Adams
Art Director: Bret Lama
Gaffer: Andy Olson
Key Grip: Gordon McIver
First Camera Assistant: John Gazdik
Crane Owner/Operator: Robert Barcelona

Art Adams
Director of Photography

The post Three Variations on Lighting a Single Shot Commercial appeared first on ProVideo Coalition.

]]>
https://www.provideocoalition.com/lighting-the-single-shot-commercial/feed/ 15
Polishing the Talking Head Interview https://www.provideocoalition.com/polishing-talking-head-interview/ https://www.provideocoalition.com/polishing-talking-head-interview/#comments Thu, 02 Mar 2017 23:42:58 +0000 https://www.provideocoalition.com/?p=47216 I’m constantly experimenting with common lighting setups and trying to make them better. There is no more common lighting setup than that of the talking head interview. Here are a couple of tricks I tried the other day that seem to work quite well. FOREGROUND In my work I’ve seen trends moving away from backlights,

The post Polishing the Talking Head Interview appeared first on ProVideo Coalition.

]]>
I’m constantly experimenting with common lighting setups and trying to make them better. There is no more common lighting setup than that of the talking head interview. Here are a couple of tricks I tried the other day that seem to work quite well.

FOREGROUND

In my work I’ve seen trends moving away from backlights, scratches and edges. Most of my clients and directors don’t want them. I’m okay with this. Often they are unnecessary, as their original purpose—which was to separate people from backgrounds when photographed on black and white film—is no longer a concern. (A person with brown hair will disappear into a red wall in a grayscale image, whereas this is less likely to happen in color.)

The traditional cheek scratch can be quite beautiful if done properly, but unfortunately the recipe varies based on the shape of a person’s face, the size of their ears, and the color of their hair. I’ve found myself having to tweak cheek scratches for each subject who sits in front of my camera, and this is generally not a good thing as interviews are often scheduled with very little time in between.

These days I focus on the one light I know I’m always going to use: the key light.

Long ago I learned not to use small sources when shooting interviews, as their sharp shadows are unforgiving and require the light to be precisely set for every face. A side angle that works well for a person with a round face and small nose will not work for someone who has deep eyes and a long nose, but I am rarely afforded the time to make these adjustments. Recently, for example, I shot a project consisting of eight sit-down interviews spaced about ten minutes apart. There was little time for tweaking.

One method that I used for quite a long time involved a 4’x4′ frame of diffusion set close to the subject, often no more than 2-4′ away. I had noticed that the size of the source, in relation to its distance from the face, worked well in a 1:1 relationship, and I’ve followed that rule ever since, both consciously and unconsciously. For example, yesterday’s shoot employed a 6’x6′ source placed 6′ from the interview subject, although I didn’t realize I’d done this until after the shoot.

There are several reasons why I like to use large sources when lighting faces:

  1. Softness. Soft light works on everyone. It reaches into eyes and washes away imperfections. Most imperfections are revealed by their cast shadows. Eliminating or softening these shadows makes imperfections disappear. Facial features are enhanced by their shadows, so soft shadows underplay facial features that might be distracting.
  2. Specular highlights. A large source reflects in healthy, shiny skin and creates a subtle yet beautiful highlight. This is most obvious in dark skin, but it has a noticeable effect on light skin as well.

In the distant past I placed my sources high in the air, as the last thing I wanted to see was a horizontal nose shadow. Even though these are quite common in real life, nothing says “lit” faster to me than a sharp nose shadow that falls horizontally across the face. It feels sloppy and wrong to me. I’ve eased up on this perspective in recent years, but for my own work I avoid this whenever possible. Classic portraiture lighting sees the key light above and to the side of the subject, such that the nose shadow falls along the “smile line,” the invisible line drawn between the corner of the nose and the corner of the mouth. Placing a soft light higher than the subject’s head pushes facial modeling in a similar direction.

I also like to light with soft sources from below the lens. Light bounces off all the surfaces in a room, and its absence is most obvious on a set that has no ceiling: hair becomes unnaturally shadowed at the top of the head, and something feels wrong or off. It’s common to hang a dim light source or white reflective material over the top of a set to reintroduce the subtle highlights produced by a white ceiling. Light bouncing off the ground is less obvious when it’s missing, but when added it almost never feels wrong. In fact, when I’m at a loss as to how to light a bland room in an interesting way, I’ll almost always bounce a light off the floor or other flat surface, such as a tabletop.

(I’ve often wondered why ceilings are almost always painted white. As best I can tell, colored ceilings will dramatically shift the hue of ambient light in a room, while a neutral white ceiling ensures there is always some measure of white ambience such that walls appear, say, blue, but people within the room do not.)

For a long time I focused on using horizontal light sources, as they preserve chin shadows (often useful to hide a chin imperfection that one of my DITs referred to as “the gobbler”) while softening nose shadows. The sharper the nose shadow the more precisely it must be placed, while soft light is much faster to work with as it is very forgiving. Later, though, I experimented with vertical light sources: while those enhanced nose shadows, they felt more “ambient,” both because the lack of a chin shadow made the direction of the light less obvious and because most soft ambient light comes from the side and below, which casts no chin shadow either.

Over time I’ve come to combine these lighting techniques. I’ll use a large soft sidelight that’s a little higher than eye level, in conjunction with bounced soft light below the lens that wraps the light around the front of the face. The combined look is that of window light from the side combined with ambient sunlight bouncing off the floor.

Here’s producer Courtney Harrell sitting in as a lighting reference:

The light is clearly coming from the side, but it also wraps around the face from below. The look is very much what you’d see if Courtney were sitting next to a window with sunlight streaming in, where she’d be lit both from the side and from below with a bit of shadow on the side of her face opposite the window. There’s no hard nose or chin shadow here, but the light has a definite direction. (This image was captured using my iPhone and a Sony A170 OLED monitor.)
This is a 6’x6′ book light. They don’t get much softer than this. An Arri M18 HMI bounces light off a 6’x6′ Ultra Bounce and radiates forward through a 6’x6′ Magic Cloth diffuser whose top is tied to the Ultra Bounce frame. This big soft source creates most of the look, but the 4’x8′ piece of foam core at the bottom is the finishing touch.
The foam core bounce creates the feeling of ambient “floor bounce” that wraps around the underside of the face and softens chin and nose shadows. (The sheet of 4’x4′ bead board on the far side of the light acts only to prevent light spilling out the side of the book light.)

This is the kind of lighting setup that works for everyone. It’s handy when there’s no time to tweak between interviews, and/or you don’t know what the subjects will look like.

I’ll often overexpose skin by 1/2 stop to give it a bit of a glow. This works on cameras that don’t distort or easily clip bright skin tones, such as the Arri Alexa and Amira, RED One Dragon/Weapon, and the Sony F55/F5/FS7 cameras (in Cine-EI mode). I used to overexpose flesh tones by up to one full stop, but while this looked great on the on-set monitor, the final product often looked less flattering when compressed to 8-bit color for broadcast or web distribution: the compression eliminated much of the subtle tone and hue detail on the bright side of the face.

BACKGROUND

Backgrounds can be a pain. They’re always present, and commercial and corporate clients often want them to sing, but many locations tend to not photograph well. The latest trend is to use lens flares and highlights to bring otherwise dull backgrounds to life, so I did my best to do exactly that in this 19th floor office location.

The camera was to move constantly back and forth on a 4′ slider, so I devised two tricks: one for intermittent flares and one for sparkles.

I knew I wanted to hide flare-causing items within the shot, so I ordered some old Zeiss Super Speeds (for their flare-ability) and set them to somewhere between T1.3 and T2, depending on the subject’s skin tone. This threw the background out of focus enough that I could hide some 1’x1′ mirrors in the background:

I had my grip cover them with black tape, leaving only a 1″ slit in the middle. We then propped them up in the background, where they appeared to be nondescript black squares, or tall thin shapes that resembled some sort of textured glass vase. We lit the background with JoLekos (Joker HMIs through Source 4 barrels) projected through some office plants that we found nearby, so we placed the mirrors to catch some of that light and aim it toward the camera at specific points in the camera move.

I had one mirror aimed into the lens for the left end of the slider; another for the right; and one just off center. I made sure that I had enough room at the end of the slider move to get out of the flare, so I didn’t sit on it for too long when I had to change camera direction, and we made the slit narrow enough that the flare came and went fairly quickly across a camera move of about 4-6″. I wanted a two second flare that came and went quickly, adding interest but not upstaging the interviewee.

The couch in the foreground is soft here because focus is set farther back than it was when we rolled; in the final footage the background was softer.

It occurred to me that some shiny round sparkles might add a bit more interest. I asked Courtney to send a PA to the nearest hardware store and buy some cheap 2″ round convex mirrors, which we then ran across the background wherever they could catch some of our fake dappled sunlight. The result was a series of tiny sparkly hits that drifted slowly through the frame during the camera move.

Unlike flat mirrors, which have to be precisely aimed, a convex mirror reflects any light in front of it across a wide arc. If the camera saw the mirror, and the mirror saw the light, the camera always saw the light reflected in the mirror.

Our primary camera lenses were 50mm, 85mm and 135mm primes, and the background was 20-25′ away. My lighting gags were easily hidden within the frame thanks to the out-of-focus background, and an otherwise dull office break area came to life in a manner that was complimentary to people sitting on a couch and talking about themselves for 45 minutes at a stretch.

It’s always a challenge to make such things look interesting. While the subject matter may be riveting, it’s best to hedge one’s bets in case it isn’t. Backstopping the narrative with a pretty picture never hurts. And pretty pictures on their own help sell the narrative, as a well-wrapped package always enhances whatever is inside of it.

These convex mirrors are now a part of my regular kit. I look forward to placing them in backgrounds whenever I see the opportunity to create an out-of-focus sparkle. You should buy some for yourself sooner than later as they currently cost only $2 each. Once they catch on I’m sure they’ll be available at your nearest expendable supply store for $25 or more.

Art Adams
Director of Photography

The post Polishing the Talking Head Interview appeared first on ProVideo Coalition.

]]>
https://www.provideocoalition.com/polishing-talking-head-interview/feed/ 26
Flicker: Why On-Set Monitors Fail Us https://www.provideocoalition.com/flicker-set-monitors-fail-us/ https://www.provideocoalition.com/flicker-set-monitors-fail-us/#comments Mon, 27 Feb 2017 17:46:50 +0000 https://www.provideocoalition.com/?p=46789 While working as a young second camera assistant on a film TV series in the 1990s, I noticed the DP set the camera shutter to 144 degrees (1/60th of a second exposure). I asked him why. “I’ve been burned by HMI flicker in the past,” he told me. “I’m never going to let that happen

The post Flicker: Why On-Set Monitors Fail Us appeared first on ProVideo Coalition.

]]>

While working as a young second camera assistant on a film TV series in the 1990s, I noticed the DP set the camera shutter to 144 degrees (1/60th of a second exposure). I asked him why. “I’ve been burned by HMI flicker in the past,” he told me. “I’m never going to let that happen again.”

A few years ago, shortly after the RED ONE was released, I found myself shooting a short film for an internal ad agency film contest. It was a “just for fun” project, and I borrowed some gear from my regular crew to pull it off on a “short film budget.” As part of the lighting package I found myself with two 1200w HMI PARs, both with magnetic ballasts.

At one point during the shoot I noticed one of the HMIs was flickering. I noticed this while looking at a white wall (large flat surfaces are great for detecting flicker by eye) but I didn’t see it on our LCD monitor. I swapped the light out for the other 1200w head, changed my shutter from 180 degrees to 144 (1/60th second), and kept on shooting.

It wasn’t until I saw the footage on my computer monitor at home that I noticed something weird: faint roll bars drifting through the image. These weren’t visible on the on-set monitor, and I couldn’t see them on my computer monitor unless I quickly scrubbed through the timeline.

About nine months later I found myself on a job with a Sony F55, which has a global shutter. One scene, in a bedroom, saw us overcranking at 48 frames per second. The image looked great on our OLED monitor, but a little while later my DIT came to me for a quick chat. “The practical lamps are flickering,” he said. I walked quickly to his station, where he showed me that the lamps on either side of the bed were drifting in and out of phase: they’d look fine for a few seconds and then flicker would appear, grow stronger, peak, and then gradually disappear.

This was completely invisible on the on-set monitor.

Recently, while researching an article on HDR, I found some articles on the Internet that may explain why some kinds of flicker don’t show up until post. The links to these articles can be found at the end of this one, in the event that you’d like to read them yourself, but I’m going to try to summarize the key points and touch on those things that are particularly relevant to filmmakers.

There are three things (at least) working against us when trying to detect flicker on the typical on-set monitor.

MOTION BLUR

This appears to be the key to detecting flicker on-set. It’s a deceptively complicated subject.

We rarely stare at one spot on a display for any decent amount of time. Our eyes constantly scan the frame. A monitor with a refresh rate of 48hz will display each frame of 24fps video twice, for 1/48th of a second each time. That’s a fairly long exposure time to fast-moving eyes, and results in blurred pixels as our eyes dart around the screen.

REFRESH RATE

This is the “frame rate” of the monitor. At 60hz, which is a fairly common refresh rate, the monitor will display 60 discrete frames per second, at 16.66ms per frame. At 30fps, each progressive frame will be displayed twice.

At 24fps, the monitor refresh rate changes to 48hz, so each frame is displayed twice for about 21ms each time.

DITs tend to use LCD displays that function at a higher refresh rate, typically 120hz. This means that each frame is displayed five times, for 8.33ms each time. As each pixel appears sharper, differences between frames become more obvious.

RESPONSE TIME

This is the amount of time that it takes for a pixel to change from one hue and intensity to another. For example, given a 60hz display, if a pixel takes 8ms to transition from black to white, then its first frame will appear as white for only half of its 16ms screen time.

The shorter this transition period is, the sharper the image will look as the eye sweeps across it. For example, if the same pixel takes only 4ms to change, it is in its intended state for 12ms.

Longer transition times result in apparent motion blur, which can conceal flicker.
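
To put rough numbers on the refresh-rate and response-time arithmetic above, here is a small back-of-the-envelope sketch; the example values are the same ones used in the text, and the simple "transition happens at the start of the refresh" model is an assumption made only for illustration.

def refresh_period_ms(refresh_hz):
    """Duration of one refresh, in milliseconds."""
    return 1000.0 / refresh_hz

def settled_ms(refresh_hz, response_ms):
    """Time per refresh during which a pixel shows its target value, assuming
    the whole transition happens at the start of the refresh."""
    return max(0.0, refresh_period_ms(refresh_hz) - response_ms)

print(refresh_period_ms(60))   # ~16.7ms per refresh on a 60hz display
print(settled_ms(60, 8))       # ~8.7ms at the target value -- roughly half the refresh
print(settled_ms(60, 4))       # ~12.7ms at the target value with a 4ms response
print(refresh_period_ms(48))   # ~20.8ms, shown twice per 24fps frame
print(refresh_period_ms(120))  # ~8.3ms, shown five times per 24fps frame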

SAMPLE AND HOLD

Most LCDs and OLEDs employ this technique, where a pixel is painted and left static until it is told to change. On a 24fps display this means that each pixel is left effectively unchanged for 1/24th of a second: the first 1/48th of a second sees the pixel change from one state to another, and the next 1/48th of a second sees no change at all.

This is different to how CRTs functioned, where a phosphor pixel’s brightness would start to decay almost immediately after being painted, resulting in a “dark gap” between frames. This effect was largely hidden by painting the odd scan lines during the first half of the frame’s display time, and the even scan lines during the second half (interlacing). If every line had been painted from top down, the top would have started dimming by the time the electron beam reached the bottom of the frame, resulting in a noticeable roll bar.

This “dark gap” was still visible, though, in that the display time for any given line was less than 1/60th of a second. This translated into the appearance of increased sharpness to the roving eye. Sample-and-hold eliminates this effect: a frame that is meant to be displayed for 1/24th of a second appears for exactly that long.

Some OLED displays add artificial CRT flicker through a process called Pulse Width Modulation (PWM), where “dark gaps” are introduced between frames by turning the pixels off. This artificially-induced flicker reduces each pixel’s “exposure time” and decreases motion blur.

Likewise, some high-end LCD monitors will add “dark gaps” through the use of a strobing backlight.

PHASE

When the camera’s shutter is in phase with, or synced to, a flickering light source, the flicker disappears. When the camera is out of phase with a flickering light source, flicker appears. When the shutter speed and the lamp’s flicker rate are slightly mismatched, they will slide in and out of phase, so the light may not flicker noticeably for long periods of time and will then ramp into and out of a period of flicker. Unless one is looking at exactly the right spot on the screen at exactly the right time, the flicker may not be immediately noticeable. Typically I have so much to do that I can’t spend a lot of time examining every portion of the screen for flicker, but an on-set DIT will often spot it when they scrub through clips to check them for integrity. High-speed scrubbing reveals flicker issues better than anything else.

Editors do this as well, which is why they notice flicker immediately.

SOLUTIONS

ALWAYS SHOOT WITHIN THE HMI SAFE WINDOW

Long ago, in the days of magnetic ballasts, HMIs were prone to flicker if the power mains frequency drifted or the ballast wasn’t well maintained. Shooting with a shutter angle/exposure time that captured the same number of light pulses in each frame was the safest way to avoid flicker. By phasing the camera’s exposure to the light’s flicker there was much more latitude in how far the light’s flicker rate could drift before it became a problem.

This combination of frame rate and shutter angle/exposure time became known as the “HMI Safe Window.”

I always shoot within this window. For example, I never shoot at 24fps without setting the shutter to 1/60th of a second (144 degrees). I don’t care if I’m outdoors or indoors, or shooting with tungsten lights vs. HMIs, LEDs and Kino Flos: I habitually set the shutter to 144 degrees. Doing this consistently will eliminate most forms of flicker. The “normal” shutter angle of 180 degrees results in an exposure time of 1/48th of a second, and that doesn’t sync well with a 60hz light that’s having problems. That’s literally living on the edge of the HMI safe window.

When shooting off speeds I will always aim to put the exposure within the HMI flicker free window. I live in a 60hz power country, so I’ll shoot 48fps with a 144 shutter or 60fps with a 180 shutter, both of which give me an exposure time of 1/120th second.

1/120th is a dangerous place to live. I will check the monitor very carefully for flicker, looking in particular at the on-set practicals and LED sources. When the camera’s exposure is only 1/120th of a second, and a light is pulsing at around 120 times per second, camera/light phasing is critical.
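
One way to visualize the safe window is to count how many pulses of a 60hz source land inside each exposure; a whole number means frame-to-frame exposure stays constant. The sketch below is purely illustrative (the tolerance value is arbitrary), using the frame rate and shutter combinations mentioned above.

def pulses_per_exposure(fps, shutter_angle_deg, mains_hz=60.0):
    """Mains light pulses (two per AC cycle) captured during one exposure."""
    exposure = (shutter_angle_deg / 360.0) / fps
    return exposure * mains_hz * 2

for fps, angle in [(23.98, 144), (23.98, 180), (48, 144), (60, 180), (120, 180)]:
    p = pulses_per_exposure(fps, angle)
    safe = abs(p - round(p)) < 0.05  # arbitrary tolerance, for illustration only
    print(f"{fps}fps @ {angle} degrees: {p:.2f} pulses per frame "
          f"{'(inside the window)' if safe else '(risky)'}")
# 23.98fps @ 144 and the two 1/120th combinations land on whole pulses;
# 180 degrees at 23.98fps (~2.5 pulses) and at 120fps (~0.5) do not.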

PRACTICALS MUST BE TESTED

Small halogen bulbs have become a serious flicker issue as of late, and my suspicion is that they are more energy efficient because their filaments may be thinner. A thin filament means a big change in brightness when the AC current cycle switches direction and the filament cools during that transition. (Larger filaments take longer to cool and are less prone to flicker.) The effect is especially pronounced if the bulb is dimmed, as the filament cools more between cycles.

I refuse to take practicals for granted: if I’m shooting at high frame rates I will always shoot a quick test and view playback on multiple monitors to see if any of them flicker.

LEDs can’t be dimmed by reducing voltage but instead are best dimmed by adjusting current, and this kind of circuit can be expensive to build. It’s cheaper to flicker the LEDs at high speeds (pulse width modulation) so they appear to dim. High quality LED lights flicker at insanely high speeds and are largely flicker free even at high frame rates. Cheap LEDs tend to flicker at a rate that is invisible to the eye but is easily visible to a camera.
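
A quick way to see why cheap PWM dimming bites the camera but not the eye: if the chop rate is low and not locked to the shutter, different frames catch different numbers of "on" pulses. This is a toy model only, and the driver frequencies below are hypothetical values for illustration, not measurements of any real fixture.

import math

def on_pulses_in_exposure(pwm_hz, exposure_s, start_phase=0.0):
    """Count how many PWM 'on' pulses begin during one exposure window.
    Crude model: one pulse per PWM cycle; start_phase is measured in cycles."""
    first = math.ceil(start_phase)
    last = math.floor(start_phase + pwm_hz * exposure_s)
    return max(0, last - first + 1)

exposure = 1 / 60  # e.g. a 144-degree shutter at 24fps
for pwm_hz in (210, 25_000):  # hypothetical cheap vs. high-quality driver rates
    counts = {on_pulses_in_exposure(pwm_hz, exposure, p / 10) for p in range(10)}
    print(f"{pwm_hz}hz PWM: {sorted(counts)} pulses per frame, depending on phase")
# The 210hz driver catches 3 pulses in some frames and 4 in others -- a large,
# visible brightness swing -- while the 25,000hz driver catches 416 or 417 every
# frame, a one-pulse difference far too small to see.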

Whenever I work with a new motion picture LED fixture I always look at how it performs against the camera when dimmed to its lowest setting.

Industrial LEDs seem to do their own thing. Sometimes LED practicals in the background will phase at 1/60th of a second (144 shutter) and sometimes they won’t. I’ve noticed that exit signs seem particularly troublesome in this regard.

WAVEFORMS AND FALSE COLOR

Sometimes the best way to detect flicker is to watch a luma waveform. Global shutter flicker will turn up as a flickering trace on the waveform, and may be hard to detect as most in-monitor waveforms run at reduced frame rates to save on processing power. For example, an on-camera monitor’s waveform may only run at 12fps or 6fps, making flicker harder to detect.

Rolling shutter flicker will show up as a slow rolling movement in the trace, as if part of the image is breathing.

Sometimes false color will show flicker faster than anything else if the flicker causes enough of an exposure change that a portion of the frame transitions from one color to another. Rolling the lens aperture slowly may put flicker on the edge of a false color range and reveal it more quickly.

WHAT YOU SEE IS NOT ALWAYS WHAT YOU GET

Some monitors may not show flicker very well. Some waveforms may not show flicker very well. Sometimes file compression may enhance flicker such that it appears more strongly on playback. There are no easy answers.

My solution: always shoot within an HMI safe window, no matter the frame rate. That solves 90% of the problems I’ve run into. Doing that, along with refusing to take a practical light for granted at odd frame rates, will give you the best odds of avoiding a late night call from post.

Further reading:
Factors Affecting PC Monitor Responsiveness
Why Do Some OLEDs Have Motion Blur?

Art Adams
Director of Photography

 

The post Flicker: Why On-Set Monitors Fail Us appeared first on ProVideo Coalition.

]]>
https://www.provideocoalition.com/flicker-set-monitors-fail-us/feed/ 4