Large format is no longer out of reach of the average filmmaker, but the look is difficult to quantify. Naturally, that means I’ll give it a try.
I’ve heard a lot of theories as to why large format looks different from traditional S35. As best I can tell, it boils down to reduced depth of field. To match the angle of view between a large format camera and an S35 camera, the large format camera will require a longer lens, and this results in an apparent reduction of depth of field. In the case of an ARRI Mini vs. an ARRI Alexa LF or Mini LF, depth of field can shrink by up to 50%.
But is it really this simple?
THE DISTORTION ARGUMENT
It’s easier to correct long focal length lenses for optical distortion. The fact that large format cameras require longer lenses than S35 cameras to capture wide angles of view suggests that wide shots will show less distortion, which may make them feel more “immersive.” I don’t know that this is the case with modern cinema lenses as they tend to be very well corrected for distortion.
I compared two sets of well-corrected cinema lenses: ARRI Master Primes and ARRI Signature Primes. Master Primes are nearly optically perfect cinema lenses, while Signature Primes possess a unique combination of modern and classic attributes. Distortion, though, is not one of those classic attributes.
Master Primes cover S35 but not large format. Signature Primes are designed for both formats. Let’s see how they compare.
I blackmailed Technical Sales Rep Chase Hagen into posing for me. We captured two passes on an ARRI Alexa LF: one with a Master Prime in 3.2K, and one with a Signature Prime in 4.5K. The camera remained static between the two passes. (I chose 3.2K because it’s the format I used most toward the end of my stint as a freelance cinematographer.)
I see no difference in distortion here. What I do notice is that the Signature Prime is warmer than the Master Prime and reproduces the background with more vibrancy and contrast, even though it’s softer. The Signature Prime shows a bit more vignetting than the Master Prime, but that’s because its aperture is wide open while the Master Prime’s is not.
I see no differences between these images beyond each lens’s unique characteristics and large format’s reduced depth of field.
If you’d like to know the (very simple) math used to calculate the crop factor between 3.2K and 4.5K, scroll to the end of the article. It’s really useful stuff but might not be interesting to everyone. The key takeaway is that it’s dead easy to determine what lens to use for the equivalent angle of view between 3.2K on a Mini and 4.5K on an Alexa LF (or between 2.8K HD and UHD): just move up to the next standard focal length.
25mm (Mini) > 35mm (LF or Mini LF)
35mm (Mini) > 50mm (LF or Mini LF)
50mm (Mini) > 75mm (LF or Mini LF)
75mm (Mini) > 100mm (LF or Mini LF)
Each step reduces depth of field by roughly half.
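As a quick illustration (my own sketch, not any official ARRI tool), the bump-up rule can be expressed as a simple lookup over a standard prime set:

```python
# Hypothetical sketch: the "next standard focal length" rule, assuming a
# standard prime set of 18/25/35/50/75/100mm as described in the article.
NEXT_LONGER = {18: 25, 25: 35, 35: 50, 50: 75, 75: 100}

def lf_equivalent_prime(s35_focal_mm):
    """Standard prime on the LF camera matching the S35 angle of view."""
    return NEXT_LONGER[s35_focal_mm]

print(lf_equivalent_prime(25))  # 35
```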
DEPTH OF FIELD, PERSPECTIVE, AND DEPTH
The most obvious difference between large format and S35 is the reduction in depth of field. Large format may offer a clear advantage to those who like to shoot soft backgrounds.
In my own work, I’ve found that reduced depth of field in S35 becomes interesting at T2. Most of my freelance work was in commercials, and I found that T2.8 often felt a bit too sharp for modern tastes. In large format, though, that was often the equivalent of shooting T2 in S35.
Shooting wider than T2 can be… problematic. T1.4 on a fast lens is really f/1.2 or f/1.3, and at that point, spherical aberration is difficult to correct. This can cause apparent shifts in focus at wide apertures. This is also known as “witness mark drift.”
I saw an example of this early in my career when a lens manufacturer released a series of lenses that were frighteningly honest about this anomaly: they sported a yellow witness mark for apertures smaller than T2.8, and a blue witness mark for apertures wider than T2.8. (Shooting at T2.4 was a bit nerve-wracking.) They knew that, at apertures wider than T2.8, the lens looked sharper when focused at a slightly different point.
Many cinematographers who habitually work wide open have their lenses shimmed to compensate for this. In the large format world, there’s no need to work at T1.4 to capture the same look.
Or, one can work at that aperture and capture backgrounds that are far softer than what one can create in S35. I’ve seen images, captured with our own ARRI Signature Primes at T1.8, that look as if there’s a water-covered window between the subject and the background. It’s a wonderfully dreamy look.
How much of an effect does reduced depth of field have on the “large format look,” though? I worked with director of photography Matt Siegel to find out.
I imagined that we were shooting in a small room, with no space to maneuver. We then crafted and captured a series of shots from the same camera position, and with the same angle of view and T-stop, but in both 3.2K and 4.5K Open Gate modes on an ARRI Alexa LF.
Before you watch the video below, I’m going to tell you what I want you to look for:
The shapes of background objects grow as they drift out of focus. This may make soft backgrounds feel closer, while also creating a greater sense of separation between foreground and background due to their increased softness.
Wide shots may feel deeper because they show increased perspective without hard edges that compete with the foreground. They feel “flatter” while retaining a sense of depth as if the image is composed of discrete layers. (I equate this to the sense of depth I’ve felt when viewing wide shots in 70mm films.) It’s almost as if the foreground is a cut-out, placed against a soft background.
As I illustrated in this article, anamorphic and spherical lenses of the same focal length will show the same depth of field for the same active sensor size, but the spherical lens will render backgrounds as softer.
For example, when shooting anamorphic with an ARRI Alexa LF, we use UHD mode. We can also shoot in this mode with a spherical lens. The image heights on the sensor are the same. The spherical lens has the same focal length in both axes, but the anamorphic lens’s horizontal focal length is half its vertical focal length. Objects will soften normally in the vertical dimension, but they’ll stay sharper longer in the horizontal dimension.
Under these conditions, a spherical lens will produce softer backgrounds because focus drops off equally in both dimensions.
There’s an example of this at the end of the video. I digitally zoomed in to the pictures in the background to show that the spherical lens softens the horizontal and vertical frame edges equally, but the anamorphic lens doesn’t soften the vertical edges nearly as much. I wasn’t able to perfectly match focal lengths due to lens availability so I had to scale the 75mm spherical lens up a bit, but I think you’ll see what I’m talking about.
The large format background feels larger in relation to the foreground when compared to the 3.2K image. There’s also a noticeable exposure drop in the background highlights as they bleed into the surrounding dark areas. This might be considered a bonus—particularly in HDR—as highlights remain visible but are less distracting.
Part of the sense of depth I feel in the large format image has to do with the increased size of the out-of-focus pictures on the background wall. Another aspect is that hard edges in the background are so much softer in large format that my brain can’t “grab on” to them. It feels as if the woman in the foreground is on her own “depth layer” that’s distinctly separate from the other layers.
The woman feels to me as if she is part of a foreground depth layer that contains the foreground bookcase. The man feels as if he’s on another depth layer that contains the camera-left bookcase. And the background wall feels as if it’s a third depth layer. I still feel this somewhat in the S35 image, but it’s undeniably obvious to me in the LF image.
I quite like what’s happening in the large format frame. The perspective of the room feels “flatter” somehow, but I perceive the layers of depth more strongly.
What’s interesting to me is that I feel less of that “layered depth” sensation in the 2.39:1 images. I’ve noticed this before when shooting location stills with a 1.5:1 digital camera vs. shooting moving images in 16:9: taller images often feel as if they have a greater sense of depth. As best I can tell, this is strictly a function of psychology, and is heavily dependent on the size of the image. Here, anamorphic images don’t feel as rich in depth as do the 16:9 images above. If they were projected on a huge screen, though, I suspect I’d feel plenty of depth.
There are two additional things I’ll touch on before I wrap up. The first is that DP Matt Siegel and I both feel that large format 2.39:1 is more immersive. I think this is due to the way spherical lenses drift out of focus, although he was blown away when I showed him these two videos (originally seen in this article):
In both cases, he said he felt that the image “wrapped around” him and created a sense of immersion. As mentioned above, wide images seem to have less depth when viewed on a small screen, but Matt saw these projected on a large screen in the theater at our new Burbank office. It’s safe to say that he was… “verbally expressive” when these videos came up.
My humble guess is that the lack of background distortion in the spherical lens’s image created a strong sense of immersion that was tempered by the anamorphic lens’s distorted and slightly sharper bokeh.
Lastly, there’s a strong case to be made for large format’s increased sense of depth when shooting for very small screens.
Softer backgrounds create a greater sense of depth on small screens. I’d love to be a film snob, but I watch an awful lot of television on my iPad during long plane flights, and television cinematography currently rivals anything I see in feature films. While an iPad is not an ideal viewing platform for cinematography, I can’t turn that part of my brain off just because I’m watching content on a small screen. And neither, I suspect, can you.
Art Adams freelanced in the film industry for 31 years and was a cinematographer for 26 of them. He is now Cinema Lens Specialist at ARRI, Inc. He’ll be at NAB 2019 in the ARRI booth, so stop by and say hi.
AND NOW, SOME SIMPLE MATH
To match the angles of view between 3.2K and 4.5K, I had to use a longer lens in 4.5K. The question was, “How much longer?” Upon consulting PCam, I found that the angle of view of a 35mm lens in 4.5K matched a 25mm lens in 3.2K. This is a “crop factor” of roughly 1.4x. This has an interesting property:
25mm * 1.4 “crop factor” = 35mm
I took this a bit further to see if it applied across a standard range of lenses. It does:
18mm * 1.4 = 25.2 (or ~25mm)
25mm * 1.4 = 35mm
35mm * 1.4 = 49mm (or ~50mm)
50mm * 1.4 = 70mm
70mm * 1.4 = 98mm (or ~100mm)
If we consider a standard set of primes to be 18, 25, 35, 50, 75, and 100, then converting from 3.2K to 4.5K simply means bumping up to the next longer focal length in the set. This also holds roughly true when converting from ARRI HD to UHD (1.33 vs. 1.4, which is a bit different but close enough to make focal length conversions easy).
This also works for slightly different focal lengths such as 16mm and 24mm, although it doesn’t work quite as well for less common (but really useful) focal lengths such as 40mm and 65mm.
One can easily determine the “crop factor” for any two sensors through some basic division. Divide the horizontal pixel count of the large sensor camera by the horizontal pixel count of the small sensor camera, and that’s your multiplier. If the cameras are configured to capture images with the same aspect ratio, multiply the smaller camera’s focal length by the multiplier to find a focal length that will give the same angle of view on the larger camera.
ARRI Mini, 3.2K mode: 3200px wide
ARRI Alexa LF, 4.5K open gate: 4448px wide
Multiplier = 4448px / 3200px = 1.39 (or round off to 1.4)
If I have a 25mm lens on an ARRI Mini in 3.2K, I’ll need a 35mm lens on an ARRI Mini LF to capture the same angle of view (25mm * 1.4 = 35mm).
It’s important to note that this doesn’t work across images with different aspect ratios, as you can only calculate angle-of-view matches for horizontal or vertical angles. Using a diagonal to match, say, 16:9 to 2.39:1, won’t yield meaningful results.
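The division above is easy enough to script. Here’s a minimal sketch using the pixel counts quoted above (a rough illustration of the arithmetic, nothing more):

```python
# Crop-factor arithmetic from the article: divide horizontal pixel counts,
# then scale the smaller camera's focal length by the result.
LF_OPEN_GATE_PX = 4448   # ARRI Alexa LF, 4.5K open gate
MINI_3_2K_PX = 3200      # ARRI Mini, 3.2K mode

multiplier = LF_OPEN_GATE_PX / MINI_3_2K_PX    # 1.39, or ~1.4

def matching_focal(s35_focal_mm):
    """Focal length on the LF camera for the same horizontal angle of view
    (valid only when both cameras capture the same aspect ratio)."""
    return s35_focal_mm * multiplier

print(round(multiplier, 2))        # 1.39
print(round(matching_focal(25)))   # 35: a 25mm on the Mini matches a 35mm on the LF
```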