It’s easy to get people nodding in agreement about resolution in 2021. Point out that the drive for more Ks is being pushed more by TV manufacturers than cinematographers, note that most of the cinema ever seen by anyone had a resolution of about 1.5K, and drain your glass while the table applauds.
And of course it’s true, as far as it goes. The interpositive-internegative-print process of 35mm distribution often delivered well under the resolution of an HD television broadcast, even with fairly slow, high-resolution original camera negative. Recent 4K releases, ideally scanned directly from that original negative, may actually be the sharpest, most revealing editions of many films ever seen by anyone – including the people who made them. But the 1.5K was fine, and the easy access to lots of resolution we’ve recently enjoyed has, objectively, not had much to do with the demands of moviemaking as it has long existed.
Resolution: less is not more.
In fact, more is more. There are a few perfectly valid applications for huge amounts of resolution and frame rate, although they’re somewhat outside the context of traditional cinema. VR headsets, for instance, ideally need to cover the full range of a human eyeball swivelling in its socket. If the desire of cellphone manufacturers to one-up each other is constant (and it’s probably exponential), we’ll have 16K LCD panels in cellphones by the middle of next Thursday, and breathless TV announcements of that fact. The side effect of that ought to be that seriously good VR becomes possible, assuming we can generate the graphics, although it’s a matter of opinion as to whether that’s really cinema.
Even if we’re shooting for conventional film and TV, lots of common reasons for high resolution don’t actually have much to do with the resolution of the final image. That incredibly precise operating that Fincher seems to like is not an artefact of supernaturally skilled camerawork; it’s the result of shooting much wider than the eventual frame will be, and cropping suitably in post with the sort of pixel-perfect frame control that only a computer feature tracker can give you – sorry, SOC members, but nobody’s that good.
Cameras can’t count, but that’s fine
It’s not even clear how much real-world, effective resolution actually comes out of most cameras. The detail level is only vaguely related to the headline pixel count, given the need for antialiasing, and non-Bayer sensor layouts complicate things further. For instance, we really should use the word “photosite” to describe the individual light sensors, because a “pixel” (in a colour image) needs red, green and blue components, and any one photosite can only see one of them. On a Bayer sensor, the smallest repeating unit that we might call a pixel is two photosites square, including two greens, a red and a blue. On Blackmagic’s 12K camera, though, the smallest repeating unit is six photosites square, including six reds, six greens and six blues, with the balance unfiltered. As such, there are only 2048 of those repeating units across the sensor. Similar things apply to Fujifilm’s X-Trans sensor layout.
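The repeating-unit arithmetic above can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation, not anything from a manufacturer SDK; the photosite counts are assumptions based on the published 12288-wide sensor figure.

```python
# Back-of-the-envelope sketch: how many complete colour-filter repeating
# units fit across a sensor of a given width in photosites.
# All figures here are illustrative assumptions, not official specs.

def repeating_units_across(photosites_wide: int, unit_size: int) -> int:
    """Count the complete repeating filter units across the sensor width."""
    return photosites_wide // unit_size

# A notional 12K Bayer sensor: the RGGB unit repeats every 2 photosites.
print(repeating_units_across(12288, 2))  # 6144

# Blackmagic's 12K RGBW layout: the unit repeats every 6 photosites.
print(repeating_units_across(12288, 6))  # 2048
```

The point, as the article notes, is that 2048 repeating units does not simply mean a 2K camera; it just means the naive photosites-equals-pixels equation doesn’t hold.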
This does not mean that a notional 12K X-Trans or Blackmagic sensor is a 2K camera, because the filters on the photosites are quite pale in colour, which means every photosite sees at least a proportion of most colours. Finishing pictures from this sort of sensor involves a lot of complicated processing, performing tasks which are hard to sum up. Certainly, though, sharpness depends on where the image falls on the sensor and on the colour of the subject, and we haven’t even considered diffraction limits or lens performance.
The URSA 12K, considered as a pure 12K camera, is diffraction limited above about f/4, and even 4K cameras hit diffraction limits within the working aperture range of many real lenses. Streaming distributors might insist that material is produced at a certain resolution, but it’s not nearly so common to ban narrow apertures – let alone, say, 1970s anamorphics shot wide open – either of which will capably ensure that a 12K, Super-35mm-sensor camera returns less than 12K’s worth of actual picture detail.
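That f/4 figure is easy to sanity-check with textbook optics. The sketch below assumes a Super 35 sensor roughly 27mm wide, 12288 photosites across it, green light at 550nm, and the common rule of thumb that detail is lost once the Airy disc diameter exceeds about two photosite pitches; none of these numbers come from Blackmagic.

```python
# Rough diffraction sanity check, under stated assumptions:
# ~27 mm Super 35 sensor width, 12288 photosites across, 550 nm light.

WAVELENGTH_M = 550e-9       # green light (assumption)
SENSOR_WIDTH_M = 27e-3      # approximate Super 35 width (assumption)
PHOTOSITES_WIDE = 12288

pitch = SENSOR_WIDTH_M / PHOTOSITES_WIDE  # ~2.2 micrometre photosite pitch

def airy_diameter(f_number: float) -> float:
    """Diameter of the Airy disc (first dark ring), in metres: 2.44 * lambda * N."""
    return 2.44 * WAVELENGTH_M * f_number

for f in (2.8, 4.0, 5.6, 8.0):
    limited = airy_diameter(f) > 2 * pitch
    print(f"f/{f}: Airy disc {airy_diameter(f) * 1e6:.1f} um - "
          f"{'exceeds' if limited else 'fits within'} two photosites")
```

Under these assumptions the Airy disc already spans more than two photosites at f/4, which is the sense in which the camera starts returning less than 12K’s worth of detail.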
Which is fine, or at least it’d better be, considering how popular worth-as-much-as-a-house 1970s anamorphics are. Perhaps the irony is that the better cameras get, the less they have to do with the way the picture looks. The camera is characterising the lens, the filtration, the lighting, and what’s in front of it, and the only reason that’s a problem is the entirely subjective fact that there’s over a century of cinematographic history out there, dictating how things should look.
Perhaps the only meaningful assessor is the opinions of the cinematographer, which is sort of where we always wanted to end up, isn’t it?