
Never twice the same colour. Still.

[Image: closeup of an RGB waveform display in the Resolve grading software. Caption: “Yeah, but 256 of what?”]

Choice is a wonderful thing, and as such we should all take great pleasure in the fact that modern digital cinematography tools give us so many entertaining ways to make things look wrong. In modern parlance, that can mean anywhere from slightly wrong, to yellow-skinned, purple-eyed, cross-processed-reversal wrong in the way Tony Scott used to love.

Whether or not anyone has ever deliberately selected the wrong colourspace for creative reasons, the fact that it’s now so easy to do so by mistake is an unwelcome consequence of giving the nerdiest members of the industry (guilty as charged) free access to a twenty-year, all-you-can-eat buffet of electronic camera development. It was not always thus: for the entire history of colour television, the shades of red, green and blue we use to describe colour images were the subject of painstaking standardisation processes, expected to be carefully adhered to by manufacturers – all manufacturers – and intended to remain valid for the lifetime of an entire ecosystem of both professional production equipment and domestic TVs.

Luminance standards were less chaotic for a while, though largely because luminance wasn’t very well standardised in the first place. We’ve since made up for it with at least three major kinds of HDR.

A history of colourspaces

TV colour and brightness standards were recognised as being inappropriate for cinema work long before electronic acquisition was anywhere near the mainstream. We could go into the specifics of Kodak’s Cineon system, but there’s enough discussion on the web of what logarithmic encoding is and how colourspaces actually work. What’s important today is that Kodak was not behaving irrationally; trying to build a digital film manipulation system using early-90s video technology would not have worked very well and there was no prominent system for cinema colour handling that could reasonably have been adopted off the shelf.
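
For readers who do want the flavour of it, here’s a minimal Python sketch of the idea, with the caveat that the constants are illustrative assumptions rather than Kodak’s actual Cineon numbers: a log encode spends code values on ratios of exposure rather than absolute levels, which is how a film-like dynamic range squeezes into ten bits.

```python
import math

# A generic 10-bit log encode/decode. Illustrative only: the constants are
# assumptions, not Cineon's or any manufacturer's actual published curve.

BLACK_CV = 95             # code value assumed for scene black
WHITE_CV = 685            # code value assumed for reference white
STOPS_BELOW_WHITE = 6.0   # assumed usable range encoded below reference white

def encode_log(linear):
    """Map scene-linear exposure (1.0 = reference white) to a 10-bit code value."""
    if linear <= 0.0:
        return BLACK_CV
    stops = math.log2(linear)   # exposure relative to reference white, in stops
    cv = WHITE_CV + stops * (WHITE_CV - BLACK_CV) / STOPS_BELOW_WHITE
    return max(BLACK_CV, min(1023, round(cv)))

def decode_log(cv):
    """Invert the encode: 10-bit code value back to scene-linear exposure."""
    stops = (cv - WHITE_CV) * STOPS_BELOW_WHITE / (WHITE_CV - BLACK_CV)
    return 2.0 ** stops

if __name__ == "__main__":
    for lin in (0.01, 0.18, 0.5, 1.0, 2.0):
        cv = encode_log(lin)
        print(f"linear {lin:5.2f} -> code {cv:4d} -> decoded {decode_log(cv):.3f}")
```

Middle grey, reference white and a stop over white land at usefully spaced code values, while anything darker than the assumed floor simply clips to black; the real Cineon curve is more subtle than this, but the ratio-per-code-value idea is the same.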

Even in the early 2000s, when Thomson repurposed one of its studio camera heads to build the Viper FilmStream system, digital acquisition was still so new that it wasn’t necessarily unreasonable to develop a new colourspace and brightness encoding scheme for it (and by “develop” we might mean “go with what it does by default”). The warning sign might well have been that Viper’s encoding was idiosyncratic, looking distinctly greenish (one might almost say Matrixesque) when displayed uncorrected on more conventional displays. That might have been a genuine technical convenience relating to the Viper camera head or it might not, but either way, it wasn’t likely to be adopted as a standard for anyone else’s products.

The bigger warning sign was probably what Sony called HyperGamma on its TV cameras, particularly the famous F900, which lent the camera, if not a true log encoding, at least a somewhat wider dynamic range more suitable for cinema-style production. Almost immediately, there were several variations on HyperGamma, and no way for a colourist to determine which had been used other than by receiving notes from the camera crew. Better cameras would quickly follow, and complexity would balloon.
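
To see why that mattered downstream, consider two hypothetical wide-dynamic-range curves of the sort a camera might offer (these are made-up illustrations, not Sony’s actual HyperGamma maths). The same recorded signal level decodes to a different scene exposure depending on which curve the colourist assumes, which is precisely the guesswork those camera notes were meant to remove.

```python
# Two hypothetical wide-dynamic-range transfer curves. Neither is a real
# HyperGamma variant; the shapes and constants are invented for illustration.
# Both map scene-linear exposure (1.0 = reference white) to a 0..1 signal.

def variant_a(linear):
    # assumed curve: 0.45 power law with a mild highlight roll-off
    return (linear / (1.0 + 0.25 * linear)) ** 0.45

def variant_b(linear):
    # assumed curve: same shape, but a stronger highlight roll-off
    return (linear / (1.0 + 0.60 * linear)) ** 0.45

def invert(curve, signal, lo=0.0, hi=16.0, steps=60):
    """Numerically invert a monotonic curve by bisection."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if curve(mid) < signal:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

if __name__ == "__main__":
    signal = 0.8  # the same recorded level, 80% of full scale
    print("decoded assuming variant A:", round(invert(variant_a, signal), 3))
    print("decoded assuming variant B:", round(invert(variant_b, signal), 3))
```

For this one sample the two answers differ by roughly four tenths of a stop, which is more than enough to send a grade in the wrong direction.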

Hello. I’m a roll of 35mm negative. I have a colourspace. Take it or leave it. Or, you know, poke it with processing, but in general…

Benevolent imposition

At this point, it should have been obvious that a standardisation process was the only way to maintain the sort of simplicity that 35mm production had benevolently imposed for decades. Yes, different labs had different ideas about different printer lights and different stocks had different behaviours, but the fundamentals were fixed: colourspace-wise, if it fits in the projector, it looks like what it looks like, and a cinematographer is free to use any practical combination of lighting, filtration, stock, processing and grading to reach any reasonable goal.

Fast-forward to 2022, and we’re all painfully aware that most cameras (and monitors, and pieces of post production software, and websites) have a Marché Mövenpick’s worth of manufacturer-specific ways to encode colour and brightness. These approaches are often promoted with significant fanfare on the basis that they’ll make the pictures look more… however it is we want them to look, with the implication that the specific capabilities and performance of the camera and its sensor are somehow better served by a particular set of colour primaries and brightness encoding.
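
It’s worth being concrete about what “a set of colour primaries” actually is: three chromaticity coordinates plus a white point, which together pin down nothing more mysterious than a 3×3 matrix into CIE XYZ. The sketch below builds that matrix for the published Rec.709/sRGB primaries and D65 white using the standard textbook construction (it is a generic illustration, not any manufacturer’s pipeline); swap in different chromaticities and the same RGB triple denotes a different physical colour, which is the whole headache in miniature.

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build the RGB -> CIE XYZ matrix from the chromaticities of the three
    primaries and the white point: the usual construction, scaling each
    primary so that RGB = (1, 1, 1) maps to the white point at Y = 1."""
    cols = []
    for x, y in primaries:
        cols.append([x / y, 1.0, (1.0 - x - y) / y])   # XYZ of primary at Y = 1
    m = np.array(cols).T
    xw, yw = white
    w_xyz = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    scale = np.linalg.solve(m, w_xyz)                   # per-primary scaling
    return m * scale

# Rec.709 / sRGB primaries and D65 white point (standard published values).
REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
D65 = (0.3127, 0.3290)

if __name__ == "__main__":
    M = rgb_to_xyz_matrix(REC709, D65)
    print(np.round(M, 4))
    # The same (1, 0, 0) triple means a different physical colour under
    # different primaries; this matrix is what makes that explicit.
    print("XYZ of Rec.709 red:", np.round(M @ np.array([1.0, 0.0, 0.0]), 4))
```

Printed to four decimal places, the result is the familiar sRGB-to-XYZ matrix, with the middle row giving the Rec.709 luma weights of roughly 0.2126, 0.7152 and 0.0722.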

Solving the problem

The truth of this is necessarily a matter of some conjecture, since we don’t have access to enough low-level engineering data to compare two sequentially-numbered versions of a company’s log encoding independently. To risk a small speculation, though, the improvements offered by ever finer differentiation of encoding systems are likely to be slight. Plausibly, they’re so slight that significantly more benefit could be found in simply standardising things.

At this point, with commercial entities making much of their proprietary approaches to colour, it seems unlikely that any sort of standardisation can be made to happen; witness the patchy adoption of ACES. At worst, that sort of attempt to create a unifying approach can simply end up adding to the landscape of mutually-incompatible options. Whether standardisation will happen naturally, as it eventually did with the huge range of film gauges in early cinema, remains to be seen; until it does, the industry’s mild but persistent colour-related headache isn’t going anywhere.
