
Sometimes, new technologies are pursued because they make things work better. Sometimes, that happens because they allow people to write large numbers on promotional material. Occasionally, it’s both. Yes, this is one of those articles where we’re going to have to avoid naming names.
There is one idea which has been proposed a lot, and even implemented more than once, over a long period of time: sensors using something other than a Bayer layout. When people keep trying something without it catching on, we’d be forgiven for asking pointed questions about why it has repeatedly failed to stick when so many people think it’s a good idea. Or, alternatively, why so many people think it’s a good idea when it has repeatedly failed to stick.
Depending on which way around we put it, the implication is different.
So non-Bayer sensors have been tried a lot, mostly in pursuit of improved sensitivity. Sony added an emerald pixel so long ago, in 2003, that it was based on a CCD, and other designs have included other secondary colours. Secondary colours pass more light than primaries; magenta passes red and blue. Some sensors have even included unfiltered white pixels, which see everything.
In principle, this is all completely valid.
Shrug-worthy benefits

In practice, results have been mixed. Better sensitivity (and thus lower noise) has often been traded against a need to crank up the saturation (and thus increase noise). There are different kinds of noise which are more or less difficult to control and reduce, but in the long term, the overall performance benefits have mostly been shrug-worthy. To date, most of the industry has not significantly moved away from Bayer.
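To put a rough number on that trade, here’s a sketch in Python. The equal luma weights and the 2x boost are invented for illustration, not taken from any real camera; the point is simply that the matrix which restores saturation also amplifies noise.

```python
# A sketch of the saturation/noise trade with invented numbers: gentler
# (more overlapping) colour filters record desaturated colour, and the
# fix is a saturation matrix. Independent per-channel noise adds in
# quadrature across each matrix row, so the row norms give the noise gain.
import numpy as np

def saturation_matrix(s, luma=(1/3, 1/3, 1/3)):
    """Scale colour away from the luma axis by factor s."""
    return s * np.eye(3) + (1 - s) * np.tile(luma, (3, 1))

boost = saturation_matrix(2.0)        # double the saturation
print(np.linalg.norm(boost, axis=1))  # ~1.73x noise per channel, ~4.8dB
```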
Recently, we’ve seen non-Bayer sensors tried for some novel reasons, including the desire to build in an ability to produce outputs at different resolutions. The industry has put so much effort into chasing resolution that we often overlook the fact that reducing it (without just windowing the sensor) is fundamentally difficult. Most people are aware that just leaving pixels out creates nasty, jagged artefacts. The fix for that is simple enough: we just remove all the sharp edges first, which is a blur operation.
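For the curious, here’s a minimal sketch of that problem in Python with NumPy. The test pattern and the 4:1 reduction are invented: decimating fine stripes produces a confidently wrong result, while averaging first gets it right.

```python
import numpy as np

def decimate(image, factor):
    """Downscale by just leaving pixels out: keep every Nth sample."""
    return image[::factor, ::factor]

def average_then_decimate(image, factor):
    """Crude low-pass first: average each factor-by-factor block,
    which removes detail too fine for the smaller image to carry."""
    h = image.shape[0] - image.shape[0] % factor
    w = image.shape[1] - image.shape[1] % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Fine vertical stripes, four pixels per cycle - too fine to survive
# a 4:1 downscale:
stripes = np.tile([0.0, 0.0, 1.0, 1.0], 256)[np.newaxis, :].repeat(64, axis=0)
print(decimate(stripes, 4)[0, :6])               # all 0.0 - solid black, wrong
print(average_then_decimate(stripes, 4)[0, :6])  # all 0.5 - correct mid-grey
```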
Running a decent-quality blur on a moving image five figures wide is enough to bring even quite a large GPU out in a cold sweat. Try to do that at high frame rates and bit depths and the workload becomes overwhelming. Blur algorithms could fill an article of their own, though that article would be full of phrases like “monodimensional decomposability”, so let’s just work on the basis that doing it well is difficult.
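That jargon is, presumably, a grander way of saying the filter is separable: a 2D Gaussian splits into two cheap 1D passes, which is one of the standard cost-saving tricks. A rough sketch, using SciPy for the convolutions (frame size, sigma and radius are illustrative):

```python
import numpy as np
from scipy.ndimage import convolve1d

def separable_gaussian_blur(image, sigma, radius):
    """One horizontal and one vertical 1D pass: ~2k taps per pixel
    instead of the k*k a full 2D kernel would need."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x * x) / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    out = convolve1d(image, kernel, axis=1)  # horizontal pass
    return convolve1d(out, kernel, axis=0)   # vertical pass

frame = np.random.rand(2160, 4096)  # one 4K-ish monochrome frame
soft = separable_gaussian_blur(frame, sigma=2.0, radius=6)
# 13 taps twice is 26 multiplies per pixel; the equivalent 13x13 2D
# kernel needs 169. At 4K and high frame rates, that gap is billions
# of operations per second - hence the cold sweat.
```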
So, on the basis that people are probably still going to want 2K output from a billion-K camera, if only for viewfinding, necessity has provoked manufacturers to figure out ways to make that possible. The real problem here is not actually the sensor or the way the pixels are picked; it’s that the low-pass filter on the sensor is not strong enough for the effective photosite size of the reduced-resolution output.
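The mismatch is easy to see in (invented) numbers: an 8K-wide sensor whose optical low-pass filter was specified for full resolution, read out at 2K.

```python
sensor_width = 8192   # photosites across
output_width = 2048   # output pixels across

sensor_nyquist = sensor_width / 2  # finest detail the sensor records
output_nyquist = output_width / 2  # finest detail the output can carry

# Detail between the two limits sails straight through the filter (it
# was sized for the full sensor) but can't be represented at 2K, so
# unless it's averaged away it aliases:
print(f"aliasing band: {output_nyquist:.0f} to {sensor_nyquist:.0f} "
      "cycles per picture width")
```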
Low-pass filters are tremendously convenient because they effectively apply a blur that takes no computer power, no electricity and no cooling. The downside is that the radius of that blur is fixed at manufacture, unless people start designing filter mounts which can move the low pass relative to the sensor (piezo-electric buffers, maybe?).
Champagne resolution

Either way, clean images which have less resolution than the sensor itself inevitably involve a process of averaging, whether that’s done by a glass filter or by mathematics. Those recent designs we mentioned aim to build that averaging into the sensor itself, more or less by switching individual photosites into circuit with others. That tends to optimise for power consumption at the cost of flexibility, so the sensor might only be capable of producing outputs at (say) two-thirds or one-third of its native resolution, not half or three-quarters.
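Here’s a software picture of what switching photosites into circuit with others achieves; the block size and sensor dimensions are invented, and real sensors hard-wire the grouping in silicon, which is why the menu of output resolutions is so short.

```python
import numpy as np

def bin_blocks(raw, block):
    """Combine block-by-block groups of photosites before readout."""
    h = raw.shape[0] - raw.shape[0] % block
    w = raw.shape[1] - raw.shape[1] % block
    grouped = raw[:h, :w].reshape(h // block, block, w // block, block)
    # Summing mimics combining charge; it also pools the collected
    # light, which is where the sensitivity benefit comes from.
    return grouped.sum(axis=(1, 3))

native = np.random.rand(4320, 8192)  # a hypothetical sensor, ~8K wide
print(bin_blocks(native, 3).shape)   # one-third native: (1440, 2730)
```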
In the end, as available resolution starts to exceed 10K, there is an argument that all of this is starting to matter less and less. Cameras of circa-4K resolution have been released without any optical low pass filters at all, and nobody seems to have deemed them unusable. What’s more, issues of lens performance, motion blur (even from microscopic, barely-perceptible vibration) and diffraction limitation become a big influence at very high resolutions.
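The diffraction point survives a back-of-envelope check. Assuming green light and the standard first-null Airy disk formula (the sensor width and pixel count below are invented), stopping down very far smears detail across several photosites:

```python
WAVELENGTH_UM = 0.55  # green light, in microns

def airy_disk_um(f_number):
    """Diameter of the Airy disk to its first null: 2.44 * lambda * N."""
    return 2.44 * WAVELENGTH_UM * f_number

pitch_um = 36_000 / 12_288  # a 36mm-wide sensor at 12K across: ~2.9um pitch
for n in (2.8, 5.6, 8, 11):
    d = airy_disk_um(n)
    print(f"f/{n}: Airy disk {d:.1f}um, about {d / pitch_um:.1f} photosites wide")
```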
Excessive resolution, let’s be clear, is a champagne problem experienced by a post-maturity market. Still, it’s generating some interesting new ideas which take sensor design in a somewhat new direction – not so much making more of the sensor, but controlled ways of making less of it.
