
Is the film industry really ready for 12K? What it takes to create impressive sensors for cinema cameras


Recent insights about the RED Komodo 6K S35 have gotten lots of cinematographers excited, but the new URSA Mini Pro 12K Cinema Camera has taken that excitement to the next level. The future arrives faster than ever, but was it really that long ago that we debated the merits of 4K?

That arrival and excitement will have to wait until more people can get their hands on these cameras, but in the awkward gap between a product’s announcement and its availability, there’s often a lot of pressure to speculate about how useful an unknown thing will be. In the long term, uninformed guesswork has a tendency to end up in the same file as claims about half a megabyte of memory being enough for anyone. So, instead, let’s talk to someone who really knows what he’s talking about.

Dave Gilblom is CEO at Maxwell-Hiqe Corporation, and has been involved with image sensing devices for over fifty years. That’s long enough to span TIVICON silicon target image tubes at Texas Instruments, image tube cameras and CCD cameras, and amorphous silicon flat panels and CMOS sensors for medical, scientific, industrial, defence and aerospace applications.

He is, to put it mildly, well-placed to comment.

The creation of a competitive cinema camera sensor, Gilblom says, could happen at any one of a number of places, and might take “eighteen months, something like that. It’d cost a few million bucks probably. There are several people who could do it. A couple of different Teledyne branches; the place most people go is Forza. Another company that could have done it is GPixel, a Chinese company.” These are design houses; manufacturing is contracted out to a semiconductor fabrication plant. “TSMC has a whole section that does image sensors. XFab does.”

But finding somewhere capable is not the problem; finding somewhere willing is. The film industry is a tiny market compared to, say, cellphones, as Gilblom says. “The question is who would be interested. Are the volumes big enough to support production runs, and how often?” The large size of sensors for cinema has both benefits and drawbacks. “For a line scan device you might get five thousand of them on each wafer. If you want a million of them it only takes a couple hundred wafers. A typical large fab takes 20-30,000 wafers a month. The bigger the device is, the easier you have it.”
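The arithmetic Gilblom sketches here is simple enough to check. A rough back-of-envelope version (the device and fab numbers are the ones quoted above, used purely for illustration):

```python
# Back-of-envelope die-count arithmetic using the figures Gilblom quotes
# (illustrative numbers only, not any real sensor or fab).

def wafers_needed(units_required, dies_per_wafer):
    """Whole wafers needed to yield the requested number of dies."""
    return -(-units_required // dies_per_wafer)  # ceiling division

dies_per_wafer = 5_000        # small line-scan device, per the quote
units_required = 1_000_000
wafers = wafers_needed(units_required, dies_per_wafer)
print(wafers)                 # 200 wafers

fab_monthly_capacity = 25_000  # mid-point of the quoted 20-30,000 wafers/month
print(f"{wafers / fab_monthly_capacity:.1%} of one month's capacity")  # 0.8%
```

Two hundred wafers against a monthly throughput of tens of thousands is why a cinema-sensor order barely registers at a large fab, and why the volume question matters more than the capability one.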

“Even so,” Gilblom continues, “if you’re not ready to buy 100,000 of them a year, the likelihood is that after you make the first batch and get them right, when you go back again they’ll be screwed up. The particular machine operator who ran them is gone… non-uniformities in the silicon, oxygen striation, a small error in an implant energy that shows up as shading. There are too many variables.” Here, he says, “Large sensors make it worse… you’ve got the issue of dead pixels.” There’s one simple, if risky and expensive, solution. “When I worked for PerkinElmer, I looked at a company which was interested in buying custom chips and said ‘if you don’t have enough volume, the only option you have is buying your entire requirement in one run.’”

Ouch.

Manufacturability aside, it’s clear that ever-higher-resolution sensors accept performance penalties per photosite that are related to fill factor – the proportion of the sensor covered with light-sensitive components. While the pixels can shrink, associated circuitry may not, so the sensitive area becomes a smaller proportion of the whole. As to whether a high-resolution sensor scaled down might have better noise performance than a lower resolution sensor, Gilblom considers it too close to call. “Without all the gory details I can’t answer it. It depends on all kinds of decisions they made in the designs as to whether it’s better or worse in the end.”
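The fill-factor trade-off can be made concrete with a toy model. Assume (these numbers are invented for illustration, not drawn from any real sensor) that the per-photosite readout circuitry occupies a roughly fixed area while the pixel pitch shrinks:

```python
# Toy fill-factor model (assumed numbers, not any real sensor): a fixed
# circuit area per photosite eats a growing share of a shrinking pixel.

def fill_factor(pixel_pitch_um, circuit_area_um2):
    """Fraction of the pixel area left light-sensitive."""
    pixel_area = pixel_pitch_um ** 2
    return max(0.0, (pixel_area - circuit_area_um2) / pixel_area)

CIRCUIT_AREA = 4.0  # um^2 of transistors and wiring per photosite (assumed)

for pitch in (8.0, 5.0, 3.0):
    print(f"{pitch} um pitch -> fill factor {fill_factor(pitch, CIRCUIT_AREA):.0%}")
# 8 um -> 94%, 5 um -> 84%, 3 um -> 56%
```

The model ignores microlenses, back-side illumination and everything else a real design uses to claw sensitivity back, which is precisely why Gilblom declines to call the comparison without “all the gory details”.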

Colour filter arrays with more than just red, green and blue are not a new idea. As Gilblom puts it, with primary colour filters “you’re throwing away on the order of two thirds of the light. If you try to get it back with white pixels, to get the colour back you have to have [clever colour processing]. There are dozens of patterns. Sony invented the emerald pixel. This was in 2003. Their imaging scientists went through and did a very careful calculation to see what you could do to simultaneously improve noise performance and sensitivity. They spent a lot of money on it and the results were undetectably different.”

So, is there any route to real improvements? “Frankly,” Gilblom says, “you have a limited design space. Almost anything that’s practical has already been done. There are all sorts of impractical things proposed and some of those get done as Ph.D. thesis projects. Lots of papers but few production devices.” Big changes require fundamental advances, perhaps involving the photodiodes that detect light. “The photodiodes in imaging sensors, to put it in technical terms, are not very good. The reason is that you expose them to the processes that are necessary to make digital CMOS devices, which are very harsh. You isolate the photodiodes with [insulating] oxide layers, and when you put in the oxide layers it stresses the photodiode and raises the dark current [that is, noise]. Doesn’t matter if they’re front side or back side devices, there’s this stuff you have to do to make the CMOS circuitry. And if you try to make really good photodiodes, you’ll ruin the digital stuff.”

Here, it’s necessary to declare Gilblom’s interest in his company’s Lumiense design, in which photodiodes and circuitry are “built on separate wafers that are bonded together after they’re made. You can make scientific photodiodes that have a near one-hundred-per-cent fill factor. There’s nothing on that surface that isn’t photodiodes.” Lumiense also creates an opportunity to build global-shutter cameras with fewer of the traditional compromises to noise and sensitivity, largely caused by light falling on the circuitry that controls the shuttering. Stack that circuitry behind the light sensitive layer and the problem goes away. “The best device ever made had an extinction ratio [that is, shutter opacity] of 50,000:1,” Gilblom notes. “With Lumiense, you end up between one and ten million to one.” Lumiense is a few years from being ready for a film set, but might do well when it is.
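To put those extinction ratios in photographic terms, they convert directly to stops of shutter opacity (a straightforward base-2 logarithm of the figures quoted above):

```python
# Convert the quoted extinction ratios to stops of shutter opacity.
import math

def extinction_stops(ratio):
    """Extinction ratio expressed as photographic stops."""
    return math.log2(ratio)

for ratio in (50_000, 1_000_000, 10_000_000):
    print(f"{ratio:,}:1 -> {extinction_stops(ratio):.1f} stops")
# 50,000:1 is about 15.6 stops; 1-10 million:1 is roughly 19.9-23.3 stops
```

In other words, the claimed improvement is on the order of four to eight extra stops of leakage suppression during the “closed” part of the global-shutter cycle.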

None of this makes any particular recent development a bad idea. In general, though, Gilblom’s analysis is that making impressive sensors for cinema cameras is about “just making the decision to do it, being willing to spend the money and finding someone who’s interested in making them for you.”

