
Single Chip Camera Evaluation

At the request of Zacuto in Chicago, Bob Primes, ASC, led a group of over 100 intrepid volunteers and ASC colleagues in the creation of The Single Chip Camera Evaluation: basically, a well-planned and fairly comprehensive shootout between the Arri Alexa, Sony F35, RED MX, Phantom Flex, WeissCam, Sony F3, Panny AF100, Canon 5D, 1D, 7D, Nikon D7000 and Kodak 5213 film.

The following write-up is my assessment of the presentation of the event. As the organizers point out, many of the results are subjective and there was no clear “winner,” though several of the panelists expressed a pretty unrestrained love for the clarity, beauty and simplicity of the Alexa, closely followed by their adoration of the F35.

To start the event – held at the end of the day Tuesday at NAB in a packed, standing-room-only auditorium of appreciative attendees – they showed the video of the tests themselves. The organizers hope to soon release a Blu-ray of the video. For more on the availability and more specific results of the tests, please visit thescce.org.

Here are my quick notes on the video itself:

They broke the testing down into several specific categories, including: sharpness, low-light capability, exposure latitude, highlight detail, shadow detail, color reproduction, skin reproduction and shutter artifacts.

In the sharpness test, they were looking at fine detail and contrast, using a Siemens star chart. The RED ONE and film were tops, while the 1D, 5D, 7D and Nikon were among the worst. The results were displayed as a graph.
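The presentation didn’t spell out exactly how the star chart was scored, so take this as an assumption: one common way to quantify sharpness from a Siemens star is to measure contrast around rings of decreasing radius and watch where it collapses. A minimal sketch (the image crop and coordinates are hypothetical):

```python
import numpy as np

def ring_samples(img, cx, cy, radius, n=720):
    """Sample a grayscale image along a circle centered on the Siemens star."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def michelson_contrast(samples):
    """Michelson contrast of the ring: close to 1.0 while the star's spokes are
    fully resolved, falling toward 0 as the radius shrinks past the camera's
    resolution limit."""
    i_max, i_min = float(samples.max()), float(samples.min())
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical usage: 'chart' would be a grayscale frame grab of the star chart.
# contrast = michelson_contrast(ring_samples(chart, cx=960, cy=540, radius=100))
```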

As for the comparative sharpness test, the image was blown up about 230% to determine how sharp it was. Obviously, cameras with larger resolution – like the 4K RED – fared a lot better when blown up. The results, however, were subjective and open to interpretation, and someone may disagree with my impressions. Many high-level DPs were in the audience, and I walked out of the event with Art Adams and Adam Wilt; hopefully they will post their impressions of the test as well. With that caveat, I thought the F35, Alexa, RED, film and Phantom were good, while the Panny AF100 and the slew of DSLRs were the worst.

Throughout the test, there were several times that the audience broke into knowing laughter or at least audible recognition of the fault of a specific camera. The softness of the 5D in this test was the first place that this communal realization happened.

Let me just point out that of ALL the cameras tested, the only one I personally OWN is the 5D, so please don’t feel I’m slamming the DSLRs due to some elitist bias. I WANT the 5D to come out ahead! But I’m realistic and trying to be objective.

In the low-light sensitivity test, they wanted to see which cameras could resolve an image in the lowest possible light – apart from the base noise floor. This was about the only category in the entire slate of tests in which the Canon 5D objectively came out on top.

In the real-world shooting test for low light, it was hard to see a difference. In these subjective, comparative tests, the DPs determined one “best” image to use as a reference to cut back and forth against the other cameras. They explained that choosing that reference was subjective but, in some sense, was representative of the best camera performance in that particular test. In the case of the “real world” low-light test, they chose the Alexa as the reference. The 5D was at the bottom, subjectively, though there wasn’t a lot of time to judge nuances.

The dynamic range test was an objective test based on how many stops of range could be seen. Film and Alexa were at the top. Actually, Alexa had the best latitude, with film at something like 14 stops and the Alexa beating it with something like 14.2 stops. The RED was also very good. The F3 and DSLRs also rated well.
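For context (the test’s exact measurement method wasn’t detailed in the presentation), a stop is a doubling of light, so a stops figure is just a log2 ratio between the clipping point and the noise floor. A minimal sketch with illustrative numbers:

```python
import math

def stops_of_dynamic_range(clip_level, noise_floor):
    """Dynamic range in stops: how many doublings of light fit between the
    noise floor and the clipping point of the signal."""
    return math.log2(clip_level / noise_floor)

# Illustrative numbers only: a 14-stop camera spans roughly a 16,000:1 ratio.
print(2 ** 14)                              # 16384
print(stops_of_dynamic_range(16384, 1.0))   # 14.0
```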

In the highlight detail real-world test, they shot a scene with a bright window in the background. They exposed to blow out the window in camera, then used color correction to bring back whatever they could using a Power Window. In this test, which was subjective, the Phantom, Weisscam, F3, AF100 and all of the DSLRs were bad. Again, a knowing group chuckle went up at the 5D. The organizers wanted to point out that the Weisscam and one of the other RAW cameras had a problem where the team was unable to use the RAW camera data and had to use the HD-SDI output instead. Obviously it’s a BIG disadvantage to test a RAW camera with only the HD-SDI image, so those results should be viewed with a kind of asterisk – like Barry Bonds’ batting record.

In the low-light shadow detail test, the F3 was excellent. The RED MX was used as the reference image. Film was actually bad: as Robert Primes pointed out, film has phenomenal highlight detail but isn’t that good in the shadows, and this test proved the point. The F3 looked good, as did the AF100. The DSLRs also all looked good.

The Ringer chart – by DSC, I believe – showed the color sharpness and moiré issues. This was subjective, but the F35 was used as the reference and looked nearly perfect. The Weisscam was bad, the F3 was nice, the Panny was nice, the 5D was bad, the 1D was worse, the 7D was bad and the Nikon was decent.


I had a very difficult time trying to judge the color rendition test in the short time each image was on screen, but the F35 was the reference.

They also judged color by the all-important reproduction of skin tones. The rendering of skin tones is very subjective. Primes pointed out, “Ultimately there is no right or wrong, simply what you like.” So I’ll leave my analysis of the skin tones out.

Finally, one of the most interesting tests from a technological point of view – rather than an artistic one – was the shutter artifacts test, designed to reveal rolling shutter problems.

Clairmont Camera designed a drum with vertical lines that spun rather quickly. The test went by very fast but CLEARLY revealed the winners and losers in this category. I missed some of them as I was trying to write and watch at the same time, but from my notes: the F35 looked good, the Flex was good, the Weisscam was good, the F3 was bad, the Panny was bad, the 5D and 7D were among the worst, with the 1D a little better than its Canon brethren, and the Nikon D7000 was arguably the worst of all.

The other rolling shutter test was a wheel with lines at 45-degree angles that spun (like an airplane propeller). I may have missed some cameras, since I was trying to write and view the tests at the same time, but from my notes: the RED MX was bad, the Flex was perfect, the Weisscam was perfect, film, of course, was perfect, the F3 was not good, the AF100 was not good, and the 5D was arguably the worst, though similar to the 1D, 7D and Nikon.
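The panel didn’t get into the engineering behind the skew you see in these tests, but the geometry is simple: a rolling shutter reads the sensor row by row, so a fast-moving vertical line lands a little further over on each successive row and leans into a slant. A rough sketch with purely hypothetical numbers:

```python
def rolling_shutter_skew_px(object_speed_px_per_s, readout_time_s, rows_spanned, total_rows):
    """Horizontal offset (in pixels) between the top and bottom of a vertical
    line, given how fast it moves and how long the sensor takes to read out.

    A global-shutter camera (or film) has an effective readout time near zero,
    so the line stays straight; a slow rolling readout skews it into a slant."""
    time_to_scan_object = readout_time_s * (rows_spanned / total_rows)
    return object_speed_px_per_s * time_to_scan_object

# Hypothetical example: a line crossing the frame at 5000 px/s on a sensor
# with a 30 ms rolling readout, spanning all 1080 rows:
print(rolling_shutter_skew_px(5000, 0.030, 1080, 1080))  # 150 px of lean
```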

After the screening, the audience and a panel of the DPs and post professionals who worked on the test discussed aspects of the testing.

From quick notes during the Q&A, here are some points:

With the sharpness test, for the cameras that provided 4K, the full 4K image was used in the blow-up to test focus. The 5D ended up the worst. Obviously a lot of that is the compression of the image, but the DPs also admitted that there could have been poor focus, since focusing was done manually.

The post workflow tried to get all the cameras into the Academy of Motion Picture Arts and Sciences’ new ACES color space so that the various cameras’ color differences would be minimized, but they were unable to accomplish this by the deadline for the video to be completed, so they did some minimal color correction on some of the images to make them easier to compare.
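The mechanics of that normalization weren’t covered in the talk, but the basic ACES idea is to map each camera’s linear RGB through an input transform (IDT) into one common wide-gamut working space before grading. A minimal sketch; the 3x3 matrix below is purely illustrative and not a real IDT – actual IDTs for these cameras come from the manufacturers and the Academy:

```python
import numpy as np

# Purely illustrative "input device transform" matrix; real IDT coefficients
# are published per camera by the vendor or the Academy, not invented here.
FAKE_IDT = np.array([
    [0.60, 0.30, 0.10],
    [0.07, 0.90, 0.03],
    [0.02, 0.12, 0.86],
])

def to_common_space(linear_camera_rgb, idt=FAKE_IDT):
    """Map linear camera RGB pixels (H x W x 3) into a shared working space
    via a 3x3 matrix - the core of an ACES-style normalization step."""
    return np.einsum('ij,hwj->hwi', idt, linear_camera_rgb)

# Hypothetical usage on a tiny synthetic frame:
frame = np.random.rand(4, 4, 3)
print(to_common_space(frame).shape)  # (4, 4, 3)
```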

One finding of the group was that they didn’t find a real disparity between the ISO the manufacturers claimed and what they measured. With some of the cameras – like RED RAW – you choose the ISO in post, but the camera manufacturers’ ISO claims were quite accurate, and the manufacturers were pleased with the way their cameras were represented and set up for the tests.

One bone of contention in determining ISO is whether the rating should be based JUST on 18% gray, or whether a camera should be rated at a higher ISO if it is able to deliver greater headroom (in stops) ABOVE 18% gray.
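A toy illustration of why that matters (the numbers are made up, and this isn’t the formal ISO standard math): two cameras can place 18% gray at the same exposure, yet the one whose signal has more doublings of light left before clipping could argue for a higher effective rating.

```python
import math

def headroom_stops(clip_signal, middle_gray_signal):
    """Stops of highlight headroom above an 18% gray exposure:
    how many doublings of light fit before the signal clips."""
    return math.log2(clip_signal / middle_gray_signal)

# Hypothetical cameras, both exposed so 18% gray lands at the same point,
# but with different amounts of room left above it:
camera_a = headroom_stops(clip_signal=4096, middle_gray_signal=800)   # ~2.4 stops
camera_b = headroom_stops(clip_signal=4096, middle_gray_signal=200)   # ~4.4 stops
print(round(camera_a, 1), round(camera_b, 1))
```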

Primes doesn’t believe “video” cameras should even have ISO ratings, because their response curve is not the same as film’s, so the ways of exposing film based on an ISO and an EI (exposure index) rating from a light meter don’t translate directly to video. Primes thinks exposure on a video camera should really be rated and determined with waveform monitors.

The DPs in the test all agreed that these results should be used judiciously and carefully. In the real world you have ONE camera, and they claimed that with just small tweaks you could make any one of these cameras look better. The point of the test, though, was to come up with a single baseline against which all of the cameras could be judged – which is a little unfair in the “real world,” because normally you WOULD adjust various parameters to get the best performance from the camera.

Many of them were very impressed that “There’s a $1700 camera that’s competing quite well” with much, much more expensive cameras.

To this end, “What’s the bang for the buck, for the money?” Some comments from the panel were: “I was very impressed with the F3.” “Some cameras are better in a highlight, some are better in the low-lights.” “Different horses for different courses.”
“Alexa has qualities of its own that no one can touch. But the F35 is so clean and sharp.”

The conclusions drawn throughout are often my own subjective and fairly unlearned observations. I hope you will explore the website for the test, www.thescce.org, and consider picking up the Blu-ray disc of the test when it becomes available to have a look for yourself.
