
CAMERAS: Some More Thoughts on the Zacuto Shootout, as Posted to DVXuser



I recently learned that there was a thread on DVXuser discussing the Zacuto tests, and that my articles were mentioned. I like what I wrote there, as I think it sums up my thoughts nicely, so I’m posting it here as well.

Here’s my post to DVXuser, in the “Zacuto Revenge Part 3” forum thread:

Hi gang-

I heard there was a conversation about the latest Zacuto test here, and I believe my articles were mentioned, so I thought I’d come over and weigh in and see if anyone has any questions for me that I haven’t addressed in my articles at PVC.

Firstly, I appreciate the work of everyone who participated in the tests. I think it’s fantastic that people are willing and able to get all these tools in the same place and try to create a presentation that they hope will benefit everyone who watches it.

In my opinion, however–and this is just my opinion–they don’t know how to do good comparative camera tests, and from an objective standpoint I believe the tests are largely a failure.

My biggest problem is that the people who shoot these shootouts for Zacuto most often try to create realistic storytelling situations much like they would find on an average project. That’s valuable at some level, because we do want to see how the cameras respond when used in everyday situations, but at the same time many of the conventions of storytelling get in the way. For example, once the camera or the actors move it becomes much harder for our brains to process the differences between camera images because we’re paying more attention to the movement than anything else. Also, movement obscures details in the frame that would otherwise be easy to see.

I, personally, want to see static frames and split screens between the cameras, because that’s when we can REALLY see, very quickly, what the differences are. (I don’t want the camera or the actors to move unless I’m looking only at movement, and even then there are very specific things I would do to ensure a fair test.) Split screens are what I most want to see because no other tool gives us comparative information faster. It is difficult but possible to look at a full frame image from one camera, observe certain areas of the frame (shadows, highlights, etc.) and try to memorize how they appear, and then mentally compare those things to the next camera’s image, but this is inexact at best. Human brains just don’t do that kind of thing very well.

I completely agree with Zacuto that it is the people behind the camera who make the difference, but I don’t think the way to show that is to put a different DP behind each camera. If you really want to see what a DP can do then pick one DP and vary the cameras. Otherwise you have no idea if the differences you’re looking at are due to the DP or due to the camera.

For example, the GH2 got a lot of press because Francis Ford Coppola liked it. He liked the Epic and Alexa as well, but the press picked up on the GH2. Why did he like it so much? Was it because Colt Seaman did a hell of a job lighting for it, or does the camera just look really good for what it is? All of -us- know it was Mr. Seaman, but those who are not technical will tend to give credit to the camera. Why? Objects are much easier to quantify than people–and people always want to quantify -something.- Given the choice between quantifying an intangible, like artistry, and a tangible, like a piece of gear that can be set up to create a consistent look, the tangible will always win.

The problem, though, is that many viewers didn’t think that some of the higher end cameras looked as good as some of the lower end cameras. How do we know if that’s the camera’s fault or the DP’s fault? I can point at something a camera did and say “I really don’t like that!” but it’s much harder to look at someone’s lighting and define exactly what it is you don’t like.

Camera artifacts are much easier to see than artistic artifacts: anyone can spot a technical flaw, but not everyone can spot an artistic one.

If this were a true DP test then it would have been much better to show, say, a GH2 against an Alexa: take them both out into broad daylight, with no fill, and show what each can do, and then take them into a controlled environment and let a single DP light for both cameras to make them look their best. That way you have a benchmark to compare the cameras against, and you can be assured that what you’re seeing in the controlled environment is one person working to the strengths and weaknesses of each camera, instead of the cameras saving the DP. (In the first scenario the Alexa would blow the GH2 away!)

You still run the risk of non-technical people looking at the final results and deciding that the GH2 is as good as an Alexa. Tests often have political repercussions, and those have to be considered carefully.

If Colt Seaman had lit for the F3, would we be raving about the F3 instead of the GH2? Who knows? A thorough test should not leave one with those kinds of questions.

The one aspect that really was ignored: the political ramifications of showing that low-end cameras can look as good as high-end cameras. The caveat is that the low-end cameras require a much, much greater degree of control than the higher-end cameras do, but how many people will come away from Zacuto’s presentation with that message? Most of the details relating to how much extra work the low-end cameras required are in a separate “technical” document that most non-techies aren’t going to bother to download. Unless all of the important details appear on screen with the presentation, there’s an incredibly good chance they’ll never be seen by those who most need to see them.

Seeing those details alongside the footage is crucial to creating an accurate first impression of how a camera compares to the others, and that matters because the first impression is the strongest impression. Separating those details forces viewers to combine the images and the data in their heads, and that almost never works–especially among the least technically oriented. A true understanding of what is happening requires viewing all the relevant information at once, or at least in quick succession, so that the first impression of a test result encompasses as much relevant information as possible.

More important than how the test is executed is how the results are presented. I had a number of problems with this.

(1) Showing the “we can make all the cameras look the same” round first, and making people choose cameras blindly, is interesting but potentially disastrous because of exactly what happened: the most famous person in the room, who is not the most visually sophisticated person in the room, liked a really low-end camera–and that got picked up by the media and spread like wildfire. This is extremely dangerous, because now we all have to deal with “You’re going to shoot with this camera because Francis Ford Coppola liked it, and we can afford to buy it rather than rent it.” Is it the right camera for every job? No. Are they asking you if it’s the right camera for the job? No–because the director of The Godfather has more clout in their minds than you do.

Since publishing my articles I’ve received emails from about a half dozen people who say they’ve run into exactly this kind of problem as a result of part two of the tests.

(2) The least Zacuto could have done was show the “pixel peeping” version first, so that we all have a baseline for how the cameras respond in relation to each other. Then everyone could see that all cameras are not equal, but that a good cameraperson can make them look equal under very controlled conditions. This message–“These cameras are all very, very different but can look the same under certain circumstances created by a talented DP”–was completely eclipsed by “Francis Ford Coppola picked the GH2 because he thought it looked as good as an Epic or Alexa!” The order in which these tests were shown dramatically changed how the media were likely to cover the test as well as how non-techies viewed the tests.

Steve says that if he had shown the “pixel peeping” version first, nobody would have come back to see the next round. But everyone I know was desperate to see the “pixel peeping” version, because that potentially contained the information we really wanted to see. It’s clear that Zacuto’s emphasis was on marketing, and on keeping people coming back for more–and that’s perfectly valid in a company-sponsored test. Unfortunately it had unintended side effects.

(3) The inclusion of Mr. Coppola was, I think, a big mistake. It’s great for marketing, but he’s much better known than any of the other people in the room who viewed the footage much more critically. Fame trumps technical competency any day. Steven Poster, ASC and Daryn Okada, ASC are vastly more qualified to comment on the footage they saw, but Coppola is almost a household name so his comments got the press.

(4) The web is a marvelous medium because it’s very easy to segment programs so that viewers can see exactly what they want when they want it. I’ve spoken to a fair number of others who agree with me that the length of the programs, the pacing, and the fact that the test results were interspersed with long philosophical conversations on topics such as “What is a DP?” kept them from watching all the programs. One of the things that Steve doesn’t like about my articles is that I admit I haven’t watched every minute of every video. I’m just not interested in hearing a bunch of DPs talk about what a DP does. I’m a DP, I’ve been in the industry for 25 years, and I have a fairly good idea of what I do. I want to see test results!

I’m not saying that these kinds of interviews aren’t valuable to someone somewhere, and all of us eat this stuff up at some point in our careers (if you haven’t seen Visions of Light, you should–it’s the best of the best on this subject), but the interviews were aimed at a very different audience than the test results were. Everyone was interested in the test results, but only a few were interested in the philosophizing.

I go into more detail in my articles, and others have pointed out that I haven’t even touched on things like color grading (27 layers of color correction? 90 minutes to grade one shot? How is it possible to know what -any- of the cameras are doing under such conditions? And how often do we get these luxuries in the real world?), but the bottom line is that -in my opinion- these tests are a failure and a tremendous lost opportunity. All those cameras were in one place at one time with very talented crews, and there was so much we could have learned, and there is a fair amount that can be learned from what was done… but so many variables were changed so often–including intangibles like artistic vision!–that it’s really impossible to get more than a superficial amount of objective information out of these tests.

It has been pointed out to me that maybe I have less credibility than some others out there who have ASC after their names. That may be true… but you don’t get into the ASC by excelling at comparative testing of equipment. You get in because you can consistently achieve artistic excellence with your tools. There’s a huge difference.

When I perform tests I turn my artistic side down to 20%, or off, and turn my engineering side WAY up. That way I’m testing technical things, which are largely objective, instead of artistic things, which are most decidedly not. Assigning a different DP to each camera introduced artistic differences that made objective comparisons nearly impossible.

Bottom line: This test had an agenda, which is always a little dangerous, and it failed to execute that agenda because too many variables changed. As best I can tell, the goal was to show that it’s the DP who makes all the difference in how footage looks–and what a great agenda that could have been!–but with DPs, lighting, cameras, grading, etc. all changing, it became impossible to determine who was causing what. As a result, the message picked up by the media was -really- not what was intended by the planners of the test, and it’s a really unfortunate message that has the ability to harm us all.

Because of that I felt the need to write my articles discounting the tests. If the media had not picked up on Mr. Coppola’s quote I would not have written what I have, but I just can’t be silent when the way the media is spinning the results of this test has the potential to cause me technical and political difficulties while doing a job that I love.

I believe the media was able to latch on to what they did because the presentation of the results was largely a failure. With the results split into three 30-40 minute segments released a month apart, and incomplete technical details tucked into a second document marked “technical,” there’s no way to guarantee that most people will be exposed to all the information necessary to understand and accept the test’s agenda–let alone be convinced by it.

And, if the audience wanted a behind-the-scenes look that could have deepened their understanding of the tests, they’d have to watch ANOTHER 90 minutes of material. I suspect there are a number of people here who are willing to take that on, but one producer has already told me that he couldn’t make it through even the first 90 minutes of material.

This is all completely my opinion, of course–for what it’s worth. Some people agree with me completely, others agree with me partially… but I’ve not found a lot of people yet who completely disagree with me on every point. If you do–and you’re not Steve, who has already made his position very clear–please let me know.

Last but not least, I do not consider myself a journalist. I am a DP first, and an educator second. Educators are not the same thing as journalists.

My articles, as well as Steve’s rebuttals, can be found at art.provideocoalition.com.

Thanks for reading.

Art Adams
DP/educator
artadamsdp.com
