Redshift and the Law of Moore

The way you’ve been thinking about building and upgrading your digital video workstation is about to change radically. At least it may by the time you’ve finished reading this article.

If you’ve been in the film and video industries for a while you may have noticed something strange happened around 2010-2011: your workstation stopped creaking. Right around that time the compute power and bandwidth of a well-equipped system became sufficient for the great majority of computing tasks based around 2K formats.

A lot of us didn’t really notice it at the time; the signs of the change came about two or three years later. You see, as an industry living on the edge of compute power, we were used to our systems feeling obsolete every 18 months. But with those 2011 workstations, time stretched on and new purchase orders never got placed. There was no driving need. For the most part performance was good enough. Maybe not as good as it could be, but good enough to continue to work efficiently.

Which brings us to today. Is Moore’s Law broken? Intel will tell you that it is, but I disagree. The phenomenon of mushrooming growth has simply shifted. Shifted to the GPU.

Your graphics card is where all the action’s at. There’s been something of a quiet revolution going on as developers have started to push all their wizardry to the GPU instead of the CPU. With the accessibility of programming APIs like CUDA and OpenCL, it no longer requires a doctorate in pure mathematics to take advantage of your graphics card’s processing power. And that power is significant: where a reasonably equipped modern CPU offers 8 cores, Nvidia’s latest consumer graphics card, the GTX 1080, boasts 2,560 CUDA compute cores.
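
To give a rough sense of how approachable this has become, here’s a minimal, hypothetical CUDA sketch (not taken from any shipping application) that adjusts every value of a 1080p frame in parallel, one GPU thread per value. The equivalent CPU code would grind through those six-million-plus values a handful at a time.

```cuda
// A minimal, hypothetical CUDA sketch (not from any shipping app):
// brighten a 1080p frame with one GPU thread per value, so thousands
// of cores each handle a tiny slice of the image at once.
#include <cuda_runtime.h>

__global__ void brighten(float* pixels, int count, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel value
    if (i < count)
        pixels[i] *= gain;                          // one value per thread
}

int main()
{
    const int count = 1920 * 1080 * 3;              // one 1080p RGB frame
    float* d_pixels;
    cudaMalloc((void**)&d_pixels, count * sizeof(float));
    cudaMemset(d_pixels, 0, count * sizeof(float)); // dummy frame data

    int threads = 256;
    int blocks  = (count + threads - 1) / threads;  // enough blocks to cover every value
    brighten<<<blocks, threads>>>(d_pixels, count, 1.2f);
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    cudaFree(d_pixels);
    return 0;
}
```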

Now the editing system you’re working with today is probably already benefiting from this revolution: both Premiere Pro and Resolve, for example, are heavily dependent on the acceleration that comes from your graphics card (or cards; more on that in a moment). But to see what’s down the road for everyone in the visual industries, it usually makes sense to look at what’s happening in the bleeding-edge world of 3D animation and rendering. That’s where Redshift comes in.

Redshift: a look at the bleeding edge

Redshift is one of a small group of emerging GPU-based final render systems (Octane, Furryball, Iray and V-Ray RT are other examples). Now 3D computer graphics software has been using the GPU for decades as the primary means for visualizing what an artist is working on. But when it came time to render the final image, it was off to the CPU to pull off the complex operations.

Not anymore. Redshift performs the same basic tasks as a conventional CPU-based raytracer, but what might take minutes on the CPU can take seconds on the GPU. The architecture of a GPU tends to be optimized for the kinds of things that computer graphics software needs to get done.

VFX Legion, a visual effects studio working on ABC shows like Scandal, How to Get Away with Murder, and The Catch, has recently switched to using Redshift for its renders.

Here’s a quote from Rommel Calderon, their lead 3D artist:

“We were ending up with renders that were ready in a matter of seconds, rather than minutes,” he asserts. Calderon recalls doing a stress test on a render with all the bells and whistles enabled; it took hours to render each frame using the previous rendering solution. Swapping to Redshift with the settings matched as closely as possible, the same test took minutes by comparison. “It was a pretty drastic difference,” he adds.

And it’s not just about the overall speed optimization. It’s also about scalability. With an expansion chassis, it’s not difficult to add two, three, or even eight graphics cards to a single workstation.
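
As a rough, hypothetical sketch of how a renderer scales across those cards (this is not Redshift’s actual scheduler), the host code can simply loop over every installed GPU and hand each one its own slice of the frame:

```cuda
// Hypothetical multi-GPU sketch, not Redshift's scheduler: each
// installed card shades its own slice of a UHD frame concurrently.
#include <cuda_runtime.h>

__global__ void shadeSlice(float* slice, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        slice[i] = 0.18f;                           // stand-in for real shading work
}

int main()
{
    const int framePixels = 3840 * 2160;            // one UHD frame
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);               // e.g. 2, 3, or 8 cards
    if (deviceCount < 1) return 1;
    if (deviceCount > 16) deviceCount = 16;

    const int perDevice = (framePixels + deviceCount - 1) / deviceCount;
    float* slices[16] = { nullptr };

    // Launch one kernel per card; they all run at the same time.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaMalloc((void**)&slices[d], perDevice * sizeof(float));
        shadeSlice<<<(perDevice + 255) / 256, 256>>>(slices[d], perDevice);
    }

    // Wait for every card, then (in a real renderer) copy the slices
    // back and stitch them into the finished frame.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(slices[d]);
    }
    return 0;
}
```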

Redshift has taken a pragmatic approach to rendering. Where other GPU-based systems perform unbiased rendering (long story, but think of it as a very systematic, precise way of rendering), Redshift intentionally “cheats” by using biased rendering to optimize the render. The result may not be a perfect match to the laws of physics, but it’s a whole lot faster and just as pretty. (Studios like Pixar and ILM have been using biased rendering techniques on pretty much all their film effects and animations for decades.)
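
To give a feel for what “biased” means in practice, here’s a deliberately tiny, hypothetical sketch (again, not Redshift’s actual code). Clamping the rare, extremely bright samples before averaging them introduces a small error, but it wipes out noisy “fireflies” and lets a renderer reach a clean-looking image from far fewer samples:

```cuda
// Conceptual sketch only (compiled with nvcc), not Redshift's actual
// algorithm: the difference between an unbiased pixel estimate and a
// "biased" one that clamps unusually bright samples to cut noise.
#include <cstdio>

// Unbiased: a plain average of the raw light samples. Given enough
// samples it converges to the physically correct answer.
__host__ __device__ float unbiasedEstimate(const float* samples, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += samples[i];
    return sum / n;
}

// Biased: clamp any sample above a ceiling before averaging. The result
// no longer matches the physics exactly, but it kills "fireflies", so a
// clean image needs far fewer samples (and far less render time).
__host__ __device__ float biasedEstimate(const float* samples, int n, float ceiling)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += (samples[i] < ceiling) ? samples[i] : ceiling;
    return sum / n;
}

int main()
{
    const float samples[] = { 0.4f, 0.6f, 25.0f, 1.0f };           // one noisy "firefly"
    printf("unbiased: %.2f\n", unbiasedEstimate(samples, 4));      // prints 6.75
    printf("biased:   %.2f\n", biasedEstimate(samples, 4, 2.0f));  // prints 1.00
    return 0;
}
```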

The main downside to Redshift right now is that it only works with the Autodesk entertainment 3D apps (Maya, Softimage, 3ds Max) and Houdini, but it will no doubt eventually make its way to the ever-popular mograph tool Cinema 4D.

Informing your workstation purchase

So what does this all mean to you? Firstly, while Redshift is currently a high-end render tool, expect to see these kinds of GPU-accelerated raytracing systems make their way into the likes of After Effects.

Secondly, and possibly most critically, stop thinking about the CPU as the primary determinant for your workstation purchases. It seems that going forward, the quality and expandability of your GPU setup will be far more important. (RAM is also important, but most decent workstations will have you covered there.)

Currently Nvidia is winning the war in terms of compute accessibility, with most developers preferring the CUDA programming API over OpenCL. OTOY may disrupt this a little with their announced cross-compiler from CUDA to native ATI code, but for now your chosen workstation should probably use an Nvidia card as its primary display device.

Building out instead of up

Before you relegate your older PC or Mac Pro to the scrap heap, think about the possibility of upgrading. And not just Windows boxes: pre-‘trashcan’ Mac Pro towers can now (slightly unofficially) support generic Nvidia graphics cards via a web driver available from Nvidia’s site.

To the point: my main graphics workstation is a 2010-purchased Windows box with an i7 CPU (admittedly a little overclocked) and five modern graphics cards connected via an expansion chassis. When working with GPU-accelerated software (Resolve, Redshift, Octane, Nuke) it feels as snappy and powerful as any 2016 box I’ve jumped on lately.

Is that a “no” for a new iMac then?

Your first conclusion from this article might be: don’t buy an iMac, since there’s no Nvidia card in there and no way to expand the graphics card. But here’s the thing: the GPU doesn’t have to be driving the display to be useful. Thunderbolt PCI expansion units are slowly making their way onto the market, allowing one, two, or more GPUs to be added to your system via the Thunderbolt port. This is actually a great feature; it means you could purchase a couple of Nvidia GTX 1080s and use them with your iMac, then take them on the road with your Mac laptop.

Bank on the GPU

Regardless of whether you live in the land of 3D rendering where Redshift could change your world today, expect that your favorite video or graphics software application will become more and more dependent on the GPU over the next two to three years. Whatever you do, if you’re buying a new system, make sure it’s as GPU-expandable as possible.

 

Damian Allen is a VFX supervisor and pipeline consultant based in LA. He specializes in stereoscopic and picture-lock emergency effects work through his company Pixerati LLC. In addition to his hands-on production work, Damian lectures and trains artists around the world in compositing theory and technique.
