
The Sights and Sounds of the NVIDIA GPU Technology Conference 2015 – Day 2



The NVIDIA GPU Technology Conference pulls together professionals from across various industries to showcase where computing power is at and where it’s headed. Media and entertainment is certainly one of those industries, and it was great to discover that NVIDIA has developed some specific products that are geared toward us and used the conference as their platform to announce the details.

Day 1 highlights included a media and entertainment focused session about a GPU-rendered short film, as well as some insights into how deep learning will change the way we do just about everything. You can check out all of those insights and plenty of pictures here.

For Day 2, I wanted to get specific with NVIDIA about what they had for media and entertainment professionals, because as great as the Titan X might be, it’s more of a gaming card. Luckily, I was able to dig into some of the big announcements that NVIDIA had in store for us, and many of those details are below.

 

Day 2 Keynote

First, though, I wanted to check out the Day 2 keynote, delivered by Jeff Dean from Google and titled “Large-Scale Deep Learning for Building Intelligent Computer Systems.” As you can imagine, Dean continued to explore the theme introduced by NVIDIA CEO Jen-Hsun Huang on Day 1, detailing the steps he’s taking to build more intelligent systems. Much of it was focused on the specifics of how Google approaches this topic, but the ramifications of how the technology can impact us are astounding.

It was exciting to hear how this research involves a close look at how the human brain works, and how our own minds work so incredibly fast. Developers have demonstrated that they can give a network similar capabilities, but it has to be a “big” network as well as the “right” one. Defining those variables is where the majority of research is being done, but Moore’s Law means figuring out what “big” and “right” mean is going to happen sooner rather than later.

It’s not just about driverless cars or a more robust search engine, though. All of this is an attempt to get a network to perceive and understand the world, which in turn changes our perspective. What will it mean to live in a world where we can interact with a network in a more intelligent manner? How will that impact our approach, experience and expectations?

Those are all questions that I’m sure will be explored in more depth at future conferences, but that concept brought me back to something Elon Musk mentioned on Day 1. Almost as an offhand remark, he pondered whether his biggest concern would soon be figuring out what human beings were going to do once everything is seamlessly automated.

Of course, that sounds an awful lot like what Lord Kelvin said when he proclaimed there was nothing new to be discovered in physics, shortly before Albert Einstein revolutionized the field. Kelvin was obviously wrong, and it’s a good lesson for anyone worried about where this technology is taking us. It might feel like we’re about to solve everything, but there will always be new challenges and discoveries on the horizon.

 

Media and Entertainment Announcements

The Quadro M6000 was NVIDIA’s big news of the day, and it is essentially the professional version of the Titan X. It’s built specifically to last longer in a way the Titan X is not, and provides a GPU boost when it recognizes what application you’re running so it can go even faster. It’s their most powerful professional GPU, featuring Maxwell architecture and 12GB of graphics memory to support complex designs.

Why would you need this kind of power, though? And what does utilizing this system enable you to do?

At last year’s conference, one of the highlights was a render of a full-resolution model of a Honda. It provided a virtually perfect representation of a physical car, reproducing the materials and lighting in astounding detail. It proved to be an invaluable source of information for the company: instead of having to build physical models, they could create and adjust them in a virtual environment in whatever way they needed.

Physically based rendering is not a new thing, but it has been an incredibly expensive ordeal. Creating that perfect model of the car wasn’t cheap, since the processing power required was so great.

However, with the Quadro M6000, creating and utilizing physically based rendering is a reality for people without a budget like Honda’s. I mentioned Moore’s Law above, and it’s especially apparent here. Just last year, this was technology only the biggest companies could afford, but it’s now going to be available to everyone because the costs have been reduced so dramatically.

Iray Details

“Iray is about the real world,” said Greg Estes, VP Marketing at NVIDIA. “It’s about how materials really work and how light really works.”

That’s a very brief description of what Iray does, and it has obvious implications for pre-viz when the focus is around figuring out how many different elements are going to come together. But the system creates a model that provides such incredible detail that you can’t help but consider other possibilities.

Iray delivers results reflecting real-world behaviors, so designers don’t need expert knowledge of computer graphics techniques to achieve physically accurate and predictable results. If you think about it, photorealistic and physically accurate are two very different things. The talking robots in the Transformers movies might look photorealistic, but they aren’t physically accurate in the way the buildings they smash into can be. With Iray, you can create a city exactly as it appears in reality, and then adjust everything from the cloud patterns to the temperature to understand the specifics of that look. That has ramifications for a project on both a large and small scale, and can lend authenticity to the project as a whole.

The software can be utilized in familiar platforms as well, as NVIDIA is bringing Iray to several more 3D creation applications, including Autodesk’s 3ds Max, Maya, and Revit, and McNeel’s Rhinoceros. Look out for more announcements from them about other platforms.

The demo they were showcasing focused on London’s “Death Ray.” In short, at a certain time of day and a certain time of year, sunlight hits a particular building in London in a way that concentrates the light enough to cause serious damage. Using Iray, this exact situation can be recreated and demonstrated with perfect accuracy, so you can literally watch it happen. You can also view it as a heat index, to get a better sense of the temperatures and other factors in the environment.

What’s really interesting is what happens when you adjust the variables in this setup. In doing so, they discovered that if the architect had made a slight adjustment to the window panels, the “Death Ray” would have been nearly 10x as dangerous. Conversely, they demonstrated how the issue could have been avoided entirely with a slight adjustment in the other direction. As always, the devil is in the details, and those details are rendered here with stunning accuracy.

You don’t have to dwell on the concept for too long before you begin to think about the creative opportunities something like this represents, and it’s the design elements that NVIDIA is truly focused on. They’ve created a tool that allows users to see and understand how their models look and work in the real world, which means design elements and considerations will feel authentic and ring true, even if those designs end up veering away from reality.

To see more about what Greg had to say, check out his recent blog post.

Final Thoughts

I wasn’t quite sure what to expect from the NVIDIA GPU Technology Conference. Coming into it, there were a number of sessions I wanted to make sure to attend, and at the end of my time here I realized I wasn’t able to attend even half of them. I was only able to briefly stop by to learn about Pixar’s real-time render engine for feature film assets and workflows, and entirely missed a couple that got into a GPU-enabled image processing framework and GPU computing from a plug-in developer’s perspective. And that’s without even mentioning the rendering tracks that went on throughout each day.

Technical issues and challenges can torpedo a project, and it’s becoming more and more essential for creatives of all types to understand what difference a product like the M6000 can make to a project. Additionally, using Iray to figure out specific details around how something is going to look and behave can be the difference between an audience gasping in amazement and rolling their eyes in disbelief.

Ultimately, what stood out about the event was how this technology can enhance our ability to tell stories. Whether that’s render speed that lets you work dynamically or models that give you a means to create your own authentic reality, the possibilities really are endless.

Check out more pictures from the show below, and let us know what you think in the comments section.

Disclosure: My flight, hotel and all access press pass were comped by the presenters of this conference.
