
Band Pro 3D – Iridas’ metadata driven stereoscopic toolset


Steve Crouch from Iridas gave an impressive demo of their current and future SpeedGrade DI versions. Highlights:
-support for all the major RAW formats, including SI-2K, Red, Phantom, D21, etc.
-realtime playback of all of these
-realtime stereoscopic playback of all of these
-embedded .look LUT files from Cineform, created on set, carry all the way through to conform and final grading
-nondestructive metadata driven image color and geometry manipulations

…and more. Read on.

STEVE CROUCH FROM IRIDAS:
==========================
Band Pro 3D – Iridas – pictures of their preso and slides and screen UI from the demo – good stuff!

they support Cineform RAW

talking about RAW

normally go SDI or DVI out, didn’t rig it as such

can send the image alone out to a stereo display

(working from a big suitcase looking setup – XP 64 system from One Beyond that he’s demoing on)

he’s got the left eye recorded to the internal RAID, the right eye to an external RAID – live, realtime playback

RAW recording of left and right eyes is a huge amount of data – one of the advantages of RAW and Cineform is that the amount of data is drastically reduced compared to a fully rendered DPX frame: a DPX runs 8 to 10 MB/frame, whereas a Cineform frame is about 1/10th that, and an SIV file (uncompressed RAW) is about 1/3 that size.
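
To put rough numbers on that, here’s a quick back-of-envelope sketch (my math, not Iridas’ – it uses the per-frame sizes quoted above, with 24 fps and the 9 MB DPX midpoint as assumptions):

```python
# Back-of-envelope stereo RAW storage math, using the per-frame sizes
# quoted above (DPX ~8-10 MB/frame; Cineform ~1/10th; SIV ~1/3).
# 24 fps and the 9 MB midpoint are assumptions for illustration.
DPX_MB = 9.0
FORMATS = {"DPX": DPX_MB, "Cineform": DPX_MB / 10, "SIV": DPX_MB / 3}
FPS, EYES = 24, 2  # stereo = left + right streams

for name, mb_per_frame in FORMATS.items():
    mb_per_sec = mb_per_frame * FPS * EYES
    gb_per_hour = mb_per_sec * 3600 / 1024
    print(f"{name:9s} {mb_per_sec:6.1f} MB/s stereo, {gb_per_hour:7.1f} GB/hour")
```

Run that and stereo DPX comes out around 432 MB/s – roughly 1.5 TB per hour – which is why keeping the footage in compact RAW matters.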

he’s looking at an ARRI D21 shot – a RAW frame is simply R, G, and B locations in an image – all the RAW cameras are doing this – Cineform, SIV, Red, D21, etc.

with appropriate software, can debayer it into a viewable image

apply a LUT, now have a viewing LUT
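
For the curious, here’s the idea in miniature – a naive half-resolution debayer plus a toy viewing LUT. This is nothing like the high quality GPU debayer Iridas is doing; the RGGB pattern and gamma curve are my assumptions:

```python
import numpy as np

def naive_debayer_rggb(raw):
    """Half-resolution debayer of an RGGB mosaic: each 2x2 photosite
    cell becomes one RGB pixel. Not a production demosaic - just the
    idea that RAW is R, G, and B locations in an image."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

def apply_viewing_lut(rgb, lut):
    """Apply a 1D viewing LUT (e.g. a log-to-display curve) per channel."""
    idx = np.clip((rgb * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

raw = np.random.rand(2160, 2880)          # stand-in for a 2880x2160 RAW frame
lut = np.linspace(0, 1, 1024) ** (1/2.2)  # toy gamma-up "viewing LUT"
image = apply_viewing_lut(naive_debayer_rggb(raw), lut)
print(image.shape)                        # (1080, 1440, 3)
```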

2880×2160 RAW image, anamorphic, so can then desqueeze it

was rated ISO 200

on camera, they can open/close the f-stop – you don’t paint the camera, you just expose for light, then with metadata you can create a look

camera matrix #s can be altered as well
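
Both of those live as metadata numbers riding alongside the untouched RAW pixels – conceptually something like this sketch (the gain and matrix values here are made up, and this isn’t Iridas’ actual math):

```python
import numpy as np

# Hypothetical nondestructive corrections carried as metadata:
# the RAW pixels never change, only these numbers do.
exposure_stops = +0.5               # re-rate the shot half a stop up
camera_matrix = np.array([          # made-up RAW-to-RGB matrix values
    [ 1.60, -0.45, -0.15],
    [-0.30,  1.45, -0.15],
    [ 0.05, -0.55,  1.50],
])

def apply_metadata(rgb):
    gained = rgb * (2.0 ** exposure_stops)           # exposure as a gain
    return np.clip(gained @ camera_matrix.T, 0, 1)   # per-pixel 3x3 matrix

pixel = np.array([[0.2, 0.3, 0.25]])
print(apply_metadata(pixel))
```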

nVidia Quadro FX 5600 (the 5800 is the new cool kid on the block) – can get dual link out. The OpenGL card is doing the display; their software is doing the debayer and display

-with RAW you have to render into something else to be worked with, and that can take a helluva long time

-can wrap into a QT and work with, but it can be slooooooow or require a heavy duty machine

-Phantom – .cine files – time consuming to convert

every vendor has a different approach to how they want their files handled

OK, now stereo files – separate files for left and right eyes

has various display modes – if you have to encode differently for every format you need, it will slow you down. If you can drop it on and get to work, it’s a HUGE timesaver.

Can change the offset for time and space to realign files

by turning on “Mirror” for pan and rotation, they are in sync
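
Conceptually something like this (my sketch of the idea, not their implementation):

```python
from dataclasses import dataclass

@dataclass
class EyeAlignment:
    frame_offset: int = 0     # time offset to re-sync the eyes
    pan_x: float = 0.0        # horizontal shift, pixels
    rotation: float = 0.0     # degrees

def mirrored(left: EyeAlignment) -> EyeAlignment:
    """'Mirror' mode: the right eye gets the opposite pan and rotation,
    so adjusting one eye keeps both in sync around the screen plane."""
    return EyeAlignment(frame_offset=left.frame_offset,
                        pan_x=-left.pan_x,
                        rotation=-left.rotation)

left = EyeAlignment(frame_offset=2, pan_x=3.5, rotation=0.4)
print(mirrored(left))  # EyeAlignment(frame_offset=2, pan_x=-3.5, rotation=-0.4)
```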

if you make changes and render out, what if it isn’t correct when viewed at DI size/facility? If you have to render for one screen vs. the other – don’t bake anything in until you’re almost done – you’ve gotta be able to change it up to the last minute.

when you go into the big room, you haven’t permanently altered anything – it’s metadata for playback – you don’t wanna bake anything in, so you have latitude later on

can drop tracks – nondestructive grading tracks – onto the timeline, can have one be left eye only, one be right eye only, one for both eyes

(lets you make custom tweaks to each channel, then adjust both together – mikenote: this works, but seems slightly reverse engineered rather than deeply integrated)

can also gang clips to affect several shots at the same time (he demos 4-up – 4 shots, all being altered the same way at the same time)

OpenGL nVidia acceleration is doing this – realtime debayer, playing back, applying looks and color correction in realtime

since changes are stored as XML metadata, you can email the XML to someone, and all the work you did on the same footage can be displayed on their system
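
The XML itself wasn’t shown in the demo, but conceptually it’s tiny – something like this hypothetical sketch (this is NOT Iridas’ actual .look schema, just the idea that a grade is a few numbers, not baked pixels):

```python
import xml.etree.ElementTree as ET

# Hypothetical structure only - the point is that the grade is a
# handful of parameters that reapply to the original footage.
grade = ET.Element("grade", clip="shot_042", eye="both")
ET.SubElement(grade, "gain",   r="1.05", g="1.00", b="0.97")
ET.SubElement(grade, "gamma",  r="0.98", g="1.00", b="1.02")
ET.SubElement(grade, "offset", r="0.01", g="0.00", b="-0.01")

xml_text = ET.tostring(grade, encoding="unicode")
print(xml_text)   # small enough to email; reapplies on their system
```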

good implications for the use of metadata

otherwise you’re cooking out a ton of QTs or rendering 2 streams to view – and then changes get made but, oops, not applied to the other eye, etc. – metadata driven workflows keep things simpler

Q: if shooting with F35, recording off the camera –
A: is playback only, not a capture solution

Q: how do you export spot cuts for filmout?
A: set in/out and render out (DPX presumably). A film emulation LUT on an RGB stream might be used just for viewing purposes, but it is NOT applied to the rendered DPXs

******the stereo screens aren’t color accurate, FYI!******

you can do temp stereo adjustments (or consistency fixes for color, MAYBE?) on a 3D screen, but…

CRASH – ooooooops. Crashed during the demo – doh.
EDIT – while it did crash, this was after jumping through a bunch of hoops, doing things no other software can do, in the middle of a bash-it type features demo of stuff that would happen, but rarely, in actual production: two streams of uncompressed Phantom footage with realtime, high quality debayer and stereo adjustments – and this after doing the same thing with Silicon Imaging SIV footage. In a conversation with Steve the next day, after I came up and said I’d enjoyed his presentation and found it very useful and the tech impressive, we talked about this issue. He pointed out that other software had had glitches and I hadn’t singled it out – this may well have happened; I don’t recall the specifics, but half of my note taking in these things is what stands out to me in the moment, as well as what I have time to type! The point was made, fairly, that glitches were pointed out in A that weren’t pointed out in B – not my intent.

Can render for left eye, can render for right eye, has background render queue for setting it up and walking away

we’ve been seeing the 2009 shipping version (the one that just crashed)

alpha of 2010 version:

subtitling in stereo – has a stereo file – .cine Phantom files – (note the closeup shot of the RAW view in the pics) – stereo properties are in a separate panel – a cleaner UI for the fixes

-can make an art card – a TARGA with an alpha channel – can go into the stereo panel and adjust where the depth of the subtitle goes (see the parallax sketch after this list)

-if doing lower thirds, can do the same to set the depth

-same thing for 3D logo bug

-“we’re not the offline – but you can use us to prep for the offline.” – prep it, get your look, render out (edit single eye effectively?) – have all the non-stereo modes to view flat, but spot-check stereo
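
The depth adjustment for a subtitle, lower third, or bug boils down to opposite horizontal shifts per eye – here’s a quick sketch of the concept (the mapping and numbers are mine, not theirs):

```python
def subtitle_parallax(depth: float, max_parallax_px: float = 20.0):
    """Map a depth setting in [-1, +1] (negative = in front of the
    screen, positive = behind) to per-eye horizontal shifts in pixels.
    The graphic (e.g. a TARGA with alpha) gets composited at x + shift
    in each eye. Made-up mapping for illustration."""
    shift = depth * max_parallax_px / 2.0
    return {"left_eye_shift": -shift, "right_eye_shift": +shift}

print(subtitle_parallax(-0.5))  # pops the subtitle out in front of the screen
```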

once you’ve done your offline editorial, you bring in an EDL (standard CMX 3600 EDL), load that file, get “reels not loaded”, browse to where the footage is, point to that folder (if properly organized), hit Load From Desktop, and it loads and conforms in stereo from the original material (WHAT THE CORRECT FOLDER STRUCTURE IS, IS CRUCIAL, BUT WAS SKIPPED OVER)
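
For reference, a CMX 3600 event line is just a few fixed fields – here’s a made-up event and a minimal parse (the reel name and timecodes are invented; this is the data a conform uses to find and place each clip from the original material):

```python
# One made-up CMX 3600 event: event number, reel, track (V), cut (C),
# source in/out, record in/out.
event = "001  SHOT042  V  C  01:00:10:00 01:00:14:12 00:59:58:00 01:00:02:12"

num, reel, track, transition, src_in, src_out, rec_in, rec_out = event.split()
print(f"event {num}: reel {reel}, source {src_in}-{src_out} -> record {rec_in}-{rec_out}")
```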

-digital negative vs digital intermediate – the digitized film is the intermediate before it goes back out to film – here, it’s a digital negative – the original file is the negative – it is the light as it was exposed, everything you ever got.

EDIT – I had previously made a comment about the presentation here, and it unintentionally came off as mean – a planned co-presenter wasn’t able to attend, and one person had to run the demo and address the audience at the same time. He and I had a constructive conversation about it and the Band Pro Open House, and about the difficulties of splitting your energy between the logistics of the presentation and keeping your energy directed towards the audience. It was an off the cuff, stream-of-thought comment as I was typing, but wasn’t intended as a personal attack – sorry about that, my apologies – I’ve done these kinds of demos before, and it is HARD work, made harder when you haven’t drilled extensively on a canned demo and suddenly have to cover it all yourself.

do your prep and alignment work in a cheaper room – not the $800/hr colorist room – save that for the heavy grading, masks, etc. – optimize your money spent

using the RAW tools and then making a QT file, the look they created is wasted – and working back from source can be a very tedious process with RAW and stereo

Q: can you keyframe stereo corrections?
A: yes

Tiny Hercules demo – demos keyframing fix

can do your depth grading with keyframes – don’t have to worry about it until later. Final depth grade is done on the big screen
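
Under the hood that’s just interpolating a depth/convergence value between keyframes – a minimal sketch (linear interpolation and made-up values; their actual interpolation may differ):

```python
def depth_at(frame, keyframes):
    """Linear interpolation of a keyframed depth/convergence offset.
    keyframes: sorted list of (frame, value) pairs."""
    f0, v0 = keyframes[0]
    if frame <= f0:
        return v0
    for f1, v1 in keyframes[1:]:
        if frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
        f0, v0 = f1, v1
    return v0  # hold the last value past the final keyframe

keys = [(0, 0.0), (48, -2.5), (120, 1.0)]   # made-up convergence keyframes
print([round(depth_at(f, keys), 2) for f in (0, 24, 48, 90, 200)])
```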
