These are the latest features in After Effects CC 2017, available now (April 19, 2017)

Today marks the first time Adobe has released updates to its core video and audio applications ahead of NAB, where the company will put After Effects and Premiere Pro front and center on the show floor. Why wait? Here’s an overview of everything new that’s available for immediate download to anyone with a Creative Cloud subscription.

For a detailed overview, you can look at my course After Effects CC 2017: New Features from LinkedIn Learning (otherwise known as Lynda.com), newly updated with everything that’s been added. This course features the examples depicted here in step-by-step detail, including this free preview of Essential Graphics.

Essential Graphics

The biggest addition to this new release is only usable in Premiere Pro; it provides the means to bring motion graphics design customization to the large community of editors unfamiliar with After Effects. With Essential Graphics, you build a motion graphics template with customizable controls for use in Premiere Pro.

The Essential Graphics template allows you to adjust text source and color of the 3D texture and light to match the sequence in Premiere Pro.

For those keeping track, the earlier 2017 release of After Effects featured Text Templates, which are now obsolete. Essential Graphics reduces what was a multiple-step process with limited uses to a drag-and-drop feature. A new Essential Graphics panel (and workspace) appears in both After Effects and Premiere Pro, and you can design a template in either application.

Only specific properties are essential

In After Effects, only specific properties are available to Essential Graphics in this initial release. You choose a composition to work with in the panel and click Solo Supported Properties, which reveals each property in the timeline that is one of the following:

  • Source Text
  • Color
  • any single-value numerical property (i.e., no 2D or 3D values)
  • Checkbox

Your design can include any features you like in the chosen composition; it's just that only the types above can be adjusted in Premiere Pro. The exported .aegraphic file appears in the corresponding Essential Graphics panel (or Creative Cloud Library, your choice) in Premiere Pro. Although you can save the project that created it, you can't currently open or edit an .aegraphic in After Effects.

Premiere Pro ships with dozens of preset Essential Graphics templates, ready to be customized.

What’s the point? In this 1.0 version, Essential Graphics lets you create an animated graphic (most likely an overlay such as a title or lower third) to hand off for use in a Premiere Pro edit. But since we all know After Effects artists to be hackers, the new panel also serves as a heads-up display for controls that are otherwise buried in the Timeline or Effect Controls (no need to export a template).

More new feature additions

The most sophisticated new tech appears in the Camera-Shake Deblur effect. It uses optical-flow technology to reduce motion blur resulting from an unstable camera and a slow shutter. It detects blurred features and replaces them with matching detail from adjacent frames. There are a few simple controls to refine the automated result.

This tool addresses a limitation of Warp Stabilizer VFX. Any shot with instability jarring enough to cause a blurred frame isn't fixed with stabilization alone; the shot, while smoother, can end up looking even worse because it still contains randomly blurred detail, as you see in this before and after example:

 

A couple of features found in other Adobe applications now appear in After Effects. The Lumetri Scopes panel from Premiere Pro adds long-requested waveform and vectorscope displays to After Effects. As you might expect, you can customize the scopes for specific colorspaces. Activate the new Color workspace to put them to use.

The Lumetri Scopes panel from Premiere Pro now appears in After Effects.

Other Creative Cloud applications, including Premiere Pro, already support text for South Asian and Middle Eastern languages, including right-to-left scripts. After Effects adds a new Text preferences panel but will require more work for full integration; for example, per-character animation still operates left to right.

One more thing for serious After Effects nerds: hold the Option key (Mac) or Alt key (Windows) while clicking to display a snapshot in the Composition viewer, and you see a comparison in Difference mode. Match two frames precisely and you see a black frame, or use this to map changes too subtle to spot in the A/B comparison otherwise provided by snapshots.
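Conceptually, a Difference-mode comparison is just a per-pixel absolute difference of the two frames. This is not Adobe's implementation, only a minimal NumPy sketch of the idea; the function names are my own.

```python
import numpy as np

# Minimal sketch of the idea behind a Difference-mode comparison (not Adobe's
# implementation): identical frames yield pure black, and any non-black pixels
# flag changes too subtle to catch in a normal A/B flip between snapshots.
def difference_frame(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference of two frames (H x W x 3 arrays, 0.0-1.0)."""
    return np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))

def frames_match(frame_a: np.ndarray, frame_b: np.ndarray, tol: float = 1e-6) -> bool:
    """True when the difference frame is effectively black everywhere."""
    return bool(difference_frame(frame_a, frame_b).max() <= tol)
```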

Workflow improvements

Everyone who uses After Effects hates pre-comping at some point or other. In this latest release, when you apply a compound effect (one using a separate layer as an input), you no longer need to pre-comp to make use of any effects and/or masks applied to that layer. Effect Input Layer Options add a pull-down menu to effects like Compound Blur and Displacement Map. Moreover, you can even use it with the Set Matte effect, so that (unlike a track matte) you have a choice of whether an effect or mask is included in a selection.

Here's one nobody foresaw: you can now rename and/or relocate the Solids folder, a UI element that has occupied the same place in the Project panel pretty much… forever. Additionally, timeline markers can now include not only a duration but also a custom color.

After Effects no longer caps the interpretation of high-speed footage at 99 fps. The new limit of 999 fps covers most of the actual speeds you'll encounter outside of scientific visualization and eliminates the need for pre-comp workarounds. If you do a good deal of mask or roto work, you'll appreciate new shortcuts that let you set a mask mode as you draw or edit a mask.

For those keeping track of GPU-optimized effects, a handful more appear in each recent release. And each time an effect is rewritten to run on the GPU, not only is it far faster than the obsolete CPU version, it operates natively with floating-point accuracy. GPU optimization now makes the following effects faster: Levels (which I still believe to be the single most-used effect), Fractal Noise, Gradient Ramp, Drop Shadow, Offset, and Fast Box Blur.

After Effects CC 2017 new features

This is the second major release of After Effects 2017. The previous update 6 months prior brought the Cinema 4D renderer directly into the Composition viewer, one-click Adobe Media Encoder queueing, TypeKit integration, project templates, not to mention instant spacebar playback of raw footage and Team Projects.

Additions in this latest release imply more to come. Essential Graphics demands eventual support for 2D and 3D properties. Right-to-left text requires a more complete feature set to bring non-Western type animation to parity. Whether these remain a priority will depend on how the community puts the existing features to use.

What does After Effects even do? (March 3, 2017)

Ridiculous question, right? We all know the After Effects basics. I mean, depending on your point of view, of course. It’s a motion graphics app that can be used to create Hollywood-caliber visual effects, when you’re not using it to animate. It’s all about type choreography. Wait, no, really it’s about plug-ins so sophisticated they might as well be separate applications. Really it’s just a tool for video. One thing I know is, I wouldn’t ever use it to edit, I mean, except when I did.

For those who aren't fortunate enough to have someone walk them through After Effects personally (or, in my case, fortunate enough to have started with a beta version of After Effects 2.0 while working for George Lucas in the 1990s), there's a recently added course at Lynda.com (LinkedIn) to walk you through it: After Effects: The Basics. I'm qualified to tell you about it because I'm the one who created it.

What is it—not just After Effects, but this course itself?

Let’s face it, many tutorials, even the really excellent ones, quickly get into the weeds. You can spend 20 minutes to an hour watching a talented artist talk you through building up a scene. When it’s done, you may wonder, “how many more of these do I watch to get the basic idea?”

I can relate. I like audio apps but have never used Pro Tools. My audio designer pals have used it for everything from music editing to mashups, EDM to diegetic sound. I have so many questions about how they get their results that I don't even know where to start. So I created an overview that shows what the majority of people do with After Effects, in a few simple steps.

There's a notion, after all, that a tool like After Effects allows you to do anything. There's even some truth to that, but the statement doesn't pass the "mom and dad test." It doesn't clarify anything. The course is designed to help absolute beginners glimpse what makes After Effects the flagship motion graphics and compositing application, the Swiss Army knife of video.

How to get from start to finish with a shot

You need to know the simplest way to get a shot through the application, start to finish, but it's not something you can easily learn on your own. This course covers how to accomplish that in the very first lesson, which you can watch here (no subscription required).

Most shots require more than a few simple steps. The rest of the first section of the course is a deeper dive into each major section of the application. Topics include:

  • Starting a project
  • Using the Timeline
  • Organizing with layers
  • Controlling animation with keyframes
  • Using effects and 3D
  • Working with type
  • Rendering in After Effects

The focus is on what each section of After Effects is for. In all I cover ten individual areas of the application, and the activities you'll undertake in each of them. Why the Layer panel? It's the only place to add paint, make Roto Brush selections, or point-track a shot. In the bigger picture, it's also the "operating room" for an isolated element.

Responses to common questions

A beginner (who might also be an experienced film/video professional) has questions. What about After Effects makes it specifically good for motion graphics? Where in the app can I focus to master it most quickly (answer: the Timeline)? What are all those controls in the Viewer for? Suppose I want my keyframes to look more graceful, is that complicated? How does 3D even work? Do you have to be super advanced to get started animating type? (No.) Why is rendering still part of this application?

Maybe you already know the answers to these questions, in which case I bet others have asked you to answer them.

The second half of the course focuses on designing a single 10-second sequence. It includes steps one might consider advanced, such as creating and applying a 3D camera track. However, it's designed so that no single task requires more than a few simple steps. The focus, again, is to forgo workflow details at the start and get a sense of how it looks to keep things simple.

Who are you, and what do you love to do?

Any two accomplished After Effects artists might have little to nothing in common in terms of skills and interests. I learned I was an animator when I found I could spend hours refining motion that plays out in a few seconds, but compositing became my jam. Most of the compositors I've worked with don't animate. The filmmaker uses After Effects to take control of a scene and make realistic-looking changes to it. The motion graphics artist tends to go the opposite direction, bending screen reality into something we've never seen or imagined before.

And, let’s face it, After Effects is not always the best tool for the job. I have to correct anyone who refers to After Effects as an application for editing video; heaven help you if you actually try to use it beyond extreme short form edits. It does 3D, sure, but it ain’t Cinema 4D or Maya. It complements and sometimes competes with Photoshop (which also handles video, after all) and Illustrator.

After Effects Basics

When someone says you can do anything in After Effects, they also mean that it helps to know the toolset very well and to be more than a bit stubborn. That stubbornness pays off when you really want to do something and discover that the application presents a way of working through a shot that is (for the most part, anyhow) logical and expressive.

My overall metaphor for After Effects? It’s like having a camera that can shoot anything you can imagine, in motion. If that sounds overwhelming, it often is. With The Basics, I’m merely offering you the simplest way to get rolling.

Free After Effects video tutorial: Effectively track motion (May 24, 2016)


Check it out. When I've had the opportunity to work with less experienced After Effects artists, and even some experienced ones who should know better, I see people make fundamental mistakes with the built-in Tracker. Not only are these mistakes easily avoided once you know what to do (and what not to do), but addressing them will instantly and dramatically improve your results.

So why are we talking about the Tracker in 2016, when all of the other tracking tools in After Effects are, on the whole, more powerful and more fully automated? Believe it or not, it's still the most accessible and versatile solution, especially when it's all you actually need. The unique thing it does is generate actual X and Y pixel data, without ever leaving the Viewer, and that data can be applied to any keyframe channel, anywhere in the project.

In this lesson, we focus on the two things you would never know unless someone told you. First is how to set the track point's feature and search regions so the track has the best chance to succeed. Second, there is a specific way to apply that data that is almost always the right choice: you create a null object, assign the motion to that, and use that null as the basis for whatever will inherit the motion.

If you get these wrong, you're not going to have a good time with the Tracker. And while this tool involves more manual interaction than the automated 3D Camera Tracker, it also doesn't force you into working in a 3D environment if that's not what you want or need.
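To make the second point concrete, here is a rough data-flow sketch in plain Python (not the After Effects scripting API; the sample values and names are invented for illustration) of why routing raw track data through a null works: the null carries the tracked X/Y keyframes, and anything attached to it simply inherits that motion as an offset from its own start position.

```python
# Illustration only, in plain Python rather than the After Effects API:
# the tracked X/Y samples become the null's position keyframes, and a layer
# parented to the null inherits that motion as an offset from its own start.
track_data = [(512.0, 300.0), (514.5, 301.2), (517.1, 302.0)]  # tracked point, one sample per frame

def follow_null(child_start, null_positions):
    """Per-frame positions of a layer parented to the tracker null."""
    null_start = null_positions[0]
    offset = (child_start[0] - null_start[0], child_start[1] - null_start[1])
    return [(x + offset[0], y + offset[1]) for (x, y) in null_positions]

# A title placed at (600, 280) now rides along with the tracked feature.
print(follow_null((600.0, 280.0), track_data))
```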

This 5-minute, 4-second lesson can be viewed free in its entirety from the course After Effects Compositing 06: Tracking and Stabilization (the full two-hour course requires a lynda.com subscription—click here for a free 10-day trial that allows you to view the full course and series).

The Simpsons go live: an exclusive inside look (May 20, 2016)

Adrenaline. High drama. Zero failure tolerance. Beta software. Which one of these things is not like the other? This is the story of how even a flagship broadcast brand can take an honest risk once in a while, and a glimpse of how it all played out behind-the-scenes this past weekend, as relayed to me by one David Simons, co-originator of After Effects.

Now, software developers have tackled many exciting challenges since the era when coder Margaret Hamilton put humans on the moon and changed the notion of what humans could do, or be. After all, no one knew whether it could be done until the mission succeeded, and had it not, the failure would have played out on an international stage.

Which is more or less exactly the situation Simons and fellow O.G. After Effects computer scientist Dan Wilk found themselves in Sunday evening, in the broadcast equivalent of mission control: Fox Studios in Century City.


From left, Dan Wilk, Dave Simons, Allan Giacomelli and Adobe Sr. Strategic Development Manager Van Bedient celebrate a job well done.

(For the rest of the article David Simons is referred to as DaveS, a nickname he has held since the early "There's a 50% Chance My Name is Dave" days of After Effects, and Dan Wilk is Wilk, as he says he is typically called now that the team is full of guys named Dan.)

The moon-shot in question was an opportunity to improvise on live television for an audience of several million viewers—via animation—running on software that is technically still pre-release. Moreover, the feat had to be completed twice, once each for the EDT and PDT time zones.

How does an opportunity like this even come about?

The initial invitation came to Adobe a few months earlier, in February. The plan involved a sequence for the final three minutes of episode 595 (entitled “Simprovised”) that would be acted and animated in real time. The idea was to feature skilled improviser Dan Castellaneta, as Homer Simpson, responding to questions from live callers—real ones, who dialed a toll-free number—as a series of other animations played around him. The beginning and ending lines would be scripted but would still be performed in real time. The production team of the Simpsons had contemplated trying such a feat before, but it was only once Character Animator was in preview release that they felt that there was in fact a possibility of going through with it.

Technically, the challenge was unprecedented. The software wasn’t even designed for real-time rendering. In early tests, the team was not satisfied with the lip sync quality, so Adobe Principal Scientist Wil Li went to work overhauling the way phonemes (distinct units of sound in speech) were mapped to visemes (mouth shapes). “We dropped one of our mouths,” says DaveS, “added two more, and renamed one… ending up with around 11” (technically 15 total, adding in four corresponding exaggerated mouth-shape versions).  Translation of phonemes into mouth shapes created in a Photoshop file is at the core of what Character Animator does.

What worked?

Although the software can use video to determine facial expressions and body positions, in this case, contrary to what has been reported elsewhere, Castellaneta was only on a microphone. No video of his facial or physical movements was captured. “They decided they didn’t want Dan to have to worry about any performance other than what he was saying, no camera on him.”

For Homer’s body, longtime Simpsons producer David Silverman operated a keyboard with preset actions from the audio mixing room. Allan Giacomelli from Fox sat next to him, operating a second system, ready to take over if the need arose. The keyboard, DaveS explains, was also set up to trigger camera views (a wide and a close-up, which had to be rendered with separate source due to details like line thickness) as well as everything else that happened in the scene: Homer answering the phone, a series of other characters moving across the frame in cameos, and the big finale in which the walls of the “bunker” collapse to reveal Marge in curlers, gently burping Maggie on the couch.


Of course, none of this would be possible without extensive improv experience by the voice of Homer Simpson, Dan Castellaneta. (Photo: Getty Images)

One huge question was that of adapting the software to operate in real time. It was designed as a module for After Effects, which of course is render-only, and so real-time usage had not come into play other than for previews. "The first big challenge is that live lip-sync isn't as good as when your timeline can see into the future," DaveS elucidates. "This isn't just smoothing or interpolation, the software knows the odds of what's going to happen" by using the future information to derive the most likely mouth shape.

This, in turn, led directly to a second major improvement that was fast-tracked for Character Animator: "We made it so you can [use the future-looking lip-sync] mode live, with just a half-second delay." This is a special mode that won't be in the upcoming preview release of Character Animator, but which is likely to be in the next version. "Anyone who wants access to it can contact us to get into our pre-release program."
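As a toy illustration of the idea DaveS describes (this is not Character Animator's actual model; the phoneme names, mapping, and weighting below are invented), buffering roughly half a second of incoming phonemes lets the viseme chosen for "now" take upcoming sounds into account, instead of reacting only to what has already been heard:

```python
from collections import Counter, deque

# Toy sketch, not Character Animator's model: a short look-ahead buffer of
# phonemes lets the mouth shape chosen for the oldest moment in the buffer
# account for what comes next, which is what the half-second delay buys.
PHONEME_TO_VISEME = {  # hypothetical mapping, purely for illustration
    "AA": "Ah", "EE": "Ee", "OO": "Oh", "M": "M-B-P", "F": "F-V", "sil": "Neutral",
}

def pick_viseme(buffered: deque) -> str:
    """Choose a mouth shape for the oldest frame, letting look-ahead break ties."""
    counts = Counter(buffered)
    counts[buffered[0]] += 1          # small bias toward the current phoneme
    best, _ = counts.most_common(1)[0]
    return PHONEME_TO_VISEME.get(best, "Neutral")

buffer = deque(["M", "AA", "AA", "AA"], maxlen=4)   # ~0.5 s at a low sample rate
print(pick_viseme(buffer))            # "Ah": the look-ahead rides over the brief "M"
```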

The render systems for show day were the fastest available Mac Pros—two of them for fail-safe redundancy. Since Castellaneta had no visual monitor, a short delay was not a concern; it would be added to the 7-second delay needed to meet FCC requirements for a live event (to allow for bleeping of foul language—the closest thing to this having been the word "Drumpf" in the second airing, a subject widely anticipated beforehand and left in, unedited).

What could possibly go wrong?


Simpsons veteran David Silverman prepares for everything to go perfectly according to plan as showtime approaches. (Photo: David Silverman)

There were just three rehearsals with Castellaneta at Fox Sports in the two weeks prior to the show. A rehearsal with employees lobbing questions was recorded, and an international version was recorded earlier last week that was ready to be cut to as a backup.

The main concern was that the puppet being animated was “the largest we had in terms of memory by far.” The 478 MB .psd file’s 2659 layers, delivered last Friday morning, included not only Homer’s rig but the full animations of all of the supporting characters as well as of the scene itself. Optimizations had been done to make this one enormous “puppet” created by the Simpsons artists operate properly; it could have been rendered just as quickly if created as a set of layers, but “they were just putting everything into the Homer puppet, one enormous Photoshop document. It was working despite the possibility that it was just too big.”

The principal line of defense against unexpected surprises was to have two Macs set up the exact same way, each with its special keyboard to trigger the animations, with audio fed to both. “If one crashed, we could switch to the other Mac. If he was in the middle of a special movement, it might glitch” but the show would go on.


In the trenches: the X-Keys keypad with customized Homer-keys at the ready (Photo: David Silverman).

It wasn't until 4:00 pm on Sunday afternoon, one hour prior to airtime, that an issue emerged with the backup machine; "the gags were all running slowly. The main machine was running fine, but the backup one was clearly running seconds too late. The Macs were identical, so we were thinking, how do we know the main one isn't gonna suddenly bog down?" There was no way DaveS and Wilk could know for sure whether some previously unseen glitch would also emerge on the main system to prevent subsequent animations from being triggered, including the wall collapse that ends the episode.

“We had to go to air without knowing what was wrong.”

The backup machine wasn’t required for the east coast broadcast, which came off without a major hitch (if you look for excerpts on YouTube, you may see stuttering motion which has been confirmed as the result of faulty capture, and did not appear in the actual transmission).

That left the other shoe to drop in 3 hours. "We got some pizza, had a drink, and then went to work." After Wilk kicked around various unlikely theories with DaveS and Allan, he had a sudden flash of insight: "it had to be App Nap." This feature, introduced in OS X Mavericks, causes inactive applications to go into a paused state, helping to reduce power usage. As it turns out, App Nap had been disabling the software that ran in the background for the custom keyboard. "We used the terminal command that forces that machine to kill the feature." Problem solved!

Except… the question remained what to do about the main machine, which had performed fine, with App Nap actively running, in the first performance. “We decided if it ain’t broke don’t fix it.” The two developers were also able to confirm a lag on the main machine if it were left idling, which, naturally, it hadn’t been.

And so, the west coast broadcast came off without a hitch.

How will this change the way an animator works, or even what it’s possible for animation to do?

An odd parallel: After Effects & the Simpsons (as a Fox series) are the same age, give or take 3 or 4 years; more to the point, each has demonstrated staying power that has extended far beyond the odds (or the competition). While the cartoon was a juggernaut nearly from its inception in 1989, the software that debuted in 1993 was anything but; yet just like the show, it soon found its passionate fans, myself very much included in both cases.

Character Animator is an application in its own right, both in terms of its power and the learning curve involved to rig a character and make full use of it, and it appears destined to become that rarest of entities, a new desktop video application developed entirely within Adobe.

He couldn’t tell me about other series or studios that are interested in Character Animator yet, but when I asked DaveS what type of show he thought was a good fit for the technology, he named Archer, a dialog-heavy series with a clean straightforward artistic look.

"Straightforward" doesn't mean the character needs to literally face straight toward the camera, but each new angle requires a separate puppet rig unless a production is very clever about warping a flat-shaded character. While the Simpsons has been and will continue to be animated by hand in Korea, it seems it's only a matter of time before another major show adopts Character Animator—whether or not animation for live television becomes a bonafide trend.

Homer is ready for your call.

AE for Editors: Prolost Speedramp (May 6, 2016)

Speed ramps used to be rare, and before that, nonexistent. Slow motion has long been a part of cinema—it mirrors the psychology of how we recollect powerful or traumatic moments—but the ability to swoop gracefully in and out of slow mo belongs to the era of Phantom cameras and now, camera phones that can record 250fps or more.

NLEs have not kept up, so speed changes still tend to be roughed in until they can be finished externally, most often in After Effects. The result can be excellent, but the process of getting there can be confounding for an editor unused to After Effects' animation controls. That's why my past collaborator on feature films and a contributor to my own book, Stu Maschwitz, developed Prolost Speedramp.

Speedramp is something of an anomaly: it’s not a plug-in, nor even a script, but a piece of code that is applied to a clip as an Animation Preset. There may be other cases of Animation Presets offered for sale, but really it’s just a very clever expression doing the heavy lifting.

What does it do? Although it’s not a plug-in, it operates like a rather sophisticated effect. Once you’ve purchased and installed it (no demo possible with this one):

  1. In Premiere Pro, right-click a clip and select Replace With After Effects Composition.
  2. Scrub to the first transition point, where the first ramp (change of speed) occurs.
  3. Set a layer marker (Layer > Add Marker or use the keyboard shortcut displayed).
  4. Repeat for any following transition points (up to 4).
  5. With the layer still selected, Animation > Apply Animation Preset… and choose Prolost Speedramp.
  6. Prolost Speedramp appears in the Effect Controls. Adjust the Speed settings according to how many markers were set. Speed 1 is the rate of speed (as a percentage) prior to the first layer marker; Speed 2 through Speed 5 correspond to the sections of the clip after layer marker 1 through 4, respectively.
  7. Use the Transitions controls to set the duration of each transition (in seconds).

Preview the result, and you still have the option to make any adjustments using the markers and controls.

The true After Effects magic comes with those Transition controls; without them, you'd need to understand how to use the Graph Editor. You can still use it, but only to see the results; the adjustments are made via expression data that will look like gibberish in the layer controls. Not sure whether your footage was shot at a high enough speed to permit smooth playback? There are Calculator controls that don't affect playback; they just help you assess the speed settings needed to match specific frame rates.
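The preset itself is Stu's own expression and I'm not reproducing it here, but as a rough Python sketch of the piecewise time-remap math underneath it (with invented names, a single transition duration, and simple linear blends where Speedramp uses nicer easing), the idea looks something like this:

```python
# Rough sketch of piecewise time remapping, not Stu's actual expression:
# between boundaries the source advances at that segment's speed, and across
# each transition the rate blends linearly from one speed to the next.
def source_time(t_out, boundaries, speeds, transition=0.5, dt=1.0 / 240):
    """Integrate the playback rate up to t_out to find the source time."""
    def rate(t):
        r = speeds[0]
        for b, nxt in zip(boundaries, speeds[1:]):
            if t < b - transition / 2:        # haven't reached this ramp yet
                break
            if t <= b + transition / 2:       # inside the ramp: blend linearly
                mix = (t - (b - transition / 2)) / transition
                return r + (nxt - r) * mix
            r = nxt                           # fully past this boundary
        return r

    src = t = 0.0
    while t < t_out:
        src += rate(t) * dt
        t += dt
    return src

# The Calculator-style sanity check: 250 fps conformed to 23.976 fps can be
# slowed to about 23.976 / 250 = 9.6% before frames have to start repeating.
print(round(source_time(4.0, boundaries=[1.0, 2.0, 3.0],
                        speeds=[1.0, 0.25, 0.10, 1.0]), 3))
```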

There is one bit of trickiness here: once you’ve set the preset, the layer markers no longer match what you see in the viewer. The timing moves, but the markers don’t. If you move any marker other than the last one, the timing of all of them changes. This isn’t a bug, it’s the design of After Effects not to move the layer markers according to Time Remap settings. If you find yourself fighting this as you adjust the keyframes, delete Time Remap by twirling down the layer controls, delete Prolost Speedramp from the Effect Controls panel, and try again.

Prolost Speedramp isn't quite as easy as my iPhone 6, but it's close, and it lets me create more than just a transition in and out. The result is way, way better than slicing up a clip in an NLE to set multiple timings, and if you're an editor who doesn't have an After Effects artist ready and waiting to create a ramp for you, it's a DIY tool that is easily worth the nineteen bucks.

 

Three layer markers are applied to 250 fps source from an iPhone 6, for a target 23.976 fps result. In layman's terms, the shot goes from "fast, to slow, to slower—to regular."

Do Away with the Array: RE:Lens for After Effects brings clarity to extreme fisheye scenes (May 2, 2016)

RE:Vision Effects, best known for its Academy Sci-Tech award-winning motion-estimation-based effects including Twixtor and ReelSmart Motion Blur, demonstrated RE:Lens for After Effects at NAB 2016. I was their invited guest artist, demonstrating the technology in their booth, and this article goes beyond press-release coverage to give a first-hand account of what I learned working with this technology. RE:Lens goes far beyond After Effects' built-in (and now rather ancient) Optics Compensation effect: it works with accurate FOV data and converts extreme-wide-angle, or "superfish," images with fields of view commonly up to 280 degrees. The result can be used to produce a single undistorted image in virtually any format—16:9 all the way to ludicrous formats like ultra-wide 9:1—and there are also tools to round-trip to and from the 2:1 equirectangular format that is the standard for virtual reality on platforms such as YouTube 360.

Even if you've simply encountered frustration working with GoPro footage using Optics Compensation (whose FOV controls don't relate to real-world numbers, requiring instead that you measure straight lines to derive an accurate setting, as I have shown you how to do), you may find these plug-ins helpful. They also make it possible to work with footage from exotic lenses that have been out of bounds for the built-in effects that ship with After Effects. The demo video below includes shots derived from the 280-degree hemispheric Entapano Entanaya Fisheye lens, mounted on a GoPro via a RIBCAGE setup from Back-Bone Gear Inc. The entire setup including camera and plug-ins costs less than $1500 and was promoted in the RE:Vision booth.

Those are the basics, and you can stop reading here if just looking for industry news. What follows is what I learned from working on documentation, development and demos with RE:Vision about why handling of extreme lens distortion could have far-reaching benefits even for that most mundane of video productions, the talking-head round-table discussion.

Before we get into applications of these lenses, let's take a look at camera arrays, the current standard for capture of ultra-wide images all the way to a full 360-degree view. Keep in mind that arrays are useful for more than just VR: they can be used to cover a scene that a moving camera can't, and they provide the means for Matrix-style bullet-time effects.

Arrays are also a bit of a pain, for two main reasons. One is that they don’t provide a finished image; you have to stitch the results, which means averaging together overlapping areas of frames (which also happen to be the most lens-distorted areas of those frames, the edges). Many of the rigs are custom-built and don’t provide a means to sync color or frame capture, as can be learned the hard way with custom GoPro arrays. Short summary: arrays have to be fixed in post, sometimes expensively so.

The bigger problem with the array? In some ways it has thus far doomed virtual reality to be forever stuck in the wide shot, holding action 15 feet or further from camera. Sure, you can have the talent make sure to hit and hold a mark that is perfectly framed by one of the cameras and move in closer, but they’d better stay on-axis or risk being ripped apart in the resulting image. For an example of this, take a look at the fun Diamond brothers interview posted to this very site earlier this week. Use your mouse to turn the virtual camera around and take a look at interviewer Neil Smith standing below the rig, at this point in the video, right after the 2:00 mark—or rather, take a look at the fraction of his image that is displayed. Welcome to the future.

What if you could capture a scene, including close-ups, with a single camera, without the need for stitching? What I found most fascinating among our RE:Lens demos was not the VR capture and display process, but high-definition scene capture: covering a scene with a superfish lens with the intention of pulling standard, undistorted looking HD images out of it. Specifically, there are a couple of intriguing setups:

  • A 280º lens is attached to a camera lying flat on its back, pointing directly upward. The result covers the full 360º around the camera, up to the apex of the sky or ceiling (but omits the nearby floor, which, if you're looking at it, may mean it's a pretty boring scene). Forget about VR for a moment; imagine instead a conversation between 3 people seated around a table. With one camera on a stand at around head height, you can grab a single of each talking head that looks no different than if you had brought 3 cameras, and if a fourth person joins in, no problem.
  • A 180º Canon EF 8–15mm f/4L Fisheye USM lens is attached to a camera pointed at the action (the normal way of shooting, angled vertically). By removing distortion from a fisheye view, dimensional perspective is retained, so that when you animate a virtual camera in post, pans and zooms don't look like an old Ken Burns documentary; they appear to have the dimensionality of an actual camera in 3D space, as if the shot were captured with a crane or drone.

The resulting master image in either case contains no gaps or seams, and there is no stitching, color matching, or frame sync to manage. In the example dance scene that can be viewed starting at 1:51 in the product overview video above, the 180-degree image was captured with an 8K sensor, so even the softness that would be apparent with a 4K source is not an issue.


The After Effects virtual camera, animated in post. Covering a scene like this makes effective use of an 8k camera that might not otherwise make sense in an HD video production.
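RE:Lens works from real FOV data and calibrated lens models rather than anything this crude, so take the following only as a rough sketch of the geometry involved: an ideal equidistant ("f-theta") fisheye records a ray at angle theta off the optical axis at radius r = f·theta on the sensor, while a rectilinear (pinhole) view records it at r = f·tan(theta). Pulling an undistorted HD crop out of the fisheye amounts to remapping each output pixel back through those two models; the focal-length values below are assumptions for illustration.

```python
import math

# Rough geometry sketch, not RE:Lens itself: an equidistant fisheye maps a ray
# at angle theta off-axis to radius r = f_fish * theta, while a rectilinear
# view maps it to r = f_rect * tan(theta). To extract an undistorted crop,
# walk the output pixels and look up where each pixel's ray lands in the
# fisheye source.
def rectilinear_to_fisheye(x_out, y_out, f_rect, f_fish):
    """Map an output-pixel offset from the crop centre to the matching offset
    from the fisheye image centre (all values in pixels)."""
    r_rect = math.hypot(x_out, y_out)
    if r_rect == 0.0:
        return 0.0, 0.0
    theta = math.atan(r_rect / f_rect)   # the ray's angle off the optical axis
    r_fish = f_fish * theta              # where that ray lands on the fisheye
    scale = r_fish / r_rect
    return x_out * scale, y_out * scale

# Assumed values: a 1920-wide crop with a 40-degree FOV implies f_rect of about
# 2638 px; f_fish = 1200 px is a made-up fisheye focal scale. A pixel 400 px
# from the crop centre samples a point about 181 px from the fisheye centre.
print(rectilinear_to_fisheye(400.0, 0.0, f_rect=2638.0, f_fish=1200.0))
```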

Not deciding is a decision, and leaving camera moves to be solved in post is not the future of cinema, unless the future is pans and zooms from a static location (since moving a VR rig is problematic enough that I’m not even considering it here).

And at 4K, extreme wide shots will appear soft, as can be predicted with the simplest of math. 4K divided by 1920 rounds to 2; even dividing by 1280 only makes the field of view 3 shots wide. With a 360-degree master view, there are 9 shots to be extracted at a standard 40-degree view, and for all 9 to be HD, the sensor would need to be, let's see, 17,280 pixels across.

But that math allows the dance scene to look pretty great cropped to undistorted HD pans and even zooms, since 8K via a 180-degree lens is, in fact, enough.

If you’re starting to feel sold on the hemispheric fisheye/single sensor setup, be aware that there are currently a few associated annoyances and minor limitations that I haven’t mentioned:

  • The source image looks strange. If shooting a lot of takes, it's helpful to review and evaluate the fisheye-lensed images in full motion, without removing distortion from each one, but that takes a little getting used to.
  • It's difficult to keep the lens clean. Superfish lenses protrude, have a large surface area, and may not even have a lens cap, so you have to remember not to touch them and to wipe them every so often if leaving them exposed for a while.
  • Chromatic aberration is often significant near the edge of frame. RE:Lens includes a Chromatic Aberration effect to remove the color offset that occurs when lens and sensor aren't perfectly aligned, a consequence of the heavy light-bending near the edges.
  • About half of the image from a rectangular sensor contains no useful data whatsoever (see the quick calculation below). It would be fantastic to see a camera with a square (or even circular) sensor specifically designed for hemispheric lenses, since the image on the sensor is perfectly circular. By placing the sensor pixels only where light is actually hitting the sensor, the same amount of image throughput with a denser sensor could theoretically contain 50% or more extra useful data.
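A quick back-of-envelope check of that "about half" figure (my own arithmetic, not a number from RE:Vision): a circular fisheye image whose diameter equals the sensor height covers pi/4 of a square of that height, and a 16:9 sensor is 16/9 times wider than it is tall.

```python
import math

# Back-of-envelope check of the "about half" claim: a circular image whose
# diameter equals the sensor height covers pi/4 of a square of that height,
# and a 16:9 sensor is 16/9 times wider than it is tall.
useful_fraction = (math.pi / 4) / (16 / 9)
print(f"{useful_fraction:.1%} of a 16:9 sensor carries image data")  # ~44.2%
```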

Nevertheless, ultra-wide-angle lenses are a huge part of post-production today, and the set of plug-ins that make up RE:Lens makes working with these images possible in After Effects. For the time being, this toolset is After Effects-only; for Premiere Pro VR production tools you can also check out SkyBox Studio from Mettle, which was featured in their booth in the VR pavilion area at NAB 2016.

MOX and Open-Source Video (October 24, 2014)

Earlier this week Brendan Bolles launched an Indiegogo campaign to support a six month development effort that would result in a new open-source video format called MOX. I had the opportunity to interview Brendan about the project, and the full transcript of the interview follows.

MOX is intended to be a moving image format that belongs to no company, so that the code is completely transparent as with still image formats like JPEG, PNG or EXR. Unlike MXF, some of which MOX is likely to use as a container, it is not intended to be open-ended but instead limited to what will work universally on any platform (which in this day and age means Mac, Windows and Linux).

There are many potential users of MOX; if successful it could be as universal as the still image formats mentioned above, and it targets a similarly broad audience. The developer comes from the world of visual effects and understands what makes a format like EXR useful in that world, but he is also familiar with the problems of a prosumer format like QuickTime and how it fails to properly serve professionals, despite its ease of use and ubiquity. The idea is to get rid of problems such as files that won't write or play on a particular platform, or require a missing codec, or have technical issues such as inconsistent gamma that can't be solved by anyone except the company owning the format, who doesn't openly share details (hello, Apple).

MOX is not a new container or codec format that risks becoming obsolete; its main risk is lack of adoption or interest. It came about because the developer proposing it realized that he has access to a significant amount of code relating to container and codec formats that basically just needs to be put together in a usable, elegant form.

Once MOX has been created, it belongs to no one, and it doesn't rely on Brendan to update it, or on any particular company to remain in existence. It is designed to be accessed and updated by anyone, a model used extensively from GitHub to Wikipedia to OpenEXR.

The campaign is already over 50% toward its modest goal; extra resources help compensate Brendan to add more to the initial code, as detailed in the interview. The full transcript follows, and addresses many of the pointier questions associated with this effort.

 

Mark Christiansen: To begin, I just want you to tell me, how do you envision what you’re making being used?

Brendan Bolles: It’s really for two different audiences. Depending on who you’re talking to, you may describe it in two different ways. One would be an open source ProRes, so that would be for your video editor type people. The other people might be in visual effects, so you might say it’s like OpenEXR in a movie format. For the ProRes side, you say that the advantage of this is that you can use it in the same way you use ProRes and move video from one program to another or from one person to another. The main advantage is that because it’s open, it would be completely cross-platform. So you can go to Windows or Linux, and in Linux, to my knowledge, you don’t really have a good intermediate format.

The other problem is that people sending video back and forth often, if they’re using QuickTime, will see these gamma issues and other problems, but what are you going to do? Call Apple technical support and tell them your QuickTime movie has problems? Because of its closed nature, you just have so little control over what’s happening. You don’t have other tools that can independently examine your movie…it’s all through the Apple system.

So, for the ProRes people, that’s the main thing. We’re open because we want to be cross-platform, not just because we love open. If QuickTime was cross-platform and had API’s that could run on Linux and Windows and give you access to everything, then at least for that side we wouldn’t need to do this.

MC: You would still have the issue of the trouble you have looking under the hood when things go strange with QuickTime, like you were referencing with gamma issues, where something changes at Apple's end and they don't publish anywhere what changed.

Brendan: It's true. If it was closed-source you would still be at their mercy. That's the advantage of open. You can get to the bottom of a problem.

It's kind of like if you had a JPG, or a PNG. PNG is a good example because there's a gamma tag in it. In the past, some people have opened a PNG in a web browser and wondered why it looks weird, but because it's an open format you can open it in another program and get a handle on whether the file itself is bad or whether the program is reading it wrong. But because QuickTime always goes through their run-time engine, different programs are getting it from the same place, so you really can't do that. The openness allows you to go in there and see exactly what's happening as pixels are going from one place to another, especially if you have a programmer at your disposal. You can actually get to the bottom of a problem.
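As a concrete example of that point (my own sketch, not anything from the MOX code): PNG's chunk layout is public, so a few lines of Python can read the gAMA tag straight out of a file and settle whether the file or the viewer is at fault, with no vendor runtime in the way.

```python
import struct

# PNG's layout is public: an 8-byte signature, then chunks made of a 4-byte
# big-endian length, a 4-byte type, the data, and a 4-byte CRC. The gAMA
# chunk stores gamma multiplied by 100,000, so reading it directly is easy.
def read_png_gamma(path):
    with open(path, "rb") as fh:
        assert fh.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            header = fh.read(8)
            if len(header) < 8:
                return None                      # reached the end: no gAMA chunk
            length, chunk_type = struct.unpack(">I4s", header)
            data = fh.read(length)
            fh.read(4)                           # skip the CRC
            if chunk_type == b"gAMA":
                return struct.unpack(">I", data)[0] / 100_000.0

# read_png_gamma("frame.png") might return 0.45455, i.e. a 1/2.2 gamma tag.
```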

 

MC: Let’s talk for a second even more basically about the difference between a container format like QuickTime and codecs, which run along with that container format. Maybe you can just describe how those two are handled by what you’re proposing.

Brendan: I don’t know what the official terminology is, if there is such a thing. I think a format is a container plus codecs. Oftentimes you use them interchangeably. Like with QuickTime. When you say “QuickTime”, you often mean the QuickTime container and the codecs that QuickTime supports.


MC: Let me restate, since it sounds like you don’t agree with the framework of the question. I’m trying to elucidate what this is for someone who’s only peripherally familiar with intermediate codecs or all the problems that come up. Or even people who are happy with QuickTime.

So what you’re building is going to be in an existing file format. And it’s going to standardize and open the standards of a number of different codecs. So let’s talk about MXF as the alternative to QuickTime and why you chose it and what you see as the positives and any potential drawbacks of going with a .mxf file format.

Brendan: First of all, I’ll just say that I don’t think people really care what the container actually is. Nobody who uses QuickTime probably cares about the actual structure of how that format works. They just want it to work properly and hold video and certain metadata. So, the main thing is I didn’t want to create my own container because the whole basis of this project being able to be successful with just one person working on it goes back to the fact that I noticed there was all this technology that was already out there. There was already a container format that people had worked on for years and had been standardized and been used in productions, so we’re pretty confident it works. With all the codecs, many of them such as FLAC and Opus are already being used all over the world. For all the still image formats, like PNG, obviously we know they work. And then you have the Dirac codec by BBC that they’ve been using themselves. They put that out there as open source, but it hasn’t really found a home, because it hasn’t been adopted by the people who control the movie formats. It’s out there, and Apple could put it in QuickTime, but they just haven’t decided to.

So, it was all of this technology already existing that made me realize that this was a project I could do now. If I had to create any of those pieces from scratch…now you’re talking about a huge scope of a project.

MC: So essentially you’re assembling a bunch of pieces that are themselves already open source, but don’t all work together currently?

Brendan: Exactly.

For example, take Dirac from BBC and MXF. Apparently, at least at the BBC, they’re already doing that. So the natural question is, “why don’t you just say you’re making MXF with Dirac?” The thing is that MXF as just a complete container format is unrestricted in terms of what could possibly be in there. You get an MXF from certain cameras and it’s going to have MPEG-2, and a different camera might have MPEG-2 but then use different metadata and be packaged differently.

So the big idea for MOX is to not have it be open-ended. We want to use that container that works, but we actually want to limit what you can put into it. And that's the key to it actually working anywhere. By knowing these are all the codecs that can possibly be in a MOX file, you can have confidence that a MOX you get from anywhere will read in any actual MOX reader. With MXF, you just can't say that. In fact, if someone gave you an MXF with Dirac, I don't know if there is any commercial editing software that could read it. Something could probably read it, but it's really a roll of the dice. Even though Dirac in MXF is a completely legal thing to do.

So that’s why we can basically say, MOX is an MXF container, plus these certain codecs for audio and video, and further metadata that we’ll also standardize. MXF is also completely open in terms of metadata, which is not very helpful when you’re trying to exchange. On both sides, you need people to have a standard for how they’re actually going to communicate it. MXF is sort of like we’re using telephone copper wire, but then you need to know exactly what language you’re speaking to actually communicate. And again, that goes back to using these pieces, because I wouldn’t want to be making the entire phone system infrastructure. But, to get people to agree that we’re going to be speaking English and talking at a certain time is doable. In a sense, that hasn’t been happening yet, so it’s what we plan to do.      
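The codec names below are placeholders rather than a published MOX spec, but the restriction Brendan describes amounts to a whitelist that any conforming reader or writer can check up front:

```python
# Placeholder whitelist, not a published MOX spec: the point is that the set
# of allowed codecs is closed, so a conforming reader can accept or refuse a
# file up front instead of gambling on what happens to be inside an MXF.
ALLOWED_VIDEO = {"Dirac", "OpenEXR", "PNG"}   # hypothetical
ALLOWED_AUDIO = {"FLAC", "Opus"}              # hypothetical

def validate_mox(video_codec: str, audio_codec: str) -> None:
    if video_codec not in ALLOWED_VIDEO:
        raise ValueError(f"{video_codec} is not on the MOX video whitelist")
    if audio_codec not in ALLOWED_AUDIO:
        raise ValueError(f"{audio_codec} is not on the MOX audio whitelist")

validate_mox("Dirac", "FLAC")     # fine: any MOX reader must handle this pair
validate_mox("MPEG-2", "FLAC")    # raises: legal inside MXF, but not in MOX
```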

 

MC: From a user point of view, if I have MOX, what I have is what we would call an intermediate format. I can use it to openly exchange files, and within that I can set my own codec according to my taste. If it looks more like EXR, where it has jillions of channels and very little in the way of compression, that's one way it could go. Or if it's fairly elegantly but aggressively compressed for archiving purposes of, say, a lot of material that started out compressed, that's an option as well. So I can go small or I can go very complete. I have those kinds of options provided that I'm choosing a codec that is open source and therefore is part of MOX. Is that correct?

Brendan: Right. And for all of the codecs in MOX, that is the #1 rule. They have to be open source and patent-free. That’s another key thing to point out. It’s kind of very technical, but a lot of people will ask me, “why didn’t you just use H.264?”

For one, I'm not sure that it's actually well suited for this because that's really a delivery codec, unless you have hardware that can handle it in real time. But the other answer is that H.264, even though there's open source software that can read and write it, is patent-encumbered. If somebody just wanted to make a little utility that uses it, they'd have to pay licensing fees, and it gets much more complicated.

The BBC released Dirac as patent-free. They own patents on it, but then they said they were making it freely available to everyone. And that’s a key reason why this format is possible.

But I want MOX to be very similar to QuickTime. So you get a dialog that lists the codecs, and you can choose one based on your particular needs and the different capabilities, and then you just write a MOX, and on the reading end all those codecs will be the same. Any codec you choose will read in any other program. It should be pretty simple.

Actually, the way I'm envisioning the interface for that is kind of the opposite of QuickTime. In QuickTime, you pick the codec and then you go and see "millions of colors," "millions of colors+alpha," and all those various options. I was thinking, and this is purely an interface thing and has nothing to do with the file that will be written, that it would be good to do it the opposite way. You would just say something like, "I'm going to write 16-bit images with an alpha with lossless encoding." And then your codec list would just be a subset of what could actually do that, rather than scrolling through codecs to find the thing that you wanted.

So, for example, if you said you wanted floating-point, then it gives you OpenEXR as your only option.        

And writing the file is the same. The writer knows it’s writing that codec, and obviously you have to write it within the constraints of the codec.
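To illustrate that inverted picker (the capability table below is hypothetical, not MOX's actual codec list or feature matrix): the user states what they need, and the codec menu shrinks to whatever can deliver it.

```python
# Hypothetical capability table to illustrate the inverted picker: declare
# what you need (bit depth, alpha, lossless, float) and the codec list
# shrinks to whatever can actually deliver it.
CODEC_CAPS = {
    "Dirac":   {"depths": {8, 10, 12}, "alpha": False, "lossless": True, "float": False},
    "PNG":     {"depths": {8, 16},     "alpha": True,  "lossless": True, "float": False},
    "OpenEXR": {"depths": {16, 32},    "alpha": True,  "lossless": True, "float": True},
}

def codecs_for(bit_depth, alpha=False, lossless=False, float_pixels=False):
    return [name for name, caps in CODEC_CAPS.items()
            if bit_depth in caps["depths"]
            and (not alpha or caps["alpha"])
            and (not lossless or caps["lossless"])
            and (not float_pixels or caps["float"])]

print(codecs_for(16, alpha=True))            # ['PNG', 'OpenEXR']
print(codecs_for(32, float_pixels=True))     # ['OpenEXR'] is the only float option
```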

 

MC: I’m going to quote Adam Wilt from the MOX discussion on CML here, and the gist of his question was, “how is this really different from what MXF started out trying to be and then failed at being when it got too tied up with too many variations and other barriers to it being a truly open interchange?”

Brendan: I kind of use the cliché that “less is more.” We’re using the MXF container, not because we want to use all the features in it. And in fact, we’ll very specifically say, “you are limited to these features.” This might evolve as we are creating the format and we get feedback from contributors and all the people who are testing it out.

For example, MXF lets you have your media in a completely separate location from the actual MXF file. My intention right now is not to support that. Obviously, it can open up some abilities, but it also means you might have a file that you give to someone that won’t work without that media.  

MC: I would not be sad to see that structure go away.

Brendan: Right. On the other hand, people do use QuickTime reference movies, so sometimes people do like that. That’s where the feedback is going to come in. Again, my interest in MXF is not to use all of it, and in fact, I don’t think it’s comprehensible to use all of it. It’s mainly because I believe it can do what we want to do, which is just store audio and video codecs in a container format. Because it’s well tested, that’s the reason to use it. Again though, I don’t think anyone cares. If we hit some weird roadblock and for some reason MXF doesn’t work for our purposes, I don’t think anyone would really care if we booted it. I think they really just want a movie format that works. But I have no reason to believe it won’t work. 

MC: So when you have the file format that you've created, it's not an MXF file, it's MOX. And then under the hood, it provides whatever variations and options your API allows. It sounds like you're trying to create something that is transparent first, but also easy to use and less breakable than some of the other alternatives. It sounds like that's the goal.

Brendan: Yeah, definitely. With MXF, they really thought of a lot of different things for a lot of situations, like streaming, and I’m hoping to just get that for free. Again, if I had to come up with my own container format, that suddenly doubles the size of this project.    

The reason I even got that idea is that I made a WebM plug-in for Premiere, so I got to know that format. And WebM is similar to this. It's just a generalized container, but limited to a specific set of codecs. In that case, it's the Matroska container, using VP8 and VP9 for video and Opus and Vorbis for audio. So, it's really the same thing.

People were writing Matroska with the DivX codec and all these sorts of things, and if you got an MKV file, unless you knew exactly how it was written, you couldn't have a whole lot of confidence that you'd be able to read it. WebM is the same technology, but because it's restricted, you know that if you get a WebM file, you can read it in anything that says it can read WebM, because it can only be this one thing.

And again, I don’t think people really care that it’s Matroska or whatever. They just picked Matroska because it was open source, and was patent-free, and it had streaming as one of its design goals.  

MC: In the case of this, what people care about is that they’re using a format which won’t later on break or disappear. Obviously the biggest concern with any new format is whether it has the institutional backing to stick around.

 

So, let’s switch the conversation to your campaign and what it’s designed to achieve. And then we can talk about what real success would look like for this campaign and then beyond it.

Tell me about what you're looking for with the campaign. Who are you looking to have support it, and what would be the ideal outcome besides reaching your target goal…which, frankly, is pretty modest. By your own estimate, you cut your salary roughly in half relative to what it would cost to pay a top-level programmer like yourself.

Brendan: It's a weird situation to be in when you're talking about hiring yourself. Especially for something that's going to be for the good of the community, so that's why I'm bilking myself out of my full rate.

I did write something in response to Adam's comment, as it made me think about this. Besides being able to fund the thing, I think the reason this is so important to crowdsource is to have this little army, but hopefully a big army, of people who are interested. We're doing the kind of Indiegogo campaign that's like Kickstarter, where it's all or nothing. That's the beauty of it. Let's say we had this idea and couldn't get anybody interested. Then we'd know not to go ahead with it. It would show us this idea really doesn't have traction, and I could be glad we didn't spend six months working on it only to release it and have nobody care. If we get enough people on board, not only will we know that people are interested, but then we have people to help push it. We know they're kind of on the team.

The other thing is that they kind of share the risk. They're contributing money directly as well as time and interest. So, if I were to work for however long to make a file format, and it just flops, that would be six months of my life completely wasted for nothing. But here, we're distributing the risk as well. It just makes it more possible to do this without having some big company hiring people to do it. Unfortunately, I think this is the reason this hasn't been done before, because if you're a company and you make a video technology, usually you want to hold onto it and not make it open. That's the entire reason we have problems with these video technologies: they're not open.

 

MC: I can think of at least one very large video software company that lacks its own intermediate format. So, it seems to me that if I were in your shoes, one of my main goals would be to get a company like that to adopt this and fully support it. I see that explicit support for Adobe software is among your goals. Is that the reason behind it, or can you say a little bit more about what the big win would be in terms of going beyond your initial target, and also who might respond and contribute to get behind this?

Brendan: So, you're asking why I'm starting with the Adobe stuff and then moving to NUKE…the real answer is that I actually don't know if there's any other choice, because Final Cut doesn't have a plug-in architecture for doing this. Avid doesn't have one either. So, I don't think I really could start anywhere else. But then there's also the simple fact that it's the stuff I've been working on in the past. My expertise is with the Adobe APIs: After Effects plug-ins and Premiere plug-ins. I've written a few NUKE plug-ins, and that would be a little more new to me, but they also have an SDK that's publicly available. For whatever reason, Apple does not. They actually used to.

I've been researching this. You used to be able to make a QuickTime component. In fact, WebM has a QuickTime component, but those only work in QuickTime 7. When they upgraded everything to QuickTime X they dropped support for that. Now, somewhere there is support for that, because RED has a plug-in for Final Cut Pro X that will let you read RED files. To make that, they must have gotten access to some sort of secret APIs that Apple doesn't make public.

MC: So the gist of what you're saying is that you're dealing with a couple of companies that are very closed and already have their own formats, they haven't opened those formats up, and that's part of the reason you're creating this. And you're first working with the companies that already have the framework to let you do the work, to at least get your software working with them. From there, you envision an open-source project taking off. And at that point, it wouldn't just be that the whole world now relies on Brendan Bolles to keep this going. It exists and it's completely open, so you did your initial work based on what was contributed to your campaign. Maybe you keep working on it, maybe not, but it doesn't cease to exist based on the success or failure of your own company.

Brendan: Yes, absolutely. And also, that is a key part of open source. It’s kind of like Wikipedia. On Wikipedia, you write an article but then you might lose interest, but because anyone can edit that article it doesn’t rely on you to keep it going. Same thing with open source. It doesn’t rely on the people who originally created it to continue it on. It doesn’t belong to any one person.

Also, the reason for writing those Adobe plug-ins is that I think there are people who just want to move video from Premiere on their computer to Premiere on someone else's computer. If you just had a Premiere plug-in, I think that would be useful. Then there are many more people who want to be able to render a movie from After Effects on this computer and then move it to the editor over on another computer. Actually, After Effects can render out of Media Encoder, so even with just that first plug-in, After Effects would be able to render MOX files. So again, that workflow alone is useful. People will be able to use that.

And then add NUKE, that's another one. There are definitely people who move things back and forth from NUKE to After Effects and NUKE to Premiere. Just having those would make it useful, which would be okay. More importantly, if people can use it in their work then they can be testing it and we can know that it does work.

And then from there, once it’s shown to be working and if it’s easy to adopt then I would hope we could get some more people on board. Like the DaVinci stuff would be great. They have no plug-in API for that, so they would have to do it themselves. By having an easy C++ library like OpenEXR had, hopefully it lowers the barrier to adding support so that people just add it because why not? They’d think why wouldn’t we do this? For the companies that don’t have APIs, a lot of it is going to be some user lobbying to make that happen.       

  

MC: Two-part question – one, is there a standards committee that would ultimately play a role in further legitimizing this format? And the second question is, what other institutional player would be most helpful to you? That is to say, once they're on board, or you get a certain level of backing, you're confident you've created something that now has legs and it's safe for everyone to go in the water.

Brendan: The standards thing is an interesting question. Actually, I will probably want feedback on that. I've said, though, that OpenEXR is kind of the model that I follow. For one, because it ended up being tremendously successful in terms of adoption, but it's interesting because it doesn't have a standards committee to speak of.

It was created by ILM really without anyone else's input. They were just using it. And then they just sent it out into the world. Although they do collaborate, it's not like it's a SMPTE standard or anything. I definitely don't want to be the MOX monarch, but I've also seen how those SMPTE standards groups take forever to do anything. Again, I'm hoping to get the first plug-ins of this thing out in six months. I don't think any SMPTE standard goes that quickly. Just talking about what they're going to do takes much longer than six months.

To some degree that's part of the whole thing about this. It's sharing the risk. People are having confidence in me to get this thing done based on the work I've done, but then there is some risk, because people have to say they're going to give me some money, give me some time, and then in six months we'll see. If it was a company getting started that would cost millions of dollars, you'd be much more cautious about letting someone just try to do that. Hopefully, because we are so much more modest, people will be more likely to say, "let's see what happens."

I would like there to be some sort of inner circle…some sort of MOX board. I guess I kind of am the MOX monarch so I’d be in charge of putting that together.   

MC: It’s been my experience that with a crowd-funding campaign you do find yourself in conversations with people who now know about your project. So who would be your dream teammates? Who would you love to have show up to the table?

And as a quick follow up, on ProVideo Coalition you’re going to be talking to a lot of individual professionals. Plenty of our audience work at larger companies, but there are a lot of professional freelancers. So what are you looking for from that level of backers?

Brendan: I definitely want to have input from all of those various types of people. You’d want to have people who are purely artists, like a video editor. Then you’d also want to have people that were working on DaVinci, that were purely on the technical side and people from the software side. I would want to have both industry and user side represented there.

In terms of what those professional freelancers are going to get out of this, when I first started thinking about this I went to Kickstarter and searched for "open source" because I wanted to see other examples of this. And there are shockingly few that are only software. There are a ton of things that are open-source hardware, probably because those are so much easier to pitch. You say, "give me money, we make this thing and you get this physical thing." So you're basically buying a thing.

But for this, because the thing that we're making isn't a physical thing and it's going to be free, I think it's a much purer form of crowd-funding. You're putting your money in because you say, "I want this thing to exist." People can say their $20 doesn't make a difference, but if nobody puts $20 in, then it's not going to happen at all, and if we all do it, then it will happen. So I don't have an easy sell in terms of what people are going to "get."

There are some limited perks though, and I said I'll give credits in the documentation, so if it takes off people can look back and show they were involved when it was just a twinkle in someone's eye. And I've actually already had a few people contribute $1,000 so they can have their logo or their name displayed on the website. But that's really all I can offer in terms of perks.

The thing about it that's interesting to me, though, is that this is really the original idea behind things like Kickstarter. It wasn't supposed to be for product people, because there was already a pipeline for getting products developed and made, where you usually got a loan and investors and that sort of thing. Kickstarter was originally for someone like a sculptor or artist, and it gave them the ability to tell their fans that they wanted to make some large, expensive piece that they didn't have the money to front. So the idea was that these fans would give money just to see something exist that they wanted to see. The biggest reward you would see would be a private party or something like that when it opened.

 

MC: You’re just saying, do you believe in this, and if you do, remember that I’m putting the time and sweat into making it. Therefore, you vote for me doing that by backing me to pay my rent and eat while I do that.

Brendan: Yes, exactly.

Obviously I hope this gets funded, because I'm personally another person who really wants to see this project happen. Taking less pay is my contribution.

MC: Let's talk about that for a second, because we already established that you undervalued yourself by about 50% right from the get-go. So let's suppose this really takes off. You're adding stretch goals, and it sounds like you have some more in mind, and I think that's a great idea. Essentially, is your plan to just keep adding to your own time and investment in it if you go beyond your goal?

Brendan: Yeah, that's the idea. There are a few more possible plug-ins that aren't on that list, and the more plug-ins we can work on from the get-go, the more use cases we have and the more likely we are to run into any problems early. For example, you can actually write plug-ins for the RV player. So, if we were able to add that to the list, that thing probably has different sorts of requirements from something like After Effects or NUKE, where you're just getting a frame. By having more of those things all bubbling up together, the overall process of getting everything final will slow down, but we'll be running into problems earlier and we'll be able to get more users testing more stuff.

So yeah, that’s basically what the stretch goals are. It’s me saying if we add this much more time, then I can make another thing. And then that will help us get more of a critical mass where you can say not only can you move it from Premiere to Premiere and After Effects to NUKE, but now you can play it in RV…  

MC: For sure. It's funny, you're taking me back to that '90s kind of feeling, when you would just see a new format or codec appear in a pulldown with new software. In a way, it's been a long time since that's been a normal thing. It's kind of been the tried-and-true stuff for a while. It's kind of true that it's a name-recognition thing, partly. Okay, maybe there are various MacGyver ways to install it in many more places than are initially supported, but on the other hand, if it's just there in the pulldown, you're going to go "hey, what's this?" Which is what we always used to do.

Brendan: Right. And one other thing that I think is interesting is that there are these various video compression technologies out there like Dirac, which I was surprised to see, because the BBC probably spent millions to make that, it's free, and nobody is really doing anything with it. But then there are these other image compression technologies that really just exist as a little piece of code in a sample command-line executable. So there's often a lot of stuff out there that could be really useful, but because they can't get Apple or Avid to include it, we never see it, because that's the only way people can take advantage of it. So I think it would be interesting to see that.

For example, OpenEXR just recently added another codec. It's actually a high-quality lossy codec, kind of like ProRes but for floating-point, which is great timing because we're totally going to love using that. Being open is going to allow us to adopt new technologies if they become available and if they add something. We're not going to add a codec just because it exists. But if you have a new codec that's higher quality or faster or just adds something, then new things can be added. The key is that you just never drop support for the old stuff. And open source makes that possible.

With OpenEXR, it’s fine that they add new codecs. You want them to if it improves the format. The important thing is that they can never drop an old one. But because you have all of the code right there, even if the people who run EXR decided they were going to drop it, they couldn’t force anyone else to drop it because they don’t control it anymore.
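To make the "add a codec, never drop one" point concrete, here is roughly what choosing a compression codec looks like with the real OpenEXR C++ RGBA interface, assuming the OpenEXR headers and library are installed. The writer selects a compression in a single line (DWAA standing in for the newer lossy codec mentioned above), and readers need no change at all. A minimal sketch, with error handling omitted:

```cpp
#include <ImfRgbaFile.h>
#include <ImfHeader.h>
#include <ImfCompression.h>
#include <vector>

// Minimal sketch: write an RGBA image with OpenEXR, choosing the compression
// codec in one place. Swapping DWAA_COMPRESSION (the newer lossy codec) for
// ZIP_COMPRESSION or PIZ_COMPRESSION is the only change a writer ever makes;
// readers need no change at all.
void writeExr(const char* path, const Imf::Rgba* pixels, int width, int height)
{
    Imf::Header header(width, height);
    header.compression() = Imf::DWAA_COMPRESSION;

    Imf::RgbaOutputFile file(path, header, Imf::WRITE_RGBA);
    file.setFrameBuffer(pixels, 1, width);   // xStride = 1 pixel, yStride = one row
    file.writePixels(height);                // write every scanline
}

int main()
{
    const int w = 256, h = 256;
    // A flat 18% gray test image, just to have something to write.
    std::vector<Imf::Rgba> pixels(w * h, Imf::Rgba(0.18f, 0.18f, 0.18f, 1.0f));
    writeExr("gray.exr", pixels.data(), w, h);
}
```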

 

MC: Are there ever interim updates required to just keep a given codec running? So basically you’re saying once it’s in there it’s in there?

Brendan: Yeah, I mean, the most that could possibly happen is, say, you get a new version of Xcode and some little snippet doesn’t compile anymore. But then you just fix it and you’re fine.

And that's why open source is so important, because maybe someone else is using a different compiler than me, and with closed source all you can do is report that the library isn't working, or there's a bug here, or it's not linking, or any of these other technical things. With open source they can fix it, and then they can send the changes back upstream so everybody gets those changes. And that's the important thing about open source.

And I've contributed to OpenEXR, for example. I had that exact problem where, to build Adobe plug-ins, I use an older version of the compiler, but when they have new stuff coming I download it and try it out, and when I find problems I send the changes back up. Otherwise, the alternative is that there's one person who's supposed to have access to every single platform and build the library for every platform.

MC: So it sounds like OpenEXR is, in a pretty major way, a great model for what you're trying to do. Are there any flaws with EXR that you want to change, or is it the benchmark for success, something you'd love to see MOX achieve?

Brendan: I am a huge fan of OpenEXR and I really don’t have anything to mention about it as a flaw. For one thing, I’ve never seen anything better from the technical/programmer side. I’ve never seen a better written library than that. It’s so well written and so clean. It just does everything right. In fact, I’ve learned a lot of my better C++ techniques really just from looking at that library. 

But then the fact that they use multiple codecs, and the fact that the format has changed over time, is amazing. For example, when it first came out it was scanline-only, but certain people wanted to add tiles and mipmap support for things like textures and for bucket rendering. So they added that. But they made it so that everyone who had written a reader using just the simple scanline interface could keep using the library, which would do everything for you under the hood. So you just had to get the new library that knew about tiles, recompile, and everything just worked.

And that's the one thing I've learned from writing these other movie formats as a programmer. When I was writing the WebM plug-in, I had to learn everything. I had to learn how to write the container, and I had to learn how to program the codec. With EXR, though, there are all these codecs and you don't have to know anything about them as a programmer. You're just sending it pixels; the codec can go and do whatever it wants with them. Same thing with reading. You don't have to know anything about how that codec works.
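The reading side he's describing is the standard OpenEXR scanline interface. A minimal sketch, again assuming the OpenEXR library is available and with error handling omitted, looks roughly like this: the caller never touches the codec, and the same calls work whether the file was written scanline or tiled.

```cpp
#include <ImfRgbaFile.h>
#include <ImfArray.h>
#include <ImathBox.h>
#include <iostream>

// Minimal sketch: read any RGBA OpenEXR file through the simple scanline
// interface. The caller never knows (or cares) which compression codec was
// used, and the same code works for scanline or tiled files; the library
// handles all of that under the hood.
void readExr(const char* path, Imf::Array2D<Imf::Rgba>& pixels, int& width, int& height)
{
    Imf::RgbaInputFile file(path);
    Imath::Box2i dw = file.dataWindow();

    width  = dw.max.x - dw.min.x + 1;
    height = dw.max.y - dw.min.y + 1;
    pixels.resizeErase(height, width);

    file.setFrameBuffer(&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
    file.readPixels(dw.min.y, dw.max.y);
}

int main()
{
    Imf::Array2D<Imf::Rgba> pixels;
    int w = 0, h = 0;
    readExr("gray.exr", pixels, w, h);   // e.g. the file written in the sketch above
    std::cout << w << " x " << h << "\n";
}
```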

It's kind of funny to me how with movies it's often not like that. For example, when you're writing a movie you're encoding video and audio, but often you put some video frames in, you put some audio in, and they don't instantly spit out an encoded video frame and audio frame. That's just the way encoders work. They often have to get a series of frames before they're ready to produce something.

So it's a real pain in the neck to have to think about that when you're writing the movie. You have to think about how you have to buffer it, when all you want to do is just say: here's video, here's audio, you take care of it.

MC: Right, multipass, etc…

Brendan: Yes. So from doing it with EXR, it was just very clear to me that EXR was so simple, and movie formats should be like that too. And then you can write little modules for new codecs. You shouldn't have to make everybody learn the new codec, where you have to get this from here and put it in there…they shouldn't have to know anything.
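The MOX API doesn't exist yet, so the following is purely a hypothetical sketch of the "here's video, here's audio, you take care of it" writer he's describing: the frame buffering that real encoders need lives inside the writer instead of in every host application. All names and types here are invented for illustration.

```cpp
#include <cstdio>
#include <deque>
#include <vector>

// Hypothetical writer API: the caller just pushes video frames and audio,
// and the writer owns the buffering that real encoders need (e.g. collecting
// a group of frames before any compressed output can be produced).

struct VideoFrame { std::vector<unsigned char> pixels; };
struct AudioBlock { std::vector<float> samples; };

class MovieWriter {
public:
    explicit MovieWriter(int framesPerGroup) : groupSize_(framesPerGroup) {}

    // The only calls a host application needs to make.
    void addVideoFrame(const VideoFrame& f) { videoQueue_.push_back(f); maybeEncode(); }
    void addAudio(const AudioBlock& a)      { audioQueue_.push_back(a); maybeEncode(); }

    // Flush whatever is still buffered when the movie is finished.
    void finish() { encodeGroup(videoQueue_.size()); }

private:
    void maybeEncode() {
        if (videoQueue_.size() >= static_cast<size_t>(groupSize_))
            encodeGroup(static_cast<size_t>(groupSize_));
    }

    void encodeGroup(size_t count) {
        // A real writer would hand these frames (plus interleaved audio) to
        // the codec module and write the resulting packets into the container.
        std::printf("encoding %zu buffered frames, %zu audio blocks\n",
                    count, audioQueue_.size());
        videoQueue_.erase(videoQueue_.begin(), videoQueue_.begin() + count);
        audioQueue_.clear();
    }

    int groupSize_;
    std::deque<VideoFrame> videoQueue_;
    std::deque<AudioBlock> audioQueue_;
};

int main() {
    MovieWriter writer(8);                  // buffer 8 frames per group
    for (int i = 0; i < 20; ++i) {
        writer.addVideoFrame(VideoFrame{});
        writer.addAudio(AudioBlock{});
    }
    writer.finish();
}
```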

 

MC: So we touched on Adam Wilt's concerns, but others are wary of adopting a codec that we can't be sure will be supported for many, many years to come. We've already seen other supposedly reliable codecs disappear. And some think it would be far better for Adobe to bite the bullet and issue something, or perhaps to make this whole effort an official Adobe project.

Brendan: I would actually disagree that it would be better to make it an official big company project. And I think OpenEXR is a good example. That was made by ILM which is a company, but they’re a company of users. They’re the users of the format, and they’re not trying to profit off the format. I’m a big fan of Adobe, but then you’re asking the question in terms of who do you trust more, Apple or Adobe? But when it’s open sourced you don’t have to trust anybody. That code exists, and it can never not exist.

This is the development model Git uses, and why I'll be using GitHub, and why Linus Torvalds, the creator of Linux, made Git. Because it doesn’t require you to trust anybody. He’s also said that he never has to back anything up, because everything he does is spread all around the world. So even though everyone thinks of him as the king of Linux, technically he has no more authority than anybody else. It's just that people trust his thinking.

So if I make this thing and then I start being untrustworthy, it wouldn't really matter, because people would already have the code and there would be nothing I could do to it. And if I came out and told everyone to forget this new codec, it wouldn't matter. I could tell everyone that the plug-ins I uploaded don't have that codec anymore, but everyone could just say they weren't using those and could use the ones they built, which still have what they want in them.

MC: Right, but the other thing you might want to address is the fact that the example used is about a codec. In other words, there may even be confusion that what you're creating is in fact a codec.

Brendan: That’s a good point. People say that, and I am not creating any codecs. If I was creating a codec, I would have to raise hundreds of thousands of dollars.

I mentioned this in a tweet. When Google was making WebM, they didn't have an open-source, patent-free video codec to use, so they bought a company for 124 million dollars. So apparently that's what it costs to make a video codec. And we're not going to do that.

So, not only would I refuse to use some sort of closed source video codec that could possibly break in the future, but I’m unable to do that. That’s the nature of open source. It cannot be taken away from you, because it’s not under anyone’s control.

And hey, when Microcosm came out years ago, people were seeing this sixteen-bit lossless codec in QuickTime and thinking how great it was. Even if Microcosm had been open source, it would still have problems, because QuickTime has eliminated the ability to make codecs. So it was subject to this closed-source thing that it was plugging into. Of course, it wasn't open source. If it had been open source, we'd at least be able to continue supporting it as a codec. If the operating system had changed or if the company lost interest, we'd be able to keep that going. But because Apple controls the container, we couldn't even do that. But with MOX, everything is open. The codecs and the container. The software that makes all of those things. There's simply no way that it can be taken away from you.

And I think that’s really important, because like I said in the MOX video, imagine if you had a digital camera, but instead of JPG it had its own file format. With RAW you kind of have that, but at least Adobe has figured out how to read all of those. But even that’s a little scary because Adobe could decide not to read them anymore. Anyway, imagine if you had a camera that didn’t write JPG or RAW that Adobe could read. It just wrote its own crazy format and you could only read it using their software. That would be kind of terrifying. The idea that you had taken all these photos and then years from now…imagine if there was some crazy technology where you couldn’t even convert it to something else. That’s terrifying to me. The idea that your work is trapped in their format and something could happen that would shut it off. Especially when you might look at it fifty years from now. I mean, who knows what could happen. That company could just be gone, and it’s not going to run on that hardware or software in fifty years. Everything is going to be different.

With open source, though, it will always be able to be carried forward. And by being open like a JPG, you just know that you can read it forever. And even if Photoshop stops doing JPG, any other program could pick it up. So that's what terrifies me: having your work trapped in someone else's file format. And it's just not necessary. It was necessary once, but now that all these pieces have been released as open source, it's no longer necessary. We just have to put the pieces together. So much of this work has been done by so many people over the years, and we're just pulling it together in a nice little package.

 

MC: So you've demonstrated really well why you're a guy who's comfortable taking that step, and you're doing us all a favor by taking it this far, and your campaign is doing well. Anything else you wanted to throw out?

Brendan: For one, anyone who has these concerns can post them on the Indiegogo page. You can tweet me @moxfiles. I’m totally down to discuss these things. And I’m not saying these concerns are unfounded. And I also can’t say that everything will work out as we hope. But I think it’s in a place where the risk is low enough and the rewards will be high enough that it’s worth doing it and finding out.

And again, I really like the fact that OpenEXR was kind of made in production. I feel like a lot of these standards are things you sit around and talk about. I've been in these meetings where it's people just talking and talking about what would work or what wouldn't work. I much prefer to just try it. Let's get a thing working.

MC: It does seem to me that one difference between what you’re doing and OpenEXR is that ILM kind of threw down and said they were going to use it, no matter what. Even if nobody else could see why it was so cool, they were going to use it and they were going to be around for at least a few more years. So it seems to me that your ideal backer is someone who can commit at a similar level.

I mean, having talked to you about this, it really does sound like this will have legs, and once it's in use anywhere it will have the means to continue to be in use everywhere. But honestly, the one thing is that there just has to be enough interest that there is more than one guy who is going to keep this going, and there's somebody who has said, "Yeah, you know what? We're going to use this, so we're invested."

Brendan: Absolutely. I'm hoping that the initial release gets some traction. If you look at OpenEXR, now it’s been contributed to by people from Weta, people from DreamWorks. So that’s definitely where I’d like to be.

Really, I’d like it to be so that at some point I didn’t even have to be involved because it would just be its own thing and I just helped create it. That’s really where it should be.

And like you said about ILM, they came out and said even if nobody else was going to use it, they were going to. But how many employees does ILM have? It's quite a few, but if we had several hundred backers who were all using it, that would be roughly the same size. I think you could potentially say, "we have this many people using it," but that's not where we want it to end.

MC: Well, there is this institutional backing thing, and maybe it's just psychological, but in your case, let's just say a bunch of the VFX studios at the high end decide to say, "You know what? We'll move away from still-image formats, and from using those so much, because this actually solves many of the concerns that we've had with archiving."

Brendan: Yeah. By the way, I should say that I'm not hoping to displace still-image formats. I think still-image formats are totally great for certain things, and if MOX had been invented first, you'd ask, well, now with a render farm, how do I render each frame?…you'd have to invent the still-image formats to do what you need to do on a render farm.

So to me, it's not that I want to replace still-image formats, but there are times, like with OpenEXR, where you want to store this floating-point imagery but you also want to have it with sound.

MC: Yeah, and it's easy to think that over time there's more risk of that sound file and that visual file becoming so divorced from one another that they never find each other again, simply because right there you've thrown down and said those are two separate files. We've all had it happen. You can't remarry them because they weren't named the same way, or one of them goes corrupt, or whatever. I mean…

Brendan: Sometimes, people just want to make movies.

MC: It is moving images after all. So in a way, the still sequence plus audio file has always been a workaround. It’s just in that industry everybody is so used to it that nobody thinks twice about it.

Brendan: It's true. And also, the big studios have full-time people running around to make sure that the right audio file gets loaded with the right frame sequence, gets played at the right frame rate, and gets the right color-space treatment. So if you can have a full-time person, you can hope to babysit that system. I think the people who like this the most are the smaller shops, especially in advertising. They're the people I hear from the most who want to make movies, and they're kind of more averse to image sequences. I think it's because they're moving things around themselves, and they don't have a big infrastructure.

MC: Not to mention that as you get more into graphics there's less patience for a format that you can't just click to play.

Brendan: Right.

The post MOX and Open-Source Video appeared first on ProVideo Coalition.

The VFX Business in 2014: an interview with Scott Ross, Part Two https://www.provideocoalition.com/the-vfx-business-in-2014-an-interview-with-scott-ross-part-two/ https://www.provideocoalition.com/the-vfx-business-in-2014-an-interview-with-scott-ross-part-two/#respond Tue, 10 Jun 2014 02:57:32 +0000


At the close of last week, Sony Pictures Imageworks (SPI) announced it would be moving its headquarters from Los Angeles to Vancouver. While inevitable to many visual effects industry observers who understand the role of subsidies in moving VFX work away from California and the U.S., this certainly marks a watershed moment, as none of the "big eight" studios is now located in southern California.

For some time there have been fewer than ten big studios responsible for the vast majority of feature film visual effects work. Of those, three were located in the Los Angeles region: Rhythm & Hues, Digital Domain, and SPI. With all three now gone from California, their staffs have either moved with those companies where possible, or left the business altogether. Digital Domain, like SPI, is now Vancouver-based. Rhythm & Hues no longer operates.

The problem for workers in the field of feature film visual effects is that the move is not the result of a new talent pool arising in places such as Vancouver to compete with California, but rather the relocation of workers formerly in the U.S. to foreign countries that are able to lure Hollywood productions using subsidies. Those subsidies do not, for the most part, benefit visual effects companies, but rather the Hollywood studios who are their clientele. These companies are compelled (forced, you could say) to move headquarters to wherever the most generous subsidy is offered, or lose the work to a competitor that is willing to do so.

In part one of my interview with Scott Ross, we discussed the commodification of individual roles within the industry. For part two, the topic is an effort by members of the VFX community to challenge the legality of foreign subsidies. In other industries just as critical to the U.S. economy as entertainment, foreign subsidies that cost jobs in the U.S. while benefitting U.S. companies have been found to be in violation of World Trade Organization (WTO) rules.

When such a violation of WTO rules is ruled to have occurred, “countervailing duties” (aka CVDs) are the mechanism by which the situation is remedied. In the case of Hollywood and the visual effects business, the question is whether footage produced by foreign facilities should be subject to import duty when it is delivered to American studios.

The effort to assert that Hollywood should indeed have to pay CVDs, and would in fact be liable for substantial duties for the entire period during which it has sought and gained subsidies from foreign competitors to U.S. effects facilities, is being led by Daniel Lay, a.k.a. “VFX Soldier.” Our conversation picks up where the last one ended, and took place prior to last Friday’s announcement by Sony.

Mark: Your perspective is provocative as usual.

Scott: That’s why they keep me boxed up and don’t let me near Hollywood anymore! Neither side likes to hear it like it is though. I could be wrong, but it seems like the logical progression. The only hope that I see is what Daniel Lay and the Association of Digital Artists, Professionals and Technicians (ADAPT) is trying to do with CVDs.

The CVDs are non-political. It either is or it isn't. You either pass the litmus test or you don't, and it's not a decision that could be overturned by a senator, governor or president. It's a non-political issue. It's a digital issue. You either qualify or you don't. And those decisions aren't made on a political basis. So if in fact Daniel Lay and ADAPT and my compatriots can prove that the United States visual effects industry has been harmed, and it's been harmed by unfair duties and tax subsidies from foreign nations, then the United States, by law, needs to levy duties on those that are taking advantage of it and harming this domestic industry. And if that's the case, if that case prevails, I think the motion picture studios are going to pay some very large sums, and it's retroactive too, so whatever harm they're liable for over the past few years or decade will also have to be paid. And that would be some serious money.

…if the case prevails, I think the motion picture studios are going to pay some very large sums… 

Mark: It’s punishing, but it doesn’t help the victims of the policy directly, other than to change the policy. I have to think this would be tied up for years in the courts though.

Scott: It's really an "on" or "off" thing. The question is, do you qualify, do you have the number of people, can you show the industry has been harmed, can you represent this group of people legally? I think the legal counsel that is working for ADAPT is telling them that it's going to be a year and a half to two years, that it's not longer than that. There are precedents, whether it was in the shrimping industry or the lumber industry or whatever else. That's the length of time the investigation took.

One other thing to look at is what's happening in the climate as well. Since the time of Life of Pi, the march on the Oscars and the "piece of the pie" day, additional VFX companies have gone bankrupt, and the subsidies, and the number of countries participating in them, have gone up.

Every once in a while there's a story like: Detroit has done some analysis and realized that for every dollar spent on a tax-subsidy film incentive, they wind up with a 60-cents-on-the-dollar return. So it's a losing proposition. I think Carolina did it, Louisiana did it, and they're even talking about it in California. There's information coming out that substantiates that film tax subsidies paid for by the taxpayer are not a good thing. But even while that's happening, we're seeing a bill being introduced in the state legislature in California which is asking for more tax subsidies for the film industry! It's being sponsored by Democrats, and I'm a left-leaning person, and they're saying they're helping the worker. But they're helping the 1%, because those subsidies are going to the studios, and those subsidies are being offset by shooting in LA. The VFX component is minimal though.

It’s crazy, because California is a state where roads are falling apart, teachers are paid awfully, police and firemen are getting laid off…and the legislature is saying “let’s give more money to Dreamworks.” Give me a break.

It does help production, but in the end these things finance the mismanagement of studios, with their inability to control costs and inability to make good product. So if you're Paramount management you need to either get a whole lot smarter, make better movies and run the studio like an actual business…or rely on taxpayers' money to pay for your foibles, mistakes and stupidity. There's some trickle-down, where there will be more films shot in California. But in reality, that means everyone else just ups their ante, so the only people who come out on top are the people who run the studios and are making millions a year and flying all over the world in their jets. It's the ultimate crony capitalism. Democrats are trying to convince the populace that these are good things because they're giving money to the gaffers and grips. But it's clear where that money is really going.


Click here to check out Part 1 of the discussion

 

 

The post The VFX Business in 2014: an interview with Scott Ross, Part Two appeared first on ProVideo Coalition.

The VFX Business in 2014: an interview with Scott Ross, Part One https://www.provideocoalition.com/the-vfx-business-in-2014-an-interview-with-scott-ross-part-one/ https://www.provideocoalition.com/the-vfx-business-in-2014-an-interview-with-scott-ross-part-one/#respond Fri, 30 May 2014 08:25:05 +0000


A little over a year ago we ran an interview with Scott Ross regarding the state of the visual effects business on ProVideoCoalition. I was fortunate enough to catch up with Scott recently for a conversation about what, if anything, has changed in the VFX business since then.

What follows is the unedited transcript of the first portion of our discussion, about where the business is currently headed. The thoughts articulated are provocative to anyone working in the business today, and hoping for any kind of return to the “good old days” of visual effects, which is now typically the largest line item on the budget of any tentpole Hollywood film.

Spring of 2013 was something of a watershed moment for the VFX business, with the Oscar winners from Rhythm and Hues abruptly played off-stage just as they began to describe how it could be that they could simultaneously be accepting the award and preparing for bankruptcy. Just outside the ceremony hundreds of protesters organized to stage the first (but not the last) Hollywood VFX protest march.

We began the discussion talking about whether it would even be productive to host an industry round-table discussion on the current state of things, given how difficult it is for anyone in the industry to be open about what’s really going on.

Mark: It's a challenge to get anyone to go on record about whether the protests and organizing that began in earnest last year really fundamentally challenge anything, or even about what needs to change.

Scott: That position would be, "yeah, the industry is screwed up. There are problems that we're all aware of, and it's the reason I started my company and we're doing things outside of the way in which typical visual-effects companies do it. We don't even consider ourselves to be a visual-effects company. We're focusing on original content and all forms of digital imagery and all forms of entertainment and education. That's the future."

Mark: So that begs the question, where is feature film work going?

Scott: It begs two questions. One is what you just mentioned about feature film work, but the second question is around how that process is going in those places.  From my perspective, the feature film work is basically going to two separate areas.

There's the high-end visual effects work, which is the stuff that puts butts in the seats and is very difficult to do. That work is going to places that have world-class subsidy programs. Generally, if we look at the history of that, those world-class subsidy programs have been in English-speaking countries. So whether it's New Zealand, Canada, England, etc., what happens is that because the world-class work is going to these English-speaking places, the world-class artists are being subsumed by those countries and we end up with a roaming band of gypsy artists.

So let's say tomorrow that Banff up in Canada decides it's going to spend 75% of all its tax credits on visual effects. You'll see an amazing influx of Americans, English, Australians, Canadians, New Zealanders and others who will move to Banff for the period of time that those subsidies are in place. The studios are following the subsidies, and the higher-end working artists, who are generally Americans, end up needing to follow that money.

But then there's the stuff that's not really talked about. It's a problem, and it's a growing problem, and I think that it ultimately could be the biggest problem. The thing is, for all of the other work that isn't the world-class work, where the studio is looking to get the best price it can while utilizing these tax credits in English-speaking countries, there's a growing population of 3rd-world countries, plus China and India, doing all of the lesser-known and less critical work for even less money than the tax-subsidized work costs. So I see a growing component of the feature film work moving to countries like India, China, Thailand and the Philippines, where the cost of living is considerably less than in even a subsidized 1st-world country.

…there's a growing population of 3rd-world countries, plus China and India, doing all of the lesser-known and less critical work for even less money than the tax-subsidized work.

Mark: And that takes away your apprenticeship program in your facility. You get rid of your roto and tracking departments that were often a way in for a lot of people who are now senior level in the business.

Scott: I'm sensitive to this because I was the person who came up with the term "digital artist", but if you look at any show of around 1,000 shots, maybe 15-20% of the digital workers on that show are in fact digital artists. So around 80-85% of the workers are digital manufacturers. That digital manufacturing work is going to continue to migrate to the lowest-cost provider, and that lowest-cost provider is going to continue to be the facilities able to do the work with the lowest-cost employee base.

Mark: We both know what Hollywood wants the most is being able to “commodify” as much of this business as possible, so they squeeze as much as they can into the category of “this does not make an artistic difference to the shot”. And then you let a few reasonably well paid experts, who are also subsidized, handle the rest.

Scott: This is my doom-and-gloom picture, but I think it goes even further. If you look at the 15-20% of people who are actually digital artists, one of the reasons they're capable digital artists has to do with their talent base, which is to say their intrinsic artistic skills, and that isn't something limited to English-speaking folks. There are world-class artists, painters and sculptors throughout the world, and the only reason we see the world-class artists, painters and sculptors in English-speaking countries is because there's a history of doing that work in English-speaking countries.

But over a period of time there are world-class artists, painters, sculptors who are Filipino, Brazilian, Chinese…it's not like the English-speaking world has a lock on artistry. What the English-speaking world has had is a history of digital pipelines and digital VFX, and even visual effects beyond that, for the last 50 years. But now, as these tools become more democratized and less and less expensive, and as the work starts to be done in places like India, China, Thailand and the Philippines, those artists will rise up and they too, since they were artists before, will become digital artists. If you look out 10 or 15 years from now, if things don't change, I can see a landscape where almost all of the VFX and animation work is done outside of 1st-world countries, and the planning and creative storytelling work close to the directors and producers will be done by the uber digital creative people, and those people will join 1st unit, and they will become like what the cinematographer, editor or production designer has become.

If you look out 10 or 15 years from now, if things don’t change, I can see a landscape where almost all of the VFX and animation work is done outside of 1st world countries

Mark: So that means if there ever is a victory for the industry, it's not a victory for the 99% of the industry. It's an "A list" victory.

Scott: It depends on what side of the coin you’re looking on. If you’re a world class artist and you live in Shanghai, China, and you’re learning the skills of digital artistry, it’s a victory for that person because now they’re working on Godzilla at the highest level.  So it’s a victory for them.

Mark: But in terms of what the Western, and particularly American, industry is trying to do, you've got…

Scott: On the feature film side of things, from the Western/English-speaking/superstar group, those people will be whittled down a lot. There will be a lot fewer of them because a lot fewer of them will be needed. And those people will be subsumed by 1st unit, and I can foresee a time where there's a local, and they get front-title credits and nobody has to talk to the DGA about it. They'll just get it like the editors get it and the composers get it and the production designers get it. They'll be considered a part of the top-level creative people on a feature film. And there might be a team of three of them.

Mark: Right, and you only need one team per feature and there are only so many features per year.

Scott: That's right. So we're seeing a whole population shift. If you go back to the day when I was running DD, we had a team of 250 people doing an entire motion picture. Now that factory mentality will move to the lowest-cost provider, and just the super-elite people will be similar to the cinematographer.

Mark: So there could even be this guild that many in the industry have been pushing for, but it will be the equivalent of the ASC. It’ll be invite only, and there will only be a small set of active members.

Scott: Whether it's a guild or actually an extension of the IA, because if you look at the camera operators' local, it could be like Local 600, where there are 50 people in the United States who belong to this "Digital Production" local, because you wouldn't even call it VFX.

 

Click here to check out Part 2 of this discussion

 

 

 

The post The VFX Business in 2014: an interview with Scott Ross, Part One appeared first on ProVideo Coalition.

Wait and CC https://www.provideocoalition.com/wait-and-cc/ https://www.provideocoalition.com/wait-and-cc/#respond Fri, 01 Nov 2013 01:46:26 +0000


Earlier this year Adobe committed to a customer strategy that can accurately be described as bold.

The Creative Cloud offering rather abruptly shifted the Adobe customer relationship to a subscription model, with access to most Adobe software possible only with payment of a monthly fee. We are now month to month. Depending on who you talk to, it is an incredible deal with vastly improved flexibility or a loathsome burden with handcuffs.

With the passing of several months and the release today of the first major interim update since that time, it seems clear that Adobe has no plans to reverse course, and so we decided to revisit what was a hot topic of debate a few months ago. The basic facts and factors haven’t changed too dramatically, so the idea is to gauge where the community is at with Creative Cloud, particularly now that Adobe has had its first chance to deliver on the initial promise of an improved customer experience with CC.

This article avoids anonymous opinions on forums and hagiographies from marketers (full disclosure: I have done a lot of work with Adobe over the years). I decided to trace the themes that were common to more than one interviewee, to get the flavor of where Adobe's customer base in post-production is at. Represented are freelancers and employees or owners of major studios in Hollywood, New York, and Europe, along with such far-flung regions as San Francisco and Sydney. Credits, with Twitter handles, are included at the end of the article.

I also asked Steve Forde, Adobe Product Manager of After Effects, to respond.

WE ARE NOT DISPLEASED WITH THE SOFTWARE ITSELF—WHY, IT MAY ACTUALLY BE IMPROVING

“I have 10 apps on my machine. I'd say Lightroom was the big discovery, previously I'd been a Bridge / Photoshop kinda guy.” – Matthew Law


“The video end of the suite in particular seem to be taking this in the best way, i.e. delivering an almost immediate response to critics and addressing some of the longer running issues rather than making new features.” – Dan Lucyszyn-Hinton


“Updating is better. Traditionally, Adobe updates have been cumbersome and annoying. Now they seem much less intrusive. I don't fear them any longer.” – Barry McWilliams

“I do not see improvements right now that I can relate to the subscription model or ‘cloud’” – Martin Weber

It is early to say for sure, but what the teams within Adobe love about a subscription-based customer base is that rather than targeting new customers with big, brand-new, never-before-in-Adobe features for each annual release, they can make the improvements they've always wanted to make, the ones that serve the people they want to keep subscribing.

Here's what Adobe's Steve Forde has to say:

“After Effects is a 20-year-old application. For each release (Creative Suite was on an annual cadence since CS5), our job on the After Effects team was to make some 'significant' feature updates along with minor improvements once per year, in hopes that it would be enticing enough on the 'tin' to encourage you to upgrade over the previous release.

Creative Cloud changes all that. Frankly, from the After Effects team perspective, Creative Cloud gives us the opportunity to do things in the application we have wanted to do for 20 years. Now we can deliver features to you faster than once every 18-24 months, which was the past Creative Suite perpetual release cadence. In fact, our job now is to not just come up with innovative new features that offer more creative capability, it is to 'retain' you as a customer month by month. Since our customer (you) can cancel a subscription at any point if you are dissatisfied for whatever reason, we must make continuous improvements to features and workflow already in the application and deliver these new features more frequently. We’ve quickly adopted this new rhythm, as evidenced by 20+ new Premiere Pro features released in July and more than 150 new updates across our video tools coming this fall.”

So while the jury may seem still to be out with just one round of updates having followed the initial CC offering, it sounds as though development within Adobe has shifted significantly, and the next year or two will reveal the true results.

TERMS OF SURRENDER ARE NOT FORTHCOMING

“It’s rare to work with the latest version of anything in this town” – Shane Ross


“Q: Do you buy the premise that Adobe can provide better software with this model?
A: There's a compelling argument there…But it relies HEAVILY on my believing that Adobe is a “good” (meaning moral) company. I don't fully believe that. I don't fully disbelieve it either. Adobe's actions will prove it one way or the other.”
– Barry McWilliams

“Customers became cattle. Adobe finally showed their true face. We don't count. Adobe's cash-flow precedes over how thousands of businesses run their business. Which is why it failed in European and Asian markets. We're not so keen on indentured servitude based models. America has a different history there. Anything goes as long as the near future is covered, long term effects and past events don't count.” – Frank Jonen

Shane’s point may be the most significant. It’s understandable that Adobe would want its customers to work with the latest versions of its software; for one thing, it significantly reduces support and compatibility issues that can plague older versions. But Creative Cloud is only a no-brainer for companies that upgrade with each major annual release, or even every other annual release.

The old model is to be paranoid, and never upgrade until it's absolutely necessary for the bottom line of the company. This may help explain why so many studios still run Final Cut Studio. But imagine if apps for iOS or other mobile platforms were only updated once every 12-18 months. Increasingly, the risks of sticking with the status quo could conceivably grow, even if that's not currently the case any more than it has ever been.

Meanwhile, one group that hasn't received a satisfactory response from Adobe is the casual user base. This isn't just people who only use Photoshop once in a while for personal projects; it includes people who work day in/day out with Adobe software but only occasionally work freelance, in their own studio. Having been one of these people at various points in my career, I can say that there are times when upgrading seems like a no-brainer, and other times when the home studio barely gets any use but it's still more than handy to be able to jump into even an older version of an application.


THERE EXISTS A HALFHEARTED LONGING FOR GREENER PASTURES

“Yes, I feel like a renter. I touched on this above. It's more onerous than renting, however. When I'm renting a home, for example, there are thousands of other homes, as nice or better, should I decide that I don't like where I live. There are not thousands, not even a half-dozen, professional alternatives to Photoshop & AE.” – Barry McWilliams

“Freelancers have not upgraded, don’t want to spend more money than before, and update only when they can’t open a project anymore.” – Danny Princz

“Our strategy is to find alternatives until early 2015. We might considerably reduce the number of maintained Adobe licenses.” – Martin Weber

Adobe is in a unique position among graphics software companies. Other companies may charge far more per license, but in more than one category no credible competitor threatens to unseat the Adobe offering, and yet the landscape can change quickly.

Competition can be good for everyone. It makes the leaders work harder, it inspires new development ideas, and it makes customer value paramount. If Adobe is indeed not giving its customers what they want, it would seem to have awakened the competition, and perhaps that is good for Photoshop, Illustrator, InDesign, and the other applications that have not faced a serious challenger in some time.

DAMMIT JIM, I’M A DOCTOR, NOT SOME HUE IN A RAINBOW

“I'm one of the folks who understands that you only license software, you don't ever technically 'own' it. In practice however, if there is no DRM, and you have the binary on your system/on disk, they can't really enforce a revocation of the license — it's owned de facto, if not legally. You'll always be able to launch and run it.” – Christopher Harrington


“It's why I use text files when I write, so that I am beholden to no writing program. With the subscription-based tools, I feel utterly beholden.” – Barry McWilliams


“On average we would skip every other release, sometimes maybe even two releases. With CS 5.5 where XDCAM support was broken there was little incentive to upgrade and we did not upgrade to CS 6 as Adobe's push to CC was clear. With CS 6 being a dead end there is no intention to upgrade to it.” – Martin Weber

Lost in the argument against the subscription model is that Creative Cloud may offer users more than a straight comparison with the old model of buying the software you need at a premium, and only when you need it.

Steve Forde, again:

“It's a brave new world.

Technology is a commodity. In fact, inter-connected / networked technology crossing multiple devices and platforms is the new normal. Our goal with Creative Cloud is to weave together traditional desktop applications with interconnected services that have the sole purpose of satisfying the demands of you, our professional user. Fonts, files, storage, and synchronization are just a few that we have introduced via Creative Cloud – and I firmly believe you should expect a whole lot more.

The point is, your clients / employers demand that you provide more with less – and we need to enable you to deliver. At the same time, the opportunity for creative expression is also in the hands of anyone who is willing to invest the time. It is no longer in the realm of only those who could afford the (what used to be) very expensive kit (hardware and software). Creative Cloud subscriptions allow us to dramatically reduce the barrier of cost to a wide audience without sacrificing the financial stability of Adobe.”

ADOBE’S DESIGNERS AND ENGINEERS ARE GREAT, BUT THEIR COMPANY AS A WHOLE MAY NOT ALWAYS COMMUNICATE AS CAREFULLY AS THEY DO INDIVIDUALLY

“Getting feedback *directly* from product managers and team leaders says a lot about the character of the company.” – Christopher Harrington

“I was in contact with our authorized Adobe dealer and Adobe sales representative from the very beginning, when Adobe started with the cloud service (before CC). Neither was able to answer my questions and explain the service. With CC, too, Adobe was not able to provide any meaningful answers. Their team subscription came late in the process, with little merit and added cost, and they have been changing the rules along the way. Before that we were not even able to take part in CC in a meaningful way. They are still unclear about how to exit the subscription (consequences are not clearly stated, no time limits are given, the legal issues for companies are not clear). Also, when buying through authorized dealers there is an extra administrative and legal layer, including a commitment to audits by Adobe within 30 days of notice for as long as 2 years after ending the rental, with the obligation to prove purchases, etc.” – Martin Weber

The contrast between the two quotes above speaks to the difference between how the public views the teams that make Adobe software and how it views those who communicate such nuances as comparative pricing, extra benefits for teams and groups, and why the subscription model is not an instant fit for everyone.

EVERYONE HAS THEIR PRICE

“Creative Cloud for teams is € 70 rather than CC for individuals at € 50 (all before taxes). That's € 1,200 per year more for 5 seats just to be able to license more than one seat. I think we should rather get a rebate.” – Martin Weber
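For anyone checking Martin’s figure, it follows directly from the prices he quotes (his numbers, not a current or official Adobe price list):

(€70 − €50) per seat per month × 12 months × 5 seats = €1,200 per year

Put another way, the teams tier as he describes it costs 40% more per seat than the individual plan, before any volume consideration.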

Finally, there is definite skepticism as to where the Creative Cloud price will stabilize, and whether the pressure will be for the price to increase or decrease. Not everyone worldwide pays the same price for Creative Cloud, and yet there is no question that a price exists that would satisfy the vast majority of Adobe customers worldwide, even on a monthly basis.

Meanwhile, the one change that most people want to the existing model is a buy-out: the ability to plateau with a given version of the software by having subscribed for a sufficient period, and the guarantee that those versions will at least be able to open files in their format for a given period. The period most often proposed is five years.

Five years is a long time in the current world of computing and software. If Adobe is right, the need for upgrades and services will far outstrip the conservative approach that would allow anyone to park a given application for half a decade, or even a year or two. If the price is a bargain and the quality of the offering for dedicated users rises, Adobe’s chances of success are assured, but at this early date, the jury of public opinion still finds many in the “undecided” category.

Participants:

Barry McWilliams @barrymcw
Chris Harrington @octothorpe
Dan Lucyszyn-Hinton @dan_hin
Danny Princz @rendernyc
Frank Jonen @frankjonen
Martin Weber @martinweber
Matt Law @foughtthelaw
Shane Ross @comebackshane
Steve Forde @sforde
