Eric Escobar – ProVideo Coalition

DaVinci Resolve on a Surface Pro 4
Sat, 17 Sep 2016

Blackmagic Design and Intel loaned me a Surface Pro 4 to test the performance of the Iris GPU and Core i7 processors. I got to see exactly how close we are to a future-present where film grading can happen on a machine so small you could easily lose track of it on a messy desk.

The Surface Pro 4, a mechanical keyboard, external display and mouse.

The ironic tech-prediction punchlines from five years ago, "Wireless Photoshop" and "4K on your phone," have all quickly come to pass. Today, I add one more punchline to the dustbin of clickbait titles: "DaVinci on a tablet".

The unit I tested came loaded with a Core i7 processor (i7-6650U @ 2.2GHz), an Intel Iris 540 GPU, 16GB of RAM, and a 512GB SSD. This is the second most expensive model in the Surface Pro 4 lineup, retailing at $2199 on the Microsoft Store website. That price puts it in the class of high-end laptop rather than iPad-challenger. The Surface Pro is only a tablet in the sense that it has a multi-touch screen and a detachable keyboard. In price and performance one should compare it with a MacBook Pro 15” Retina (Iris GPU, 16GB RAM, Core i7 processors and a 512GB SSD).

DaVinci Resolve Performance

Working with 1080p, 24fps ProRes footage (which works on Windows since Resolve 12.5.2), I can run three nodes without dropping frames: a LUT transformation, a primary pass and a secondary with a single mask. Anything more complicated than that, and I start working in non-realtime. In my experience, this is pretty common for laptop color grading performance.

A Perfectly Tiny Screen

The Surface Pro has a 12.3-inch screen that displays 1080. The Resolve UI is not configurable, and it's designed for a much larger display area. Rendered on the screen of the Surface Pro, it's a perfect miniature and eye-straining to use.

A mouse or trackball is a necessity, the UI is just too small to drive a work session.

I could easily see using Resolve on the Surface alone, if I could just grab one or two of its palettes. A user could customize various setups for asset management or basic grading tasks, if only Blackmagic gave us the chance (hint, hint).

This problem is easily solved by connecting an external display and using the Surface Pro as a secondary screen and input device. Here is where my argument for customization is even more relevant. If Resolve had a customizable UI, then I could make the Surface Pro a touch-based colorociter for no extra money. Boom.

In fact, when connected to an external display, a separate keyboard and a mouse, the Surface Pro 4 feels like working on a much bigger machine. Performance of the OS and applications is zippy. Using the Surface screen as an input device (both with multitouch and with the stylus) is intuitive and easy. It's really like having a couple of screens, a keyboard and no computer.

You will definitely want to keep the Surface Pro 4 plugged into AC while you do this GPU intensive work. The Core i7 and Iris GPU will drain all the electrons from the battery when you’re working in Resolve.

The Workflow

As a workflow model for grading, the Surface Pro 4 is almost perfect. I can work in my color suite with a large screen and peripherals like I’m used to. And, at the end of the work day, I can pick up the Surface Pro and walk out the door with the entire project and all my work. Also, I have all my email, Netflix, music, etc, traveling with me.

Side by side with the MacBook Air 13 inch, the Surface Pro is tiny.

I say almost perfect because there is one crucial piece missing from this puzzle: video output. The Surface Pro 4 has a Mini DisplayPort and a single USB 3.0 port. As of this moment, there is no way to get a 10-bit video signal out of the Surface Pro 4, because it is not certified to work with the one device that will do that, the Blackmagic UltraStudio Pro. Seeing what you're grading on a color-critical display is a crucial part of the grading pipeline, and not being able to do that is a deal breaker for colorists.

There are piles of other work you can do: editing, asset management, LUT application and other tasks that don't require this piece of the pipeline. For those workflows, the Surface Pro 4 works just fine. But it's not a machine you're going to build a color grading suite around. On that point, if you're building a Windows grading suite, 2200 dollars will buy you serious GPU horsepower if you go with a custom desktop rather than a portable. However, you can't unplug that desktop machine and watch Netflix on the train home with it. There's a trade-off between convenience and power. As product line-ups like the Surface gain more power, though, this seems less and less of a trade-off.

Personally, I want all of my computers small. I want SSDs in everything and an end to the era of back-straining color grading monster towers. As GPUs continue to evolve, I can see that in a year or two there could be a machine like the Surface Pro that offers lots of power with few compromises. Hopefully by then, the Resolve UI will offer customization options that take into account the new workflow of grading-on-the-go setups.

For example, here are the Premiere Lumetri color wheels as their own Touch UI. See what I'm saying?

Bottom Line

Resolve on the Surface Pro 4 is a proof-of-concept, not a workflow I’d recommend to anyone looking for a machine to do serious color grading on.

Think of Resolve as a benchmark for just how far tiny computers have come in terms of capability, and a marker for where they are going. This is a beast of a miniature machine, as powerful as many computers still in use in post production environments that are only a few years older.

And while you can color grade in Resolve 12.5 on the Surface Pro 4, the lack of video output makes it a device best suited for editing, asset management and non-color-critical tasks.

Loathing and Loving Prisma
Mon, 01 Aug 2016

You’ve probably noticed your Instagram feed getting filled with fine art-ish looking pictures from your friends. You’re also certain that none of your friends are actual fine artists, or at least not good enough to make such interesting images out of their usual mundane Instagram imagery.

“Coffee Grinder”, Oil Pastel on Canvas by Author (ok, not really)

Chances are, they’re using Prisma, an app for iOS and Android that converts photos into artistic renderings based on the various styles of popular artists and art styles. In response, you did one of two things: immediately downloaded the free app and started spamming away (like me); or you blew it off as just another auto “art” filter, soon to go the way of Kai’s Power Tools and Corel Painter.

I am almost always in the second camp — cynical at the idea of automated art and burnt out on the bad final renders. Most of these attempts to build effects that mimic the delicate work of brush strokes are done with a ham-fisted convolutional filter. In the end, the images look like someone took a Sharpie marker and splooshed colored blotches all over a photograph. There's no thought to what the image is, just a pure application of pixel math.

Prisma is not another kernel image process or a simple raster-to-vector conversion; it is something much deeper, more interesting and more troubling. Prisma is a machine learning (ML) process that deconstructs an image, identifies shapes and objects in the frame (face, trees, water, etc.) and then changes those objects based on the style of artist you've selected. The artistic style is also derived from machine learning, whereby an artistic work (or many) is fed into the system and the style is abstracted into a mathematical model. The final image isn't just a filter applied to your original; rather, it's a brand new image that is an interpretation of your original photo based on a set of rules and concepts derived from an entirely different image.

Call it “neural art”.

PRISMA IS DEFINITELY NOT “ON YOUR PHONE”

Prisma doesn't live on your phone. It runs on servers (I have no idea how many), somewhere in the cloud (the developers are in Russia). When you download the app, you're really just downloading the front-end tool where you select your image and the process you want to apply. When you click the render button, you're sending your order off into the ether to be analyzed and rendered, then beamed back to you, all via the wonder of the internet. There's no local processing on your smartphone, because it would take forever and probably make it explode. There's a lot of hot math going on.

Accessing Prisma on your iPhone is orders of magnitude faster than doing a similar process locally on a single personal computer. This is an image effects process that takes a few hours on the fastest trashcan Mac, but a few seconds on racks of servers. As a user of this effect, you're in a more efficient position with a $300 camera phone than with a $9000 Mac, since A) your phone has a camera and you can snap directly to the Prisma servers and B) there's no OS X app available yet. Even if there were a desktop version of the app, you'd still be on equal footing, because the processing and rendering happen up in the cloud. Imagine if every effect you used worked this way.
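
To make that client/server split concrete, here's a minimal sketch in Python of what an app like this is doing architecturally: the phone uploads a photo and a style name, the servers do the hot math, and the finished image comes back as bytes. The endpoint, parameter names and response format below are placeholders I've invented for illustration; Prisma's actual API is private and undocumented.

# Hypothetical sketch of the cloud-rendering split described above.
# The URL and parameters are invented; Prisma's real API is not public.
import requests

RENDER_URL = "https://example-neural-art-service.com/render"  # placeholder

def stylize(photo_path, style_name):
    """Upload a photo, let the servers render it, save the result locally."""
    with open(photo_path, "rb") as f:
        # All the heavy processing happens server-side; the client just waits.
        response = requests.post(
            RENDER_URL,
            files={"image": f},
            data={"style": style_name},
            timeout=60,
        )
    response.raise_for_status()
    out_path = photo_path.replace(".jpg", "_" + style_name + ".jpg")
    with open(out_path, "wb") as out:
        out.write(response.content)  # the rendered image comes back as bytes
    return out_path

print(stylize("coffee_grinder.jpg", "mononoke"))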

In light of the rush to a subscription model for almost every application you use right now, what happens when all of the processing and rendering also moves to racks and racks of servers in the cloud? What does a post house look like in three years? Where does post happen?

ROLL YOUR OWN PRISMA

Prisma is most likely an implementation of “A Neural Algorithm of Artistic Style”, where the concept and application are spelled out in detail. For further interesting reading, along with some tools to build your own version of Prisma, take a look at “Neural Style” up on GitHub.
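
If you want to see the shape of the algorithm the Gatys paper describes, here is a compressed PyTorch sketch: a content loss on deep VGG features plus a style loss on Gram matrices of shallower layers, minimized by gradient descent on the output image itself. The layer picks, weights and step count are common defaults, not anything Prisma has published, and a production service almost certainly uses a much faster feed-forward variant of this idea.

# Compressed sketch of the Gatys et al. neural style idea, for illustration only.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram(features):
    # Style lives in feature correlations: the Gram matrix of a layer.
    # Assumes a single image, i.e. a batch of one.
    _, c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)

def extract(model, image, layers):
    # Run the image through VGG and keep activations from the chosen layers.
    feats, x = {}, image
    for i, layer in enumerate(model):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def stylize(content, style, steps=200, style_weight=1e6):
    # content and style are normalized (1, 3, H, W) tensors.
    cnn = vgg19(pretrained=True).features.eval()
    for p in cnn.parameters():
        p.requires_grad_(False)
    content_layers = [21]              # conv4_2, the usual content pick
    style_layers = [0, 5, 10, 19, 28]  # conv1_1 through conv5_1
    c_feats = extract(cnn, content, content_layers)
    s_grams = {i: gram(f) for i, f in extract(cnn, style, style_layers).items()}
    result = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([result], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract(cnn, result, content_layers + style_layers)
        c_loss = sum(F.mse_loss(feats[i], c_feats[i]) for i in content_layers)
        s_loss = sum(F.mse_loss(gram(feats[i]), s_grams[i]) for i in style_layers)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return result.detach()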

This idea of “neural art”, in concept and practice, has been floating around Arxiv and GitHub for a couple of years now. We all got a taste of it with Google's Deep Dream images, the ones that shocked us with trippy, psychedelic dog-infused imagery a while back. However, the process of reconstructing what the eye sees into constituent brush strokes, shapes and color palettes is something that human artists have done since the first cave paintings. Mastering the styles of fellow artists is a key step in developing one's own voice as a fine artist. Picasso spent years in the Louvre copying the works of the masters. Training a neural network to approximate this process is exciting and frightening.

If you dig into the Arxiv and GitHub articles I linked to, you'll see the wackiest consequence of abstracting artistic style from the art product — style merges. Take a little from Van Gogh, and a little from Kandinsky, sprinkle in some Degas and you'll have something strange and new and unprecedented. This is similar to IBM's recipe-generating Chef Watson coming up with dishes a human wouldn't have thought of, or AlphaGo making Go moves that confounded the best players in the world. There is a different kind of “thinking” involved.

How good can this process be?

Take a look at this article about a “neural art” system that created a brand new Rembrandt painting using a 3D printer. It’s not just that a computer system was used to cannily recreate an image that looked like a pretty good Rembrandt (art forgers have been copying and selling masterworks for centuries). It’s that a system created a new image that never existed before that looked like it could actually be an original work of the long-dead painter. The system determined the most likely type of image that Rembrandt would have painted, had he painted it, then rendered it with a 3D printer.

MORE “LESS HUMANS WORKING”

Consider this a follow-up to my alarm-bell-ringing article last year about most editorial jobs going away in the coming years. Not just editorial: color grading, sound design and every aspect of media creation will be affected by the combination of ML and massive computational power in the cloud. Systems like Prisma and other kinds of neural art challenge the role of human creativity in the process of creating media. They may not completely replace what we do as artists, but they most certainly devalue our work. And we're all going to jump on them, mostly because they take away a lot of the grunt work we had to spend years learning how to do. The argument is that these kinds of intelligent applications will augment our creative work, which, indeed, they already do.

In response to that article, as well as the subsequent podcast, I got responses saying I was describing some kind of post-production singularity moment. Some people argued that machines will never be able to make the choices an experienced editor would make. Others told me that we are decades away from anything approaching that moment.

I offer up Prisma as an example of just how quickly these changes happen. A year or two ago, there wasn't anything as powerfully complex in output, or as simple to use, as Prisma outside of GitHub or computer science doctoral programs. Now it's on your phone and millions of people are pinging the servers. In a month, Prisma will include the option to upload video clips, and voilà: “instant Miyazaki”.

And that's the important point — the source material (pictures, movies, paintings, etc.) used to build these models and processes is not pulled from a vacuum; it is derived from the collected works of artists over hundreds of years. This is the part that is hard to swallow — every bit of human creativity and originality that exists as recorded media serves as the free data mine for neural art machine learning systems. There is no “Miyazaki” filter without the life and art of the man who produced it. There will be no “Murch Edit Process” without a careful machine learning analysis of every single one of his edits (some of which took him days to decide). These creators will not be remunerated for their effort and work. Artists using these tools will benefit from them, but will most likely be paid far less than in the past. The work is machine-enhanced, will require less time to learn how to master, and will be about as valuable as those pictures of your plants filtered through the “Mononoke” filter on Instagram.

Prisma has abstracted and quantified the artistic styles of graphic artists and fine art painters. The resulting images are impressive and unprecedented. There are already projects in development applying similar ML concepts to editing style, color grading, sound design, actor performance, etc. ML is delivering, and poised to deliver even more, incredibly sophisticated tools for every aspect of media creation by digging into the areas that are currently dominated by actual human artists.

THE BRIGHT SIDE

There’s nothing stopping artists from taking control of these tools. Imagine a fine artist using an ML system to create incredible new images using their own work — inspiring bold new ideas. When it comes to analyzing temporal media, how about a tool that offers up cut and shot suggestions to an experienced editor to choose from?

Like AlphaGo making game moves a person wouldn't make, a creative ML system may offer artistic choices that a person doesn't see but that spur a new rush of creativity.

UPDATE: My friend Josh Welsh, a Prisma/Instagram enthusiast (Prismagrammer?), pointed me in the direction of Artisto, the “Prisma for Video” app created in just 8 days by the Russian company Mail.ru. It's crashy but a lot of fun. He also noticed that after running his iPhone images through Prisma, they were geo-tagged as originating in the province of Jiangsu in China. So: iPhone snaps from Southern California, run through Russian code on Chinese servers, posted to Instagram. It's a small world after all.

Kids Build VFX Company, I Send Tachyon Beam Back in Time
Thu, 17 Dec 2015

This is interesting: two boys, Ben and Alex, 11 and 13 respectively, are making, and selling, their own Star Wars-inspired plug-ins, with the help of their dad, who's a developer. They've got lightsabers and blasters, as well as some “space wipes”. The plug-ins run in FCP, Motion, AE and Premiere.


I'm posting this and hope the message, via modulated tachyon pulse, can travel back in time thirty years to let my 13-year-old self know that in the future, kids will not only make their own Star Wars episodes, but also create their own VFX software companies to sell effects to other kids.

Because that’s awesome.

I personally have not used these plug-ins or wipes, because I am not really doing much lightsaber or blaster work these days. However, having done this thing in Photoshop 2.0, one frame at a time, and then having done it again in After Effects with way too many keyframes, I totally appreciate Ben and Alex taking the time to develop a tool set that will save other artists time.

Take a look at the link. I mean, come on!


Video Editor: A job on the edge of extinction
Tue, 28 Jul 2015

I know this post has a clickbait sounding title, and for that I apologize, but I’m not writing this for click throughs or ad impressions. This is about the things that software can do now and guessing at what it will do in the very, very near future.

Right now, software “reads” articles and emails; this is how Google analyzes and ranks what we write and figures out what to sell us. Software even “writes” articles, more and more every day. Software doesn't write the opinion pieces, or the long, interesting New Yorker think-pieces; it goes for the easy stuff: financials, sports scores, crime beats. The kind of stuff that feels all boiler-plate and perfunctory.

http://blog.ap.org/2014/06/30/a-leap-forward-in-quarterly-earnings-stories/

This is old news, a change that has already transformed publishing and will change it more as the tech gets better. Of course the software doesn't exactly do this on its own. The models and algorithms for writing copy are built on a hundred years, and millions of articles, written by human beings. That data set has been ingested, analyzed and rolled into Natural Language Generation (NLG) platforms like WordSmith. Here's a nice friendly cartoon about how it works, complete with the visual imagery of a factory: http://automatedinsights.com/how_it_works/
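
To make the “boiler-plate and perfunctory” point concrete, here is a deliberately toy Python sketch of the template-and-rules pattern this kind of platform is built on. The company, numbers and phrasing are invented, and a real system like WordSmith layers far more linguistic variation and data-checking on top of this.

# A toy illustration of template-driven story generation.
# This is my own simplification, not Automated Insights' actual code.
def earnings_story(company, quarter, eps, eps_expected, revenue):
    """Turn one row of financial data into boilerplate earnings copy."""
    if eps > eps_expected:
        verb = "beat"
    elif eps < eps_expected:
        verb = "missed"
    else:
        verb = "met"
    return (
        f"{company} {verb} analyst expectations in {quarter}, reporting "
        f"earnings of ${eps:.2f} per share against a forecast of "
        f"${eps_expected:.2f}, on revenue of ${revenue / 1e9:.1f} billion."
    )

print(earnings_story("Acme Corp", "Q2 2014", 1.42, 1.35, 3.9e9))
# Acme Corp beat analyst expectations in Q2 2014, reporting earnings of
# $1.42 per share against a forecast of $1.35, on revenue of $3.9 billion.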

The architecture of these algorithms is derived from the intellectual work of millions of human beings over a massive span of time. Of course, none of these writers have been remunerated for this work; it's simply the aggregate data set used to develop the models. This is the central critique in Jaron Lanier's book “Who Owns The Future?”

http://www.nytimes.com/2013/05/06/books/who-owns-the-future-by-jaron-lanier.html?_r=0

My thesis here is that what has happened in the world of print will happen in the world of video post-production, specifically in the job of the editor.

The use of deep learning and neural networks for image recognition is a hot topic in the tech press. The trippy images coming out of Google's Deep Dream project are amusing, alien and unintelligible, but that's really just the Kai's Power Tools, pseudo-acid-head, arty front-end of a larger, far more ambitious project: Google Photos.

http://www.imore.com/google-photos-may-be-free-what-personal-cost


In exchange for free, unlimited on-line storage of photos and videos, you agree to let Google scan your data to identify people, places and things. The platform then tags and organizes everything for you, automatically. This is really handy for me, the user, and amazingly data rich for Google, the big data mining company.

But what does this have to do with editing?

http://cs.stanford.edu/people/karpathy/deepimagesent/

Take a look at this link for a deep learning image project that analyzes and semantically tags images. Not only does it identify what or who is in the image, it forms sentences describing, quite accurately, what they, or it, are doing in relation to the other things or people in the image. Now imagine this as a filter in PPro 2017. If a computer can interpret the activity going on in an image or video clip, then how long before we have apps that will organize and edit this into coherent and logical sequences of clips?
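
As a crude illustration of that jump from tagging to assembly, here is a toy Python sketch: once every clip carries a machine-written caption, even trivially simple code can bin footage by subject and propose a first-pass sequence. The captions, filenames and grouping rule below are all invented; a real system would work from the model's actual labels and far smarter heuristics.

# Toy sketch: from machine-generated captions to a rough first-pass assembly.
# Captions stand in for the output of an image-captioning model.
from collections import defaultdict

clips = [
    {"file": "A001.mov", "caption": "a woman walks a dog in a park"},
    {"file": "A002.mov", "caption": "close up of a dog catching a ball"},
    {"file": "B001.mov", "caption": "a woman drinks coffee at a table"},
    {"file": "B002.mov", "caption": "a coffee cup on a wooden table"},
]

def assemble(clips, subjects):
    """Group clips by subject keyword, then lay the groups out as scenes."""
    bins = defaultdict(list)
    for clip in clips:
        for word in subjects:
            if word in clip["caption"]:
                bins[word].append(clip["file"])
                break
    # One "scene" per subject; shot order within a scene is left to a human.
    return [bins[word] for word in subjects if bins[word]]

print(assemble(clips, ["dog", "coffee"]))
# [['A001.mov', 'A002.mov'], ['B001.mov', 'B002.mov']]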

As far as editorial style goes, there are already millions of movies and television shows online that can serve as an enormous database to teach a deep learning algorithm what editing is and how it has been done. There will be a generation of media creators that will use the digitized and aggregated knowledge of a hundred years of cinema to drive editorial decisions, the same way publishers now rely on Natural Language Generation to create books, articles and blog posts.

Where does this leave professional video editors?

For the professionals working in corporate and industrial video, it will be a further eroding of the craft towards a deskilled workforce reliant on software to generate “content”. It will push the workload more and more onto the producers, as it promises a post pipeline with fewer people and a faster turnaround. For narrative film and television editorial departments, much of the assistant editing work is already handled by digital asset management software with less and less human input needed. On the indie, no-budget front, having intelligent software do the heavy lifting of media management and edit assembly will be an amazing time and money saver.

Just as there is a generation of editors that have never touched a flatbed or edited in a linear video suite, there will be media creators that aren't clear on the language of how editing works. In order to edit, it won't be a requirement to know when or why one would use a medium shot or a close-up, or how best to juxtapose images in a sequence. There will be an easy-to-use piece of software that does a decent enough job about ninety percent of the time to do it for you.

How to Survive The Post House Implosion
Wed, 07 Jan 2015

Even in good times, video production/ post-production is a risky business. In bad times it’s downright foolish. A company has to be scrappy enough to weather the stagnant months, but robust enough to crew up for a big project that could walk through the door any second. 

The San Francisco Bay Area has a vibrant, thriving, tech-forward economy that, despite all the brands and companies headquartered here, has put a lot of video production companies out of business. Blame it on the bad luck of geography: not close enough to Southern California to get LA work, but close enough to lose all the bread-and-butter economy-building projects to the enormous filmmaking industry in Hollywood. It's just too easy for local clients to fly down to Burbank in the morning and come back home the same night.

The effects are hard on a tight-knit community. Over the last decade, I have seen many friends and colleagues give up and leave the Bay Area, or the video production industry altogether. San Francisco is not alone; the forces that are rapidly changing how media is created and consumed affect every professional in the industry, everywhere.

MATT SILVERMAN AND SWORDFISH SF

This is why I think the story of Swordfish and Matt Silverman is so interesting, and a great case study for surviving, and thriving, in this roller coaster climate. Matt has pivoted (to borrow jargon from the Silicon Valley lexicon): from a guy that makes videos to a guy that makes videos that are also integral to the UI/UX (user interface and user experience design) development of his clients' products.

This article started out as a focus on how one post house was using the Cinema 4D and After Effects pipeline for combining live action material with computer generated stuff, which is still a really interesting story, but I got sidetracked once I saw Swordfish SF in action and talked with Matt, the founder of the shop.

Matt started out in the Bay Area scene as a freelancer way back in the mid-'90s after attending SF State's Cinema program. He worked as a mograph (motion graphics) artist, software developer and interactive designer. He was the creator of the plug-in Color Theory and the QuickTime codec Microcosm, and he was the product manager for Commotion at Puffin Design. Like so many professionals in the Bay Area, Matt has a foot in creating images as well as designing the tools that make those images possible.

The last time I talked with Matt, a few years ago, he was the Creative Director at Phoenix Editorial and the Exec Creative Director at Bonfire Labs, both in San Francisco. The philosophy at both shops was that hardware was secondary: “It's the brains behind the machines, not the machines themselves,” he said.

This is an oft-repeated line in production and post, but not something many people actually believe, or practice. We are in an industry that fetishizes hardware, specs, performance stats and cost. This has been our fatal flaw. The days of building a reliable revenue stream around a Digibeta deck are long gone, but the business model isn't.

NO ONE NEEDS YOUR VIDEO ANYMORE

As Matt says, “No one will pay you to ‘make a video’ anymore. And why should they?”

It's too easy to do it. The tech companies of the Bay Area (Apple, Adobe, Google, etc.) have ensured this by dispersing the technology, simplifying it and productizing it cheaply, while simultaneously creating the framework for the easy and free distribution of content. Video creation has been rolled into an office admin skill set, like building a webpage. Will it be as polished as the product created by a professional editor with years of experience?

No, but it doesn’t need to be.

Most of the content created for our media-saturated environment only has to be good enough, and on-time, for the deadline. No one cares what it was shot on, or what it was cut with. This de-skilling of video production has had a deleterious effect on the local professional labor pool. Or as Matt says, “Bay Area professionals are in a tough spot. If a client needs a pitch video put together they can do it in-house, or with junior editors. If they have the budget to get top talent, then they head to LA.”

Professional “videomakers” have to offer a wider assortment of products, while defining new roles with the brands and agencies they serve.

OFFERING A NEW SERVICE

In the case of Swordfish, the new thing that Matt and his team can offer is integrated UI/UX design work. This shift in direction was partially serendipitous: Matt's team was tasked with doing mock-ups for interfaces on a handheld digital product that didn't have a completed UI yet.

This is not uncommon in the tech world, where hardware, software and promotional media are all created simultaneously, with the desperate hope that each department will hit the release date at the same time.

It was in this chaos that Swordfish saw an opportunity — use the mocked-up UIs in the product demo video as the actual UIs in the final consumer product. Mograph artists can iterate UIs much faster than the software development team. It saves the client time and money, and it integrates the larger product design process with branding and presentation.

USED COMPUTERS ARE ALMOST AS GOOD AS NEW ONES, BUT WAY CHEAPER

The other side of this is hardware. When building out his new facility, Matt looked at getting the 2013 Mac Pros and sinking a lot of money into hardware. After comparing specs against price, he concluded that for a lot less money he could scour Craigslist and snatch up used 2009-through-2012 8-core and 12-core Mac Pros on the cheap. He kitted them out with at least 32GB of RAM and 256GB SSDs; the basic rule is to leave at least 100GB of the SSD as cache. For the cost of four or five top-end, brand-new “trash can” Macs, he suddenly had a network of 25 machines clustered together with Pixar's Tractor network rendering solution (http://renderman.pixar.com/view/pixars-tractor).

This works well for Swordfish: most projects are between 30 and 60 seconds long, there are ten artists on staff, and they can ramp up with freelancers to meet demand.

As far as apps go, Matt says, “We are really software agnostic. While most of our motion graphics work is done in AE, we have Nuke licenses for heavy compositing work such as the [renowned fine artist Matthew] Barney film work we did.”

After Effects and Cinema 4D handle a lot of the animation work, especially motion graphics, “but we still use Maya for projects such as photo-real product shots with VRay (we use VRay for C4D as well). And Mauchi [Baiocchi] is a Houdini wizard, using it on most of his projects, especially those requiring complex data sets, procedural animation and simulations.”

All this useful, beautiful work isn't restricted to product videos and mock-ups either. Some of Swordfish's clients have proprietary tools for translating After Effects and Maya/Houdini projects into code, and for those that don't, there are other options, such as Youilabs' proprietary service, which can translate After Effects motion data into iOS and Android code that developers can immediately put to use. Matt and his team don't have to actually write code to be a part of their clients' software dev teams.

UI/UX design, which started out as an idea for one client, now represents half of all the work at Swordfish and seventy percent of its revenue. Matt's takeaway from this is to “find a niche within a niche. You take the thing you know how to do, that maybe only a handful of people can do, and focus on that.”

Editing Video in Mid-Air with the Leap Motion Controller
Mon, 26 Aug 2013

In the work-world of movie-making, our community has become accustomed to an eighteen month product release cycle marked by the calendar of trade shows. In the constant stream of technological product announcements, “technology” really means “new technology”. Gadgets are announced, showcased and promoted. Some new devices are useful, some are immediately dismissed, and others live a marginal life as an interesting curiosity that never quite found a niche. Deciding whether something was innovative is often a case of hindsight.

I map out these products on my own personal utility metric called the “Segway-iPhone Innovation Intensity Scale”. I measure the effect rather than the hype of any specific device, because hype often has little bearing on how well the device is adopted by the public or whether it's actually useful. The X axis charts “innovation”, while the Y axis maps the “consumer adoption” rate. By plotting the relative innovation-intensity positions of new gadgets, a line can be drawn between two iconic devices, the Segway PT and the iPhone, hence the name of my scale.

 

On the bottom of the scale is Dean Kamen's personal transportation device, the Segway PT, a product that had more hype than nearly any new device in modern memory before it was released in 2001. The Segway arrived to a chorus of bewildered laughter, and with questionable utility for the masses, the device hasn't succeeded beyond the niches of mall cops and tourists. (Search “Segway Fail” on YouTube. You're welcome.) The one thing that is not in question is the actual innovation of the device — there is simply nothing else like it in the world of transportation devices. On the top of the scale is Apple's iPhone, released in June 2007, a much rumored, hyped and speculated-about device before its release. The iPhone was innovative, useful and you probably have one in your pocket right now (or you're reading this article on it).

Which brings me to the latest gadget that has received boatloads of hype, the Leap motion controller from Leap Motion. The Leap is a small USB peripheral that brings markerless, realtime motion capture to your computer. It's similar in concept to the Kinect camera for the Xbox, but it tracks hand and finger movement rather than full bodies (the Leap itself uses a stereo pair of infrared cameras). The Leap was announced in June 2012 and I immediately got on the pre-order list. I was smitten by the hype and by years of subliminal conditioning by modern science fiction films to expect an entire universe of devices controlled by a wave of my hand.

Before opening up my Leap, I had convinced myself that waving my hands around in front of my computer was a more efficient and natural way to edit video. The inherent non-computeriness of hand-gestures-in-the-air has to be a more human way of interacting with a machine. In all this anticipation of the future, I never really considered that any way I, a human, interact with my computer is inherently a human interaction. That's the snag. The keyboard, mouse and trackpad are human input devices, each with its own quirky history. One device isn't a more or less natural way of interacting than any other.

The idea that learning a set of distinct hand choreographies (gestures) will somehow make working with my computer easier should have set off my hype alarm. Using motion-capture-based gestures is an interesting, and sometimes fun, way to do something different with your computer, but it's not a better way to work. It's not about to sweep your keyboard, mouse and trackpad into the dust bin of technical history. Furthermore, don't expect this technology to upend NLEs, color grading suites or mixing consoles anytime soon. Editors, colorists and sound mixers aren't complaining a ton about how clumsy their input devices are, and in all fairness, the Leap was not designed with the needs of this community specifically in mind. The Leap, according to Wikipedia, was a response to a workflow frustration in 3D modeling and animation (although currently there is only one plug-in for one 3D app in the Leap store), but it is now positioned as a device for general computing uses. The largest single category of apps in the store is “Games”.

Setting Up The Leap for Premiere Pro CS6

While there is a plug-in being developed for Leap control of FCPX, it's not out in the wild yet. So, using BetterTouchTool v0.97 (the free and recently Leap-enabled gestural control app for Mac OS X), my Leap and a 13″ MacBook Air, I embarked on my voyage into the future of video editing. Since no one has released a comprehensive, app-specific set of gestures for Adobe Premiere CS6 (I tried using FCPX, but it just kept crashing my machine with the Leap connected), I created my own. I mapped a dozen of the 25 possible gestures (aka hand choreographies) to the things that seemed most logical (five-finger swipe right to Insert a clip from the viewer to the timeline).
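
For the curious, here is roughly what a gesture-to-shortcut mapping looks like in code. This is a sketch built on the Leap Motion Python SDK of that era (Leap.Listener, frame.gestures()) plus an osascript keystroke, not BetterTouchTool's internals; the comma key is Premiere's default Insert shortcut, and the gesture thresholds and the "five fingers" test are my own assumptions.

# Sketch: detect a five-finger swipe to the right and send Premiere's
# Insert shortcut (","). Based on the classic Leap Python SDK; exact
# thresholds are assumptions, not a tested configuration.
import subprocess
import sys
import Leap
from Leap import SwipeGesture

def send_key(key):
    # macOS: type a key into the frontmost app (Premiere, in this setup).
    script = 'tell application "System Events" to keystroke "%s"' % key
    subprocess.call(["osascript", "-e", script])

class EditListener(Leap.Listener):
    def on_connect(self, controller):
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)

    def on_frame(self, controller):
        frame = controller.frame()
        for gesture in frame.gestures():
            if gesture.type != Leap.Gesture.TYPE_SWIPE:
                continue
            if gesture.state != Leap.Gesture.STATE_STOP:
                continue  # only fire once, when the swipe finishes
            swipe = SwipeGesture(gesture)
            # Rightward swipe with an open hand = Insert clip.
            if swipe.direction.x > 0.8 and len(frame.fingers) >= 5:
                send_key(",")

listener = EditListener()
controller = Leap.Controller()
controller.add_listener(listener)
print("Waving at Premiere. Press Enter to quit.")
sys.stdin.readline()
controller.remove_listener(listener)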

Once configured, I printed out my list of gestures, so I wouldn't forget, and went about putting together a quick show in the timeline. I wanted to 1) open a project, 2) select clips from a bin, 3) set in and out points, 4) drop clips into a timeline and 5) play back the show full screen.

This is a very frustrating way to edit video on a computer. It can be done, and it sort of works, but it's slow. Even after making a lot of tiny adjustments to Leap placement and room lighting (it is sensitive to IR), tracking speed and other settings, I was only able to slightly improve my ability to execute actions efficiently. While the Leap is a remarkable piece of technology, I still think the concept of motion capture gestural user input is very early in its development. There are a couple of hurdles the tech needs to clear, the first being haptic feedback. Even the smoothest physical device or control surface provides some resistance to let your brain and body know where you are in relation to the rest of the Universe. We evolved with stuff in our hands; waving imaginary objects in mid-air is disconcerting and difficult to master, and far from natural.

One possible solution is something the Leap dev community calls “table mode”, which means flipping the Leap on its side and having it track your hand movement across a solid surface: a tabletop solution. With the right alignment, I could see some smart developer bringing digital life back to an old typewriter with a keyboard table-mode app, as an alternative to those expensive hipster typewriter mods. Or mash it up with a pico projector and turn any surface into a “Surface”. These kinds of applications would provide something physical to interact with, not abstract “gestures” to learn.

Which brings me to the next hurdle — the gestures. The most reliable gestures right now, despite the Leap's obvious high resolution, are the broadest movements, in my experience. Broad, sweeping hand movements are also the opposite of why we use keyboards, mice and trackpads in the first place. A wave of the hand is really an arm movement. It's slow, it wastes time and energy, and it takes me away from the work on the machine. Again, it's clear that the Leap is capable of tracking very small, precise and subtle gestures, so I look forward to the day when that kind of capability is available more broadly. Right now, Leap video editing is less Minority Report and more waving your arms around in front of your computer, trying to make things happen that you could do more easily with the tools you already have.

Where would I place Leap on my chart?

Leap is innovative, there is no question in my mind. How well motion capture user input will catch on is really the question, and it depends on a lot of factors. It's only been available to the public for a month; it works best in dimly lit rooms; it has only a modest number of non-gaming apps in its app store; and it still operates like a product in beta. It's too early to tell how widely this technology will be adopted. Leap Motion has partnered with the computer and display manufacturer ASUS to build Leap units directly into new products. It will be interesting to see what these new machines can do with a unit integrated directly into the design. I am also excited to see what the developer and hacking community will do as the underlying software matures and evolves.

The “killer app”, for me, with motion capture gestural control would be a device that did not rely on learning a series of abstract, choreographed gestures. Rather, it would be one where, when you reach for something on the screen, your brain is convinced your hand actually grabbed that thing. I don't think it's a case of waiting for holograms on the laptop (that may take a while), or haptic feedback gloves (that's been tried). I'm not sure what that kind of truly intuitive and natural UI would look like, but the Leap is a step in that direction.

Steven Soderbergh’s State of Cinema 2013
Sat, 27 Apr 2013

I don’t have any pictures from the talk at the Kabuki Theater in San Francisco today.

EDIT: I lied, here is one.

 

Steven Soderbergh requested that people don’t photograph or record video or audio of the talk, and I am honoring his request. And I was too wrapped up in his words to take notes, or tweet. Thankfully Joseph Beyer (@cinejoe) from Sundance tweeted all the bullet points of the talk. Check out Joseph’s twitter stream starting at 2:09pm 27 Apr 13 until 2:46pm 27 Apr 13. That’s PDT.

EDIT: And someone has “leaked” an audio recording. And the photos in this article are from the event, taken by Pamela Gentile for the SFFS.

This presentation will not be put online; it was recorded by the SF Film Society, but only for posterity, I suppose. I will do my best to tell you what I think it was about.

1) There is a difference between Cinema and Movies: Cinema is something that is made by an artist with a specificity of vision. Cinema is unique, something that can't be made except by the person who made it. Movies are “seen”. Sometimes there is cinema in movies, but nowadays less and less, because it is under assault by the studios and the audiences. Movie studios are run, less and less, by executives who know or love movies. They decide what movies get made, and by whom, as purely business decisions and business politics. They are beholden to the numbers and to a culture of decision-making driven by focus groups and financial returns.

2) Numbers and Money: It has never been a better time for the five or six large American film studios to make 100 million dollar action/sci-fi/fantasy films. Those films make money, and they sell in places other than the United States. The studios are making fewer of them each year and taking more of the audience dollars than a decade ago. At the same time, the production of independent films has doubled, while our share of the audience pie has shrunk considerably. This is true even as the overall number of audience admissions has shrunk by 10.5%.

3) Stealing is Wrong: Not much more to say except that Soderbergh thinks that pirating content has had a negative financial impact on content creators.

4) Marketing Costs Are Fixed: It doesn't matter if your film cost 5 million or 50 million; it still costs about 70 million in marketing and distribution to do a wide enough release to recoup costs. Studios have found that their risk is lower and returns higher on the big budget films. How many 5 million dollar films have made 70 million? No one knows, and people have tried to figure out how to make this cost lower.

5) Executives Don’t Get Punished, Filmmakers Do: When a film bombs, it is the fault of the filmmakers. There is no turnover in the executive offices; the artists are just replaced with new artists and the machine learns nothing. There is no support of a filmmaker over his or her career, no talent development strategy so that a filmmaker grows by trying ideas, making mistakes and triumphs, learning from the experience and becoming a better filmmaker. It is all an opening-weekend-numbers and end-product-profits perspective. This is killing Cinema in the Movies.

Soderbergh dropped this math on us and concluded that if you're a studio, the setup is working fine. Then he pontificated that if he were given half a billion dollars, he'd gather up all the really good indie filmmakers he knew (he name-checked Amy Seimetz, Shane Carruth and Barry Jenkins), set them loose within a timeframe and a total budget, and say: go for it, make me three films, spend the money as you see fit. But no one has given him half a billion dollars.

Soderbergh did not mention his retirement or what he's doing next (painting?). There were no real words of wisdom other than a clear picture that he has been fighting within the system for the last 20+ years, which is inspirational all on its own. He said that right now, someone, somewhere out there is making something really cool and we're going to love it. This happens every year; it is the eternal hope of independent film and Cinema.

OK, that’s my memory of what happened, borrowing heavily from Joseph’s tweets to jog my memory. THANKS JOSEPH!

My takeaway is this: Steven Soderbergh is a brilliant filmmaker who combines his honest curiosity with an amazing intellect, sense of humor and generous spirit. He, more than any other American filmmaker, has embodied what it means to straddle the line between making art and corporate entertainment over the last twenty-five years. Somehow, he's managed to create art that matters, while also bringing art, soul and subversion to big budget studio projects that surely ran the gauntlet of endless notes and interference. He remade Solaris with George Clooney (!) and did five hours of Che on a RED One (!!). He is a film lover, a tech geek, and he shoots his own camera. He is a better filmmaker than all of us and he's hanging it up at 50. I was really, really hoping for an explanation. Rather than tell us why he's leaving, or where he's going, or what he loves about filmmaking, he gave us a lecture on studio economics. He said that the space for Cinema in Movies is shrinking because, quite honestly, the studios are doing really well without it and without us.

What I LOVED was that he gave an overview of how an organization (his hypothetical half-a-billion-dollar-funded studio) could develop and support filmmakers to create Cinema over their careers. An organization that is not driven by decision-makers who don't know or care much about film. An organization that could track and identify talent and give them a place to work. This, in my opinion, is exactly what the San Francisco Film Society has been doing for the last half decade. (DISCLOSURE: I am, of course, biased, since I have received fifty thousand dollars in grants over the last three years, seek regular advice and counsel from SFFS staff, and go to lots of free events that have made me a better filmmaker.) I am not alone; there is a whole group of us who are either from the Bay, or have been attracted here by the film community and the resources of the SFFS. Our scripts, films and post costs are getting funded by the organization. They are giving us free office space, a staggering amount of year-round programming and exhibition, and an incredible international festival.

I know that film arts organizations, and festivals, are often labeled “gate keepers” — something designed to keep people out and create zones of exclusivity. I don't really see it that way. Studios are exclusive; they can afford to be; their goals are different from creating Cinema. They sell tickets and Blu-rays. I think of film arts organizations like the Sundance Institute and the San Francisco Film Society not as gates; rather, I think they are like “victory gardens” that are keeping Cinema alive in America.

EDIT: Take a read of the great write-up by Sean Gillane over at Indiewire.

Fountain & Slugline: How to Write Screenplays in 2013
Fri, 19 Apr 2013

Screenwriting software is not a sexy area of technology development. There are no “game changer” releases like in camera-land. I have yet to see a “one more thing” moment of shininess, like with a new computer hardware introduction. At best, a screenwriting application is a practical workhorse that streamlines writing in the very specific way that is required for a proper screenplay. At worst, the app gets in the way with proprietary document types, clumsy machine ID authorization and the slow feature creep-expansion of bloatware.

A screenplay really is just words on a page, organized by a specific set of formatting rules and printed out in a single font. The form was developed on the dominant writing technology of the 20th century — typewriters and photocopiers. Entire industries were built around reading and interpreting that singular document form in order to make movies and TV shows. When writers abandoned the Selectric for computer-based word processing, the final form of the document never changed. It was still all tabs and indents, and roughly one minute per page, all delivered on paper — a recipe for how to make a movie in a form everyone already understood.

Initially, screenwriters adapted existing word processing software and experimented with macros and printer settings to make sure everything worked out right and their scripts looked like scripts. In 1982 came the introduction of Scriptor, the first word processing application designed specifically for the task of writing screenplays. Other developers soon followed with apps like Final Draft and Movie Magic Screenwriter. These apps promised to make life for the screenwriter easier by “…instantly and automatically handling all the hassles of Industry Standard Formatting…”. While I think it's debatable whether typing a character's name in all caps is actually a “hassle”, having a word processor anticipate formatting as you write is definitely useful.

This is where these screenwriting apps really shine: marking up a script into its proper form as you write, so you don't have to think about formatting. The other thing these apps give you, the writer, is an accurate page count while writing. When writing a text document that is the basis for timing out very expensive movies, knowing where you are as you write is invaluable.

A script, printed on paper, is a set of instructions for a team of people to create a movie, like the software to the hardware of production. But a script isn't a movie, and an electronic screenplay isn't a script. Every screenwriting app has a way to describe the special tags used to mark certain words and blocks of text that create the screenplay structure — a set of instructions for building the script. Those instructions describe what it will look like on a screen or printed on dead trees. All of it is just a simulacrum of what was once typed on a typewriter, right down to the fonts we use. The end product is still very simple metadata wrapped around a bunch of text that will get printed on dead trees in either Courier or American Typewriter.

Which is why it has always baffled me that screenwriting applications create proprietary file types that are impregnable.  Want to open up your Final Draft .fdx in a text editor and make a few changes?

Forget about it. Want to import your Movie Magic Screenwriter screenplay into another application and preserve your formatting?

Let me show you a four-step process that might work.

This is, and has always been, annoying, and it seems to me it's only done to prevent me from writing in an application other than the one I originated my script in. Add to that the headache of machine ID serialization and a crashed hard drive mid-project, and suddenly your time-saving Hollywood screenwriting app is just another bunch of hours wasted talking to tech support call centers, or reformatting in another app.

This is where feature-expansion and bloatware comes in. When you make a product that really does a few things well (tab so you don’t have to, accurate page count), it’s hard to innovate, so what can you do?

Add features nobody ever knew they needed: want your script read back to you in computer voices? Done! Did you see the cover art on the box?

Will these things actually help you write an original screenplay? Couldn't hurt. The one thing these apps will definitely never, ever let you do is work outside the sandbox of their app, often only on the single machine you licensed the app on. They make your computer a very expensive typewriter that will only let you work on your script on that singular typewriter.

I am a writer and director of things that are put in front of cameras. About half of my living comes from sitting down and writing scripts and treatments. I write all the time, and I write everywhere: on my phone, my laptop, my iPad, other people’s computers. If I had to wait until I got back to my desk to work on any given script, I’d never get anything done. And I am not the only person who works this way. I work collaboratively with other professionals who don’t have the “proprietary collaboration” software installed on their machines. They want to read my script, make notes on it and send it back — PDFs work just fine.

What has been missing in the script ecosystem is an open-standard screenplay exchange format: a standard that is easily created, edited and viewed regardless of application of origin. Thankfully, a bunch of screenwriters and nerdy screenwriters have created one. It’s called Fountain. Fountain is simply a markup language for screenplays using plain text. It’s like HTML, but instead of creating a web page, you’re creating a screenplay. It’s just a set of rules you follow as you write in ANY plain text app; you can then open that document in any of a dozen Fountain-friendly plain text editors and print out your script. You can easily write a screenplay on anything (phone, tablet, laptop) once you spend a few hours learning the syntax. It’s not really difficult; you’ll recognize it right away. Here’s how you write a scene heading and a line of action.

EXT. PARK – DAY

The late morning sun shines on the sandbox.

This is a scene heading because it starts with either INT. or EXT. in all caps and is followed by a blank line before the action begins. You can write that on any device, in any plain text editor. You could, conceivably, write it out longhand, scan and OCR it, and it would render as a screenplay. That’s the thing: Fountain is a set of rules, not a proprietary file format.
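To show just how mechanical that rule is, here is a minimal sketch in Python (my own illustration, not part of any Fountain tool) that flags a line as a scene heading using exactly the test described above: INT. or EXT. in caps, with blank lines around it.

import re

# Toy version of the rule described above, not the full Fountain spec
# (the spec also covers EST., I/E., and headings forced with a leading period).
SCENE_PREFIX = re.compile(r"^(INT\.|EXT\.)")

def is_scene_heading(prev_line, line, next_line):
    """True if `line` reads as a scene heading: INT./EXT. with blank lines around it."""
    return (
        bool(SCENE_PREFIX.match(line.strip()))
        and prev_line.strip() == ""
        and next_line.strip() == ""
    )

text = ["", "EXT. PARK - DAY", "", "The late morning sun shines on the sandbox."]
for i, line in enumerate(text):
    prev_line = text[i - 1] if i > 0 else ""
    next_line = text[i + 1] if i + 1 < len(text) else ""
    if is_scene_heading(prev_line, line, next_line):
        print("Scene heading:", line)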

You can write on anything these days, tablets, phones, computers, whatever is in your hand at the moment. So what’s the need for some special app that treats screenplays like they’re anything other than a bunch of words and formatting?

If I wanted to change exterior day to interior night, shouldn’t I just be able to open my screenplay on anything and just do that?

If you learn the syntax of Fountain and work in any number of plain text editors, you can do just that.

But what if I don’t want to?

This is where the new product Slugline comes in. It is a screenwriting app that works like a screenwriting app: autoformatting on the fly, realtime pagination, some simple templates, and so on. But it creates a plain text Fountain document rather than some proprietary file type. For collaboration there’s PDF export, and you’re already using Dropbox, right?

Want to migrate that project into Final Draft?

Use the “Export to Highland” function and send your screenplay to Highland to convert it to an FDX file. Get Highland anyway and use it to take PDFs of your old scripts and convert them into editable Fountain docs.

My Slugline Life

I have been a guinea pig for the development of Slugline since last summer. I was already playing around with writing in Fountain, and I was just beginning the rewrite of a feature-length screenplay I’d been working on for a year. I wrote, and rewrote, the screenplay for my multi-grant-winning script “East County” in various alpha and beta builds of the app.

Yes, I have been so desperate to leave the walled garden of Final Draft, especially in the endless draft-writing phase of a script’s life cycle, that I was willing to work in an alpha build of a piece of software. Moving away from Final Draft and into the world of Fountain and Slugline has meant that I could write on my phone, my iPad and my assortment of MacBooks. I saved my doc to Dropbox and could access it anywhere and everywhere. I spent about half the time just composing ideas in ByWord, grabbing notes, articles and snippets of conversations and marking them up as notes like this:

[[ Here is a note, I can paste any text I want and embed it in my script]]

When I render my script, the notes disappear. There is a whole visible/invisible structure you can build by simply tagging stuff with # and * and [[. A whole skeleton gets built around the script that is visible when you work, but disappears when you present the script. While half my time was spent grabbing and marking up notes, the other half was spent writing out the actual action and dialogue that makes up the script. These notes and structure elements are always visible when I work, and they proved invaluable as I turned raw text into fully fleshed-out scenes.
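To give a sense of how little machinery is involved, here is a rough Python sketch (my own illustration, not anything Slugline actually ships) that strips [[notes]] and # section lines the way a Fountain app hides them when you present the script.

import re

NOTE = re.compile(r"\[\[.*?\]\]", re.DOTALL)  # [[ inline notes ]]

def present(fountain_text):
    """Drop the writer-facing skeleton: [[notes]] and # section lines."""
    without_notes = NOTE.sub("", fountain_text)
    kept = [line for line in without_notes.splitlines()
            if not line.lstrip().startswith("#")]  # section/beat headings
    # A real renderer would also tidy up the leftover blank lines.
    return "\n".join(kept)

draft = """# ACT ONE

EXT. PARK - DAY

[[ Here is a note, I can paste any text I want and embed it in my script]]

The late morning sun shines on the sandbox."""

print(present(draft))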

I appreciate Slugline for all the things I don’t see. There is no toolbar, just a handful of script templates and only two fonts (Courier and Courier Prime). Slugline works in the background, anticipating what part of the script I’m working on and quietly marking up my Fountain document. There are a couple of commands you’ll want to remember; the main one, for me, is #, which creates a section title line. I can very quickly sketch out all the beats for my story and then go back in and fill in notes for what I want to write. I have found myself laying out entire sections of the script, reading them through and then realizing a reorder was needed. All these notes, this whole iterative thought process, stay on the page as I work, and I can toggle them on and off. It is a very interesting way for me to write.

The feature I love best is just how well Slugline infers what I want to write next, rather than forcing me to declare it before I write it. The app just gets out of the way. When all the notes, ideas and research coalesce in my head, and I have that writerly moment where I start “hearing” the script, I am free to just write and never worry about key commands.

Stu Maschwitz and Clinton Torres, the creators of Slugline, have said that Slugline is for the 99 drafts before your “Final Draft”. For the agencies and writers in the studio system who rely on the script breakdown features of Final Draft, that may be true. However, most people writing scripts aren’t writing them for Hollywood. There are so many more writers, working on so many more projects, who will find that Slugline and Fountain are all they need to get their work done. I am one of them.

Slugline is available now for $39.99 from the App Store (it’s Mac only).

DISCLOSURE: I am friends with Stu and Clinton and have been deeply involved, as an end user, during the development of Slugline 1.0. I received no compensation for this, nor did I receive compensation for writing this blog post. This is a letter of love to an app I totally believe in, one that has changed how I work. Like most of the participants in the beta team I received a free copy of Slugline, a $39.99 value; however, I gladly would have bought it myself.

The post Fountain & Slugline: How to Write Screenplays in 2013 appeared first on ProVideo Coalition.

Markerless Facial Motion Capture with a Kinect http://www.provideocoalition.com/markerless_facial_motion_capture_with_a_kinect/ http://www.provideocoalition.com/markerless_facial_motion_capture_with_a_kinect/#respond Sat, 17 Nov 2012 22:14:26 +0000 In about two years time, I think, the overwhelming majority of 3D character animation content will not be made by professional 3D artists using professional, high end software like Maya. 3D character creation and animation are poised to become as accessible and ubiquitous as digital cameras and non-linear video editing. There are too many developers

The post Markerless Facial Motion Capture with a Kinect appeared first on ProVideo Coalition.

In about two years’ time, I think, the overwhelming majority of 3D character animation content will not be made by professional 3D artists using professional, high-end software like Maya.

3D character creation and animation are poised to become as accessible and ubiquitous as digital cameras and non-linear video editing. There are too many developers working on making character animation as easy as navigating a videogame. There will be consumer apps galore running on everything from phones to tablets to the increasingly rare “tower” computer.

Most of these new animators will be folks wanting to animate their chat avatars with their own facial movements, delivering on the grand promise of the Internet — appearing to be something you are not. App developers are poised to let iPad users comp pre-built animated CGI elements into their social media videos. Prepare for whole new genres of internet video born out of non-3D artists using easy-to-use 3D character animation tools.

Will it be as good as big budget 3D images created by highly skilled artists?

No. But who cares?

Not every person wanting to create 3D content needs it to look like Avatar. However, while these tools seem kind of gimmicky now (like a lip-synched chat avatar), they will undoubtedly find comfortable homes in the pipelines of professional media creators who never thought about adding CGI Character Animation to their menu of services until they suddenly could.

These newly minted character animators will be writer/directors like myself who have enough technical ability to put some pieces of hardware and software together. We will use tools like iPi, FaceShift and physics engines inside 3D apps to build keyframeless, performance-driven character animation. We will use Kinect cameras for facial motion capture and multiple low-res PS Eye cameras for markerless body motion capture in our tiny studio spaces. The economics of “off-the-shelf”, for better or worse, always wins in the end.

I got a chance to take out FaceShift’s debut application, the eponymous “FaceShift”, an affordable, cross-platform, Kinect-based markerless facial motion capture system. Let me break that sentence down. The app runs on Mac OS X, Linux and Windows 7; it uses the Microsoft Kinect sensor and camera to track your facial movements without the need to put stickers on your face and look like a page out of Cinefex. Pricing is tiered, starting at 150 dollars for unlimited non-commercial usage. The professional pricing is an annual subscription (yes, a subscription), starting at 800 dollars a year for the “Freelance” version on up to 1500 per year for the “Studio” version. You may be wondering how 1500 dollars a year is “affordable”. Facial motion capture systems have, thus far, been proprietary, complex and technical, and they generally start at around 8 to 10 grand.

How Does it Work?

Surprisingly well, especially for a version I tested so early in the beta. I tested the app on both my MacBook Pro and my loaner BOXX desktop machine, along with my Kinect camera. The app is unbelievably user-friendly: no need for a manual, just launch it and go.

The app has you go through a calibration phase, which is remarkably like a video game. An avatar on the screen does the requested facial gesture, and you can’t help but mimic it, like some kind of pre-verbal communication ritual buried deep in our primate brain. You hold the facial gesture, like a smile or eyebrow raise, and take a snapshot. As you complete each gesture request, the app shows your face map slowly being built, again a very game-like reward system. Pretty cool.

When you’re done building your personal face map, the app just works. When you turn your head, so does one of the pre-built models. Smile, arch an eyebrow, ditto. When you talk, so does the model. It’s fun and kind of unsettling.

The app has record, edit and playback functions so you can capture different performance takes, chop them up and export them to Maya and MotionBuilder as data streams. You can export a performance as virtual marker data in FBX, BVH or C3D and use it in a bunch of other 3D apps. There’s also a Faceshift plug-in for Maya and MotionBuilder, and you can use it to drive custom characters in those apps.
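BVH in particular is just structured plain text, so a captured take stays readable outside any one 3D app. As a rough illustration (a generic BVH reader, not Faceshift’s exporter), this Python sketch pulls the frame count and frame time out of a file’s MOTION section; the filename in the usage note is made up.

def bvh_motion_info(path):
    """Return (frame_count, frame_time_seconds) from a BVH file's MOTION section."""
    frames, frame_time = None, None
    in_motion = False
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.upper() == "MOTION":
                in_motion = True
            elif in_motion and line.startswith("Frames:"):
                frames = int(line.split(":", 1)[1])
            elif in_motion and line.startswith("Frame Time:"):
                frame_time = float(line.split(":", 1)[1])
                break  # per-frame channel data follows; we only need the header
    return frames, frame_time

# Hypothetical usage with an exported take:
# frames, frame_time = bvh_motion_info("take_01.bvh")
# print(frames, "frames at", round(1.0 / frame_time, 1), "fps")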

The team behind Faceshift is a group of really smart and sharp computer scientists in Switzerland. I think I contributed very little as a participant in FaceShift AG’s beta, but I was happy to be along for the ride. What I did do was take some time to talk with FaceShift’s CEO, Thibaut Weise, and find out what FaceShift’s plan is.

Here’s the interview:

Eric Escobar: The Kinect camera has been criticized for its lack of fine detail tracking, and yet you all built a markerless facial tracking system out of it. Have you found that the Kinect (and Asus camera) limits what you can do with your app?

Thibaut Weise: The accuracy of the Kinect and Asus cameras is indeed very low, but with faceshift we have put a lot of effort into using the minimal information we get from the cameras for accurate facial tracking. Nevertheless, we are looking forward to the next generation of sensors, as with improved sensors we will achieve even better tracking.

EE: How much did the Kinect, and the Kinect hacking community, influence the creation/development of your product?

TW: We have been on it from day one. In the first week after the release of the Kinect we had already developed a first prototype of the facial tracking system using our previous pipeline that we had developed for high quality 3D scanners. The Kinect was a great opportunity as it was the first affordable commercially available 3D sensor.

EE: Is your target market predominantly Character Animators, or do you see Faceshift being used by other markets?

TW: We are mainly targeting the character animation market, but we also have a lot of interest from people in research, art, HCI, and remote education. With the real-time capability, it is ideally suited for online interaction in multi-player games and services.

EE: Are you using OpenGL? Any thoughts about the role of GPU processing for realtime face tracking, puppet rendering?

TW: We are using OpenGL for the rendering pipeline, but the tracking itself is done purely on the CPU. We have used GPU computing before as there are quite a few parts in the algorithm that can be parallelized. However, there is typically only a performance gain for higher end graphics cards, and for compatibility reasons we decided to only use the CPU.

EE: Any plans to stream video out of Faceshift into Video chat apps like iChat or Skype?

TW: We will not stream the rendered images out of faceshift. Instead we stream the facial tracking parameters in an open format, so anyone can develop their own applications and plugins which use the faceshift tracking data. In 2013, we will also release an API that can then be directly integrated into other software without the need to have a separate faceshift application running.
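Weise doesn’t spell out the wire format here, so treat this purely as a hypothetical sketch of the kind of thing “anyone can develop”: a tiny Python listener that accepts JSON packets of blendshape weights over UDP and hands them to a puppet rig. The port, packet layout and field names are all invented for illustration; this is not Faceshift’s actual protocol.

import json
import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical port, not Faceshift's

def drive_puppet(weights):
    """Stand-in for whatever your plugin does with per-frame blendshape weights."""
    print("smile:", weights.get("smile", 0.0), "brow_up:", weights.get("brow_up", 0.0))

def listen():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    while True:
        packet, _addr = sock.recvfrom(65536)
        frame = json.loads(packet.decode("utf-8"))  # e.g. {"smile": 0.8, "brow_up": 0.1}
        drive_puppet(frame)

if __name__ == "__main__":
    listen()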

EE: Are there plans for other app plug-ins, other than Maya, for Faceshift? Which apps?

TW: We will support MotionBuilder with the first release of the software, but we are also planning to roll out plugins for 3DS Max and Cinema4D at a later stage. Besides, with our streaming format anyone can develop their own application and plugins for their favorite 3D software.

EE: Any plans for multicamera support, like iPi’s use of two Kinects or six PS Eye’s for a greater range of capture?

TW: Yes, we plan to support multiple 3D sensors in the future. We will also support other (offline) 3D capture systems such as high-quality stereo reconstructions based for example on Agisoft’s Photoscan.

EE: While I know the app is called Faceshift, any plans to do body capture? Hands?

TW: We are focusing on faces, while iPi for example is a great system for full-body capture. Currently there are no plans to develop our own technology, but we are looking into ways in combining the different systems.

EE: Are you all following the Leap development as a possible capture device?

TW: Yes, the development of the Leap is exciting, but the (first) device will not deliver 3D data as we need it for facial tracking.

EE: Why a subscription model?

TW: For the pricing, we’ve been considering it for a long time, and we believe an annual subscription is best for the customer, as it contains all updates and upgrades – and this will include compatibility with the next generation of sensors, as well as texturing. The alternative would be to have a more expensive one-time license (e.g. for $1600 instead of $800/year) and each major upgrade would then be $800.

Grand Philosophical Questions (Half-joking, but kind of not)

EE: Realtime face tracking with off-the-shelf hardware/software will necessarily bring about an era of untrustable video chat authenticity (remember the AT&T pay phone scene in Johnny Mnemonic?).

How far away, technically, do you see this happening? What are the technical hurdles before Faceshift makes videochat as virtually anonymous as text chat? And is this your goal?

TW: In order to animate a virtual character photo-realistically, several technical challenges still need to be overcome, including accurate tracking, expression transfer, photorealistic rendering, and audio distortion/analysis/synthesis. With faceshift we aim to solve the first two problems, and our goal is that people can use their avatars to express themselves emotionally. This does not necessarily mean a photorealistic character; it can be any kind of virtual character, and this will lead to exciting possibilities in online communication. The issue of untrustable video chat authenticity will come up in the future, but I believe that we still have some time, given that Benjamin Button was so far the only character that truly seemed realistic.

DISCLOSURE: I was on the early beta for this app and was offered, like all other beta participants, a US$200 code that expires on December 1st (which I have not redeemed). I contacted the developers and requested to be on the beta as part of my research into the emerging field of real-time, keyframeless animation systems.

The post Markerless Facial Motion Capture with a Kinect appeared first on ProVideo Coalition.

NVIDIA MAXIMUS ON A BUDGET http://www.provideocoalition.com/nvidia_maximus_on_a_budget/ http://www.provideocoalition.com/nvidia_maximus_on_a_budget/#respond Mon, 08 Oct 2012 21:37:45 +0000 New Adventures in 3D A few months ago, I set out to explore the 3D landscape for the small post house owner. I couldn't have picked a more complicated season to do it. Caveats, I am not a 3D artist by trade or training; I have a very basic conceptual understanding of 3D* technology and

The post NVIDIA MAXIMUS ON A BUDGET appeared first on ProVideo Coalition.

New Adventures in 3D

A few months ago, I set out to explore the 3D landscape for the small post house owner. I couldn't have picked a more complicated season to do it. Caveats: I am not a 3D artist by trade or training; I have a very basic conceptual understanding of 3D* technology and almost no practical experience; and I'm allergic to following instructions.

2012 looks like it was the year of “3D for the rest of us”. It's daunting, intimidating and exhilarating. I can only equate it to a few years back when camera manufacturers, old and new, started flooding the market with affordable large-sensor cameras. Our “videos” suddenly looked like “movies” and we, overnight, started dinging new cameras if their sensor wasn't big enough — like we hadn't just spent the last twenty years shooting on tiny CCDs.

I think the summer of 2012 was a similar moment for 3D. It's a perfect storm of plummeting prices for both hardware (not unusual) and software (kind of unusual in this market). The big-iron, ILM-class digital VFX software suites, once priced into the stratosphere, went on a fire sale, while brand new, incredibly robust and affordable competitors appeared on the scene.

Toss in the completely new space of OpenGL GPU-accelerated 3D plug-ins for After Effects (Trapcode Mir and Video Copilot Elements), along with the After Effects CS6 raytrace engine, and we suddenly have a very wide set of tools to use in a very familiar desktop companion.

3D Writing Uncertainty Principle

My survey started with Autodesk Ultimate Creation Engine 2013 and a loaner BOXX Windows 7 machine with nVidia Maximus hardware. The year is wrapping up on my old, albeit souped-up, MacBook Pro (see my last article) with Modo 601, Trapcode Mir and Video Copilot Elements. I was wowed by what I could do with the Autodesk suite and the Maximus system (simply amazing things), but I was also blown away by what my “out-of-date” hardware could do with all those new apps.

When writing about 3D anything these days, you have to account for a kind of “uncertainty principle”, as things in the industry change at a faster rate than one can document them. For instance, Luxology, makers of Modo, and The Foundry, makers of Nuke, Katana, etc., used to be two companies. Now they are one. Developers merge, they one-up each other with new releases, and plug-ins pop up out of nowhere. You can't really ever use the term “game-changer” (nor should you); the game IS change. You can hang on by your fingertips as this all rockets off in different and exciting new directions.

BOXX Boxes

A summer of 3D required the right machine — a Windows 7 machine. Why Microsoft?

The simple answer is that I can't run all of Autodesk's software on a Mac; just a tiny, although important, subset (Maya & Mudbox) is ported to OS X. I am a Mac user: my first post-production machine was a Mac IIci with a ToasterLink card to send still frames back and forth to my Toaster 4000 rig. I even worked at Apple, Inc. a long, long time ago. I'm not a total Windows moron; there have been plenty of times when I've had to get work done on a Windows box. But by and large, I have avoided the OS from Redmond, and I think it is totally just momentum at this point, not a behavior based on rational thought.

The fact is, there are way more options when building fast, reliable, high-end towers on the Windows 7 platform. There wouldn't be such a vibrant Hackintosh community if it were otherwise. For this project, I was not interested in building my own machine just so I could write some articles. I researched online and found a wide variety of companies that will build machines for me. On the big corporate end there is HP and the HP Z8xx series. These are their high-end boxes, outfitted with top-shelf CPUs and GPUs and a solid build quality. These towers are for every kind of high-end work there is: medical imaging, number crunching, 3D animation, etc.

I wanted to find a company that built machines specifically for 3D animation and VFX pipelines. There are quite a few, but the winner, in my book, is BOXX Technology out of Austin, Texas. They're my winner for two reasons: they build bulletproof machines (you could park an elephant on the one I tested**), and you could not find a nicer, more down-to-earth group of people to talk to. BOXX understands the needs of 3D and VFX workflows and companies, small and large.

They build everything from affordable entry-level workstations that multitask everything, on up to blinged-out workhorses connected to dedicated render farms. When it came to deciding which BOXX machine to use, I went with the more humble of their offerings, a quad-core Intel Core i7 3820 in their 4920 machine. For the GPU, I splurged and had them put in an nVidia Quadro 2000 and a Tesla C2075 (aka Maximus).

Why the Maximus in such an entry level machine?

To test the hype.

I wanted to see what pouring limited dollars into a GPU animation solution would get the single-person shop. What would an artist get if he or she bit the nVidia Maximus bullet?

The short answer is: a whole lot for a few things, not that much for most things.

nVidia Maximus, the pairing of one of the “pro” nVidia Quadro graphics cards with one or more of their Tesla co-processor cards, delivers a tremendous amount of real-time rendering power in specific apps, doing specific kinds of things. It's amazing to watch massive numbers of particles bounce and dance around at 45fps in Autodesk 3DS Max, but if you're not doing a ton of simulation or CAD work, it becomes a lot of expensive power that's rarely utilized.
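If you want to confirm what a Maximus pairing actually exposes, a quick sanity check is easy enough. Here's a rough Python sketch, assuming the NVIDIA driver and its nvidia-smi utility are installed: it just lists the GPUs the system sees, so both the Quadro (driving the display) and the Tesla (pure compute) should show up.

import subprocess

def list_gpus():
    """List the GPUs the NVIDIA driver reports, e.g. the Quadro/Tesla pair in a Maximus box."""
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for gpu in list_gpus():
        print(gpu)  # e.g. "GPU 0: Quadro 2000 (UUID: GPU-...)"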

Add in the fact that sooner or later, you will bounce that project from RAM, through the CPU and on and off of your hard drive(s). You'll wish you'd spent some of that GPU money on other parts of your machine.

This is where the BOXX box starts shining. While I was distracted trying to make the Maximus system do tricks for me, I hadn't noticed how incredibly fast everything else was, considering I was using an entry-level quad-core machine. BOXX builds balanced, finely tuned systems. They bounce data back and forth between SSD caches, dual SATA drives and RAM. The Core i7 CPU was not overclocked, but it had a built-in proprietary cooling system just in case it heated up (I rarely heard the fans turn on; the machine stayed cool and quiet through all my testing).

After a week of testing different apps and workflows, I found myself having fun with a lot of non-Maximus stuff. After Effects, Premiere and Photoshop cruised with 4K media and projects. Luxology's Modo 601, a 3D app that leans heavily on CPU resources, was fast and responsive in its realtime viewport renders. In fact, using Modo 601 on the BOXX machine is the closest I've come to experiencing that whole computer-is-a-camera feeling — when the awkward artifice of the UI drops away and I am “directing” objects and the camera. But that is another article entirely.

GPU Cards Are Volatile Commodity Hardware

Which is why I am left scratching my head about nVidia's strategy — why are the very affordable GTX cards, pejoratively called “gaming cards”, as fast as, if not faster than, the very unaffordable Quadro + Tesla pro cards?

The argument is that the Quadro cards are made by nVidia, not the seemingly infinite number of manufacturers that build GTX cards using technology licensed from nVidia. nVidia also writes the drivers, does tech support for their cards and works directly with software companies like Adobe to ensure compatibility. That's the thinking. And I think it makes sense if I were a big VFX company, ready to drop 10-20K on 3D workstations for my pipeline.

I'm not. And I think I'm like a lot of people in the market for hardware right now. I am far more likely to take 15 grand and buy two “entry level” BOXX machines — quad- or hexa-cores with multiple GTX cards. I'll have plenty of CUDA cores, plenty of acceleration in the viewport of a 3D app, and, when it comes to CPU rendering, a good amount of power to throw at it. If I want an overclocked CPU and RAM, BOXX can do it. Not in the “smoke my proc and brag” gamer-mod fanboy kind of way that makes most professionals shy away from overclocked machines. Rather, they overclock to get the most out of the cycles without overheating and killing the machine.

Will it be as fast as the Maximus system for Maximus enabled stuff?

No.

As stable overall as the Quadros at GPU processing?

Probably not, but now I have two machines instead of one, getting twice as much work done. Further, how many old machines do you have sitting in a closet or have sent off to recycling in the last decade?

These machines have a very finite lifespan; the GPU has a wildly short half-life and will get pulled and replaced faster than the tower it's in. I'm not going to spend the equivalent of another tower on a graphics card when I could buy another tower, or just save the money for other parts of my business.

You Could Build A Machine But Why?

It's true: you could go on NewEgg, buy a bunch of components, then stuff them into a ready-made tower and save even more money on parts. I really thought about doing that. If you're a gearhead who likes to build stuff, go for it. It is amazing what you can put together yourself. But if you're a small company or a one-person shop, I think you're much better off buying a pre-built system with a three-year warranty and someone you can call or email when things go terribly wrong.

The folks at Boxx gave me their personal cell phone numbers.

*By 3D, I mean computer generated imagery (CGI), not Stereoscopic 3D (S3D), just to be clear.
** I don't know if an elephant could stand on one of the BOXX machines and I don't recommend it.

DISCLOSURE: BOXX loaned me a machine to use for a month, and only a month. I had to pay shipping costs to return it. I received no discounts or free computers. I tested Autodesk software and Modo 601 using review NFR copies; I cannot use those apps for professional work. I tested Adobe CS6, VCP Element and Trapcode Mir with my own copies (on Mac) or with the 30-day trial download (CS6 on Windows).

The post NVIDIA MAXIMUS ON A BUDGET appeared first on ProVideo Coalition.
