When I got to looking back over 2017, I'm struck by the fact that I'm not really struck by anything particularly ground-breaking or earth-shattering that came down the line this year, at least as far as film and video post-production goes. This roundtable discussion is something PVC has done before, and it's fun to read back on them as there is a lot of diverse talent that contributes to ProVideo Coalition. Reading back on writers' thoughts from last year, I don't think much has changed this year. Thinking back on my own year of work, I know that not a ton changed for me. I probably expected a lot of clients to come in the door asking about 360 VR and HDR … but they didn't. It feels like 360 video production hasn't taken off like the NAB shows of the last couple of years would make you think it would (hmmm, didn't some NAB shows of years past make you think 3D stereoscopic would take off?), but there are some companies, both big and small, doing it and doing it well. 360 video seems to have become a viable niche that these dedicated production companies offer as a turn-key service instead of us all having to learn it. Both Adobe and Apple did add 360 video editing features to their NLEs, but they added them via technology acquisitions rather than building them in-house. To me that says "VR video is important but not that important."
I was thinking that 2017 might be the year that nearly 100% of the jobs I edit came in the door in 4K or higher resolution. I was wrong about that, as there was still a LOT of 1080 footage that I sat down to edit this past year. While this was a bit surprising, in the end it was a blessing, as some productions … shoot … a … lot … of … footage these days. I think the reality is that reality-style shows and productions shooting multiple cameras don't have the luxury of unlimited transfer time and unlimited drive space when it comes to moving all that footage around. The 4K work that I saw this past year was reserved for more cinematic, film-style productions where the filmmakers were crafting beautiful images rather than capturing everything under the sun. That's the way it should be, and it looked beautiful. As for that claim I read a time or two that 8K is the new standard for filmmaking … hogwash. The few times I saw footage larger than 4K, it was instantly transcoded out of its native resolution, not just for the offline edit but for final finishing as well. I'm sure in some magical super-computer world they are cutting and finishing 8K, but not most of us. And we won't be in 2018 either. I said this for 2016 and I say it again for 2017: I even had to deliver some SD spots this past year.
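Knocking heavy camera originals down to offline-friendly proxies is easy to script. Here's a minimal sketch in Python that just assembles an ffmpeg command line; ffmpeg itself, the file names, and the choice of ProRes Proxy are my assumptions, not anything the workflow above prescribes:

```python
# Sketch: build an ffmpeg command that transcodes a heavy camera file
# down to a 1080p ProRes Proxy for the offline edit.
# Assumes ffmpeg is installed; file names are hypothetical examples.

def proxy_command(src, dst, height=1080):
    """Return an ffmpeg argument list for a ProRes Proxy transcode."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",   # downscale, preserve aspect ratio
        "-c:v", "prores_ks",           # ffmpeg's ProRes encoder
        "-profile:v", "0",             # profile 0 = ProRes Proxy
        "-c:a", "pcm_s16le",           # uncompressed audio for the edit
        dst,
    ]

cmd = proxy_command("A001_C004_4K.mov", "A001_C004_proxy.mov")
# To actually run it: subprocess.run(cmd, check=True)
```

The same function can be looped over a card's worth of clips, which is usually how the "instant transcode" happens in practice.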
One thing I did see is more projects bound for the web, whether offshoots of bigger productions or agencies dedicating entire campaigns to YouTube and Facebook. There is no doubt that web video is huge and we're all just producing content for it. In 2017 it feels like a lot of agencies and production companies finally realized that it takes real resources to produce good web content, so they aren't trying to spend rock-bottom dollar on it anymore. That's a plus. The biggest workflow change I saw with the big uptick in web content is the need for captioning, since so many web videos are viewed with the sound off. This might be an unfortunate side effect of the hated autoplay video, but if the client wants captions, we give them captions. This is where cloud-based transcription tools and their editing-app integrations, like SpeedScriber and Transcriptive, have been godsends. I never would have thought that transcription tools would be my BIG THING from 2017, but lo and behold … I think it is.
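Once the transcript comes back as an SRT file, muxing it into the web deliverable is a one-step job. Another hedged sketch that only builds the ffmpeg arguments (the file names are hypothetical; mov_text is the MP4-native caption codec):

```python
# Sketch: mux an SRT caption file into an MP4 as a soft subtitle track,
# so sound-off viewers can still follow the spot.
# Assumes ffmpeg is installed; file names are hypothetical examples.

def caption_command(video, srt, dst):
    """Return an ffmpeg argument list that adds SRT captions to an MP4."""
    return [
        "ffmpeg", "-i", video, "-i", srt,
        "-c:v", "copy", "-c:a", "copy",    # no re-encode of picture or sound
        "-c:s", "mov_text",                # MP4-native subtitle codec
        "-metadata:s:s:0", "language=eng",
        dst,
    ]

cmd = caption_command("spot_web.mp4", "spot_web.srt", "spot_web_captioned.mp4")
```

In practice, YouTube and Facebook also accept the SRT as a sidecar upload, which is often simpler than muxing.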
So that makes me wonder: what about VR and AR and HDR, which were supposed to be the coming things? I think HDR is still in its infancy as far as mass consumption goes. Better (affordable) cameras and better (affordable) monitors mean HDR tools are trickling down into the affordable edit suite, but it'll still be a while, a long while, before HDR is part of that Facebook video. AR was supposed to be a big thing, but that seems more the realm of the app developer than the video producer. The zombie gunship thing that Apple demoed was fun for about 1 minute on the new iOS, and maybe I'll make better use of placing virtual IKEA furniture in a room when I move to a new house, so I'm still waiting for the breakthrough AR app on my phone and in my life. Truth is, I'm not looking very hard for it. As for VR, a VR arcade opened in my town last year. I haven't been there or taken my kids there yet. One is too young but maybe I'll take the other. It's only a block from my office. My oldest son got a VR Christmas gift where you download a bunch of 360 video apps to go along with, gasp, real printed books! The funny thing about the 360 apps that go along with this gift: they are all rendered 360 animation, without a frame of video in any of them. Maybe app development is as important a skill to have today as video literacy. I do think it's harder to learn.
I’d agree with you, largely about 4K. But I just did an interview with Dan James who cuts Grand Tour (what used to be Top Gear), which is definitely “reality.” And they are shooting as much as 30 hours a day of material ALL IN 4K! It’s a massive amount of material and obviously TONS of data (mostly Arri Alexa), but they have the budget to do it, obviously. Dan does believe that MOST of the people shooting 4K are doing the more cinematic projects though, so he’s the exception to the rule.
As for cheap stuff, I'd point to the increase in cheap stock footage. My use of places like pond5.com and videohive.net has almost completely replaced my use of places like Getty and ArtBeats – proof of which is the notification I just got from ArtBeats that they are closing shop in February. Unfortunately, I had just dropped a large sum of money on a one-year subscription to their PremiumBeat service, which was supposed to give me "free" stock footage through October of next year. Just a few years ago, prices for decent stock footage ran in the hundreds of dollars a shot. Now I can get pretty great-looking footage for anywhere between $8 and $40 per shot. I've even subscribed to places with cheaper monthly or annual subscriptions. I've found the quality of those is rarely good enough for my projects, but sometimes you find gems, and if you can find even one decent shot per month, it pays for itself.
Stunning Good Looks
Maybe app development is as important a skill to have today as video literacy. I do think it’s harder to learn.
It does seem to be where the money is. That and IT. I just read an interesting story about how India is seeing massive IT layoffs because companies have learned that outsourcing there is a losing proposition. I remember reading a few years ago about how app development was going to be a losing proposition for Americans as all that work was going overseas, but I’m not sure that’s the case anymore.
This year we saw some promising innovation in cameras. Sony announced the 6K full-frame VENICE, Panasonic released the EVA1 (a lighter, VariCam LT-style camera), Canon finally gave shooters raw recording with the C200, Blackmagic made a much more user-friendly camera in the URSA Mini Pro, and RED announced the Monstro, an 8K VistaVision-sensor camera. The big takeaway is that larger and larger sensors are here to stay, and the resolution to make those impactful fields of view even more eye-popping is right around the bend, if not already here.
If you can afford any of the cameras listed above, you will find yourself with a very capable tool and very little limiting technology. No longer can a shooter say their camera is holding them back from creating wonderful images. Great-looking footage falls squarely on one's craft, preparation, willingness to learn the new technology and confidence to apply it well.
4K is great, 6K is better and 8K seems to be best for 2018 … this, of course, is what people will say. The reality is I'm still shooting a small percentage of my projects in 4K, with a far higher percentage of my clients asking me to shoot in 1080. Will we see a 10K camera announcement in 2018? I would not bet against it, but the likelihood of shooters capturing footage at 8K, let alone 10K, still seems pretty far off for most of us. One thing I want to remind shooters: craft is more important than camera choice these days, when many cameras offer around 15 stops of dynamic range and larger-than-4K sensors. Great cinematic footage comes not only from camera choice but also from the lenses one chooses, the lighting, the movement and the composition. The camera is just part of the equation.
VR and 360 video … I will call it here. I think these two fads will lose their glimmer in 2018. It is far easier for an audience to be emotionally moved by a well-crafted edit and story than by a 360-degree view of a scene or setting. Maybe 180-degree footage will become a thing, but I doubt it. I always think of shot selection and editing as a visual metaphor for good writing and grammar. Good writing can strike an emotional chord in as few words as possible, and great visuals and story can do the same by highlighting what we want to see in that moment.
Lenses … as sensors grow in size, so too will the lens options. This could be a huge year for anamorphic lenses. Rumor has it Sigma is developing an anamorphic line, and at NAB 2017 Atlas Lens Co. announced their less expensive anamorphic options. We will have to wait and see, but I expect NAB 2018 and Cine Gear 2018 to be especially interesting.
I’d say the biggest camera/production trends at the turning of the year are “beyond pixels” and the birth of practical HDR. These may not be things we all deal with on a daily basis, but they’re fundamental shifts in underlying capabilities and available infrastructure that will affect how we perceive “video” in the years ahead.
Beyond pixels: 4K/UHD didn’t last long as a plateau on which camera systems could stabilize, but we’re not converging on 8K as the standard. Increasingly we’re seeing cameras liberated from the tyranny of a fixed pixel count.
Many, led by the DSLR / mirrorless crowd, supersample their images from 5K or 6K or more down to UHD or HD for recording. We're at the somewhat awkward point where the 4K out of a lowly GH5 or EVA1 out-resolves the 4K from a VariCam 35 or VariCam LT (and yes, there's more to image quality than resolution, but the point remains).
Some cameras—RED really led this revolution—don't restrict recording to standard frame sizes, either, offering a variety of native frame dimensions for capture. Often these are raw captures, but not always: you can shoot "6K anamorphic" on a GH5 (actually 4,992 x 3,744), and get a long-GOP HEVC file, ready to edit.
Yes, 8K as a broadcast and distribution standard is coming, with the 2020 Olympics as a driving function (never mind that it'll be a very limited broadcast standard by that point; earlier Olympics served as similar drivers for HD and UHD broadcast milestones). But as we get beyond the point where pixels are resolvable, where vernier acuity / hyperacuity takes over from spatial acuity as the psychophysical determinant of spatial image quality, "smoothness" takes over from "sharpness," and the actual pixel count becomes less critical, less of a make-or-break number. Once you get past FHD resolution, it's no longer as much about "is it good enough?" as "it's more better."
Cameras and NLEs are increasingly format-agnostic: once you’re “beyond pixels” you can just pick the resolution that works for your project and not worry about output formats until it’s output time.
Practical HDR: While we've been looking at impressive HDR demos for several years now, it's only at the tail end of 2017 that we're starting to see practical HDR: high dynamic range for the rest of us, and wide color gamut along with it. HDR-capable TVs are now widely available, almost all with HDR10 and an increasing number with HLG. Even comparatively low-end cameras like the Sony FS5 and a7R III and the Panasonic GH5 and EVA1 offer HLG recording alongside their existing log modes. The recent update to Final Cut Pro X makes authoring HLG and HDR10 output as painless as cutting a Rec.709 show, whether your footage comes in as log or HLG. Netflix and Amazon Prime both offer HDR distribution, as does YouTube, and ATSC 3.0 will deliver it OTA for those of us who get TV via antenna (yes, some of us still do!).
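Getting an HLG deliverable recognized as HDR downstream mostly comes down to correct color metadata. A hedged sketch of the relevant ffmpeg flags, again just assembling the command line (file names are hypothetical; arib-std-b67 is ffmpeg's identifier for the HLG transfer function):

```python
# Sketch: build an ffmpeg command that encodes and tags a file as
# BT.2020 wide-gamut with the HLG transfer curve.
# Assumes ffmpeg built with libx265; file names are hypothetical.

def hlg_tag_command(src, dst):
    """Return an ffmpeg argument list tagging output as BT.2020 / HLG."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        "-color_primaries", "bt2020",    # wide-gamut primaries
        "-color_trc", "arib-std-b67",    # HLG transfer function
        "-colorspace", "bt2020nc",       # non-constant-luminance matrix
        dst,
    ]

cmd = hlg_tag_command("master_hlg.mov", "delivery_hlg.mp4")
```

Without those three tags, many players fall back to Rec.709 assumptions and the image looks flat and desaturated, which is why the metadata matters as much as the pixels.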
The early results won't always be pretty; heck, most of the HDR demo reels at the past couple of NABs showed that even the high-end folks haven't yet mastered this new language. But the crucial fact is that now the workers can control the means of production, and viewers can see the results. May a thousand flowers bloom—some excessively bright, many garishly oversaturated, but flowers nonetheless. The shackles of 100-nit Rec.709 are loosening, and our NLEs and displays are catching up to the wide dynamic ranges and wide color gamuts our cameras are capable of. Finally, there's something to do with all those stops of highlight headroom aside from flattening them in the grade!
Sound for Picture
Professional audio for post-production hasn't seen much in the way of game-changing advances in the past year. There are the usual iterative improvements in software and hardware, but since the "sound barrier" was surpassed with 192 kHz, 32-bit floating-point audio files, we've been well beyond the realm of human hearing for some time. Somewhat ironically, over that same time period the highly lossy mp3 format gained in popularity. That format died this year, but I digress….
That being said, there were some things to note this year. Dolby Atmos has really started to reach critical mass. At this point there are well over 600 titles mixed in the format, and the gradual implementation of the AC-4 codec will really get it into homes, cars and mobile devices. For those that don't know, Atmos is an object-based audio experience. Rather than being confined to fixed monitor-speaker channels, each sound is panned as an object in space, including overhead. This system allows the Dolby decoder to determine how many speakers are available to it and then render the mix accordingly. Atmos is created in a 7.1 surround environment, but it can fold down to stereo and mono, or up to its full complement of ceiling monitors along with a full array surrounding the viewer in professional theatre settings. At CES this year LG announced that their OLED TVs will support the technology, sending it out to a compatible audio receiver.
AC-4 can easily handle these multiple streams in a very efficient, compressed manner; it is about four times as efficient as the now-standard AC-3 codec. The other touted benefit is "personalized content delivery." That feature sounds like a bit of a nightmare for audio master delivery, as well as for the end user. The idea is that multiple streams of the mix would be available to the end user. They could then choose to "only hear the ice" from a hockey game, for instance, and remove the announcers. Or they could increase the dialog level while decreasing the music and effects. Or basically, remix the show I've delivered!
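To put that efficiency claim in rough numbers, here's a back-of-envelope sketch taking the "about four times" figure at face value; the 384 kbps AC-3 5.1 rate used below is a common broadcast figure and is my assumption, not from this discussion:

```python
# Back-of-envelope: what "about 4x as efficient as AC-3" implies.
# 384 kbps is a common AC-3 5.1 broadcast rate (an assumption here).

ac3_kbps = 384
ac4_kbps = ac3_kbps / 4   # comparable quality at roughly a quarter the rate

# Data per hour of program audio, in megabytes (kbps * seconds / 8 bits / 1000)
hour_mb_ac3 = ac3_kbps * 3600 / 8 / 1000
hour_mb_ac4 = ac4_kbps * 3600 / 8 / 1000
```

On those assumptions an hour of 5.1 drops from roughly 173 MB to about 43 MB, which is why AC-4 is attractive for mobile and OTA delivery.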
The other main item to note is Blackmagic Design's integration of Fairlight audio into DaVinci Resolve. There are still kinks to be worked out in the implementation, but BMD has done a great job over the years with its acquisitions and its further improvements to those corporate purchases. I believe that over time this will be a game changer for post-production audio. It offers many of the same workflows and features as the many DAWs already in the marketplace. The main determining factor will be BMD's own advancements to the audio module. They have proved time and time again not only to improve the products they acquire, but also to listen to users' ideas, thoughts and comments and make adjustments for the better. It comes out of the box with many things that are currently provided only by third parties on most other DAWs. BMD also has an I/O box as well as an accelerator card in the works. Fairlight has a long history in audio, and as a component of DaVinci Resolve, a program that is also steadily improving its picture-editing capabilities, the future shines bright for this well-received addition.
What I most look forward to in 2018 is a course correction in computing hardware. Ever since Apple released the aptly dubbed "trash can" Mac Pro, it's been an uphill climb of ports, drivers and additional boxes to support the extra hardware required. Some of us long for the simplicity of the old 9600 with its six PCI slots. I love the increases in computing power today, of course, but it would be great if there were a more streamlined approach than the ugly cable mess sprouting from the Mac Pro. It might be a well-designed box from a "design concept" point of view; however, needing external drives and having no PCI slots leaves its "elegance" lacking in the "real world" department. They say that professional users like us are in for a great re-imagining of the Mac Pro. Fingers crossed. (As usual…)
At our suggestion, in 2017, more audio/video apps for mobile devices (Android and iOS) began to support 48 kHz audio sampling and to make it the default setting. I hope that the few holdouts will embrace this in 2018.
In 2017, more manufacturers started to properly label non-integer framerates in camera menus, even in consumer models. I hope that the few holdouts will embrace this in 2018.
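Why that labeling matters: NTSC-derived "non-integer" rates are really 1001-denominator fractions, and a camera that labels 29.97 as "30" implies a timecode drift of several seconds per hour. A quick worked check:

```python
# Worked example: timecode drift if "29.97 fps" is treated as 30 fps.
from fractions import Fraction

ntsc_30 = Fraction(30000, 1001)     # the exact value behind "29.97"

frames_true = 3600 * ntsc_30        # frames in one hour of wall-clock time
frames_if_30 = 3600 * 30            # what a naive "30 fps" label implies

drift_frames = frames_if_30 - frames_true
drift_seconds = float(drift_frames / 30)   # drift per hour, in seconds
```

The drift works out to about 3.6 seconds per hour, which is exactly the discrepancy that drop-frame timecode exists to paper over, and why an honest "29.97" (or "30000/1001") in the menu beats a rounded "30."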
In 2017, Apple started to offer automatic matching of framerates and type of HDR (or lack thereof) in its AppleTV 4K. I hope that in 2018, Apple will finally start supporting non-integer framerates in macOS, for hardware outputs and internal displays. I also really hope that in 2018, Apple will realize its terrible mistake and go back to offering the option of matte screens in its products, and that it will substantially update the Mac Mini.
In 2017, more digital microphones (with digital output) and audio interfaces focused on mobile devices (Android and iOS) were delivered. Some of them specifically reacted to the lack of analog headset jacks in many of the latest mobile devices by adding onboard monitoring, some of it latency-free. In 2018, more of the holdouts will likely update their audio products to include built-in analog monitoring.
In 2017, HP announced the first matte touchscreen color display. I hope that in 2018, HP and other manufacturers follow the same path with more matte touchscreen displays.
In 2018, more of the video sharing platforms will likely support H.265/HEVC for upload. More online services for remote interviews came out and were improved in 2017, and I expect that to expand in 2018.
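For those H.265/HEVC uploads, the encode itself is a one-liner. A hedged sketch of a command builder (file names and the CRF value are my assumptions; libx265 is ffmpeg's HEVC encoder):

```python
# Sketch: build an ffmpeg command for an HEVC/H.265 upload encode.
# Assumes ffmpeg built with libx265; file names and CRF are hypothetical.

def hevc_upload_command(src, dst, crf=23):
    """Return an ffmpeg argument list for an H.265 delivery encode."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        "-crf", str(crf),           # quality target; lower = better/bigger
        "-preset", "medium",        # speed/efficiency trade-off
        "-tag:v", "hvc1",           # helps Apple players recognize HEVC in MP4
        "-c:a", "aac", "-b:a", "192k",
        dst,
    ]

cmd = hevc_upload_command("interview_master.mov", "interview_upload.mp4")
```

At similar quality, HEVC files typically come in well under the size of their H.264 equivalents, which is the whole appeal for upload and remote-interview workflows.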
In 2017, a large number of Chromebooks with matte or glossy screens started to offer official support for Android apps. Some of those apps are already optimized for a larger, resizable window on Chromebooks. In 2018, many more Android apps will certainly be optimized the same way.