With the decade winding down, a few of our writers considered doing a post on what they thought were the biggest developments of the decade, but PVC Roundtable Discussions have always been a better way to explore these topics.
Below is how that discussion took place in email-thread form.
Update 1/8/20: Additional insight about how things have developed with lighting and monitors.
Looking back over the last ten years in media post-production, the thing that sticks out (and is always a topic of discussion) for many of us dedicated editors is the launch of Final Cut Pro X and the death of Final Cut Pro Classic. That sent shockwaves through post-production and changed the workflow for many. But that isn’t my choice. My pick for the biggest development of the past decade is Blackmagic Design’s development of DaVinci Resolve. The purchase of da Vinci Systems was nearly 10 years ago, and while it was a shock to see Resolve reduced from a $100,000 high-end color grading system to a free piece of software, the big news has been the development of Resolve into an industry-standard piece of software that can quite literally do it all. Let’s think about what has happened to Resolve since BMD’s purchase: it came to the Mac, went free, added better conforming tools, became an online editor, added titling and generators and effects, integrated Fusion for visual effects, integrated Fairlight for audio mixing, added the Cut page for a different style of editing, and now supports a dedicated editing keyboard for faster editing.
That list neglects to mention what I think is one of the most important things that BMD has done with Resolve: developed it with a breakneck speed that should put the other NLE makers to shame. And they’ve done this rapid Resolve development all the while continuing to build the usual video infrastructure hardware that made BMD famous as well as the tiny task of developing and introducing what seems like a couple of cameras a year.
That development of Resolve is one thing that makes it a very intriguing prospect for the editor. While the feature set is second to none among its competitors these days, you never really know what might pop up in a future release. And while it’s the signature features that always make the headlines (Cut page, Boring Detector!), each Resolve update is filled with tons of little enhancements and bug fixes: enough bullet points that it takes some other NLEs two or three (or more) releases to equal one release of Resolve. I always look through those release notes because there are almost always some small additions and enhancements that will make a world of difference in the day-to-day editor’s life.
The most important part of this development cycle is that the product team for Resolve seems to be listening to the users; they’re able to ship the little enhancements that matter day to day while also delivering big-ticket items like the Cut page. That makes it really feel like the Resolve team is in it for the long haul and won’t rest until Resolve is the only post-production tool left standing.
It’s not all wine and roses, though. I’ve talked to many editors who have tried Resolve for creative, craft editorial and feel it still isn’t ready for season-long broadcast work or the 1,000-hour documentary. Others think it’s too much, and even though you can hide the many different pages outside of the Resolve Edit page, it’s still a tough hurdle to get truly proficient in Resolve. But it’s getting there, as many small shops have turned over all of their post-production work to Resolve, from ingest to edit to effects, mix, and color. Whatever your thoughts on Resolve, it’s hard to ignore the fact that Blackmagic Design has dedicated huge resources to it, and it doesn’t look like that will stop anytime soon.
Kevin P McAuliffe
Video Post Blog
Looking back over the last decade, there were a ton of different effects companies out there: Boris FX, GenArts, Imagineer Systems, Red Giant and too many more to name here. What we’ve seen, starting with Boris FX’s acquisition of Imagineer Systems (and its flagship application Mocha Pro), has been the consolidation of some of the bigger effects companies into one. Boris FX’s subsequent acquisition of GenArts gave them probably the biggest lineup of effects under one roof, and then, right at the end of the decade, Maxon stepped forward and purchased Red Giant, developer of many product lines from Magic Bullet to Trapcode to Universe. This one, though, really strikes me as odd. Boris FX’s purchase of Imagineer Systems makes perfect sense: it gives Boris FX the ability to integrate Mocha tracking technology into almost all the effects they release for almost every application they support. Their purchase of GenArts, again, makes sense, because GenArts would have been their direct rival when it comes to large-scale effects packages for applications like After Effects, but more so for editing applications like Media Composer, Vegas and others. So what is Maxon up to with Red Giant?
Logically, people might think that Maxon wants to integrate Red Giant products into Cinema 4D, but that almost seems like a backwards step for them. They already have complete particle systems inside of C4D, as well as a lot of other functionality that Red Giant already offers. I think that Maxon’s not done acquiring third-party After Effects companies, and Red Giant was just the first step. There are other great effects companies out there like RE:Vision Effects, NewBlueFX, FxFactory and even Video Copilot that would fit very well into the new Maxon family.
The Boris FX acquisitions were surprising because they took place so quickly, both Imagineer and GenArts in about two years, and the Maxon one is surprising because it came completely out of left field, with Maxon stepping into a space that many people might think they wouldn’t be involved in (2D motion graphics). But having Red Giant under their belt gives them a closer relationship with Adobe, which Red Giant has worked with for far longer than Maxon has. I’m thinking that we’re going to see more of these mergers, coming sooner than we think!
Sound for Picture
There have been many incremental upgrades to audio post technology and workflows in sound for film over the past decade that, taken together, have really pushed the boundaries of what is possible with audio processing, mixing and outputting. We’ve seen a swift move from hardware devices to software devices with the advancements of computing power. There has been an expansion of deeper sets of impulse responses for convolution reverbs, the chaining of various effects into single software tools, and a very significant move away from physical media to all-digital. I haven’t printed stems to a deck since HDCAM SR, which itself changed audio export stem outs from a then-typical 4-track split to the 12 tracks available on HDCAM SR. Of course now, freed from those physical constraints, we output every single format and split that the distributors and studios can think of!
However, in the post audio workflow, I would say the one evolving development that stands out for my daily work, having increased efficiency, quality, ease of use and functionality over the past decade, is the ever-expanding tool-set of iZotope’s RX line for audio restoration. Although RX 2 existed already by 2010, the engineers over there have worked overtime this past decade to turn it into a software-only restoration powerhouse. Originally an inexpensive option for software-based audio repair, its expanded modules are in daily use in most audio post bays. Each iteration of RX has provided advancement and something of significant value. That is not true of most software upgrades! For example, the RX 4 upgrade added Ambience Match, a completely new tool for creating the always-absent room tone from location. This iteration also included EQ Match, a tool that analyzes recordings and suggests EQ curves to better match them, and added the Dialogue Denoiser module. Each new version retains the prior tools and just adds more. Also, a special note of thanks for their easily identifiable module-name-to-module-function nomenclature!
RX 5 introduced Loudness Control, which re-levels mixes to user-definable loudness specs. The advanced version of RX 5 also introduced us to the standalone Advanced Audio Editor for more detailed processing, with round-trip connect and monitor integration with the host program. The advanced version also included my favorite metering system, Insight, a tool I recommend to all audio post mixers. I can’t say enough good things about it, particularly since many bays these days don’t come with the large-format consoles and rows of blinking lights that old mixers like myself cut our teeth on. It also included a new Corrective EQ module as well as a De-plosive tool for all the lousy VO recordings that come through.
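The core arithmetic behind that kind of re-leveling is simple enough to sketch. This is only an illustration of the idea, not iZotope's actual processing (a real Loudness Control also handles gated integrated-loudness measurement and true-peak limiting per the broadcast specs); the function names here are mine:

```python
# Minimal sketch of loudness re-leveling: measure a mix's integrated
# loudness, then apply a single static gain to hit a delivery spec.
# Illustrative only -- real tools measure loudness per ITU-R BS.1770
# and also limit true peaks.

def releveling_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain (in dB) needed to move a mix from its measured loudness to target."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Scale raw samples by a dB gain (linear factor = 10^(dB/20))."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

# A mix measured at -18 LUFS, delivered to a -23 LUFS broadcast spec:
gain = releveling_gain_db(-18.0, -23.0)   # -5 dB of attenuation
quiet = apply_gain([0.5, -0.25], gain)
```

Because loudness and gain are both logarithmic, a single subtraction gives the offset; the only real work is the (standardized) loudness measurement itself.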
RX 6 Advanced introduced Mouth De-click, Dialogue Isolate and De-wind. As with each iteration of the software, many of the advanced options from prior versions were folded into the standard (less expensive) version on upgrade. RX 7 brought a lot of these processes to the AudioSuite tools for Pro Tools, allowing faster access to the frequently used tools and lessening the need to round-trip out to the editor. With all of this stunning technology building on itself, this year they introduced Dialogue Match, a multi-module software device that does what the name says. iZotope acquired Exponential Audio’s reverbs this year and has used some of that tech in Dialogue Match. It not only analyzes the EQ curve of various recordings but also analyzes the reverb quality of the recording (!!!) and applies suggested EQ and reverb for the match. I can’t think of any other device that does this task.
For those of us processing and manipulating audio files and mixes for many varied uses, I cannot think of anything that has expanded, improved and added daily value to my work in sound for film. Gone are the days of zooming into sample view to pen tool out a click, search through handles to create a new room tone, or audition a thousand reverb types to better match for ADR. iZotope RX, a force for audio good, like all things, can be misused – perfectly fine files can be over processed, mangled and strangled by a heavy handed engineer. However, in the right hands it can solve a myriad of audio woes, often with a sweep of a slider and a click of a button.
Side note: jumping off from Scott’s comments regarding BMD and Resolve, the Fairlight integration in Resolve has the potential to alter the audio post landscape. As noted, BMD has been a powerhouse not only in acquiring and implementing other technologies but in doing it at a breakneck pace and with a deep concern for end users’ experiences. Fairlight has an extensive built-in tool-set that by itself can handle many complex tasks. It has world-class proprietary processing, great metering including loudness-standards metering, important multi-track support and, need I say, perfect integration with video. Those of us in the Avid universe still get lots of video errors, even in the latest version, which always struck me as odd since they make the famed Media Composer; you’d think that video would be rock solid. Pro Tools is an industry-standard program and I make my living with it, but it is based on an old model of proprietary and third-party add-ons, all purchased separately, and that can really add up financially.
Fairlight has been around a long time, so, in and of itself, it is a mature, serious program. There is still a ways to go for better implementation within Resolve, and I think there are turnover and workflow systems that need to be identified and streamlined. For instance, as a post audio mixer, I don’t want a session that has a video edit, color and VFX and all the files needed for that; I want what I always want: a single video file that lives in its own lane, leaving only the audio for me. All of this will be worked out in short order once turnovers become more common and these things get ironed out. BMD has proven time and again that they listen to the end user and make the necessary changes. I have sound designed and mixed feature films that were created completely within Resolve, but they were exported for sound editing and mixing in Pro Tools. It will be wonderful to stay in there to do the audio too, and that time is fast approaching.
Resolve is already well on its way to fulfilling the promise of Apple’s Final Cut Studio. Apple broke ground with its suite of programs for editing, exporting, color and so on, and at the time round-tripping between the programs was the way to go. Resolve is leaps ahead of that, with a simple tab to get to each process, be it edit, color, VFX, audio or export. I think Resolve has the potential to become the singular tool-set for post in very short order. Besides being a powerhouse of a stand-alone program, there is one aspect that should strike fear into the other companies in the post-production space: it’s free.
The single natural disaster that changed Film and TV production forever.
The past decade has been tumultuous, to say the least. The first decade of the new century left us beaten and bewildered, opening with 9/11 and closing with one of the first advancements of the digital age: the United States’ transition to digital television, completed, for the most part, in June of 2009. While the court battles and lawsuits would linger for a few more years, the vast majority of people in the US were now receiving their TV via digital broadcast.
RED had made a good deal of noise with the launch of the RED One, but it was ARRI’s Alexa launch in 2010 that would fundamentally change our world, as its simple ProRes-based workflow made it the darling of TV and film production. Yet tape still ruled: no matter what the camera, your content always ended up on tape, with nearly all films and television shows delivered on either Sony’s HDCAM SR or Panasonic’s D5 tape stocks. HD was still rather new at the time and the promise of digital had been lingering on the sidelines for years, while the post community was still living with one foot in its analog past, still capturing and outputting our content on videotape. Yet one seemingly uneventful day in March 2011 changed our future in ways few today even realize, fundamentally thrusting our community forward faster than anything previously encountered.
March 11th, 2011, is the date of the Tohoku earthquake off the coast of Japan. While the Fukushima Dai-ichi nuclear disaster is the far more widely known aspect of that natural disaster, the tsunami that disabled those safety systems also flooded the Fukushima prefecture and the surrounding areas, resulting in the loss of nearly a million homes and buildings and the deaths of almost 16,000 people. It was that tsunami that changed our collective future: 3 of the 4 existing facilities manufacturing professional videotape were along the coast near Sendai and were destroyed by the wave itself or the following storm surge. The remaining facility was already in the process of being phased out, woefully incapable of fulfilling the worldwide demand for professional videotape that had just been thrust upon it.
At the start of this decade, whether it was a miniDV tape from your camera, the SR master of your show or the DVD of your wedding, we delivered physical media; by the end of it, tapeless production is all most people know. As tape-based cameras disappeared, the digital camera market exploded, and the DSLR marketplace dwindled as more and more DPs embraced the larger-sensor imagery made available in the latest digital cinema-style cameras. It is amazing that a single natural disaster accelerated the transition to digital workflows faster than any camera or NLE manufacturer could in such a short period of time. The freedom of a digital-only workflow caused an explosion in the marketplace: NLEs grew into suites that now include advanced audio tools and color finishing, and cameras got smaller while their sensors grew ever larger, all while our recording media got smaller (and faster).
I have to agree with Scott: the death of FCP Classic — and on the photo side, Aperture — are prime examples of “own goals” on Apple’s part: best-of-breed products dropped unceremoniously with no clear successor, leaving users adrift. I’ve substituted FCPX for FCP Classic, and Capture One for Aperture, but not without friction.
FCPX is brilliant, but its Apple-knows-best approach to media management is often unwieldy and always multi-user-unfriendly. If I spend the up-front time to ingest everything and keyword my media, there’s no faster way to edit. But if I just want to pop into the NLE to quickly look at something on disk, maybe throw a look on it, and export a selects reel, fuggedaboutit! Thank goodness for Screen, especially now that QuickTime Player 7 is pushing up the daisies.
Capture One does almost everything Aperture did, and more, only with less speed, elegance and robustness. It’s almost as tedious and frustrating as using an Adobe product (grin).
Again Scott has it sussed: Blackmagic Design’s ongoing development of Resolve, starting with an elegant and expensive color-grading system and turning it into an editing / grading / VFX / sweetening powerhouse, with 95% of its functionality available for-freakin’-free, is earthshaking.
What about image acquisition? Gary points to the introduction of the ARRI Alexa in 2010. Ten years later, what’s the hot camera? Yep, the ARRI Alexa and its variants. Some things haven’t changed. Yes, we’ve had the C700, the VENICE, the VariCams, various REDs, and more — all excellent cameras in their own rights — but nobody else has quite managed to bottle the magic the way ARRI has, combining robust, tolerant, and elegant color; superb highlight handling; low noise; and no-nonsense usability. Alexas just work.
Having said that, it’s hard to find a bad camera these days; anything you’re likely to rent or buy today is capable of making a technically excellent image, even in less than ideal situations. That certainly wasn’t the case ten years ago.
We have seen fundamental advances elsewhere this decade that change how we work. The two that stand out to me are stabilization and focus assist.
Steadicams are great, but flying one is a full-time job. I find that if I’ve been away from it for a while, I need a week of practice to get recurrent with a Steadicam, else my camera’s aim wanders and my horizons go woozily dutch. Freefly Systems introduced the MōVI M10 at NAB 2013, and camera stabilization hasn’t been the same since. A good gimbal watches your horizons and maintains your aim point, letting you focus on movement and composition instead of the pesky mechanics of stabilizing the camera. Yes, there’s a learning curve with a gimbal, but it’s nowhere near as daunting as flying a Steadicam; think of it as “stabilization for the rest of us”. And yes, you can even get Steadicams with gimbal stabilization added.
Decent gimbals capable of steadying a smartphone start around $100; those for mirrorless cameras and small camcorders go for $300 and up. Combined with in-camera sensor-shift stabilization and in-lens optical stabilization, there’s no longer any excuse for sloppy, low-rent shakycam. I love tripods, dollies, and Steadicams, but gimbals are liberating.
A decade ago, AF was used only by the most adventurous (and ridicule-tolerant) run ’n’ gun shooters. Nowadays it’s a tool for serious filmmaking.
On-sensor phase-detection autofocus (PDAF) lets a camera determine not only that a focus point is out of focus but how far out of focus it is, which direction to adjust the lens, and how much to adjust it. Unlike contrast-detection AF, PDAF doesn’t hunt: it pretty much focuses the lens to the exact point on the first try (CDAF requires overshoot and hunting to determine whether something is or isn’t in focus). PDAF-capable cameras, equipped with compatible lenses, let the focus puller tap an onscreen target to make the camera focus on it: no overshoots, no buzzing. The same way gimbals automate the mechanics of stabilization, a good PDAF system automates the mechanics of focus: the focus puller is freed to worry about when and where, and not fret about which way? or how far?
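The difference is easy to see in a toy simulation. Below, a hypothetical contrast-detection loop has to hill-climb (step, compare, overshoot, reverse) while the phase-detection version measures signed defocus and jumps straight to the target; this is purely illustrative, not any camera's actual firmware:

```python
# Toy comparison of CDAF hunting vs. PDAF one-shot focusing.
# CDAF can only ask "sharper or blurrier than before?", so it must
# overshoot and reverse; PDAF knows direction AND distance up front.

def sharpness(lens_pos, in_focus_at=50.0):
    """Toy contrast metric: peaks when the lens is at the focus point."""
    return -abs(lens_pos - in_focus_at)

def cdaf(lens_pos, step=8.0, in_focus_at=50.0):
    """Hill-climb: step while contrast improves; halve and reverse on overshoot."""
    moves = 0
    while abs(step) > 0.5:
        if sharpness(lens_pos + step, in_focus_at) > sharpness(lens_pos, in_focus_at):
            lens_pos += step          # contrast improved: keep going
        else:
            step = -step / 2          # got blurrier: reverse and refine
        moves += 1
    return lens_pos, moves

def pdaf(lens_pos, in_focus_at=50.0):
    """Phase detect: signed distance-to-focus is measured, so one move suffices."""
    defocus = in_focus_at - lens_pos  # sign gives direction, magnitude the amount
    return lens_pos + defocus, 1

cdaf_pos, cdaf_moves = cdaf(10.0)   # hunts: many small moves to settle
pdaf_pos, pdaf_moves = pdaf(10.0)   # lands in a single move
```

The hunting is inherent to contrast detection, not a tuning flaw: without a signed error measurement, overshooting is the only way to know the peak has been passed.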
Canon’s recent Cinema EOS cameras are the leaders here, but newer mirrorless cams, camcorders, and DSLRs from Canon, Sony, Fujifilm, and Nikon also offer usable, production-worthy PDAF systems. These cameras are useful for automated one-shot focus pulls and are increasingly capable of sophisticated focus-point tracking for continuous autofocus. Many offer face detection and eye tracking; some let you specify which eye to focus on. Pet Mode and Animal Eye AF extend this technology to the dominant art form of our time: the Internet Cat Video.
Existing PDAF systems require the use of electronic-focus lenses; traditional cine glass can’t be used. No worries: Howard Preston’s 2014 Light Ranger 2 includes a superb AF capability alongside what’s arguably the best manual focus assist system available: a graphic overlay of focus distances and depth of field, giving a focus puller the same essential information a PDAF system uses internally. No, it doesn’t track faces, and it only works in a horizontal plane, but even so, it’s a game changer.
There’s so much to say about the decade’s developments in AI for photography, game engines, and VR in cinema’s virtual production.
DaVinci Resolve has, no doubt, changed the landscape, as Scott writes. But my notes on the decade we leave behind center on photography, and how Adobe’s adoption of a subscription model – the Creative Cloud first appeared in 2011 – created the conditions for an explosion of alternatives to both Lightroom and Photoshop, all offering new ways to edit photographs, sometimes beyond what used to be called photography a few decades ago.
I am not sure if it’s the biggest development in photography, or even if, in the end, it can be considered a positive thing, but the introduction of Artificial Intelligence – or machine learning or smart algorithms or whatever you want to call it – is one of the game changers when it comes to photography. We’ve come a long way since Minolta introduced the first “fuzzy logic” systems in its Maxxum xi SLR in the early 90s; modern AI can do many things, from automatically creating tags, captions and keywords for photographs (and even being trained to recognize a new concept within seconds) to fueling quick-mask tools that make it easier to use masks in photography.
At the other extreme, AI is also being used to make it easier to edit photographs beyond what is usually considered photography, or at least to create doubts about whether a photograph is a photograph. Some of the new software available is marketed with the clear suggestion that if photographers are not happy with the sky in their holiday photos, they can simply remove it and drop in another one with a single click. While the idea may sound exciting from an editorial perspective, when a subject with a specific sky is needed and a deadline has to be met, the suggestion that it is OK to tamper with one’s holiday photos leaves a bad taste, and will contribute to family albums full of fake photos instead of real memories.
Artificial Intelligence in photography is expanding beyond the desktop editing stage, though. Adobe Photoshop Camera, a smartphone app for iOS and Android announced during Adobe MAX, uses Adobe Sensei, an AI and machine learning platform, to identify what the smartphone camera sees in real time and immediately apply Photoshop effects on the fly, in camera. Adobe uses Sensei in its editing software too, to alter aspects of photographs, from a frown turned into a smile to eyes enlarged or reduced with a single click. Whatever the future, photography will never be the same, and telling a real photograph from a fake becomes more difficult every day.
Two other areas where technology is evolving fast are Virtual Reality and the use of game engines in filmmaking. Only a decade ago, filmmakers were not familiar with names like Unreal Engine or Unity 3D – unless they were gamers – and now filmmakers like Jon Favreau wear a Virtual Reality headset to direct a movie, The Lion King, inside a game engine. Favreau is setting a new trend.
Virtual Reality, first associated with “experiences” and video games, has in recent years been a constant presence at film festivals like Sundance, Tribeca and SXSW, and led to the creation of projects like Oculus Story Studio, which introduced the concept of VR filmmaking. VR is the glue that fuels cinema’s Virtual Production, and to better understand its potential, there’s nothing better than starting 2020 by reading The Virtual Production Field Guide v1.2, written by Noah Kadner and published in PDF format by Epic Games, the company behind Unreal Engine.
The Virtual Production Field Guide v1.2 is a 98-page eBook designed for anyone interested in, or already producing, projects using virtual production (VP) techniques. The guide was written with the help of a vast team at Epic Games, creators of Unreal Engine, and includes, besides essential information about techniques and solutions for virtual production, interviews with filmmakers who share their insights about it.
While my first thoughts were of hardware – a decade of GPU advances can’t be overlooked – I’ve eventually decided that on-demand TV has seen the most development over the last decade. Sure, there have been various forms of pay TV for as long as there’s been TV, but the last decade has seen subscription and on-demand TV become the de facto standard. From a personal perspective, I can’t recall the last time I watched free-to-air TV, and its increasing proliferation of non-scripted shows (i.e. reality TV) suggests that it’s well into some sort of slow death spiral.
A comparison can be made to cell phones – they’ve been around for longer than the average person may think, and initially they seemed like a privilege for the wealthy. Gradually they became more mainstream, and now we’re at the point where it’s normal for people to rely solely on their cellphone and not have a landline at all. Free-to-air TV is at the same point. I can easily live without it, relying solely on the internet for delivery and on-demand subscription services for content. Netflix is not seen as a luxury.
Ten years ago “Game of Thrones” hadn’t aired yet. It’s the first example I can think of where people subscribed to a pay service for a single TV show. It’s been followed by “Westworld”, “The Grand Tour” and now “The Mandalorian” as cornerstone content funded by a subscription, on-demand TV service. The sheer quality and quantity of shows across all platforms is staggering.
The last decade isn’t just about the rise of subscription TV. It’s also about the demise of free-to-air TV. Perhaps not in the financial sense, but definitely in the cultural sense. The current generation of children will grow up without any notion of “time slots” and un-skippable ad breaks, and possibly without the notion of family TV shows and “prime time”. Free-to-air TV has become a quagmire of news services and reality TV shows, with the cultural significance of TV ads withering accordingly. I can’t speak for the whole world, but in my antipodean corner the budgets for TV commercials have dropped, along with their prestige.
The older generation have lived through the “golden age of television” – the age where people watched TV because there wasn’t anything else to do. The internet began a slow generational shift away from TV as the primary source of entertainment, and the last decade has clearly proved that the “golden age of television” is over. This is not a bad thing. While I don’t think the current number of subscription services is sustainable, I’m personally glad that broadcast TV no longer has an iron grip on popular culture.
Hopefully, the next decade will see a huge proliferation of diverse, creative content appealing to much wider audience than broadcast TV ever did.
This is the golden age of television, more than the 1950s, and the incredible amount of content produced must have massive implications for our work lives. I’m kind of out of the loop on much of this stuff since I moved to the boondocks and edit remotely, but the camera and lighting gear coming out is exciting. I didn’t even know object tracking on a gimbal was a real thing. It’s hard from here to gauge what all the activity means for labor, especially if this is really just a bubble.
Computational photography has really come on strong. I haven’t been hands-on in production for a few years, but Apple and Google phones have helped me out on various projects. It’s amazing what smartphone comping can do in under a second, and I can take the yellow tint out quickly. I’ve seen plenty of video files that don’t look as good as footage from the iPhone 11 Pro Max.
Likewise, drones have added a bit of spice for almost nothing. I’m happy to have drone footage from DJI Mavics, but I defer on quality concerns to others who might use the footage for more than fleeting glimpses.
The Macintosh and Apple’s video-related products are flourishing again this fall, which is a relief for those of us who were considering jumping ship over the laptops alone.
There still needs to be progress in making content easier to add, find, and purchase on on-demand platforms. And the unwary need to look closely at what the platforms offer for browser versus Roku viewing, for example. In one case a client began using a platform that doesn’t use a streaming server, and the most valuable stuff now ends up on YouTube anyway, which is far too time-consuming to have to police.
It seems like Adobe’s development progress has really switched gears. Several years ago, Stu Maschwitz made a convincing argument for a melding of After Effects and Premiere, but it doesn’t seem to be on the table. Apart from a few notable additions, After Effects sometimes seems like a front-end for a scripting engine, but thanks to aescripts + aeplugins that does mean there are a lot of little developments to learn. As Kevin noted, consolidation has kept the plug-in developers busy, and the merger of Maxon/Cinema 4D and Red Giant is curious.
A bright point of the last couple of years has been The Year In Review podcast from the School of Motion. The 2019 edition, which comes in audio and text with hyperlinks, is hosted by Joey Korenman, EJ Hassenfratz, and Ryan Summers who discuss the highlights of 2019 and what to expect in 2020.
Quasar Science lights have been big sellers for Filmtools this year, and it’s easy to see why. They’ve really changed the way people think about lighting with their portable tubes. People are asking for and expecting them on set because they’re so light, so easy to place and so easy to hide. The quality is great, and the fact that they offer RGBW is a huge bonus. They’ve had a huge impact on the industry because their lights have become so common.
In addition to Quasar, there are a couple of other brands that have become generalized trademarks. Obviously ARRI’s SkyPanels are the panel light of choice, with the S-60 becoming the de facto light on set. Aputure has risen to prominence with their Light Storm series, and the 300d has become one of the most popular lights for YouTubers, which helped bring the Aputure brand onto bigger productions as those startup filmmakers’ careers took off. Aputure is now producing lights like the MC, which is in such high demand that no one can keep it on the shelf, and RGBW lights like the AL-RC and WRGB 300 are poised to go against Quasar’s household globes and the ARRI SkyPanel line respectively.
It’s been a big decade for lighting because it’s seen the evolution of LEDs from being green, expensive and low-powered to what we have now, and that’s opened up all kinds of opportunities, from those YouTubers all the way up the production landscape.
The next step, and it’s something we’re already seeing, is all these lights transitioning to RGBW. However, we’ll be going past that with RGBA. There’s lots to say about the difference between RGBW and RGBA, but the big distinction is that RGBA swaps the white emitter for an amber one, which makes it easier to dial in warm, tungsten-like color temperatures. All of that is indicative of the insane amount of color-tunable options that we might see take over at this year’s NAB.
While our industry flirted with 3-D for a bit, the biggest change in monitors this past decade has definitely focused on the 5”-7” range. While larger monitors have increased their resolution and brightness, so have the smaller, more portable monitors used by almost everyone these days.
Smaller cameras like DSLRs forced our hands: we needed to see what the camera was pointed at, and 1-3 person crews weren’t bringing reference monitors with them to gigs. Manufacturers started off by adding what are now standard features: focus assist (peaking), scale functions, waveform and vectorscopes, and framing markers. The biggest winner for the first half of the decade had to be TVLogic with their 5.6” on-camera monitor. It had the ultimate feature, SDI loop-through, allowing your DSLR’s consumer HDMI output to be converted and looped through to an SDI output for viewing elsewhere. It’s a workhorse monitor that gets the job done, even today.
Once that important feature was figured out, the monitor game became like every other category: trying to pack in the most features and make the image look the best. Whoever is brightest, clearest, least expensive, and most color-accurate wins.
SmallHD used their smartphone-like UI and social media marketing to appeal to end-users. They even teamed up with eventual sister company Teradek to integrate wireless video monitoring. Atomos added a recording unit to the back of theirs, giving users a multi-tasking monitor. Of course, Blackmagic joined the party as well.
Monitors are funny things, though. They seem to be very personal: what looks good to one person’s eyes looks different to another’s. There is no one best monitor for everyone, much like the cameras on the market today. We’ve since come full circle, thanks to even greater cameras with proper features and connections. Actual crews and large monitors still win out on set. Flanders Scientific monitors are still the Cadillacs. JVC, Panasonic, and Sony are still providing excellent reference monitors. SmallHD, Atomos, and others are still producing for the masses.