
The Future of Hard Drives – A PVC Roundtable Discussion


Whether we’re talking about archive, workflow, storage or the post-production process itself, more and more is headed to the cloud. That means service providers are now focused on cloud storage and internet speeds rather than simply on providing access to the service in the first place. It also means the next step is figuring out how to move most, if not all, content to the digital realm in the sky instead of keeping it on a drive in a drawer or closet somewhere.

However, this process isn’t going to happen overnight. The multitude of review and approval options for video pros is a perfect illustration of how the cloud is currently being used by productions of all sizes, but the cost of managing finished assets in the cloud in real time is still significant. Utilizing more traditional tools and services still makes sense today, but those prices and capabilities continue to change and evolve. What it means to keep content on a hard drive will undoubtedly be very different in the near future.

In light of such developments, the transition of content to the digital realm is something we need to consider. Is this transition wholly dependent on the cost of Internet bandwidth? Should we view it as a foregone conclusion, or will other factors keep parts of the post-production process independent of the cloud? Are security issues as problematic as the problem of siloed data on hard drives? Will this transition deliver efficiency gains that can be quantified in any meaningful way?

 


Adam Wilt
Camera Log

It’s not so much a question of “the future of hard drives” as of “the future of local storage”.

Certainly we’ve seen the push in that direction in general business applications, whether it’s Chromebooks and Google Docs or server virtualization via AWS and Docker. And there have been high-bandwidth content sharing and collaborative editing demonstrations via LAMBDANET and CineGrid for some time, as well as evolving commercial offerings from the likes of Sohonet for the past couple of decades (I got a LAMBDANET demo about ten years ago, with an FCP Classic system in San Francisco cutting two streams of full-res HD sitting on a system in Japan. At that time, there were fewer than ten nodes on LAMBDANET, and you’d have to telephone folks in distant cities to patch cables and set up a connection!).

Even so, there are a few hiccups in this brave new world.

Security is a Big Deal in the content industry. The question is, do you trust your crown jewels being stored off-premises? Yes, rushes and dailies and approval comps flit back and forth through the ether, and content distributors increasingly demand digital delivery of programs over a wire or satcom link, not as a tape or other physical medium. But “data in flight” are different from the storage of the crown jewels. It’s one thing to accept that on-location production elements need to be conveyed—by whatever means—back to the studio or network operation center; that risk is an unavoidable part of the game. But your in-process and finished product, in all its full-res, value-added glory: is that something you want parked offsite for the long term, paying for the bandwidth to view, review, and edit it remotely, with all the potential security weaknesses inherent in that arrangement?

The counter-argument is represented by the Norks’ hack of Sony a few years back: even your own silos may not be safe. Yet cloud storage simply multiplies the attack surfaces: not only does your own internal plant need to be secured, but so do the offsite provider and all the connections between the two. I’m not sure the bonding and insurance companies will be totally down with that any time soon.

Another counter-argument may be data redundancy and disaster recovery. A decade ago one large broadcaster I was involved with was developing a plan to replicate all their media across their three main bases of operation, thousands of miles apart, with replication essentially happening in near-realtime (I’m assuming this replication has long since been put into place, and is now old news). Yet here again, the replication was between company-owned silos, with no outside storage involved. True, many companies contract with the likes of Iron Mountain, but that’s more for “cold storage”, like sending your film protection masters down an unused salt mine; it’s not something viable for “hot storage” like the assets for a production in process.

There’s also bandwidth and latency. The canonical station wagon full of tapes (or, more up-to-date, the SUV full of SD cards) may have impressive bandwidth, but the days-long latency can be an issue if you’re trying to decide on an edit point! Latency is less of an issue for high-speed ‘nets, but it’s not inconsiderable, even on local ‘nets, never mind a cloud provider many, many hops away.
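
To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch; the payload size, drive time, and link speed are illustrative assumptions, not figures from any real production.

```python
# Back-of-the-envelope comparison of "sneakernet" versus a fast internet link.
# Every capacity, speed, and distance here is an illustrative assumption.

payload_bits = 500 * 1e12 * 8          # an SUV carrying 500 x 1 TB SD cards, in bits
drive_time_s = 5 * 3600                # a 5-hour drive between facilities

sneakernet_gbps = payload_bits / drive_time_s / 1e9
print(f"Sneakernet throughput: ~{sneakernet_gbps:,.0f} Gb/s")
print(f"Sneakernet latency:    ~{drive_time_s / 3600:.0f} hours before the first frame arrives")

# The same payload over an assumed symmetric 1 Gb/s internet link.
transfer_time_s = payload_bits / 1e9
print(f"1 Gb/s link: ~{transfer_time_s / 86400:.0f} days to move it all,"
      f" but the first bytes arrive in milliseconds")
```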

One mundane example: FCPX refuses to use media on a remote server connected by AFP or SMB, even if you’ve got 10GigE and fast storage: the overhead of those filing protocols kills performance. FCPX will, however, quite happily use an NFS connection—a simpler protocol with very low overhead—even over mere Gigabit links. I’ll often cut with media parked on my NAS via GigE, though the system is a bit laggy in terms of responsiveness (latency), and playing more than a stream-and-a-half in realtime without a local render can be problematic (bandwidth).
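
For a sense of scale, the arithmetic below estimates how many realtime ProRes streams fit through a Gigabit link; the codec data rates are published ballpark figures and the usable-throughput fraction is an assumption, so real-world results (like the stream-and-a-half above) will often come in lower once NAS speed, seek patterns, and other traffic are factored in.

```python
# Rough arithmetic: how many realtime ProRes streams can a Gigabit link carry?
# Codec data rates are ~29.97 fps ballpark figures; the usable-throughput
# fraction is an assumption; real NAS performance varies widely.

approx_rates_mbps = {
    "ProRes 422 LT 1080p": 102,
    "ProRes 422 1080p":    147,
    "ProRes 422 HQ 1080p": 220,
    "ProRes 422 HQ UHD":   880,
}

usable_mbps = 1000 * 0.65              # assume ~65% of GigE line rate is usable

for codec, rate in approx_rates_mbps.items():
    print(f"{codec:20s} ~{usable_mbps / rate:4.1f} realtime streams")
```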

(Live, on-the-fly encryption/decryption will help with the security nightmares, but adds latency. It’s unclear to me how that’ll play out; my gut feeling is that it’s still too much of a hurdle in the near term, but given that modern CPUs have AES encryption as part of their instruction sets, I’m not going to bet against it.)
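
For the curious, a sketch like the one below gives a machine-dependent feel for what on-the-fly AES costs today; it assumes the third-party `cryptography` package (which calls into OpenSSL and so picks up AES-NI where the CPU offers it), and the chunk size is arbitrary.

```python
# Machine-dependent measurement of AES-GCM encryption throughput, as a rough
# proxy for the cost of encrypting media traffic on the fly.
# Assumes the third-party 'cryptography' package: pip install cryptography

import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

chunk = os.urandom(64 * 1024 * 1024)       # 64 MB of stand-in media data
nonce = os.urandom(12)

start = time.perf_counter()
aead.encrypt(nonce, chunk, None)
elapsed = time.perf_counter() - start

mb = len(chunk) / (1024 * 1024)
print(f"Encrypted {mb:.0f} MB in {elapsed * 1000:.1f} ms "
      f"(~{mb / elapsed:.0f} MB/s, ~{mb * 8 / elapsed / 1000:.2f} Gb/s)")
```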

Overall, I think we’re going to see a more gradual migration to cloud-based asset storage, given the heavy-data workloads we generate—and this slowness will help prevent us from overreacting to the hype (well, a fellow can hope…). In the general business world there’s already been at least one complete “whiplash cycle” of moving everything into the cloud, then finding the arrangement unsatisfactory and yanking it all back, and then sending some of it out again; that world is starting to converge on a more moderate, hybrid model combining local storage, “on-premises cloud” (what we quaintly used to call corporate data centers before they were upgraded to be more fully buzzword-compliant), and offsite cloud providers. And yes, Sohonet and CineGrid and their ilk are pushing what we can do with remote connectivity, but at the same time we’re seeing the rise of location-based post with portable systems from folks like Bling Digital and Light Iron.

Local storage where and when it’s needed, long-distance connectivity where that makes sense: technologies and topologies will evolve in concert, filling needs as best they can. And, amusingly, it’s the move away from those hard drives we started talking about to solid-state storage that’s helping drive this (pun not intended, but I’ll take what I can get): the robustness, low power draw, and compactness of solid-state media make portable systems with their own storage more practical than ever before.

So, no: cloud doesn’t win. Local doesn’t win. We increasingly have both options, to mix ‘n’ match as needed to fit the demands of production—so we win.


Jeff Foster
The Pixel Painter

I’m currently serving as the Sr Manager of Video and Imaging Services at Bio-Rad, a San Francisco East Bay biotech manufacturer. The demand for more videos for marketing, training, sales and education has really driven the need to produce quality content from a small team with a faster turnaround. This means sharing resources within the team internally and, often, on the road. We sometimes travel internationally to shoot customer testimonials and new product discoveries at university labs. And while we may not try to upload 6 hours of 4K video through our hotel WiFi, we CAN access our secure server through VPN to send and receive project data and sensitive materials as needed, or to make a quick correction on a project while on the road so we won’t miss a deadline delivering a client proof.

But our on-site closed network was conjured up by our IT vendors, BizMacs (http://bizmacs.com), who knew our department had very little budget to work with but had also seen us struggle on large video productions, shuttling assets from local external HDs at each workstation up to our server and down again for another editor or animator to work on their part. It was very time consuming, frustrating, and full of opportunities to lose data or overwrite critical assets inadvertently. Basically ineffective – especially with the way our group works together using the Adobe CC video production workflow.

While this setup may not be optimal for the regular video production house with some $$$, the guys at BizMacs managed to make it work for us within our budget, and we can now edit 4K productions right over the network without having to “sneakernet” or shuttle project files – plus multiple editors/animators can work on the same production simultaneously. This could be a solution for smaller internal corporate video groups as well. And at least it’s scalable!

They’ve installed a 10Gb setup with two 32TB RAIDs and 2x10Gb link-aggregated connections at both ends, providing a 20Gb pipe over CAT6 cabling. So each workstation has two networks to connect to: our own internal closed system running at 20Gb and the external network that connects us to the general MarCom servers and the Internet. Here’s our basic setup:

[Setup overview: General Network, Dedicated Server, End Users]

For us, this works with our existing workstations (again, think bang for the buck!), which are latest-model, loaded 27″ iMacs with SANLink2 20Gb adapters, allowing us all to work on productions in a timely manner using the Adobe CC production workflow.
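
As a rough sanity check on what a shared 20Gb pipe buys a small team, the sketch below estimates concurrent realtime 4K playback; the codec rates are published ballpark figures, and the efficiency and streams-per-editor numbers are assumptions rather than measurements from this setup.

```python
# Back-of-the-envelope: how many editors can pull realtime 4K ProRes off a
# shared 20 Gb/s (2 x 10 GbE aggregated) pipe? Codec rates are ballpark
# figures; efficiency and streams-per-editor are assumptions.

aggregate_mbps = 20_000
usable_fraction = 0.7              # assumed share of line rate actually usable
streams_per_editor = 2             # assume each editor plays ~2 streams at once

approx_4k_rates_mbps = {
    "ProRes 422 UHD":    588,      # ~29.97 fps ballpark
    "ProRes 422 HQ UHD": 880,
}

usable_mbps = aggregate_mbps * usable_fraction
for codec, rate in approx_4k_rates_mbps.items():
    editors = usable_mbps / (rate * streams_per_editor)
    print(f"{codec:18s} ~{editors:4.1f} concurrent editors")
```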

One editor creates the project folder on our server, ingests and organizes all the media, syncs audio/video and multicam clips and creates the Premiere Pro project with all the synced assets. Another editor then starts the main editing sequence while others contribute to the project with After Effects animations and tracked elements through dynamic linking in the Premiere project. Nobody steps on anyone’s toes and there’s no more pre-rendering and re-rendering for timing shifts or client revisions. This has been a real time saver and allows the animators to work remotely via VPN on the After Effects animations – which are automatically updated in the Premiere project as soon as they hit “save” on their end.

Sure – it’s not ILM, but for a low-budget work-around we couldn’t be happier!

 


Damien Allen
Moviola

As CPUs and GPUs get faster, bandwidth becomes more and more the bottleneck to real-time processing. The cloud is great for a lot of things, but transfer speed is not likely to be one of them any time soon. So in the future that means either localized storage or virtualized computing. Let’s look at both options.

Localized storage is by far the best-suited workflow for film and video production. As we move from conventional 10-bit offline workflows to 32-bit HDR workflows, data throughput requirements are going to mushroom. I can’t see any logical scenario where it would make more sense to store data in the cloud and perform processing (playback, editing, effects) locally. Sure, you can work with proxy files, but you’re going to lose the 32-bit dynamic range in the process, which will make some editorial decisions and almost all color and effects decisions difficult.
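
To make “mushroom” concrete, here is a quick uncompressed-rate comparison; it assumes UHD RGB frames at 24 fps, and real workflows will of course compress, but the scaling ratio between bit depths is the point.

```python
# How data rates scale with bit depth: uncompressed UHD RGB at 24 fps.
# Real workflows compress heavily; the ratio between rows is the takeaway.

width, height, channels, fps = 3840, 2160, 3, 24

def gbps(bits_per_sample):
    bits_per_frame = width * height * channels * bits_per_sample
    return bits_per_frame * fps / 1e9

for label, bits in [("10-bit integer", 10),
                    ("16-bit half float", 16),
                    ("32-bit float", 32)]:
    print(f"{label:18s} ~{gbps(bits):5.1f} Gb/s uncompressed")
```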

The other solution is virtualization or remote computing: put everything in the cloud (the data and the computer working on it) and then stream the desktop to a remote “dumb” client machine. This has the benefit of allowing you and your team to work pretty much anywhere, giving global access to the project.

(By the way, I’m using the term “virtualization” a little too loosely here for brevity. A better blanket term would just be “remote computing”. “Virtualization” really refers to the process of emulating computer hardware on a server system.)

The downside to virtualization is that you have to get the data to the remote machine in the first place. If you’re hosting it yourself, that’s as easy as connecting a Thunderbolt drive to a workstation on the SAN. If you’re using a third-party host like Amazon, you have to count on many hours of upload time via a service like S3 or Aspera, or courier a drive–if they’ll even accept one. Then of course there’s the whole security issue, which Adam already hit on very well in his piece.
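
The “many hours” point is easy to quantify; the payload sizes, uplink speeds, and protocol efficiency below are illustrative assumptions, not measurements of S3 or Aspera.

```python
# How long does it take to push a production's worth of media to a remote host?
# Payload sizes, uplink speeds, and the efficiency factor are assumptions.

def upload_hours(payload_tb, uplink_mbps, efficiency=0.8):
    """Hours to move payload_tb terabytes over an uplink of uplink_mbps Mb/s."""
    bits = payload_tb * 1e12 * 8
    usable_bps = uplink_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600

for payload_tb in (2, 10, 40):
    for uplink_mbps in (50, 150, 1000):
        h = upload_hours(payload_tb, uplink_mbps)
        print(f"{payload_tb:3d} TB over {uplink_mbps:5d} Mb/s uplink: ~{h:8.1f} hours")
```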

I’m a big fan of the remote computing route, when you have control of the hardware yourself. I currently have a 20-core Windows 10 workstation with 7 GPUs sitting in my studio, connected to a healthy 150/150 fiber pipe to the web. I find I can connect anywhere in the US that has a decent downlink and work via Microsoft Remote Desktop as if I’m on the machine locally.

Microsoft has actually done an amazing job on its most recent Microsoft Remote Desktop client for OS X. So much so that I can sit in a cafe with my MacBook Pro and feel like I’m using a Windows 10 laptop that just happens to have 20 cores and 7 GPUs installed into its lithe frame. And when I connect an external monitor to my MacBook Pro, the Remote Desktop app takes it over just like it would on the local machine. Hats off to some amazing engineering there by the Microsoft remote team.

Here’s what I find virtualization great for: compiling code, working inside 3D applications like Maya, Modo, or Unreal Engine, and general office work. When it comes to editing, for the most part I get solid performance. But the internet is a fickle thing, and the farther I get geographically from my workstation, the more unpleasant things become. Audio starts to hiccup a little and I’ll get some occasional refresh issues.

Of course, I’m not working with a remote system engineered for video editing; I’m working with Microsoft’s generic remote protocol. But I have used “professional” remote services in the past and been underwhelmed. I’m also skeptical as to how these remote systems would handle 32-bit HDR color ranges, given that a lot of the voodoo magic they do to get responsive streaming is clever compression of the video display signal.

I did try actual virtualization a couple of years ago: setting up a remote system with software on the Amazon EC2 service. I found the configuration extremely confusing (and maybe I need to be disabused of the notion, but I’ve always felt I was reasonably savvy when it comes to system configuration), the setup painfully slow, and the metered usage a little terrifying. The whole tiered usage system was harder to decipher than the system configuration. I stopped my experiment short for fear of ending up with a $15,000 usage bill the following month.
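
For a sense of why metered usage breeds that fear, here is a toy cost model; every rate in it is a hypothetical placeholder, not a quote of EC2 or any other provider’s actual pricing.

```python
# Toy model of a metered cloud bill: a GPU instance left running for a month,
# plus parked storage and egress. Every rate is a hypothetical placeholder,
# not any provider's actual pricing.

gpu_instance_per_hour = 3.00       # hypothetical $/hour for a GPU instance
storage_per_tb_month  = 25.00      # hypothetical $/TB-month of attached storage
egress_per_tb         = 90.00      # hypothetical $/TB transferred back out

hours_running = 24 * 30            # instance accidentally left on all month
storage_tb = 20                    # source media parked on the service
egress_tb = 5                      # finished material pulled back down

total = (gpu_instance_per_hour * hours_running
         + storage_per_tb_month * storage_tb
         + egress_per_tb * egress_tb)
print(f"Estimated monthly bill: ${total:,.2f}")
```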

Perhaps virtualization has improved since my test, but I also find it hard to imagine getting terabytes of video data to and from a remote third party system on a regular basis. I know Ramy at Digital Film Tree is doing some pretty clever stuff with upload workflows, but for what I do (VFX and finishing) at the end of the day I need to be working off the full quality source files. And somehow they need to be resident on the system doing the processing.

In summary: in a world headed for 32 bit HDR workflows, local storage and computing will still remain the reliable option. Remote access to that local storage and computing just kicks it up a notch.

As an aside, people sometimes confuse the fate of hard drives with the emergence of the cloud. The cloud uses hard drives! We’re just displacing the location where those physical devices are sitting. And given that in 2016 we had working lab prototypes that could store 1 terabit per square inch, and now in 2017 IBM has announced the ability to store a digital bit on a single atom, I’d say hard drive technology is more than ready to meet our future data demands. Well, at least until we move to 16K holograms…
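
Taking the “1 terabit per square inch” figure literally, a quick estimate of what it implies for a conventional 3.5-inch platter looks like this; the usable platter geometry is an assumption for illustration.

```python
# What "1 terabit per square inch" implies for a 3.5-inch platter.
# The usable platter geometry below is assumed for illustration.

import math

outer_radius_in = 1.84      # assumed usable outer radius of a 3.5" platter
inner_radius_in = 0.50      # assumed inner (hub) radius
density_tbit_per_in2 = 1.0

usable_area_in2 = math.pi * (outer_radius_in**2 - inner_radius_in**2)
tb_per_platter = usable_area_in2 * density_tbit_per_in2 * 2 / 8   # 2 surfaces, 8 bits/byte

print(f"Usable area per surface: ~{usable_area_in2:.1f} square inches")
print(f"Capacity per platter:    ~{tb_per_platter:.1f} TB")
```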

One last thing: I think the bigger question we need to ask is about archival storage. Where do we dump all this stuff when we’re done with the project?

 


Adam Wilt
Camera Log

As we’re having this discussion, Jim Mathers of the Digital Cinema Society has just sent out his latest newsletter, including his “One DP’s Perspective” essay, Are You Ready to Take Advantage of Post Production in the Cloud? It’s well worth a look; he talks to a number of industry providers, and describes a world in which connectivity has advanced to allow (mostly overnight, non-realtime) asset transfer and replication between facilities without schlepping disks and tapes around. It’s not “editing in the cloud”, but high-bandwidth shared storage—as opposed to direct-access, per-workstation storage—within facilities, much as Jeff describes at Bio-Rad, and with reasonably fast material transfer between facilities allowing more decentralization across the post pipeline.

He highlights a very promising workflow that this level of connectivity enables:

Another DFT service Ramy [Katrib, CEO of DigitalFilm Tree] was telling me about that especially piqued my interest as a Cinematographer is known as a “Remote Color Session”. I’ve written a lot about the frustration we DPs feel when we are not able to supervise the color grading of our work. The problem stems from the fact that successful Cinematographers are usually onto their next project, many times on a distant location, before their last one reaches the DI stage. It is getting to be somewhat common in these cases that the Post house will set up a remote monitoring situation so that the DP can follow along remotely as the Colorist works at the Post house. However, these have traditionally been highly compressed files in order to meet the available bandwidth parameters which is certainly not ideal for grading.

However, working with capabilities built into DaVinci Resolve, DigitalFilm Tree has begun taking the process a step further. They transfer the camera raw files to a remote server located at a facility convenient to the DP. Then, instead of compressed footage, only the color decision data is being transmitted in real time, thus using a fraction of the bandwidth, so that what the DP is seeing on location is displayed at full quality from the raw files at his location….sweet!

This is likely how true “editing in the cloud” will evolve: the high-bandwidth video and audio files (or “essence” in geek-speak) don’t change a lot: they trickle in as they’re shot or generated, but they are fairly quiescent once created. The decisions about what to do with those assets—EDLs, CDLs, LUTs, updating the playhead position on a remote instance of an NLE so two geographically distant people can see the same frame at the same time—are very dynamic, but this “metadata” is tiny by comparison, and can be replicated in interactive time even over comparatively thin pipes. So, if you have the bandwidth to replicate the essence across all facilities in reasonable time, even with poor latency, that’s fine: that’s a transfer-once problem. Then all you need is the low latency to support interaction with low-bandwidth metadata, and creative tools that allow synchronized viewing and editing using that shared metadata.
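
To illustrate how lopsided the essence/metadata split is, the snippet below compares a minimal, hand-written ASC-CDL-style correction (the sort of decision data a remote grading session exchanges) against a single uncompressed 16-bit UHD frame; the XML is an illustrative example, not the output of any particular tool.

```python
# Essence versus metadata: a hand-written, ASC-CDL-style grade (illustrative,
# not generated by any particular tool) compared with one uncompressed frame.

cdl_xml = """<ColorDecisionList>
  <ColorDecision>
    <ColorCorrection id="shot_0042">
      <SOPNode>
        <Slope>1.02 0.98 1.00</Slope>
        <Offset>0.01 0.00 -0.01</Offset>
        <Power>1.00 1.00 1.05</Power>
      </SOPNode>
      <SatNode><Saturation>0.95</Saturation></SatNode>
    </ColorCorrection>
  </ColorDecision>
</ColorDecisionList>"""

metadata_bytes = len(cdl_xml.encode("utf-8"))
frame_bytes = 3840 * 2160 * 3 * 2          # UHD, RGB, 16 bits per channel

print(f"Grade metadata:             {metadata_bytes:,} bytes")
print(f"One uncompressed UHD frame: {frame_bytes:,} bytes "
      f"(~{frame_bytes / metadata_bytes:,.0f}x larger)")
```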

Whether it’s one editor in Chicago handing his project file off to his compatriot in Cincinnati, or a DP and colorist in LA creating a grade while the director in NYC sits in on the session, once the bulky essence files have replicated it’s only the lightweight metadata that need to fly back and forth interactively. Not only is this realistic with current, commodity, readily available Internet connectivity, it has the added advantage that no valuable frame data and audio samples are being transferred back and forth during the session, reducing the piracy problem.

Handing off projects for serial access (editing by one editor at a time) is viable today, but for shared editing sessions we’re not quite there yet. The biggest missing piece for wider adoption is the same level of multi-player, shared-session interactivity in our creative tools as we have in Google Docs. DaVinci Resolve’s Remote Grading shows how it can be done. We just need our other tools to gain that capability… or we just resolve to use Resolve for everything, something that looks to be increasingly viable with each new software update.

If I were a product manager at Avid, Adobe, or Apple, I’d be very nervous about what Blackmagic is doing with Resolve.
