
HPA Tech Retreat 2015 – Day 2


 

From Mark Schubin’s Intro and Year in Review, through Pete Putman’s CES Review, it’s day 2 of the Tech Retreat. Plus, we get to debate integer vs. fractional frame rates again, a topic that will never die.

[Updated 18:58 PST: Final except for fixups.]

The HPA Tech Retreat is an annual gathering of some of the sharpest minds in post, production, broadcast, and distribution. It’s an intensive look at the challenges facing the industry today and in the near future, more relentlessly real-world than a SMPTE conference, less commercial than an NAB or an IBC. The participants are people in the trenches: DPs, editors, colorists, studio heads, post-house heads, academics, and engineers. There’s no better place to take the pulse of the industry and get a glimpse of where things are headed; the stuff under discussion here and the tech seen in the demo room tend to surface in the wider world over the course of the following year or two, at least if past years are any guide.

I’m here on a press pass, frantically trying to transcribe as much as I can. I’m posting my notes pretty much as-is, with minor editing and probably with horrendous typos; the intent is to give you the essence of the event as it happens. Please excuse the grammatical and spelling infelicities that follow.

(More coverage: Day 1, Day 3, Day 4.)

 


 

Welcome – Leon Silverman, HPA

Welcome to the 21st annual Tech Retreat – we’re now old enough to drink! Biggest to date; sold out. The HPA will now be known as the Hollywood Professional’s Alliance (buttons are passed out).

 

Introduction & Technology Year in Review – Mark Schubin

2016’s Tech Retreat will be 15-19 Feb.

19th or 80th “this is the year of HDTV”. We are still shooting/protecting for 4:3. HDTV: 1935 report on high-def, so it’s been a while. Eutelsat in 2014: broadcasting 5000 channels, of which 500 are HD. Russia’s NTV just starting to experiment with HDTV.

Industry shifts? tape to film, linear to nonlinear, etc. Nielsen Total Audience Report (download it) says this is 1st year total viewership has dropped. 81% have internet, 78% have broadband, 3% broadband only. Worldwide pay TV increasing even as cord-cutting increases. CEA latest penetration TV 99%, internet 78%. Millward Brown Report shows generational shift towards mobile, millennials watch equal amounts of TV and smartphone media, but don’t watch smartphone media for more than 5 minutes before switching to larger screens. Netflix making inroads on linear TV. NYTimes Bilton story on in-flight WiFi says it sucks… because people are using it! Over 1/3 Dutch people use illegal VOD.

Big “news” this year is 4K. Mark’s sister’s TV is S3D, which she didn’t know and didn’t care about. Will 4K “sell” the same way? Toshiba tiny 4K camera with 1600TVl/ph; Sharp “more than 4K” TV, shootouts between 720p and 4K. Samsung’s voice command TVs transmit what they hear over the ‘net.

 

Washington Update – Jim Burger, Thompson Coburn LLP

Congress: in control of Republicans, but not enough to overcome filibusters (which the Democrats suddenly like). Lots of hearings on copyright, but little agreement on anything other than extending DMCA. 

Lots of litigation:

ABC v. Aereo: when we last looked, 2nd Circuit affirmed Aereo’s victory 2-1, “that’s not a public performance.” Who is capable of receiving it: it’s going from Aereo to you. Re-hearing affirms decision. Broadcasters file appeal, Supreme Court majority 6-3 that Aereo publicly performed copyrighted works without permission. Breyer writes majority opinion focusing on ’76 copyright act; “one of Congress’ primary activities … was to overturn the determination that CATV systems fell outside the scope…”, rejects Aereo’s argument, it “looks like cable TV”, “Aereo, and not just its subscribers, perform[s] or transmit[s]”. Dissent: Aereo is like Kinko’s or another copy shop, not guilty of infringement. All Aereo is doing is providing the tech. 

Hopper: Dish’s “Primetime Anytime & Auto Hop” to skip commercials, and forward shows to tablets (called “Joeys”). Networks sued for direct, contributory, vicarious infringement, & inducement to infringe. Denied; appeal to 9th Circuit; denied. B’casters have no “copyright interest” in controlling commercial skipping. But the Aereo case knocked out the volition argument; Hopper looks like copying by Dish. Judge Gee: Hopper different than Aereo – had initial license to transmit to consumers. The Sony VCR case extends to this case: the user has “fair use”, so no secondary liability. It’ll probably be appealed further.

Cindy Lee Garcia v. Google Inc.: Garcia performed a script for $500, wound up in “Innocence of Muslims” overdubbed with incendiary insults. YouTube posted the movie online. Garcia sends a takedown notice to YouTube. They refuse. To District Court on copyright grounds and irreparable harm (due to fatwa against everyone in film). DC denied; 9th Circuit reversed: there is an independent copyright interest in her performance, filmmaker did not own the interest as a work for hire, didn’t have a license to use her performance in that way. Google appealed; hearing held on Dec 26, we’ll see where it goes.

FCC action:

Net neutrality mostly concerned with the last mile (ISP to home), also edge providers. All traffic treated the same, v. paid “fast lanes”. Verizon v. FCC case: DC Circuit overturned FCC rule to compel broadband providers to treat all traffic equally, saying FCC had no jurisdiction under Title I. Court said FCC has some general authority for regulation, but Title II (common carrier regulation) needed for anti-discrimination and anti-blocking powers. 19 Feb 2014 FCC NPRM (notice of proposed rulemaking) to rely on Title I, but will consider Title II; sought comments including what rules to forbear if we go with Title II, and should fast lanes be allowed. 3.7 million comments received: Internet advocates liked Title II, providers for net neut but against Title II. President Obama says reclassify under Title II, cuts ground out from under FCC Chairman. (So much for Title I!) Congress has bills, but nothing may pass. Chairman Wheeler last week stated we’ll go with Title II reclassification for wired and wireless broadband. Ban paid prioritization, blocking, throttling. Draft order 5 Feb, commission vote 26 Feb, forbear most regs of Title II except for 13 sections. Competitive broadband: few consumers have a choice of broadband providers. Worries about starting to “regulate the Internet” since regulations grow over time.

Spectrum auctions: AWS-3 reserve price $11 billion, proceeds over $41 billion: massive interest in spectrum. So b’casters are more interested in selling their spectrum. NAB challenges reallocation rules in court based on critical audience measurements. TV spectrum auction pushed back to 2016. Goal is to clear 84MHz of TV spectrum, FCC expects $39 billion from the sale.

OTT MVPD (multichannel video program distributors) NPRM: Are Netflix etc. MVPDs regardless of transmission path? What about MLB-TV that owns its own content? What MVPD rules should be imposed, e.g., access to other programs, retransmission consent, video description, emergency info, CALM, employment rules, etc. Hearings later this month.

Drones or Unmanned Aircraft Systems (UAS): a cool way of making interesting video. Congress ordered the FAA to rule on UAS integration; final rules in 2016 or 2017. Interim: must have FAA authorization, file a section 333 exemption (Special UAS Rules). 250 applications so far, mostly for closed-set film production. Conditions: below 400 ft, in sight of pilot in command, PIC must have a private pilot’s license and 3rd-class medical, daylight only. Warner Bros used a drone in an episode of The Mentalist, the first approved use.

Questions: 

Time Warner / Comcast merger? 60%/40% chance, the longer it drags on, the less likely to go through.

MVPDs: aren’t they open to local franchise fees? Yes, but that’s based on physical plant, and OTT MVPDs don’t have any (Jim thinks that’s how it will go).

 

Panel: UHD Content Protection: a View from All Sides of the Ecosystem

Moderator: Rusti Baker, ARM

Wendy Aylsworth, Warner Bros.

Helena Handschuh, Secure Content Storage Association Security Working Group

Heather Field, FOX

Carolina Lavatelli, Internet of Trust

Rusti: What are the implications for devices and their lifecycles? Five years ago most DRMed devices all used ARM7. I now work for ARM, shipping 1 billion chips a month. No longer mostly for mobile devices; more auto entertainment, industrial appliances, fitness bands, IoT, etc. Many challenges: 3rd-party tech, often open-source, unknown vulnerabilities. Products developed ever faster, 12-month development lifecycle, but the test/qualification period hasn’t grown.

How do we reflash memories for shipped product that have been compromised? Physical access often required, not practical. 

Coding errors seem obvious once identified, but not before! Hard to identify remote vulnerabilities. Security isn’t always the top development priority. Attackers are experts and have the lifetime of the device to break into it.

Wendy: A rush to next-gen content formats; these cost more to produce and consume. These must be beneficial to consumers and industry; MovieLabs Specifications for Next Generation of Video and Enhanced Content Protection. 

Resolution alone is insufficient, already a lost production cost. Lots of folks who have Blu-Ray players only buy DVDs, because they don’t see $5 worth of additional visual quality in the Blu-Ray disc.

Changes in visual parameters have different costs and benefits:

HDR seems to have the highest impact at the least cost.

Typical film ripping profile: leaks in industry pre-shows, big spike at a film’s release, bigger spike when DVDs ship to the supply chain. (Now, with digital cinema watermarking, theater ripping has diminished.) Another spike when BD ships, but it’s tiny compared to the DVD spike. Another spike just before a sequel comes out.

Copyright industries (software, games, books, films, TV, etc.) $142B/year in value.

Heather: Hardware root of trust: identify and authenticate device, use to establish trust chain. 
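[Heather’s “trust chain” is easy to illustrate; here’s a toy Python sketch of the general mechanism, my own illustration and not any vendor’s actual scheme. -AJW]

```python
import hashlib

def verify_chain(stages, fused_root_hash):
    """Walk a boot/playback chain in which each verified stage vouches
    for the hash of the next. The first hash is burned into silicon
    (the hardware root of trust); any tampered stage breaks the chain."""
    expected = fused_root_hash
    for code, next_hash in stages:
        if hashlib.sha256(code).hexdigest() != expected:
            return False  # stage doesn't match what its parent vouched for
        expected = next_hash
    return True

boot, app = b"bootloader v1", b"player app v7"
stages = [(boot, hashlib.sha256(app).hexdigest()), (app, None)]
print(verify_chain(stages, hashlib.sha256(boot).hexdigest()))  # -> True
```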

Trusted Execution Environment (TEE): secure processing, runtime app and memory checks, secure cryptography. Content separated from decryption, video decodes within the TEE, protected frame buffers. DRM runs in the TEE, output on HDCP 2.2 (or downres if no HDCP). Enables locking content until release date. Online-accessible license for copying or moving.

Forensic watermarking to trace piracy. System-wide renewability to update firmware, DRM, keys, heal breaches, so “hack one” does not equal “hack ‘em all”. Allow secure video pipeline updates and improvements.

Helena: Multiple forms of piracy and multiple vulnerable points in the ecosystem. We want premium UHD content everywhere, but can it all be secure? Consumer electronics may not provide for over-the-air firmware updates, for example. 

Typical threats & issues: weak crypto, weak DRM scheme (no renewability, which means once a protection is broken there’s no way to fix it), after-hours cloning in disk manufacturing, people hacking devices for keys. Software attacks and side-channel attacks on devices. Weak link to the display device; camcording off the display (weak forensic watermarking). Self-certification of device vendors, not checked and verified.

End-to-end security is possible. Stronger crypto, stronger watermarking. Rights renewability per title. Secure key provisioning systems. Hardware roots of trust on the device, strong access control, frame buffer encryption, encrypted media pipeline (HDCP 2.2 etc.) Efficient forensic marking schemes to make camcording more traceable. 3rd-party security certification if needed.

Carolina: Challenge is to define & develop security schemes compatible with risks, costs, delays in the industry. Risk analysis in approval, development, end-usage. Security by design (sound protocols & crypto; data access & flow control; HW tech maturity). Root of trust / Trusted Execution Environment.

Security assessment: independent 3rd party, or self-assessment (conflict of interest, and can’t be done properly by the same developers). Certification role: lab licensing, harmonized methodology, definition of requirements, evaluation monitoring.

Summary: evaluation referential, independent assessment, independent certification.

Questions: 

How has the rapid pace of development affected standardization? Wendy: proliferation of devices and formats has increased the challenge of getting content to consumers. Constantly expanding, a forever cycle. 

Now we’re raising the bar on security, what should consumers expect? Heather: transparency in the end-user experience.

To Helena: How do you see this rolling out; will there be a tiering or a segmentation of the market? Helena: depends on the market. For TVs, everyone will go for highest level. In mobile, there will be low-res phones and high-res phones. Tablets probably aiming for the high end, but we see both markets playing out.

Carolina, what testing is appropriate for self-assessment or submitting to labs? We can imagine functional self-testing and assessment, but for full security it requires independent assessment.

Regarding globalization of UHD content and harmonization of standards, what are the challenges for content protection? Content protection is a different market, very dynamic, devices changing very fast. How to make sure your devices are approved quickly? That’s the challenge.

 


 

Stop Trying to Help: Making the Living Room Safe for Content – Chris Armbrust, Marin Digital, & Geoff Tully

Guy Findley (?): Metadata Madness event in LA, Tuesday 25 March at the Luxe Sunset Blvd Hotel. Why metadata? It’s everywhere. MovieLabs and SMPTE will be there. There will be a NY event, too.

Geoff: THX goals: an environment for viewing media. Metadata used to convey grading info to the display. How to get consumers happy with their products. Steve Poster has a message for TV makers: Stop destroying our content! Stop motion interpolation!

Chris: A working example for content metadata: proper display of 3D. How to get 3D signaling from BD to TV? HDMI used CEA-861 VSIF (vendor specific info frame) data, covered most if not all 3D variations. Worked for full HD 3D on BD, missed some variants for broadcast and cable, hence the move from HDMI 1.4 to 1.4a. Example: which frame comes first, left or right?

Future VSIF implementations became crowded; specs were there but untested, not present in legacy parts; this led to HDMI 2.0.

Geoff: Different examples in past 5 years. Aspect ratio descriptors (ARD), for example. Auto picture sizing based on detected black bars takes control away from user. Neural Audio / DTS allowing 5.1 broadcast over stereo channels; with metadata a receiver can switch between stereo and 5.1 decoding. Color space and content: DVDs are Rec.601, BD is Rec.709, some games use extended color space. But when you use the game console to play a DVD or BD? 601 and 709 aren’t auto-detectable, but the player should be able to tell from the disk type to set the HDMI color metadata properly.

Chris: UHD/4K evolving: HDR, bit depth, frame size, aspect ratio, frame rate, color gamut, audio type; all need metadata to set ‘em up properly.

If we had metadata in 1968, we could have solved the mono/stereo transition more easily! UGC: portrait-mode videos, for example, displayed on a TV: rescale? crop?

Metadata to convey creative intent (e.g. to set brightness level on a TV to emphasize highlights vs shadows). Art vs artifact: metadata used to ensure proper playback. Artistic Intent metadata in the living room, so when we switch from sports to theatrical, the set (or the whole home eco-system) adapts. A lot of moving pieces: chipsets, production for distribution, creator’s intent, playback devices, playback medium, standards. Challenges: evolving standards, some are easy (some are not). There are standards groups for each component of the process; how do we get them to work together? Parametric vs enumerative: think ARD: 0 for 4:3, 1 for 16:9, but what about 1.85:1, or 2.35:1? Organizations: SMPTE, ITU, IEEE, ATSC, etc.
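[To make the parametric-vs-enumerative point concrete, a quick Python sketch; the code points are illustrative only, not the actual CEA/SMPTE table. -AJW]

```python
from fractions import Fraction

# Enumerative: every aspect ratio needs its own code point, so new
# formats (1.85:1, 2.35:1, ...) are stuck until the table is revised.
ARD_CODES = {0: "4:3", 1: "16:9"}  # illustrative subset

# Parametric: carry the value itself, and any future ratio just works.
def describe(aspect: Fraction) -> str:
    return f"{aspect.numerator}:{aspect.denominator} ({float(aspect):.2f}:1)"

print(ARD_CODES[1])                   # -> 16:9
print(describe(Fraction(2048, 858)))  # -> 1024:429 (2.39:1)
```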

Geoff: We want Steve Poster to have hope. His aspiration is not unattainable. We want to encourage discussion. 

One more thing: there will be HDR displays before there’s much HDR content for them, and someone may design a system to show a normal picture in HDR whether you like it or not!

StopMotionInterpolation.com

 

Integer and Fractional Frame Rates: Pros, Cons, & Help Desks – Bruce Devlin, Dalet & John Pallett, Telestream

Let’s start with the elephant in the room: some of you do not understand why we have fractional frame rates, and do not appreciate the beauty of drop-frame timecode!

Why do fractional frame rates exist, and why will they continue? 29.97 was needed to keep the audio from interfering with the color signal; the color subcarrier needed to be an odd harmonic of half the line rate, and 1000/1001 seemed the best compromise at the time. But it’s not exactly 29.97, it’s 30000/1001, or 29.97002997002997…

Timecode? Used for alignment, editing, duration. Duration is a problem, “like leap years” for 29.97; you’ll be off 108 frames/hour, or about 2 minutes/day. So skip ;00 and ;01 at start of each minute except for multiples of 10 minutes. What could possibly go wrong? Media in wrong timebase, you’ve lost the timebase, mismatched timebases…
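[For reference, the label-skipping arithmetic in Python; this is the standard SMPTE drop-frame rule for 29.97, nothing vendor-specific. -AJW]

```python
def df_timecode(frame_number, fps=30):
    """Frame count -> 29.97 drop-frame timecode. DF skips the *labels*
    ;00 and ;01 at each minute except every tenth minute; no frames of
    picture are ever dropped."""
    drop = 2                          # labels skipped per dropped minute
    per_min = fps * 60 - drop         # 1798 labels in a dropped minute
    per_10min = fps * 600 - 9 * drop  # 17982: minute 0 keeps all 1800
    tens, rem = divmod(frame_number, per_10min)
    skipped = drop * 9 * tens
    if rem > drop:
        skipped += drop * ((rem - drop) // per_min)
    n = frame_number + skipped        # renumber into label space
    ff, ss = n % fps, (n // fps) % 60
    mm, hh = (n // (fps * 60)) % 60, n // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

# One hour of real time is 107,892 frames (the 108 frames/hour above):
print(df_timecode(107892))  # -> 01:00:00;00
```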

Two servers feeding DF (drop-frame) media with automation. When set to Play, all goes dark. Why? The MXF file was correctly encoded, but head-trimmed from 1:00:00;00 to 1:02:00;00 (which doesn’t exist) so automation waited for the nonexistent code forever… on both the main and backup servers!

Netflix caption files and sync drift was the #1 QC problem for quite a while: 23.98 video and 24fps captions. We don’t have a standard for 24DF timecode and caption formats don’t include fractional rates. Netflix converts everything to media time; extended their metadata to include timebase info, so timing can be converted. Error rate dropped from 4% to under 1%, and it’s easy to fix.
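[The media-time idea in a few lines of Python; a sketch of the concept only, not Netflix’s actual schema. -AJW]

```python
from fractions import Fraction

def tc_to_seconds(tc, label_rate, true_rate):
    """Non-drop timecode -> elapsed media seconds. label_rate is the
    integer frame label count (24); true_rate is the real timebase."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    frames = ((hh * 60 + mm) * 60 + ss) * label_rate + ff
    return frames / true_rate

# The same caption timestamp lands 3.6 s later per hour when the video
# timebase is really 23.976, which is exactly the drift described above:
print(float(tc_to_seconds("01:00:00:00", 24, Fraction(24))))           # 3600.0
print(float(tc_to_seconds("01:00:00:00", 24, Fraction(24000, 1001))))  # 3603.6
```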

“I don’t know what I have” problem: digitizing archive, captions had always been entered manually, different sync methods. Some were 23.98, some were 29.97, some were 30.0. Very expensive to work it out.

What happens downstream? Can you convert DF to integer frame rates? Yes, but not widely deployed; older fractional-rate-only TVs (which must by law be able to receive broadcasts) may not accept changed rates.

Other DF challenges: converting frame rates, 29.97 to 25, for example. There are good frame rate converters available, but there are different ways to get from A to B.

Editing: trim values may be in an incorrect timebase; EDL values may also be incorrect; you might need timecode restriping; EDLs may contain multiple types of timecode. Even if you get it cut, the mixed cadences may screw up your transmission encoder’s performance.

$70B is generated annually in fractional-rate broadcasting; it’d take $50B to change out all the gear for integer-rate gear. Isn’t going to happen.

Lesson 1: Standards do make life easier. Lack of 23.98DF does make things difficult, also no HFR DF. Non-drop works well for editing and alignment if not duration.

Lesson 2: Avoid ambiguity: specify the timebase and timecode used. IMF needs to ensure this throughout the tool chain.

In the future, acquisition will increasingly go non-drop, but distribution will remain DF in “NTSC” areas and NDF in “PAL/SECAM” countries, and newer consumer devices won’t care.

Questions: 

As we move into 4K, is 3840 the new 29.97?

Who wants integer 120.00 fps? (Many hands go up.)

Will DF remain forever? (Half the hands go up.)

 


 

Broadcasters Panel

Moderator: Matthew Goldman, Ericsson

Maxime Caron, Radio Canada/CBC

Bob Seidel, CBS

Rich Friedel, FOX

Skip Pizzi, NAB

Mario Vecchi, PBS

Mark Aitken, Sinclair Broadcast Group

A free-flowing discussion of possible topics. [I don’t expect I’ll be able to keep up with who is speaking. Apologies in advance. -AJW]

One topic: impact of alternative platforms on b’casters; b’casters are getting in on the game with CBS’s All-Access platform. Bob, how has it worked?

Bob: An app for iOS / Android to phone or tablet to allow local TV viewing via WiFi / 4G / 3G. Also determines if you’re in coverage area, if not, drops out. Always shows local station feed. Also reports viewing to Nielsen, “preview data”, eventually rolled into overall sample. Uses Sync-back tech, triangulation & hotspot for location data to receive signal. Preserves existing distro model, exclusive license to station to show their content in DMA (market area). All the same content as OTA w/ exception of NFL. Revenue stream for affiliates. Also over 5000 titles in CBS library for VOD.

Richard: We embrace it; FOX has used sync-back for years for a similar app. We started Hulu with NBC, and have been in the space for 4-5 years now.

Q: article in NYTimes about using VPNs to get around geographical restrictions; how will this affect things?

Bob: Many ways to defeat the services, including jailbreaking. So it’s like any countermeasure/anti-countermeasure battle. Sure, ways to break it, but we want to make content available if we can monetize and measure audience.

Q: Now with 90% HD penetration, when can we drop shoot-and-protect?

Bob: Nielsen says 86% have HD, but true HD (purchased HD service or have OTA, STB is HD and is properly connected for HD, viewer is tuned to HD source) is only 36%. So vast majority is still watching 4×3 SD.

Richard: FOX for years producing widescreen, almost everything is 16×9 with AFD (active format descriptor, I misnamed it ARD above).

Q: all the ancillary markets require 4×3, too.

Q, Bill Hogan: When will you require 16×9 as a delivery format? It’s been 17 years since HD turned on. Isn’t it time?

PBS: The wide format for HD is clear; but the majority of our audience isn’t HD capable. We also multicast a lot, which means SD. We’d all like widescreen HD but there’s a long way to go.

Q: The CPMs for mobile are said to be equal to CPMs for b’cast. Do you see that?

Bob: It’s better to aggregate all viewers in a composite rating. If we get that including All-Access viewers, great. Nielsen mandates the OTT signal has the same commercial load as the OTA signal to get credited for it. 70% of data traffic is via WiFi. If you can aggregate all this traffic, the tide lifts all boats.

Q: A year ago we stopped composing center-cut-safe news (local station), and no one has complained.

Q: ATSC 3.0?

Skip: Key requirements being fulfilled, target is candidate standard later this year so testing can occur. Prototype products 2016. OTA throughput 30% better than ATSC 1.0 or better. Modern codecs (HEVC) combined with higher throughput mean better pix. UHD. Immersive audio: Dolby, DTS, MPEG-H Audio Alliance. ATSC 3.0 intended to cover a broader spectrum of services and receiver types, from big TVs to mobile devices, including in-vehicle. Trying to do without too much subdivision. Targeting the international marketplace, too. Audio from mono to 22.2 channels has to be scalable / adaptable. More robust transmission. Will allow multiple operating points, e.g., a 4K service for fixed antenna, plus a scaled-down HD signal for handheld, “trading off the fungibility of digital signals”.

Q: Multiple operating points and applications; convergence; Mark, is it really happening?

Mark: The conceptual view is a matter of convergence. ATSC 3.0 is a convergence platform. Look at the IP transport management layer MMT. It’s about taking content delivered across multiple platforms. A platform for broadcasters to compete with personalized services. Multiple devices will support multiple codecs; the days of a monolithic standard are gone. An evolvable, flexible platform to compete in the modern world. B’casters have been one-trick ponies; ATSC 3.0 means to break out and offer mobile, fixed services, pull in content across fixed and wireless Internet, etc. The fact of the matter is that the major parts are becoming very real; it hasn’t been put together as a single platform but demos of the various bits have been done.

Q (Deborah McAdams): You’re stuck with 1.0 by the FCC. Where’s the FCC on letting you use 3.0?

Mark: FCC claims indifference to 3.0; they won’t rule one way or the other. Massive collective effort in b’cast to look at regulatory and technical sides of the issue. Large breakthroughs in ATSC to bring components before congressional folks and other lawmakers to see what a rewrite of the Communications Act would be. We’re stuck in the Clinton years; it’s not the way things are any more (b’casters aren’t the sole means of mass distro any more). If you want a true competitor to AT&T and Verizon, b’casters have to be the ones.

?: I think that this allows spectrum sharing, so I think FCC will be interested; it’ll cause more stations to get rid of spectrum.

?: There’s a possibility to bond 2 6MHz channels together; requires regulatory change. FCC is very interested behind the scenes. Divide between spectrum auctions and ATSC 3.0, statutory requirements, keep ‘em separate. But as ATSC 3.0 advances and auctions get delayed, there’s more interest in 3.0. B’casters seek a more flexible environment re. channels and codecs and such, so we don’t have to jump through hoops every time the tech changes.

Q: Transition question: what’s the transition scenario for 3.0? Government-sponsored STBs?

Mark: That’s one thing we need to do cleanly, hold consumer harmless during it. Station sharing, so a station moves its 1.0 content to another station and converts to 3.0. Something as simple as Chromecast, 10M sold at under $40, not beyond discussion of how b’casters would divide up the cost to send similar devices out to those dependent on OTA TV (100M-150M). The government won’t hand over $1B to give out transitional devices.

Skip: You can’t plan this until you know when the transition will occur. It’s something important to plan for when we can.

Q: How many of the 180 PBS member stations will join up on single channel and sell off the others?

Mario: I don’t know; I don’t think anybody knows. Numbers are all over the map. The PBS system is made from indie O&O stations; PBS can’t decide for members. By and large, the evolution from 1.0 to 3.0 and the spectrum auction have to be looked at in the context of where our business is going in the future. If you look at it with today’s model, you’re missing the point. We deliver content; we need to understand what the consumer wants and their habits, put in the context of the new world (auction, 3.0, WiFi, fiber to the home), and how we adapt to this world.

Q: Perfect storm of good stuff coming: HDR, HFR, 4K, etc. How do they all get here?

Maxime: A much better experience, the challenge is controlling it across multiple devices with different color spaces, etc. How do we do it so it’s safe to the consumer, gives the best experience. Let’s get it right.

Matthew: UHD is exploding; it was about spatial res, but this year’s CES is about wider color, HDR.

Mark: All of these HDR, HFR, WCG are for the future. What’s the best and highest use of spectrum, how do we generate revenue, how to allocate our bits. From a transitional standpoint, scalable codecs, higher quality HD as the base layer, an enhancement layer with full, robust, wide everything. Maybe 2 components to the feed: base and enhancement [sounds like European EDTV from the ‘90s -AJW]. There have to be ways we can produce and distribute content that’s not one-size-fits-all. If you look at what’s happening with scalers and interpolation, you get great 4K display of good HD feeds.

Bob: Consumers can’t tell the diff between HD and 4K; EBU reports the same. We shot many things in 4K 10-bit 4:2:2. We edited it and put it on a 65” 4K display; downconverted to HD and fed to a 4K upconverting monitor, plus an HD monitor. Many could not ID the original 4K image at 3 picture heights of distance. You have to be about 4 feet from a 65” monitor to resolve true 4K. CableLabs talks about color gamut; MacAdam ellipses enlarge out towards the limits of the wide gamuts.
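[Bob’s “4 feet” figure checks out against the usual one-arcminute acuity rule of thumb; a quick back-of-the-envelope in Python. -AJW]

```python
import math

diag = 65.0                            # inches, 16:9 panel
width = diag * 16 / math.hypot(16, 9)  # ≈ 56.7 in
pixel = width / 3840                   # ≈ 0.015 in per UHD pixel
# Distance at which one pixel subtends 1 arcminute (~20/20 acuity):
dist = pixel / math.tan(math.radians(1 / 60))
print(f"{dist / 12:.1f} ft")           # -> 4.2 ft
```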

Q: Lots of low-cost cameras that only shoot UHD. Discovery already discussing 4096-only, raw-only delivery. Will we accept 3840 pixel delivery?

Bob: Always shooting the highest quality is a good idea. ITU etc. use 3840×2160 as the baseline for international distro.

Q: Fractional frame rates to solve the mono/color mix; SD/HD mix in the ‘90s; is it time to leave legacy 59.94 behind and go to 120.00 and integer rates from there on up?

One-word answers across the panel: yes, yes, abstain, yes, abstain, yes.

Q: Moving from SD to HD, we changed the studios. What about when moving to UHD? True immersive experience through studio changes?

Bob: We didn’t change lighting or sets; we just added a few set elements. Maybe went to different makeup.

Mark: depends on where the money is; doesn’t happen at the snap of a finger. How many can you transition in a year? 

?: NHK thinks it’ll happen (small rooms, close viewing in Japan). Immersive audio: with all those speakers, the sweet spot is too far back for optimal viewing. Sound bars, sound frames? Another issue. Video gaming is immersive, but one person at a time. Issues are ecosystem wide.

Bob: CableLabs says that close up, with HDR, you notice more judder and blur, so need 120fps, but creatives may not like that.

Q: If tech hasn’t got a proven consumer want, what do you do?

Bob: We work with CE manufacturers so we can better understand how to work with ‘em, how content will be shown on the devices. B’casters should be a transparent pipe to the viewers.

Q: With 4K will we finally see the demise of interlace?

A: Yes! (applause)

 

CES Review – Peter Putman, ROAM Consulting

I shot 200 photos and 100 videos at CES. Here in Palm Springs, go to La Quinta, try the liquid nitrogen ice cream.

How much TV can you buy for $400? $90 for 19”, 42” for $280, 55” with Roku for $400. For $700, you can get a 50” 1080p smart TV, or a 60” plasma. $900 gets you a 60” or 65” smart TV, or a 49” 4K TV. For $2000 you can get a 65” 4K 3D smart TV.

4K prices are converging on 2K prices.

CES: 160K attendees, 3,600 companies; China Inc. booths getting big. More appliances this year, fewer TVs. Toshiba has zero TVs in the booth; only selling TVs in Japan. Panasonic has commercial products. LG is all-in with OLEDs. No digital health stuff. North Hall was a car show.

Cold: 2K, health, phones, tablets, smart TV, 3D, GPS. Hot: UHD with HDR, connected cars, wireless, smart appliances, faster display interfaces, beauty, drones.

More pixels, more contrast, more color, more highlights, more hype; 2K is so 2000. Quantum dots. LG enhances brightness with embedded white pixels. Samsung Super Ultra HD TV. Sony Triluminos displays use QDs. TCL, Samsung, LG all with QDs. Hisense calls ‘em U-LEDs. QDs everywhere.

LG had a display with user-selectable HDR mode – you have no idea how bad it’ll get!

55” is now a small TV. Curved sets. China is the fastest growing market for 4K.

LG 21:9 5120×2160 “5K” display, Sharp 120” LCD, largest from a single fab cut; also 5K display.

LG 8K 98” LCD IPS monitor, 7680×4320, needs a mess of DisplayPort connections! Samsung has a 110” version.

Sharp showed odd-shaped LCDs, like round ones for car panel displays. 

12 major vendors showing car products. McLaren to Fiat to Ford to Ferrari. Nvidia customizable dashboard concept. Toyota Mirai hydrogen car. Hisense HD driving sim with 3 displays.

Super MHL: Mobile High-definition Link, a version of HDMI over microUSB. Super MHL uses a bigger 32-pin connector, 32 Gb/sec. DisplayPort 1.3 is also 32 Gb/sec. USB 3.0 Type-C smart connector and cable.

Snap Technology wireless 12 Gb/sec connection (SiBEAM). Devices sit in a Snap cradle.

Conexant Voice Control for TVs: say “hello Athena” to invoke, works well even in noisy environment.

Eye control of a tablet from TheEyeTribe in Denmark. A sensor bar affixed to a tablet, $99. Scroll through display by looking up and down.

UHD Blu-Ray player, uses H.265 HEVC, coming later this year.

UHD Alliance announced at CES, but it’s a bit ahead of the standards… which is standard for CES.

 


 

From Smartphones to Cinema: Personalized & Immersive Sound – Jeff Riedmiller, Dolby Laboratories

Four pillars: accessible (multi-language, descriptive audio), personalized (modifiable to user preferences), immersive, and adaptable (optimal on every device) sound. Today the TV audio world is flat: 2D. But we don’t experience the world that way. 5.1 enabled a lot, but it’s still flat: no vertical dimension. The area above the listener is quite interesting.

Object-based audio: represent a sound with an x,y,z position as opposed to a fixed channel mix. Future-proof, since it’s not tied to any number of channels. Audio with a flight plan; metadata is the flight plan. Dynamic metadata: time to present, space to place it in. Also render modes (speaker zone masks or speaker snap modes), size & decorrelation, divergence, update intervals for the metadata. For object-based audio, dynamic metadata is mandatory. 

Playback requires a “renderer” to process the audio + metadata to speaker feeds for a specific situation, whether it’s 2.0 stereo, 5.1, 22.2, etc, based on both spatial configuration and user preferences. Wish list: positional, not just directional, so you can render within the room and not just sound stuck on the walls; matches current practice for surround; smooth pans through the audio space; discrete (to be able to render a “point source”); sync between metadata and audio (jitter < 32 samples).

Implementations and philosophies: Vector-Based Audio Panning (VBAP; discrete, good timbre, but limited to objects on the surface of the audio sphere); balance-based (current practice); distance-based. 
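[A minimal sketch of the distance-based flavor in Python/NumPy, purely illustrative and certainly not Dolby’s renderer; the object position here is what the metadata “flight plan” would drive per block. -AJW]

```python
import numpy as np

def render_object(samples, obj_xyz, speaker_xyz):
    """One audio object (mono samples + x,y,z position) -> per-speaker
    feeds via inverse-distance weighting, power-normalized.
    speaker_xyz: (n_speakers, 3); returns (n_speakers, n_samples)."""
    d = np.linalg.norm(speaker_xyz - obj_xyz, axis=1)
    gains = 1.0 / np.maximum(d, 1e-3)     # nearer speakers get more signal
    gains /= np.sqrt((gains ** 2).sum())  # keep total power constant
    return np.outer(gains, samples)

# 5.1-ish layout (L, R, C, Ls, Rs), object just left of center, up front:
spk = np.array([[-1.0, 1, 0], [1, 1, 0], [0, 1, 0], [-1, -1, 0], [1, -1, 0]])
feeds = render_object(np.ones(4), np.array([-0.2, 0.8, 0.0]), spk)
```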

Lots of data: 100+ channels of audio (objects); 100+ tracks of metadata, > 100Mb/sec. What about data interchange (OTA, OTT, other home delivery)? Today’s workflows are data-rate-limited; HD-SDI supports only 16 channels, can carry metadata in VANC. Compressed mezzanine format? Near/mid-term bridge to an all-IP plant in future.

Grouping interchange objects [slide shown: orange spots are speakers, blue dots are audio objects].

Interchanging spatial object groups offers new mix opportunities: Music & FX, dialog helper, language X, commentary, description objects can be recursively regrouped. Allows higher-quality alternate-language experience, for example.

Standardization: lots of work ongoing. ITU-R BWAV/ADM File Interchange – work in progress.

What this sort of object interchange might look like [slide shown with typical group numbers].

So with about 21 tracks plus metadata, you have a very powerful set of building blocks, enabling immersive and personalized audio. Different presentations take from different groups.

Q: How are environment effects (doppler, echo, etc.) being incorporated? A: All early days, work is progressing. Crawl/sit/walk; we’re just starting to get on our feet.

Q: Can we make metadata carriage robust and foolproof? A: I think so, we’ll have to leverage a mezzanine format. Won’t be robust in an SDI workflow.

 

Next Generation Cinema: New Technologies and Techniques and What They Mean for Filmmakers

 

Expanded Color Gamut: What Do Artists Need? – Matt Cowan, Entertainment Technology Consultants & Jan Fröhlich, Stuttgart Media University

Matt: the Christie projector we have here is 2020 gamut-capable; let’s look at some pictures [which I mostly can’t reproduce here for obvious reasons! -AJW]. Color test patches in P3 and 2020; carnival scene timed for 2020; next-gen cinema content.

(Cyan in P3 on left, in 2020 on right; this image is much less vivid than what it actually looked like!)

 

Contemplating the Expanding Canvas: Pairing the Mathematics of Motion and Frame Rate with Artistic Vision – Bill Bennett, ASC and Tony Davis, Tessive

Bill: Demo of HFR 120fps downsampled to 24fps using Tessive’s code. Shot with a 360º shutter to allow the best frame mixing.

Tony: Motion blur is very important. The “clean canvas” has (1) frame rate, (2) how real-world action is captured by the camera (the “pre-filter”), (3) how it’s displayed by the projector (the “post-filter”). Problems: judder (acquisition aliasing), strobing (projector aliasing), blur (low bandwidth). [Shows demo of judder and strobing artifacts alone and combined using a 5fps source.] If we get a sinc-shaped shutter, high-frequency aliasing goes away. We had the Tessive Time Filter; now there’s the RED Motion Mount.

New way: acquire at 120fps, 360º shutter; in post we synthesize a soft shutter by frame combining. Digital downsampling in the time domain. Artistic control. [Demo shown.]

Because we’re using square waves, the higher the frame rate, the smoother the frequency response. You can shape the shutter response as you like.
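[The frame-combining idea is easy to sketch in Python/NumPy: the tap weights are the synthesized shutter, and per the Q&A below they can even go negative for a sinc-like response. My own sketch with made-up weights, not Tessive’s code. -AJW]

```python
import numpy as np

def soft_shutter_24p(frames_120, weights):
    """Downsample 120 fps (shot with a 360° shutter) to 24 fps by
    weighted frame combining. frames_120: (n, h, w[, c]) array.
    weights: the synthesized shutter, e.g. [1,1,1,1,1] for a plain
    square 360° shutter at 24 fps, or a longer tapered (even partly
    negative, sinc-like) window for a smoother temporal response."""
    w = np.asarray(weights, float)
    w = w / w.sum()                    # unity gain on static content
    taps, step = len(w), 5             # 120 / 24 = 5 input frames per output
    out = [np.tensordot(w, frames_120[i:i + taps].astype(float), axes=1)
           for i in range(0, len(frames_120) - taps + 1, step)]
    return np.stack(out)

clip = np.random.rand(120, 48, 64)                  # 1 s of toy 120 fps video
print(soft_shutter_24p(clip, np.hanning(9)).shape)  # -> (23, 48, 64)
```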

Download the software from Tessive’s site.

[Sample still frames shown: 24fps square-shutter extraction on left, smooth shutter on right.]

Also helps reduce noise.

Q: Negative lobes on the sinc function?

A: Can’t do it optically in camera, but easy to do with data.

Q: Variety of shutter angles available?

A: Any combo of integer multiples of the input frame rate, to 360º and onwards for square shutters. I have about five variants for smooth shutters.

 

Towards Higher Dynamic Range – Pete Ludé, RealD

Current display tech can’t emulate real-world light ranges. The human visual system can see about 1,000,000:1 contrast range without adaptation, more with adaptation. An HDR display captures most of this range. 10 stops standard DR, 20 stops HDR? Specular reflections are typically 10x the brightness of a lambertian (diffuse) reflection.

HDR requires higher highlights and blacker blacks. 

Brighter? Use laser projectors for 2x-3x the brightness. Also, we’ve been making screens the same way for decades, spraying the coating on a vinyl substrate. New way: a plastic substrate run through an embossing machine with UV-cured resin, applying a thin coat of aluminum for an engineered surface with 91% reflection efficiency vs 52% for conventional screens. Gain can be doubled with no change in angle or uniformity. Also micro-perfs 85% smaller with the same acoustic performance; much less visible.

Blacker blacks: Projector black state (sequential contrast ratio), involving imager diffraction, dust & debris, glass impurities, imperfect polishing, etc. Projection lens (veiling glare, where light reflects within lens, reducing contrast). Auditorium ambient light. Room contrast ratio (light bounced in the room, bounced off the audience, etc.), with black carpets, seats, walls. According to one study, 0.5% of screen light bounces back (200:1 contrast ratio). Actual measured auditorium contrast is more like 650:1. Total contrast of the system (all factors concatenated) about 354:1. With improvements in tech, can push this to near 700:1 total contrast in a theater. True million-to-one contrast requires bright projector, black auditorium, no people!
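[How those contrast numbers concatenate: if each stage’s stray light simply adds to the black level, contrast ratios combine like parallel resistors. The ~777:1 figure below is implied by the quoted numbers, not something Pete stated. -AJW]

```python
def combined_contrast(*ratios):
    """Total contrast when each stage adds its own stray light."""
    return 1.0 / sum(1.0 / r for r in ratios)

# A 650:1 auditorium and ~777:1 of projector/lens factors concatenate
# to the ~354:1 system total quoted above:
print(round(combined_contrast(650, 777)))  # -> 354
```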

 

Next-Generation Cinema Technology Test Material – Dave Stump & Garrett Smith, Working Group on Test Materials, AMPAS

Dave: NGCT (next-gen cinema test), AMPAS-generated test material for HDR, HFR, high display brightness, wider shutter angles. “The Affair”, a short shot with motion control to repeat the same moves with different parameters: ARRI Alexa at 2K, frame rates 24-120fps, various shutter angles; two F65s in 4K through a beamsplitter to get two exposures 6 stops apart.

More shooting will occur. We want feedback. 

[ARRI 2K samples shown: 60, 48, 24fps]

All this material and the shooting report is available if you contact AMPAS.

What are we learning? Rec.2020 displays are starting to appear. HDR makes judder worse. HFR reduces judder. We’re using ACES to color-correct this material; works great.

 

Beyond the Screen – Frank Tees, Moving Image Technologies

The 1990s: more screens, bigger screens, the overscreening of America. Digital came along, making auditorium swaps easier, but still a relatively low auditorium use percentage. Now we have immersive seating; fewer but better seats: motion, vibration, ticklers/pokers, wind, water, bass extension. Passive (ButtKicker, tremorFX) using bolt on or built in transducers [passive never defined; I’m guessing it means essentially subwoofer feeds into the seats, requiring no new programming. -AJW]. Active tech: additional creative avenues, additional impact as the story requires; track development time depends on # of effects desired. Immersive seating tracks created by seat manufacturers, QC is a day or less. Distro via DCP (D-Box) or separate data feed synced via timecode. Added draw for exhibitors & studios. Additional revenue (increased audience occupancy percentages or added upcharges).

Future developments: path to DCP distro, codec on server with embedded code. External decoder with dedicated seat object. Standards?

 

Panel: Suspending Disbelief: When and How to Use New Tools and Techniques

Moderator: David Geffner, ICG Magazine

Steven Poster, ASC, International Cinematographers Guild

Michael Goi, ASC

David: From Coleridge, 1798, “The willing suspension of disbelief.” So we have UHD, HDR, HFR, virtual reality. Next-gen tech, disruptive tech. We just saw HDR and Rec.2020, how do you see these changing your on-set workflow?

Michael: I’m a technology perverter, I take what comes along and subvert the intended use of it to make it work for me. I used 16mm Tri-X and abused it in darkroom before shooting with it to make it look like 1950s porn footage. I work on shows with mature, great actors, none want to be shot with razor-sharp anything. UHD reminds me of Ren & Stimpy close-ups (ugly!). Needs to be used catering to artistic intent.

Steven: Did you see how the HFR in the AMPAS demo looked like Telemundo? None of the uses of HFR help with suspending disbelief. With new tools, it’s essential that the director and DP are involved in postproduction; the initial concept needs to be followed through. So if you shoot 120fps, who is going to ensure it’s posted properly? We saw that 2020 demo; the ability to desaturate will be critical. We’re not shooting Telemundo here.

David: would you want to see that detail in the sky, when the focus is on the girl in the front?

Michael: These are tools to use, picking and choosing what the tech does to support what we need artistically.

Steven: Look at the use of legacy lenses, so we can get the flares and pleasing look of old lenses on 4K material. Filters are being designed to work with 4K since actors don’t want to see every flaw in their faces. See the Golden Globes? Closeups looked horrible; too sharp.

David: The idea of a more realistic experience is being sold as more immersive. When you get into areas of poetic interpretation, how does that impact your craft?

Steven: Directors I’ve worked with agree that HFR affects emotional responses; we don’t understand how human perception is affected by these tools, and that’s a big mistake.

Michael: Reality is the reality that you create in the story you’re telling. I designed a movie about internet predators to look like what you’d see on TV, but we create our realities.

David: One of our [ICG] members is campaigning against motion interpolation in TV sets. How do you protect against that, to prevent things like that? What can you do?

Steven: You can only protect for one format. You shoot for the biggest format, adapt for the others.

Michael: I shot recently in 2.35 anamorphic, but it’s being shown in 16×9 side-cut. They wouldn’t do that to Lawrence of Arabia!

Steven: Some header/descriptor to affect the display as discussed this morning would be great.

David: Comment on new tech?

Michael: You always want to go forward. Will we ever reach a state of stability? We’re moving forward in the right ways compared to 20 years ago.

Steven: I agree. HDR is valuable. Headroom is valuable. Shoot 4K for 2K, for headroom. Shooting a higher bit rate is valuable.

David: the next generation is watching on phones and tablets. Do you as DPs, are you good with that?

Michael: When TV came in, everything was shot in close-up. You can’t anticipate where things will go. Audiences today want to see things larger so they can connect with it emotionally.

Steven: VR is not a close-up medium. We have to understand how these things work. Cinematographers are passionate about their work, pushing into post to preserve intent. ACES is incredibly important to give us a common playing field.

 

120fps High Frame Rate Production, a Universal Production Format? – Jim DeFilippis, TMS Consultancy

About a year ago I started to think about HFR (I used to work in sports). Is there some value in 120fps beyond slo-mo? We shot in Orlando: softball with F65 at 120fps. Also tests at 120 with F55 @ 2K, temporally oversampled, comparing to 50/60 fps. Also trying different shutter angles: 180, 90, 360. Tested motion in dance, cars, fight scenes. Green screen tests too. Shot in January, so we’ve only processed the dance footage so far [clip shown, 120fps 120º shutter]. Less motion blur, and less judder.

Frame rate conversion to 24fps can trade off blur and judder, depending on the number of frames used in the downconversion and the addition of synthetic blur.

Q: Did you have a chance to look at details in the blending test? A: Think about the ability to control the amount of blur. We’re working with software developers to see if we can do that in specific areas of the image.

Q: Shoot 120 for blending, use 360º shutter? A: It matters or doesn’t depending on the look you want. Yes, 360º gives you the full temporal pattern, but you bake in more blur, too. This is a tool, it doesn’t obviate the input of the director. It’s in its early days yet.

 

 

 

What Just Happened? – A Review of the Day by Jerry Pierce & Leon Silverman

“I’m a bad father for not teaching my kids drop-frame.”

It’s been fun watching Jim Burger’s 20-year progress in PowerPoint presentations.

Who here has cut the cord? (Maybe 10%) What are you watching? (OTA, Netflix, Academy screeners).

Net neutrality is the wrong question, it’s a tragedy of the commons where Netflix expands to fill the network and nothing else works!

Who wants 1080P HDR? (A few.) HFR? (Fewer, but those who want it are more worried about seeing fast motion than about “suspending disbelief”.)

64 years ago we worried about screen brightness; will we ever stop worrying about it?

I wish we spent more time on HDR, less on 4K.

I was hoping to see more convergence between cinema and broadcast; it isn’t happening.

What about 2020 color space? How are you going to master for it? Give us a display and we’ll master to it. Until recently, nobody had ever been able to reproduce those colors. 

We’re going to have displays with lots of color, contrast, brightness, and people will author for that. Then translate to lesser displays?

I’m amazed that this group that can calculate frame rates to five decimal places still calls 3840 “4K”!

I learned that we could have a panel completely populated by women that wasn’t about women in the industry, and that’s great!

Like all new tools, HDR and HFR will be misused horribly, just like 3D.

The best way to get contrast in the theater is to issue the audience black burkas and black popcorn.

Even if people have 4K sets, nothing says we can’t send it a 2K signal, so we can transition into it gradually.

It’s unlikely we’ll figure out the current standards before the next set comes out.

Who has seen a non-theme-park movie with 4D (butt-kicking)?  Yeah, it was a gimmick. Does anyone have an interest in seeing a 4D movie? (Not much response.) We’re the wrong demographic for it.

The amount of people actually watching HD is quite a lot lower than we expected.

 

 

Disclosure: I’m attending the Tech Retreat on a press pass, which gets me free entrance into the event. I’m paying for my own travel, local transport, hotel, and meals (other than those on-site as part of the event). There is no material relationship between me and the Hollywood Post Alliance or any of the companies mentioned, and no one has provided me with payments, bribes, blandishments, or other considerations for a favorable mention… aside from the press pass, that is.

 
