When to edit native, When hybrid, and When pure i-frame… and Why


Ever since the launch of Final Cut Pro 6.0 (at this writing, we are at 6.04), we have had, for the first time, the possibility of realtime hybrid editing. Prior to FCP 6.0, in order to edit in realtime we had to convert all footage to the target format (códec, framerate, resolution, etc.) before editing… or edit natively. Now editors need to decide, on a case-by-case basis, whether to edit native, hybrid, or pure i-frame. But let me start by defining my key terms for this article:

Códec?
Here is my expanded definition of the word:
(Coder + Decoder… from the Latin codex, –icis, code, and de–, the Latin prefix that negates or reverses the base meaning.)

  1. noun. Algorithm used to encode and decode sounds, words, text… or audio/video signals.
  2. noun. Hardware device or computer program that via a specific algorithm carries out encoding and decoding of sounds, words, text, or audio/video signals.

DV100 is the proper technical name for the códec which Apple and Panasonic tend to call “DVCPRO-HD”… but that practice is confusing and illogical, since DVCPRO-HD is the name of a videotape format, which happens to use the DV100 códec. However, the DVCPRO-HD tape format does not have exclusivity on the DV100 códec, which nowadays is also recorded on P2 cards and hard drives.

In MPEG encoding, frames are organized in GOPs (Groups of Pictures). Depending upon the type of MPEG, a GOP may consist entirely of i-frames (where each frame contains all of its own information), or of a combination of i-frames and other frames which do not include all of the information for each frame. In the latter case, the non-i frames must be reconstructed from their neighbors. All HDV formats use a short, medium, or long GOP, as do all XDCAM-HD and all DVD-video recordings. Long GOP encoding is more efficient, since it offers higher quality at a lower bit rate than pure i-frame encoding, although long GOP recording can be much more demanding on a computer during editing in certain circumstances, as I will explain later in this article.
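To make the GOP idea concrete, here is a tiny Python sketch of a hypothetical 15-frame long GOP. The IBBP pattern, the GOP length, and the function name are all illustrative assumptions of mine; real GOP structures vary by format.

```python
# Sketch of frame types in a hypothetical 15-frame long GOP.
# Pattern and length are illustrative, not tied to any one format.
def gop_pattern(length=15, p_spacing=3):
    """Return a frame-type string like 'IBBPBB...' for one GOP."""
    frames = []
    for i in range(length):
        if i == 0:
            frames.append("I")   # i-frame: a complete picture
        elif i % p_spacing == 0:
            frames.append("P")   # predicted from earlier frames
        else:
            frames.append("B")   # reconstructed from both directions
    return "".join(frames)

print(gop_pattern())  # IBBPBBPBBPBBPBB
```

Notice that only one frame in the whole group is self-contained; every other frame must be reconstructed during playback, which is why long GOP footage works the processor harder in the timeline.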

Hybrid editing?
When the source footage’s códec doesn’t match the sequence códec or render códec. In FCP 6.0x, we get a green bar at the top of the timeline, and we almost always get realtime performance on a recent Mac, as long as we have set our RT (realtime) setting to “Unlimited”.

i-frame encoding?
A type of encoding where all frames are i-frames. This is much easier on a computer’s processor when editing, but less efficient to record: to maintain a low bit rate, more information must be thrown away… or a higher bit rate must be recorded to maintain high quality.

Native editing?
When the source footage códec and the sequence target códec or editing códec are the same.

Pure i-frame editing?
When the source footage and the timeline settings (target format) are both i-frame códecs, although not necessarily the same one. For that reason, an editing situation can be both hybrid and pure i-frame.

RT-editable format?
When I say RT-editable format, I mean a códec that FCP can accept as editable in realtime. These include both long GOP and i-frame códecs.

Color sampling is measured as 4:4:4, 4:2:2, 4:2:0, 4:1:1, and 3:1:1. In general, the higher the color sampling, the better. However, be careful when a format or códec is promoted as having 4:2:2 color sampling, only to discover later on that the 4:2:2 sampling was achieved after heavy subsampling. It’s easy to claim you polled 25% of your neighborhood about the upcoming election when you really polled 25% of only 66.6% of the people who live there! When you read ahead, you’ll know to which códec/format I’m referring.
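For reference, the J:a:b notation itself tells you how much chroma survives relative to luma, measured over a block J pixels wide and two rows tall: J luma samples per row, a chroma samples in the first row, b in the second. Here is a small Python sketch of that arithmetic (the helper name is mine, for illustration):

```python
# Fraction of chroma samples retained relative to luma, from J:a:b:
# J luma samples per row of a J-wide, 2-row block; a chroma samples
# in the first row; b chroma samples in the second row.
def chroma_fraction(j, a, b):
    return (a + b) / (2 * j)

for label, jab in [("4:4:4", (4, 4, 4)), ("4:2:2", (4, 2, 2)),
                   ("4:2:0", (4, 2, 0)), ("4:1:1", (4, 1, 1)),
                   ("3:1:1", (3, 1, 1))]:
    print(label, chroma_fraction(*jab))
```

So 4:2:2 keeps half the chroma of 4:4:4, while 4:2:0 and 4:1:1 each keep a quarter (just distributed differently). And as the paragraph above warns, these ratios say nothing about whether the luma raster was subsampled before the chroma math was ever done.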

Shooting códecs versus editing códecs?
There is a good reason why the best códecs for shooting are not the best códecs for editing, at least not today. Shooters want good quality, but they frequently want a fairly long recording duration per recording medium (tape, chip, or disk). Professional camera manufacturers strive to provide the best compromise between efficiency, quality, and editability. In their professional high-definition formats, both JVC and Sony believe that the best balance is achieved by using a relatively long GOP (Group of Pictures), not i-frame. Panasonic Broadcast has traditionally disagreed about long GOP, and has used the DV100 códec and more recently (at the very high end), AVC-Intra (high bit rate H.264), both of which are i-frame (as opposed to long GOP). However, Panasonic Broadcast now offers two cameras in their prosumer line which record long GOP, so things change.

On the other hand, editors seek to avoid visible generation loss during editing, and can afford larger file sizes than shooters can, especially when those file sizes are much smaller than uncompressed video.

Subsampling (medium or heavy) versus full raster?
Although they fiercely disagree on transmission issues, fortunately the three HD gods completely agree about the spatial resolution of HDTV (High Definition Television). In case you don’t know them, the three HD gods are the ATSC, DVB, and DiBEG (ISDB-T). They all agree that there are two HD spatial resolutions: 720p (1280×720) and 1080i/p (1920×1080) using square pixels.

Now, let’s see which video shooting formats obey and which disobey the sacred, unilateral decree (insert thunderclap sound effect here), and how much. In the case of HDV 720, fortunately no subsampling is used, so all 1280×720 pixels are recorded. (HDV 720p is full raster HD.) In the case of standard HDCAM, the first generation of XDCAM-HD (which was 1080-only), and HDV 1080, 1440×1080 is recorded instead of 1920×1080. In other words, subsampling and anamorphic recording are used with all of these. However, Sony later repented from this transgression by offering full-raster 1280×720 and 1920×1080 at 50 megabits/second 4:2:2 on the PDW-700 camcorder… and full raster 1280×720 and 1920×1080 at 35 megabits/second 4:2:0 on the EX1 and EX3 camcorders.

In the case of the DV100 códec (used in several Panasonic DVCPRO-HD and P2 cameras), the degree of subsampling depends on the framerate used: In DV100’s 720p, 1280×720 is always subsampled to 960×720. In DV100’s 1080 at 25p or 50i, the subsampling is the same as that of standard HDCAM, the first generation of XDCAM-HD, and HDV 1080: 1440×1080. However, if the framerate is 23.976p, 29.97p, or 59.94i, DV100 subsamples 1920×1080 all the way down to 1280×1080. Although Panasonic likes to boast about the DV100 códec being 4:2:2 and i-frame, that’s all done at the expense of exaggerated subsampling, in all types of 720p, plus the most widely used 1080 framerates in the ex-NTSC zones.


How much is lost through subsampling?

  • When your códec subsamples 1280×720 as 960×720, you lose 25% of your spatial resolution
  • When your códec subsamples 1920×1080 as 1440×1080, you lose 25% of your spatial resolution
  • When your códec subsamples 1920×1080 as 1280×1080, you lose 33.3% of your spatial resolution
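Those percentages follow directly from the pixel counts. A quick Python sketch of the arithmetic (the function name is mine, for illustration):

```python
# Percent of pixels lost when a full raster (width, height) is
# recorded at a subsampled (width, height).
def spatial_loss(full, sub):
    return 100 * (1 - (sub[0] * sub[1]) / (full[0] * full[1]))

print(spatial_loss((1280, 720), (960, 720)))              # 25.0
print(spatial_loss((1920, 1080), (1440, 1080)))           # 25.0
print(round(spatial_loss((1920, 1080), (1280, 1080)), 1)) # 33.3
```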

The issue of subsampling doesn’t only apply to recording códecs: It also applies to sensors in many “HD” cameras where the sensor resolution isn’t anywhere close to the HD spec decreed by the three HD gods. However, that issue is outside the scope of this article. This article is about how the image is handled after the camera’s sensor block and DSP, and the editor’s workflow options. No matter how much you may have lost in spatial resolution due to subsampling in the field, you’ll want to conserve as much quality as is left (not degrade it further), and make sure that your graphics are created and maintained as cleanly as possible, depending upon your different final output destination(s).

Apple’s introduction of the ProRes422(HQ) códec was a breakthrough for FCP editors, both for SD and especially for HD. Before ProRes422(HQ), FCP HD editors had to make a major compromise: edit uncompressed HD (not an option for many due to the cost of appropriate storage), or edit with the lossy DV100 códec (which Apple calls “DVCPRO-HD”). Other third-party lossless códecs would work with FCP 5.1.4, but not in realtime. Now we finally have a full raster, 10-bit, visually lossless i-frame códec to use in NTSC, PAL, or any type of HD.

Advantages of leaving your raw footage as long GOP in editing
Even though your footage was likely shot in a long GOP format, like one of the HDV or XDCAM-HD formats, in FCP 6.x you have a choice about whether to capture it as native… or to transcode during capture (or just after capture). The advantage of leaving the long GOP raw footage as native [even though you will probably still edit to ProRes422(HQ)] is that the file size of the raw footage remains much smaller than that of the same footage after transcoding… so you save space on your disk array.
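To get a feel for how much space is at stake, here is a rough back-of-the-envelope Python calculation, assuming approximate bit rates of 25 Mbit/s for HDV 1080i and roughly 220 Mbit/s for ProRes422(HQ) at 1080i (both figures are ballpark, and the helper name is mine):

```python
# Rough storage cost of an hour of footage at a given video bit rate.
def gigabytes_per_hour(mbit_per_sec):
    # Mbit/s -> GB/hour (decimal gigabytes)
    return mbit_per_sec * 3600 / 8 / 1000

print(gigabytes_per_hour(25))   # 11.25 GB/hour, native HDV 1080i
print(gigabytes_per_hour(220))  # 99.0 GB/hour, ProRes422(HQ)
```

Roughly an order of magnitude difference per hour of raw footage, which is why leaving long GOP footage native can matter so much on a modest disk array.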

Advantages of transcoding to ProRes422(HQ) prior to editing?
I know of two cases where it is extremely advisable to transcode your raw long GOP footage to ProRes422(HQ) prior to editing:

  • When you have received independent 48 kHz/16-bit audio to be synced
  • When you are planning to do critically frame-accurate compositing

When FCP plays long GOP footage, it doesn’t play the exact frame at the exact moment. Instead it shows you an estimate. As a result, after rendering, you will likely see mismatches in audio sync or critical compositing sync.

Starting with FCP 6.x, the decision whether to transcode or not to transcode arises mainly in cases like the ones mentioned above… and then, only when your raw footage is long GOP. If your raw footage is already i-frame (e.g. DV100), there is no need to transcode before editing, even if you are editing to ProRes422(HQ).

When to edit to ProRes422(HQ) and make it the mastering format (sequence setting)?
Advantages to editing to ProRes422(HQ) and creating a ProRes422(HQ) master include:

  • Retaining the maximum quality of your raw footage.
  • Creating and retaining pristine graphics at full raster, 10-bit, 4:2:2, regardless of the raw footage specs.

These advantages will exist whether you have transcoded your original footage or not. You should do this in almost all cases, so that all your output formats will have the best possible quality.

When not to create a ProRes422(HQ) master
Don’t bother creating a ProRes422(HQ) master when you are absolutely positive that you are supposed to deliver exclusively in the exact same format as the original, and you’re very short on space. Examples:

  • When the client is absolutely certain that s/he exclusively wants the master on the original format (e.g. DV100 on DVCPRO-HD tape, DV100 on P2, EX (MPEG2) on SxS chip, HDV (MPEG2) on HDV tape, XDCAM-HD on its optical medium, etc.)… and you’re quite short on space.
  • When you are absolutely certain your only output is to a server or player which uses the exact same códec as the original format (e.g. DV100, HDV-compliant MPEG2, EX-compliant MPEG2, etc.)… and you’re quite short on space.

Even when this is the case, you should still set the FCP preferences to render in ProRes422 to reduce multigenerational loss, even when outputting to deliver on these other formats. If your original footage was unfortunately subsampled, and you have also decided to leave it as native, and you are absolutely certain that you are only to deliver the final product in one of the above-named subsampled formats, then you might as well make your sequence the corresponding subsampled version of ProRes422(HQ).

Sidebar: The best way to transcode from HDV to ProRes422 during capture?
Although FCP 6.04 now offers ways to transcode during capture via IEEE-1394 (FireWire, i.LINK) to ProRes422, you lose timecode and Log & Capture with those methods. Considering that one of the remaining benefits of HDV is that the inexpensive cassette is also an archival medium… and considering that clients often ask you to re-edit projects months or years later, maintaining the capability of Log & Capture and automatic recapture is valuable. So if you are going to transcode, and you have the appropriate hardware, those are two important reasons to use it. Professional interfaces with onboard HDMI input are available from AJA with their IoHD, and from Blackmagic with their Intensity, IntensityPro, MultibridgePro2, and MultibridgeEclipse. If you own a professional HD interface that lacks an HDMI input, then you can use Convergent Design’s HD-Connect MI to convert HDMI to HD-SDI. Of the above solutions, only the IoHD will allow this to happen with a laptop (MacBookPro). The Matrox MX02 also works with the MacBookPro, but doesn’t currently allow realtime encoding to ProRes422 on a laptop. This is really a processor limitation of the MacBookPro, which AJA resolves with its onboard hardware encoder for ProRes422(HQ). The Blackmagic and Matrox products mentioned will do this with a MacPro tower.

But there are even more advantages to capturing HDV from the camera’s or deck’s HDMI or HD-SDI:

  1. Some HDV decks have hardware correction circuitry which is bypassed when you ingest from the IEEE-1394 output of the camera or deck. If you use the HDMI or HD-SDI output, you retain that benefit.
  2. Sony HDV camcorders and decks are able to play HDV 720p footage out of the HDMI output, but not out of the IEEE-1394 output. By using the HDMI output of Sony HDV camcorders or decks, you are allowing the deck to be much more universal as an NLE feeder. [In the case of most of the Sony HDV cameras and decks, they can play JVC’s HDV 720p30 (29.97p) recordings in this way, but not any of the other four available HDV 720p framerates. With the more recently released Sony HDV camcorders, which boast “native progressive recording”, you can play three of the possible five HDV 720p framerates: 720p24 (23.976p), 720p25, and 720p30 (29.97p), but the 23.976p playback won’t be native. It will come out as 720p59.94 with pulldown (see my article: When 25 beats 24p).]

The special case for the AVCHD/AVCCAM format?
In version 6.04 of Final Cut Pro, AVCHD is supported for ingest via the Log & Transfer window, but native AVCHD editing is not supported. The Log & Transfer window will currently allow you to select the desired clip and transcode either to ProRes422 or AIC (Apple Intermediate Codec). It has to be the entire clip; no marking in and out is currently available with 6.04. So although this format is long GOP, at least for now, you won’t see it as such in the timeline.


Born in Connecticut, United States, Allan Tépper is an award-winning broadcaster & podcaster, bilingual consultant, multi-title author, tech journalist, translator, and language activist who has been working with professional video since the eighties. Since 1994,…
