The Hi-Def Checklist

Questions to ask and issues to consider when you tackle a high-definition graphics job.

The darker areas to the left and right of these images show what happens when a 16:9 image is “center cut” to create a 4:3 version. Make sure information such as lower thirds survive this cut. Image courtesy Belief and HGTV.

Many motion graphics artists are tackling their first high-definition jobs. In some respects, hi-def is just like standard video, only larger. However, hi-def also comes with a number of issues that can throw some major curves at you. As with all problems lying in wait, it’s best to solve them before you start, rather than when you think you’re almost finished. Here is a series of questions you need to ask, and the implications – both technical and artistic – of the answers you may get.

Frame Rate Issues

When working with standard definition video, the frame rate is dictated by the format: 29.97 frames per second (fps) for NTSC and 25 fps for PAL. The video is also probably interlaced, which means there are two fields – captured at different points in time – per frame. If the footage originated on film at 24 fps but is to be played at 29.97 fps, chances are it has been slowed down to 23.976 fps and then had 3:2 pulldown applied, using a pattern of “split” (interlaced) and “whole” (non-interlaced, or progressive scan) frames to spread every 4 film frames across 10 video fields.
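To make that cadence concrete, here is a purely illustrative sketch (in Python; no real video is involved) of how the alternating 2-field and 3-field repeats spread 4 film frames across 10 fields:

```python
# Illustrative sketch of 3:2 pulldown: 4 film frames (A-D) are spread
# across 10 video fields by alternating 2-field and 3-field repeats.
def pulldown_fields(film_frames):
    """Map film frames to interlaced fields using the 2:3 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 2 if i % 2 == 0 else 3  # A=2, B=3, C=2, D=3, ...
        fields.extend([frame] * repeats)
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)       # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields))  # 10
```

Pairing those fields into video frames gives the mix of “whole” frames (both fields from the same film frame) and “split” frames (fields from two different film frames) described above.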

However, the Advanced Television Systems Committee (ATSC) – which sets the hi-def standards – allows frame rates of 23.976, 24, 29.97, 30, 59.94, and 60 fps progressive scan; the 29.97 and 30 fps variants may be either interlaced or progressive! Therefore, the first question to ask the client is: What frame rate should I use for the final animation I hand you? That’s the rate you should use when building animations (in other words, what to enter for the composition’s or sequence’s frame rate). If you are to deliver an interlaced file, hi-def is always upper field first, in contrast to the lower field first order of DV.

The second question you need to ask is: What frame rate is the footage you’re giving me? This is more devilish than you may think. Quite often, a studio will not deliver footage as a QuickTime or AVI movie with the frame rate already embedded; it will come as a sequence of TIFF, SGI, or even Cineon DPX frames, with no inherent frame rate attached. It will be up to you to then assign the correct frame rate when you import the footage into a program such as Adobe After Effects (see below).

Adobe After Effects has a preference to set the default frame rate of sequences you import.

If you forget to set this correctly, you can fix it later in the file’s Interpret Footage dialog.

Even if you receive footage as a QuickTime or AVI, you cannot necessarily trust the frame rate embedded in the file. HD decks cannot be relied upon to automatically detect the frame rate a tape was shot at, and might play it at a different speed. Verify the frame rate against the tapes themselves – and ideally, against the shooting notes. (And if you are the shooter, please remember to mark these details on your tapes.)

In most cases, the answers to these first two questions will probably be 23.976 fps, progressive scan. However, it is worth double-checking, as some cameras can shoot at either 24 or 23.976 fps. The difference can cause subtle audio synchronization issues that become noticeable within a minute, and which need to be corrected by speeding up or slowing down the audio track. For example, if the audio track was meant to go along with 24 fps footage, but you are conforming all of your footage to 23.976 fps for final delivery, you need to slow the audio track down by stretching it to 100.1% of its original duration (some software thinks in terms of speed rather than stretch; in that case, set the speed to 99.9%).
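The 100.1% and 99.9% figures come straight from the ratio of the two rates. A quick calculation (Python, purely illustrative) shows where they come from, and how quickly the drift builds if you forget the conform:

```python
# Conforming 24 fps material to 23.976 fps slows it by a factor of
# 24/23.976; the audio must be stretched by the same factor to stay in sync.
ORIG_FPS = 24.0
CONFORMED_FPS = 24000 / 1001          # exactly "23.976" fps

stretch = ORIG_FPS / CONFORMED_FPS    # stretch value: 1.001 -> 100.1%
speed = CONFORMED_FPS / ORIG_FPS      # speed value:   0.999 -> 99.9%

print(f"stretch: {stretch * 100:.1f}%")  # 100.1%
print(f"speed:   {speed * 100:.1f}%")    # 99.9%

# Audio drift after one minute if the conform is skipped:
drift_seconds = 60 * (1 - speed)
print(f"drift after 60s: {drift_seconds * 1000:.0f} ms")  # ~60 ms
```

Roughly 60 milliseconds of slip per minute is more than enough to make lip sync look wrong, which is why the problem “becomes noticeable within a minute.”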

Beware of complacency! On one recent job, virtually all of the dozens of clips we received (which were delivered as SGI sequences) were at 23.976 fps, except for one which was at 29.97 fps with 3:2 pulldown added. If you have footage that is supposed to be progressive scan – such as all 23.976 or 24 fps footage – but you see the telltale “comb teeth” of interlacing on moving objects (see below), you know something is wrong. Set the field order to upper field first, and ask your software to detect the pulldown sequence. And don’t automatically trust what it says: Manually step through the resulting footage to make sure you don’t see interlacing artifacts. In After Effects, Option+double-click on Mac (Alt+double-click on Windows) on the footage item in the Project window to open it in a special Footage window, and use the Page Up and Page Down keys (above the normal cursor arrows) to step through several frames to make sure you don’t see those artifacts. If you do, go back and try different pulldown phases until those artifacts disappear.

Look for the tell-tale “comb teeth” look around moving objects to see if your hi-def sources are interlaced. When interlaced, hi-def footage is always upper field first. Footage courtesy Artbeats.

Another important implication of frame rate is the smoothness of motion. When objects move, you get to see their new positions only 40% as often at 23.976 fps progressive as you would at 29.97 fps interlaced. That means formerly smooth motion can now take on a strobing appearance.
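The 40% figure is simply the ratio of motion updates per second between the two formats, since interlaced video shows a new image every field:

```python
# 29.97 fps interlaced presents a new image every field (59.94 per second),
# while 23.976 progressive presents one per frame.
interlaced_updates = 29.97 * 2   # two fields per frame -> 59.94 updates/s
progressive_updates = 23.976

ratio = progressive_updates / interlaced_updates
print(f"{ratio:.0%}")  # 40% as many motion samples per second
```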

The easiest answer to this problem is to add motion blur. Hopefully, your program supports this; if it doesn’t, or if you received sources that were not rendered with motion blur, you may need to add it using a plug-in such as RE:Vision Effects’ ReelSmart Motion Blur. The downside of this added blur is that you will lose some clarity on items such as fast-moving text (see below). You may need to back off on the motion blur amount to find a compromise between smoothness and readability. Render tests, and run them by the client before delivering the final.

Motion-blurred text that is perfectly readable at 29.97 fps interlaced (top) can be much harder to read at 23.976 fps progressive, even with the same blur shutter angle (above). For fast-moving objects, adjust the motion blur amount to balance readability against strobing.


Frame Size Issues

Just as there are a wide variety of legal frame rates in hi-def, there are a variety of sizes to contend with as well. The standard hi-def sizes are 1920×1080 pixels and 1280×720 pixels. The larger size is far more common, but again, ask to be sure, and don’t assume all of your source files are going to come in the same size.

Additionally, some hybrid “production” sizes have emerged. A 1920×1080 hi-def frame has nearly six times as many pixels as a typical 720×486 pixel standard-def frame, which can mean it takes up to six times as long to render (although it’s not always that bad; if your project is at 23.976 fps, you have to render only 40% as many frames as you would with a 29.97 fps interlaced project). A frame that large is also difficult to display comfortably on most monitors, and results in more bytes to store and move around a network. Some stations have started using a “half HD” size of 960×540 pixels, which they then scale down slightly for their standard-def broadcasts, and double for their hi-def feed. Don’t be shocked if you receive a request to supply graphics at this size, or even at the square pixel widescreen standard-def sizes of 864×486 (NTSC) or 1024×576 (PAL).
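The render-load arithmetic is easy to verify (Python, purely illustrative):

```python
# Pixel counts: a 1080-line HD frame vs. a typical NTSC SD frame.
hd_pixels = 1920 * 1080   # 2,073,600
sd_pixels = 720 * 486     #   349,920
print(f"{hd_pixels / sd_pixels:.1f}x the pixels per frame")  # 5.9x

# At 23.976p you render only 40% as many temporal samples as 29.97i,
# which updates every field (59.94 times per second):
frame_ratio = 23.976 / (29.97 * 2)
print(f"{frame_ratio:.0%} as many frames to render")  # 40%
```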

But wait – there’s more. The one silver lining in the ATSC specification was that all of the higher-resolution formats used square pixels. Alas, that last refuge has been taken away from us by the HDV and DVCPRO HD formats. In HDV, a “1920×1080” frame is actually captured at a size of 1440×1080; the pixels must be stretched horizontally by a factor of 1.333 to become square again. DVCPRO HD uses the same size for PAL frame rate projects (25 frames per second, interlaced), but a different size – 1280×1080, with a pixel aspect ratio of 1.5 – for NTSC frame rate projects. When a 1280×720 frame is called for, DVCPRO HD captures it at 960×720 pixels, also requiring a horizontal stretch of 1.333 to make the pixels square. Some software – such as Apple Motion 2 (see below) – supports these sizes and manages them automatically, but not all did as of the time this was written (late 2005). Be aware that you may need to perform these stretches manually in the short term.
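If you do have to perform the stretch yourself, the math is a simple multiplication of the stored width by the pixel aspect ratio (the 1.333 quoted above is really 4/3). A small Python sketch of the three anamorphic sizes:

```python
# Anamorphic HD capture sizes and the horizontal stretch that makes
# their pixels square again: (stored width, height, pixel aspect ratio).
formats = {
    "HDV 1080":            (1440, 1080, 4 / 3),  # -> 1920x1080
    "DVCPRO HD 1080 NTSC": (1280, 1080, 1.5),    # -> 1920x1080
    "DVCPRO HD 720":       ( 960,  720, 4 / 3),  # -> 1280x720
}

for name, (w, h, par) in formats.items():
    print(f"{name}: {w}x{h} -> {round(w * par)}x{h} square pixels")
```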

Newer programs, such as Apple Motion 2 and later, After Effects CS3 and later, etc., have templates for the numerous non-square-pixel HDV and DVCPRO HD formats. With other programs, you may need to stretch the pixels square yourself.

There are other size implications beyond pixel dimensions. Along with larger frames, hi-def projects are usually captured and rendered at greater bit depths. Whereas a 10-bit YUV capture and output was often considered a luxury in standard-def video, it is common in hi-def, with some systems supporting 12-bit YUV. Ask the client what bit depth they expect delivery at: Anything over 8-bit means you need to be working in at least 16-bit RGB to render these greater bit depth files. Yes, that means another render hit (and more disk space, and…); on the plus side, working at this greater depth often cures many issues with color banding and posterization.
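A quick way to see why greater depth tames banding is to count the levels available per channel, and how many of them a subtle gradient actually gets to use (Python, purely illustrative):

```python
# Tonal levels available per channel at common working bit depths.
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels per channel")

# A subtle ramp spanning only 10% of the tonal range has just
# ~25 distinct steps at 8 bits -- a recipe for visible banding --
# versus ~102 steps at 10 bits.
steps_8bit = int(0.10 * 2 ** 8)
steps_10bit = int(0.10 * 2 ** 10)
print(steps_8bit, steps_10bit)  # 25 102
```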

You will need higher-resolution sources to fill these larger frames, requiring you to capture and scan at larger sizes than you have before. But what if that crucial shot or photo is not available at a higher resolution? Scale it up – but carefully. If After Effects has an Achilles’ heel, it is scaling up objects; sharp edges can start to look jagged once you get past 125% or so. I’ve previously used the ReSizer plug-in from Digital Anarchy, but was disappointed when the second version dropped some of the alternate algorithms the user could choose from to find which worked best on each shot. There are other solutions now available, such as Instant HD from Red Giant Software.

If you are re-creating or re-rendering images to use in hi-def, don’t just make them larger; consider adding a bit more fine detail as well. The ability to see fine detail is the reason consumers are buying hi-def sets (aside from bragging rights); deliver it in your content by increasing the detail in your 3D texture maps and other elements of your design.

sidebar: Repurposing Footage

If you have stock footage – or have already created 3D elements or other pre-renders – at a frame rate different from the one your hi-def project demands, and the resolution is high enough, it may be tempting to use it as-is. However, if you do so, you will end up with motion artifacts that manifest themselves as a subtle staggering in the final output.
For example, soft clouds are an element you can often get away with resizing from standard-def to hi-def. If the footage came as a 29.97 fps movie, it’s all too easy to place this into a 23.976 fps composition, scale it up, and render that out as a hi-def element. However, a funny thing will happen on the way to output: Frames of the source material will be missing. Four frames will pass directly from input to output, but then to resolve the difference in frame rates, every fifth frame will be skipped. To test how sensitive your eyes are to motion, drag a 29.97 fps clip into a 23.976 fps comp or sequence, and RAM Preview it – notice anything funny? If not, step through it a frame at a time, and notice how the movement jumps every fifth frame.
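You can see the frame-skipping arithmetic directly with a simplified model in which each output frame samples whichever source frame is showing at that moment (real software may sample slightly differently, but the pattern is the same):

```python
from fractions import Fraction

# Exact NTSC-family rates, to avoid floating-point rounding surprises.
SRC_FPS = Fraction(30000, 1001)   # 29.97 fps source clip
COMP_FPS = Fraction(24000, 1001)  # 23.976 fps composition

# Output frame n occurs at time n/COMP_FPS; which source frame is
# showing then?  (int() on a Fraction truncates toward zero.)
used = [int(n / COMP_FPS * SRC_FPS) for n in range(8)]
print(used)  # [0, 1, 2, 3, 5, 6, 7, 8] -- source frame 4 never appears
```

Four source frames pass straight through, then one is dropped, over and over: exactly the every-fifth-frame jump described above.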

If the precise speed of the footage is not that important, re-interpret its speed to match that of your final output. In After Effects, you would do this by selecting the footage, choosing File > Interpret Footage > Main, making sure “Assume this frame rate” is enabled, and entering the desired number, such as 23.976 – just as we showed back on the first page for frame sequences. If the speed is important, and you have access to the original project that created the footage, re-render it at the new, desired rate (again, making sure any source footage used is also conformed to this rate). If the speed is important and you don’t have access to the project, then – at a minimum – enable frame blending, causing your software to interpolate intermediate frames at the new rate. Better would be using a plug-in such as RE:Vision Effects’ Twixtor or some other optical flow technology to get more accurate interpolation; be prepared to spend a little time optimizing parameters to reduce artifacts to a minimum.


The Widescreen Format

Once you get these technical issues under control, then you need to move onto the aesthetic ones. Hi-def has a different aspect ratio than standard-def: 16:9 versus 4:3. What are you going to do with that extra real estate to the sides? More importantly, what is your client going to do with it?

If you are creating separate standard-def and hi-def versions, you can take one of two initial paths: scale the standard-def design so that its left and right edges match up with the high-def frame and cut off the excess on the top and bottom, or scale it up so the top and bottom edges match and add imagery to fill out the left and right edges. Ask the client which they prefer. More often than not, the second path is going to be the way to go. Remember that most hi-def sets are larger than standard-def sets; therefore, if an object appears relatively smaller in a hi-def frame, it will still be viewed at the same size or larger in the real world.
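Using the article’s own 720×486 NTSC frame as the starting point (and, for simplicity, ignoring SD’s non-square pixels), the two paths work out as follows (Python, purely illustrative):

```python
# Two ways to rescale a 720x486 design onto a 1920x1080 (16:9) frame.
SD_W, SD_H = 720, 486
HD_W, HD_H = 1920, 1080

# Path 1: match the sides -- fill the full width, crop top and bottom.
scale1 = HD_W / SD_W
cropped = SD_H * scale1 - HD_H
print(f"match sides: scale {scale1:.2f}x, crop {cropped:.0f} px vertically")

# Path 2: match top and bottom -- fill the height, pad left and right.
scale2 = HD_H / SD_H
pad = HD_W - SD_W * scale2
print(f"match height: scale {scale2:.2f}x, pad {pad:.0f} px horizontally")
```

Path 1 throws away over 200 lines of your design; path 2 leaves 320 pixels of width to fill with new imagery, which is why it is usually the better starting point.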

The set of figures below shows an excellent example of this issue. The LePrevost Corporation was asked to update the logo for Buena Vista Television, which resolves to a blue rectangle in a field of white. They did the standard-def version first, and were later asked to do a hi-def version. The best solution ended up being a compromise between the “match the sides” and “match the tops” approaches.

What’s the best way to scale this 4:3 logo (top left) to fill a 16:9 aspect screen – stretch it to fill the full width (top right), or fit the height and pad out the sides (above left)? The latter is often better; best is to choose a compromise in between (above right). Courtesy The LePrevost Corporation and Buena Vista Television.

In reality, it is a luxury to be able to create separate standard-def and hi-def versions. As a designer, you would prefer to do two versions, as it gives you a chance to optimize the design for the different aspects and resolutions. However, more often than not, the hi-def version will also be used for the standard-def broadcast. Therefore, the last item on your checklist of questions to ask the client is: How are they going to go from the hi-def version to standard-def? Are they going to letterbox it, or perform a “center cut” where they keep the full height and chop off the left and right sides? It’s probably going to be the latter – and that has huge design implications.

The darker areas to the left and right of these images show what happens when a 16:9 image is “center cut” to create a 4:3 version. Make sure information such as lower thirds survive this cut (top). You can have action extend beyond this center-cut area; just make sure it resolves to the center (above). Images courtesy Belief and HGTV.

At a minimum, you need to make sure all of your important visual information resides in – or resolves to – the center of the screen (see above). Make yourself an overlay template that shows where a 4:3 frame falls in the middle of the 16:9 frame, and treat this as your new action and title safe areas. However, you still need to fill the area outside the center cut with interesting imagery. In the case of KUSA-9 in Denver, they created a widescreen “wallpaper” to place behind images (shown below) – especially important when the source material has a 4:3 aspect, as is the case with most news footage today.
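Building that overlay template is straightforward: a 4:3 center cut of a 1080-line frame is 1440 pixels wide, leaving a 240 pixel strip on each side that will not survive the cut (Python, purely illustrative):

```python
# Where a 4:3 "center cut" falls inside a 16:9 HD frame -- use this as
# the new safe region for graphics that must survive the cut.
HD_W, HD_H = 1920, 1080

cut_w = HD_H * 4 / 3           # width of the 4:3 region: 1440 px
margin = (HD_W - cut_w) / 2    # strip lost on each side: 240 px

print(f"4:3 center cut: {cut_w:.0f}x{HD_H} px")
print(f"unsafe strips: {margin:.0f} px on the left and right")
```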

KUSA-9, NBC’s affiliate in Denver, has created “wallpaper” (which subtly uses NBC’s peacock) that fills out a 16:9 frame when they have to inset a 4:3 image.

Hopefully this brief primer has given you an idea of what questions to ask and what issues to watch out for as you start to tackle hi-def jobs. You don’t need to be afraid, but you do need to be very aware – and don’t be shocked if you know more than your clients, as many are just now entering this brave new world.

The content contained in our books, videos, blogs, and articles for other sites is all copyright Crish Design, except where otherwise attributed.

Chris and Trish Meyer

Chris & Trish Meyer founded Crish Design (formerly known as CyberMotion) in the very earliest days of the desktop motion graphics industry. Their design and animation work has appeared on shows and promos for CBS, NBC, ABC, Fox, HBO, PBS, and TLC; in opening titles for several movies including Cold Mountain and The Talented Mr. Ripley; at trade shows and press events for corporate clients ranging from Apple to Xerox; and in special venues encompassing IMAX, CircleVision, the NBC AstroVision sign in Times Square, and the four-block-long Fremont Street Experience in Las Vegas. They were among the original users of CoSA (now Adobe) After Effects, and have written numerous books, including “Creating Motion Graphics with After Effects” and “After Effects Apprentice,” both published by Focal Press. Both Chris and Trish have backgrounds as musicians, and are currently fascinated with exploring fine art and mixed media in addition to their normal commercial design work. They have recently relocated from Los Angeles to the mountains near Albuquerque and Santa Fe, New Mexico.