
After Effects & Performance. Part 17: Interview with Sean Jenkin from Adobe


There’s good news, and there’s bad news.  Actually, it’s better than that.  For all After Effects users out there, it’s more of a good news / kinda bad news / very good news situation.

This is part 17 of a long-running series on After Effects and Performance.  If you’ve followed a link directly here then great! But you can always start at the beginning, with Part 1.

TL;DR: There’s light at the end of the tunnel, and it’s looking like a very bright light indeed.

The good news is that in this article, we get to hear directly from Adobe. And who better to conclude a series on “After Effects and Performance” than the head of the After Effects Performance Team, Sean Jenkin.  Sean, a fellow countryman of mine, sat down and discussed After Effects for almost 2 hours, answering many questions with far more depth and insight than I expected.  In fact, the interview went for so long that I’ve had to split it across two parts.  In this part, we’ll be looking at everything Sean has to say about “After Effects and Performance” in general.  In the next part, concluding the series, we’ll specifically look at Adobe’s work on “Multi-Frame Rendering” – the most exciting new feature that After Effects has had in over 10 years.

Which brings us to the bad news, but to understand why I consider it bad news, I need to put it in context…

All good things take time… My series on the Desktop Video Revolution was supposed to be the introduction to my series on Advanced Chromakey. It was another 5 years before I finished the Chromakey tutorials. I’d started planning a series on After Effects and CPUs as long ago as 2014.

I posted my first article for the ProVideo Coalition in 2009, about 12 years ago, and ever since then I’ve had a small text file on my desktop where I scribble ideas for potential articles.  Sometimes these ideas take years before they make it onto the ProVideo Coalition website.  I was originally invited to write for the PVC after blogging about chromakey, and so the first idea I had was to write a series on advanced chromakey techniques.  When I started writing it, the introduction looked at how early desktop video software was developing alongside traditional “million dollar suites” like Flames and Henrys.  But what was supposed to be a short introduction to a chromakey tutorial developed into its own thing, a 50-minute video on the evolution of desktop video production.  It’s still online, and still relevant, so if you’re bored please check it out!  It ended up taking me another five years to make the chromakey tutorials.

As long ago as 2014, I’d noted an idea to write about how After Effects didn’t utilize multi-core CPUs.  As I detailed in Part 1 of this series, this was prompted by the release of the infamous Apple “trashcan” MacPro.  If you wanted to fully max the thing out, you could spend more than $10,000 on a single, very expensive cylinder with 12 CPU cores – that’s more CPU cores than any Apple computer had ever had before.  Considering that the current MacPros have $400 wheels, and they can be configured to cost closer to $100,000, this might not sound like a big deal.  But in 2014, $10,000 for a MacPro raised a few eyebrows.  However, based on my real-world experience, After Effects didn’t run any faster on a high-end MacPro than it did on a much cheaper iMac.  With the Mac Mini also offering respectable value for money, and Adobe allowing users to run as many After Effects render-nodes as they liked, I felt that a fully-specced MacPro with 12 CPU cores was a waste of money.  My opinion was that – if you had $10,000 burning a hole in your pocket for an After Effects machine – you’d be more productive using an iMac as a workstation, and setting up a render farm with a few Mac Minis.  There won’t be many pictures in this article, so I’ll re-use this image from part 1.  It’s the original premise for this entire series.

The reason for this long preamble is to establish that as long ago as 2014, the issue of After Effects and multi-core CPUs was already evident.  While not everyone was looking to buy a MacPro, or even spend $10,000 on a new computer, the fact remains that seven years ago it was possible to buy a computer with way more CPU and GPU processing power than After Effects could utilize.

Roughly a year after Apple released the trashcan, Adobe released After Effects version 13.5 – generally known as CC 2015.  As discussed in part 13, this marked the beginning of a period I refer to as “the wilderness years”.  The RAM preview was broken, and many users simply stopped using it and went back to CC 2014.  However, on forums and blogs frequented by Adobe staff, it was revealed that the early problems with the RAM preview were related to a hugely significant technical milestone: from a software engineering perspective, the rendering thread had been separated from the user-interface thread.

For the more technically minded users, the implication was that Adobe had begun the incredibly difficult task of making After Effects multi-threaded, in order to take advantage of CPUs with multiple cores.  The outlook was optimistic: if CC 2015 marked step 1 in After Effects being “multi-threaded”, did that mean that Adobe were continuing with the multi-threading work, to take full advantage of the latest multi-core CPUs?

But that brings us to the Bad News.

No, no it didn’t.

What’s definitely true is that, with the release of After Effects CC 2015, Adobe had separated the main application into two CPU threads – the rendering pipeline and the user interface.  It’s also true that this was an incredibly long and difficult process, and it certainly represents a significant milestone in the history of After Effects.

Regardless of how technically impressive this feat was, the bad news is that it was never about multi-core CPUs. In Part 1 of this series, I looked at how the word “performance” can be applied to many different aspects of After Effects, not just rendering speed.  Other types of “performance” include the responsiveness of the user interface, and previewing in RAM.  From the time when the RAM preview was first included with After Effects version 4, through to CC 2014, the RAM preview worked as a “modal window”.  This is programmer-speak for a state in which the application is locked, and nothing else can happen, until the user dismisses it – usually with a mouse or keyboard click.  In other words, once After Effects started playing a RAM preview, the user couldn’t do anything except watch it, and they had to stop the preview from playing before they could use After Effects again.

The After Effects development team saw an opportunity to improve the productivity of users by allowing them to continue to tweak properties and settings as the RAM preview was playing.  This would allow designers to see the RAM preview update as they made changes in the timeline, without having to stop, make a change, and then re-start the preview from scratch.  This is an example where “performance” can be improved even though raw rendering speed might remain the same.  Although it sounded like a great idea, it proved to be difficult to implement, and involved separating the existing After Effects code base into two separate CPU threads – one for the user interface and another for the After Effects rendering pipeline.  But they did it, and with After Effects CC 2015 / v 13.5 we got a brand new RAM preview architecture, and significantly more intelligent caching.
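If you’re curious what that separation looks like in code, here’s a minimal, hypothetical sketch – my own illustration, not Adobe’s implementation – of a render thread feeding preview frames into a queue while the main thread stays free to respond to the user:

```cpp
// Minimal sketch: a render thread producing preview frames while the
// "UI" thread stays responsive. Illustrative only, not After Effects code.
#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::queue<int> previewFrames;          // frames ready for playback
    std::mutex m;
    std::condition_variable frameReady;
    std::atomic<bool> running{true};

    // Render thread: keeps producing frames independently of the UI.
    std::thread renderThread([&] {
        for (int frame = 0; running && frame < 100; ++frame) {
            // ... expensive per-frame rendering would happen here ...
            std::lock_guard<std::mutex> lock(m);
            previewFrames.push(frame);
            frameReady.notify_one();
        }
    });

    // "UI" thread: consumes frames for playback while remaining free
    // to handle user edits (here we just print them).
    for (int shown = 0; shown < 100; ++shown) {
        std::unique_lock<std::mutex> lock(m);
        frameReady.wait(lock, [&] { return !previewFrames.empty(); });
        int frame = previewFrames.front();
        previewFrames.pop();
        lock.unlock();
        std::cout << "previewing frame " << frame << "\n";
        // user edits could be applied here without stopping playback
    }

    running = false;
    renderThread.join();
}
```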

So why am I calling this “bad news”?

Because the huge multi-threading effort that went into CC 2015 wasn’t motivated by the emergence of multi-core CPUs. Motion designers may have appreciated the interactive RAM previews, but that initial effort at multi-threading wasn’t significantly continued through the releases of CC 2017, 2018 and 2019.  My assumption about Adobe’s motivation for the thread-separation effort was wrong.  Optimising After Effects for multi-core CPUs wasn’t an ongoing priority.  And being wrong is something I consider Bad News.

But thankfully our good news / bad news doesn’t stop there.  You’ve heard the bad news.  Now we’re at the VERY GOOD news part.

By 2019, the issues facing After Effects and Performance had become so prominent that I finally started writing this series – only five years after I’d intended to! Building on my original premise that an iMac and 3 Mac Minis made more economic sense than a single MacPro, I started drafting out all of the different, interrelated topics that fall under “performance”: what After Effects does, how it works, how CPUs evolved, and then GPUs, and so on.  The introduction was posted in October 2019, almost 18 months ago.

What I didn’t know was that something similar was happening at Adobe.  Only a few months before the first part of this series was posted, Adobe had prioritised the need to address multi-core CPU performance in After Effects.  New people were hired.  For the first time in the long history of After Effects, a dedicated team had been assembled whose sole job was to focus on performance.

Following the release of After Effects CC 2019, Adobe publicly announced that the next release of After Effects was going to be all about that magic P word: performance. And when After Effects 2020 was released, it didn’t come with any high-profile, shiny new features.  Instead, it was simply faster – and in specific cases, such as working with EXR sequences, noticeably and significantly faster.

What makes this Very Good News Indeed is that work didn’t stop there.  The improvements we saw with After Effects 2020 weren’t just a one-off push.  After Effects now has a team of software engineers dedicated to improving the performance of After Effects, and their focus on performance – and multi-core CPUs – is ongoing.  The 2020 release was merely step 1, the first public results from a project with no end-date.

Jumping forwards to a few months ago, Adobe dropped their next big announcement – the news that After Effects users have been waiting years for. The Next Big Thing to improve performance, called “Multi-Frame Rendering”, was released to public Beta.  If you missed the news, then right now (yes, that includes today!) anyone with a Creative Cloud account can download the latest Beta version of After Effects, and depending on your hardware, the “Multi-Frame Rendering” feature will render many compositions at least twice as fast, if not faster.

Puget Systems, makers of high-end workstations that are specifically tailored for graphics professionals, interviewed Sean about Multi-Frame Rendering. That’s Sean on the right. You can watch the interview over at YouTube.

Yes, this is a big deal.  Yes, this is the start of an entirely new chapter in the future of After Effects.  At the risk of stating the obvious, it’s still in Beta, and there’s a huge amount of work yet to be done.  But looking forwards, the direction is very clear – After Effects is only going to get faster and better.

While updating an app that’s over 20 years old was never going to be an easy task – as I mentioned earlier, at least one software developer I interviewed suggested it was almost impossible – the fact that so much progress has been made in a relatively short period of time is very encouraging.  The exciting new features we’re seeing now are only the first steps from a team that’s just 18 months old.

In that respect, progress has been very rapid.  But while Beta testers can directly experience the progress that’s been made with MFR, there have also been other, equally impressive improvements made behind the scenes.

A few years ago I had the pleasure of meeting Victoria Nece, the After Effects Product Manager. One piece of trivia she shared was that when she first joined Adobe, the time taken to fully compile After Effects was over 15 hours. Compiling is the process of taking the software code that’s written by programmers, and converting it into an actual application that the computer can run.  Compiling is the software engineering equivalent of rendering, and while 15 hours wasn’t unusual for an application the size of After Effects, it seemed like a pretty long time to wait in order to test new code.  But thanks to the work of the After Effects DevOps team, that compile time has now been reduced to just 45 minutes, and it continues to drop.

While this might not seem immediately relevant to your average motion graphics designer, it’s the type of progress that drastically improves productivity – for Adobe.  In essence, it’s not just the After Effects application which is getting faster, but the entire After Effects development team.  This is a very good thing.  Seeing so much progress in such a short period of time has actually made me more optimistic about future improvements. And from the sound of it, a lot of the initial hard work that’s been done has laid the groundwork for all sorts of future improvements.

The After Effects Performance Team is headed by Sean Jenkin, who was eager to point out that he’d been using After Effects for many years before he actually joined Adobe to develop it.  Thanks to Sean’s willingness to discuss all topics in great detail, it’s clear that the future of After Effects is in very capable hands, and that Adobe is now dedicated to addressing and improving all aspects of “Performance” – not just render times.

In Part 1, I listed a range of After Effects features that all contribute to the overall sense of “performance”.

The first half of our conversation was quite general, and again demonstrates that Adobe is looking beyond simple rendering times as a measure of speed and is focused on overall productivity.  This is very reassuring, as way back in Part 1 of this series, I looked at a whole range of issues which could be considered types of “performance”.

In the second half of the interview – the next article – we’ll focus specifically on the new Multi-Frame Rendering feature and what the future holds, but let’s begin by looking at the topic of “Performance” in general.

——————————————————————————————————–

CHRIS: So I’ve jumped onto Wikipedia and it says that the first version of After Effects was launched in January 1993. Do you know how many people were involved in that original CoSA version?

SEAN: There were 15 people who worked at CoSA for the original version of After Effects.

CHRIS: I find this sort of trivia fun. How many of the original team from CoSA are still on the After Effects team?

SEAN: Day-to-day, David M. Cotter is the only original employee who still works on After Effects.  Dave Simons and Daniel Wilk are still very involved in Character Animator and have provided their guidance and wisdom on AE in an on-going fashion. John Nelson, who was involved in the original port to Windows, is also involved in bringing the magic from Adobe Research into the AE product.  We have a couple of other people from when Aldus owned After Effects as well.

CHRIS: How was it originally developed? Do you know what language it would have been originally written in?

SEAN: It was done in C, and C++ was more something that happened as CoSA/Aldus was purchased by Adobe.  As the integration between the various Adobe libraries and applications came along, C++ became more prevalent, but it was definitely C to begin with.

CHRIS: Now we’re in April – May 2021. How large do you think the After Effects team is now?

SEAN: The number of people dedicated to After Effects is probably 35.  But we build on a lot of shared code that the rest of Adobe, in Premiere Pro and the rest of those teams, are developing. There are a lot of shared components. But at its core it’s probably about 35 people working on it.

CHRIS: I guess it’s going into software management, but how does a company like Adobe track a code base that’s over 25 years old? Have there been major leaps internally just in regards to how Adobe actually manage the After Effects project?

SEAN: There’s a couple of answers to that question. All of our source code sits in some sort of source code control system. It was Perforce when I arrived and now everything is hosted in Git.

When you build After Effects, right now it’s actually building about 540 individual projects. So if you break open After Effects, you’ll see lots of DLLs (on Windows), and a lot of .aex files and a bunch of other files, and all of those are essentially individual projects that get compiled, staged and then when you run After Effects you’re running a small EXE that really just bootstraps the loading of all of those other components.

CHRIS:  And do you have an idea how long it takes to compile? Is it over 15 hours?

SEAN: No, on my development Windows machine it’s about 45 minutes.

CHRIS: I’m asking because I met Victoria a few years ago, and she told me that when she first joined the team, the compile time was over 15 hours.  So I was wondering if it had gone up but it’s obviously gone drastically down.

SEAN: We’ve spent a lot of time and resources internally to speed up our builds.  Our goal is to get to seconds. We’re not anywhere near that yet, but yeah, we’re certainly actively working on speeding up our process.

CHRIS:  Let’s have a look at what After Effects was like in 1993. It was originally designed as a compositing engine. I love reminding users that we didn’t get text layers until version 6, and shape layers, I think, until version 8. So it was originally a desktop compositing program, and it’s evolved to become the industry standard for motion graphics.

Is there any sort of recognition within Adobe that After Effects was made to do one thing, but people actually use it for another?

SEAN: There is certainly recognition of that. If you talk to Victoria these days, she will say “people have made After Effects do stuff it was never designed to do”. That is both awesome and scary for us internally.

The most common user of After Effects today is a motion graphics designer. Yes, there is still some compositing and visual effects. For the average user there’s still a lot of power in the video compositing tools, but obviously there are other apps out there that are focused more on those requirements and have surpassed what After Effects can do.

But as we’re thinking about new features today, I will say we’re putting on our motion designer hat, because they’re the core of our users.  We’re thinking about ‘how do we make that experience better?’

Whether it’s for people who are experienced motion designers, or maybe Photoshop users or Illustrator users or for content that’s more static, and now they need to make something for social media, or need to animate a small graphic. How do we do that?

And it’d be a lost opportunity if I didn’t talk about motion graphics templates. That’s also a huge part of what we’re doing, whether it’s creating and using those in After Effects, or in Premiere Pro, and building that part of a motion designer’s workflow. Templates are taking up a good amount of our thinking time and we are making sure that those are going to be really effective tools for people to build automated workflows.

Whether it’s a TV show and I need to do 30 lower thirds, or I need to crank out bar graphs or color changes based on values in a data file, whatever that is, whether it’s essential properties or motion graphic templates or our data driven functionality, those are the things that are probably driving more of our priorities when it comes to motion graphics designers. It’s certainly motion graphics.

CHRIS: Considering that in 1993 it was conceived as a compositing app, a recurring theme in my articles is that After Effects’ core functionality is manipulating bitmap images (or raster images, whichever terminology you prefer). Can you give us an idea of what that involves technically? I often describe a bitmap as just a big long stream of numbers. It’s very different to rendering a 3D scene.

SEAN:  It is. The way you can think about it is that every single layer in a composition and every single pixel in every single layer has to be addressed. Every pixel has to be read, composited or manipulated, or run through an effect, and then that has to be transformed to the right location on screen and it has to be blended with the layers above and below.
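To illustrate what Sean is describing, here’s a tiny sketch of my own – not After Effects code, just the textbook “over” operation on premultiplied pixels – showing why the work scales with every pixel of every layer:

```cpp
// Sketch of a premultiplied-alpha "over" composite: the cost scales with
// width * height * layers, which is why deep layer stacks get expensive.
#include <vector>

struct Pixel { float r, g, b, a; };            // premultiplied RGBA

// Composite 'top' over 'bottom' in place, one pixel at a time.
void compositeOver(std::vector<Pixel>& bottom, const std::vector<Pixel>& top) {
    for (size_t i = 0; i < bottom.size(); ++i) {
        const Pixel& t = top[i];
        Pixel& b = bottom[i];
        const float k = 1.0f - t.a;            // how much of the bottom shows through
        b.r = t.r + b.r * k;
        b.g = t.g + b.g * k;
        b.b = t.b + b.b * k;
        b.a = t.a + b.a * k;
    }
}

int main() {
    const int width = 1920, height = 1080;
    std::vector<Pixel> frame(width * height, {0, 0, 0, 0});
    std::vector<Pixel> layer(width * height, {0.2f, 0.1f, 0.0f, 0.5f});
    // A 50-layer comp means this loop over ~2 million pixels runs 50 times.
    for (int layerIndex = 0; layerIndex < 50; ++layerIndex)
        compositeOver(frame, layer);
}
```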

The shape layer system is built on vectors. We’re not only working at a bitmap or a pixel by pixel level, we do render the shape vectors and then we have to transfer them at some point into a bitmap.  You can make a project with 100,000 shapes and After Effects still has to iterate through every single one of them, and then figure out how to rasterise them and composite them into the scene.

Our text engine is similar. It’s going to draw text at the highest fidelity it can, but eventually it has to turn it into pixels, and so that part of the process is still not as fast as a straight up GPU 3D raycast scene.

If you look at the 3D games engines that exist today, Unity and Unreal are good examples. You may have an entire 3D scene, but your UI is something like a bitmap that is overlaid on top of that 3D scene. It’s a single pass over the completed 3D scene and so in those situations you only have one composite that needs to occur between the 3D scene and the 2D system.

In After Effects, if you start intermixing 2D and 3D layers, we’ve got to bin those 3D layers together, render them all independently, figure out where all the 2D layers sit in between, and then composite all of those parts back together, which is also a hard thing because if you’re not thinking about it, you don’t get a full 3D scene. You end up with 2 ½ D planes that you’re trying to intersect in After Effects.

So compositing bitmaps is a completely separate thing that a lot of game engines don’t have to think about. They think about 3D representation with a single 2D overlay. In After Effects you can do thousands of those intersections, and we have to render every one of them independently.

CHRIS: I think the difference between a 2D bitmap image and 3D geometry is pretty clear, but once you get into manipulating and compositing multiple bitmap images together, it’s a completely different technical challenge, right?

SEAN: You know, while we sit alongside Premiere Pro in an organizational structure, we often feel much closer to Photoshop. I mean, After Effects is often quoted as Photoshop for moving images; or Photoshop for video. When you think about Photoshop, no one spends much time thinking about “How do I get my Photoshop art to render faster?” There’s some pixel blending, there’s transforms, but in Photoshop it’s generally limited in size and scale and scope.

But if you try and do that over a series of 10 frames or 1000 frames, or 100,000 frames, that’s closer to what After Effects is trying to do. It’s trying to handle both the massive stack of layers as well as being able to handle changes over time (animating properties moving position or rotating, etc.).

Or I’ve got effects that need to calculate everything from frame 0 to frame 1000 just to be able to render frame 1001. Those are all those sorts of things where After Effects is a massive toolbox. It might not be trying to do one thing particularly fast. It’s trying to do a lot of things at a reasonable speed that can give you high fidelity results.

Another thing about After Effects is whether you’re using the Mercury software or Mercury GPU renderer, whether you’re using an AMD card or an NVIDIA card, or an AMD or an Intel CPU, or name any of those things – Mac or Windows – a rendered frame should always be the same. No matter what machine you’re using and which version of After Effects you’re using, we want to make that frame render as consistently as possible.

Just on that, another new piece is 3D. We now have draft 3D with a new 3D rendering engine. It’s currently used in draft 3D because the quality is not as high as the final quality render out of After Effects will be, yet. It may also not be the same render from machine to machine. But this is an area where we are trying to improve the 3D geometry support in AE.

After Effects is a massive toolbox. It might not be trying to do one thing particularly fast.  It’s trying to do a lot of things at a reasonable speed that can give you high fidelity results.

-Sean Jenkin

CHRIS: If you look over the last 25 years that After Effects has been around, have there been any quantum leaps in computer technology that have affected After Effects or has it been more of a slow incremental changes thing?

SEAN: The clock speed of the processor is probably a huge piece. You’re very aware, as is the whole industry, that After Effects generally runs on one thread. So as the clock speed has grown from megahertz through to 5+GHz, that’s certainly significantly impacted the things that AE is able to do. So the clock speed has been one thing.

Another piece that has had an impact, is the GPU. In After Effects there are approximately 45 effects that use the GPU, but we also use the GPU for transforms when objects or layers are rotating or translating. We’ve also ported some of the blending modes to the GPU. When we can efficiently use the GPU, we’re certainly going to get performance out of it.

In saying that, I know a lot of people ask, “why doesn’t AE do more with the GPU?”. The answer is two-fold:

1. There is a time and resource overhead to transferring data between the CPU and GPU. If a particular effect only runs on the CPU but the next effect on the layer only runs on the GPU, it takes time for the pixel data to transfer to the GPU. However, if that next effect could run equally well on the CPU or GPU, it might actually be more performant to composite both effects only on the CPU. To get the full benefit of the GPU, all of AE and all the effects you want to use would need to work on the GPU and that’s going to take a good amount of time to make happen.

2. GPU VRAM – right now, most video card drivers treat GPU VRAM as if there is only one application using the VRAM on the system at a time. This stems from the gaming industry, where a game will likely have exclusive access to the GPU for extended periods of time, which means the GPU doesn’t provide optimal ways of sharing its resources. Even if you assume After Effects is the only application running, plugins and effects may access the GPU VRAM independently. So if you’re using a GPU-heavy plugin that uses 6GB of VRAM out of 8GB, there’s not much left for the rest of your composition to use.
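To put some rough numbers on Sean’s first point, here’s a back-of-the-envelope sketch – the bandwidth and timings are invented for illustration, not measured from After Effects – of how the CPU-to-GPU round trip can eat into the GPU’s advantage:

```cpp
// Back-of-the-envelope sketch (made-up numbers, not AE measurements):
// when is it worth shipping a frame to the GPU for one effect?
#include <iostream>

int main() {
    const double frameBytes   = 3840.0 * 2160.0 * 4 * 4;     // UHD, 32-bit float RGBA
    const double pcieBytesSec = 12e9;                          // ~12 GB/s effective PCIe bandwidth (assumed)
    const double cpuEffectSec = 0.030;                         // hypothetical effect: 30 ms on the CPU
    const double gpuEffectSec = 0.004;                         // same hypothetical effect: 4 ms on the GPU

    const double transferSec = 2.0 * frameBytes / pcieBytesSec; // upload + download
    std::cout << "transfer cost: " << transferSec * 1000 << " ms\n";
    std::cout << "CPU only:      " << cpuEffectSec * 1000 << " ms\n";
    std::cout << "GPU + copies:  " << (gpuEffectSec + transferSec) * 1000 << " ms\n";
    // With these numbers the GPU still wins, but only just. If the neighbouring
    // effects are CPU-only, every extra round trip adds the transfer cost again,
    // which is why a mixed CPU/GPU effect chain can end up slower than CPU alone.
}
```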

But the real leap is going to come as we finish up all of the threading work we’ve been doing, and everything shifts to multi-frame rendering.

CHRIS: So jumping forwards from 1993 to now, that’s April – May 2021. There’s a dedicated After Effects performance team, right? What does performance mean to the performance team?

SEAN: Yes, I head up the performance team, so hopefully I know something of what I’m talking about on this one! The performance team has only existed as a standalone team for about 18 months now. Before that, performance work was interspersed with different teams working on specific features and saying OK, maybe we need to look at the performance of shapes or we need to look at the performance of text, and it might be a team that lasted three, maybe six months.

This is, as far as I’m aware, the first time After Effects has dedicated essentially 10 resources to the performance architecture.

We look at the performance of After Effects and ask if a feature’s performance is negatively affecting a designer’s ability to design, preview, design, preview, design, preview – that iterative loop. For me and my team who are working on multi-frame rendering, that is the core of the work that we’ve spent 18 months doing.

Technically, the fundamental work we’ve done over the last 18 months is to get After Effects to be thread-safe when it’s rendering. Thread-safe is a fancy computer science term, but put simply, it means that multiple threads or CPU processes can safely operate or manipulate the same set of data. In the case of After Effects, it means frame 1, 2 and 3 could all be rendered at the same time and AE will produce the same final pixels as if frames 1, 2 and 3 were rendered sequentially. This allows AE to take advantage of those additional CPU cores, RAM, VRAM, etc., that have until now been sitting idle. But that’s just the beginning of the improvements because we can now build features that just weren’t possible before.
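As an aside from me, here’s a minimal sketch of that idea in code – purely illustrative, not Adobe’s renderer – showing how frames can be farmed out across cores once the per-frame render function is thread-safe and deterministic:

```cpp
// Sketch of "multi-frame rendering" in miniature: once renderFrame() is
// thread-safe (no shared mutable state), frames can be computed in any
// order on any core and the results are identical to a sequential render.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <thread>
#include <vector>

// Stand-in for a deterministic, thread-safe frame render.
double renderFrame(int frame) {
    double v = 0.0;
    for (int i = 0; i < 100000; ++i)
        v += std::sin(frame * 0.001 + i * 0.0001);
    return v;
}

int main() {
    const int frameCount = 64;
    std::vector<double> out(frameCount);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Each worker takes every workers-th frame: no locking needed
            // because no two threads ever touch the same output slot.
            for (int f = w; f < frameCount; f += workers)
                out[f] = renderFrame(f);
        });
    }
    for (auto& t : pool) t.join();

    std::cout << "rendered " << frameCount << " frames on "
              << workers << " threads\n";
}
```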

Let’s say you’ve turned around to chat with someone, or you’ve got a chat message or an email, After Effects just sits idle, it doesn’t do anything for you. But, now that we’ve made our core rendering pipeline thread-safe, AE can start to safely render in the background for you – we call this speculative preview and it’s coming to Beta soon. AE can build up your pre-comps and have them ready to playback. Your main composition will be rendered for preview when you hit the spacebar.

I think the biggest evolution we’re going to see (and not to toot my own horn and my team’s) is After Effects finally being able to take advantage of the new processors with lots of cores, and use that functionality to speed up your entire workflow – not just the rendering portion of it all. I think that’s going to be a huge thing.

I’ll go back to the design-preview iteration loop. To us, that’s our driving force. How do we make it so that designers have as much time as possible to make the best designs?

That could be: I’m interactively working on the design. Or, I need to have a final quality render out the door tonight. Or, I’m going to render over the weekend, so please tell me if something is failing so I know if I need to remote in or go and fix it. How do we increase that workflow as much as possible?

So there is a lot of work so far about getting the core of After Effects in a state where we can do more than one thing at a time, safely.

And then how do we build upon that? When I click on one frame in my timeline, can we render that frame faster? Or when I preview a series of frames, is that faster? Or when I hit render in the render queue, is that faster? Or when I go out to Adobe Media Encoder and I have to dynamic link back to After Effects… how do we get that faster? There are all of these areas that we could attack.

You’ve probably seen that we’ve made some mention about render queue notifications coming at some point soon, where when the render queue is done, it will pop up a message on your phone, right? That isn’t necessarily going to make your renders faster, but it does mean that you could walk away from your computer and know if something worked or didn’t work and you don’t have to continue to worry about the export.

We’re also thinking about it from other perspectives – I’m a user that comes from Illustrator, and Illustrator’s properties and UI work in a certain way. After Effects doesn’t do any of those things. Could we build something that makes the properties of an Illustrator layer more easily accessible, and does that speed up the user completing their work? That’s performance too.

I think at the heart of everything we’re doing, there is some piece of performance in there, and the performance team’s job is to step back, make sure that all of those pieces will run effectively, and work out how we build upon this new thread-safe foundation to accelerate what a user can do.

CHRIS: I’ve opened up part one of my series, the very first article on After Effects and Performance.  It’s dated October 2019, so it’s possible I wrote this about the same time that your performance team was starting up. In the article I listed a few different things that I felt were different aspects of performance. I’ll go through a few of them, and you can give your thoughts on them?

Let’s start with the application startup time. What scope is there for that to be improved?

SEAN:  We have actually done some improvements here. The “first time startup” is defined as the first time you install After Effects or upgrade After Effects to a new version. It is slow, we’re aware of that. Every time there’s a new version of AE it scans every single plugin that’s installed on the system to make sure we understand it works and is supported in this version of AE. Then we cache that information away, so the next time AE starts we actually have all of that information and can skip most of that scan. If you add a plugin later on, we can do an incremental scan so that startup time remains faster.

So, part of the startup time is the number of effects that you have installed, and that After Effects needs to find.

The other part is architectural. When you start up After Effects, it’s a very small executable that starts up and then it has to start loading all of the individual projects, all the DLLs, frameworks, built-in effects, etc.

That architecture is something that we are reviewing. It’s really good for us, for building components that can work across lots of applications, but as an end user you suffer. Consequently, right now AE reads a few thousand files just to be able to start.

So, the startup time is something we’re conscious of… how many plugins are actually loading, how much memory is being used, how many modules load, etc. We’re actually looking at maximum, minimum and average times to make sure that we aren’t doing things that are making the experience worse. Vice-versa, when we do make improvements and can see that things are better, that’s great.
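For the curious, here’s a small, hypothetical sketch of the kind of incremental scan Sean describes – cache what was learned about each plugin, keyed by its modification time, and only re-scan what’s new or changed. It’s my own illustration, not Adobe’s code:

```cpp
// Sketch of an incremental plugin scan: remember what we learned about each
// plugin, keyed by its path and modification time, and only re-scan files
// that are new or have changed. Hypothetical, not Adobe's implementation.
#include <filesystem>
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>

namespace fs = std::filesystem;

struct PluginInfo { bool supported; };   // whatever the full scan discovers

// Pretend this is the slow part: loading the plugin and interrogating it.
PluginInfo fullScan(const fs::path&) { return {true}; }

int main() {
    // Cache from a previous launch: path -> (mtime, info). In a real app
    // this would be serialized to disk between runs.
    std::unordered_map<std::string, std::pair<fs::file_time_type, PluginInfo>> cache;

    if (!fs::exists("Plug-ins")) { std::cout << "no Plug-ins folder to scan\n"; return 0; }

    for (const auto& entry : fs::directory_iterator("Plug-ins")) {
        if (entry.path().extension() != ".aex") continue;
        const auto key = entry.path().string();
        const auto mtime = fs::last_write_time(entry.path());

        auto it = cache.find(key);
        if (it != cache.end() && it->second.first == mtime)
            continue;                               // unchanged: reuse cached info, skip the scan

        cache[key] = {mtime, fullScan(entry.path())};   // new or changed plugin
        std::cout << "scanned " << key << "\n";
    }
}
```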

CHRIS: OK, so the next one was project opening time. I have had projects where it’s taken several minutes just to open them, even from an SSD.

SEAN: Sad to say, I’m very aware of that problem. I have a series of projects that are also very slow. We are looking at a number of things.

The After Effects project structure was actually designed all the way back at the original version of After Effects, and it’s been expanded and extended, but it’s never been rethought. So we are asking ourselves, are we doing this in the most robust way?

There’s also new work happening, primarily with Premiere Pro right now, but After Effects will eventually be able to use it as well.

The best way to describe it is like when you go to Facebook or another site that loads a skeleton of the content. You know, it’s sort of half filled-in, and in the background we could be asynchronously loading all of the media needed for the project.

Right now, After Effects essentially forces a user to wait, while it reads the entire project structure, to load every single file in and make sure that we understand everything there.

And then there’s actually a second pass of that where we make a copy of the entire project. I’m getting into the very internals here, but we keep the UI and the rendering separate, and this has been around since version 13.5 (CC 2015). But that means that when you open one copy of the project for the UI thread, we have to create a second one for the rendering thread, so we have two copies of the project. We are actually cleaning this up as part of the multi-frame rendering work but that’s part of the slowness right now.

If we can get asynchronous project loading to work, we’ll basically get the structure up, along with the files/footage/layers you need immediately to get working, and then in the background we’ll asynchronously load the rest of the footage or compositions or effects that we actually need. It’s on the road map. It’s not the most immediate, but I certainly understand it because we are dealing with large projects too and we want to fix that.
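Here’s a rough sketch of my own of what asynchronous loading can look like – read the lightweight project structure first, kick the footage off in the background, and only block when something is actually needed. It’s illustrative only, not the actual road-map work Sean mentions:

```cpp
// Sketch of asynchronous project loading: read the lightweight project
// structure first so the user can start working, then pull footage in
// the background with std::async. Hypothetical file names throughout.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

struct Footage { std::string path; /* decoded frames would live here */ };

Footage loadFootage(std::string path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(200)); // pretend I/O + decode
    return {std::move(path)};
}

int main() {
    // 1. Fast, synchronous part: the project "skeleton" (comps, layers, paths).
    std::vector<std::string> footagePaths = {"shot01.mov", "shot02.mov", "bg.exr"};
    std::cout << "project structure loaded, UI is usable\n";

    // 2. Slow part moved off the critical path: footage loads in the background.
    std::vector<std::future<Footage>> pending;
    for (const auto& p : footagePaths)
        pending.push_back(std::async(std::launch::async, loadFootage, p));

    // 3. Only block when a specific item is actually needed for display.
    Footage first = pending[0].get();
    std::cout << "now previewing " << first.path
              << " while the rest keeps loading\n";
    for (auto& f : pending) if (f.valid()) f.wait();   // tidy up before exit
}
```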

CHRIS: The fact that the project structure hasn’t changed massively since 1993 is really interesting. So probably a similar question. Even just saving is slow. Are we going to see auto-saves happen transparently in the background?

SEAN: I don’t know, sounds like a great idea. I do know it’s on the road map to look at the auto-save situation, but in terms of priorities, it’s probably a lower priority than many of the other things that are on the list right now. It’s quite annoying, isn’t it, popping up all the time…

CHRIS:  It can be if you have a slow project to save. I think that’s a point where user perception plays a significant role.  In Part 1 I was saying that if you had a render and it took five minutes longer, that doesn’t feel as slow to the user as when you go to save a project and it takes one minute. Because render times can vary a little bit, you’re kind of used to it, but if you have this save dialogue thing pop up and you’ve just got to sit there and wait, it feels slower to the user, yeah?

SEAN: It does, it blocks you from doing your work. I probably keep talking about multi-frame rendering, but again it’s a lot of architectural changes to After Effects to support this new work. We will actually be in a much better position where we can save a project potentially much faster.

I don’t know if we’ve actually done the work to think about what that means to the user for auto-save in the background, but there are some things that we’re doing that fundamentally shifts the way we manage the project in memory, and maybe that gives us the ability to speed up saves, or at least make it happen in the background.

CHRIS:  Sometimes importing files can take a while, although it does seem to have been improved. Is there any particular overhead that happens when you just import lots of big files or large image sequences?

SEAN:  When AE reads a file, the file format may not be one After Effects knows about natively.  Then we have to load it through a shared component and transform it into something AE can handle. That can result in a lot of extra steps and overhead. Add in reading files over the network or spinning disks and that can make things even slower.

For a large file, AE may need to read the entire file into memory before we do anything with it. That may not be necessary especially if you are scanning across a 10 minute piece of footage and you really just need to find one frame, but for now AE will read the whole file in. What if we could just scan to the closest keyframe and retrieve just the frames you really need, and then load the rest of the file in asynchronously for playback if and when the frames are needed?

There is a team that’s responsible for all of the formats across all of our apps and they are trying to work on improving the performance of those formats. Hopefully over time we’ll get some of this in place and make working with footage fast.

Honestly, it’s probably a good thing for us to figure out what are the slowest formats. For example, PNG sequence exports are significantly slower than QuickTime exports or something else, right?

CHRIS: Yes, I benchmarked that.  I even sent a few emails to check, because I thought it was a bug. I have a bar graph of render times and the PNG times go exponentially up with composition size. I do all these benchmarks for these articles, so I was comparing JPEGs to TIFFs, whether TIFFs with compression or with alphas are slower than ones without, and so on.  I made a benchmark test where all the image formats come in at roughly a few minutes, but then I get to PNGs and all of a sudden it’s like 40 minutes. You don’t necessarily notice it at smaller comp sizes, but a lot of people use PNGs.

Is it a bug or are they just rubbish? I don’t know, but for the time being – don’t use PNGs. Use TIFFs instead, or EXRs. There’s no time scale here, but at the far right (8K), the big orange bar represents 38 minutes as opposed to 50 seconds for the ProRes 422 file.

SEAN: Firstly, keep making those graphs and keep telling us about it. You know if this is an area that you’re finding, or other people are finding issues with, these are things that maybe we don’t always know internally. So as soon as we can get that information, we can take a look at it and prioritize it.

PNG sequences is one that I need to personally go and investigate, because I don’t understand why they are so slow right now.

CHRIS: So how about the general responsiveness of the user interface? I know that’s seen a lot of attention.  The conversation has drifted towards multi-threading and multi-core CPUs, but before we get to multi-frame rendering, the first historical mentions of CPUs and threads relate to the UI.

From the user’s perspective, when CC 2015 came out there wasn’t really any public announcement, but if you read Adobe blogs, forums, even answers from Adobe staff, then it was mentioned that with CC 2015, the UI thread had been separated from the rendering thread. So it was kind of discussed after it happened, but it wasn’t actively announced with the release of CC 2015.

I’ve written about that as Adobe’s first step towards multicore support and I don’t know if that’s necessarily true, but it seemed to be a big thing. Do you want to give us an overview of what’s actually happened to the user interface?

SEAN:  OK, so the separation of those two pieces – the user interface thread from the rendering thread – was primarily to achieve the ability for someone to hit the spacebar, begin previewing, and then still be able to interact and make changes to their composition. Uninterrupted playback was the fundamental design idea. It probably makes more sense for a short, 5 or 10 second portion of a composition as opposed to something minutes long.

Again, the idea was to keep that composition cycling, allowing you to be able to shift a value, like the position or an effect parameter or something like that, and not have to keep hitting the spacebar every time. What was happening with previews before then was that you were completely locked off and it was a modal experience. You had no ability to do anything with the application. So that was the primary reason for that separation, to enable that use case.

There are probably questions about whether that’s been successful or if that has changed people’s workflows. Do people make a bunch of changes, hit the spacebar, interact with it or do they sit back, watch it, stop the preview, and so on? But that was the primary reason for it.

The good thing about it is that it did start to fix some architectural issues. Where we are today is a result of continuing that work, essentially to get to where we were going with multi-frame rendering. That initial thread separation was a huge part of that work.

From a user perspective it really was about making the UI more responsive, making the ability to edit while a preview is happening, and to get rid of the modality.

CHRIS: I mentioned before how I’ve been benchmarking file formats and things like that. One of the things I tried to benchmark was the difference in speed between 8, 16 and 32 bit projects.  Obviously you need a wide range of projects to test, because you might have one project that’s all shape layers, or is all text layers, and then you might have another that’s based on different file formats and is all compositing and so on.

But in my attempts to come up with some sort of consistent speed difference between 8, 16 and 32 bit projects, the render time differences jumped all over the place. I’ve had some projects where it almost takes twice as long to render the 16 bit and then twice as long again to render the 32 bit project, which you might expect. But I’ve also had real life projects, in my case compositing TVCs, where switching from 8 to 32 bit only adds another 5%, or something fairly small.

So I haven’t discovered some hard and fast rule about performance differences between 8, 16 and 32 bit projects. Can you give us an insight as to what overheads there are when you switch between 8, 16 and 32 bit? And is there a reason why there doesn’t seem to be a simple rule as to how much slower it gets?

SEAN:  At the core, when we’re iterating over many pixels, if there’s more color depth then the rendering time will increase. Fundamentally we’re just internally changing from an int to a double or a float, but floats and doubles are slower than integers from a CPU standpoint. So that’s the primary reason that more color depth is slower.

But the reason you start to see larger performance differences is because when an effect or plugin, whether it’s an internal one or a third party one, implements 8, 16 and 32 bit support they actually implement those as three different sets of code. So the 8 bit processing is not done the same way as 16 bits, 16 bits is not the same as 32 bits in general. You can certainly have some common code, but in general it’s three different implementations. The performance differences between them can vary depending on the differences in the implementation.
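A quick illustration from me of why each bit depth tends to be its own implementation: the integer depths need scaling and clamping while the float path doesn’t (After Effects famously uses a 0–32768 range for 16 bpc). This is my own sketch, not real plugin code:

```cpp
// Sketch of why 8-, 16- and 32-bit support often means separate code paths:
// the integer depths need scaling and clamping, the float path does not,
// and each can be optimized differently. Illustrative only.
#include <algorithm>
#include <cstdint>
#include <vector>

// 8-bit: values are 0..255, results must be clamped back into range.
void gain8(std::vector<uint8_t>& px, float g) {
    for (auto& v : px)
        v = static_cast<uint8_t>(std::min(255.0f, v * g));
}

// 16-bit: AE-style 0..32768 range, same idea but different constants.
void gain16(std::vector<uint16_t>& px, float g) {
    for (auto& v : px)
        v = static_cast<uint16_t>(std::min(32768.0f, v * g));
}

// 32-bit float: no clamping at all -- over-range values are the point.
void gain32(std::vector<float>& px, float g) {
    for (auto& v : px)
        v *= g;
}

int main() {
    std::vector<uint8_t>  a(1920 * 1080, 128);
    std::vector<uint16_t> b(1920 * 1080, 16384);
    std::vector<float>    c(1920 * 1080, 0.5f);
    gain8(a, 1.5f);      // three depths, three implementations
    gain16(b, 1.5f);
    gain32(c, 1.5f);
}
```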

The other part, which is something we don’t really talk about, outside of effect developers, is something called smart render. Smart render is the ability for an effect, or After Effects itself, to render only the portion of a frame that has changed.

So as an effect developer, or as After Effects, we say – OK, the user has moved something into the top corner of the composition. We only need to re-render this small bit up here and not the whole thing. Smart Render has much more of an impact at 16 and 32 bit, because it means you haven’t got nearly as much that you have to process and deal with.

Not all effects plugins implement smart rendering, and so you may find that as you go to 16 or 32 bit you’re running into code that doesn’t support some of the potential optimizations that exist.
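To make “smart render” a little more concrete, here’s a toy sketch of the dirty-rectangle idea – re-process only the region that changed. The real plugin API is considerably more involved; this is just my own illustration:

```cpp
// Sketch of the "smart render" idea: if only a small rectangle of the frame
// changed, re-process just those pixels instead of the whole frame.
#include <cstdint>
#include <vector>

struct Rect { int x0, y0, x1, y1; };                 // half-open [x0,x1) x [y0,y1)

void invertRegion(std::vector<uint8_t>& frame, int width, const Rect& r) {
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            frame[y * width + x] = 255 - frame[y * width + x];
}

int main() {
    const int width = 1920, height = 1080;
    std::vector<uint8_t> frame(width * height, 128);

    // Full render: every pixel. At 16 or 32 bpc this is a lot more work still.
    invertRegion(frame, width, {0, 0, width, height});

    // The user nudged a layer in the top-right corner: only that dirty
    // rectangle needs re-rendering, roughly 1/30th of the pixels here.
    invertRegion(frame, width, {1600, 0, 1920, 200});
}
```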

This also starts to run into GPU usage: you’ll find that 32 bit is going to work better on the GPU than 8 or 16 bit, because that’s the native precision that a GPU is working at.

So if the effect has GPU support and 32 bit support, you may actually find that it’s faster than using the GPU with 16 bit. Or, you might find that CPU rendering 16 bit is faster than GPU rendering 16 bit. It really is on an effect-by-effect basis.

That’s why you see the differences between different projects. Every effect plugin is potentially multiple different pieces of code – 8, 16 and 32 bit for the CPU, and then GPU, Smart Render, etc.

CHRIS: The other thing that I noticed, in terms of my own personal benchmarks, is that color management made a huge difference to overall performance. Is there any particular aspect of color management that makes it processor intensive or slow?

SEAN: I don’t know specifics. I’m not the color management guy, but effectively you are now having to do another pass to adjust pixel colors, and so that’s going to be every single pixel. Depending on the bit depth that could certainly take some time.  After Effects’ color management support is probably not as good as people would like it to be, we’re aware of that, and so we’re trying to figure out what it is that customers need from us.  Is it ACES? Is it some other color management system?  Then we think about its integration with Premiere Pro or other applications, what are they doing and how do we make that consistent across the board? That’s probably the extent of my color management knowledge.

CHRIS: That’s OK, I was more curious if there’s some technical or nerdy software explanation about converting Log files, or applying LUTs, that’s just inherently slow…

SEAN: I think it’s just the pixel conversion to the right colour. Whether it’s using a LUT or something else, we’ve got to do every single pixel.
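Just to show why that per-pixel cost adds up, here’s a tiny sketch of my own – with a made-up, gamma-style LUT – of a single 1D lookup applied to every pixel of a UHD frame:

```cpp
// Sketch of why color management scales with resolution: even a simple
// 1D LUT is still one lookup per channel for every pixel of every frame.
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

int main() {
    // Build a tiny gamma-style 1D LUT (256 entries, 8-bit in/out).
    std::array<uint8_t, 256> lut;
    for (int i = 0; i < 256; ++i)
        lut[i] = static_cast<uint8_t>(255.0 * std::pow(i / 255.0, 1.0 / 2.2));

    const int width = 3840, height = 2160;
    std::vector<uint8_t> frame(width * height * 3, 100);   // RGB, 8 bpc

    // ~25 million lookups for a single UHD frame; a 10-second 25 fps
    // preview repeats this 250 times, which is where the time goes.
    for (auto& v : frame)
        v = lut[v];
}
```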

CHRIS:  OK, so it was very interesting to hear you say that you felt that After Effects was closer to Illustrator and Photoshop than Premiere. I think this is probably an obvious question, and one you’ve probably heard before, but why can’t After Effects use the same engine as Premiere Pro? In Premiere you just hit the spacebar and it plays a 4K file.

SEAN:  Yeah, it is two completely separate rendering engines. Premiere Pro has been rewritten from the ground up, whereas After Effects has lived in the same code base for a long, long time.

When we look at it and we take a step back, the average Premiere project will have a very long sequence duration but the layer stack is relatively small. If you look at the average number of layers in a Premiere project, it’s likely not even double digits. Of course, it can be on larger projects, but less than 10 would be the average.

But an average After Effects composition is probably closer to 50 or 60 layers, and so that’s the core fundamental difference.

If you take Premiere Pro, it’s optimized for reading footage and playing it back on screen. But as soon as you put an effect on a Premiere Pro sequence, you will find that it likely has to render that clip, and then it’ll cache that away on disk, and then playback, whereas After Effects generally has some sort of effect, blending mode or transform happening on every single layer.

We think about it as horizontal versus vertical work that’s happening. Most Premiere Pro sequences don’t even alter the opacity, maybe for a title or something like that, but everything else is a very clean blending mode – a normal blending mode. The new pixels just replace the existing ones, and that can be done very quickly in the GPU. It’s a very specific type of rendering.

Whereas a lot of people do things in After Effects that After Effects was never built for, and so it’s a much more general type of rendering that it’s doing.

CHRIS: I remember, I think it was introduced in the early 2000s, there was theoretically the potential for third-party renderers, like we see with 3D software.  Back when After Effects first came out you had the original renderer, and then with 3D layers you could choose between Classic 3D and Advanced 3D, at one stage there was an OpenGL renderer and then a ray-traced renderer, and now we’ve got the Cinema 4D option. Is there scope for third-party rendering engines in After Effects?

After Effects has had a few different rendering engines over the years, but you might be surprised to know just how many there have been. Currently we have two options (right image), the Classic 3D and Cinema 4D. In CS 6 (left image) we also had the Windows-only Ray-traced 3D option, and earlier versions also included an OpenGL renderer.

SEAN: I’m not going to say no, never, but what we found is that it’s quite a lot of maintenance. When you think about an After Effects effect, there is a lot of work for a plugin developer to keep that plugin working version over version. But it’s very well defined: you get a set of pixels for a frame and you give back a set of pixels. Whatever you do in between, there are some rules, and if you don’t follow those rules, your effect doesn’t work. If you extend that to an entire renderer, the complexities and dependencies become exponentially larger, and unless that third-party renderer is providing value for a significant portion of our customers, it’s hard to justify supporting it.

We do have the classic 3D renderer which is still being updated. We continue to support the Cinema 4D renderer and have a great relationship with Maxon. They are continually updating the renderer and they’re looking at new architectures to make it work better inside After Effects.

And then we also have the new 3D renderer – currently in the draft 3D part of AE, it’s the same 3D renderer as Dimension and the other 3D products Adobe is currently working on. But that rendering engine is not going to handle C4D files, which is important for a lot of our Motion Designers who are deeply embedded in using Cinema 4D. So we continue to invest in that.

 

We’ll pick up where we left off in our next article, looking at the new Multi-Frame Rendering feature. Check out Part 2 here…

Thanks so much to Sean for taking the time to talk about After Effects and Performance.  In the next article, we’ll continue the conversation and look at the latest and greatest new feature: Multi-Frame Rendering.

This is part 17 in a long-running series on After Effects and Performance.  Have you read the others?  They’re really good and really long too:

Part 1: In search of perfection

Part 2: What After Effects actually does

Part 3: It’s numbers, all the way down

Part 4: Bottlenecks & Busses

Part 5: Introducing the CPU

Part 6: Begun, the core wars have…

Part 7: Introducing AErender

Part 8: Multiprocessing (kinda, sorta)

Part 9: Cold hard cache

Part 10: The birth of the GPU

Part 11: The rise of the GPGPU

Part 12: The Quadro conundrum

Part 13: The wilderness years

Part 14: Make it faster, for free

Part 15: 3rd Party opinions

Part 16: Bits and Bobs

And of course, if you liked this series then I have over ten years worth of After Effects articles to go through.  Some of them are really long too!

 

 
