Should Adobe re-write After Effects from the ground up? Earlier this week, that’s what Wren from Corridor Digital tweeted. Following the response, the Corridor Cast posted not one, but two podcasts on the topic – leading to even more discussion and comments.
It’s not a new sentiment – plenty of other After Effects users have felt the same way at one point or another, me included. But it’s a much more complex topic than it might first appear. And in this case, it’s something I’ve spent a lot of time looking into.
I’ve been using After Effects for over 25 years, and I’ve been writing After Effects articles and tutorials for about 20 of those years. A few years ago, I wrote an article that looked at the animation tools in After Effects and where they fall short. I followed that up with an 18-part series on After Effects and Performance, a comprehensive technical analysis and history. The series provides an in-depth explanation of what After Effects does, how it does it, and all of the different things that contribute to the overall notion of “performance”.
As part of the series I spoke to a number of software developers, as well as interviewing Sean Jenkin from Adobe. I asked the developers what would be involved in re-writing After Effects from scratch, and I asked Sean if Adobe had ever thought about doing it.
So when Wren posted his tweet, and the Corridor Cast followed up with their podcasts, I felt like it was something I could comment on.
No-one likes the sound of their own voice, and even I fall asleep listening to myself waffle on. You can click on the settings button and watch the whole thing sped up. I won’t be offended.
I haven’t made a reaction video before, but here’s a collection of my thoughts on the topic. For in-depth technical insight, you can also go back to the series on performance, but it’s a bit much to dive head-first into an 18-part series that’s over 100,000 words.
So for now, here’s a video that’s light on the visuals, and while it’s a bit unstructured and rambling, at least it shares some of the insights that I’ve picked up over the past few years.
I’d like to say that I sympathise with the angst that Wren feels, and I can share the underlying sentiment. The comments I’ve made in the video above are mostly to do with areas where I feel I can offer some perspective. There are also loads of things I haven’t addressed, but I don’t want to give the impression that I’m disagreeing with everything they say. However, I also don’t see much point in making a video where I just agree with what they said, and anyway, one hour of talking is enough for anyone.
Crashing is bad. I can’t offer any insight or explanation for the instability problems they’ve had, or why their experience with Premiere has been so poor. I can’t offer any insight into why they’ve found Resolve so much more stable than the Adobe equivalents, although I can say that Resolve has its fair share of quirks.
The simple version – if you don’t want to sit through a 1-hour video of incessant talking – is that re-writing After Effects from the ground up would result in a brand new app that is no longer After Effects. While that would no doubt be welcomed by many people, you have to consider everything you’d be giving up to get there: all 3rd party support, including plugins, scripts, tutorials, forums, and templates, as well as existing project files, would be gone.
I am, to this day, a full-time After Effects specialist, and After Effects has defined my career. Is it perfect? Not by a long shot. Is it better than it was? Yes, it gets better every month. Will I still be using it tomorrow? Yes I will.
ADDENDUM! Some additional notes & clarifications
It’s been a few days since I posted this, and that’s given me some time to re-watch what I said and to consider various readers’ thoughts and comments. There are just a few notes that I wanted to add in addition to the original video.
In re-watching the video, it struck me that it sounded a bit like I was contradicting myself. I begin by talking about the way that re-writing After Effects for GPUs and CPUs would break 3rd party plugins, but then I talk about the fact that Adobe have been doing just that over the past few years. Both are true, but with caveats.
Firstly, it underlines what an incredible job the developers at Adobe have done, in taking an application that’s over 25 years old and methodically optimising the code for multi-core CPUs and GPU acceleration. This has been done with minimal disruption to the users – existing plugins have continued to work, and in general After Effects has behaved in the same manner, just with faster rendering. But re-writing the original code doesn’t bring the same performance benefits as writing brand new code from scratch, designed for different goals, using the very latest software development technology.
If you were designing a brand new 2D compositing app starting today, designed from day 1 to run on multi-core CPUs and GPUs, then the end result would almost certainly be more efficient – but at the cost of breaking all existing plugins.
Context – 3D & Unreal
It’s always difficult to know how much context to put in an article that’s largely opinion. If you explain too much then everything feels very long, slow and boring – but if you don’t put in enough context then sometimes what you say is hard to follow. In this case, I make a few references to 3D rendering, and real-time rendering, and the performance differences to After Effects. There are two reasons I mention 3D and Unreal that are both related to the perception of end users all around the world.
From first-hand experience, working in various different studios with a wide range of artists, it’s normal – perhaps understandable – for your average designer to see what a real-time engine such as Unreal can do, and then wonder why After Effects can’t play a single layer of video. The same can be said of editing apps – why can Final Cut / Premiere / Resolve play multiple streams of video in real-time while After Effects cannot?
The simple answer – noted in bullet points but not in my comments – is that the After Effects rendering engine was not designed with the same goals. The After Effects rendering engine was specifically designed for compositing multiple layers of video together, each with an alpha channel and optional blending modes, in a resolution independent composition size up to 30,000 x 30,000. The user can choose a bit depth and optional color management. Real-time playback was never one of the aims. An editing application is designed specifically for real-time playback. The difference in performance comes down to differences in the original goals and the way the software has been designed to meet those goals. It’s pretty easy to add effects / layers / blending modes to layers of video in an editing app and have it suddenly require everything to be rendered before it will play back.
But that original user perception is there – if Premiere can do it, why can’t After Effects?
When it comes to technical discussions about multi-core CPUs (and GPUs), then the complicated part is explaining that 3D rendering is an anomaly, in the way it continues to get faster and faster as you add more and more CPUs. This is very unusual. Most types of software, including all of the everyday apps that you run on your computer, don’t continue to get faster as you add more CPUs. If you take a computer with 1 CPU and add another CPU, then apps don’t necessarily run twice as fast – maybe they’ll be 1.5 times faster at best. As you add more and more CPUs, the improvement in performance rapidly declines, until there’s no improvement at all with additional CPUs. So compared to a computer with 1 CPU, an app won’t run 4 times faster on a machine with 4 CPUs, and definitely won’t run 8 times faster on a machine with 8 CPUs.
But the exception is 3D rendering, and also particle / physics simulations. They’re the two odd ones out. Just because of the way they work, you can add more and more processors and they’ll keep getting faster and faster. And it just happens that the types of people who use After Effects are often either doing 3D rendering themselves, or are working alongside people who’re doing 3D rendering and simulations. So again – you have this situation where a user might upgrade to a CPU with 32 cores, and their 3D renders will be almost 32 times faster. And it’s natural to look at After Effects on the same machine – and it’s definitely not 32 times faster.
It’s easy to blame Adobe, but it’s not their fault. It’s all down to the nature of the task. The unusual case of 3D rendering has raised unrealistic expectations as to what’s possible for other use cases, such as manipulating 2D bitmap / raster images. When you take into account system bandwidth and memory constraints, After Effects may never render more than 3 or 4 times faster, even if many more cores are available. While this can be disappointing for anyone who’s spent a lot of money on a monster machine, the issue isn’t After Effects but the unrealistic expectations that 3D rendering (and optimised real-time engines) have given users who do both.
What After Effects is doing is completely different to what a 3D renderer is doing. It just happens that 3D rendering has the unusual advantage of being naturally suited to multiple processors, while image compositing is not. But the reason this needs explaining is because so many After Effects users are either also doing 3D work, or they work with 3D artists, so it’s natural to compare them.
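The diminishing-returns pattern described above is usually modelled with Amdahl’s law: the less of a task that can run in parallel, the sooner extra cores stop helping. As a rough illustrative sketch – the parallel fractions below are hypothetical figures, not measured numbers for After Effects or any renderer:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# where p is the fraction of the work that can run in parallel
# and n is the number of CPU cores.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup for a task with the given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A 3D render is close to "embarrassingly parallel" (p ~ 0.99 here,
# purely for illustration), so it keeps scaling as cores are added.
print(round(amdahl_speedup(0.99, 32), 1))   # ~24.4x on 32 cores

# A task that is only half parallelisable (p = 0.5, again a
# hypothetical figure) flattens out almost immediately.
print(round(amdahl_speedup(0.5, 4), 2))     # 1.6x on 4 cores
print(round(amdahl_speedup(0.5, 32), 2))    # ~1.94x - barely better than 4 cores
```

The second case never gets past 2x no matter how many cores you add, which is the same shape of curve as the “3 or 4 times faster” ceiling mentioned above.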
Rolls Royce vs Corolla
As I waffled on, I mentioned that color grading in Resolve is like using a Rolls Royce. And it is. Resolve is a beautiful app to use for color grading, and it’s worth remembering (as the Corridor Crew guys also pointed out) that, only a few years ago, it cost well over $20,000 to buy.
But what I forgot to mention was that I use Lumetri more often. A lot more. In my day-to-day work as an After Effects specialist, Lumetri meets my basic requirements just fine. If I was color grading a film then yes – I would head into Resolve. But most of my work involves advertising and corporate work, and the notes I get are generally pretty simple. I’m used to getting feedback in apps like Frame.io, or even just screenshots with markups, and comments are usually just “lighten” or “darken”, or “warmer” or “cooler”, etc. Most of the time I’m only adjusting exposure and temperature / tint. I think this says a lot about what I was trying to explain with Adobe catering to different markets. Do I see your average influencer or podcaster jumping into Resolve for color grading? No.