As I've been working my way through all the blog posts, podcasts, and Twitter hot takes on this year's WWDC announcements, one topic keeps coming up that I think could use some additional exploration. Apple announced a PCIe card they're calling "Afterburner", built to decode ProRes and ProRes RAW footage in real time.
Which is a great idea. I think the Afterburner card is going to be a very useful tool for post-production folks and, should I be lucky enough to end up with a new Mac Pro on my desk, I would love it if it had one inside.
The problem I have is with the way they're pitching the product. On the Mac Pro page on apple.com, it reads:
Afterburner allows you to go straight from camera to timeline and work natively with 4K and even 8K files from the start. No more time-consuming transcoding, storage overhead, or errors during output. Proxy workflows, RIP.
This message has been repeated in almost every conversation I've heard about the Afterburner card and I think it's based on a fundamental misunderstanding of post-production workflows.
We don't edit with "proxy" files because editing camera originals is slow. We do it because it's the smarter way to do things. I love the idea that working with ProRes files will be faster, but I have no intention of editing with camera native files. It's just not a good idea.
This isn't new
Hardware acceleration of video decoding is not new. When I saw this product announced, I described it to a coworker who missed the keynote as "Apple made a Red Rocket card for ProRes".
I'm not denigrating the product with that comparison. The Red Rocket card was a huge advancement for post-production workflows when it came out. Rather than waiting a day (or four) to get our R3D files into an NLE-friendly format, we could have them in about as long as the running time of the footage. And I'm excited at the prospect of having that same speed improvement for workflows using ProRes.
A side effect of that increased speed was the ability to edit directly with our R3D files in our NLEs. While technically possible, it was a terrible idea that caused more problems than it solved. Rather than describe all the dumb technical gotchas related to editing R3D files natively, let's look at the idea from a higher-level view, one that takes an entire workflow into account, if you will.
Disclaimer: this next section is going to have a lot of my personal opinion built into it. But that opinion is based on a couple of decades' worth of professional experience, so you can totally trust me.
Safety First
The first step after shooting a professional video project is making an untouched backup of your camera negative files. We don't work from these files. We don't import them into Premiere or Avid. We don't look at them. They go into a safe place on an expensive hard drive array with drive redundancy and, if we're smart, they're also backed up off-site.
Because if something happens to these files, we're done. We've lost potentially millions of dollars' worth of material that, in most cases, cannot be recreated as it existed previously. It's not a risk worth taking. We're making at least two copies.
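If you're curious what that offload step looks like in practice, here's a minimal sketch in Python. The paths and folder layout are made up for the example, and dedicated offload tools handle all of this (and much more) for you; the point is simply: copy to two places, then verify checksums before anyone touches the card again.

```python
# Minimal sketch of a verified camera-card offload. The source and
# destination paths below are hypothetical.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("/Volumes/CAMERA_CARD_A001")       # hypothetical card mount
DESTINATIONS = [
    Path("/Volumes/RAID/Negative/A001"),          # on-site array
    Path("/Volumes/SHUTTLE_01/Negative/A001"),    # drive headed off-site
]

def sha256(path: Path) -> str:
    """Hash a file in chunks so large camera files don't fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

for src in SOURCE.rglob("*"):
    if not src.is_file():
        continue
    expected = sha256(src)
    for dest_root in DESTINATIONS:
        dest = dest_root / src.relative_to(SOURCE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)                   # preserves timestamps
        if sha256(dest) != expected:              # verify the copy, not the card
            raise RuntimeError(f"Checksum mismatch on {dest}")
```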
VFX
I love ProRes as a format. I live in ProRes all day. But ProRes is not the best format for every task performed over the course of a project, hardware accelerated or not.
Unless we are a video production company of one, with an unlimited amount of time and money, we're going to use multiple file formats in our production pipeline. Because we're smart people who do things with intention, not just because our hardware enables us to do it.
When an edit is completed and ready to be sent to someone to add VFX or Motion Graphics, we're not going to send that person the entire, uncut shot. We're going to send them exactly the section of the shot they need to work on (plus a few frames of handles because, again, we're smart).
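For the curious, the handle math is as simple as it sounds. Here's a purely illustrative Python sketch; the frame numbers and handle length are made up, and every shop sets its own handle count.

```python
# Given a shot's cut points in the edit, compute the frame range to pull
# for VFX, padded with handles and clamped to what was actually shot.
HANDLES = 8  # extra frames on each side; a project-specific choice

def vfx_pull_range(cut_in: int, cut_out: int,
                   first_frame: int, last_frame: int,
                   handles: int = HANDLES) -> tuple[int, int]:
    start = max(first_frame, cut_in - handles)
    end = min(last_frame, cut_out + handles)
    return start, end

# e.g. a shot cut into the edit from frame 1001 to 1085,
# with camera original media running 986 to 1220
print(vfx_pull_range(1001, 1085, 986, 1220))  # (993, 1093)
```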
This may come as a surprise to some of you, but the best file formats for VFX are image-sequence-based formats. That is, a folder full of still frames, each representing a single frame of video. Yes, in 2019.
You've all heard the statistics from VFX or animation facilities that a single frame of a shot from a movie can take hours or days to render. That's not because they don't have a ProRes accelerator board in their computer, it's because there's a lot of work being done to the shot.
Also, what happens if your render crashes when it's halfway done? If you're working in ProRes, that means you start over. With an image sequence, you pick up where you left off. Time is money. Deadlines are as tight as they are important.
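Here's a rough Python sketch of why that works. The shot name, frame range, and render_frame stand-in are all hypothetical; the "skip frames that already exist" logic is the whole trick.

```python
# Each frame of an image sequence is its own file, so a crashed render job
# can simply skip frames that are already on disk and carry on from there.
from pathlib import Path

OUTPUT_DIR = Path("renders/shot_010/v003")   # hypothetical shot/version path
FIRST, LAST = 1001, 1240                     # frame range, handles included

def render_frame(frame: int, out_path: Path) -> None:
    """Placeholder for the actual (possibly hours-long) per-frame render."""
    out_path.write_bytes(b"")                # pretend we wrote an EXR here

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
for frame in range(FIRST, LAST + 1):
    out_path = OUTPUT_DIR / f"shot_010_v003.{frame:04d}.exr"
    if out_path.exists():                    # already rendered before the crash
        continue
    render_frame(frame, out_path)
```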
This is also one instance where the term "proxy workflow" is silly because, in most cases, the image sequence format we're using is higher quality than any ProRes format.
And, let's not forget that the majority of shots in movies and commercials will go through a VFX pipeline. Whether it's to add giant fighting robots, or to remove a Starbucks cup someone left in the frame, or to correct some lens distortion or camera bounce. It's going to be worked on, so let's do it smartly.
Shared Storage
Once your post-production facility grows beyond a handful of folks, you're going to need to keep your files on a centralized SAN so everyone can work off the same material and pass things back and forth while working in parallel.
With your footage on a shared network, there are a whole lot more considerations for which format you use for which part of the post-production process. Is your network fast enough to serve up these massive files to everyone who needs them at the same time?
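You can answer that question with back-of-the-envelope math. Here's a hedged Python sketch; the per-stream bit rate is an input you'd look up for your actual codec, resolution, and frame rate (Apple publishes target data rates in its ProRes white paper), and the numbers in the example are illustrative only.

```python
# Rough check: can the shared storage feed every editor at once?
def aggregate_gbps(stream_mbps: float, streams_per_seat: int, seats: int) -> float:
    """Total sustained read bandwidth the storage must deliver, in gigabits/sec."""
    return stream_mbps * streams_per_seat * seats / 1000

# Illustrative figures: ~700 Mb/s per UHD stream, 2 streams per editor, 8 editors
needed = aggregate_gbps(700, 2, 8)
print(f"{needed:.1f} Gb/s sustained")   # 11.2 Gb/s, before any headroom
```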
And since we're making multiple copies of our footage (for safety), and we're keeping our working files in a shared location, it's unrealistic to say we're saving space by using our camera original format for our work. Whether your duplicates are H.264 (they should never be) or ProRes 4444, you're already using a "proxy workflow". And since we're realistic, responsible professionals, we're going to use the smallest format that's right for the job at hand. This is one of the main reasons some VFX facilities still use DPX sequences instead of EXR sequences.
ProRes is a Proxy Workflow
One of the best things about the ProRes format is that it's actually a half dozen or so formats of varying bit rates and bit depths. The reason there are so many flavors of ProRes is so we can choose – at every step of the production and post-production pipeline – the right format for the project and task at hand.
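As a sketch of what "choosing the right flavor" looks like when you automate it, here's a small Python wrapper around ffmpeg's prores_ks encoder. It assumes ffmpeg is installed, and the filenames and folder layout are invented for the example.

```python
# Transcode a clip to a chosen ProRes flavor by shelling out to ffmpeg.
# prores_ks profile numbers: 0 Proxy, 1 LT, 2 422, 3 HQ, 4 4444.
import subprocess
from pathlib import Path

PROFILES = {"proxy": 0, "lt": 1, "422": 2, "hq": 3, "4444": 4}

def make_prores(source: Path, flavor: str, out_dir: Path) -> Path:
    """Transcode one clip to the requested ProRes flavor, copying audio as-is."""
    out_path = out_dir / f"{source.stem}_{flavor}.mov"
    subprocess.run([
        "ffmpeg", "-i", str(source),
        "-c:v", "prores_ks", "-profile:v", str(PROFILES[flavor]),
        "-c:a", "copy",
        str(out_path),
    ], check=True)
    return out_path

# e.g. lightweight copies for the offline edit, full-quality for finishing
# make_prores(Path("A001_C003.mov"), "proxy", Path("proxies/"))
# make_prores(Path("A001_C003.mov"), "4444", Path("finishing/"))
```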
Much like the new Mac Pro, we like our workflows modular and flexible. That does not mean we're going to use a single copy of our camera native ProRes files from start to finish. That's MacBook Air thinking in a Mac Pro world.