Home

You’re Doing That Wrong is a journal of various successes and failures by Dan Sturm.

Proxy Workflows are Dead, Long Live Proxy Workflows

As I've been working my way through all the blog posts, podcasts, and Twitter hot takes on this year's WWDC announcements, one topic keeps coming up that I think could use some additional exploration. Apple announced a PCIe card they're calling "Afterburner", built to decode ProRes and ProRes RAW footage in real time.

Which is a great idea. I think the Afterburner card is going to be a very useful tool for post-production folks and, should I be lucky enough to end up with a new Mac Pro on my desk, I would love if it had one inside.

The problem I have is with the way they're pitching the product. On the Mac Pro page on apple.com, it reads:

Afterburner allows you to go straight from camera to timeline and work natively with 4K and even 8K files from the start. No more time-consuming transcoding, storage overhead, or errors during output. Proxy workflows, RIP.

This message has been repeated in almost every conversation I've heard about the Afterburner card and I think it's based on a fundamental misunderstanding of post-production workflows.

We don't edit with "proxy" files because working with camera originals is slow. We do it because it's the smarter way to do things. I love the idea that working with ProRes files will be faster, but I have no intention of editing with camera native files. It's just not a good idea.

This isn't new

Hardware acceleration of video decoding is not new. When I saw this product announced, I described it to a coworker who missed the keynote as "Apple made a Red Rocket card for ProRes".

I'm not denigrating the product with that comparison. The Red Rocket card was a huge advancement for post-production workflows when it came out. Rather than waiting a day (or 4) to get our R3D files into an NLE-friendly format, we could have it in about as long as the duration of the footage. And I'm excited at the proposition of having that same speed improvement for workflows using ProRes.

A side effect of that increased speed was the ability to edit directly with our R3D files in our NLEs. While technically possible, it was a terrible idea that caused more problems than it solved. Rather than describe all the dumb technical gotchas related to editing R3D files natively, let's look at the idea from a higher-level view, one that takes into account an entire workflow, if you will.

Disclaimer: this next section is going to have a lot of my personal opinion built into it. But that opinion is based on a couple decades worth of professional experience, so you can totally trust me.

Safety First

The first step after shooting a professional video project is making an untouched backup of your camera negative files. We don't work from these files, we don't import them into Premiere or Avid. We don't look at them. They go into a safe place on an expensive hard drive array with drive redundancy and, if we're smart, it's backed up off-site.

Because if something happens to these files, we're done. We've lost potentially millions of dollars worth of material that, in most cases, cannot be recreated as it existed previously. It's not a risk worth taking. We're making at least 2 copies.

VFX

I love ProRes as a format. I live in ProRes all day. But ProRes is not the best format for every task performed over the course of a project, hardware accelerated or not.

Unless we are a video production company of one, with an unlimited amount of time and money, we're going to use multiple file formats in our production pipeline. Because we're smart people who do things with intention, not just because our hardware enables us to do it.

When an edit is completed and ready to be sent to someone to add VFX or Motion Graphics, we're not going to send the entire, uncut shot length to that person. We're going to send them exactly the section of the shot they need to work on (plus a few frames of handles because, again, we're smart).

This may come as a surprise to some of you, but the best file formats for VFX are image-sequence-based formats. That is, a folder full of still frames, each representing a single frame of video. Yes, in 2019.

You've all heard the statistics from VFX or animation facilities that a single frame of a shot from a movie can take hours or days to render. That's not because they don't have a ProRes accelerator board in their computer, it's because there's a lot of work being done to the shot.

Also, what happens if your render crashes when it's halfway done? If you're working in ProRes, that means you start over. With an image sequence, you pick up where you left off. Time is money. Deadlines are as tight as they are important.
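
To make that concrete, here's a rough sketch of the resume-where-you-left-off idea in Python. The render_frame function and the output pattern are hypothetical stand-ins for whatever your renderer actually does; the point is simply skipping frames that already exist on disk:

import os

def render_sequence(render_frame, out_pattern, first, last):
    # out_pattern is something like "/shots/sh010/comp/sh010_comp.%04d.exr"
    for frame in range(first, last + 1):
        path = out_pattern % frame
        if os.path.exists(path):
            continue  # this frame survived the crash; no need to redo it
        render_frame(frame, path)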

This is also one instance where the term "proxy workflow" is silly because, in most instances, the image sequence format we're using is higher quality than any ProRes format.

And, let's not forget that the majority of shots in movies and commercials will go through a VFX pipeline. Whether it's to add giant fighting robots, or to remove a Starbucks cup someone left in the frame, or to correct some lens distortion or camera bounce. It's going to be worked on, so let's do it smartly.

Shared Storage

Once your post-production facility grows beyond a handful of folks, you're going to need to keep your files on a centralized SAN so everyone can work off the same material and pass things back and forth while working in parallel.

With your footage on a shared network, there are a whole lot more considerations for which format you use for which part of the post-production process. Is your network fast enough to serve up these massive files to everyone who needs them at the same time?

And since we're making multiple copies of our footage (for safety), and we're keeping our working files in a shared location, it's unrealistic to say we're saving space by using our camera original format for our work. Whether your duplicates are H.264 (they should never be) or ProRes 4444, you're already using a "proxy workflow". And since we're realistic, responsible professionals, we're going to use the smallest format that's up to the job at hand. This is one of the main reasons some VFX facilities still use DPX sequences instead of EXR sequences.

ProRes is a Proxy Workflow

One of the best things about the ProRes format is that it's actually a half dozen or so formats of varying bit-rates and depths. The reason there are so many flavors of ProRes is so we can choose – at every step of the production and post-production pipeline – the right format for the project and task at hand.

Much like the new Mac Pro, we like our workflows modular and flexible. That does not mean we're going to use a single copy of our camera native ProRes files from start to finish. That's MacBook Air thinking in a Mac Pro world.

Viewing Alexa Footage in Nuke and Nuke Studio

The Arri Alexa remains one of the most common cameras used in production these days. Its proprietary LogC format captures fantastic highlight detail and exceptionally clean imagery.

But with each new proprietary camera format comes a new process for decoding, viewing, and interacting with the camera's footage. Generally speaking, this involves applying a specific LUT to our footage.

Most applications have these LUTs built into their media management tools. All it takes to correctly view your footage is to select which LUT to use on your clip.

This is, unfortunately, not the full story when it comes to Alexa footage.

If you've ever imported an Alexa colorspace clip into Nuke, set your Read node to "AlexaV3LogC", and viewed it with the default Viewer settings, you may notice that the highlights look blown out. If you use a color corrector or the Exposure slider on your Viewer, you'll see that the image detail in the highlights is still there, it's just not being displayed correctly.

An Alexa LogC clip being viewed in NukeX with the sRGB Viewer Input Process.

If you import that same clip into DaVinci Resolve, again, set it to Alexa colorspace and view it, you'll notice that it doesn't match the Nuke viewer. In Resolve, the footage looks "correct".

An Alexa LogC clip being viewed in Resolve with the Arri Alexa LogC to Rec709 3D LUT applied.

So, what's going on here?

The Alexa's LogC footage needs to be gamma corrected and tone-mapped to a Rec709 colorspace. In Nuke, this is a 2-step process. The footage gets its gamma linearized in the Read node before work is done, then, after our work has been added, the footage needs to be converted to Rec709 colorspace. In DaVinci Resolve, these 2 steps are performed at the same time.
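
Sketched out with Nuke's Python API, that two-step version looks something like this (a rough sketch; the clip path is a placeholder and the knob names are the stock ones):

import nuke

# Step 1: linearize the LogC footage on the way in.
read = nuke.nodes.Read(file="/path/to/alexa_clip.mov")
read["colorspace"].setValue("AlexaV3LogC")

# ...comp work happens here, in linear light...

# Step 2: the Rec709 conversion. As discussed next, we'd rather view this
# conversion than bake it into a render.
rec709 = nuke.nodes.OCIOColorSpace(in_colorspace="linear", out_colorspace="rec709")
rec709.setInput(0, read)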

The problem is that second step in Nuke. There is no built-in Viewer Input Process to properly view Alexa footage. We could toss an OCIOColorSpace node at the end of our script and work in between it and our Read. But we don't want to bake that Rec709 conversion into our render; we just want to view our work in the corrected colorspace.

Adding a Custom Input Process

The first thing we're going to need is the Alexa Viewer LUT. No, this is not the same LUT that comes with the application. You can download it here, or build your own with Arri's online LUT generator.

If you only use Nuke/NukeX, adding the Input Process is relatively simple and bears a striking resemblance to a lot of the Defaults customization we've done in the past. If, however, you also use Nuke Studio or Hiero, you'll want to ignore this section and skip ahead to the OCIOConfig version.

Nuke / NukeX

To get started, create a new Nuke project. Then:

  1. Create an OCIOFileTransform node and add the downloaded LUT file.
  2. Set your "working space" to "AlexaV3LogC". Leave the "direction" on "forward" and "interpolation" on "linear".
  3. After the OCIOFileTransform node, add an OCIOColorSpace node.
  4. Set your "in" to "linear" and your "out" to "AlexaV3LogC".

The nodes for the AlexaLUT Gizmo in Nuke.

Now we need to turn these 2 nodes into a Gizmo. To do that, select them both, hit CMD+G on the keyboard to Group them, then click the "Export Gizmo" button. Save the Gizmo in your .nuke directory. Mine is called Alexa_LUT.gizmo.

Once we've saved our Gizmo, we just need to add the following line to our init.py file:

nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))
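
For reference, here's that same line with comments spelling out (roughly) what each argument does; the "Alexa_LUT" name needs to match the gizmo file we just saved:

import nuke

# "Alexa"            -> the label that shows up in the Viewer's Input Process menu
# nuke.Node          -> how Nuke builds the node when that process is selected
# ("Alexa_LUT", "")  -> the gizmo to create, plus an (empty) string of knob settings
nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))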

Now, when you start up Nuke, you'll have your Alexa LUT in the Input Process menu in your Viewer.

The Alexa Input Process in the Nuke Viewer.

And, just so we're clear, if we're working on an Alexa colorspace clip, as a Good VFX Artist, we're going to send back a render that is also in Alexa colorspace. That means setting the "colorspace" on our Write node to "AlexaV3LogC", regardless of the file format.
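
If you're setting that up from a script rather than the Properties panel, it's a one-liner (assuming, for the sake of the example, a Write node named Write1):

import nuke

# Deliver back in the same colorspace we were handed, regardless of file format.
nuke.toNode("Write1")["colorspace"].setValue("AlexaV3LogC")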

Nuke Studio (and Also Nuke / NukeX)

Welcome, Nuke Studio users. For you, this process is going to be a little more work.

Just like everything in Nuke Studio, am I right?

Sorry. Let's get started.

To add our Alexa LUT to Nuke Studio, we need to create our own custom OCIOConfig. Since we're lazy (read: smart), we'll duplicate and modify the Nuke Default OCIOConfig to save us a lot of time and effort.

The OCIOConfigs that come with Nuke can be found in the app's installation directory under /plugins/OCIOConfigs/configs/. We're going to copy the folder called "nuke-default", paste it into .nuke/OCIOConfigs/configs/, and rename it to something like "default-alexa".
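
If you'd rather script the copy than do it in the Finder, here's a quick sketch using Python's standard library. The Nuke install path is a placeholder; point it at your actual installation:

import os
import shutil

nuke_install = "/path/to/Nuke"  # placeholder; substitute your real install directory

src = os.path.join(nuke_install, "plugins", "OCIOConfigs", "configs", "nuke-default")
dst = os.path.expanduser("~/.nuke/OCIOConfigs/configs/default-alexa")

shutil.copytree(src, dst)  # copy and "rename" in one step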

Before we do anything else, we need to put our Alexa Viewer LUT inside the "luts" folder inside our "default-alexa" folder.

Is it there? Good.

Inside our "default-alexa" folder is a file called "config.ocio". Open that in a text editor of your choice.

Near the top of the file, you'll see a section that looks like this:

displays:
  default:
    - !<View> {name: None, colorspace: raw}
    - !<View> {name: sRGB, colorspace: sRGB}
    - !<View> {name: rec709, colorspace: rec709}
    - !<View> {name: rec1886, colorspace: Gamma2.4}

We need to add this line:

- !<View> {name: Alexa, colorspace: AlexaViewer}

I put mine at the top, first in the list, because I want the Alexa viewer to be my primary Input Process LUT. A good 80% of the footage I work with is Alexa footage. Your use case may vary. Rearranging these lines will not break anything as long as you keep the indentation the same.

Now, scroll all the way down to the bottom of the file, past all the built-in colorspace configs. Add the following:

- !<ColorSpace>
  name: AlexaViewer
  description: |
    Alexa Log C
  from_reference: !<GroupTransform>
    children:
      - !<ColorSpaceTransform> {src: linear, dst: AlexaV3LogC}
      - !<FileTransform> {src: ARRI_LogC2Video_709_davinci3d.cube, interpolation: linear}

That wasn't so bad, was it? Was it?

Now, all that's left to do is open Nuke and/or Nuke Studio, go to your application preferences, and under the "Color Management" section, select our new OCIOConfig file.

Choosing our custom OCIOConfig in the Nuke application preferences.

Now, you'll have your Alexa LUT in your Input Process dropdown in both Nuke and Nuke Studio and you can finally get to work.

Thanks Are in Order

I've been putting off this blog post for a very long time. Very nearly 2 years, to be specific.

I was deep into a project in Nuke Studio and was losing my mind over not being able to properly view my Alexa raw footage or Alexa-encoded renders. This project also included a large number of motion graphics, so making sure colors and white levels matched was doubly important.

So, I sent an email to Foundry support.

After about a week and a half of unsuccessful back-and-forth with my initial contact, my issue was escalated and I was contacted by Senior Customer Support Engineer Elisabeth Wetchy.

Elisabeth deserves all of the credit for solving this issue. She was possibly the most helpful customer support representative I've ever worked with.

Also, in the process of doing some research for this blog post (yeah, I do that sometimes, shut up), I came across an article she wrote the day we figured this stuff out. So I guess I shouldn't feel too bad for making you guys wait 2 years for my post.

Note: Test footage from Arri can be found here.

Using iPhones in Production

Preamble

When talking to people about my work, a fairly frequent question I get asked is if I ever shoot “professional” video on my iPhone. The topic of whether or not one can use a phone for “real” video production is a great way to get people with strong opinions about technology all riled up. So let’s get to it.

When it comes to my own work, the answer to whether or not you can use an iPhone for professional video is “sometimes”. I have in the past, and will continue to use my iPhone as a professional camera. But, it’s in a limited capacity and probably not in the way most people would guess.

In the video I directed for 1Password, the insert shot of my pug, Russell, sitting on a couch was shot on my iPhone 5S in my living room [1]. Last year, for another video I directed, a shot we didn’t get on set was created entirely in CG, combining plates shot on our Arri Alexa Mini, a Nikon D4, and my iPhone 6S.

Nobody noticed, in either instance, because the iPhone footage was used in limited and specific ways. Russell was sitting in front of blown-out windows which might have given away the difference in dynamic range of the iPhone footage, so I replaced the windows with an HDR still image that was also taken on my 5S. In the CG shot, the element that was shot on the iPhone was not the center of attention and passed by quickly, without scrutiny.

Gear

A key aspect of shooting usable video on an iPhone is treating the iPhone like any other professional camera. Recording with an app, like Filmic Pro, that allows for full manual control over the Shutter, Aperture, ISO, and White Balance is essential. As is keeping the frame stable by mounting the phone to a tripod or c-stand with a mount like a Glif (my phone-mounting solution of choice).

You need to light your scene, whether that be with studio lights or natural light. And, possibly the biggest differentiator between professional and amateur video, if your video involves sound, you need to use an external microphone.

Shooting video on a phone is not a costly endeavor, but it does require care and attention to detail.

In Production

Recently, I shot a project wherein my iPhone 6S was the primary and only camera used.

“Gasp!” you say. “It’s true,” says I.

The video was for an iOS app that had a video chat component, similar to a FaceTime call. And, while I could have used a Big-Boy Cinema Camera to shoot the actors, simulating the lens and footage characteristics of a front-facing iPhone camera, I took this project as an opportunity to put my phone through its paces on a real set.

In addition to everything we covered in the previous section of this post, there were 2 technical hurdles that needed addressing before it was time to roll cameras.

The first is one that will always, always bite you in the ass if you embark on such an ill-advised endeavor as this: storage space. In the past, when I’ve tried to shoot semi-serious video with my phone, I have, time and again, completely filled up the storage on my phone much faster than predicted. That resulted in the entire production stopping and waiting until I could download the clips onto my laptop, delete them from the phone, and set everything up again. And if you happen to run out of storage space while you’re rolling? Say goodbye to that clip because it’s gone.

The second challenge was how we would monitor the video as we shot. It’s likely that we could have shot with the rear camera on the phone, using the phone’s screen as a monitor, and no one would have known the footage wasn’t from the front facing camera. But, because this was an app demo, and the actors would need to interact with specific points on the screen, they needed to see themselves as we shot.

Two problems that, as it turned out, had a single solution.

The release of iOS 8 and OS X Yosemite added the ability to directly record an iPhone’s screen output in QuickTime when the phone was connected to the laptop via Lightning cable. I had used it to record apps for interface walk-through videos, so why not just open the Camera app and record the image coming from either camera?

And, to sweeten the deal, QuickTime allows you to select different sources for video and audio inputs, so I could record the video from the front facing camera, and audio from my USB microphone, also connected to my MacBook Pro. No need to sync the sound to the image in post.

Separate video and audio sources.

I could buy a 6-foot long Amazon Basics Lightning Cable for $7, set my phone up on a tripod with my Glif, set my microphone up on a mic stand, connect them both to my laptop where I can monitor the image and record clips with QuickTime, and never worry about taking up any space on the phone itself. Problem solved.

Well, almost. Since QuickTime records a live output of exactly what you see on the screen of your phone, that means it also includes the camera interface controls. That just won’t do. Thankfully, Filmic Pro includes a menu option to “tap to hide interface”. So, after I carefully set my focus, shutter, white balance, etc settings, I can hide the app interface and record only the image from the camera.

Side-note: If you’re on a tight budget and can’t afford to purchase Filmic Pro, there are dozens of free “Mirror” apps on the App Store whose sole purpose is to show you a feed from your front facing camera with zero interface graphics. You won’t have the manual controls you get from a real video app, but it’ll do in a pinch.

Now, I don’t have to awkwardly fumble with the camera to roll and cut. I can set up the shot once and do the rest from behind my Mac. I can record, play back, and discard takes as quickly as with any professional video camera.

Do I recommend shooting video like this for non-app-related videos? Not really. But it was exactly what I needed for this particular project with its particular set of limitations. And, besides, I’ve certainly jumped through more hoops building a camera rig than I did with this one.


  1. Fun fact: the phone was taped to a chair because I didn’t have a tripod at the time.  ↩