Home

You’re Doing That Wrong is a journal of various successes and failures by Dan Sturm.

Viewing Alexa Footage in Nuke and Nuke Studio

The Arri Alexa remains one of the most common cameras used in production these days. Its proprietary LogC format captures fantastic highlight detail and exceptionally clean imagery.

But with each new proprietary camera format comes a new process for decoding, viewing, and interacting with the camera's footage. Generally speaking, this involves applying a specific LUT to our footage.

Most applications have these LUTs built into their media management tools. All it takes to correctly view your footage is to select which LUT to use on your clip.

This is, unfortunately, not the full story when it comes to Alexa footage.

If you've ever imported an Alexa colorspace clip into Nuke, set your Read node to "AlexaV3LogC", and viewed it with the default Viewer settings, you may notice that the highlights look blown out. If you use a color corrector or the Exposure slider on your Viewer, you'll see that the image detail in the highlights is still there; it's just not being displayed correctly.

An Alexa LogC clip being viewed in NukeX with the sRGB Viewer Input Process.

If you import that same clip into DaVinci Resolve, again set it to Alexa colorspace, and view it, you'll notice that it doesn't match the Nuke viewer. In Resolve, the footage looks "correct".

An Alexa LogC clip being viewed in Resolve with the Arri Alexa LogC to Rec709 3D LUT applied.

So, what's going on here?

The Alexa's LogC footage needs to be gamma corrected and tone-mapped to a Rec709 colorspace. In Nuke, this is a 2-step process. The footage's gamma is linearized in the Read node before work is done; then, after our work has been added, the footage needs to be converted to Rec709 colorspace. In DaVinci Resolve, these 2 steps are performed at the same time.

The problem is that second step in Nuke. There is no built-in Viewer Input Process to properly view Alexa footage. We could toss an OCIOColorSpace node at the end of our script and work in between it and our Read. But we don't want to bake that Rec709 conversion into our render; we just want to view it in the corrected colorspace.

Adding a Custom Input Process

The first thing we're going to need is the Alexa Viewer LUT. No, this is not the same LUT that comes with the application. You can download it here, or build your own with Arri's online LUT generator.

If you only use Nuke/NukeX, adding the Input Process is relatively simple, and bears a striking resemblance to a lot of the Defaults customization we've done in the past. If, however, you also use Nuke Studio or Hiero, you'll want to ignore this section and skip ahead to the OCIOConfig version.

Nuke / NukeX

To get started, create a new Nuke project. Then:

  1. Create an OCIOFileTransform node and add the downloaded LUT file.
  2. Set your "working space" to "AlexaV3LogC". Leave the "direction" on "forward" and "interpolation" on "linear".
  3. After the OCIOFileTransform node, add an OCIOColorSpace node.
  4. Set your "in" to "linear" and your "out" to "AlexaV3LogC"

The nodes for the AlexaLUT Gizmo in Nuke.
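If you'd rather build those two nodes from Nuke's Script Editor, a sketch like the one below should produce the same setup. Treat it as an approximation: the knob names are my best guess based on the nodes' UI labels, and the LUT path is a placeholder for wherever you saved the downloaded file.

# Sketch: build the two-node setup from steps 1-4 in the Script Editor.
# The LUT path is a placeholder; point it at your downloaded .cube file.
import nuke

file_xform = nuke.nodes.OCIOFileTransform(
    file="/path/to/your/AlexaViewer.cube",
    working_space="AlexaV3LogC",
    direction="forward",
    interpolation="linear")

to_logc = nuke.nodes.OCIOColorSpace(
    in_colorspace="linear",
    out_colorspace="AlexaV3LogC")

# Connect the OCIOColorSpace after the OCIOFileTransform, as in the Gizmo.
to_logc.setInput(0, file_xform)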

Now we need to turn these 2 nodes into a Gizmo. To do that, select them both, hit CMD+G on the keyboard to Group them, then click the "Export Gizmo" button. Save the Gizmo in your .nuke directory. Mine is called Alexa_LUT.gizmo.

Once we've saved our Gizmo, we just need to add the following line to our init.py file:

nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))

Now, when you start up Nuke, you'll have your Alexa LUT in the Input Process menu in your Viewer.

The Alexa Input Process in the Nuke Viewer.
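And, in the spirit of the Defaults customization we've done before, you can optionally have new Viewers select it automatically. The knob default below is an optional extra, not something the setup requires:

# In the same init.py: register the Input Process (as above)...
nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))

# ...and, optionally, make it the default selection on new Viewer nodes.
nuke.knobDefault("Viewer.viewerProcess", "Alexa")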

And, just so we're clear, if we're working on an Alexa colorspace clip, as a Good VFX Artist, we're going to send back a render that is also in Alexa colorspace. That means setting the "colorspace" on our Write node to "AlexaV3LogC", regardless of the file format.
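If you'd rather not rely on memory for that last step, a knob default can handle it too. This assumes you want every new Write node to start out in AlexaV3LogC, which only makes sense if Alexa jobs really are the bulk of your work:

# Optional: default new Write nodes to Alexa LogC output.
nuke.knobDefault("Write.colorspace", "AlexaV3LogC")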

Nuke Studio (and Also Nuke / NukeX)

Welcome, Nuke Studio users. For you, this process is going to be a little more work.

Just like everything in Nuke Studio, am I right?

Sorry. Let's get started.

To add our Alexa LUT to Nuke Studio, we need to create our own custom OCIOConfig. Since we're lazy (read: smart), we'll duplicate and modify the Nuke Default OCIOConfig to save us a lot of time and effort.

The OCIOConfigs that come with Nuke can be found in the app's installation directory under /plugins/OCIOConfigs/configs/. We're going to copy the folder called "nuke-default", paste it into .nuke/OCIOConfigs/configs/, and rename it to something like "default-alexa".

Before we do anything else, we need to put our Alexa Viewer LUT inside the "luts" folder inside our "default-alexa" folder.

Is it there? Good.

Inside our "default-alexa" folder, is a file called "config.ocio". Open that in a text editor of your choice.

Near the top of the file, you'll see a section that looks like this:

displays:
  default:
    - !<View> {name: None, colorspace: raw}
    - !<View> {name: sRGB, colorspace: sRGB}
    - !<View> {name: rec709, colorspace: rec709}
    - !<View> {name: rec1886, colorspace: Gamma2.4}

We need to add this line:

- !<View> {name: Alexa, colorspace: AlexaViewer}

I put mine at the top, first in the list, because I want the Alexa viewer to be my primary Input Process LUT. A good 80% of the footage I work with is Alexa footage. Your use case may vary. Rearranging these lines won't break anything as long as you keep the indentation the same.
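For reference, with the new line at the top of the list, the displays section ends up looking like this:

displays:
  default:
    - !<View> {name: Alexa, colorspace: AlexaViewer}
    - !<View> {name: None, colorspace: raw}
    - !<View> {name: sRGB, colorspace: sRGB}
    - !<View> {name: rec709, colorspace: rec709}
    - !<View> {name: rec1886, colorspace: Gamma2.4}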

Now, scroll all the way down to the bottom of the file, past all the built-in colorspace configs. Add the following:

- !<ColorSpace>
  name: AlexaViewer
  description: |
    Alexa Log C
  from_reference: !<GroupTransform>
    children:
      - !<ColorSpaceTransform> {src: linear, dst: AlexaV3LogC}
      - !<FileTransform> {src: ARRI_LogC2Video_709_davinci3d.cube, interpolation: linear}

That wasn't so bad, was it? Was it?

Now, all that's left to do is open Nuke and/or Nuke Studio, go to your application preferences, and under the "Color Management" section, select our new OCIOConfig file.

Choosing our custom OCIOConfig in the Nuke application preferences.

Now, you'll have your Alexa LUT in your Input Process dropdown in both Nuke and Nuke Studio and you can finally get to work.

Thanks Are in Order

I've been putting off this blog post for a very long time. Very nearly 2 years, to be specific.

I was deep into a project in Nuke Studio and was losing my mind over not being able to properly view my Alexa raw footage or Alexa-encoded renders. This project also included a large number of motion graphics, so making sure colors and white levels matched was doubly important.

So, I sent an email to Foundry support.

After about a week and a half of unsuccessful back-and-forth with my initial contact, my issue was escalated and I was contacted by Senior Customer Support Engineer Elisabeth Wetchy.

Elisabeth deserves all of the credit for solving this issue. She was possibly the most helpful customer support representative I've ever worked with.

Also, in the process of doing some research for this blog post (yeah, I do that sometimes shut up), I came across an article she wrote the day we figured this stuff out. So I guess I shouldn't feel too bad for making you guys wait 2 years for my post.

Note: Test footage from Arri can be found here.

Using iPhones in Production

Preamble

When talking to people about my work, a fairly frequent question I get asked is if I ever shoot “professional” video on my iPhone. The topic of whether or not one can use a phone for “real” video production is a great way to get people with strong opinions about technology all riled up. So let’s get to it.

When it comes to my own work, the answer to whether or not you can use an iPhone for professional video is “sometimes”. I have in the past, and will continue to use my iPhone as a professional camera. But, it’s in a limited capacity and probably not in the way most people would guess.

In the video I directed for 1Password, the insert shot of my pug, Russell, sitting on a couch was shot on my iPhone 5S in my living room [1]. Last year, for another video I directed, a shot we didn’t get on set was created entirely in CG, combining plates shot on our Arri Alexa Mini, a Nikon D4, and my iPhone 6S.

Nobody noticed, in either instance, because the iPhone footage was used in limited and specific ways. Russell was sitting in front of blown-out windows which might have given away the difference in dynamic range of the iPhone footage, so I replaced the windows with an HDR still image that was also taken on my 5S. In the CG shot, the element that was shot on the iPhone was not the center of attention and passed by quickly, without scrutiny.

Gear

A key aspect of shooting usable video on an iPhone is treating the iPhone like any other professional camera. Recording with an app like Filmic Pro, which allows full manual control over shutter speed, ISO, focus, and white balance, is essential. As is keeping the frame stable by mounting the phone to a tripod or c-stand with a mount like a Glif (my phone-mounting solution of choice).

You need to light your scene, whether that be with studio lights or natural light. And, possibly the biggest differentiator between professional and amateur video, if your video involves sound, you need to use an external microphone.

Shooting video on a phone is not a costly endeavor, but it does require care and attention to detail.

In Production

Recently, I shot a project wherein my iPhone 6S was the primary and only camera used.

“Gasp!” you say. “It’s true,” says I.

The video was for an iOS app that had a video chat component, similar to a FaceTime call. And, while I could have used a Big-Boy Cinema Camera to shoot the actors, simulating the lens and footage characteristics of a front-facing iPhone camera, I took this project as an opportunity to put my phone through its paces on a real set.

In addition to everything we covered in the previous section of this post, there were 2 technical hurdles that needed addressing before it was time to roll cameras.

The first is one that will always, always bite you in the ass if you embark on such an ill-advised endeavor as this: storage space. In the past, when I’ve tried to shoot semi-serious video with my phone, I have, time and again, completely filled up the storage on my phone much faster than predicted. That resulted in the entire production stopping and waiting until I could download the clips onto my laptop, delete them from the phone, and set everything up again. And if you happen to run out of storage space while you’re rolling? Say goodbye to that clip because it’s gone.

The second challenge was how we would monitor the video as we shot. It’s likely that we could have shot with the rear camera on the phone, using the phone’s screen as a monitor, and no one would have known the footage wasn’t from the front facing camera. But, because this was an app demo, and the actors would need to interact with specific points on the screen, they needed to see themselves as we shot.

Two problems that, as it turned out, had a single solution.

The release of iOS 8 and OS X Yosemite added the ability to directly record an iPhone’s screen output in QuickTime when the phone was connected to the laptop via Lightning cable. I had used it to record apps for interface walk-through videos, so why not just open the Camera app and record the image coming from either camera?

And, to sweeten the deal, QuickTime allows you to select different sources for video and audio inputs, so I could record the video from the front facing camera, and audio from my USB microphone, also connected to my MacBook Pro. No need to sync the sound to the image in post.

Separate video and audio sources.

I could buy a 6-foot long Amazon Basics Lightning Cable for $7, set my phone up on a tripod with my Glif, set my microphone up on a mic stand, connect them both to my laptop where I can monitor the image and record clips with QuickTime, and never worry about taking up any space on the phone itself. Problem solved.

Well, almost. Since QuickTime records a live output of exactly what you see on the screen of your phone, it also includes the camera interface controls. That just won’t do. Thankfully, Filmic Pro includes a menu option to “tap to hide interface”. So, after I carefully set my focus, shutter, white balance, and other settings, I can hide the app interface and record only the image from the camera.

Side-note: If you’re on a tight budget and can’t afford to purchase Filmic Pro, there are dozens of free “Mirror” apps on the App Store whose sole purpose is to show you a feed from your front-facing camera with zero interface graphics. You won’t have the manual controls you get from a real video app, but it’ll do in a pinch.

Now, I don’t have to awkwardly fumble with the camera to roll and cut. I can set up the shot once and do the rest from behind my Mac. I can record, play back, and discard takes as quickly as with any professional video camera.

Do I recommend shooting video like this for non-app-related videos? Not really. But it was exactly what I needed for this particular project with its particular set of limitations. And, besides, I’ve certainly jumped through more hoops building a camera rig than I did with this one.


  1. Fun fact: the phone was taped to a chair because I didn’t have a tripod at the time.  ↩