Home

You’re Doing That Wrong is a journal of various successes and failures by Dan Sturm.

Project Folder Structure and Task Management

For my work, producing videos, it’s incredibly important to keep my files and tasks organized. Managing multiple versions of sequences and shots across thousands of files, hundreds of gigabytes in size, can and will get unwieldy very quickly. The better organized I am from the start, the less likely I’ll be banging my head against my desk in the eleventh hour.

Here’s a brief overview of what I do and how I do it.

The Folder Structure

I’ve been using roughly the same folder structure for my projects for the better part of a decade now. Here’s what it looks like:

For the record, I hate the macOS list view and I never use it while working. I'm only using it here to show you the whole folder structure in one image.

I have high-level folders for the key stages of production: pre-production, production, post-production, and final deliverables. Within each of those folders are subfolders corresponding to specific types of assets that will be gathered or created along the way.

It’s worth mentioning that this folder structure is just the starting point for each of my projects. Not every folder will be used on every project. And, for some projects, many additional folders will be added. For example, a project that includes vfx work (most of them) will have files and folders programmatically created for each vfx shot within the Comp Files, Plates, and Renders folders [1].

Launchbar

To create that folder structure, I run a Keyboard Maestro macro — from Launchbar — that gives me this little pop-up:

I give the project a name, hit Return, and the new folder is created in my “Projects” folder in Dropbox.
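Under the hood, the macro is mostly just making folders. Here's a rough Python sketch of the same idea; aside from the "Projects" folder in Dropbox and the Comp Files / Plates / Renders subfolders mentioned above, the names are made-up stand-ins for my real template:

# Rough sketch of the folder-creation step (not my actual Keyboard Maestro macro).
# Template names other than Comp Files / Plates / Renders are placeholders.
from pathlib import Path

PROJECTS_ROOT = Path.home() / "Dropbox" / "Projects"

TEMPLATE = {
    "Pre-Production": ["Boards", "Scripts"],
    "Production": ["Camera", "Sound"],
    "Post-Production": ["Comp Files", "Plates", "Renders"],
    "Deliverables": [],
}

def make_project(name):
    root = PROJECTS_ROOT / name
    for stage, subfolders in TEMPLATE.items():
        for sub in subfolders or [""]:
            (root / stage / sub).mkdir(parents=True, exist_ok=True)
    return root

# make_project("My New Project")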

As an aside, the Launchbar Action I run to activate the Keyboard Maestro macro is called “Planter” because I first began automating my folder structure creation with Brett Terpstra’s great, free application Planter, and I’m a big fan of maintaining muscle memory regardless of whether or not it makes any actual sense [2].

The Things Project

I’m sure you noticed the checkbox on the “New Project” dialog box. Yes, that does what you think it does. If checked (the default), it creates a new project in Things in my “Work” Area with the designated name of the project, and pre-populates it with a standard set of tasks. Again, this is just a starting point. Some tasks may not be required, and many more will likely be added. Regardless, after I hit Return on that dialog box, this is what shows up in Things.

The Tags

There’s one other trick I added to this setup a few years ago that I really like. When navigating through dozens of folders, often with very similar names, it’s easy to get lost. Something that helps me find what I’m looking for more quickly is a Hazel rule that looks through my folder structure and adds a macOS green tag to any folder that is not empty.

This way, after I’ve programmatically created a series of folders that will receive my yet-to-be-rendered vfx shots, I’ll more quickly be able to see which shots have been rendered and which have not.

It doesn’t tell me anything about what’s going on inside that folder, but I find it makes navigation a little quicker and easier, especially with a tired brain.

The Technical Bits

Alright, let’s get to the part with all the images and the scrolling.

The folder creation and the Things project creation are done with two separate Keyboard Maestro macros to attempt to keep things as tidy-ish as possible [3]. The folder creation macro calls the Things macro (if the box is checked) at the end of its steps.

The Folder Creation Macro

Click through to see the whole enchilada.


The Things Macro

Click through to see the full image.


The “New Things Project” macro is built using Things’ JSON-based commands, rather than the more limited URL Scheme commands. It’s much more flexible, faster to modify, and is the only way to access certain features like Headings.

If you’re curious, you can see the full code inside that second block of the macro here.
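To give a flavor of those JSON-based commands, here's a hypothetical Python sketch that builds a project with a couple of placeholder tasks and hands it to Things via its URL scheme. The attribute names reflect my reading of the Things URL scheme documentation, so double-check them before borrowing any of this:

# Hypothetical sketch of Things' JSON command (task names are placeholders;
# attribute names should be verified against the current Things URL scheme docs).
import json
import subprocess
import urllib.parse

def new_things_project(project_name):
    payload = [{
        "type": "project",
        "attributes": {
            "title": project_name,
            "area": "Work",  # assumed Area name
            "items": [
                {"type": "heading", "attributes": {"title": "Pre-Production"}},
                {"type": "to-do", "attributes": {"title": "Write treatment"}},
                {"type": "to-do", "attributes": {"title": "Book crew"}},
            ],
        },
    }]
    url = "things:///json?data=" + urllib.parse.quote(json.dumps(payload))
    subprocess.run(["open", url], check=True)  # macOS hands the URL to Things

# new_things_project("My New Project")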

The Hazel Rule

The Hazel rule is fairly simple:

The AppleScript in the second Condition is:

-- theFile is the folder Hazel is currently evaluating
set root_fol to theFile

tell application "Finder"  
    set files_ to count files of entire contents of root_fol  
end tell

-- The rule matches (and tags the folder) only if it contains at least one file
if files_ is 0 then  
    return false  
else  
    return true  
end if

I honestly have no recollection of where I found this script on the internet. However, I can tell you with a fair amount of certainty that I did not write it.

This Hazel rule is running on my entire “Projects” folder, so any new folder that’s added will automatically be analyzed and tagged.
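If AppleScript isn't your thing, Hazel can also run an embedded shell script as a condition. Here's a Python sketch of the same non-empty check, assuming Hazel passes the matched folder as the first argument and treats an exit status of 0 as a match:

#!/usr/bin/env python3
# Same "is this folder non-empty?" check as the AppleScript above, written as a
# Hazel shell-script condition (assumes the folder path arrives as argv[1] and
# that exit code 0 means the condition passes).
import os
import sys

root = sys.argv[1]
for _dirpath, _dirnames, filenames in os.walk(root):
    if filenames:
        sys.exit(0)  # found at least one file: match
sys.exit(1)  # no files anywhere below this folder: no match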

All That’s Left to Do Is Everything

So, that’s it. That’s how I begin work on every project. Once a project has a name, it gets a folder and a Things project. Then the actual work can begin. Hooray…


  1. These files and folders are created by the project management tools inside Nuke Studio and that is absolutely a post for another time. Or maybe not because sheesh.  ↩

  2. See also: I use the abbreviation “ch” to launch Safari with Launchbar despite having switched from using Chrome as my primary browser probably 7 years ago. It’s fast and my hands are used to it, so I’m sticking with it.  ↩

  3. lol  ↩

Proxy Workflows are Dead, Long Live Proxy Workflows

As I've been working my way through all the blog posts, podcasts, and twitter hot takes on this year's WWDC announcements, one topic keeps coming up that I think could use some additional exploration. Apple announced a PCI card they're calling "Afterburner", built to decode ProRes and ProRes Raw footage in real time.

Which is a great idea. I think the Afterburner card is going to be a very useful tool for post-production folks and, should I be lucky enough to end up with a new Mac Pro on my desk, I would love if it had one inside.

The problem I have is with the way they're pitching the product. On the Mac Pro page on apple.com, it reads:

Afterburner allows you to go straight from camera to timeline and work natively with 4K and even 8K files from the start. No more time-consuming transcoding, storage overhead, or errors during output. Proxy workflows, RIP.

This message has been repeated in almost every conversation I've heard about the Afterburner card and I think it's based on a fundamental misunderstanding of post-production workflows.

We don't edit with "proxy" files because it's slow. We do it because it's the smarter way to do things. I love the idea that working with ProRes files will be faster, but I have no intention of editing with camera native files. It's just not a good idea.

This isn't new

Hardware acceleration of video decoding is not new. When I saw this product announced, I described it to a coworker who missed the keynote as "Apple made a Red Rocket card for ProRes".

I'm not denigrating the product with that comparison. The Red Rocket card was a huge advancement for post-production workflows when it came out. Rather than waiting a day (or 4) to get our R3D files into an NLE-friendly format, we could have it in about as long as the duration of the footage. And I'm excited at the proposition of having that same speed improvement for workflows using ProRes.

A side effect of that increased speed was the ability to edit directly with our R3D files in our NLEs. While technically possible, it was a terrible idea that caused more pain than it solved. Rather than describe all the dumb technical gotchas related to editing R3D files natively, let's look at the idea from a higher-level view, one that takes into account an entire workflow, if you will.

Disclaimer: this next section is going to have a lot of my personal opinion built into it. But that opinion is based on a couple decades worth of professional experience, so you can totally trust me.

Safety First

The first step after shooting a professional video project is making an untouched backup of your camera negative files. We don't work from these files, we don't import them into Premiere or Avid. We don't look at them. They go into a safe place on an expensive hard drive array with drive redundancy and, if we're smart, it's backed up off-site.

Because if something happens to these files, we're done. We've lost potentially millions of dollars worth of material that, in most cases, cannot be recreated as it existed previously. It's not a risk worth taking. We're making at least 2 copies.

VFX

I love ProRes as a format. I live in ProRes all day. But ProRes is not the best format for every task performed over the course of a project, hardware accelerated or not.

Unless we are a video production company of one, with an unlimited amount of time and money, we're going to use multiple file formats in our production pipeline. Because we're smart people who do things with intention, not just because our hardware enables us to do it.

When an edit is completed and ready to be sent to someone to add VFX or Motion Graphics, we're not going to send the entire, uncut shot length to that person. We're going to send them exactly the section of the shot they need to work on (plus a few frames of handles because, again, we're smart).

This may come as a surprise to some of you, but the best file formats for VFX are image-sequence-based formats. That is, a folder full of still frames, each representing a single frame of video. Yes, in 2019.

You've all heard the statistics from VFX or animation facilities that a single frame of a shot from a movie can take hours or days to render. That's not because they don't have a ProRes accelerator board in their computer, it's because there's a lot of work being done to the shot.

Also, what happens if your render crashes when it's halfway done? If you're working in ProRes, that means you start over. With an image sequence, you pick up where you left off. Time is money. Deadlines are as tight as they are important.

This is also one instance where the term "proxy workflow" is silly because, in most instances, the image sequence format we're using is higher quality than any ProRes format.

And, let's not forget that the majority of shots in movies and commercials will go through a vfx pipeline. Whether it's to add giant fighting robots, or to remove a Starbucks cup someone left in the frame, or to correct some lens distortion or camera bounce. It's going to be worked on, so let's do it smartly.

Shared Storage

Once your post-production facility grows beyond a handful of folks, you're going to need to keep your files on a centralized SAN so everyone can work off the same material and pass things back and forth while working in parallel.

With your footage on a shared network, there are a whole lot more considerations for which format you use for which part of the post-production process. Is your network fast enough to serve up these massive files to everyone who needs them at the same time?

And since we're making multiple copies of our footage (for safety), and we're keeping our working files in a shared location, it's unrealistic to say we're saving space by using our camera original format for our work. Whether your duplicates are H.264 (they should never be) or ProRes 4444, you're already using a "proxy workflow". And since we're realistic, responsible professionals, we're going to use the best, smallest format for the job at hand. This is one of the main reasons some VFX facilities still use DPX sequences instead of EXR sequences.

ProRes is a Proxy Workflow

One of the best things about the ProRes format is that it's actually a half dozen or so formats of varying bit-rates and depths. The reason there are so many flavors of ProRes is so we can choose – at every step of the production and post-production pipeline – the right format for the project and task at hand.

Much like the new Mac Pro, we like our workflows modular and flexible. That does not mean we're going to use a single copy of our camera native ProRes files from start to finish. That's MacBook Air thinking in a Mac Pro world.

Node Sets for Nuke v1.2

The Selectable Edition

Yesterday, while trying to address a note on a near-finished animation, I discovered the need for a new tool in my Node Sets toolbox that was both useful and trivially simple to create. A rare combination when it comes to my code.

The original intended use for the Node Sets tagging tools was that animated nodes would be tagged as you work and, when you need to adjust an animation's timing, you would run the "Show Nodes" command to open all of the tagged nodes. The idea being, you'll need to open not only the nodes that need to be adjusted, but also all of the other relevant animated nodes for timing and context.

The problem I encountered involves this methodology's inability to scale with the modularity of larger projects. One of the main benefits of a node-based workflow is the ability to create any number of blocks of operations, separate from the main process tree, then connect and combine them as necessary. Each of these blocks would have its own set of animated nodes, building a piece of the overall animation.

But the comp I was working on yesterday had 140 tagged animated nodes and, while it would technically still work to open all of them every time I need to make a timing change, it's slow and unwieldy to have 140 node property panes open at the same time.

A solution I proposed to this issue in the v1.0 blog post was the ability to use a different tag for different types or groups of nodes and open them each independently. A fine idea that I never personally implemented because the tags are hard coded into the tool and there's no way to add more tags without closing the app, modifying the menu.py file, and cluttering up the toolset with a lot of similarly named tools. A terrible workflow.

A solution that solves this problem in a much simpler, smarter way is to use a selection of nodes to narrow the search for tags. So, when working on a smaller section of the animation, I can select a block of nodes and run the new command "Node Set: Show Selection" to open the tagged nodes contained within.

 

The selected block of nodes used to search for tagged nodes.

 

The Code

Like I mentioned at the top of this post, the code for this new addition was exceptionally simple. Specifically, I duplicated and renamed the "Node Set: Show Nodes" code, and changed one word. In the function's for loop, I changed nuke.allNodes() to nuke.selectedNodes(). And that was it. Writing this blog post has already taken several orders of magnitude longer than writing the code.

The full function, called showOnlySelectedNodes(), looks like this:

def showOnlySelectedNodes():
  names = []
  li = []
  # Collect only the selected nodes that carry the "inNodeSet" tag in their label
  for node in nuke.selectedNodes():
    if "inNodeSet" in node['label'].value():
      names.extend([node.name()])
      li.extend([node])
  # Raise the max panel preference so every tagged node can be open at once
  numPan = nuke.toNode('preferences')['maxPanels']
  numPan.setValue(len(names))
  # Open the control panel for each tagged node
  for i in range(len(li)):
    node = li[i]
    node.showControlPanel()

And the additional line to add the tool to the menu is:

nsets.addCommand('Node Set: Show Selection', 'showOnlySelectedNodes()', icon='NodeSetsMenu-show.png')

It's rare that the solution to an issue I encounter while working is so simple to create that it's quicker to just make the tool than capture a note to create it later, but that was the case with this one and I'm very happy to have this new option.

Head over to the Downloads page to get the full updated Node Sets v1.2 code.

Viewing Alexa Footage in Nuke and Nuke Studio

The Arri Alexa remains one of the most common cameras used in production these days. Its proprietary LogC format captures fantastic highlight detail and exceptionally clean imagery.

But with each new proprietary camera format comes a new process for decoding, viewing, and interacting with the camera's footage. Generally speaking, this involves applying a specific LUT to our footage.

Most applications have these LUTs built in to their media management tools. All it takes to correctly view your footage is to select which LUT to use on your clip.

This is, unfortunately, not the full story when it comes to Alexa footage.

If you've ever imported an Alexa colorspace clip into Nuke, set your Read node to "AlexaV3LogC", and viewed it with the default Viewer settings, you may notice that the highlights look blown out. If you use a color corrector or the Exposure slider on your Viewer, you'll see that the image detail in the highlights is still there, it's just not being displayed correctly.

An Alexa LogC clip being viewed in NukeX with the sRGB Viewer Input Process.

If you import that same clip into DaVinci Resolve, again, set it to Alexa colorspace and view it, you'll notice that it doesn't match the Nuke viewer. In Resolve, the footage looks "correct".

An Alexa LogC clip being viewed in Resolve with the Arri Alexa LogC to Rec709 3D LUT applied.

So, what's going on here?

The Alexa's LogC footage needs to be gamma corrected and tone-mapped to a Rec709 colorspace. In Nuke, this is a 2-step process. The footage gets its gamma linearized in the Read node before work is done, then, after our work has been added, the footage needs to be converted to Rec709 colorspace. In DaVinci Resolve, these 2 steps are performed at the same time.

The problem is that second step in Nuke. There is no built-in Viewer Input Process to properly view Alexa footage. We could toss an OCIOColorSpace node at the end of our script and work in between it and our Read. But we don't want to bake that Rec709 conversion into our render, we just want to view it in the corrected colorspace.

Adding a Custom Input Process

The first thing we're going to need is the Alexa Viewer LUT. No, this is not the same LUT that comes with the application. You can download it here, or build your own with Arri's online LUT generator.

If you only use Nuke/NukeX, adding the Input Process is relatively simple, and bears a striking resemblance to a lot of the Defaults customization we've done in the past. If, however, you also use Nuke Studio or Hiero, you'll want to ignore this section and skip ahead to the OCIOConfig version.

Nuke / NukeX

To get started, create a new Nuke project. Then:

  1. Create an OCIOFileTransform node and add the downloaded LUT file.
  2. Set your "working space" to "AlexaV3LogC". Leave the "direction" on "forward" and "interpolation" on "linear".
  3. After the OCIOFileTransform node, add an OCIOColorSpace node.
  4. Set your "in" to "linear" and your "out" to "AlexaV3LogC".

The nodes for the AlexaLUT Gizmo in Nuke.
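If you'd rather build those two nodes from the Script Editor, the equivalent setup looks roughly like this sketch. The knob names ('file', 'working_space', 'in_colorspace', 'out_colorspace') are from memory, so double-check them against your version of Nuke:

# Sketch: the same two-node chain, built with Python.
# The LUT path is a placeholder; knob names are assumptions worth verifying.
import nuke

lut = nuke.nodes.OCIOFileTransform(
    file='/path/to/ARRI_LogC2Video_709_davinci3d.cube',
    working_space='AlexaV3LogC')

to_logc = nuke.nodes.OCIOColorSpace(
    in_colorspace='linear',
    out_colorspace='AlexaV3LogC',
    inputs=[lut])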

Now we need to turn these 2 nodes into a Gizmo. To do that, select them both, hit CMD+G on the keyboard to Group them, then click the "Export Gizmo" button. Save the Gizmo in your .nuke directory. Mine is called Alexa_LUT.gizmo.

Once we've saved our Gizmo, we just need to add the following line to our Init.py file:

nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))

Now, when you start up Nuke, you'll have your Alexa LUT in the Input Process menu in your Viewer.

The Alexa Input Process in the Nuke Viewer.

And, just so we're clear, if we're working on an Alexa colorspace clip, as a Good VFX Artist, we're going to send back a render that is also in Alexa colorspace. That means setting the "colorspace" on our Write node to "AlexaV3LogC", regardless of the file format.

Nuke Studio (and Also Nuke / NukeX)

Welcome, Nuke Studio users. For you, this process is going to be a little more work.

Just like everything in Nuke Studio, am I right?

Sorry. Let's get started.

To add our Alexa LUT to Nuke Studio, we need to create our own custom OCIOConfig. Since we're lazy (read: smart), we'll duplicate and modify the Nuke Default OCIOConfig to save us a lot of time and effort.

The OCIOConfigs that come with Nuke can be found in the app's installation directory under /plugins/OCIOConfigs/configs/. We're going to copy the folder called "nuke-default", paste it into .nuke/OCIOConfigs/configs/, and rename it to something like "default-alexa".

Before we do anything else, we need to put our Alexa Viewer LUT inside the "luts" folder inside our "default-alexa" folder.

Is it there? Good.

Inside our "default-alexa" folder is a file called "config.ocio". Open that in a text editor of your choice.

Near the top of the file, you'll see a section that looks like this:

displays:
  default:
    - !<View> {name: None, colorspace: raw}
    - !<View> {name: sRGB, colorspace: sRGB}
    - !<View> {name: rec709, colorspace: rec709}
    - !<View> {name: rec1886, colorspace: Gamma2.4}

We need to add this line:

- !<View> {name: Alexa, colorspace: AlexaViewer}

I put mine at the top, first in the list, because I want the Alexa viewer to be my primary Input Process LUT. A good 80% of the footage I work with is Alexa footage. Your use case may vary. Rearranging these lines will not break anything as long as you keep the indentation the same.

Now, scroll all the way down to the bottom of the file, past all the built-in colorspace configs. Add the following:

- !<ColorSpace>
  name: AlexaViewer
  description: |
    Alexa Log C
  from_reference: !<GroupTransform>
    children:
      - !<ColorSpaceTransform> {src: linear, dst: AlexaV3LogC}
      - !<FileTransform> {src: ARRI_LogC2Video_709_davinci3d.cube, interpolation: linear}

That wasn't so bad, was it. Was it?

Now, all that's left to do is open Nuke and/or Nuke Studio, go to your application preferences, and under the "Color Management" section, select our new OCIOConfig file.

Choosing our custom OCIOConfig in the Nuke application preferences.

Now, you'll have your Alexa LUT in your Input Process dropdown in both Nuke and Nuke Studio and you can finally get to work.

Thanks Are in Order

I've been putting off this blog post for a very long time. Very nearly 2 years, to be specific.

I was deep into a project in Nuke Studio and was losing my mind over not being able to properly view my Alexa raw footage or Alexa-encoded renders. This project also included a large number of motion graphics, so making sure colors and white levels matched was doubly important.

So, I sent an email to Foundry support.

After about a week and a half of unsuccessful back-and-forth with my initial contact, my issue was escalated and I was contacted by Senior Customer Support Engineer Elisabeth Wetchy.

Elisabeth deserves all of the credit for solving this issue. She was possibly the most helpful customer support representative I've ever worked with.

Also, in the process of doing some research for this blog post (yeah, I do that sometimes shut up), I came across an article she wrote the day we figured this stuff out. So I guess I shouldn't feel too bad for making you guys wait 2 years for my post.

Note: Test footage from Arri can be found here.

Update for Nuke 12 and Up

In case you didn't click through to that support article above, there's a very important update for users of Nuke 12 and up:

As of Nuke 12, the active_views list will now be respected, and this controls which views are visible and the order in which they appear.

So for the custom LUT to appear in the Viewer, you will need to append the LUT to the active_views list in the OCIO config:

active_views: [sRGB, rec709, rec1886, None]

For example:

active_views: [sRGB, rec709, rec1886, AlexaToRec709, None]

This line is also optional and, by default, will set all views to be visible and will respect the order of the views under the display. So if you wish for all LUTs to be visible, you could simply delete this line.

In my case, my active_views line says active_views: [Alexa, sRGB, sRGBf, rec709, rec1886, None] because my Alexa profile is named "Alexa", not "AlexaToRec709". I also put it at the front of the active_views list because I want it to be the default since I primarily work with Alexa LogC footage.

Global Motion Blur Controls in Nuke

I’m back again with another custom tool for my Nuke setup. That can mean only one thing: I’m doing dumb stuff again.

I recently embarked on another large motion graphics project, animated entirely in Nuke. Just as with the creation of my Center Transform tool, using Nuke for such a project quickly reveals a glaring omission in the native Nuke toolset which, on this project, I just couldn't continue working without. I speak, of course, of Global Motion Blur Controls.

The Use Case

Most assets that move, especially motion graphics, need to have motion blur on them. But motion blur is incredibly processor-intensive, so it's almost always necessary to turn it off while you animate, turning it back on only to preview and render.

In Nuke, that means setting the motionblur parameter on a Transform node to 0 while you work, then setting it to 1 (or higher) to preview and render. Simple enough when you only have a handful of Transform nodes in your script. Nigh impossible to manage when you have almost 200.
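You could script your way around the toggling part with a quick loop; here's a sketch of that naive approach, which the rest of this post improves on:

# Naive sketch: set the motion blur samples on every Transform in the script.
import nuke

def set_all_motionblur(samples):
    for node in nuke.allNodes('Transform'):
        node['motionblur'].setValue(samples)

# set_all_motionblur(0)  # while animating
# set_all_motionblur(1)  # before previewing or rendering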

The Problem

Currently, each Transform node has its own set of motion blur controls: Samples, Shutter, and Shutter Offset. There is no mechanism for modifying or enabling / disabling all motion blur parameters at the same time like there is in, say, After Effects.

Smart Nuke artists will use Cloned Transform nodes or expression link the motion blur parameters to each other. Or, take it one step further and create a custom motion blur controller with a NoOp node and expression link all Transforms to that.

While that saves some effort, you've got to add the NoOp expression to every Transform node (twice), including each new Transform you create. And, of course, there's the very likely possibility that you'll forget or miss one along the way and have to track it down once you notice your render looks wrong.

This is how I have previously dealt with this problem.

A Half-Step Forward

To make this process faster, I wrote a script to quickly expression link the motionblur and shutter parameters of selected nodes to my custom NoOp, which I have saved as a Toolset for easy access in each new Nuke script.

That script looks like this:

def SetNoOpBlur():
  # Expression link the motion blur knobs of the selected nodes to the NoOp controller
  for xNode in nuke.selectedNodes():
    xNode['motionblur'].setExpression( 'NoOp1.mBlur' )
    xNode['shutter'].setExpression( 'NoOp1.mShutter' )

# Add the command to a Gizmos menu in the Nodes toolbar
toolbar = nuke.menu("Nodes")
gzmos = toolbar.addMenu("Gizmos", icon='Gizmos4.png')
gzmos.addCommand("Link NoOp Blur Control", 'SetNoOpBlur()')

The Link to NoOp tool in Nuke

This makes the expression linking faster and easier, but I still have to select all the Transform nodes by hand before running the script. It's also incredibly fragile since I hard-coded the name of the controller node (NoOp1) into the function.

This level of half-assed automation simply won't do. We need to whole-ass a better solution.

The Solution

The goal would be to have motion blur settings in the Nuke script's Project Settings that control all Transform nodes by default, with the ability to override each node's individual settings, as needed.

Here’s what I came up with [1]:

# Customize Transform Controls - No Center Transform Button

def OnTransformCreate():
  nTR = nuke.thisNode()
  if nTR != None:
    # Create "Use Local Motion Blur" button
    lbscript="mbT = nuke.thisNode()['motionblur']; mbT.clearAnimated(); stT = nuke.thisNode()['shutter']; stT.clearAnimated(); soT = nuke.thisNode()['shutteroffset']; stT.clearAnimated();"
    lb = nuke.PyScript_Knob('clear-global-mblur', 'Use Local Motion Blur')
    lb.setCommand(lbscript)
    nTR.addKnob(lb)
    # Create "Use Global Motion Blur" button
    gbscript="nBB = nuke.thisNode(); nBB['motionblur'].setExpression('root.motionblur'); nBB['shutter'].setExpression('root.shutter'); nBB['shutteroffset'].setExpression('root.shutteroffset');"
    gb = nuke.PyScript_Knob('use-global-mblur', 'Use Global Motion Blur')
    gb.setCommand(gbscript)
    nTR.addKnob(gb)
    # Set Transform Node to use Global Motion Blur by Default
    nTR['motionblur'].setExpression('root.motionblur')
    nTR['shutter'].setExpression('root.shutter')
    nTR['shutteroffset'].setExpression('root.shutteroffset')

nuke.addOnUserCreate(OnTransformCreate, nodeClass="Transform")

# Root Modifications for Global Motion Blur

def GlobalMotionBlur():
  ## Create Motion Blur tab in Project Settings
  nRT = nuke.root()
  tBE = nuke.Tab_Knob("Motion Blur")
  nuke.Root().addKnob(tBE)
  
  ## Create motionblur, shutter, and shutter offset controls, ranges, and defaults
  mBL = nuke.Double_Knob('motionblur', 'motionblur')
  mBL.setRange(0,4)
  sTR = nuke.Double_Knob('shutter', 'shutter')
  sTR.setRange(0,2)
  oFS = nuke.Enumeration_Knob('shutteroffset', 'shutter offset', ['centered', 'start', 'end'])
  oFS.setValue('start')
  
  ## Add new knobs to the Motion Blur tab
  mblb = nuke.Text_Knob("gmbcl","Global Motion Blur Controls")
  nRT.addKnob(mblb)
  nRT.addKnob(mBL)
  nRT.addKnob(sTR)
  nRT.addKnob(oFS)

GlobalMotionBlur()

Init.py Script

# Global Motion Blur Defaults
nuke.knobDefault("Root.motionblur", "1")
nuke.knobDefault("Root.shutter", ".5")
nuke.knobDefault("Root.shutteroffset", "start")

The Motion Blur tab in Project Settings

The expression linked motion blur controls

The unlink / re-link buttons

I’ve created global parameters for Motion Blur, Shutter, and Shutter Offset [2]. When you create a Transform node, it automatically adds 2 buttons to the User tab to make it easy to unlink / re-link to the global controller.

In my version, all Transform nodes created are linked to the global setting by default. If you'd prefer each node be un-linked by default, you can just remove the last 3 lines of the OnTransformCreate() function. Then, you can click the "Use Global Motion Blur" button on each node that you want to link.

While I haven't spent a ton of time with this new setup, I'm really happy with how it's come out. Though, as with most of my weird customizations, I look forward to the day that The Foundry adds this functionality to the app, making my code obsolete.


  1. This is just the new code without the Center Transform button that I normally have in my OnTransformCreate() function. The function in my Menu.py file actually looks like this.  ↩

  2. I did not add the Custom Shutter Offset control to the global controller because, for one, I really don’t use that option much (or ever), and two, it turned out to be much harder to script than the rest of the options. It simply wasn’t worth the effort to figure out how to create a global controller for something I never use, and the command is still accessible by using per-node motion blur settings.  ↩

Replacing Native Nuke Nodes with Custom Gizmos

Friends, I feel like an idiot.

So many of the posts on this site are about creating custom gizmos to replace the native nodes inside of Nuke. But they've never completely satisfied their mission because, until now, I didn't know how to tell Nuke, "Hey, when I call a FrameHold give me my FrameHold_DS gizmo instead". So my FrameHold_DS gizmo has lived alongside the native FrameHold node since its creation. Which, by the way, is super annoying because it shows up lower in the tab-search results than the native node.

The alternative I've used — to a lesser degree of success — is to customize native nodes with the addOnUserCreate python function. While that has been effective at adding features to the native nodes, it's entirely python based and results in all my customizations being banished to a properties tab named "User". Just the sight of which makes me sad.

The good news is, I have finally figured out how to actually tell Nuke "Hey, when I call a FrameHold give me my FrameHold_DS gizmo instead". The bad news is, it's so incredibly, stupidly easy, I can't believe it took me this long to figure it out.

I was reading the Assigning a Hotkey section of the "Customizing the UI" python guide and saw this:

To assign a hotkey to an existing menu item, you effectively replace the whole menu item.

Let’s assign a hotkey to the Axis2 node.

nuke.menu( 'Nodes' ).addCommand( '3D/Axis', nuke.createNode( 'Axis2' ), 'a')

Pressing a on the keyboard now creates an Axis node.

I've known for a long time that I could add custom hotkeys to nodes, but the tab-search method was always fast enough for me that I've never wanted to do so.

But what caught my eye was the line of code. Before adding the hotkey, it defines the application's menu path to the node, then the createNode call for the node itself.

I thought to myself, there's no way I could just swap out the node name in the createNode call with the name of one of my gizmos. It couldn't possibly be that easy.

It is.

By adding the single line of code —

nuke.menu( 'Nodes' ).addCommand( 'Time/FrameHold', "nuke.createNode( 'FrameHold_DS' )")

— to my Menu.py file, calling a FrameHold node will now result in my FrameHold_DS gizmo being added instead.

Now, rather than debating which half-assed method for creating custom nodes is more suited to the tool I'm trying to create, I will create custom gizmos and remap their calls using this method.

I've been wanting to do this for so long. It's a very exciting discovery for me, only slightly overshadowed by feeling like a total doofus for not figuring it out sooner.

Postscript

"But what if I want to be able to call the native node at some point, too?"

Well, I have no desire to do that, but if you do, you could always add a second line of code to rename the native node to something else, like:

nuke.menu( 'Nodes' ).addCommand( 'Time/Dumb-Stupid-Native-FrameHold', "nuke.createNode( 'FrameHold' )")

That way it won't show up when you hit tab and start typing "Fra", but you will be able to find it if you need it.


Smarter, More Flexible Viewer Frame Handles

The best thing about posting my amateur, hacky Nuke scripts on this blog is that you, the handsome readers of this site, are often much smarter than I am, and frequently write in with enhancements or improvements to my scripts.

Such was the case, recently, with my Automated Viewer Frame Handles script. Reader and Visual Effects Supervisor Sean Danischevsky sent me this:

def set_viewer_handles(head_handles, tail_handles):
  #from https://doingthatwrong.com/
  # set in and out points of viewer to script range minus handle frames
  # Get the node that is the current viewer
  v = nuke.activeViewer().node()
  # Get the first and last frames from the project settings
  firstFrame = nuke.Root()['first_frame'].value()
  lastFrame = nuke.Root()['last_frame'].value()
  # get a string for the new range and set this on the viewer
  newRange = str(int(firstFrame)+head_handles) + '-' + str(int(lastFrame) - tail_handles)
  v['frame_range_lock'].setValue(True)
  v['frame_range'].setValue(newRange)


# Add the commands to the Viewer Menu
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 16f',
"set_viewer_handles(16, 16)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 12f',
"set_viewer_handles(12, 12)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 10f',
"set_viewer_handles(10, 10)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 8f',
"set_viewer_handles(8, 8)")

In my original script, I had hard-coded the frame handle length into the function, and created duplicate functions for each of my different handle lengths. Sean, being much better at this than I am, created a single function that takes a handle length input from the function call. In his version, all that's required to add an alternative frame handle length to the menu options is to duplicate the line that adds the menu command, and change the handle length that's sent to the function. Sean also added the ability to set different head and tail handle lengths to the script.

In thanking Sean for sending me this improved version of the script, I mentioned that it seemed that he'd set up the function in a way that would make it easy to prompt users to input a handle length, should they require a custom handle that wasn't already in their menu options. To which he replied with this:

def set_viewer_range(head_handles= 10, tail_handles= 10, ask= False):
    # set in and out points of viewer to script range minus handle frames
    # from https://doingthatwrong.com/
    # with some tweaks by Sean Danischevsky 2017
    if ask:
        p= nuke.Panel('Set Viewer Handles')
        p.addSingleLineInput('Head', head_handles)
        p.addSingleLineInput('Tail', tail_handles)
        #show the panel
        ret = p.show()
        if ret:
            head_handles= p.value('Head')
            tail_handles= p.value('Tail')
        else:
            return

    #only positive integers, please
    head_handles= max(0, int(head_handles))
    tail_handles= max(0, int(tail_handles))

    # Get the node that is the current viewer
    v = nuke.activeViewer().node()

    # Get the first and last frames from the project settings
    firstFrame = nuke.Root()['first_frame'].value()
    lastFrame = nuke.Root()['last_frame'].value()

    # get a string for the new range and set this on the viewer
    newRange = str(int(firstFrame)+ head_handles) + '-' + str(int(lastFrame) - tail_handles)
    v['frame_range_lock'].setValue(True)
    v['frame_range'].setValue(newRange)


# Add the commands to the Viewer Menu
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 16f',
"set_viewer_range(16, 16)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 12f',
"set_viewer_range(12, 12)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 10f',
"set_viewer_range(10, 10)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 8f',
"set_viewer_range(8, 8)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - ask',
"set_viewer_range(ask= True)")

Now, in addition to the preset, common handle lengths in the menu, there's an option to prompt the user for input. The pop-up is pre-filled with a value of 10, something that can be customized, as well. It's a thing of beauty.

I'd like to thank Sean for sending me both of these scripts. He took my ugly, half-formed idea, simplified it and made it more flexible. I've already begun using his script in place of mine, and I suggest you do the same.

Using iPhones in Production

Preamble

When talking to people about my work, a fairly frequent question I get asked is if I ever shoot “professional” video on my iPhone. The topic of whether or not one can use a phone for “real” video production is a great way to get people with strong opinions about technology all riled up. So let’s get to it.

When it comes to my own work, the answer to whether or not you can use an iPhone for professional video is “sometimes”. I have in the past, and will continue to use my iPhone as a professional camera. But, it’s in a limited capacity and probably not in the way most people would guess.

In the video I directed for 1Password, the insert shot of my pug, Russell, sitting on a couch was shot on my iPhone 5S in my living room [1]. Last year, for another video I directed, a shot we didn’t get on set was created entirely in CG, combining plates shot on our Arri Alexa Mini, a Nikon D4, and my iPhone 6S.

Nobody noticed, in either instance, because the iPhone footage was used in limited and specific ways. Russell was sitting in front of blown-out windows which might have given away the difference in dynamic range of the iPhone footage, so I replaced the windows with an HDR still image that was also taken on my 5S. In the CG shot, the element that was shot on the iPhone was not the center of attention and passed by quickly, without scrutiny.

Gear

A key aspect of shooting usable video on an iPhone is treating the iPhone like any other professional camera. Recording with an app, like Filmic Pro, that allows for full manual control over the Shutter, Aperture, ISO, and White Balance is essential. As is keeping the frame stable by mounting the phone to a tripod or c-stand with a mount like a Glif (my phone-mounting solution of choice).

You need to light your scene, whether that be with studio lights or natural light. And, possibly the biggest differentiator between professional and amateur video, if your video involves sound, you need to use an external microphone.

Shooting video on a phone is not a costly endeavor, but it does require care and attention to detail.

In Production

Recently, I shot a project wherein my iPhone 6S was the primary and only camera used.

“Gasp!” you say. “It’s true,” says I.

The video was for an iOS app that had a video chat component, similar to a FaceTime call. And, while I could have used a Big-Boy Cinema Camera to shoot the actors, simulating the lens and footage characteristics of a front-facing iPhone camera, I took this project as an opportunity to put my phone through its paces on a real set.

In addition to everything we covered in the previous section of this post, there were 2 technical hurdles that needed addressing before it was time to roll cameras.

The first is one that will always, always bite you in the ass if you embark on such an ill-advised endeavor as this: storage space. In the past, when I’ve tried to shoot semi-serious video with my phone, I have, time and again, completely filled up the storage on my phone much faster than predicted. That resulted in the entire production stopping and waiting until I could download the clips onto my laptop, delete them from the phone, and set everything up again. And if you happen to run out of storage space while you’re rolling? Say goodbye to that clip because it’s gone.

The second challenge was how we would monitor the video as we shot. It’s likely that we could have shot with the rear camera on the phone, using the phone’s screen as a monitor, and no one would have known the footage wasn’t from the front facing camera. But, because this was an app demo, and the actors would need to interact with specific points on the screen, they needed to see themselves as we shot.

Two problems that, as it turned out, had a single solution.

The release of iOS 8 and OS X Yosemite added the ability to directly record an iPhone’s screen output in QuickTime when the phone was connected to the laptop via Lightning cable. I had used it to record apps for interface walk-through videos, so why not just open the Camera app and record the image coming from either camera?

And, to sweeten the deal, QuickTime allows you to select different sources for video and audio inputs, so I could record the video from the front facing camera, and audio from my USB microphone, also connected to my MacBook Pro. No need to sync the sound to the image in post.

Separate video and audio sources.

I could buy a 6-foot long Amazon Basics Lightning Cable for $7, set my phone up on a tripod with my Glif, set my microphone up on a mic stand, connect them both to my laptop where I can monitor the image and record clips with QuickTime, and never worry about taking up any space on the phone itself. Problem solved.

Well, almost. Since QuickTime records a live output of exactly what you see on the screen of your phone, that means it also includes the camera interface controls. That just won’t do. Thankfully, Filmic Pro includes a menu option to “tap to hide interface”. So, after I carefully set my focus, shutter, white balance, etc settings, I can hide the app interface and record only the image from the camera.

Side-note: If you’re on a tight budget and can’t afford to purchase Filmic Pro, there are dozens of free “Mirror” apps on the App Store whose sole purpose is to show you a feed from your front facing camera with zero interface graphics. You won’t have the manual controls you get from a real video app, but it’ll do in a pinch.

Now, I don’t have to awkwardly fumble with the camera to roll and cut. I can set up the shot once and do the rest from behind my Mac. I can record, play back, and discard takes as quickly as with any professional video camera.

Do I recommend shooting video like this for non-app-related videos? Not really. But it was exactly what I needed for this particular project with its particular set of limitations. And, besides, I’ve certainly jumped through more hoops building a camera rig than I did with this one.


  1. Fun fact: the phone was taped to a chair because I didn’t have a tripod at the time.  ↩

My Post-Production Workflow

Over the past three-ish years of working for myself, I've experimented with a number of permutations of my post-production workflow in an attempt to find the smoothest, most flexible path from dailies to delivery. And, while I've been able to settle on a consistent workflow over the past year or so, I would never describe it as smooth or flexible. Each step has technical and creative frustrations that keep me from being satisfied. Still, it's the best frustrating workflow I've put together so far.

Before we get started, I'd like to make it clear that this post is in no way meant to be a "how-to" guide for others to follow. My intention here is to illustrate the absurdly complex method by which I turn ideas into videos, while also holding on to the faint hope that publicly highlighting these pain-points may lead to potential solutions.

The Players

Currently, the high-level flow of applications I use in post-production looks like this:

  1. DaVinci Resolve v12.5 – Dailies
  2. Avid Media Composer v7.0.3 – Offline
  3. Apple Compressor v3.5.3 – Encodes
  4. Nuke Studio / NukeX 10.0v3 – Picture Conform, Online, VFX
  5. DaVinci Resolve v12.5 – Color Correction
  6. Nuke Studio 10.0v3 or After Effects CC 2015 – Motion Graphics
  7. Final Cut Pro v7.0.3 – Audio Conform
  8. Soundtrack Pro v3.0.1 – Audio Editing and Mixing
  9. Final Cut Pro v7.0.3 – Final Output
  10. Apple Compressor v3.5.3 – Final Encodes

Digital Negative

The beginning of the post-production pipeline is the camera and format selected for the project. When I have a choice, my preferred camera is the Alexa Mini, shooting UHD (3840x2160) ProRes 4444XQ LogC. At a data rate of 1591 Mb/s, we'll suck up ~716 GB for every hour of footage we shoot.
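(Quick sanity check on that number: 1591 Mb/s × 3,600 seconds is about 5,727,600 megabits per hour, which is roughly 716,000 megabytes, or ~716 GB.)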

QuickTime support is more-or-less ubiquitous these days and, while LogC support is somewhat hit-or-miss (more on that later), it's possible to take QuickTime Alexa footage through a post pipeline without ever applying a LUT. And, rather than wasting time extolling the benefits of shooting Log, I'll simply suggest you go read Stu's excellent post over at Prolost full of pragmatic wisdom like:

Log, in its many flavors, is a smart, flexible, and powerful way of storing high dynamic range digital cinema imagery. It’s closer to raw than you might think, and often much easier to work with for results of the same or better quality.

Dailies

As a sensible, rational filmmaker, I create dailies for my offline edit. I do not edit with native camera negative files. While it is fast to AMA-import native files into Avid and just start editing, the time gained is quickly lost, many times over, by the slowdown caused by the massive file sizes of most modern cameras.

When I sit down at my desk after a shoot, I have 2 identical hard drives containing my camera negatives. One drive is transferred to a Drobo for archival, and the other drive is dumped into DaVinci Resolve.

In Resolve, all clips are dropped onto a timeline and, in the case of shooting with Alexa, the built-in Alexa LogC LUT is applied. Jumping straight to the Delivery tab of the application, I load a preset to create 1080p DNxHD 36 MXF files of each clip, careful to maintain the clip's original name.

The resulting MXF files are moved into /Avid MediaFiles/MXF/1 via the Finder. When Avid is launched, the drive is re-indexed and, once in my project, I locate the clips with the Media Tool, and sort them into bins.

Back when MXF support was less widespread and I thought flexibility was more important than hard disk space, I used to create DNxHD 36 QuickTime files as my dailies, rather than MXF files. While it's a perfectly reasonable option with a comparatively quick import time into Avid, it creates 2 copies of my dailies (The MXFs in the Avid MediaFiles folder and the QT files that were imported), as well as adds additional complexity and confusion to conforming and relinking sequences later.

And, before you say "how is it confusing to have just one more copy of your footage", let me remind you that you may be relinking your sequences next week or next year. The fewer potential hurdles you put in your own way, the fewer curse words you'll shout at your former self. If I had the QT dailies as well as the MXF files, which would I archive? Both? Just one? Which one? Skip the headache and render MXF dailies for direct import into Avid.

One "issue" with Resolve's rendering that's worth mentioning. I don't know what I did to the program to make it angry, but when I click "Render", it throws out my timeline pixel aspect ratio settings and starts rendering with the Cinemascope ratio. I have to immediately cancel the render, delete the files it began to create in the Finder, go back to the Edit page, open my project settings, and set the pixel aspect ratio back to square, where it was before I pressed render, then press Render again, and everything will export correctly. This happens every single time I render anything, regardless of project or clip settings.

Update - DaVinci Resolve 12.5.1

During the writing of this post, Blackmagic Design released DaVinci Resolve 12.5.1 which offers a solution to my pixel aspect ratio woes. On the Delivery page, in the Advanced Settings section of the Video output settings, Resolve now has a control for pixel aspect ratio.

Strangely, for me, the pixel aspect ratio still defaults to Cinemascope, despite the project being set to Square, but changing this new setting to Square prior to rendering will actually render files with a Square pixel aspect ratio. I'm no longer required to cancel the render, delete files in the Finder, change the setting on the Edit page, and render again.

Offline Edit

I currently edit my projects in Avid Media Composer 7.0.3. While I also have the option to edit in Premiere Pro CC, Final Cut Pro 7, and DaVinci Resolve, there's still no better editing tool than Media Composer. It's fast, efficient, and accurate.

In fact, the core editing tools in Media Composer, which have remained largely unchanged for decades, are so good that I find very little incentive to upgrade to the current version of the application. Since the introduction of the Smart Tool and the addition of tabbed bins in version 7, I find most new features are nice-to-haves.

I am occasionally envious of some of the newer features in other NLEs but, when it comes to the task of editing, Media Composer just can't be beat. I honestly wish I liked editing in Premiere. The timeline integration with AfterEffects is a fantastic feature I'd love to use but, unfortunately, I find Premiere slow and frustrating to use.

Oddly, Final Cut Pro 7 is still an essential part of my post workflow, just not in the editing phase. We'll get to that.

Reviews

As a brief aside, I will mention one tool I use at many different stages of post-production that you may find useful. Often times, while working on a video, it may be necessary to create a QuickTime file of the work-in-progress.

Long ago, I settled on the encoding settings that I considered the appropriate balance of file size and image quality. Those settings were saved as a Compressor Template.

Remember Compressor? That app that comes with Final Cut Studio for making encodes of things? As it turns out, Compressor can be used from the Command Line in Terminal. Which is, frankly, a terrible idea unless you also use something like Hazel to run the commands in the background while you continue to work on more important things.

I have a handful of folders set up on my computer, each with their own Hazel rule for creating a predetermined file type. When I need, for example, a 1080p H264 file, I export a Same As Source QuickTime file from Avid into the 1080 folder, and Hazel does the rest for me. When the encode is finished, Hazel opens a Finder window, showing me the final file.

Here's what the 1080 Hazel folder rule looks like:

The 1080 folder containing the Same As Source file from Avid is in my Home directory, and the Renders folder for the finished QuickTime file is in my Dropbox folder, making it quick to generate a shareable link to send to whomever.

I have not automated the deletion of the Same As Source file because I may want to create another encode from it (maybe half-rez) or, if it happens to be the final version of the video, I'll want to save the Same As Source file as part of the final project archive.

I do things this way for a couple of reasons. The number-one reason is that exporting a Same As Source QuickTime file from Avid is an order of magnitude faster than exporting an H264 file from Avid. Probably 2 orders of magnitude. Which only matters when you remember that, when Avid is importing and exporting files, the rest of the application is inaccessible. With Compressor and Hazel I can export my file quickly, let it convert in the background, and get back to work while I wait to be presented with that Finder window.

Another major benefit of making encodes this way is that it works with any source application, not just Avid. Just dump a file into the folder and away it goes.

Conforming

Prep

Here's where things start to get ugly. I need to turn my un-color-corrected rough edit full of slap-comps and minimally-viable audio into a final, polished product.

Before leaving Avid, I need to do a few things to prep the sequence for Conform and Online. Step 1 is, of course, duplicating the locked edit and starting with a fresh copy of the timeline. If I haven't already, I create a 1080p H264 QT "reference" file of the locked edit to match against the conformed timeline in Nuke Studio.

Next, I replace any audio that came in attached to the Dailies with the original WAV files from the Sound Recordist. I do this manually, on a new track, so I can verify that the timecode sync sent from the smart slate is identical to the timecode the camera recorded. It's usually off by a frame or two, so I line the originals up via waveform and double check it by listening to playback. Avid's timecode displays on the Viewer windows make it easy to find the starting timecode of a source clip on the timeline and the corresponding in-point on the WAV file. Since I primarily work on sequences under 2 minutes in length, this process only takes 5 or 10 minutes.

The last thing I need is an AAF of my sequence. It includes all video and audio tracks and is named to match the sequence from which it was created.

Note: if the Avid timeline has an abundance of effects applied to clips, often times they will need to be removed before creating the AAF. The AAF format is capable of transferring some basic effects to Nuke Studio, like Transforms, but if any of the clips are stacked or collapsed into a Submaster, the AAF will not transfer the layers within the effect. You must manually pull out each layer onto the main video tracks in order to have them all included in the AAF.

Picture Conform

The freshly minted AAF file is imported into a new project in Nuke Studio. Project settings match the final output; typically 1080p 23.976.

As expected, the sequence shows up with all clips offline. Annoyingly, none of the audio tracks are imported because Nuke Studio does not support importing audio via AAF. It does support the manual addition of audio tracks and basic audio editing tools (similar to its video editing tools) so I have no idea why audio is excluded. It is maddening and, frankly, unacceptable for an application that costs nearly $10,000.

The reference QuickTime file is imported below the other video tracks, then the sequence is relinked to the original UHD camera negative files, adjusted visually using Nuke Studio's wipe and difference viewer tools for any sync issues that may have occurred, and scaled and/or cropped to fit the appropriate raster and letterbox sizes.

VFX

To save us all a lot of time, I'll leave out the if/then/while loop of steps one has to go through when creating the complex export structures required to send individual shots to NukeX for VFX work and have them return to the timeline layer above their original plate (The Hiero portion of Nuke Studio). A task for which Nuke Studio is specifically designed and a task at which it fails to perform correctly 3 out of 5 times.

The shots that need VFX work are selected and exported into Nuke scripts with 12 frame handles. The work is completed, the shots are rendered as EXR sequences, and the renders are imported back into the timeline on a new video track, either automatically or via Nuke Studio's Build Track tool. Since most projects are shot on Alexa, and not all shots require VFX work, the EXR sequences are rendered with Alexa LogC colorspace so everything in the final timeline uses the same settings.
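For reference, the Write node on the end of each of those comp scripts looks something like this sketch (the output path is a placeholder; the colorspace knob is the part that matters):

# Sketch of a Write node for VFX renders that stay in Alexa LogC.
# The output path is a placeholder.
import nuke

write = nuke.nodes.Write(
    file='renders/SHOT_010/SHOT_010_comp.%04d.exr',
    file_type='exr',
    colorspace='AlexaV3LogC')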

Rendering the EXR sequences in the Alexa LogC colorspace is important for more than just consistency. Alexa LogC footage requires one LUT to linearize the footage for VFX work, and a second LUT to preview the linearized footage in a "monitor-space" environment. Which is to say, even after you've removed the Log gamma curve from the footage, you need to tone-map the footage to bring the white-point down to 1.0.

This 2-step process in Nuke is often a 1-step process in other applications like, say, Resolve. Resolve uses a single 3D LUT to linearize and preview LogC footage so, if your EXR sequences are rendered with a linear gamma (the default for EXR sequences), they will look very very wrong with the Resolve Alexa LUT applied.

It's also worth noting that, while Nuke comes with the "linearizing" LUT for Alexa footage pre-installed, users need to download the viewer LUT from Arri's website and manually install it via their menu.py file with a bit of Python scripting.
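
If you're wondering what that registration actually looks like, it's only a few lines. This is a sketch, and the .cube filename and path are placeholders for whatever you download from Arri:

# menu.py -- register a viewer process for the Node Graph Viewer.
# The LUT path and filename below are placeholders, not the actual Arri file.
import nuke

nuke.ViewerProcess.register(
    'Alexa LogC to Video',    # name that shows up in the Viewer's LUT dropdown
    nuke.createNode,
    ('Vectorfield', 'vfield_file /Path/to/luts/AlexaLogC_to_Rec709.cube colorspaceIn AlexaV3LogC')
)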

Additionally, since Nuke Studio is the awkward love-child of Hiero and Nuke, it contains two separate viewers; one for the Timeline and one for the Node Graph. So, installing the Alexa viewer LUT for use in the Node Graph viewer does not install the LUT in the Timeline viewer.

The Timeline Viewer gamma controls.

The Node Graph Viewer gamma controls.

Personally, I've only installed the LUT for the Node Graph viewer because:

  1. Compositing is the most important place to maintain proper color management.
  2. I know how to install it in the Node Graph viewer because it's the "Nuke" half of the application and Hiero is more difficult to customize.
  3. It was a pain in the ass to do it once and I don't feel like doing it a second time.

The unfortunate result is that my footage looks different in the Node Graph viewer and the Timeline viewer. Which sucks, but since I'm not doing any color work in the Timeline, and I made sure my footage was properly color-managed when I was doing the compositing work in the Node Graph, I know it'll look right when I move everything to Resolve.

Color Correction

When the VFX work is complete and the sequence is ready to be color corrected, I export an XML of the timeline from Nuke Studio. Why an XML instead of an AAF like the one I exported from Avid? Because, while Nuke Studio is capable of importing AAF, XML, and EDL files, it's only able to create XML and EDL files.

No, I don't know why.

And, since Avid can only create AAF and EDL files, I have to use 2 separate "professional file interchange formats" in my workflow. Makes total sense.

EDL is out of the question because it's the oldest of all exchange formats, with the fewest features, and a separate EDL has to be created for every video track in a given timeline. No thank you.

Resolve typically imports and relinks the XML without any issues. It does, however, fail to recognize that the EXR sequences have handles, so the vfx shots need to be slipped 12 frames in the timeline before proceeding to color correction.

A Question for the Audience

Being that all of our footage is Alexa LogC, at some point in the color correction process we should be using Resolve's built-in 3D Alexa LUT to linearize the footage and convert it to monitor-space. But, if you recall from earlier, Resolve doesn't separate the linearization LUT from the viewer LUT like Nuke does. So the question is: do we apply the LUT to the clips in the Media page before color correcting, or as a Node on the Color page? Should it be the first or last Node on a Shot? Or added to the Timeline so all clips can be corrected with one global Node? Some of those options affect preview thumbnails in the app. Does that bother you?

Clearly, I'm not sure of the correct answer. I've found that, no matter which option I choose, the results are questionable.

Ideally, we would linearize the footage prior to performing any color transformations so our math is correct, then we'd use a viewer LUT to view the tone-mapped image in monitor-space. Just like Nuke does. But, again, we don't have those kinds of tools in Resolve (by default). This is not simply an issue of semantics. While it's possible to arrive at the same final image, regardless of the order in which you add the LUT, getting it backwards will adversely affect the experience of using the color correction tools.

Adding the LUT before color correcting makes the Color Wheel tools in Resolve sensitive to the point that small color changes are near impossible. Which is understandable when you think about it. The wheels are expecting an input image with linear values across a certain range. When the values are compressed with an inverse-Log gamma curve, they will not change as you'd expect them to when you move the color wheel.

And this goes for almost any color space in any color correction tool. If you've ever had the experience of dragging a color wheel or slider and seen the image change much more dramatically than you expected, chances are the color tool was expecting a different gamma or color space than the image being fed into it.

Which is why, while a pain in the ass to use, the 2-step process of linearizing Alexa footage in Nuke is necessary to apply mathematical operations correctly. It's too bad Nuke's color correction tools are a bigger disaster than Resolve's LUT issues.

Regardless, this issue is the reason I was hesitant for a long time to make Resolve my primary color correction application. It has a (mostly) fixed UI and I found the small size of the color wheels and sliders frustrating to manage with a mouse or Wacom pen. But my frustration was due almost entirely to the sensitivity created by using incorrect LUT settings on my clips. When clips are correctly color managed, Resolve's color tools are much easier to use. Not great, but easier.

This is why, up until this year, I color corrected my projects in After Effects with Red Giant Colorista. I love Colorista. Love it. Its color wheels are dampened for smooth adjustments and include fast, easy to use tools for HSL adjustments that are a dream to work with. The reasons I went in search of a dedicated color correction application like Resolve had nothing to do with Colorista's toolset, and everything to do with my frustrations with After Effects.

Back to Color Correction

Assuming we've struggled our way through the LUT and interface difficulties and created a color corrected image we're happy with, the next step is to export the clips.

Depending on the project, this step will vary. If all work on the "picture" portion of the video is now complete, a single 1080p ProRes 4444 QuickTime file will be rendered of the entire sequence.

If the project requires Motion Graphics work that I either couldn't, or didn't want to, create during the VFX stage, each shot will be rendered as a separate ProRes 4444 QuickTime file.

The clips are rendered into their own subfolder, with their original names plus some sort of modifier appended to the filename, indicating they are the color corrected versions of the clip (typically _CC). As long as the names are consistent, they're easily relinked to an XML with Nuke Studio's bevy of conforming tools.

Motion Graphics

Assuming this is a project that needs Motion Graphics work, the individually-rendered shots are re-conformed into a sequence in one of two possible applications.

If I can do the work in Nuke, I will. Nuke isn't necessarily built for motion graphics, but it is my app of choice for most tasks. And, with the addition of my Node Sets gizmos, it's not as difficult to coordinate complex animations as it once was. The color-corrected QuickTimes are conformed back into Nuke Studio with the Build Track tool, and work begins, similar to the VFX phase of post.

There are, however, certain motion graphics tasks that are better suited to being completed in After Effects (read: anything to do with text). In which case the XML that was previously imported into Resolve is imported into After Effects using the Pro Import After Effects option, formerly known as Automatic Duck. The sequence comes in offline, and each clip is manually relinked to the corresponding color-corrected file.

Yes, I could conform the color-corrected plates back into Nuke Studio and generate a new XML that references them, saving myself the hassle of manually relinking incorrectly named clips in AE. But, more often than not, that method fails to reconnect all the clips and manual relinking is needed anyway, so I save myself some time and go straight to manually relinking.

I've also experimented with importing and relinking the XML in Premiere, thinking the NLE would have better luck relinking the clips than the compositing application, in which case I could use Send to After Effects to get it into AE. More complexity, more idiosyncrasies, more failures, more time wasted.

Once motion graphics work is completed, our picture should be locked. A single ProRes 4444 QuickTime file is rendered of the final timeline. This is one instance where I prefer to be working in After Effects. Though AE's renderer can be slow and is not without the occasional glitch, its stability and speed are miles ahead of Nuke Studio's, especially with regard to QuickTime files.

By my estimation, the number of successful QuickTime renders I've created with Nuke Studio is likely a single digit percentage. And the time taken to perform the render is somewhere between 2x and 10x the amount of time of other applications.

I recently tried to use Nuke Studio to create dailies instead of using Resolve. The estimated time to create ~20 minutes of dailies was 12 hours and the render failed on the first clip. Resolve knocked out the render on the first try (after correcting the pixel aspect ratio) in less than 30 minutes.

Have I mentioned that Resolve is free and Nuke Studio costs $10,000? I think it's worth mentioning again. Resolve is free and Nuke Studio costs $10,000.

Sound

With picture locked and rendered, now it's time to work on sound. Our original AAF file from Avid is imported into Final Cut Pro 7 with Automatic Duck. Yes, just like FCP 7 itself, the Automatic Duck plugin still works.

The sequence that comes in is our offline edit, but with the original WAV files I manually conformed in Avid prior to the AAF export. The ProRes 4444 file of our finished picture is imported and lined up with the offline timeline. Once aligned, all the offline video tracks are deleted, leaving just the final picture file and the offline audio. Using Reconnect Media, all audio clips are relinked to their original files.

A Quick Sidebar

One hiccup I frequently run into is the naming convention used by whatever audio recorder my sound recordist uses on set. The CF card he hands me on set is full of WAV files with names like 12T03. That is, Scene 12, Take 3. For some reason, the metadata for that file is named 12/03. I assume the / isn't in the file name because whatever file system the recorder is using doesn't play nicely with having a forward-slash in the name.

While I'm sure there's probably a way to have the recorder use the same file name in both places, what this issue requires of me in post is that I batch rename a copy of the original sound files, replacing the T with the / that FCP is searching for. For this task I use Name Mangler, but it could just as easily be done with OS X's batch renaming tools.
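
For the curious, the rename itself is dead simple. Here's a rough Python sketch of the same idea, assuming the copied WAVs are sitting in one folder; at the filesystem level macOS stores a Finder "/" as ":", so writing 12:03.wav is what shows up as 12/03.wav in the Finder (and to FCP):

# Rename copies of the sound files so FCP's Reconnect Media can find them.
# The folder path is a placeholder.
import os

wav_dir = '/Path/to/Copied_Sound_Files'

for name in os.listdir(wav_dir):
    base, ext = os.path.splitext(name)
    if ext.lower() == '.wav' and 'T' in base:
        new_name = base.replace('T', ':') + ext  # ":" displays as "/" in the Finder
        os.rename(os.path.join(wav_dir, name), os.path.join(wav_dir, new_name))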

Once the files are renamed, Reconnect Media in FCP 7 will now find and relink my files. Next, all audio clips are selected and sent to Soundtrack Pro with the Send to Soundtrack Pro Multitrack Project option.

Once in Soundtrack Pro, I edit, mix, and adjust my audio. I've been using Soundtrack Pro for years. I use it every time I edit an episode of Defocused, so I'm very comfortable and quick with its tools. Some day, when Soundtrack Pro inevitably breaks due to an OS update, I'll likely move to Logic Pro. But, as it stands, this old app does everything I need it to do and it does it quickly.

When work is completed, I export an AIFF file of the timeline and import it back into my FCP 7 project. I duplicate the sequence, keep the final picture, delete all the offline audio, and replace it with my final AIFF audio.

I now have a timeline with a single video track containing the finished picture, and a single audio track containing the finished sound. In some instances I'll end up with 2 audio tracks; one for music, one for everything else, but it's rare.

Final Exports

From the final timeline, a Same As Source QT file is exported to my 1080 folder and the final client encode is created. The Same As Source (ProRes 4444) file and the H264 file are saved together in the project directory.

Revisions

Sometimes, despite our best efforts, revisions need to be made after a "final" file has been delivered. In those instances, I will back up to the part of the process that needs updating, and the individual shot will be taken through the remaining steps by itself.

Once that shot has been updated, it will be placed on a second video track in the final FCP 7 timeline, above the previous clip. From there a new "final" encode will be made.

Since this step is almost always reserved for the last 1 or 2 shots that just need a small tweak, I've never found that the "final" timeline gets too cluttered with single shot revisions. If more than a couple shots need adjusting, the whole timeline goes back through the process with a new version number.

There is Too Much, Let Me Sum Up

Now that we're past the "how it's done" portion of this blog post, let's delve into some opinions.

Avid Media Composer

As I mentioned above, I still think Avid Media Composer is the best tool for editing. Every time I attempt to stray from it, trying something new, I always find myself rushing back to that ugly, antiquated interface that just gets the job done better than the competition.

That said, there are many parts of Media Composer that suffer badly from Avid's "if it ain't broke don't fix it" approach to software evolution. The effects system is stuck in the 1990s and complex animation is best left to other applications. The color correction tools are laughable. The lack of sub-frame audio adjustment confounds me. And things like the 0-255 or 16-235 color ranges, and crop/pad/resize import options, make the application feel like it was built for technicians, not artists.

I could go on, but since most of my complaints are the same complaints we've all had for many years, I'm sure you've heard them all before.

Nuke Studio

Nuke Studio's relinking and conforming tools are easily some of the best and most powerful tools in any application I've used. The timecode and metadata adjustments that can be made in the Spreadsheet tool are basically magic.

The problem with Nuke Studio is really that it's an application full of incredible tools that don't inter-operate with each other in a coherent or successful manner. The current paradigm of the separate Hiero Timeline and Nuke Node Graph is so confusing and broken that I wonder how anyone who wasn't previously using the individual applications could ever understand this impossibly complex, monstrous application.

Even when using the application exactly as intended, I often run into strange edge-cases that create unexpected results. And, for an application that creates and manages numerous, connected files on your hard-disk, with the idea that they'll be distributed to artists on a team, backing up and trying again when you encounter an issue is often more of a hassle than just pressing on with whatever unusual Nuke script was created when you clicked "Create Comp".

Recently on a project, each of my Nuke scripts had 30 crop nodes after each Read node (one for each clip in the timeline). Only 1 was turned on and active, and the output was correct, so why bother figuring out what caused it? Just shake your head at the expensive application and move on.

Still, NukeX remains the best compositing platform on the market. And, in spite of its issues, Nuke Studio does things that no other application I'm aware of can do. The conforming tools, the versioning tools, the roundtripping through vfx back to the timeline. All incredible features that are so great to have at your disposal if/when they work correctly.

AAF/XML/EDL Support and Interoperability

The appeal of Nuke Studio is the promise of seamlessly connecting an editorial timeline to powerful visual effects and color correction tools and applications. But we've had successful post-production pipelines since before the creation of Nuke Studio, made possible by the interchange of data via EDL, XML, and AAF files. These files are intended to be an open, universal language understood by video and audio applications.

In practice, support for these formats is frequently incomplete. I mentioned Nuke Studio's confounding lack of audio import support, but the one that really gets me is the exchange of basic effects. At some point, support for recognizing effects in AAF files was added to Nuke Studio.

Transforms usually come in correctly. Dissolves are usually deleted in favor of creating them again in NukeX because the Nuke Studio timeline doesn't honor clip transparencies by default, creating confusing results when dissolves are added to video tracks above V1. Nearly all other effects in the AAF file are ignored.

Specifics aside, there's no real way to know which pieces will and won't be recognized by a given application in your pipeline. And most applications won't tell you there were additional effects in the file that they were unable to interpret. Which is why that QuickTime reference export of the locked offline edit is so important. All we can really count on is that a basic sequence of clips will move from one application's timeline to another.

While I don't expect complete compatibility of files between competing applications, I find the current state of exchange tools and formats hugely disappointing.

GPU Acceleration and Other Ways to Ruin Your Day

Hands down, my most frustrating daily obstacle is the GPU in my Early 2013 Retina MacBook Pro. That's right, I do all of this work on a laptop connected to 2 additional external monitors. I love the portability and flexibility of this setup.

But with every software update of Nuke or After Effects or Resolve, more tools within the applications are being "accelerated" by offloading their processing to the GPU. In Nuke, I have the ability to override that acceleration and tell the application to process the effects on the CPU. In Resolve, I do not.

And the result is, depending on the type of footage I'm working with, I'll launch Resolve, open my project, and immediately be presented with a dialog box telling me my GPU memory is full. If I dismiss the message and attempt to do any work, even something as simple as scrubbing the editorial timeline, my computer will instantly lock up and kernel panic.

The only solution, when presented with this dialog, is to immediately quit the application and perform a full reboot of the computer to purge all GPU memory. Not much of a solution. And even then, medium-sized Resolve projects using inter-frame compressed video formats are able to max out the GPU with zero other applications competing for memory.

In all seriousness, the best solution to this problem is to buy a bigger, more expensive computer.

And Finally

At the end of the day, I don't feel great about my post-production workflow. I spend way too much time thinking to myself "there's got to be a better way to do this". And I'll continue to spend too much time thinking that until I find a new, less bad solution.

Or until my frustrations grow large enough to make me start my own software company and build the tools I've been desperately searching for. There's a reason so much of this site is dedicated to custom Gizmos and Python scripts. I continue to be unsatisfied with the tools I use to do my job.

Though, knowing the person I am, I'm not sure I'll ever entirely rid myself of that feeling.

Customizing Native Nuke Nodes with addOnCreate

As much as I enjoy building custom gizmos to make my work in Nuke more enjoyable, they're really no replacement for native nodes that (hopefully, eventually) include the tweaked feature that I want.

In fact, trying to use a custom gizmo as a replacement for a native node can add friction since I end up with two results in my tab+search window rather than one. When trying to use my beloved FrameHold gizmo, I end up adding the old, dumb, native FrameHold node about half of the time. It's partly because there are two FrameHold nodes in my results, and partly because my custom gizmo shows up lower in the search results than the native node. Sure, I could rename my gizmo to something unique to avoid ambiguity in the search, but that would come at the cost of precious muscle memory.

But, thanks to an email from reader Patrick, I've recently become aware of the addOnCreate command in Nuke's Python API. Essentially, addOnCreate allows you to define a function to be run whenever a given Class of node is added, opening the door to customizing the native Nuke nodes as they're added to your script.

As a quick test, I used addOnCreate to add some TCL code to the labels of my TimeOffset and Tracker nodes.

# Time Offset Label Modification

def OnTimeOffsetCreate():
  nTO = nuke.thisNode()
  if nTO is not None:
    # show the current offset amount right on the node
    nTO['label'].setValue('[value time_offset]')

# Tracker Label Modification

def OnTrackerCreate():
  nTR = nuke.thisNode()
  if nTR is not None:
    # show the transform type and reference frame on the node
    nTR['label'].setValue('[value transform]\n[value reference_frame]')

# Add the custom labels to newly created nodes

nuke.addOnCreate(OnTimeOffsetCreate, nodeClass="TimeOffset")
nuke.addOnCreate(OnTrackerCreate, nodeClass="Tracker4")

(code only)

I've long been a fan of using the label of my TimeOffset nodes to show me how many frames have been offset. It's especially handy for my kind of work, where a single animated element can be reused dozens of times, offset in time, and randomized to make large animations easier to create and manage. For Tracker nodes, it's important to keep track of both the Reference Frame used and the type of Transform being applied. Now, every time I add a TimeOffset or Tracker node, the additional information is automatically added to my node label.

Nuke default nodes on top. With custom labels below.

As expected, there are limits to what you can modify with the Python API but, henceforth, this method of interface customization is going to be my preference, resorting to creating gizmos as a fall-back when I run into a limitation. The thought of dropping a single Menu.py file into my local .nuke directory, and having all my Nuke customizations show up, is incredibly appealing to me.

Node Sets for Nuke v1.1

Since creating the Node Sets for Nuke toolset back in June, I've been using it like crazy on all of my projects. Which has led to the discovery of one incredibly obnoxious bug.

This little guy is the maxPanels property at the top of the Properties Pane:

This is where you set the maximum number of node properties panels that can be open simultaneously. I usually keep mine set to 3 or 4. When I open a node's properties panel, if I already have the maximum number of panels open, the oldest panel, at the bottom of the list, is closed and the new panel opens on top. Which is great.

Unless, of course, you're trying to simultaneously open an unknown number of properties panels, all at the same time.

When using the Node Sets tool for showing all nodes in a set, I would have to manually set the maxPanels number to a value greater than or equal to the number of nodes I'd already tagged, prior to running the command. Since I usually have no idea how many nodes are in a set, I end up setting the maxPanels property to something I know is way too high, like 35. That way, when the Show Nodes in Set function runs, I won't be left looking at only 3 of my tagged nodes.

But since the Show Nodes in Set command is already searching through all the nodes to see which ones are tagged, wouldn't it be great if it could keep a tally as it searches and automatically update the maxPanels property to match?

Yes. That would be nice.

# Node Sets for Nuke v1.1

# This function opens the control panels of
# all the nodes with "inNodeSet" on their label

def showOnlyChosenNodes():
  li = []
  for node in nuke.allNodes():
    if "inNodeSet" in node['label'].value():
      li.append(node)
  # update the maxPanels preference to match the number of tagged nodes
  numPan = nuke.toNode('preferences')['maxPanels']
  numPan.setValue(len(li))
  for node in li:
    node.showControlPanel()

# This function adds "inNodeSet" to a
# new line on the label of all the selected nodes

def labelNodes():
  for node in nuke.selectedNodes():
    label = node['label'].value()
    if 'inNodeSet' not in label:
      node['label'].setValue( label +  '\ninNodeSet')


# and this one clears the label of
# all the selected nodes

def unLabelNodes():
  for node in nuke.selectedNodes():
    label = node['label'].value()
    if 'inNodeSet' in label:
      node['label'].setValue( label.replace('\ninNodeSet','') )


toolbar = nuke.menu("Nodes")
nsets = toolbar.addMenu("Node Sets")
nsets.addCommand('Node Set: Show Nodes', 'showOnlyChosenNodes()')
nsets.addCommand('Node Set: Add Selected', 'labelNodes()')
nsets.addCommand('Node Set: Remove Selected', 'unLabelNodes()')

In addition to updating the showOnlyChosenNodes() function, I've also renamed the actual menu commands to all start with Node Set:. This way, I can start a tab+search with the same three letters, nod, and quickly narrow results to the only 3 tools that fit that criteria; my Node Set tools.

I love using Node Sets in Nuke and I'm glad to finally be rid of this annoying workaround.

Node Sets in Nuke

UPDATE: A newer version of this plugin exists here.

The story goes like this. It may sound familiar.

You're working on an animation in your favorite node-based compositing application, and you want to make a timing change. The first half of the animation is perfect, but it should hold a little longer before it finishes, to better match the background plate.

Problem is, you've got animated nodes all over your script, and all of their keyframes need to move in sync. Transform nodes, Grade nodes, GridWarp nodes.

You zoom in and move around your script, looking for nodes with the little red "A" in the upper right corner.

No, not that node. That one's for that other asset and it doesn't need to move.

Okay, got 'em all open? Now switch to the Dope Sheet, grab everything after frame 75 and slide it to the right a few frames. Done?

Let's watch the new timing.

Shit. Forgot one.

Which one?

Oh, here it is. Wait. How many frames did the other 6 nodes move?

Sigh.

CMD+Z. CMD+Z. CMD+Z. CMD+Z. CMD+Z. CMD+Z.

Okay, are they all open this time? Good. Now slide them all together.

Done? Let's watch it.

Better.

10 Minutes And 20 Additional Nodes Later.

Well...now I need a little less time between frames 30 and 42.

Dammit.

Feature Request

This is the annoying scenario I found myself repeating about a dozen times on a recent project, so I sent an email to The Foundry's support team, requesting the addition of a feature I described as "Node Sets".

A Node Set is an arbitrary collection of nodes that can be opened all at once with a single command. New nodes can be added as the script grows, or removed if they're no longer needed.

Along with my feature request, I provided these two screenshots to help explain:

What I received back from Jake, my new best friend at The Foundry Support, was the following script:

# This function opens the control panels of
# all the nodes with "inNodeSet" on their label

def showOnlyChosenNodes():
  for node in nuke.allNodes():
    if "inNodeSet" in node['label'].value():
      print node.name()
      node.showControlPanel()
    else:
      node.hideControlPanel()

# This function adds "inNodeSet" to a
# new line on the label of all the selected nodes

def labelNodes():
  for node in nuke.selectedNodes():
    label = node['label'].value()
    if 'inNodeSet' not in label:
      node['label'].setValue( label +  '\ninNodeSet')

# This function clears the label of
# all the selected nodes

def unLabelNodes():
  for node in nuke.selectedNodes():
    label = node['label'].value()
    if 'inNodeSet' in label:
      node['label'].setValue( label.replace('\ninNodeSet','') )

# These commands create a new menu item with
# entries for the functions above

nuke.menu('Nuke').addCommand('Node Sets/Show Nodes in Set', "showOnlyChosenNodes()")
nuke.menu('Nuke').addCommand('Node Sets/Add Selected Nodes to Set', "labelNodes()")
nuke.menu('Nuke').addCommand('Node Sets/Remove Selected Nodes from Set', "unLabelNodes()")

For those of you who don't speak Python, allow me to explain what's happening here. Once added to your Menu.py file, the script creates 3 tools in a new menu within Nuke.

Just as I requested, I have the ability to add or remove selected nodes from the group, then, when I need to make a change, open all of those nodes with a single command.

Magic.

Not Magic

What the script is actually doing is tagging the nodes. No, Nuke did not suddenly or secretly gain the ability to add tags to things, it's cleverly using the label section in the Node tab to hold the inNodeSet text. The Show Nodes in Set command simply scans all the nodes in your script for nodes with inNodeSet in their labels, and opens them. Simple. Smart.

As a result, yes, you can add the inNodeSet text to the label field manually, rather than using the new menu command, and the Show Nodes in Set command will find it, but who would want to do such a barbarous thing?

Customization

As with all commands in Nuke, a keyboard shortcut can be added to these commands to make the process even quicker. But, since I don't particularly enjoy cluttering up my menu bar with unnecessary menus, nor do I enjoy having more keyboard shortcuts than I can remember (I totally already do), I opted to move the commands into the Nodes menu. This is easily done by swapping the last 3 lines of Jake's script with these lines:

toolbar = nuke.menu("Nodes")
nsets = toolbar.addMenu("Node Sets")
nsets.addCommand('Show Nodes in Set', 'showOnlyChosenNodes()')
nsets.addCommand('Add Selected Nodes to Set', 'labelNodes()')
nsets.addCommand('Remove Selected Nodes from Set', 'unLabelNodes()')
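
And if you do want a keyboard shortcut on one of these, addCommand takes an optional shortcut string as a third argument. The key combo here is just an example, not one I actually use:

# Optional: assign a shortcut by passing a key string as the third argument
nsets.addCommand('Show Nodes in Set', 'showOnlyChosenNodes()', 'alt+shift+n')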

Here's where my tools now live.

I do this for one major reason; having these tools available in the Tab + Search tool. For those unfamiliar, Nuke has a built in tool similar to Spotlight or LaunchBar that allows you to press Tab then type the name of the tool you're looking for, avoiding the need to have keyboard shortcuts for every type of node.

Current Limitations

This being a bit of a hack, there are naturally a few limitations. First and foremost, using this tool will delete anything you already had in the label field of a node. It doesn't support adding the tag to existing text in the label field. The tag has to be the only thing in the label field.

Secondly, once you realize how useful this is, you may want to have more than one Node Set at your disposal. The good news about this current limitation is that you can very easily create as many node sets as you want by duplicating the code and changing the inNodeSet tag to something like inNodeSet2.
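
If you find yourself wanting more than one or two sets, a less copy/paste-heavy approach is to parameterize the tag. This is just a rough sketch, building on the functions and menu above:

# A sketch of parameterizing the tag instead of duplicating the functions.
# Assumes the "nsets" menu created above already exists.
def showNodesWithTag(tag):
  for node in nuke.allNodes():
    if tag in node['label'].value():
      node.showControlPanel()

for tag in ('inNodeSet1', 'inNodeSet2'):
  nsets.addCommand('Show Nodes: %s' % tag, 'showNodesWithTag("%s")' % tag)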

Of course, with multiple node sets, it'd be ideal if you could include a given node in multiple sets at the same time, but like I mentioned, this is not a real tagging system. If real tagging ever makes its way into the application, I imagine such a thing will then be possible.

Update - 2014-06-25

I emailed my pal Jake again, telling him how much I appreciate his work on this script, and you'll never guess what he did. He sent me an updated version of the script that adds the tag to the node label without overwriting the current text in the field.

Not only is this great for general usability, it means we can add a node to multiple Node Sets at the same time. We now have a real tagging system built into Nuke. How great is Jake? Seriously.

One thing I will note; if you are planning on using multiple Node Sets, you'll want to change the default tag to inNodeSet1. If you leave it as inNodeSet, it will also show up in results for other tags, like inNodeSet2.

Attribution

If it wasn't clear before, all credit for this script goes to The Foundry and their awesome support team. They continue to be one of my favorite companies, specifically because they offer great support in addition to their great products.

I'm incredibly happy to have this annoyance removed from my workflow, and I hope you are too.

Managing Disk Cache (with a Hammer)

Throughout the course of a given project, I use a handful of applications to complete my work. At the moment, I edit in Avid MediaComposer and Final Cut Pro 7. I do my VFX work in NukeX. I review shots and conform sequences in Hiero. And I do final color correction in AfterEffects with Red Giant Colorista II. There’s an obvious advantage to using the right tool for the task at hand, but there’s a caveat that I almost never remember until it smacks me right in the face: disk space.

I’m not talking about the space required for digital negatives, plates, renders, or project files. No, the piece I always manage to forget is the massive disk cache each application in my workflow creates on my startup disk [1]. With single frames of a sequence ranging between 5 and 30MB, and individual shot versions reaching well into double digits, disk cache can take up a ton of space. And since cache files are created whenever an element is changed and viewed, it’s not an easy task to estimate how much space will be used by a project ahead of time.

Most applications, including every application I use, have built in preferences designed to limit the size of the disk cache. Most also feature a big button labeled “Clear Disk Cache”. These preferences are great if you spend all your time in a single application, but are of little consolation when you’re halfway through previewing a shot in Nuke and OS X pops up telling you your startup disk is full. Once you’re in that sad, embarrassing moment, good luck getting AfterEffects to open so you can hit that “Clear Cache” button.

“To the Finder,” you say? Even if the Finder were responsive when your startup disk had less than 50MB of free space, are you the kind of person that keeps a sticky note on your monitor listing the paths to all the buried, hidden cache folders for each application? Neither am I.

Historically, I’ve gone straight to my project’s Renders folder and deleted my 2 oldest exports, giving me about 6GB of breathing room to go hunt down the various disk hogs. Not a great solution. What would be ideal is the ability to hit one keyboard shortcut, see a list of which applications are taking up space and, more importantly, how much space, then quickly purge the unnecessary files.

Normally, the sizes for AfterEffects, Nuke, and Hiero are in the double digits of Gigabytes. This screenshot was taken after using the script as intended.

The Script

cache.command:

#!/bin/sh  

clear  
echo Current Cache Size:  
echo      
du -c -h -s "/Users/dansturm/Library/Preferences/Adobe/After Effects/11.0/Adobe After Effects Disk Cache - Dan’s MacBook Pro.noindex/" "/var/tmp/nuke-u501/" "/var/tmp/hiero/" "/Avid MediaFiles/MXF/" "/Users/dansturm/Documents/Final Cut Pro Documents"  

echo      
read -p "Purge Cache files?(y/n) " -n 1 -r  
echo      
if [[ $REPLY =~ ^[Yy]$ ]]  
then
    rm -r "/Users/dansturm/Library/Preferences/Adobe/After Effects/11.0/Adobe After Effects Disk Cache - Dan’s MacBook Pro.noindex/"*
    rm -r "/var/tmp/nuke-u501/"*
    rm -r "/var/tmp/hiero/"*
    echo    
    echo Done!
    echo      
fi  

The script displays the space taken up by each application, the path to that cache folder, and a total of space used. It then prompts for a y/n input to delete the cache files.

Since managing Avid and Final Cut Pro media requires a bit more attention and nuance than a dumb hammer like this can provide, their disk usage is listed, but their files are not removed. If you modify the script to delete Avid or FCP media, allow me to preemptively say “I told you so” when your project becomes corrupted.

Details

I run the script with a FastScripts keyboard shortcut. I wanted the shortcut to open a new Terminal window to display the information, so the keyboard shortcut actually calls a second short script called cache.sh which takes care of that part:

#!/bin/sh  

chmod +x /Path/To/cache.command; open /Path/To/cache.command  

The paths for the cache folders are hard coded into the main script and, as I mentioned, not all items listed are deleted. I like to see as much information as possible, but manage the deletion list more carefully.

For me, the main offenders of disk cache consumption are AfterEffects and Nuke which, together, average nearly 200GB of disk usage. Once I’ve dealt with those 2 applications, I don’t usually need to go hunting for more free space.

Next time I accidentally inevitably fill up my startup disk with cache files, it’ll take seconds to rectify, rather than a half hour of excruciatingly slow, manual effort.

Hooray for automation!


  1. While it’s true that, in almost every application, you can re-map the disk cache folder to any disk of your choice, the entire point of a disk cache is to have the fastest read/write speeds possible, and no disk I own is faster than my rMBP’s internal SSD.  ↩

r_ScreenComp ToolSet for Nuke

In my previous post, I said I created QuickGrade so I could quickly balance my Log footage and get to compositing sooner. Based on the footage in the accompanying screenshot, you could probably guess what came next: replacing the screen on the device.

Creating a successful screen comp isn’t rocket science, but since it’s a very (very) common compositing task and, in many cases, the entire focus of a commercial, any tool that helps you work more quickly, while maintaining quality, is worth its weight in met-deadlines.

In uncharacteristic fashion, I’ll turn this post over to myself in video form to explain further.

There’s no chance I could have finished the number of shots I was assigned, in the time I had allotted, if it weren’t for this ToolSet. If you do a lot of screen comps in your day-to-day, even if you don’t use my ToolSet [1], I cannot recommend highly enough that you automate as much of the process as possible.

Download


  1. Installing a Nuke ToolSet is as easy as dropping the R_ScreenComp.nk file into ~/.nuke/ToolSets/  ↩

QuickGrade Tool For Nuke

Lately, I’ve been doing a lot of VFX work with Alexa Log-C footage. Compositing with Log footage generally requires a Viewer LUT so you can actually see what you’re doing. I don’t like using LUT files because they’re fixed color transformations and usually need to be supplemented with additional color tools on a per-shot basis. While I was working, I found myself repeatedly creating the same “custom viewer LUT” setup, so I decided to make it a dedicated tool called QuickGrade.

QuickGrade default parameters.

QuickGrade isn’t really intended to be a creative color tool, it’s built to quickly balance your footage and make it look “correct”. I decided which controls were relevant by noticing that nearly all Alexa Log-C footage I encountered required the same adjustments:

  • Contrast: Obviously. It’s built to be flat.
  • Exposure: After yanking on the contrast, it usually needs an exposure adjustment.
  • Saturation: It’s not inherently overly desaturated, but it benefits from a boost.
  • Green-cast: Alexa Log-C footage typically needs a healthy amount of “de-greening”.

For added flexibility, I also included controls for White Balance, Black Point, and White Point. The controls in the top half of the node are for luminance, and the bottom half for color. Quick and simple.
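
For a sense of what that amounts to under the hood, here’s a rough sketch of the same kind of balance pass built from stock nodes. To be clear, this is not the actual contents of the ToolSet, just the idea:

# Roughly the QuickGrade idea in stock nodes: contrast, exposure, saturation,
# and a small green pull. The values are arbitrary starting points.
import nuke

cc = nuke.nodes.ColorCorrect()
cc['contrast'].setValue(1.5)      # Log-C is flat by design
cc['gain'].setValue(1.2)          # exposure trim after the contrast move
cc['saturation'].setValue(1.15)   # modest saturation boost

degreen = nuke.nodes.Grade(inputs=[cc])
degreen['multiply'].setValue([1.0, 0.96, 1.0, 1.0])   # pull a little green out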

Workflow

Typically, I drop this node into the Node Graph above the Read node, unconnected, and set it as the Viewer Input Process [1], but it works just as well when used like any other color correction tool.

QuickGrade used as an Input Process.

Not A Gizmo

QuickGrade is a ToolSet for Nuke, not a Gizmo. ToolSets are easier to install than Gizmos, they show up in your toolbar without needing to write any Python code, and they’re easier to modify later if you feel the need. Since the whole idea behind creating this tool was ease of use and speed, the easy, lazy, ToolSet won out over the less easy Gizmo.

To install a Toolset in Nuke, just navigate to your .nuke directory. In there will be a folder called ToolSets. Unzip the QuickGrade.nk file, drop it inside, and you’re done.

Update: 2015-08-21

Okay, so I changed my mind about the whole Gizmo thing. Gizmos, unlike ToolSets, automatically open their properties panes when added to the node graph, which is great. So, now you can choose from either option below.


Download: Nuke Gizmo


  1. Using a node (or group of nodes) as a Viewer Input Process is as easy as right-clicking on the node, selecting Edit > Node > Use as Input Process. Bam.

My Custom Nuke Defaults

After a few busy months of post work, I finally found a few days to reevaluate and improve my workflows and system preferences. Since I spend a majority of my time in NukeX, that’s where I decided to start.

I’ve got some new custom tools I built recently that I’ll share with you soon enough, but first things first, Nuke’s default state needed a little adjusting. Here are the items I added to my init.py [1] file that save me tons of time and headache.

init.py:

# Project Settings > Default format: HD 1920x1080  
nuke.knobDefault("Root.format", "HD")  

# Viewer Settings > Optimize viewer during playback: on  
nuke.knobDefault("Viewer.freezeGuiWhenPlayBack", "1")  

# Write > Default for EXR files: 16bit Half, No Compression  
nuke.knobDefault("Write.exr.compression","0")  

# Exposure Tool > Use stops instead of densities  
nuke.knobDefault("EXPTool.mode", "0")  

# Text > Default font: Helvetica Regular (in Dropbox folder)  
nuke.knobDefault("Text.font",   "/Path/to/Dropbox/fonts/HelveticaRegular.   ttf")  

# StickyNote > default text size: 40pt  
nuke.knobDefault("StickyNote.note_font_size", "40")  

# RotoPaint > Set default tool to brush, set default lifetime for brush and clone to "all frames"  
nuke.knobDefault("RotoPaint.toolbox", "brush {{brush ltt 0} {clone ltt 0}}")  

Explain Yourself

Since I don’t work in feature film VFX, the HD frame size is a no-brainer.

I do a fair amount of motion graphics animation in Nuke, so I often have the Curve Editor open. I’ve always been frustrated that Nuke never seems to be able to achieve realtime playback when looking at curves, so I ended up making adjustments, then switching back to the Node Graph to view my changes. Very annoying. The recently added “Optimize viewer during playback” button was the answer to my realtime problems [2]. Like all of these custom preferences, I use it so often, I want it to be on by default.

I comp almost exclusively in Open EXR image sequences. For me, 16bit Half Float with No Compression is the appropriate balance of file size and quality. By default, the Write node sets compression to Zip (1 scanline) and it annoys the crap out of me to change it every time.

I love to use the Exposure tool, especially when color-correcting Log footage by hand [3]. But since I’m a filmmaker and a human being, I prefer to adjust exposure in Stops rather than Densities.

I set the Text node to use Helvetica by default and I keep the font in my Dropbox folder to make sure it’s always with me. Why? Because the default is normally Arial and seriously, are you kidding me?

I love using StickyNote nodes to write myself notes as I’m working. But because either my screen resolution is too high or I’m getting old and going blind [4], I always have to crank up the font size to read the damn things.

When I decide to use a RotoPaint node instead of a simple Roto node, it’s because I want to paint something. And more often than not, I want to paint or clone something for the entire duration of the shot, rather than just a single frame. Boom. Default.

What Else?

I would love to set the default feathering falloff in the Roto and RotoPaint nodes to smooth rather than linear, but I haven’t been able to figure out how to make that happen as of yet.

If you’d like to use these preferences in your Nuke setup, simply copy and paste the code into your init.py file in your .nuke directory. If you don’t have an init.py file in there, just open a text editor and make one.

Happy comping.

UPDATE – September 09, 2013, 04:55:04PM

As Joe Rosensteel pointed out on Twitter, another great tip is changing your 3D control type to Maya controls. I’m not a Maya user myself, but nearly all the 3D artists I work with are, and nothing makes them happier to help you out than saying, “Would you like to take a stab at it? You know the 3D controls are the same as Maya’s”. And the 3D control type preference is super easy to adjust. It’s part of the GUI in the application preferences pane, under the Viewers tab.


  1. If you were unaware, you can modify Nuke’s default state by creating a file called init.py in your .nuke directory. The application loads your preferences on launch and it’s easy enough to add/remove settings without screwing up your install. More info on page 18 of the Nuke User Guide  ↩

  2. I don’t remember exactly which version introduced it, but it’s the button that looks like a snowflake to the left of the playback controls.  ↩

  3. Yes, I’m familiar with the concept of LUTs.  ↩

  4. Rhetorical.  ↩

Intel Fabs Hit the (Really) Big Screen

The Show

Every year Intel® holds a conference for its Sales and Marketing Group known as ISMC. Over the course of several days, attendees get hands on with Intel’s latest products, meet with Engineers, take training classes, and attend keynotes from company executives.

Being the one major face-to-face event each year for the global sales force, the conference is a big to-do. For ISMC 2012, Intel Studios [1] was asked to create a video unlike any we had produced before [2].

The Project

One of the recurring keynotes at ISMC is given by the head of the company’s manufacturing division, giving the audience a look at the innovation and engineering behind the products they sell, as well as a glimpse at the company’s roadmap for the coming years.

Presenting for ISMC 2012 was Executive Vice President and Chief Operating Officer Brian Krzanich; one of Intel Studios’ regular customers. Over the years we’ve created a number of products to accompany his presentations, both internally and externally.

For 2012, the project request was straightforward. Intel was in the process of building two identical manufacturing facilities in Arizona and Oregon, at a cost of around $5 Billion each. The factories represent the state of the art for semiconductor manufacturing [3], but more importantly they are two of the largest cleanroom facilities in the world.

Our job was to highlight the massive scale of these new factories, larger than any Intel has built before, required to make microprocessors at a scale smaller than ever before. The two factories were identical so, since our studio was located in Arizona and production was to take place in December, the Arizona factory was chosen as our subject.

With little more direction than that, we began to develop a story for the video. During concept development we discussed a number of ideas for emphasizing the enormous size of the construction project with the appropriate amount of “wow factor”. Our answer came in the form of the video’s playback venue.

The Venue

The Anaheim Convention Center

The keynote presentations were set up with three projector screens above the stage that would be used together as one large, contiguous display. The combined screen measured nearly 160 feet wide, with a resolution of 7,360 x 1,080; more than twice the width of a standard cinema screen. With a display of such unique proportions, it was an easy decision to shoot panoramic video that would span all three screens, rather than create a collage of separate images to fill the space.

It was sure to be an incredible viewing experience, but within that opportunity was a major technical hurdle for the production team. At the time of production, no single camera existed that was capable of capturing an image of the required resolution. Still, we knew this was an avenue we wanted to pursue, and preproduction for a panoramic video began.

As we were discussing technical solutions to our resolution problem, it was suggested that we shoot with a single Red Epic camera, with a resolution of 5,120 x 2,700, and scale up the final image to fit the screen. While this would have been the easiest solution to implement both on set and in post, it failed the selection criteria for two reasons.

First, the image would be scaled to more than 140% of its original size, compromising the clarity of the final image. And even if a 140% blowup provided an acceptable level of quality, the math is not quite so simple. The Epic has a 5K Bayer pattern sensor that produces a measurable resolution closer to 4K. If we treat the Epic as a 4K camera, we’d be looking at a blowup closer to 180%. With a viewing distance between 30 and 200 feet in the auditorium, the quality loss may have been imperceptible to the audience, but the bigger issue was one of sensor size and optics.
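
For the record, the math behind those percentages is straightforward:

# Back-of-the-napkin blowup math for the 7,360-pixel-wide screen
screen_width = 7360.0
blowup_from_5k = screen_width / 5120   # ~1.44, a bit over 140%
blowup_from_4k = screen_width / 4096   # ~1.80, treating the 5K Bayer sensor as roughly 4K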

To highlight the size of the factory and take full advantage of the massive screen size in the auditorium, we needed to shoot images with a large Field of View (FOV). The FOV of a given image is determined by a combination of the lens’ focal length and the size of the camera’s sensor [4]. Since there were practical limitations to the focal length of the lenses we would use (more on that in a moment), the only way to create an increased FOV was to increase the width of our sensor. Since we can’t change the actual sensor in the camera, we would need to find a way to combine multiple cameras, each with their own Super–35 sized sensor [5], to simulate a larger sensor camera.
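
The relationship is easy to sanity check: horizontal FOV works out to 2 * atan(sensor width / (2 * focal length)). Here’s a rough sketch, assuming a Super-35-ish sensor width of about 25mm; the exact number depends on the camera and recording format:

# Rough horizontal FOV check for one camera vs. three sensors side by side
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

single_camera = horizontal_fov(25.0, 25.0)       # one body at ~25mm: ~53 degrees
three_cameras = horizontal_fov(3 * 25.0, 25.0)   # ignoring overlap and parallax: ~113 degrees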

Camera Selection

Now that we knew our solution would involve multiple cameras and wide angle lenses, we started crunching numbers to see which lenses we would use and how many cameras we would need. We could have easily satisfied the technical requirements for projection with two Epics, but we opted to use three cameras for a few reasons.

The first reason was this idea of a simulated large sensor camera. With a two camera solution, we’d need to use the widest possible lenses to properly capture the subject; the factory. The use of extreme wide angle lenses would give us a great deal of optical distortion around the edges of the frame and make postproduction very difficult when attempting to stitch the cameras together. Since we’d have essentially zero time to test just how much distortion would be too much, the safer choice was to use longer lenses on three cameras. Not to mention that using two cameras would place the stitch seam right in the middle of our final image. If there was any slop in the composite, there would be nowhere for our mistakes to hide. So, while we came prepared with a set of Tokina 11–16mm lenses, the widest focal length we used was around 25mm on two Red 18–50mm lenses and one Red 17–50mm.

On that note, we had initially hoped that postproduction would be as simple as hiding the seams of our final image in the small gaps between the three playback screens. However, during a preproduction meeting, we were informed that the center screen would be a good deal wider than the side screens, requiring us to deliver a seamlessly stitched image under the assumption that the seams would be on screen. Starting with a roughly 15K raw image gave us the ability to adjust the overlap between the three cameras based on the objects in the scene and the varying amount of parallax between the foreground and background objects; something that we would learn on set was extremely important for creating successful images.

Testing

After selecting the Epic as the camera for this job, we immediately requested three days of camera rental to test our yet-unproven design for the camera platform. Building an effective camera rig from scratch in under a week is a difficult task when it doesn’t involve multiple cameras.

As with most decisions involving money, it would take several days for us to receive an answer. With the start of production mere days away, there was no time to waste. Using what we had in the studio, we grabbed three Canon T3i DSLRs, some small tripod ball heads, and a cheese plate to create a proof of concept camera rig.

With no idea how best to align the three cameras, we created two proof of concept rigs; one to test correcting a vertical disparity between camera sensors and one to test a horizontal disparity. It was clear from the moment we turned on the cameras that the vertical rig was unusable, so on we went with our design of the horizontal rig.

Test rig with vertical disparity

Test rig with horizontal disparity

We set up the camera rig outside, near a corner of our office building, giving us a wide view of two sides of the building, the parking lot, and the street. Our primary concern for the stitching process was correcting the parallax discrepancies created by the physical separation of the cameras so, in the test footage, we had a person walk through all three frames, at a variety of distances from the cameras, to see how much error we would encounter in the stitching process. From there the footage was brought into Adobe AfterEffects, synced and aligned.

We determined we were able to stitch and sync the cameras to a relatively high level of success in just a few minutes but, as predicted, the camera separation caused serious parallax discrepancies that had to be corrected with shot-specific compositing. In our test footage, we got a successful panoramic image not by aligning the building that spanned all 3 cameras, but by ensuring that alignment was accurate at the object on which the viewer was focused. In this case, we had to ensure accurate alignment on the person crossing the screen and get creative with hiding misalignment in the background.

The aligning and stitching process felt surprisingly similar to Stereoscopic 3D postproduction we had completed for the 2011 ISMC Keynote [6] where we would ensure elements on the convergence plane were aligned perfectly, and would work around errors in the distant background or close foreground.

A major misstep in our testing process that would come back to bite us later was our failure to test camera movement. The final product was scripted to include pans, tilts, and jib moves, but we were in a hurry to report back to the production team as to the camera platform’s technical feasibility. We attempted to modify a tripod to accept our cheese plate camera platform, but it was clear from our lack of available hardware and materials that in order to test a moving shot we’d have to push back testing at least a day. After an hour of fruitless experimenting with the tripod, we gave up and decided to shoot the test static, propping up the rig on some apple boxes.

After shooting the tests, we spent the rest of the day trying to stitch the images, hiding seams and parallax errors. Just as we began to feel comfortable with the process that would be required, we received word that there was not enough money in the budget to test the real camera setup with Epics.

Building The Rig

By now it was Friday and production was set to begin on Monday. Two of our three rental Epics had arrived, along with a variety of cheese plates and assorted hardware. We spent the day gathering the rest of the production gear and laying out potential designs of the camera platform.

The first incarnation of our camera platform was built as small as possible to keep the cameras close together and minimize overall weight, but after seeing how much the metal flexed under the fully built rig, we realized larger and thicker cheese plates would be required. We also benefited from using larger plates in our ability to slide the cameras forward and backward on their dovetails, giving us greater balance control over the more minimalist rig.

​Initial smaller camera platform design

​Final camera platform with larger cheese plates

On Saturday the production team gathered in our studio to build what would hopefully be our final camera platform. We had our jib operator bring his gear so we could make sure our solution would mount properly to his equipment. The majority of the day was spent drilling into the cheese plates so we could countersink the large bolts that were necessary to hold the pieces together.

The addition of a small plate and some angle iron on the far side of the platform allowed us to attach a support cable to the arm of the jib, taking the weight off of the delicate motors and reducing the amount of bounce in the system. We wouldn’t find out for another couple of days, but it sure looked like we had a camera rig that was going to work.

​Here you can see the additional support cable connecting the far edge of the camera platform with the jib arm.

Just like every other step of the project, we documented the rig building process with our iPhones. When we had the cameras up and running, I posted two photos of our setup on Twitter.

The next day, I received a response from a gentleman named Zac Crosby that included a picture of a panoramic rig built with Epics that he had recently used for a project. His rig was different from ours, built in an almost cube shape with a larger angle between the cameras than we had chosen. It seemed as though Zac’s rig was built to serve a different purpose than ours, but our optimism was re-energized by the idea that we were not the first to attempt such a thing. Especially in light of the (completely unfounded) assumption that if his project had suffered some catastrophic failure, he would have cautioned us against shooting panoramic video.

Lens selection

Since we had to place our gear order before we had a rig design, we took our best guess at which lenses we would need. With our goal of creating a massive wide-angle image, we ordered three Tokina 11–16mm PL mount lenses, as well as two Red Zoom 18–50mm lenses to supplement our own Red Pro Zoom 17–50mm lens.

To mitigate some of our risk, not every shot in the video was to be shot panoramic. There would also be instances of collages made up of multiple images, so we brought along our Red Pro Primes, as well as an Angenieux Optimo 24–290mm zoom.

When we finally got lenses on the cameras, we determined a focal length between 20mm and 40mm gave us the best balance of a wide FOV, minimal lens distortion, and enough overlap to properly stitch the shots.
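
For reference, the horizontal field of view of a single camera can be estimated with the standard pinhole formula, ignoring lens distortion (which, on lenses this wide, is not a small omission). A quick sketch using the Epic’s sensor width from footnote [5]:

```python
import math

def hfov_deg(focal_mm, sensor_w_mm=27.7):
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))

for f in (11, 20, 30, 40, 50):
    print(f"{f}mm -> {hfov_deg(f):.1f} degrees horizontal FOV per camera")
```

At 20mm that works out to roughly 69 degrees per camera and at 40mm roughly 38 degrees, before accounting for the overlap consumed by stitching.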

For the majority of the shoot, all three cameras were sporting the Red Zoom lenses. Since we already owned a Red Pro Zoom 17–50mm lens, we only rented two more zooms. In our search for local rental gear, we were only able to find the older 18–50mm Red Zooms. Having never put the updated model side-by-side with its predecessor, we were unaware of the dramatic differences in optical distortion between the two.

The mismatched distortion turned out not to be a problem, but because we had been unable to properly test the camera setup, we didn’t discover it until we began stitching dailies on set and found that undistorting the images was not producing the expected results. A potential disaster, averted by sheer luck.

On Set

Monday morning started with a lengthy safety briefing from our site escorts before we drove our grip truck, DIT station inside, onto the heavily guarded construction site.

Once inside, we built our camera rig on the jib. We brought along a Fisher 10 dolly, but it was mostly used as a building and transportation platform for the camera rig before transferring it to the jib for shooting. The dolly was also used, to a lesser degree, for one-off static shots between setups. It was the last thing loaded onto the truck and the first thing off, so we occasionally rolled it to the edge of the lift-gate and picked off a few shots from an elevated position.

The cameras were set to record in 5KFF (to take full advantage of the sensor’s FOV) at 24fps and a compression of 6:1. As is often done on shoots involving multiple cameras, each camera, its associated accessories, and magazines were color coded to avoid confusion. A small effort that greatly helped speed up downloads and dailies.

Once we had the jib operating on a live set, we immediately noticed that any tilting of the camera platform caused the left and right cameras to dutch severely. Obvious in retrospect, as the angled cameras were rotating about an axis that was not aligned with their own focal planes.
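
If you want to convince yourself of the geometry, composing the two rotations makes it plain: yaw a camera on the platform, tilt the platform, and the camera’s orientation now contains a roll component. A quick sketch with SciPy, using assumed angles rather than measurements from our rig:

```python
from scipy.spatial.transform import Rotation as R

yaw  = 30.0   # assumed toe-in of a side camera relative to the platform, degrees
tilt = 20.0   # tilt applied to the whole platform, degrees

side_cam = R.from_euler('y', yaw, degrees=True)             # camera yawed on the plate
tilted   = R.from_euler('x', tilt, degrees=True) * side_cam  # then the whole plate tilts

y, x, z = tilted.as_euler('YXZ', degrees=True)  # decompose into yaw, pitch, roll
print(f"yaw {y:.1f}, pitch {x:.1f}, roll {z:.1f}")  # roll comes out around 10 degrees
```

With a 30-degree toe-in, a 20-degree tilt hands the angled camera roughly 10 degrees of roll, which reads on screen as a badly dutched frame.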

Since our time on the construction site was limited, redesigning the camera platform was not an option. Instead we had to limit our camera movement to booming up and down.

We discovered another limitation of the camera rig while attempting to swing the camera from left to right, following our talent, and booming up over a large mound of dirt to reveal the construction site. Since our talent and dirt hill were about 20 feet from the camera and the construction site was about 200 yards away, our parallax was irreconcilable and, due to the horizontal movement of the jib, there was nowhere to hide the seams should we try to hack the shots together.
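
For a rough sense of scale, a bit of back-of-the-envelope math shows why the seams had nowhere to hide. The camera spacing and focal length below are assumptions for illustration, not measurements from the rig:

```python
sensor_w_mm = 27.7    # Epic sensor width (see footnote [5])
image_w_px  = 5120    # 5K frame width
focal_mm    = 25.0    # roughly the middle of our 20-40mm working range
baseline_m  = 0.3     # assumed spacing between adjacent lens centres
near_m      = 6.0     # talent and dirt mound, about 20 feet
far_m       = 180.0   # construction site, about 200 yards

focal_px = image_w_px * focal_mm / sensor_w_mm

def shift_px(distance_m):
    """Approximate on-screen offset of an object between adjacent cameras."""
    return focal_px * baseline_m / distance_m

print(f"near subject shifts ~{shift_px(near_m):.0f} px between cameras")
print(f"far background shifts ~{shift_px(far_m):.0f} px between cameras")
```

With those numbers the foreground sits a couple hundred pixels apart from camera to camera while the background moves less than ten; you can align one or the other in the stitch, but never both, and a moving camera takes away the option of hiding the compromise behind a convenient seam.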

After we declared the setup unusable, the recommendation was to either reshoot the scene, limiting the camera move to a simple boom-up, or spend an unknown amount of time in post separating and completely rebuilding the shot in VFX, with no guarantee of success. As with production, our post timeline was extremely limited, so we opted to reshoot the scene the next day.

Once we understood the rig’s limitations, the production proceeded relatively smoothly. The only hiccup occurred when one camera’s dovetail came loose in transit, causing the camera to do a backwards somersault off the jib and onto the ground. Luckily the jib was only about 12 inches off the ground and the camera landed squarely on the Red Touch 5.0 LCD. The camera was unharmed, and the LCD was perfectly functional, but the metal swivel near the LEMO connection snapped and the frame of the LCD was scuffed [7]. The Epic, though, is a tough camera and we were back up and running in a matter of minutes.

Since we knew we would need to undistort the images from each camera in order to stand a chance at stitching them together, we created several 24’’ x 36’’ checkerboard grids on foam boards that were recorded at the beginning of each setup.

Our inability to test the rig in preproduction also resulted in one of our more clever solutions for syncing the cameras. We had neither the cabling nor the knowledge to properly timecode sync three Epics. Our solution involved placing our 24’’ x 36’’ foam lens distortion grid about 4 inches away from the center camera so it was just barely visible on all three cameras at the same time. When the card was in place, someone would flash a DSLR flash against the white board, causing a flash-frame on all three images. It didn’t give us perfect results, and sometimes we had to flash the board twice since the shutters on each camera were not synced, but it was effective enough to give us a useful sync point.
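
We lined the cameras up using those flash-frames. If you wanted to automate finding them, hunting for the brightest frame in each camera’s dailies would get you most of the way there; a rough sketch, with hypothetical paths:

```python
import glob
import numpy as np
from PIL import Image

def flash_frame(pattern):
    """Return the index of the brightest frame in a JPEG dailies sequence;
    the flash-frame shows up as a spike in mean luma."""
    frames = sorted(glob.glob(pattern))
    means = [np.asarray(Image.open(f).convert("L")).mean() for f in frames]
    return int(np.argmax(means))

# Offsets relative to the centre camera give the slip needed to sync each plate.
center = flash_frame("dailies/shot_010/cam_center/*.jpg")
left   = flash_frame("dailies/shot_010/cam_left/*.jpg")
right  = flash_frame("dailies/shot_010/cam_right/*.jpg")
print("left offset:", left - center, "frames; right offset:", right - center, "frames")
```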

DIT

​DIT station in the back of the grip truck

I was acting as the production DIT and Visual Effects Supervisor. As such, I was responsible not only for backing up footage, but also for attempting to stitch as many shots as possible while we still had the opportunity to reshoot them should there be an issue with a given setup.

Since nearly all of our prep time went to assembling the camera rig, we didn’t have much of an opportunity to customize our DIT station. I was able to make sure the system arrived with a large eSATA RAID, additional eSATA ports for the RED Stations, and a Red Rocket card for realtime processing. The only software I had an opportunity to load, aside from After Effects, was a tethering app for my iPhone, allowing me to download software in the field as needed [8].

Being essentially an “out of the box” Mac Pro, the system sat outside our firewall and couldn’t reach the license server for Nuke. Our composites were to be finished in Nuke, but stitching on site was only possible with our local copy of After Effects, resulting in some rework for me later and a few inconsistencies between the on-set results and the final composites.

All backups were performed with R3D Data Manager and all dailies were created with RedCine X. For the dailies, I selected the best take of a given setup, and created a full resolution, 5K JPEG image sequence from each camera.

The JPEG sequences were immediately imported into After Effects and undistorted with the help of the lens grid charts. Since we used a limited number of focal lengths during shooting, I was able to reuse lens distortion data to speed up the stitching process from shot to shot, getting the undistorted plates “close enough” for a quick and dirty composite. Imperfect lens correction was acceptable at this point because stitching on set was only for the purpose of checking parallax errors and determining whether we would be able to hide issues in the final composite.
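
The undistortion itself was done in After Effects with the grid footage, but the same idea expressed with OpenCV looks something like the sketch below; the paths and the checkerboard’s interior corner count are hypothetical:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # interior corners of the checkerboard (assumed count)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("grids/cam_center/*.jpg"):   # frames of the grid chart
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve the lens model once per camera and focal length...
_, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# ...then reuse it on every plate shot at that focal length.
frame = cv2.imread("plates/shot_010/cam_center.0001.jpg")
cv2.imwrite("undistorted.0001.jpg", cv2.undistort(frame, mtx, dist))
```

The key point, which held in After Effects as well, is that the solve only has to happen once per camera and focal length; after that, the same correction can be reapplied to every plate.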

When I felt I had a composite that was good enough, and would be easy to finalize in Nuke with a proper amount of attention, I rendered a 3,680 x 540 H264 file to watch (repeatedly) at full screen on the 30’’ Apple Cinema Display. Once the Director, Producer, and DP had signed off on the stitch, I moved on to the next shot, while continuing to download, transfer, and render in the background. When our four-day production ended, we had nearly 4TB of data, consisting primarily of the raw camera backups.

The largest obstacle to overcome for the DIT work was my physical location. Since we were on a secure construction site with a minimal number of escorts, the most pragmatic location for the DIT station was in the back of the grip truck, traveling with the crew. Our grip truck is equipped with an on-board generator, but because it couldn’t be left running unattended, I was unable to charge batteries or back up footage overnight [9], and since we had contractors working on our crew, we were held to a strict 10-hour day [10].

Another frustration for the DIT work was the cold temperature in the back of the grip truck. If you’ve never been to a desert like Phoenix in the winter, you probably wouldn’t expect the temperature to drop as low as it does. Construction shifts begin early and so did we. Our call time each day was pre-dawn, with temperatures in the 30s Fahrenheit. The days warmed up around midday, but the shaded spot housing the metal Mac Pro typically stayed around 50 degrees.

I bring up the cold mornings not as a complaint about uncomfortable conditions, but because extreme temperatures affect electronics. The DIT station was built into a rolling shipping container designed specifically for working in the field. That was great for portability, but it made swapping components incredibly difficult, specifically the main surge protector that powered the system. Each day, the device that had worked perfectly in our warm office the week before refused to power up on the first three attempts. On the fourth, the system would boot and remain on all day, but it was a scare we didn’t need each morning.

In addition to the surge protector issue, the monitor occasionally obscured the image with a snowy noise not unlike an old analog television. My assumption at the time was that the graphics card had come loose during travel and needed to be reseated in its slot. A reboot and a jiggle of the DVI cable seemed to resolve the issue each time it occurred.

In recent weeks, the system was still exhibiting some odd behavior. After the graphics card was swapped to no avail, the system was sent back to Apple for further investigation. They determined the fault lay with the electronics in the 30’’ Cinema Display, not the graphics card. Additionally, they found a crack in the Mac Pro’s motherboard that had occurred on the shoot, proving just how lucky we were to even finish the project.

​Monitor problems

The Edit

At the time of production, Intel Studios’ primary NLE was Final Cut Pro 7. One limitation of FCP7 was its inability to edit projects at resolutions above 4K. We had built a custom project template inside Final Cut that allowed us to edit in 4K and transfer the project to After Effects for finishing at full resolution, but it was overly complicated and introduced many opportunities for failure.

As a result, we took this opportunity to audition Adobe Premiere Pro CS 5.0 as a replacement NLE [11]. Premiere offered us the ability to create a project at our full, final resolution of 7,360 x 1,080. With the aid of another Red Rocket card, the editor was able to assemble the project from the full resolution R3D files.

At the same time, I created EXR and JPEG sequences of the selected panoramic takes and built the final stitched composites in Nuke. Ninety percent of the compositing was done with the JPEG sequences to ease the load on the CPU and our network. This became especially important when we found the best results came from using camera re-projection in Nuke’s 3D environment and photographing the 3D composite at the full 7K resolution. While this technique slowed the compositing process compared to a traditional 2D composite, the time was reclaimed by reusing the 3D camera re-projection rig, again thanks to the limited number of focal lengths used on set. Before rendering, the JPEG sequences were swapped out for the EXR sequences and the final color matching between the three cameras was adjusted.
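
We built the re-projection rig inside Nuke’s 3D system, but the underlying idea can be sketched outside of Nuke: trace a ray from every pixel of a wide virtual camera back through one of the toed-in source cameras and sample its undistorted plate. A toy Python version, with made-up resolutions, focal length, and toe-in angle:

```python
import numpy as np

src_w, src_h   = 5120, 2700          # one undistorted 5K plate
pano_w, pano_h = 7360, 1080          # the final panorama
f_src  = src_w * 25.0 / 27.7         # source focal length in pixels (25mm lens)
f_pano = pano_w * 25.0 / (3 * 27.7)  # virtual camera wide enough for all three

yaw = np.radians(30.0)               # assumed toe-in of the right-hand camera
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [ 0.0,         1.0, 0.0        ],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])

# Build a ray for every panorama pixel and express it in the source camera's frame.
xs, ys = np.meshgrid(np.arange(pano_w) - pano_w / 2.0,
                     np.arange(pano_h) - pano_h / 2.0)
rays = np.stack([xs, ys, np.full_like(xs, f_pano)], axis=-1)
rays = rays @ R                      # world ray -> this camera's frame

# Project back to source pixel coordinates with a simple pinhole model.
u = f_src * rays[..., 0] / rays[..., 2] + src_w / 2.0
v = f_src * rays[..., 1] / rays[..., 2] + src_h / 2.0
valid = (rays[..., 2] > 0) & (u >= 0) & (u < src_w) & (v >= 0) & (v < src_h)

plate = np.zeros((src_h, src_w, 3), np.float32)  # stand-in for a real frame
pano  = np.zeros((pano_h, pano_w, 3), np.float32)
pano[valid] = plate[v[valid].astype(int), u[valid].astype(int)]
```

In Nuke, the equivalent setup is a projection camera, geometry to catch the projection, and a ScanlineRender photographing the scene at the full output resolution; the sketch above is just the arithmetic behind it.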

When a shot was completed it was again rendered at 3,680 x 540, this time in ProRes HQ, and scrutinized on a 30’’ computer monitor. Once approved, a 7,360 x 1,080 TIFF sequence was sent to the editor for integration into the video.

How To Build A Better Rig

If we were to attempt such a project again (hopefully with a bit more time and money), I’d be very interested to test the potential use of a stereo 3D beam-splitter camera rig.

While it’s certainly not designed for this purpose, a stereo rig set to zero interaxial distance, using the convergence adjustment to pan the second camera, could be the basis of a panoramic video rig that would be much more forgiving of parallax errors and create better-looking final images. Additionally, if we were able to remove the majority of the parallax errors by getting the sensors closer together, using two cameras instead of three might be feasible.

So, How Did It Look?

After all the production hurdles we encountered, I must admit, seeing the 160-foot-wide, 7K+ panoramic video was a beautiful thing. More importantly, the crowd and the customer loved it. It’s hard to ask for more than that.

The Crew

By now, one thing that should be patently obvious is that the success of this project was due entirely to our dedicated and talented crew. I would be remiss if I did not recognize them here:

Writer/Director: Roland Richards

Producer(s): Charlyn Villegas, Keith Bell

Director of Photography: Jeff Caroli

1st AC: Josh Miller

DIT/Compositor: Dan Sturm

Editor: AJ Von Wolfe

Music by: Karlton Coffin

And additional thanks are in order for Keith Bell, who both commissioned this write-up and offered editorial guidance.


Photo Gallery


  1. Intel Studios is an internal media team within Intel® Corporation.  ↩

  2. In the name of disclosure, I must inform you that, as of March 2013, I am no longer an employee of Intel® Corporation.  ↩

  3. 14 nanometer process technology, to be more specific.  ↩

  4. For a practical demonstration of how sensor size affects field of view, check out this awesome web app from AbelCine: http://www.abelcine.com/fov/  ↩

  5. Technically the Red Epic sensor is slightly larger than Super–35, measuring 27.7mm (h) x 14.6mm (v) versus Super–35’s 24.9mm (h) x 14mm (v).  ↩

  6. A very long story for another time.  ↩

  7. We immediately purchased a replacement LCD for the rental house. Sorry Jason and Josh!  ↩

  8. I do not recommend this solution, even if you have an unlimited data plan. Talk about unreliable.  ↩

  9. The construction site, until completion, was the property of the construction company. Despite being an Intel facility, we were guests and required to abide by a great many rules regarding safety. Leaving an unattended generator running in a truck overnight was not allowed.  ↩

  10. When you factor in security briefings, the inability to leapfrog setups, and waiting for construction cranes that cannot be directed, 10 hours is much less time than you’d think.  ↩

  11. Since development of FCP7 had been abandoned by Apple in favor of the replacement product FCPX, we were already in the market for a new NLE.  ↩