Home

You’re Doing That Wrong is a journal of various successes and failures by Dan Sturm.

Incrementing File Versions with Keyboard Maestro

File version management is one of the most important tenets of an efficient, sane post-production workflow. That's why most of the apps we use for our work have built-in tools for "increment and save" or scanning for new versions of an asset. Which makes it pretty odd that I haven't universally quashed this minor annoyance with some simple automation.

Primarily, when I need to version-up a file in Finder, I'm hitting CMD+D, deleting "copy" from the end of the file name and then manually incrementing the version number by typing. What am I, a farmer?

Bound to a hyperkey shortcut, this Keyboard Maestro macro makes it super quick to version up a selection of files in Finder.

Steps

  1. Get the selection of files in Finder
  2. Split the file path into three pieces: everything before the version number, the version number, and everything after the version number
  3. Drop the "v" from the version number. It's cleaner.
  4. Increment the version number with two-digit number padding
  5. Assemble the new file path and name
  6. Copy the original file to its new destination
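
For illustration, here's the same logic as a quick Python sketch. This isn't the macro itself, just the idea behind steps 2 through 5, assuming a filename like spot_v02.mov and the macro's fixed two-digit padding:

import re

def next_version(path):
    # Step 2: split into everything before the version digits, the digits, and the rest
    before, version, after = re.match(r"(.*v)(\d+)(.*)", path).groups()
    # Steps 4 and 5: increment with two-digit padding and reassemble the name
    return "%s%02d%s" % (before, int(version) + 1, after)

print(next_version("spot_v02.mov"))  # spot_v03.mov
# Step 6 would then copy the original file to the new name, e.g. with shutil.copy2()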

An Improvement

One thing I don't love about this macro is that it uses rigid two-digit number padding for the version number, no matter the padding on the original file. That's an artifact of the way Keyboard Maestro variables and calculations work.

As much as I disagree with using more than two digits in a file version number, consistency and predictability are far more important to me. So, after a quick chat with the folks over on the Keyboard Maestro forum, I swapped out the "Set Variable to Calculation" action in Step 4 with the following actions, suggested by The Keyboard Maestro himself, Peter N Lewis:

  1. Replace all digits in the version number variable with zeros to create a "padding format"
  2. Build the calculation with KM Variables and our new padding format
  3. Evaluate the calculation
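
In Python terms, Peter's padding trick looks something like this little sketch:

version = "010"                      # three-digit padding this time
padding_format = "0" * len(version)  # step 1: "010" becomes "000"
new_version = str(int(version) + 1).zfill(len(padding_format))  # steps 2 and 3
print(new_version)                   # "011"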

If this change seems overly-complicated and nearly identical to the original version of the macro, it is. The reason we're doing it this way is that the "Set Variable to Calculation" action does not currently support tokens in the "Format" parameter (though Peter says that's fixed in the next version). Building our own calculation formula out of variables and evaluating that formula with a Filter action is functionally the same, but with the extra flexibility we needed. Neat.

iSight Scripting Redux

I was looking through some old photos on my iPhone the other day, reminiscing. Unfortunately, there weren’t a lot of photos from the period of time I was scrolling through. My craving for personal nostalgia was not fully satisfied. Luckily, I remembered another place I might find a few more memories to peruse: my iSight Capture folder in my Dropbox archive.

To spare you the headache of reading that meandering, ill-framed, eight-year-old blog post, here’s the gist of what I built back in 2012.

I set up a bash script and launchd task that used a CLI tool I found called isightcapture to take a photo through my MacBook Pro’s iSight camera every 2 hours and put it in a Dropbox folder.

I had some grand idea that this would help me locate my laptop if it were ever stolen, giving me photographic clues to its location. Which was incredibly stupid.

But, in setting up this little script, I inadvertently created a sort of photo diary for myself. These incredibly unremarkable images gave me a consistent snapshot of my day-to-day life that, as the months and years passed, became an incredible record of one of the more interesting (to me) periods of my life.

The photos themselves are terrible quality. They’re 640x480 resolution. They’re dark, grainy, and usually framed badly. In some cases, you can barely even see me. But I can see me.

I can see the frustration and immaturity on my face in 2012 — my last year at Intel — before I left to pursue…something I hadn’t worked out yet.

I can see myself doing everything in my power to suppress a sea of self-doubt when I found myself working in the Sandwich Video office for a few weeks in 2013. I was absolutely exhausted driving back and forth between Phoenix and Los Angeles probably a dozen times over the course of a few months. But I was (and still am) so grateful for everything I learned from that experience.

I can see the unexpected enthusiasm on my face as I taught two semesters of post-production finishing and vfx at the film school at Scottsdale Community College; taking over for a professor who had moved away.

There are the backstage photos of those sales and marketing conferences I worked. The ones where we basically didn’t sleep for three days, shooting and editing around the clock.

I see my temporary office setup in the family room of my dad’s house where I lived for a month after moving back to Phoenix from San Jose. Behind me in the photos are the dozens of moving boxes my now-wife and I couldn’t unpack until we found a new place to live. (I really didn’t plan that career/life transition well at all.)

I can see a whole lot in those crappy, automated webcam photos.

May 26, 2016

I don’t remember why, but in May of 2016, my script broke. This is the last photo it took.

I figured I’d find a way to fix it, at some point. Eventually, though, I just gave up and moved the folder of images into my unsynced Dropbox archive folder.

But, looking through those old photos gave me the motivation to give it another shot.

ImageSnap

In the years since my script broke, isightcapture has been abandoned. I found a thread in a forum somewhere mentioning ImageSnap as a potential replacement, and it turned out to be perfect for my needs.

First of all, it’s able to capture the full resolution of my 2020 iMac’s new 1920x1080 FaceTime camera. Still, each photo is only about 250 KB.

Second, it’s much faster than isightcapture. Both apps, when taking a photo, activated the camera’s green “active” light (as they should). But with isightcapture, I was occasionally able to notice the green light in my peripheral vision, look at the camera, and pose for the photo. So far, no matter how quick my reaction time, each photo taken by ImageSnap has been captured before my eyes were able to dart up to the camera. Which is exactly what I want.

I’m genuinely surprised just how fast it is. Especially because the command I’m using to take the photo includes a one second delay to let the camera “warm up”. The hardware (apparently) needs that to avoid occasionally taking an underexposed image.

Here’s the single-line shell script that takes the photo:

/usr/local/bin/imagesnap -d "FaceTime HD Camera (Built-in)" -q -w 1 /Users/dansturm/Dropbox/Photos/iSight_Capture/$(date +"%F—%H-%M-%S").jpg

Automator?

I’m running this script by way of an Automator “Application” because, thanks to macOS Catalina, things that access the camera of my Mac need to throw a macOS system permissions dialogue box so I can grant them explicit access to the hardware. Currently [1], running the command directly inside Keyboard Maestro does not cause the system to prompt for camera permissions, so the command fails.

Not to worry, though. We’ll use Keyboard Maestro to launch the Automator application at the interval of our choosing because it has great trigger options.

Having the app take a photo every two hours was never ideal. A full third of the images were just black because they were taken in the dead of night. And another third were of my empty office chair. Not terribly interesting.

So, this time around, I set the macro to trigger at 10am, 1pm, and 4pm. Three times I can almost guarantee I’ll be sitting at my desk, in front of the camera. And the Keyboard Maestro trigger is flexible enough that, if I decide later to expand the window of time or frequency of images, it’s an easy adjustment.

It’s a little strange how excited I was to get this thing working again. My life is far less unpredictable than it was eight years ago, and it’s likely that I’ll just end up with hundreds of photos that look exactly the same, featuring nothing of any real interest.

But, you know, that’s exactly what I thought the first time I set it up.


  1. Thanks to Dr. Drang and Peter N Lewis for helping me diagnose this issue. And it sounds like Peter is working on a fix.  ↩

Project Folder Structure and Task Management

For my work, producing videos, it’s incredibly important to keep my files and tasks organized. Managing multiple versions of sequences and shots across thousands of files, hundreds of gigabytes in size, can and will get unwieldy very quickly. The better organized I am from the start, the less likely I’ll be banging my head against my desk in the eleventh hour.

Here’s a brief overview of what I do and how I do it.

The Folder Structure

I’ve been using roughly the same folder structure for my projects for the better part of a decade now. Here’s what it looks like:

For the record, I hate the macOS list view and I never use it while working. I'm only using it here to show you the whole folder structure in one image.

I have high-level folders for the key stages of production: pre-production, production, post-production, and final deliverables. Within each of those folders are subfolders corresponding to specific types of assets that will be gathered or created along the way.

It’s worth mentioning that this folder structure is just the starting point for each of my projects. Not every folder will be used on every project. And, for some projects, many additional folders will be added. For example, a project that includes vfx work (most of them) will have files and folders programmatically created for each vfx shot within the Comp Files, Plates, and Renders folders [1].

Launchbar

To create that folder structure, I run a Keyboard Maestro macro — from Launchbar — that gives me this little pop up:

I give the project a name, hit Return, and the new folder is created in my “Projects” folder in Dropbox.
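
Under the hood, it's nothing fancier than folder creation. Here's a minimal Python sketch of the same idea; the folder names below are illustrative placeholders, not my exact structure:

from pathlib import Path

# A placeholder subset of the real folder structure
FOLDERS = [
    "01_Pre-Production",
    "02_Production/Footage",
    "03_Post-Production/Comp Files",
    "03_Post-Production/Plates",
    "03_Post-Production/Renders",
    "04_Deliverables",
]

def create_project(name, root="~/Dropbox/Projects"):
    base = Path(root).expanduser() / name
    for folder in FOLDERS:
        (base / folder).mkdir(parents=True, exist_ok=True)

create_project("Example Project")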

As an aside, the Launchbar Action I run to activate the Keyboard Maestro macro is called “Planter” because I first began automating my folder structure creation with Brett Terpstra’s great, free application Planter, and I’m a big fan of maintaining muscle memory regardless of whether or not it makes any actual sense [2].

The Things Project

I’m sure you noticed the checkbox on the “New Project” dialog box. Yes, that does what you think it does. If checked (the default), it creates a new project in Things in my “Work” Area with the designated name of the project, and pre-populates it with a standard set of tasks. Again, this is just a starting point. Some tasks may not be required, and many more will likely be added. Regardless, after I hit Return on that dialog box, this is what shows up in Things.

The Tags

There’s one other trick I added to this setup a few years ago that I really like. When navigating through dozens of folders, often with very similar names, it’s easy to get lost. Something that helps me find what I’m looking for more quickly is a Hazel rule that looks through my folder structure and adds a macOS green tag to any folder that is not empty.

This way, after I’ve programmatically created a series of folders that will receive my yet-to-be-rendered vfx shots, I’ll more quickly be able to see which shots have been rendered and which have not.

It doesn’t tell me anything about what’s going on inside that folder, but I find it makes navigation a little quicker and easier, especially with a tired brain.

The Technical Bits

Alright, let’s get to the part with all the images and the scrolling.

The folder creation and the Things project creation are done with two separate Keyboard Maestro macros to attempt to keep things as tidy-ish as possible [3]. The folder creation macro calls the Things macro (if the box is checked) at the end of its steps.

The Folder Creation Macro

Click through to see the whole enchilada.

The Things Macro

Click through to see the full image.

The “New Things Project” macro is built using Things’ JSON-based commands, rather than the more limited URL Scheme commands. It’s much more flexible, faster to modify, and is the only way to access certain features like Headings.

If you’re curious, you can see the full code inside that second block of the macro here.
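
To give you a sense of the shape of those JSON commands, here's a minimal Python sketch that builds a project and hands it to Things via its add-json URL command. The titles and tasks are placeholders, not my actual template:

import json
import subprocess
import urllib.parse

# A placeholder project; the real macro pre-populates a full set of standard tasks
payload = [{
    "type": "project",
    "attributes": {
        "title": "Example Project",
        "area": "Work",
        "items": [
            {"type": "heading", "attributes": {"title": "Pre-Production"}},
            {"type": "to-do", "attributes": {"title": "Write estimate"}},
        ],
    },
}]

url = "things:///add-json?data=" + urllib.parse.quote(json.dumps(payload))
subprocess.run(["open", url])  # macOS hands the URL scheme off to Things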

The Hazel Rule

The Hazel rule is fairly simple:

The AppleScript in the second Condition is:

-- "theFile" is the folder Hazel is currently evaluating
set root_fol to theFile

-- Count every file anywhere inside the folder, including subfolders
tell application "Finder"
    set files_ to count files of entire contents of root_fol
end tell

-- Return true (apply the tag) only if the folder isn't empty
if files_ is 0 then
    return false
else
    return true
end if

I honestly have no recollection of where I found this script on the internet. However, I can tell you with a fair amount of certainty that I did not write it.

This Hazel rule is running on my entire “Projects” folder, so any new folder that’s added will automatically be analyzed and tagged.

All That’s Left to Do Is Everything

So, that’s it. That’s how I begin work on every project. Once a project has a name, it gets a folder and a Things project. Then the actual work can begin. Hooray…


  1. These files and folders are created by the project management tools inside Nuke Studio and that is absolutely a post for another time. Or maybe not because sheesh.  ↩

  2. See also: I use the abbreviation “ch” to launch Safari with Launchbar despite having switched from using Chrome as my primary browser probably 7 years ago. It’s fast and my hands are used to it, so I’m sticking with it.  ↩

  3. lol  ↩

The One With Dropbox and the Symlink

I’ll be honest, this post is more for me than it is for you. I’m writing it so the next time I’m setting up a new Mac, I’ll know exactly where to find the solution to one of Dropbox’s most puzzling idiosyncrasies.

Dropbox users, like myself, who have both a personal Dropbox account and a business account, have the ability to link those two accounts and sync both folders to a single computer [1]. When you do that, the folder previously named Dropbox is renamed to Dropbox (Personal) and your business account is named Your Company Dropbox. Which…seems silly to me, but okay, I guess.

Aside from being an unattractive folder name, Dropbox (Personal) is pretty terrible for automation and general usability. Spaces? Parentheses? It’s like they’re trying to break all my scripts and customizations.

And before you ask, no, you can’t just rename the folder back to Dropbox. Once you install the Dropbox app, the computer belongs to them, you’re just allowed to use it. Them’s the rules.

Now, when I first linked my two Dropbox accounts, I could have gone through all my Keyboard Maestro macros and shell scripts and done some find & replace work to make everything work again. But as fun as that sounds, I opted for the lazy route: creating a symlink called Dropbox that pointed to Dropbox (Personal) so all my stuff would just work again.

If I were smarter and better versed in the ways of the Terminal, I wouldn’t have to look this sort of stuff up when I needed to do it. Alas, this isn’t the sort of thing I need to do more than once every handful of years, and this knowledge will leave my brain the second I close that Terminal window.

I’m Writing It Down to Remember It Later

Here’s what I did:

  1. Make a symlink to “Dropbox (Personal)” and place it on the Desktop, thusly:
    ln -s "/Users/dansturm/Dropbox (Personal)" /Users/dansturm/Desktop

  2. Next, rename the symlink on the Desktop to just “Dropbox”.

  3. Move the symlink into your Home directory, next to the ugly folder.

  4. Hide the symlink so you can forget you ever had to do this whole stupid thing:
    chflags -h hidden /Users/dansturm/Dropbox
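
For what it's worth, the Desktop detour isn't strictly necessary. Here's the same result, sketched in Python, assuming the same user and paths:

import os
import subprocess

home = os.path.expanduser("~")
# Create the symlink directly at its final path, already named "Dropbox"
os.symlink(os.path.join(home, "Dropbox (Personal)"), os.path.join(home, "Dropbox"))
# Hide it, same as step 4
subprocess.run(["chflags", "-h", "hidden", os.path.join(home, "Dropbox")])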

And now your scripts, automations, file paths, etc, will function just as they did before Dropbox decided to get cutesy with the name of the folder where you keep everything you’ve ever created on your Mac for the past 10+ years.


  1. You can link one personal account to one business account and that’s it. You can’t have two personal accounts or two business accounts or three of any combination. Kinda makes you think this whole linking thing was a bit of an afterthought.  ↩

Custom URL Redirects with Rebrandly and Keyboard Maestro

For better or worse, the primary way I share files with clients, other artists, whomever, is sending Dropbox links. With features like password protection and link expiration, Dropbox links are pretty great. But there are two areas where they fail: one, they’re long and ugly, and two, they’re absolute and unable to be redirected to a different file after you’ve shared them.

Yes, in a perfect world I wouldn’t notice problems with my files after I hit send on my emails. Luckily, I can mitigate embarrassing link misfires by creating a custom short URL redirect that I can change later if necessary.

Redirect, Your Honor

I own a couple of short domain names I’ve used to create custom redirects for years. I used to create the redirects directly in the Hover domain dashboard. Which, as you can probably imagine, is not the quickest process.

Recently, however, I was introduced to Rebrandly by my pals Jeff Hodges and Zach Hobesh. Rebrandly is a service for creating and managing custom shortened URL redirects. It connects to your domains quickly and easily, and makes re-branding URLs super easy.

When creating a shortened URL redirect with Rebrandly, the three [1] main components are:

  1. The destination URL
  2. The shortened URL path, which Rebrandly calls a “Slashtag”
  3. A short description of the link, for organizational purposes

Rebrandly’s API documentation is full of example code that made it very easy for me to create a Keyboard Maestro macro that uses their service for shortening my links.
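
At the heart of the macro is a single API call. Roughly, in Python; the API key, domain, and link details below are placeholders, and the field names are from Rebrandly's v1 docs at the time of writing:

import requests

response = requests.post(
    "https://api.rebrandly.com/v1/links",
    headers={"apikey": "YOUR_API_KEY", "Content-Type": "application/json"},
    json={
        "destination": "https://www.dropbox.com/s/abc123/spot_v03.mov?dl=0",
        "domain": {"fullName": "example.link"},
        "slashtag": "spot-v03",      # optional; leave it out for a random Slashtag
        "title": "Client spot v03",  # optional description
    },
)
print(response.json()["shortUrl"])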

Wrkflw

Creating a rebranded URL with the macro is as simple as:

  1. Place the destination URL on the clipboard.
  2. Press the keyboard shortcut.

A dialogue box will pop up, prefilling the destination URL from the clipboard. There are 2 optional input fields for the Slashtag and the Description. If left blank, Rebrandly will create a random Slashtag that's as short as possible.

Pressing Return will create the shortened URL on Rebrandly, and place it on my clipboard, ready to be pasted into my email, Slack message, or wherever.

I’ve only been using Rebrandly and this Keyboard Maestro macro for a short time, but I’ve been very happy with how fast and easy it makes rebranding URLs.


  1. Technically, the Slashtag and the Description are optional. ↩

Open on Which Mac for Mac

The names of these things are getting pretty bad. But the Open on Which Mac iOS Shortcut is still one of my favorite automations I’ve created. I use it every day to send links from my phone to whichever computer suits the context for the content I’m saving for later.

But I don't only discover interesting links on my iPhone. Occasionally I’ll come across something while I’m at work and want to send it to my home computer so it’s open, waiting for me when I sit down.

The flexibility of having built the OOWM service on plain text files, Dropbox, and Hazel means I don't need to modify any of the existing automation steps in order to add a new source to the mix. All I need to do is create a fast method for saving a URL to a specifically-named text file in a specific location, and the Hazel rule will see it and act on it, just as it does with links from my iPhone.

Keyboard Maestro, Duh

The Keyboard Maestro macro follows the same structure as the iOS Shortcut. First, it grabs the url from the active Safari tab. Then it presents the user with a list of computers to choose from for the destination of the link. Once a computer name has been selected, it uses a dictionary of computer name short-codes to create the specific text file name, then saves the file to the OOWM folder in Dropbox.

The filename — since it's too long to read in the screenshot — is:

%Dictionary[WhichComp,%Variable%compName%]%-URL-%ICUDateTime%yyyy-MM-dd%.txt
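
Translated out of KM token syntax, the filename logic amounts to this Python sketch; the dictionary entries are examples, not my full list:

from datetime import date

# Mirrors the KM "WhichComp" dictionary of computer name short-codes
WHICH_COMP = {"Dan's iMac": "diM", "MacBook Pro": "mbp"}

def oowm_filename(comp_name):
    return "%s-URL-%s.txt" % (WHICH_COMP[comp_name], date.today().strftime("%Y-%m-%d"))

print(oowm_filename("Dan's iMac"))  # e.g. diM-URL-2020-05-26.txt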

It really is that simple. And since this is running on a Mac, it’s much faster than its iOS counterpart.

And with that, one of my favorite automations just got more favoriter.

Remote Screen Sharing Automation

I’ve always found it fairly easy to manage my multiple Macs with tools like Dropbox, the Mac App Store, and iCloud. But trying to manage Macs that are in different physical locations, on different networks, has really put some of my workflows to the test. There are, it seems, some workflow issues that can’t be solved by just putting things in Dropbox. Go figure [1].

For the past few weeks, I’ve been looking around for ways to “get to” my home iMac from my iMac at work. I quickly found I needed more than just “access to the data” on that computer. I needed to control it via some form of screen sharing.

I tried Screens. It didn’t like my company’s port mapping. Nothing I can really do about that.

Someone in our IT department recommended Royal which, according to my research, is an application that does…something.

In true You’re Doing That Wrong fashion, the solution that had the most success was texting my wife at home and asking her to click “Accept” on the iMessage Screen Sharing request that I was sending from my work computer. But after about a half-dozen requests, I knew I needed a better solution.

Solutioneering

The Screen Sharing tools built into iMessage are great. They’re simple, easy to use, and (miraculously) they just work. I don’t need to open ports on my router or run a private VPN, I just open iMessage, select the person I want to Screen Share with, and click “Ask to Share Screen”.

Given my propensity for making very bad, very unsafe automations, you may be imagining that I just created a Keyboard Maestro macro that would watch for the “Incoming Screen Sharing Request” notification and click “Accept”. But even I realized what an awful idea that would be.

Luckily, there’s another menu item in iMessage, just above the “Ask to Share Screen” item. It’s the “Invite to Share My Screen” option. So, I set about making a tool that I could activate remotely, that would call me from my home iMac and offer to share its screen.

Shortcuts & Hazel

The easiest way to get things up and running was to duplicate a few of the things I’d created for my Open on Which Mac tool. I duplicated the iOS Shortcut I’d use to trigger the whole thing, and the Hazel rule watching for the Shortcut’s input.


The only thing I needed to change in the Shortcut and Hazel rule was to swap “URL” for “ScreenShare” in the filename. So, the Destination Path in the shortcut reads: Applications/Batch/openonmac/Dictionary Value-ScreenShare-Current Date.txt.

While I’m currently only going to use the tool to remote into my home iMac from work, leaving the rest of the Shortcut intact will allow me to more easily [2] add the ability to remote into other computers later.

Keyboard Maestro

Now on to the meat of the thing. We start by using the macOS URL scheme for Messages.app to send a message to my Apple ID. By hard-coding my Apple ID into the macro, there’s no way I can accidentally send the invitation to someone else. Which would be very bad.

By opening the URL imessage:myappleid@email.com, KM will open Messages.app and create a new iMessage to my Apple ID. Now, it turns out, it’s not enough to just create a new message with a recipient selected. The Screen Sharing menu items aren’t accessible until you actually send something. So I took this as an opportunity to add a bit of transparency to the process. The macro types out the words “Incoming Connection from Dan’s iMac” and hits Return. In addition to making the Screen Sharing tools accessible, I will get an iMessage (everywhere) letting me know that the Screen Sharing Invitation is imminent and it’s coming from the computer I expected.

Next, the macro opens the “Buddies” menu and selects “Invite to Share My Screen”. Within a few seconds, wherever I may be, an invitation to share the screen of my home iMac appears on my desktop and I can click “Connect”.



That was…surprisingly simple.

Not quite

Since Apple is very good about keeping things safe and secure, the Screen Sharing session activates in “Observing” mode. Which is not terribly helpful. Additionally complicating matters, the only way to approve “Control” of the Screen Sharing session to a remote user is to click on the Screen Sharing menu bar icon that indicates a connection is active.

Initially, I tried to click the menu with Keyboard Maestro’s “Click at Found Image” action, but the menu bar icon flashes when connected and it failed more often than it succeeded. After a bit of googling, some poking around in Activity Monitor, and a brief consultation with Dr. Drang, I discovered I could activate the menu and select “Allow Dan to control my screen” with some basic AppleScript. Which looks like this:

tell application "System Events" to tell process "SSInvitationAgent"
    -- menu bar item 1 is the Screen Sharing status icon
    click menu bar item 1 of menu bar 1
    -- menu item 2 is "Allow Dan to control my screen"
    click menu item 2 of menu 1 of menu bar item 1 of menu bar 1
end tell

Limitations and Improvements

There is one big limitation to this tool. You may have already guessed it. The tool, as it exists here, doesn’t work when the computer is locked. So I resorted to turning off “Require Password” in System Preferences on my home iMac. Which sounds like a huge security risk not worth taking for the benefit it provides but, frankly, if an untrustworthy person is sitting at my desk in my home office, I have bigger problems than whether or not there’s a password on my iMac.

This does, however, preclude me from using this particular solution for the reverse procedure of connecting to my work iMac from home. Turning off my system password definitely isn’t going to fly with our IT department. So, at the moment, this is at best half a solution.

Another thing I’ll probably change in the next iteration of the tool is to remove Hazel from the process entirely. Recently, in the process of debugging a Hazel rule, I recreated it from scratch inside Keyboard Maestro. KM’s ability to watch a folder and act on files that appear inside worked well enough for me to consider migrating more “watch folder” actions over there in the future. Its debugging tools are better, too.

Something else that could use improving is the speed of some of the actions. Currently, depending on how long it takes for me to accept the screen sharing session from my work iMac, the screen sharing menu bar icon may not be available in time for the AppleScript action to find it and grant me “Control” access. My current workaround is to just run the whole process again while I’m in “Observe” mode. It only takes a few seconds and it works fine.

Speaking of the AppleScript step, there’s also an odd delay of a few seconds between opening the menu bar app and selecting the “Allow Dan to Control” item. In my conversation with Dr. Drang, he pointed me to this post on Stack Overflow which both explained and solved the issue, so that seems like an easy fix for the next version.

By the way, it would seem (to me) that none of this would need to exist if there were some mechanism by which iMessage could tell that the Screen Sharing request was coming from my Apple ID, sent to my Apple ID, and allow me to automatically authenticate those interactions. Hell, prompt me for my iCloud password if you want to keep it safe. Seems like a reasonable request to me, but what do I know. I’m just some idiot with a blog.


  1. Sometimes Dropbox is the issue. But that’s a post for another time. ↩

  2. With one potentially major hurdle. ↩

"Open on Which Mac" Shortcut v3

Two whole days ago, I posted an updated version of my Open on Mac Shortcut. When I post my hacky automation tools online, the absolute best possible response I can hope for is being corrected by someone much smarter than I am.

Like when I posted v1 of the shortcut and Jason Snell pointed out that I had inadvertently created a way for anyone with access to my Dropbox account to execute arbitrary code on my computer. Which is a pretty bad thing, to be honest. Luckily, he modified the shortcut and posted a much better version on Six Colors.

When I posted v2 of my shortcut on Tuesday, in the caption for the (very long) shortcut image, I wrote:

These If statements are terrible and ugly and there’s got to be a better way to do this, but I don't know what it is.

A few hours later, I received a lovely Twitter DM from Dr. Drang with the answer to my question.

To avoid the nested if statements, set up a dictionary with the Mac names as the keys and the file name prefixes as the values. Then assemble the file name by looking up from that dictionary after the Choose step.

— Dr. Drang, Famous Internet Snowman

 
 

The file name in the Destination Path of the Save File action is "Dictionary Value-URL-Current Date.txt". The shortcut is now much shorter, easier to understand, faster, and generally less bad.

Thanks, Doc.

"Open on Which Mac" Shortcut

A few weeks ago, I started a new job. Along with that job came a new iMac and Touch Bar MacBook Pro. Having doubled the number of computers in my life, I quickly found that my frequently-used Open on Mac iOS Shortcut was not working as expected.

While at work, attempting to open a webpage on my iMac would result in...nothing. When I got home, I found the pages open and waiting for me on my personal iMac.

Prior to the newly acquired computers, I had never given much thought to why webpages opened on my iMac rather than my MacBook Pro. I spent 98% of my time on the iMac and, since it was doing what I wanted it to do, there was no reason to ask why. I mostly assumed it was because the MacBook Pro was asleep and the iMac is always awake.

As it turns out, the real reason webpages always opened on my personal iMac is because it has the fastest internet connection (a wired fiber connection) and would therefore download the Dropbox file containing the URL before any other computers had the chance. Hazel would then do its thing, trash the file, and that was that.

It had become necessary to modify my iOS Shortcut, allowing me to specify on which computer I wanted to open the webpage. To accomplish that goal, I added a "Choose from List" action to the shortcut where I could pick which computer to use. Then, I added short prefixes to the filename that represented each computer.

 
 

The original text file containing the page URL was called "URL-Current Date.txt". The new file names are:

  • The Touch Bar: tbr-URL-Current Date.txt
  • The New iMac: niM-URL-Current Date.txt
  • My MacBook Pro: mbp-URL-Current Date.txt
  • My iMac: diM-URL-Current Date.txt

Add a couple of "If" statements to the shortcut and we're about done. Here's what the new, much longer shortcut looks like.

 

These If statements are terrible and ugly and there’s got to be a better way to do this, but I don't know what it is.

 

After finishing the shortcut, all that was left to do was add the prefixes to the name search field in the Hazel rules running on each computer and call it done.

P.S. Thank you, again, to Jason Snell for fixing my very unsafe, poorly conceived version 1.0.

"Overcast to Castro" Shortcut

I love podcasts. And I love when my friends on the internet share the podcasts they love.

One of the most common ways people share their podcast recommendations is with a link from their podcast player app, which, more often than not, is Overcast. I, however, am primarily a Castro user.

I can't count how many times I've opened an Overcast link on social media, switched over to Castro, searched for that podcast by name in the Castro "Discover" tab, then added the recommended episode to my Queue for listening later. An incredibly inefficient and annoying workflow.

Oh, how I wish I could just press a button and have that Overcast link open in Castro, showing me the episode ready to be queued.

Both Overcast and Castro support public URLs for sharing shows and individual episodes. This is in addition to the apps' specific iOS URL schemes.

I have no idea how either of these apps are generating their episode-specific URLs, but the URLs for the main feed of a podcast use the podcast's iTunes ID. The Overcast and Castro links for the Defocused main feed are https://overcast.fm/itunes891398524 and https://castro.fm/itunes/891398524, respectively.

Which means I can create a quick Shortcut to swap an Overcast link for a Castro link.
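
The Shortcut amounts to nothing more than that string swap. Here it is as a Python sketch:

import re

def overcast_to_castro(url):
    # https://overcast.fm/itunes891398524 -> https://castro.fm/itunes/891398524
    match = re.match(r"https://overcast\.fm/itunes(\d+)", url)
    if not match:
        raise ValueError("Not an Overcast main feed URL")
    return "https://castro.fm/itunes/%s" % match.group(1)

print(overcast_to_castro("https://overcast.fm/itunes891398524"))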

 
 

Half of a Solution

Since this shortcut only works on a podcast's main feed URL, not an episode-specific URL, I still have to do some work to get the podcast episode into my Castro Queue. I have to open the Overcast link, tap on the name of the podcast at the top of the player to go to its main feed, run the shortcut, tap the "Open in Castro" button, tap the button that allows Safari to actually open Castro, then find and add the specific episode to my Queue.

Look how pretty these screenshots are. They were made with Stu Maschwitz’s “Big Tennis Screenshots” Shortcut, which you can download here.

Not ideal, but much more pleasant than manually searching for the name of the show. Especially if the episode being shared by the Overcast user happens to be the most recent episode of the show, since Castro loads with the Action buttons for that episode ready to tap.

Maybe one of these days I or, more likely, one of you much smarter people, will figure out how to translate episode-specific URLs so they open directly within Castro (or Overcast), avoiding all these Safari links as a bridge. Heck, while I'm wishing for unlikely things, maybe Castro will finally get timestamped URLs, too. One can dream, right?

Node Sets for Nuke v1.2

The Selectable Edition

Yesterday, while trying to address a note on a near-finished animation, I discovered the need for a new tool in my Node Sets toolbox that was both useful and trivially simple to create. A rare combination when it comes to my code.

The original intended use for the Node Sets tagging tools was that animated nodes would be tagged as you work and, when you need to adjust an animation's timing, you would run the "Show Nodes" command to open all of the tagged nodes. The idea being, you'll need to open not only the nodes that need to be adjusted, but also all of the other relevant animated nodes for timing and context.

The problem I encountered involves this methodology's inability to scale with the modularity of larger projects. One of the main benefits of a node-based workflow is the ability to create any number of blocks of operations, separate from the main process tree, then connect and combine them as necessary. Each of these blocks would have its own set of animated nodes, building a piece of the overall animation.

But the comp I was working on yesterday had 140 tagged animated nodes and, while it would technically still work to open all of them every time I need to make a timing change, it's slow and unwieldy to have 140 node property panes open at the same time.

A solution I proposed to this issue in the v1.0 blog post was the ability to use a different tag for different types or groups of nodes and open them each independently. A fine idea that I never personally implemented because the tags are hard coded into the tool and there's no way to add more tags without closing the app, modifying the menu.py file, and cluttering up the toolset with a lot of similarly named tools. A terrible workflow.

A solution that solves this problem in a much simpler, smarter way is to use a selection of nodes to narrow the search for tags. So, when working on a smaller section of the animation, I can select a block of nodes and run the new command "Node Set: Show Selection" to open the tagged nodes contained within.

 

The selected block of nodes used to search for tagged nodes.

 

The Code

Like I mentioned at the top of this post, the code for this new addition was exceptionally simple. Specifically, I duplicated and renamed the "Node Set: Show Nodes" code, and changed one word. In the function's for loop, I changed nuke.allNodes() to nuke.selectedNodes(). And that was it. Writing this blog post has already taken several orders of magnitude longer than writing the code.

The full function, called showOnlySelectedNodes(), looks like this:

def showOnlySelectedNodes():
  names = []
  li = []
  # Search only the selected nodes for the "inNodeSet" tag
  for node in nuke.selectedNodes():
    if "inNodeSet" in node['label'].value():
      names.append(node.name())
      li.append(node)
  # Raise the max panels preference so every tagged node can stay open
  numPan = nuke.toNode('preferences')['maxPanels']
  numPan.setValue(len(names))
  for node in li:
    node.showControlPanel()

And the additional line to add the tool to the menu is:

nsets.addCommand('Node Set: Show Selection', 'showOnlySelectedNodes()', icon='NodeSetsMenu-show.png')

It's rare that the solution to an issue I encounter while working is so simple to create that it's quicker to just make the tool than capture a note to create it later, but that was the case with this one and I'm very happy to have this new option.

Head over to the Downloads page to get the full updated Node Sets v1.2 code.

Viewing Alexa Footage in Nuke and Nuke Studio

The Arri Alexa remains one of the most common cameras used in production these days. Its proprietary LogC format captures fantastic highlight detail and exceptionally clean imagery.

But with each new proprietary camera format comes a new process for decoding, viewing, and interacting with the camera's footage. Generally speaking, this involves applying a specific LUT to our footage.

Most applications have these LUTs built in to their media management tools. All it takes to correctly view your footage is to select which LUT to use on your clip.

This is, unfortunately, not the full story when it comes to Alexa footage.

If you've ever imported an Alexa colorspace clip into Nuke, set your Read node to "AlexaV3LogC", and viewed it with the default Viewer settings, you may notice that the highlights look blown out. If you use a color corrector or the Exposure slider on your Viewer, you'll see that the image detail in the highlights is still there, it's just not being displayed correctly.

An Alexa LogC clip being viewed in NukeX with the sRGB Viewer Input Process.

If you import that same clip into DaVinci Resolve, again, set it to Alexa colorspace and view it, you'll notice that it doesn't match the Nuke viewer. In Resolve, the footage looks "correct".

An Alexa LogC clip being viewed in Resolve with the Arri Alexa LogC to Rec709 3D LUT applied.

So, what's going on here?

The Alexa's LogC footage needs to be gamma corrected and tone-mapped to a Rec709 colorspace. In Nuke, this is a 2-step process. The footage gets its gamma linearized in the Read node before work is done, then, after our work has been added, the footage needs to be converted to Rec709 colorspace. In DaVinci Resolve, these 2 steps are performed at the same time.

The problem is that second step in Nuke. There is no built-in Viewer Input Process to properly view Alexa footage. We could toss an OCIOColorSpace node at the end of our script and work in between it and our Read. But we don't want to bake that Rec709 conversion into our render, we just want to view it in the corrected colorspace.

Adding a Custom Input Process

The first thing we're going to need is the Alexa Viewer LUT. No, this is not the same LUT that comes with the application. You can download it here, or build your own with Arri's online LUT generator.

If you only use Nuke/NukeX, adding the Input Process is relatively simple, and bears a striking resemblance to a lot of the Defaults customization we've done in the past. If, however, you also use Nuke Studio or Hiero, you'll want to ignore this section and skip ahead to the OCIOConfig version.

Nuke / NukeX

To get started, create a new Nuke project. Then:

  1. Create a OCIOFileTransform node and add the downloaded LUT file.
  2. Set your "working space" to "AlexaV3LogC". Leave the "direction" on "forward" and "interpolation" on "linear".
  3. After the OCIOFileTransform node, add an OCIOColorSpace node.
  4. Set your "in" to "linear" and your "out" to "AlexaV3LogC".

The nodes for the AlexaLUT Gizmo in Nuke.

Now we need to turn these 2 nodes into a Gizmo. To do that, select them both, hit CMD+G on the keyboard to Group them, then click the "Export Gizmo" button. Save the Gizmo in your .nuke directory. Mine is called Alexa_LUT.gizmo.

Once we've saved our Gizmo, we just need to add the following line to our Init.py file:

nuke.ViewerProcess.register("Alexa", nuke.Node, ("Alexa_LUT", ""))

Now, when you start up Nuke, you'll have your Alexa LUT in the Input Process menu in your Viewer.

The Alexa Input Process in the Nuke Viewer.

And, just so we're clear, if we're working on an Alexa colorspace clip, as a Good VFX Artist, we're going to send back a render that is also in Alexa colorspace. That means setting the "colorspace" on our Write node to "AlexaV3LogC", regardless of the file format.

NukeStudio (and Also Nuke / NukeX)

Welcome, Nuke Studio users. For you, this process is going to be a little more work.

Just like everything in Nuke Studio, am I right?

Sorry. Let's get started.

To add our Alexa LUT to Nuke Studio, we need to create our own custom OCIOConfig. Since we're lazy (read: smart), we'll duplicate and modify the Nuke Default OCIOConfig to save us a lot of time and effort.

The OCIOConfigs that come with Nuke can be found in the app's installation directory under /plugins/OCIOConfigs/configs/. We're going to copy the folder called "nuke-default" and paste it into .nuke/OCIOConfigs/configs/ and let's rename it to something like "default-alexa".

Before we do anything else, we need to put our Alexa Viewer LUT inside the "luts" folder inside our "default-alexa" folder.

Is it there? Good.

Inside our "default-alexa" folder, is a file called "config.ocio". Open that in a text editor of your choice.

Near the top of the file, you'll see a section that looks like this:

displays:
  default:
    - !<View> {name: None, colorspace: raw}
    - !<View> {name: sRGB, colorspace: sRGB}
    - !<View> {name: rec709, colorspace: rec709}
    - !<View> {name: rec1886, colorspace: Gamma2.4}

We need to add this line:

- !<View> {name: Alexa, colorspace: AlexaViewer}

I put mine at the top, first in the list, because I want the Alexa viewer to be my primary Input Process LUT. A good 80% of the footage I work with is Alexa footage. Your use case may vary. Rearranging these lines won't break anything as long as you keep the indentation the same.

Now, scroll all the way down to the bottom of the file, past all the built-in colorspace configs. Add the following:

- !<ColorSpace>
  name: AlexaViewer
  description: |
    Alexa Log C
  from_reference: !<GroupTransform>
    children:
      - !<ColorSpaceTransform> {src: linear, dst: AlexaV3LogC}
      - !<FileTransform> {src: ARRI_LogC2Video_709_davinci3d.cube, interpolation: linear}

That wasn't so bad, was it. Was it?

Now, all that's left to do is open Nuke and/or Nuke Studio, go to your application preferences, and under the "Color Management" section, select our new OCIOConfig file.

Choosing our custom OCIOConfig in the Nuke application preferences.

Now, you'll have your Alexa LUT in your Input Process dropdown in both Nuke and Nuke Studio and you can finally get to work.

Thanks Are in Order

I've been putting off this blog post for a very long time. Very nearly 2 years, to be specific.

I was deep into a project in Nuke Studio and was losing my mind over not being able to properly view my Alexa raw footage or Alexa-encoded renders. This project also included a large number of motion graphics, so making sure colors and white levels matched was doubly important.

So, I sent an email to Foundry support.

After about a week and a half of unsuccessful back-and-forth with my initial contact, my issue was escalated and I was contacted by Senior Customer Support Engineer Elisabeth Wetchy.

Elisabeth deserves all of the credit for solving this issue. She was possibly the most helpful customer support representative I've ever worked with.

Also, in the process of doing some research for this blog post (yeah, I do that sometimes shut up), I came across an article she wrote the day we figured this stuff out. So I guess I shouldn't feel too bad for making you guys wait 2 years for my post.

Note: Test footage from Arri can be found here.

Update for Nuke 12 and Up

In case you didn't click through to that support article above, there's a very important update for users of Nuke 12 and up:

As of Nuke 12, the active_views list will now be respected, and this controls which views are visible and the order in which they appear.

So for the custom LUT to appear in the Viewer, you will need to append the LUT to the active_views list in the OCIO config:

active_views: [sRGB, rec709, rec1886, None]

For example:

active_views: [sRGB, rec709, rec1886, AlexaToRec709, None]

This line is also optional and, by default, will set all views to be visible and will respect the order of the views under the display. So if you wish for all LUTs to be visible, you could simply delete this line.

In my case, my active_views line says active_views: [Alexa, sRGB, sRGBf, rec709, rec1886, None] because my Alexa profile is named "Alexa", not "AlexaToRec709". I also put it at the front of the active_views list because I want it to be the default since I primarily work with Alexa LogC footage.

Global Motion Blur Controls in Nuke

I’m back again with another custom tool for my Nuke setup. That can mean only one thing: I’m doing dumb stuff again.

I recently embarked on another large motion graphics project, animated entirely in Nuke. Just as with the creation of my Center Transform tool, using Nuke for such a project quickly reveals a glaring omission in the native Nuke toolset which, on this project, I just couldn't continue working without. I speak, of course, of Global Motion Blur Controls.

The Use Case

Most assets that move, especially motion graphics, need to have motion blur on them. But motion blur is incredibly processor-intensive, so, while you're working, it's almost always necessary to turn off motion blur while you animate, turning it back on to preview and render.

In Nuke, that means setting the motionblur parameter on a Transform node to 0 while you work, then setting it to 1 (or higher) to preview and render. Simple enough when you only have a handful of Transform nodes in your script. Nigh impossible to manage when you have almost 200.

The Problem

Currently, each Transform node has its own set of motion blur controls: Samples, Shutter, and Shutter Offset. There is no mechanism for modifying or enabling / disabling all motion blur parameters at the same time like there is in, say, After Effects.

Smart Nuke artists will use Cloned Transform nodes or expression link the motion blur parameters to each other. Or, take it one step further and create a custom motion blur controller with a NoOp node and expression link all Transforms to that.

While that saves some effort, you've got to add the NoOp expression to every Transform node (twice), including each new Transform you create. And, of course, there's the very likely possibility that you'll forget or miss one along the way and have to track it down once you notice your render looks wrong.

This is how I have previously dealt with this problem.

A Half-Step Forward

To make this process faster, I wrote a script to quickly expression link the motionblur and shutter parameters of selected nodes to my custom NoOp, which I have saved as a Toolset for easy access in each new Nuke script.

That script looks like this:

def SetNoOpBlur():
  for xNode in nuke.selectedNodes():
    xNode['motionblur'].setExpression( 'NoOp1.mBlur' )
    xNode['shutter'].setExpression( 'NoOp1.mShutter' )

toolbar = nuke.menu("Nodes")
gzmos = toolbar.addMenu("Gizmos", icon='Gizmos4.png')
gzmos.addCommand("Link NoOp Blur Control", 'SetNoOpBlur()')

The Link to NoOp tool in Nuke

This makes the expression linking faster and easier, but I still have to select all the Transform nodes by hand before running the script. It's also incredibly fragile since I hard-coded the name of the controller node (NoOp1) into the function.

This level of half-assed automation simply won't do. We need to whole-ass a better solution.

The Solution

The goal would be to have motion blur settings in the Nuke script's Project Settings that control all Transform nodes by default, with the ability to override each node's individual settings, as needed.

Here’s what I came up with [1]:

# Customize Transform Controls - No Center Transform Button

def OnTransformCreate():
  nTR = nuke.thisNode()
  if nTR != None:
    # Create "Use Local Motion Blur" button
    lbscript="mbT = nuke.thisNode()['motionblur']; mbT.clearAnimated(); stT = nuke.thisNode()['shutter']; stT.clearAnimated(); soT = nuke.thisNode()['shutteroffset']; soT.clearAnimated();"
    lb = nuke.PyScript_Knob('clear-global-mblur', 'Use Local Motion Blur')
    lb.setCommand(lbscript)
    nTR.addKnob(lb)
    # Create "Use Global Motion Blur" button
    gbscript="nBB = nuke.thisNode(); nBB['motionblur'].setExpression('root.motionblur'); nBB['shutter'].setExpression('root.shutter'); nBB['shutteroffset'].setExpression('root.shutteroffset');"
    gb = nuke.PyScript_Knob('use-global-mblur', 'Use Global Motion Blur')
    gb.setCommand(gbscript)
    nTR.addKnob(gb)
    # Set Transform Node to use Global Motion Blur by Default
    nTR['motionblur'].setExpression('root.motionblur')
    nTR['shutter'].setExpression('root.shutter')
    nTR['shutteroffset'].setExpression('root.shutteroffset')

nuke.addOnUserCreate(OnTransformCreate, nodeClass="Transform")

# Root Modifications for Global Motion Blur

def GlobalMotionBlur():
  ## Create Motion Blur tab in Project Settings
  nRT = nuke.root()
  tBE = nuke.Tab_Knob("Motion Blur")
  nuke.Root().addKnob(tBE)
  
  ## Create motionblur, shutter, and shutter offset controls, ranges, and defaults
  mBL = nuke.Double_Knob('motionblur', 'motionblur')
  mBL.setRange(0,4)
  sTR = nuke.Double_Knob('shutter', 'shutter')
  sTR.setRange(0,2)
  oFS = nuke.Enumeration_Knob('shutteroffset', 'shutter offset', ['centered', 'start', 'end'])
  oFS.setValue('start')
  
  ## Add new knobs to the Motion Blur tab
  mblb = nuke.Text_Knob("gmbcl","Global Motion Blur Controls")
  nRT.addKnob(mblb)
  nRT.addKnob(mBL)
  nRT.addKnob(sTR)
  nRT.addKnob(oFS)

GlobalMotionBlur()

Init.py Script

# Global Motion Blur Defaults
nuke.knobDefault("Root.motionblur", "1")
nuke.knobDefault("Root.shutter", ".5")
nuke.knobDefault("Root.shutteroffset", "start")

The Motion Blur tab in Project Settings

The expression linked motion blur controls

The unlink / re-link buttons

I’ve created global parameters for Motion Blur, Shutter, and Shutter Offset [2]. When you create a Transform node, it automatically adds 2 buttons to the User tab to make it easy to unlink / re-link to the global controller.

In my version, all Transform nodes created are linked to the global setting by default. If you'd prefer each node be un-linked by default, you can just remove the last 3 lines of the OnTransformCreate() function. Then, you can click the "Use Global Motion Blur" button on each node that you want to link.

While I haven't spent a ton of time with this new setup, I'm really happy with how it's come out. Though, as with most of my weird customizations, I look forward to the day that The Foundry adds this functionality to the app, making my code obsolete.


  1. This is just the new code without the Center Transform button that I normally have in my OnTransformCreate() function. The function in my Menu.py file actually looks like this.  ↩

  2. I did not add the Custom Shutter Offset control to the global controller because, for one, I really don’t use that option much (or ever), and two, it turned out to be much harder to script than the rest of the options. It simply wasn’t worth the effort to figure out how to create a global controller for something I never use, and the command is still accessible by using per-node motion blur settings.  ↩

Open the Doors

If my penchant for removing incredibly specific, minor inconveniences from my life with overly-complicated, home-grown automation tools wasn't yet fully evident, get ready to be dazzled by the lengths to which I go with this one.

It's winter time here in terrible Phoenix, Arizona, and that means temperatures with highs in the high-70s to low-80s, and lows in the mid-40s. Translated: it's a bit too warm to turn on the heater, and a bit too cool to necessitate air-conditioning.

As a result, over the course of a day, the temperature inside our home ranges from 70F in the morning, to upwards of 78F by late afternoon. Since I work at home and I hate feeling hot [3], I like to keep the front and back doors to the house open in the mornings and evenings, in an effort to cool the house enough to keep the mid-day temperatures inside below 75F.

Generally, that means keeping the doors open in the morning until the temperature outside rises above 70F, and keeping them closed until the temperature drops back down below 70F in the evening.

"I don't see the problem," you say. "Just shout at Siri or your Echo Dot and ask the temperature periodically. Or just look down at your Apple Watch. Or literally any number of other options at your disposal."

Yes, I totally hear you.

Now, take a deep breath because it's going to get weird.

Most weather services use a weather station downtown or at the airport of your city. In my case, those weather stations are 25 miles away and on the other side of a very large mountain. The result being that they're almost always wrong for my neighborhood by about 3 or 4 degrees.

So, I primarily monitor the temperature with a Weather Underground station located less than a half-mile from my home. I keep the WU widget in my Today view on my phone and periodically swipe over, scroll down, and wait for it to update. I love a lot of things about Weather Underground. The speed at which its app refreshes is definitely not one of them. In fact, I usually end up launching the app from the widget in order to make sure it's actually refreshed and not showing me old data. And don't even get me started on its Apple Watch complication. It's tiny and ugly and I hate it.

Are you still reading? Okay, good.

Unrelated to the weather, I've recently begun playing around with Pushover on iOS to send myself custom push notifications based on whatever criteria I deem worthy of a notification. It's super simple to set up and use, has a ton of flexibility, and does exactly what you'd expect it to do. I love it.

I've heard of people using it to alert themselves when a long video render has completed so they can go about their day without needlessly checking the progress bar on their computer. A very cool use case that I will definitely investigate. But, on this morning, I thought to myself, how cool would it be if I could set up Pushover to send me a notification when the temperature at my local WU station goes above / drops below 70F?

To the WU

In addition to being a very cool service, Weather Underground has a nice developer API. You can sign up for a free developer account that will let you request Current Conditions up to 500 times per day. That's more than enough for what I want to do.

With a simple call of:

curl http://api.wunderground.com/api/DEVELOPERID/conditions/q/AZ/pws:EXAMPLESTATION.json

I get a return like this:

{
  "response": {
    "version": "0.1",
    "termsofService": "http://www.wunderground.com/weather/api/d/terms.html",
    "features": {
      "conditions": 1
    }
  },
  "current_observation": {
        "image": {
        "url":"http://icons.wxug.com/graphics/wu2/logo_130x80.png",
        "title":"Weather Underground",
        "link":"http://www.wunderground.com"
        },
        "display_location": {
        "full":"Phoenix, AZ",
        "city":"Phoenix",
        "state":"AZ",
        "state_name":"Arizona",
        "country":"US",
        "country_iso3166":"US",
        "zip":"XXXXX",
        "magic":"1",
        "wmo":"99999",
        "latitude":"33.XXXXXX",
        "longitude":"-112.XXXXXX",
        "elevation":"373.1"
        },
        "observation_location": {
        "full":"Example Station, Phoenix, Arizona",
        "city":"Example Station, Phoenix",
        "state":"Arizona",
        "country":"US",
        "country_iso3166":"US",
        "latitude":"33.XXXXXXX",
        "longitude":"-112.XXXXXX",
        "elevation":"1214 ft"
        },
        "estimated": {
        },
        "station_id":"EXAMPLESTATION",
        "observation_time":"Last Updated on February 6, 11:46 AM MST",
        "observation_time_rfc822":"Tue, 06 Feb 2018 11:46:53 -0700",
        "observation_epoch":"1517942813",
        "local_time_rfc822":"Tue, 06 Feb 2018 11:47:00 -0700",
        "local_epoch":"1517942820",
        "local_tz_short":"MST",
        "local_tz_long":"America/Phoenix",
        "local_tz_offset":"-0700",
        "weather":"Clear",
        "temperature_string":"72.4 F (22.4 C)",
        "temp_f":72.4,
        "temp_c":22.4,
        "relative_humidity":"21%",
        "wind_string":"From the SE at 1.0 MPH Gusting to 3.0 MPH",
        "wind_dir":"SE",
        "wind_degrees":139,
        "wind_mph":1.0,
        "wind_gust_mph":"3.0",
        "wind_kph":1.6,
        "wind_gust_kph":"4.8",
        "pressure_mb":"1014",
        "pressure_in":"29.95",
        "pressure_trend":"-",
        "dewpoint_string":"30 F (-1 C)",
        "dewpoint_f":30,
        "dewpoint_c":-1,
        "heat_index_string":"NA",
        "heat_index_f":"NA",
        "heat_index_c":"NA",
        "windchill_string":"NA",
        "windchill_f":"NA",
        "windchill_c":"NA",
        "feelslike_string":"72.4 F (22.4 C)",
        "feelslike_f":"72.4",
        "feelslike_c":"22.4",
        "visibility_mi":"10.0",
        "visibility_km":"16.1",
        "solarradiation":"--",
        "UV":"4","precip_1hr_string":"0.00 in ( 0 mm)",
        "precip_1hr_in":"0.00",
        "precip_1hr_metric":" 0",
        "precip_today_string":"0.00 in (0 mm)",
        "precip_today_in":"0.00",
        "precip_today_metric":"0",
        "icon":"clear",
        "icon_url":"http://icons.wxug.com/i/c/k/clear.gif",
        "forecast_url":"http://www.wunderground.com/US/AZ/Phoenix.html",
        "history_url":"http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=EXAMPLESTATION",
        "ob_url":"http://www.wunderground.com/cgi-bin/findweather/getForecast?query=33.XXXXXX,-112.XXXXXX",
        "nowcast":""
    }
}

It's a lot, I know. But it includes everything we would ever want to know about our hyper-local weather station, including the current temperature, labeled temp_f. With a quick regex, we can search through this response and pull out just the current temperature in Fahrenheit.

That regex looks like this:

(?<="temp_f":)(.*?)(?=,)
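
If you want to sanity-check that pattern outside of Keyboard Maestro, here's a minimal Python sketch; the response string is just a trimmed fragment of the JSON above:

import re

# A trimmed fragment of the API response above
response = '"weather":"Clear","temp_f":72.4,"temp_c":22.4,'

# Look behind for the "temp_f" key and capture everything up to the next comma
match = re.search(r'(?<="temp_f":)(.*?)(?=,)', response)
print(match.group(1))  # prints: 72.4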

The Push

Once we've determined our current temperature is above 70.0F, we'll send ourselves a notification with Pushover by running a command that looks like this:

curl -s \
  --form-string "token=MY_TOKEN" \
  --form-string "user=MY_USERID" \
  --form-string "message=It's above 70F outside." \
  --form-string "title=Close the Doors" \
  https://api.pushover.net/1/messages.json

Which pops up on my iPhone and Apple Watch looking like this:

The Push Notification from Pushover

Workflow, Assemble

To put all these pieces together, I turn once again to my beloved Keyboard Maestro. Since I'm sending 2 push notifications over the course of the day, I set up 2 macros with different trigger criteria.

Our "Morning" macro doesn't need to start pinging the weather station at 12:01AM, and it won't need to keep checking into the afternoon, so I set it to start, every day, at 6:30AM and stop at 1:00PM. When it stops, the "Evening" macro starts. It begins checking at 1:00PM and stops at Midnight.

While running, each macro requests the current conditions from the weather station every 5 minutes (300 seconds), keeping us well under the 500 requests per day allowed with our free developer account. Once the temperature crosses 70.0F (rising above it in the morning, dropping below it in the evening), the macro ends its loop, sends the push notification, and waits to run again the next day.
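
For the curious, here's roughly what each macro's loop amounts to, expressed as a hypothetical Python script rather than Keyboard Maestro actions (DEVELOPERID, EXAMPLESTATION, MY_TOKEN, and MY_USERID are placeholders, same as above):

import re
import time
import urllib.parse
import urllib.request

WU_URL = "http://api.wunderground.com/api/DEVELOPERID/conditions/q/AZ/pws:EXAMPLESTATION.json"

def current_temp_f():
    # Request current conditions and pull out temp_f with the regex from above
    response = urllib.request.urlopen(WU_URL).read().decode("utf-8")
    return float(re.search(r'(?<="temp_f":)(.*?)(?=,)', response).group(1))

def send_push(message, title):
    # POST the same form fields the curl command above sends
    data = urllib.parse.urlencode({
        "token": "MY_TOKEN",
        "user": "MY_USERID",
        "message": message,
        "title": title,
    }).encode("utf-8")
    urllib.request.urlopen("https://api.pushover.net/1/messages.json", data)

# The "Morning" loop: check every 5 minutes until the temperature rises above
# 70F, then notify. (The "Evening" loop is the same with the comparison flipped.)
while current_temp_f() < 70.0:
    time.sleep(300)
send_push("It's above 70F outside.", "Close the Doors")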

Here are both the Morning and Evening macros:

The Morning Macro in Keyboard Maestro

The Evening Macro in Keyboard Maestro

Why did you do this and why did I just read that?

Truth be told, I'll probably only get 2 or 3 months of usage from this thing each year. Soon, the temperature will be above 70F all day and night and our monthly air-conditioning bill will cost as much as an iPad.

But, until then, this tool is a delightful aide in my quest to stay cool at home, and it was a fun way to explore the Weather Underground and Pushover APIs.

Plus, I haven't posted anything to this blog in a while and I hear that's bad. So.


  1. No, the irony of where I live does not escape me.  ↩

Replacing Native Nuke Nodes with Custom Gizmos

Friends, I feel like an idiot.

So many of the posts on this site are about creating custom gizmos to replace the native nodes inside of Nuke. But they've never completely satisfied their mission because, until now, I didn't know how to tell Nuke, "Hey, when I call a FrameHold, give me my FrameHold_DS gizmo instead". So my FrameHold_DS gizmo has lived alongside the native FrameHold node since its creation. Which, by the way, is super annoying because it shows up lower in the tab-search results than the native node.

The alternative I've used — to a lesser degree of success — is to customize native nodes with the addOnUserCreate python function. While that has been effective at adding features to the native nodes, it's entirely python based and results in all my customizations being banished to a properties tab named "User". Just the sight of which makes me sad.

The good news is, I have finally figured out how to actually tell Nuke "Hey, when I call a FrameHold give me my FrameHold_DS gizmo instead". The bad news is, it's so incredibly, stupidly easy, I can't believe it took me this long to figure it out.

I was reading the Assigning a Hotkey section of the "Customizing the UI" python guide and saw this:

To assign a hotkey to an existing menu item, you effectively replace the whole menu item.

Let’s assign a hotkey to the Axis2 node.

nuke.menu( 'Nodes' ).addCommand( '3D/Axis', nuke.createNode( 'Axis2' ), 'a')

Pressing a on the keyboard now creates an Axis node.

I've known for a long time that I could add custom hotkeys to nodes, but the tab-search method was always fast enough for me that I've never wanted to do so.

But what caught my eye was the line of code. Before adding the hotkey, it defines the application's menu path to the node, then the createNode call for the node itself.

I thought to myself, there's no way I could just swap out the node name in the createNode call with the name of one of my gizmos. It couldn't possibly be that easy.

It is.

By adding the single line of code —

nuke.menu( 'Nodes' ).addCommand( 'Time/FrameHold', "nuke.createNode( 'FrameHold_DS' )")

— to my Menu.py file, calling a FrameHold node will now result in my FrameHold_DS gizmo being added instead. (Note that, unlike the documentation's example, the createNode call is wrapped in quotes here, so it's stored as a string and evaluated each time the menu item is invoked, rather than run once when Menu.py loads.)

Now, rather than debating which half-assed method for creating custom nodes is more suited to the tool I'm trying to create, I will create custom gizmos and remap their calls using this method.

I've been wanting to do this for so long. It's a very exciting discovery for me, only slightly overshadowed by feeling like a total doofus for not figuring it out sooner.

Postscript

"But what if I want to be able to call the native node at some point, too?"

Well, I have no desire to do that, but if you do, you could always add a second line of code to rename the native node to something else, like:

nuke.menu( 'Nodes' ).addCommand( 'Time/Dumb-Stupid-Native-FrameHold', "nuke.createNode( 'FrameHold' )")

That way it won't show up when you hit tab and start typing "Fra", but you will be able to find it if you need it.


Nuke: Center Transform Button

As I continue to use Nuke in ways in which it was never intended to be used (read: motion graphics), I keep finding small bits of friction that I just can't help but remove with app customizations.

My latest annoyance stems from an animated project that involved more traditional motion-graphics-style animation than the typical interface design and animation I usually create. I built all the graphic assets I would need for the video ahead of time, then assembled and animated them into a sequence, entirely in Nuke.

Again and again, I would merge a new graphic asset onto my shot, and I would have to do some math to figure out how to transform it into the center of the frame. Since the origin (0,0) of a Nuke frame is the bottom left corner, by default, images show up in the lower left of the frame rather than the center. Which is not what I want.

So, I'd add a Transform to the asset and move it to the center of the 1920 x 1080 frame. Since I care about precision, I didn't just eyeball the transform. I want it to be exact.

As long as I add a Transform to a graphic element with the upstream node selected, the Transform will detect the width and height of the asset and place the transform jack in the center of the object. As a Nuke user, you already knew that.

Then, I place my cursor in the x translate parameter box and type 1920/2 - whatever value was in the x center position, as determined by the upstream node. I repeat this process for the y translate parameter, using 1080/2 to match the frame's height.

And lo, we have discovered another simple, math-based operation, prone to human error, ripe for automation. The formula is simple:

  • The x translate parameter should be defined as half the frame width minus half the asset width.
  • The y translate parameter should be defined as half the frame height minus half the asset height.
  • If we have added the Transform node directly to the asset — which is to say we have not added it to our script unconnected — the x center and y center parameters will be automatically filled with the half-width and half-height values of our asset.
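
To make that concrete with the asset we'll use below: for a 584 x 1024 graphic in a 1920 x 1080 frame, the Transform's center will be (292, 512), so x translate = 1920/2 - 292 = 668, and y translate = 1080/2 - 512 = 28.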

In Nuke Python, this formula would be expressed as:

n = nuke.thisNode()

# Get the x and y values of the Transform's center point
xVal = n['center'].value(0)
yVal = n['center'].value(1)

# Get the width and height of the frame format
rVal = nuke.Root()
xfVal = rVal.width()
yfVal = rVal.height()

# Define the variables to set the translate values
txVal = n['translate'].value(0)
tyVal = n['translate'].value(1)

# Find difference between center of frame and center of transform
cxVal = xfVal/2-xVal
cyVal = yfVal/2-yVal

# Translate to center of frame format
n['translate'].setValue(cxVal, 0)
n['translate'].setValue(cyVal, 1)

Next, we take that nicely formatted Python script and shove it into an addOnUserCreate function within our Menu.py file thusly:

def OnTransformCreate():
  nTR = nuke.thisNode()
  if nTR is not None:
    script="n = nuke.thisNode(); xVal = n['center'].value(0); yVal = n['center'].value(1); rVal = nuke.Root(); xfVal = rVal.width(); yfVal = rVal.height(); txVal = n['translate'].value(0); tyVal = n['translate'].value(1); cxVal = xfVal/2-xVal; cyVal = yfVal/2-yVal; n['translate'].setValue(cxVal, 0); n['translate'].setValue(cyVal, 1);"
    k = nuke.PyScript_Knob('center_trans', 'Center Transform')
    k.setCommand(script)
    nTR.addKnob(k)

nuke.addOnUserCreate(OnTransformCreate, nodeClass="Transform")

Now, every Transform node created will have a nice big "Center Transform" button added to it automatically.

So, when I bring in a 584 x 1024 graphic asset like, say, this:

And I merge it over a 1920 x 1080 background...

...add a Transform node — which will find the center point to be (292, 512)...

All I have to do to center my graphic asset is click this button...

...and boom. Automated.

Update – 2020-04-20

Back in January, reader Birger sent me an email explaining his method for centering non-root-sized images over a background.

He writes:

For me, the easiest way would be to put a reformat (1920x1080) underneath the asset and set resize type to none. Would that work for you too?

As I replied to Birger, this will definitely accomplish the same thing once you tick the "black outside" checkbox. Additionally, the Reformat node concatenates just like the Transform node, so if you need to stack transforms, you won't lose any quality due to filtering.

The only arguments I can make for using my version over a Reformat node are:

  1. I like to see a Transform node on the node graph when I'm repositioning things because it helps me understand what's happening at a glance.

  2. When I'm working, I often put down a Transform node before I know I need something centered, so it's easier for me to just click the "Center" button.

  3. In the event that I want to start with a centered image and then move it slightly off-center, I can use the same node, center the object, then move from there. But I probably wouldn't do that since I can add an additional Transform after the Center operation and the nodes would concatenate into a single Transform operation, so this one isn't really valid.

Anyway, thanks, Birger, for the additional solution!

Smarter, More Flexible Viewer Frame Handles

The best thing about posting my amateur, hacky Nuke scripts on this blog is that you, the handsome readers of this site, are often much smarter than I am, and frequently write in with enhancements or improvements to my scripts.

Such was the case, recently, with my Automated Viewer Frame Handles script. Reader and Visual Effects Supervisor Sean Danischevsky sent me this:

def set_viewer_handles(head_handles, tail_handles):
  #from https://doingthatwrong.com/
  # set in and out points of viewer to script range minus handle frames
  # Get the node that is the current viewer
  v = nuke.activeViewer().node()
  # Get the first and last frames from the project settings
  firstFrame = nuke.Root()['first_frame'].value()
  lastFrame = nuke.Root()['last_frame'].value()
  # get a string for the new range and set this on the viewer
  newRange = str(int(firstFrame)+head_handles) + '-' + str(int(lastFrame) - tail_handles)
  v['frame_range_lock'].setValue(True)
  v['frame_range'].setValue(newRange)


# Add the commands to the Viewer Menu
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 16f',
"set_viewer_handles(16, 16)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 12f',
"set_viewer_handles(12, 12)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 10f',
"set_viewer_handles(10, 10)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 8f',
"set_viewer_handles(8, 8)")

In my original script, I had hard-coded the frame handle length into the function, and created duplicate functions for each of my different handle lengths. Sean, being much better at this than I am, created a single function that takes a handle length input from the function call. In his version, all that's required to add an alternative frame handle length to the menu options is to duplicate the line that adds the menu command, and change the handle length that's sent to the function. Sean also added the ability to set different head and tail handle lengths to the script.
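
For example, to offer a 24-frame option (an arbitrary length, purely to demonstrate the pattern), all you'd add is:

nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 24f',
"set_viewer_handles(24, 24)")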

In thanking Sean for sending me this improved version of the script, I mentioned that it seemed that he'd set up the function in a way that would make it easy to prompt users to input a handle length, should they require a custom handle that wasn't already in their menu options. To which he replied with this:

def set_viewer_range(head_handles= 10, tail_handles= 10, ask= False):
    # set in and out points of viewer to script range minus handle frames
    # from https://doingthatwrong.com/
    # with some tweaks by Sean Danischevsky 2017
    if ask:
        p= nuke.Panel('Set Viewer Handles')
        p.addSingleLineInput('Head', head_handles)
        p.addSingleLineInput('Tail', tail_handles)
        #show the panel
        ret = p.show()
        if ret:
            head_handles= p.value('Head')
            tail_handles= p.value('Tail')
        else:
            return

    #only positive integers, please
    head_handles= max(0, int(head_handles))
    tail_handles= max(0, int(tail_handles))

    # Get the node that is the current viewer
    v = nuke.activeViewer().node()

    # Get the first and last frames from the project settings
    firstFrame = nuke.Root()['first_frame'].value()
    lastFrame = nuke.Root()['last_frame'].value()

    # get a string for the new range and set this on the viewer
    newRange = str(int(firstFrame)+ head_handles) + '-' + str(int(lastFrame) - tail_handles)
    v['frame_range_lock'].setValue(True)
    v['frame_range'].setValue(newRange)


# Add the commands to the Viewer Menu
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 16f',
"set_viewer_range(16, 16)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 12f',
"set_viewer_range(12, 12)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 10f',
"set_viewer_range(10, 10)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - 8f',
"set_viewer_range(8, 8)")
nuke.menu('Nuke').addCommand('Viewer/Viewer Handles - ask',
"set_viewer_range(ask= True)")

Now, in addition to the set of common handle lengths in the menu, there's an option to prompt the user for input. The pop-up is pre-filled with a value of 10, which can be customized as well. It's a thing of beauty.

I'd like to thank Sean for sending me both of these scripts. He took my ugly, half-formed idea, simplified it and made it more flexible. I've already begun using his script in place of mine, and I suggest you do the same.

A Few Gizmo Updates

I recently made some small updates to 3 of my Nuke Gizmos. None of them really warrant an entire blog post, so we'll call this post more of a "changelog".

QuickGrade

I made the QuickGrade Gizmo to be a fast, lightweight color correction tool for making common adjustments to a wide variety of clips. It's pretty great at doing just that, with one exception.

One of the most common "image balancing" adjustments is remapping the black and white points of a clip; usually with a Grade node. In the Grade node, one typically uses the Eyedropper to select the brightest and darkest pixels in the image as the Whitepoint and Blackpoint, respectively. This is ostensibly "calibrating" the other knobs in the tool. Once the black and white points have been set, the Lift and Gain knobs are used to set the new values for the darkest and brightest pixels in the frame. They are, by default, set to 0 and 1, having the effect of making the darkest pixel value 0 (pure black) and the brightest value 1 (solid white).
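
In code terms, my understanding of that remap (ignoring the Grade node's multiply, offset, and gamma knobs; this is a sketch, not the node's actual source) looks something like this:

# A sketch of the black/white point remap described above
def remap(value, blackpoint, whitepoint, lift, gain):
    # Normalize the pixel against the sampled black and white points...
    normalized = (value - blackpoint) / (whitepoint - blackpoint)
    # ...then scale it into the new lift/gain range
    return lift + normalized * (gain - lift)

# With the default lift=0 and gain=1, the darkest sampled pixel maps to 0
# and the brightest maps to 1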

In the 1.0 version of QuickGrade, I did include Blackpoint and Whitepoint controls, but I made them Floating Point Sliders, not RGBA Sliders, so the Eyedropper tool was unavailable for selecting pixels from the image. This has now been rectified.

Compare Side-by-Side and Compare Vertical

I continue to find both of these incredibly simple Gizmos to be indispensable to my day-to-day work. Which would make you think I'd have noticed, long ago, that they didn't work when an image had an alpha channel of solid black.

Especially considering that the Gizmos already had the Shuffle nodes in them to replace the alpha with solid white. But, for some idiotic reason, I left the Shuffle nodes in their default configuration, doing nothing at all to the image. This has now been rectified and these Gizmos will work with all images.

Go Get 'Em

The Downloads page has been updated with the latest versions of the Gizmos, so head on over there to get your updates. Seacrest out.

Open Website on Mac Workflow

Update – 2018-01-05

Over at Six Colors, Jason Snell has created a much smarter, safer version of this whole iOS to Mac automation idea.

His version, smartly, builds the shell script on the Mac side — only receiving keywords and URLs from the iOS device — as opposed to my version which is set up to just immediately run whatever text file shell script happens to pop up in my Dropbox folder. A less-than-ideal setup should someone else acquire access to that Dropbox folder. Go check it out.


This may be one of the laziest automation tools I've ever created, but it solves an annoyance that's been bugging me for a long while now.

Oftentimes I'll be looking at a website on my iPhone and I'll want to switch over to viewing it on my Mac. "That's why Apple created Handoff," you say. Yes, well, personally I find Handoff to be slow, unreliable, and only half of the solution.

I'm looking at a website on my phone. I want to press a button on my phone and have that website open in Safari on my Mac. I don't want to wait for a dock icon to appear on my Mac. I don't want to try to click on it, quickly, before it disappears. I don't want to look through a list of open iCloud tabs in Safari on my Mac.

I want to tap and have it open.

What I Did

I used everyone's favorite iOS automation tool, Workflow.app, to create an Application Extension Workflow that grabs the current URL and saves it to a date/time-stamped text file in a specific Dropbox folder. It looks like this:

Then, I set up Hazel on my Mac to monitor that folder and, when it sees a new file, run the file with Bash, then throw it in the Trash. Here's what that looks like:

Overly complicated? Probably.

Lazy? Almost certainly.

Does it do the job I wanted it to do? Absolutely.

It takes about 6 seconds from tapping on the Workflow in Safari on my phone to having an open page in Safari on my Mac. That's not exactly fast, but it's no slower than using Handoff. It's also far more reliable and requires less interaction from me.

And, let's not forget, this will work from any distance. There's no Bluetooth range limit like Handoff. Wherever you are, as long as you've got an internet connection, you can use this Workflow to have a website open and waiting for you on your Mac when you get home.
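
For the curious, here's a rough, hypothetical Python equivalent of what the Hazel rule is doing; the folder name is invented, and in reality Hazel handles the watching and trashing itself:

import os
import subprocess
import time

# Hypothetical Dropbox folder the Workflow saves its text files into
WATCH_DIR = os.path.expanduser("~/Dropbox/OpenOnMac")

while True:
    for name in os.listdir(WATCH_DIR):
        path = os.path.join(WATCH_DIR, name)
        # Run the saved text file as a shell script, then get rid of it
        subprocess.call(["bash", path])
        os.remove(path)
    time.sleep(5)  # check for new files every few seconds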

Nuke: Copy with Expressions

So, here's a thing I made.

I was recently doing some motion graphics work in Nuke, as I do. I had several elements in a shot that I needed to animate-on with the same motion, speed, size, etc.

The graphics were already in their "final" positions, having done a full layout before animating. Now, I just wanted to add an animated Transform to each element. And I wanted to be able to easily adjust them all, together, as I finessed the animation.

But I couldn't just Clone a Transform and paste it into the other branches. The problem being that none of the elements shared the same Anchor Point (Center). If I cloned the Transform, all of the graphics would be scaling from the center of the first element, not their own centers.

I needed a handful of Transforms that were linked by all of their parameters except Center.

So, rather than spending 15 minutes writing and copy/pasting expressions to link all of the various knobs on the Transform nodes, I spent an hour writing a Python tool that will do it with a keyboard shortcut.

def CopyWithExp():
    sourceNode = nuke.selectedNode().name()
    nuke.nodeCopy("%clipboard%")
    nuke.nodePaste("%clipboard%")
    # Pasting leaves the newly created copy selected
    destNode = nuke.selectedNode().name()
    for i in nuke.selectedNodes():
        # Link every knob on the new node back to the source node
        for j in i.knobs():
            i[j].setExpression( sourceNode + '.' + j )
        # Un-link the knobs that shouldn't be shared (see below)
        i['xpos'].setExpression('')
        i['ypos'].setExpression('')
        i['selected'].setExpression('')
        i['channels'].setExpression( sourceNode + '.channels' )


nuke.menu('Nuke').addCommand('Edit/Copy with Expressions', "CopyWithExp()", "^#C")

(This goes in your Menu.py file in your ~/.nuke directory.)

Now, when I press ⌥+⌘+C, it will duplicate the selected node, and link every knob with an expression. Essentially a DIY Clone, but with the ability to easily "declone" individual parameters by right clicking on the parameter and selecting Set to default.

What's Up With That Extra Junk In The For Loop?

It wouldn't be a homemade tool if it didn't include some hacky code to fix some unexpected results. As it turns out, when you programmatically link every knob from one node to another, you also end up linking the hidden knobs that are not exposed to the user in the GUI. Which is not always good.

In the for loop above, you'll see that, after linking every knob between the old and new nodes, I'm reverting the parameters for xpos, ypos, and selected. These parameters are the x and y position of the node on the node graph, and whether or not the node has been selected.

For obvious reasons, we'd like the ability to select the nodes individually. And, if we don't unlink the x and y positions of the nodes, the new node will be permanently affixed atop the old node. You won't even be able to see the original node. Not super helpful.

I've also "manually" linked the channels knob. For some reason, it was not expression linking correctly on its own. It would end up linked to the channel knob, which is a different thing entirely. So, rather than figuring out why it wasn't working, I lazily fixed it with an extra line of code.

Does It Work With Nodes That Aren't Transform Nodes?

Yes, it does. But you may discover a node that has a parameter that breaks in the duplication, like the channels knob did in the Transforms. If/when you find a broken knob, you can fix it by adding it to the "whitelist" of parameters at the end of the for loop, just like xpos, ypos, selected, and channels above.

If you'd like to do some exploring, you can see a full list of the knobs associated with a node by firing up Nuke's Script Editor and running the following command while the node is selected:

for i in nuke.selectedNode().knobs():
    print(i)

Ugh. Are There Any Other Limitations?

There totally are.

Being that the new nodes created are linked via expressions, copying the nodes into a new Nuke project will result in the same error message one would see copying any expression-linked node into a new script. It will complain that it can't find the source node that the expressions are looking for. Even if you copy the source node along with the expression-linked nodes, it will throw an error; then, when you dismiss the error message, the nodes will work. Nuke.

Also, the nuke.nodeCopy("%clipboard%") and nuke.nodePaste("%clipboard%") commands in the Python script use the system clipboard to duplicate the node. This is no different from using the normal system copy and paste tools in Nuke, but some of you out there use weird clipboard utilities that do things I can't predict. So. There's not much I can do for you there.

Anyway

This tool comes from a conversation I had with friend-of-the-blog, and my podcast co-host, Joe Steel. I was complaining about this problem (and others) in our Slack channel, and he mentioned that Katana had a Copy with Expressions command that would do what I was asking.

I'm actually surprised this feature isn't already built in to Nuke, but now, thanks to Joe, all of our Expression-Linked Dreams have come true.