As part of the plan for our feature we chose to invest in the Nuke compositor as well as the Adobe suite. There are a number of reasons why, and before I get too used to Nuke I thought it might be useful to jot down some of the experiences and assumptions I made along the way: the kind of misconceptions that seem obvious once you're using something, the little 'epiphanies' you forget you ever had. Perhaps this will make someone else's life a little easier if they're considering the move too.
This isn't an advert for Nuke, and I'm not trashing AE. I still use both, and each has strengths and weaknesses. I started with AE before Adobe were involved, back when it was from CoSA. In those days I also used Wavefront Composer and Chalice (and PowerAnimator & Maya et al), so node-based approaches aren't new to me either. Of course, Nuke is considerably more expensive than AE. Is it worth it? In a word, yes. In a few more words: yes, so long as the type of work you do would benefit.
We knew that there was a lot of vfx work to be done on Marriage. It's fundamental to telling the story, and as we're in micro-ultra-less-than-you-can-imagine budget territory we don't have the luxury of a full facilities house to fall back on. We can't throw money at the problem. Instead we have to work cleverer and faster. Nuke was seen as a way of speeding up workflows, with less messing around bending software to do something complicated.
Nuke X is considered, rightly so, to be at the top tier of compositing applications. Its draw is the workflow, speed, facilities and openness. (We're using Nuke X, which is the version with the extras in; Nuke is the basic version. Nuke X adds access to de-noising, 3D camera tracking and deep compositing nodes.)
Here are what I personally consider the major differences between Nuke and AE:
AE is layer and time based. You work on a timeline, stacking elements on top of each other in time, and you add effects to each layer as you go. It's pretty quick to generate large multi-layer compositions, but it can be quite difficult to reverse engineer or remember how something is constructed. Conversely, it's very easy to 'linearly edit' and see the timeline at a glance.
Nuke is node based. You start with a Read node, and the output of that is 'piped' into the next node, split off into a branch, processed, brought back, combined with other Read nodes and output via a Write node. It's like coding up parcels of intelligence that the images run through. These node trees can be annotated, and are usually made from smaller, more discrete steps than the effects plugins in AE.
Because you have finer-grained control over the process, Nuke is more flexible. Most image processing operations are simple math-based convolutions or functions that are built up to create a result. You might hit an issue in AE and be stuck, because you can only go so far if the particular effect you're using isn't working for you 100%. In Nuke you can usually rebuild the effect to a certain degree or 'code' around it. Another way of looking at it is that the nodes in Nuke are a lower level of functionality, which makes it more flexible, although it can be a little daunting to start with. A side effect of this is combining nodes in ways that you wouldn't normally see together: for example, converting a track in 3D to its 2D screen position and then using a 2D UV distortion map node to tweak that position based on the lens distortion.
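To make that "built up from small math operations" idea concrete, here's a toy sketch in plain Python (not Nuke's API, and deliberately naive): an unsharp-mask sharpen rebuilt from two primitives, a box blur (a simple convolution) and a per-pixel merge, the way you'd chain a Blur node into a Merge in Nuke.

```python
def box_blur(pixels, radius=1):
    """Naive 1D box blur: each output pixel is the mean of its neighbours."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(pixels, radius=1, amount=1.0):
    """Sharpen rebuilt from primitives: original + amount * (original - blurred)."""
    blurred = box_blur(pixels, radius)
    return [p + amount * (p - b) for p, b in zip(pixels, blurred)]

row = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]  # a hard edge
print(unsharp_mask(row))  # values overshoot either side of the edge
```

If the stock sharpen isn't doing what you want, this is the kind of thing you can rebuild and tweak at each stage, which is exactly the flexibility the node graph gives you.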
One of the 'a-ha' moments in understanding Nuke is realising that in the trunk of a node graph, Nuke is passing not just RGBA information but potentially hundreds of other arbitrary channels. So even if one part of the tree isn't using that information, a later part of the tree can. Some of the node design approaches for Nuke revolve around this ability to store channels in one place and retrieve them later on. This kind of approach would be nigh on impossible to mimic in AE and represents a shift in the way to think about compositing.
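A rough illustration of that channel flow, in plain Python (nothing here is Nuke's actual API; the function names are loosely modelled on nodes like ShuffleCopy and are invented for this sketch): the image flowing down the trunk is a bundle of named channels, and an intermediate node that only touches rgba leaves everything else intact for a downstream node to pick up.

```python
def shuffle_in(image, name, data):
    """Stash some data into a named channel, early in the tree."""
    out = dict(image)
    out[name] = data
    return out

def grade(image, gain):
    """An intermediate node: touches only rgba, everything else passes through."""
    out = dict(image)
    out["rgba"] = [v * gain for v in image["rgba"]]
    return out

def pull_channel(image, name):
    """A downstream node retrieving a channel stored far upstream."""
    return image[name]

img = {"rgba": [0.5, 0.5, 0.5, 1.0]}
img = shuffle_in(img, "hero_matte", [1.0, 0.0])  # stored early in the tree
img = grade(img, 2.0)                            # intermediate node ignores it
print(pull_channel(img, "hero_matte"))           # retrieved later, untouched
```

The point is that the matte survives the grade unchanged; in a layer-based comp you'd have to carry it along as a separate layer and keep it in sync by hand.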
In Nuke you need to keep track of this flow of channels. Remember that comping is A over B, and it's the B input that usually provides the trunk of the image flow.
Nuke forces you to truly understand the compositing operations you are using. For example, you have to explicitly premultiply and unpremultiply alpha channels as you use them. It can be quite confusing to wrap your head around the why and how when you first start out. Actually, truth be told, it can still be quite confusing to work out the why and when even after you've been using Nuke a while.
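As a minimal worked example (plain Python, and standard Porter-Duff maths rather than anything Nuke-specific): the 'over' operation assumes premultiplied colour, which is why the usual discipline is to unpremultiply before colour corrections and premultiply again before merging.

```python
def premult(r, g, b, a):
    """Multiply colour by alpha, ready for merging."""
    return (r * a, g * a, b * a, a)

def unpremult(r, g, b, a):
    """Divide alpha back out, ready for colour corrections."""
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

def over(fg, bg):
    """Porter-Duff 'A over B' on premultiplied pixels: A + B * (1 - alpha_A)."""
    return tuple(f + b * (1.0 - fg[3]) for f, b in zip(fg, bg))

# Half-transparent red over opaque green:
fg = premult(1.0, 0.0, 0.0, 0.5)
bg = premult(0.0, 1.0, 0.0, 1.0)
print(over(fg, bg))  # (0.5, 0.5, 0.0, 1.0)
```

Run 'over' on unpremultiplied pixels and the semi-transparent red would contribute at full strength, giving fringed, too-bright edges, which is exactly the artefact you chase when the premult state is wrong.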
Nuke is very good at letting you reuse approaches or bits of a tree. You can copy parts of a tree and paste them elsewhere. You can create gizmos from scripts and even give them controls. A gizmo is like wrapping up a Nuke node tree in a black box, adding inputs and an output, and handing that to someone. In AE it's quite difficult to reuse a comp for other layers and images. You can manually go through and replace layers, but if the sizes are off then you'd need to make changes and it quickly becomes a headache. A gizmo, by comparison, can have lots of different inputs, whereas most AE plug-ins tend to operate on a single layer.
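A loose analogy in plain Python (a real gizmo is a wrapped Nuke node tree with a panel of knobs, not a function, and the effect here is invented purely for illustration): a small sub-tree of operations gets wrapped behind one exposed control, so the whole thing can be dropped in and reused like a single node.

```python
def grade(pixels, gain):
    """One 'node' in the sub-tree: a simple gain."""
    return [p * gain for p in pixels]

def mix(a, b, t):
    """Another 'node': blend two streams by t."""
    return [x * (1 - t) + y * t for x, y in zip(a, b)]

def make_punch_up(strength=0.5):
    """The 'gizmo': a fixed sub-tree (grade mixed back over the input)
    wrapped behind one exposed knob, 'strength'."""
    def gizmo(pixels):
        graded = grade(pixels, 1.5)
        return mix(pixels, graded, strength)
    return gizmo

node = make_punch_up(strength=0.5)
print(node([0.2, 0.4]))
```

The person using it only ever sees the one knob; the internal wiring stays hidden, which is the reuse story AE's layer-replacement workflow struggles to match.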
The UI workflow in Nuke is a little more production focused. The Roto and Paint nodes are laid out sensibly and are for the most part pretty robust. There's an element of Nuke being battle tested in day-to-day production. The support from The Foundry is excellent too, a benefit of having a smaller customer base to listen to.
The viewer system in Nuke is very flexible: you can connect the viewer to any node very quickly and flick between different views or compare different versions. Jumping around the node tree, checking the image at any stage, is very quick.
Time in Nuke is confusing. Generally Nuke is shot based: you work on a single shot at a time. You can take a Read node, connect a viewer, and use the timeline to scan through the footage. If you want to focus on a section of that clip then you'd put a TimeClip node after it and use that to isolate part of the larger clip. You can use nodes to edit and 'switch' between image streams, but it's not very visual. Editing clips together in AE is much easier and more obvious. This confused me for a while when making the transition.
(Note: Nuke 8, which has just been announced, appears to have had some time spent on the dope sheet, which is Nuke's time-based view, so these observations may alter soon.)
Nuke has a very robust 3D system built in. AE also has a 3D system, but it has never felt truly integrated. In Nuke you can create a 3D environment (with a good UI) and pipe that scene to a renderer, which can then feed into your compositing tree, and you have all of Nuke's nodes available when creating or using images as textures. The integration between the 3D and 2D is very tight, and the renderer node can output many types of layer, from depth to indirect passes. These passes can be invaluable when integrating other elements into a composite. You can have any number of these 3D scenes in a tree. Conversely, the 3D system in AE appears to be targeted more at motion graphics and AFAIK only outputs RGBA.
Nuke includes a number of tools for reconstructing real-world environments within its 3D system. It's possible to solve a 3D camera move and generate a coloured point cloud of all the solved 3D points. Pipe that into a scene and build geometry around it, or use it as a basis for placing other elements.
It's even possible to use a third-party renderer within Nuke; we've been using AtomKraft, which replaces Nuke's built-in scanline renderer with a full RenderMan-based version. This includes image-based lighting and produces some spectacular results very quickly.
Nuke also includes a comprehensive particle system, fully integrated into the UI and workflow. AE has the wonderful (but third-party) Trapcode Particular, which can produce great results, but it's pretty tough to control and to see what you're doing. Nuke allows particles to be sculpted by any number of forces and rules.
I think one of the key differences in approach between AE and Nuke is a recognition that no vfx shot is ever the same, no keying shot is the same. The idea of dropping a keying plug-in on some footage, tweaking some parameters and bang, there's your key, is fanciful at the best of times. Greenscreens are never shot perfectly, even when you're the one shooting them. A plug-in workflow might get you 90%, but that last 10% is the killer.
Building up a keying matte in After Effects using multiple techniques can be quite a struggle: combining roto, different keys and so on. The coup de grâce usually comes with unusual keying edges, for example when the RGB channels show diffraction around the key differently (a future post). In Nuke there are ways to erode and treat channels differently. This is difficult in AE: splitting the channels into R, G, B or even YCrCb and processing each separately is just about possible, but it's not really part of AE's workflow.
Colour in Nuke is very simple. The whole application is built on a 32-bit floating point linear colour space, so any time you read or write something you specify what colourspace the image is coming in as and what you want going out, and Nuke converts from one to the other. The Nuke viewer can also change the output colourspace for viewing: if you're viewing on an sRGB monitor then the view is set to sRGB; on a REC709-calibrated monitor you'd set it accordingly, so that the colours match across both displays.
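A sketch of the round trip Nuke does at Read and Write time: decode to linear, do the maths in linear light, encode back out for the display. The transfer functions below are the standard sRGB curves, and the 50% mix shows why working linear matters: averaging the encoded values directly (a gamma-space comp) gives a different, wrong answer.

```python
def srgb_to_linear(v):
    """Standard sRGB decode (what a Read node does for sRGB footage)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Standard sRGB encode (what a Write node or the viewer does)."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Correct: average in linear light, then encode for the sRGB monitor.
mix_linear = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

# Wrong: average the encoded values directly.
mix_encoded = (black + white) / 2

print(mix_linear, mix_encoded)  # ~0.735 vs 0.5
```

A 50/50 dissolve between black and white lands at roughly 0.735 in encoded sRGB when done correctly, not 0.5, which is why gamma-space dissolves and defocuses always look slightly off.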
AE projects can be switched between 8-bit, 16-bit and 32-bit, and AE attempts to automagically work out what the footage is. It can be confusing, because you can switch between compositing methods (linear or gamma based) and choose different colour working spaces. It also seems a little unnecessarily complex. It may seem simple going in (and for a lot of projects it is) because it's all hidden, but it can very quickly become confusing if you have specific requirements.
Colour is a big topic; I'm writing another post on it and how to work with it.
Are there things one application can do that the other cannot?
AE has been used to great effect in many top-tier features. I'm not sure there's much you can do in one and not the other, but how you get there is vastly different. (Although one observation: AE tends to be used for monitor displays, graphics, HUDs and titles more than for actual core vfx work.)
It might be a case of how well one does something over the other. For example, 3D camera tracking is offered by both. AE has a simple, elegant interface; Nuke can start simple, but you can tweak and refine settings. Broadly speaking, Nuke has always offered a better solve than AE (in my experience of comparing both side by side), but more importantly it offers tools to improve a solve when it's not ideal. Nuke also recognises the need to include lens distortion details in these operations, and has nodes for calculating the degree and type of distortion for a lens, either via grids that have been shot or by pulling lines from the actual footage. Being able to undistort and redistort elements to comp into a frame is very important for realism, and AE tends to hide this workflow (I'm not sure it can do complicated distortions beyond a simple Lens Distortion plug-in).
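A toy sketch of that undistort/redistort round trip in plain Python, using a one-term radial model (real lens solves use much richer models than this): elements are comped in undistorted space, then the lens distortion is reapplied on the way out so everything sits in the same, distorted, photographed world.

```python
def distort(x, y, k):
    """Simple one-term radial distortion about the centre: r' = r * (1 + k * r^2)."""
    scale = 1 + k * (x * x + y * y)
    return (x * scale, y * scale)

def undistort(x, y, k, iterations=10):
    """Invert numerically via fixed-point iteration (this model has no
    closed-form inverse)."""
    ux, uy = x, y
    for _ in range(iterations):
        dx, dy = distort(ux, uy, k)
        ux += x - dx
        uy += y - dy
    return (ux, uy)

# Undistort a point, work on it in undistorted space, redistort on output:
x, y = undistort(0.8, 0.6, k=0.1)
rx, ry = distort(x, y, k=0.1)
print(round(rx, 6), round(ry, 6))  # round-trips back to 0.8 0.6
```

Skip the redistort step and any element you comp in will slide against the plate as soon as the camera moves, which is why Nuke exposing this workflow explicitly matters so much.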
The same goes for 2D tracking. It's very easy in both apps, and I've found that AE is a bit more accurate by default, but if AE cannot do the track then you're done, whereas there are more tools in Nuke to get around it. Nuke also includes a planar tracker, which can work really well in some cases; planar tracking in AE is offered via Mocha and isn't integrated.
Some areas like Deep Compositing are only in Nuke and likely to remain that way for a while because that needs to be fundamentally part of the core system in order to be truly effective. We're not using any DC yet, so my experience is somewhat limited.
3D integration is light years ahead in Nuke, as is the particle system.
Getting footage in and out of Nuke is more limiting. Nuke is built around the EXR format, and as such file-based inputs and outputs work best. However, I much prefer to use mezzanine movie codecs (Cineform, DNxHD and ProRes 4444) to transport data around: I find them faster and obviously lighter on space, being compressed. The purist vfx folk will balk at the idea of compression, but in terms of space, speed and workflow I've not seen any cases where uncompressed output is noticeably better than these lightly compressed formats. AE, by comparison, works well with all formats; Nuke really doesn't like them and in some cases won't work with them at all, or won't really take advantage of them.
Nuke's method of caching seems a bit more structured than AE's. The global cache in AE is pretty much an automatic solution. In Nuke you can write out the contents of a node tree at any point into a disk cache, and the downstream tree can then source from that. I use it a lot to cache complex parts of a branch. Personally I much prefer choosing what to cache. Nuke writes out EXRs of the tree at that point, including all channels if required.
Using Nuke was a small uphill hike, but once over that peak everything falls into place. It has become an enjoyable process solving shot issues, and it promotes a level of creativity and exploration that is sometimes quite difficult to achieve with a large AE comp (in terms of speed of updates and of isolating a small section). Within Nuke the viewer can be focused on a small piece of the final comp while you're way back in the early stages of a tree, tweaking some part of a source texture.
In the future we'll add more posts about actually using Nuke on specific shots and try to share some information.