April 06, 2022 | Spring 2022 Issue

KEY NEW FEATURES IN COMPOSITING TOOLS TO HELP EXPEDITE YOUR WORK

By IAN FAILES 

Nuke’s CopyCat node, which relies on machine learning techniques, in action for carrying out a paint fix. (Image courtesy of Foundry)

If you’re a visual effects compositor, you probably stick mostly to one tool for your 2D work, and so may not be familiar with the latest features in other compositing software that could improve your productivity.

By surveying key new features across the most popular compositing tools right now, you may discover a useful feature or workflow in a tool you don’t currently use.

Here, we talk to representatives from Foundry (Nuke), Autodesk (Flame), Blackmagic Design (Fusion) and Adobe (After Effects), who each highlight a new or prominent feature in their respective compositing toolset. Some of these features involve machine learning or AI techniques, while others deal with specific workflow speed-ups. 

NUKE’S CUSTOM MACHINE LEARNING NETWORKS WITH COPYCAT 

A number of compositing tools have adopted machine learning and AI techniques to enhance image processing. In particular, Foundry’s Nuke, via its CopyCat node, lets you train bespoke machine learning networks tailored to individual compositing problems.

“The easiest way to think about it is with, for example, a cleanup shot,” outlines Juan Salazar, Nuke Product Manager at Foundry. “Say you have a really difficult object removal to do on a 100-frame shot. You could do that cleanup on four or five carefully chosen frames, and pass those into CopyCat along with the original, un-cleaned-up versions of those frames. With that data and a bit of training time, CopyCat can train a machine learning algorithm whose only job is to remove that specific object from that specific shot.

“Once you run that network on your shot,” continues Salazar, “using the new Inference node, that object will be removed from every frame. Now, it probably won’t work for any other shots, but that doesn’t matter; you’ve solved your specific problem and you’ve done 100 frames of really tricky cleanup, but only painted five or so frames manually – the rest is done by the computer.”
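
For readers who script Nuke, the workflow Salazar describes can be expressed roughly as a node graph built in Python. The sketch below is illustrative only; the CopyCat and Inference node class names match NukeX, but the input order, knob names and file paths are assumptions that may vary by version.

```python
# A minimal sketch, in Nuke's Python API, of the CopyCat workflow described
# above: feed a few painted frames plus their originals into CopyCat, then
# apply the trained network to the whole shot with Inference. Input order
# and knob names below are assumptions; check the node docs before relying
# on them.
import nuke

# The un-cleaned plate and the handful of hand-painted fix frames
source = nuke.createNode('Read')
source['file'].setValue('plates/original.####.exr')   # hypothetical path

cleaned = nuke.createNode('Read')
cleaned['file'].setValue('paint/cleaned.####.exr')    # hypothetical path

# CopyCat trains an image-to-image network from input/ground-truth pairs
copycat = nuke.createNode('CopyCat')
copycat.setInput(0, source)    # assumed: input 0 = source frames
copycat.setInput(1, cleaned)   # assumed: input 1 = ground truth

# After training, Inference applies the resulting .cat model to every frame
inference = nuke.createNode('Inference')
inference.setInput(0, source)
inference['modelFile'].setValue('training/copycat.cat')  # assumed knob name
```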

CopyCat came about when Foundry Research Engineering Manager Ben Kent, as part of Foundry’s Artificial Intelligence Research (AIR) team, was investigating how to develop a machine learning deblur. “He was working with a shot where the focus slipped for a few frames,” recalls Salazar. “It’s the kind of shot where you’d have to cut when the focus slipped and be locked to the frame range that was in focus. Ben realized that by taking pieces of the frame while they were in focus, and the same pieces of the frame when they were out of focus, he could train an ML network to make the out-of-focus frames look like the in-focus frames. The network he trained fixed the focus slip and made the whole frame range usable.”

CopyCat copies sequence-specific effects such as garbage matting, beauty repairs or deblurring. (Image courtesy of Foundry)

Salazar adds that CopyCat is not just a cleanup tool, and that it can generate any image-to-image machine learning network. “That means if you have reference images for where you’re starting and what you want the result to look like, you can train an ML network to do that for you. We’ve seen people doing all sorts of really cool things with it – style transfers, deep fakes, deblurs, and applying full lighting effects to normal renders.”
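
The general technique CopyCat builds on, paired image-to-image training, can be shown with a toy training loop. The PyTorch sketch below stands in for the concept only and is not Foundry’s implementation.

```python
# Toy illustration of paired image-to-image training: given (source, target)
# frame pairs, fit a network that maps one to the other. A conceptual sketch,
# not Foundry's implementation.
import torch
import torch.nn as nn

# A tiny convolutional net standing in for a real restoration architecture
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train(pairs, steps=1000):
    # pairs: list of (source, target) 3xHxW tensors, e.g. five painted frames
    for step in range(steps):
        src, tgt = pairs[step % len(pairs)]
        pred = model(src.unsqueeze(0))
        loss = loss_fn(pred, tgt.unsqueeze(0))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```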

FLAME GOES AI WITH CAMERA ANALYSIS, A MATCH MOVE 3D CAMERA SOLVER

Autodesk, similarly, has introduced several machine learning approaches into its compositing package, Flame. A highlight is Camera Analysis, a match move 3D camera solver that combines the techniques of SFM (Structure from Motion), Visual SLAM (Simultaneous Localization and Mapping), and Semantic Machine Learning for non-static object recognition. With it, you can complete visual effects tasks like adding objects into a scene, carrying out cleanup work with projections, or crafting set extensions.

How does Camera Analysis work? Will Harris, Flame Family Product Manager at Autodesk, explains. “Autonomous vehicle-style ‘smart’ vision and dense point cloud solving work in conjunction with machine learning object detection to discard bad data from the scene of the camera solve, such as moving people, vehicles, skies and unwanted reflections such as those found in lakes, puddles, etc. Once solved, you can build accurate z-depth maps and/or 3D geometry of objects in your scene, for example, for painting, projections or relighting.” 
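
The core idea of discarding non-static data before a camera solve can be sketched in a few lines. The illustrative Python/OpenCV snippet below assumes a per-frame mask of moving objects (from any ML segmentation model) is already available, and shows only the general technique, not Autodesk’s implementation.

```python
# Rough sketch of masked camera solving: detect features only on static
# geometry, then estimate camera motion between frames. Not Autodesk's code;
# the dynamic_mask input is a hypothetical ML segmentation output.
import cv2
import numpy as np

def static_features(gray, dynamic_mask):
    # Detect features only where the segmentation says the scene is static
    orb = cv2.ORB_create(nfeatures=5000)
    keep = cv2.bitwise_not(dynamic_mask)  # 255 where static, 0 where moving
    return orb.detectAndCompute(gray, mask=keep)

def relative_pose(kp1, des1, kp2, des2, K):
    # Match static features between two frames and solve for camera motion
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation and translation between the two frames
```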

Camera Analysis arose inside Flame due to requests from users on the Flame Feedback website, says Harris. “In developing the tool, we knew it had to be fast – by leveraging GPUs – accurate, and predominantly automatic, to allow Flame artists to use the feature within their daily workflow. The tool came together through new industry developments, including autonomous vehicle-style spatial solving via 3D motion vector analysis, as well as near real-time SLAM point solving, based solely on video footage. 

Camera Analysis is a match move 3D camera solver in Flame that aids in adding objects into a scene, carrying out cleanup work with projections, or crafting set extensions. (Image courtesy of Autodesk)

Face match moving using the Flame Camera Analysis solver. (Image courtesy of Autodesk)

The user interface setup for Blackmagic Design’s Fusion 17 Studio. (Image courtesy of Blackmagic Design)

Fusion is now able to run on the Apple M1 platform. (Image courtesy of Blackmagic Design)

A scene being rendered in Adobe After Effects’ new Multi-Frame Rendering architecture. (Image courtesy of Adobe)

The Composition Profiler in After Effects, which displays which layers and effects in a composition are taking the most time to render. (Image courtesy of Adobe)

“Beyond these advancements,” adds Harris, “it was a natural progression to reuse our machine learning-trained models for a human head, body, skies, and other objects to provide automatic occlusion from the algorithm. As a result, in a very short amount of time, this offers a primarily automatic camera solve with tens of thousands of points locked to the static features of a shot, with a reliable 3D camera.” 

Indeed, Harris sees big changes coming in compositing and VFX tools to continue to take advantage of AI and machine learning workflows. “Among the recent machine learning-powered features are depth and face maps, sky extraction, human head and body extraction, and Salient Keyer,” Harris notes. 

FUSION NOW SUPPORTING MORE PLATFORMS 

The standout development for Blackmagic Design’s compositing software, Fusion Studio, is not so much a single feature as its new capability to run on the Apple M1 platform. These M1 chips are available in the latest MacBook and Mac mini systems from Apple, furthering a trend that sees visual effects creation enabled on smaller, portable machines.

As Dan May, President of Blackmagic Design, Americas, comments, the M1 platform is “a non-x86 GPU system and the fourth platform we support. Fusion now runs seamlessly on Windows, Linux, OSX and M1. What makes this possible is the under-the-hood GPU compute optimizations using Apple’s Metal, AMD and NVIDIA GPUs to process more of the pipeline faster.”

Fusion was primarily Windows-based prior to its acquisition by Blackmagic Design in 2014. It has since been redeveloped to work across platforms and be more broadly available. Indeed, Fusion is closely tied in with Blackmagic Design’s DaVinci Resolve, which has both free and paid versions. 

Beyond the M1 support, May notes that there are also improvements in the GPU compute image pipeline and multi-language support for the interface. “We’ve also expanded Fusion’s availability by integrating its capabilities into our DaVinci Resolve post-production software, which has been a long development path,” advises May. “So, there’s now the standalone Fusion Studio VFX and motion graphics software, and DaVinci Resolve, which is an end-to-end post-production solution that features a robust Fusion page for compositing.” 

AFTER EFFECTS IS ALL-IN ON MULTI-FRAME RENDERING

A central new feature in Adobe’s After Effects is Multi-Frame Rendering, which is designed to utilize all of a system’s CPU cores when previewing and rendering. It is accompanied by two specific capabilities aimed at enhancing performance: Speculative Preview, which renders compositions while the application is idle, and the Composition Profiler, which displays which layers and effects in a composition are taking the most time to render.
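
The core concept is straightforward to sketch: independent frames can be rendered in parallel across CPU cores. The minimal Python example below illustrates the idea only, with a hypothetical render_frame() standing in for the real per-frame work; it is not Adobe’s implementation.

```python
# Conceptual sketch of multi-frame rendering: independent frames are farmed
# out across CPU cores instead of rendered one at a time. render_frame() is
# a hypothetical stand-in for the real per-frame work, not Adobe's API.
import os
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame):
    # Placeholder for compositing and writing one frame to disk
    return f"frame_{frame:04d}.exr"

def render_sequence(first, last, workers=None):
    workers = workers or os.cpu_count()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(first, last + 1)))

if __name__ == "__main__":
    outputs = render_sequence(1, 100)  # renders frames 1-100 in parallel
```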

“The Composition Profiler will show you what’s actually taking up your render time,” observes Victoria Nece, Adobe Senior Product Manager, Motion Graphics and Visual Effects. “That’s something that I think is particularly relevant for anyone who’s dealing with a complex project or handing off a project to someone else and trying to figure out, ‘Wait, what’s going on here? Why isn’t this performing like it should?’”

Nece identifies Multi-Frame Rendering as being the “No. 1 feature request for the better part of a decade. This is something that everyone’s been asking for – ‘Use more of my hardware, use it more efficiently.’ When we took a look at what people wanted, it wasn’t just to make it faster. We really saw that it was about the preview and iteration loop as the biggest things. It’s not always about your final export. So now while you’re browsing for effects or you’re writing an email, it’ll start rendering in the background so that when you get back to your timeline, it’s ready to preview.”

To enable Multi-Frame Rendering, Adobe’s internal ‘Dynamic Composition Analysis’ allows After Effects to assess the available hardware and the complexity of the project from frame to frame, then scale the system resources used for rendering up or down accordingly. “If you have a project that’s really slow through one part and really light through another part,” explains Nece, “it’ll scale up and down how much of your system it’s using for each frame. You can actually watch that number change up and down while you render.

“Multi-Frame Rendering is a workflow efficiency feature,” concludes Nece, who adds that M1 support is also moving ahead in After Effects. “It’s really all about speed, and it is the idea of helping the software get out of your way. And so, instead of being something that’s front and center – like, ‘Wow, this is a great new effect’ – now you don’t have to think about what’s going on inside After Effects because you can just think about the work you’re doing in it instead.” 
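
The per-frame scaling behavior Nece describes can be approximated in a sketch: estimate each frame’s cost and throttle concurrency for heavy frames so they don’t exhaust memory, while light frames fill all the cores. The helpers below (estimate_cost, render_frame) are hypothetical stand-ins, not Adobe’s API.

```python
# Sketch of adaptive resource scaling for frame rendering. estimate_cost()
# and render_frame() are hypothetical placeholders, not Adobe's API.
import os
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame):
    return f"frame_{frame:04d}.exr"  # placeholder for real render work

def estimate_cost(frame):
    # e.g. number of active layers/effects on this frame (placeholder)
    return 8 if frame % 10 == 0 else 1

def adaptive_render(frames):
    # Split the sequence into light and heavy passes, giving light frames
    # wide concurrency and heavy frames a throttled worker count
    light = [f for f in frames if estimate_cost(f) <= 2]
    heavy = [f for f in frames if estimate_cost(f) > 2]
    cores = os.cpu_count() or 4
    with ThreadPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(render_frame, light))   # wide and shallow
    with ThreadPoolExecutor(max_workers=max(1, cores // 4)) as pool:
        results += list(pool.map(render_frame, heavy))  # narrow and deep
    return results
```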

A New Compositing Tool on the Horizon?

Arguably, there are relatively few off-the-shelf compositing packages available to the wider public, but a new tool is poised to enter the market: Autograph, from French company Left Angle. Left Angle’s founders, Alexandre Gauthier-Foichat and Francois Grassard, have experience in software development, including the animation software Anima and the open-source compositing tool Natron.

Autograph is designed to focus on motion graphics and VFX, as well as the wider video and multi-format needs of content made for social networks. The tool is a compositor that combines a layer-based and a nodal interface, and relies on a combination of 2D and 3D approaches.

“The layer-based approach is there to help you work quickly and easily synchronize animated graphic designs with sound and music,” notes Grassard. “On top of that, Autograph will allow for the connection of parameters together and adding of different kinds of modifiers – for images and sound but also textual, numerical and geometrical – without expressions or scripting, giving you a high level of control.

“We will also provide several features for creating complex animations quickly, precise VFX tools like a keyer, re-timer and a tracker based on motion vectors. And we will integrate a full 3D renderer which can ingest huge scenes through the USD standard. The idea is that you can render your scene on the fly and then composite all passes without saving any files on your storage, in a unified solution.”
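
Ingesting a scene through the USD standard, as Grassard describes, can be sketched with Pixar’s open-source pxr Python bindings. This is a generic illustration of USD ingestion, not Left Angle’s code; the file path is hypothetical.

```python
# A minimal sketch of ingesting a scene via the USD standard, using Pixar's
# open-source 'pxr' Python bindings. Illustrates USD ingestion in general,
# not Left Angle's implementation; the file path is hypothetical.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open('shots/seq010/shot020.usd')

# Walk the scene graph and list the renderable meshes
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        print(prim.GetPath())

# Frame-range metadata a renderer could use when rendering on the fly
print(stage.GetStartTimeCode(), stage.GetEndTimeCode())
```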

The first version of Autograph is planned for release in Q2 of 2022. It will be distributed by RE:Vision Effects. 

A screenshot from the upcoming new compositing tool Autograph, from Left Angle. (Image courtesy of Left Angle)

