VFX Voice



June 08, 2021

ISSUE: Summer 2021

EVOLVING TECHNOLOGY MAKES INVISIBLE EFFECTS VISIBLY MORE EFFICIENT

By TREVOR HOGG

Images courtesy of Adobe, Autodesk, Foundry, SideFX, Cinefade and Epic Games.

An image produced by Arnold, the physically-based renderer.

Despite the ability to create fantastical worlds and creatures digitally, the majority of the work in the visual effects industry is focused on making unnoticeable alterations, whether that means painting out rigging, extending sets and locations, or doing face replacements for stunt doubles. Leading the way in building the tools and technology used to execute these invisible effects are software companies Autodesk, Adobe, Foundry and SideFX, as well as Epic Games and Cinefade.

“It’s incremental advancements in the tools up and down the line with periodic leaps sprinkled in there,” believes Ben Fischler, Industry Strategy Manager at Autodesk. “I spent years doing lighting and rendering in compositing. If you look to the move to path tracers like Arnold and physically-based shading, lighters can think the same way that a DP would on set. The integration of Arnold with Maya and 3ds Max is one of the biggest pieces that we now have. Arnold has both a CPU and GPU mode, and for doing look development and lighting work on a desktop it is incredibly fast, and you can be rendering in the Maya viewport.” 
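For readers who script their Maya sessions, the CPU/GPU switch Fischler describes can also be flipped programmatically. The following is a minimal sketch only: it assumes the MtoA plug-in is installed, and the options-node helper and the renderDevice attribute name follow common MtoA conventions that should be verified against your version.

```python
# Minimal sketch: toggling Arnold between CPU and GPU rendering from a Maya
# Python session. Assumes the MtoA plug-in is available; the helper
# mtoa.core.createOptions() and the attribute name "renderDevice" follow
# common MtoA conventions and should be checked against your release.
import maya.cmds as cmds

# Load the Arnold plug-in if it is not already active.
if not cmds.pluginInfo("mtoa", query=True, loaded=True):
    cmds.loadPlugin("mtoa")

# Create the Arnold render settings node up front (assumed helper name).
import mtoa.core
mtoa.core.createOptions()

# Make Arnold the active renderer so look-dev renders use it in the viewport.
cmds.setAttr("defaultRenderGlobals.currentRenderer", "arnold", type="string")

# 0 = CPU, 1 = GPU (assumed enum order for the renderDevice attribute).
cmds.setAttr("defaultArnoldRenderOptions.renderDevice", 1)
```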

 Where the technology has evolved most recently is with in-camera visual effects. “It’s a process that is changing the future of all visual effects,” notes David Morin, Industry Manager for Media & Entertainment at Epic Games. “With in-camera visual effects, the greenscreen is replaced with LED displays on set while shooting live-action elements. This can enable in-camera capture of both practical and digital elements, giving the director and cinematographer more access to the final look of a scene earlier than ever before. This is an important step forward for invisible effects.”

Unreal Engine from Epic Games allows more in-camera effects to happen on set in real time, as on The Mandalorian.

“I look at machine learning as the assistant you wish you could hire rather than the thing that is going to replace you. We don’t want to replace people with robots.”

—Victoria Nece, Senior Product Manager, Motion Graphics and Visual Effects, Adobe

Cinefade was frequently used by cinematographer Erik Messerschmidt while shooting Mank.

Content-Aware Fill in Adobe Photoshop and After Effects uses machine learning to fill in the background when objects are removed.
Solaris is a suite of look development, layout and lighting tools that enables the creation of USD-based scene graphs from asset creation to final render.

“Creating invisible effects has always been much of the ‘bread and butter’ of Foundry tools, including Nuke, Katana, Mari, and even in the early days of Foundry’s Furnace toolset for rig removal and clean-up tasks,” states Christy Anzelmo, Senior Director of Product at Foundry. “Nuke’s underlying ethos is to give the artist technical and creative control of what is happening in their shot to achieve those high-quality results. Its highly scalable processing engine, industry-standard color management workflows, along with the ability to quickly set up projections make Nuke well-suited for creating convincing digital matte paintings and other invisible effects. Mari also works well with the very high-quality textures needed for creating photoreal CG elements, with GPU-enabled workflows that enable artists to have a fluid creative workflow when texturing.”

“At SideFX, we have a relentless drive toward cinematic/photoreal quality throughout the pipeline and continually push for significant improvements in our tools for creating character FX [cloth, hair and fur], crowds, water, fire, smoke, destruction and world building,” remarks Cristin Barghiel, Vice President of Research and Development at SideFX. “Houdini Digital Asset technology provides the ability for a technical artist to build and package up a powerful and complex tool and give that asset to a creative artist who can use it even without knowing Houdini. Solaris is a suite of look development, layout and lighting tools that empower artists to create Universal Scene Description (USD)-based scene graphs that go from asset creation to final render. Solaris integrates with USD’s Hydra Imaging Framework for access to a wide range of renderers such as the new SideFX Karma, Pixar RenderMan, Autodesk Arnold, Maxon Redshift, AMD ProRender and more.”
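For a sense of what a USD-based scene graph looks like outside any one application, here is a minimal sketch using Pixar's open-source pxr Python bindings. The file name, prim paths and referenced asset path are arbitrary examples, and this is generic USD authoring rather than Solaris-specific code.

```python
# Minimal sketch of a USD scene graph built with Pixar's pxr Python API.
# The prim paths and file names are placeholders; Solaris and other tools
# author and consume the same kind of layered USD data.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("shot_layout.usda")

# A simple hierarchy: a world transform containing one referenced asset slot.
world = UsdGeom.Xform.Define(stage, "/World")
asset = UsdGeom.Xform.Define(stage, "/World/HeroProp")

# Reference an externally authored asset layer (path is a placeholder).
asset.GetPrim().GetReferences().AddReference("assets/hero_prop.usd")

# Write the layer to disk; a Hydra render delegate (Karma, RenderMan,
# Arnold, Redshift, etc.) can then image this same scene graph.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```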

“The professional video workflow crosses many apps,” states Victoria Nece, Senior Product Manager, Motion Graphics and Visual Effects at Adobe. “Depending on someone’s role they are probably using half of Creative Cloud, but it’s a different half than a person sitting next to them. We look at the professional tools as working together as building blocks of that pipeline. But there are also new tools that make things simpler for less experienced users. Premiere Rush is a space where we see there’s an opportunity for more emerging filmmakers, like social video creators, to do it all in one. We tier things to the audience and how much precision and control they need. Someone who is doing more advanced visual effects will want to dive into After Effects where they have every button and knob to dial in the final results.” 

Autodesk has placed an emphasis on producing more procedural tools. “The best example is the Bifrost Graph in Maya,” states Fischler. “Think of it as a visual programming environment. Teams or individual artists can build custom tools that can then be scaled up to entire teams. You can think of Bifrost as an effects or rigging tool, and different solvers can easily be integrated into it. Bifrost is currently part of Maya, but we’re also exploring bringing it into other tools as well. Being a plug-in, the Bifrost development team can iterate independently of the Maya teams. From a software development standpoint, plug-in architecture is smart software design because it’s modular. That allows you to build in such a way that there are fewer dependencies on other parts of the software.

“Maya has been around for years, but the code base is constantly evolving, and the team over the last few years has made a big push to make it more modular under the hood. Some of the things that you’ve seen recently are animation caching features, which have allowed for huge performance gains.

“The Arnold renderer has a plug-in for Maya, which means when you’re using Maya it doesn’t feel like you’re in a different environment.”

“In recent years, there has been a push toward helping artists work faster and reduce the often manual work needed to create precise invisible effects,” notes Anzelmo. “One example of this in Nuke is the development of the Smart Vector toolset, which uses a unique type of motion vector to automate the warping of paint and textures over several frames, dramatically speeding up the process when working with organic surfaces like fabrics or faces. Since their release, the tools in Nuke that use Smart Vectors have grown to include the Grid Warp Tracker in Nuke 12 and take advantage of GPU acceleration to generate the Smart Vectors on the fly without a pre-rendering step. Mari has also expanded its toolset for painting digital assets from photographs, making it easier to populate virtual production environments with high-fidelity content.” 
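As a rough illustration of how that toolset is wired together from a script, the sketch below uses Nuke's Python API to feed a plate into a SmartVector node and warp a cleanup patch through a VectorDistort node. The node class names match the toolset described above, but the file paths are placeholders and the knob names and input ordering are assumptions to verify in your NukeX build.

```python
# Minimal sketch of a Smart Vector setup driven from Nuke's Python API
# (NukeX). Paths are placeholders; knob names and input order are
# assumptions to confirm against your version.
import nuke

plate = nuke.createNode("Read")
plate["file"].setValue("plates/shot010_plate.####.exr")  # placeholder path

# Generate motion vectors for the plate.
smart_vec = nuke.createNode("SmartVector")
smart_vec.setInput(0, plate)

# Warp a paint/cleanup patch through time using those vectors.
paint_fix = nuke.createNode("Read")
paint_fix["file"].setValue("elements/cleanup_patch.exr")  # placeholder path

distort = nuke.createNode("VectorDistort")
distort.setInput(0, paint_fix)   # image to warp (assumed input order)
distort.setInput(1, smart_vec)   # smart vectors (assumed input order)
```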

SideFX has developed a series of solvers to help with the creation of cloth, fire and soft body objects. “Cloth continues to be a major focal point of the Vellum solver, with key improvements to mass/scale/topology invariance, collision robustness and recovery from tangling,” states Barghiel. “Velocity blending and relative motion controls offer more stability during high-speed scenarios. Vellum also has new sliding constraints to create unique effects. The Houdini Content Library now includes a collection of eight different fabrics with unique physical behavior and shaders per fabric. These include silk, velvet, wool, leather, jersey, raincoat, tulle with embroidery and jeans. With the addition of a new Sparse solver, artists can now create more impressive fire and smoke shots with detail where they need it. With the solve only taking place in the active parts of the simulation, processing time is cut into a fraction of what was required previously. Whether artists are flattening dough, squeezing gummy bears or adding jiggle to a seal, Houdini lets them choose between the Vellum solver for speed or the Finite Elements [FEM] solver for accuracy. FEM now offers a fast, accurate and stable global-nonlinear solver, a fully-symmetric solver with built-in Neo-Hookean material model and robust recovery from partially penetrating tetrahedral meshes. FEM also has significantly improved handling of fast-moving objects.”
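For a sense of how those solvers are assembled procedurally, here is a minimal Houdini Python sketch that drops a grid into a simple Vellum cloth network. The node type names and wiring follow standard SOP conventions but should be treated as assumptions to verify in your Houdini version.

```python
# Minimal sketch: building a simple Vellum cloth network through Houdini's
# hou Python module. Node type names (grid, vellumconstraints, vellumsolver)
# and the solver wiring are assumptions based on standard SOP naming.
import hou

obj = hou.node("/obj")
geo = obj.createNode("geo", "vellum_cloth_demo")

# Source geometry to simulate.
grid = geo.createNode("grid", "cloth_patch")

# Tag the grid as cloth, then simulate it with the Vellum solver.
constraints = geo.createNode("vellumconstraints", "cloth_setup")
constraints.setInput(0, grid)

solver = geo.createNode("vellumsolver", "cloth_sim")
solver.setInput(0, constraints)        # simulated geometry
solver.setInput(1, constraints, 1)     # constraint geometry (assumed wiring)

solver.setDisplayFlag(True)
solver.setRenderFlag(True)
```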

“A big part of traditional visual effects, whether or not they’re invisible, is rendering,” observes Morin. “It has historically taken a long time for computers to process the algorithms that define what makes up a visual effect. You would have to program your visuals and then let them render frame-by-frame, usually overnight. As part of this process, visual effects artists developed algorithms to represent simple things from rocks to complex things like realistic water, fur and human hair and skin. As computing power has become faster and more accessible, those algorithms can run faster and faster. The video game industry has contributed a lot to that speed shift as video games have always required real-time graphics and visual effects image processing. The game industry made a massive investment in making those algorithms work in 1/30th of a second, which is the required speed for continuous gameplay. Today we’re benefiting from that effort in the visual effects industry with tools like Unreal Engine, which was developed for games, but is now available to filmmakers and content creators to do part or all of their work in real-time.
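The arithmetic behind that 1/30th-of-a-second figure is worth spelling out. The quick sketch below compares per-frame time budgets at common frame rates with a hypothetical overnight-style offline render; the four-hour figure is an illustrative assumption, not a quoted statistic.

```python
# Back-of-the-envelope frame budgets: what "real-time" demands of an engine
# compared with an offline render. The four-hour figure is an illustrative
# assumption, not a number quoted in the article.
def frame_budget_ms(fps: float) -> float:
    """Time available to produce one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 30, 60):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):6.2f} ms per frame")

# A hypothetical four-hour render of a single frame, for contrast.
offline_seconds = 4 * 60 * 60
budget_seconds = 1 / 30
print(f"that hypothetical frame is {offline_seconds / budget_seconds:,.0f}x over a 30 fps budget")
```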

Bifrost Graph is a node-based, visual programming environment that enables users to construct procedural graphs to create effects such as sand, fire, smoke and explosions.

Using the Soft Selection in Nuke.
An example of the Grid Warp Tracker in Nuke.

Volumetric clouds created by using Bifrost Graph.

Absolute Post creates a Houdini water simulation for Outlander.
 

“This has also benefited creative flow across teams as production designers can build sets in Unreal Engine and achieve photoreal results right there in the art department. We’ve also seen artists like production designer Andrew Jones use these techniques to great effect on The Mandalorian and creature creators like Aaron Sims use these virtual production tools for character development as well.”

“One of the biggest things that we’ve seen that is transforming the industry is machine learning,” observes Nece. “What was incredibly painful to do by hand and could take days or even weeks to get right, you can do in minutes because of machine learning. We’re particularly excited about Roto Brush 2 in After Effects, which shipped last fall. That uses Adobe Sensei, which is our machine learning technology, to generate a matte over multiple frames. You could select a person or object in your scene and it will track it from frame to frame with incredible precision. It’s introducing the ability to do roto in places where it wasn’t possible before because of a deadline or budget. You see that speedup and that speaks to the other piece of this, which is the democratization of it. You can do this kind of work even if you’re working on your own. You can have a car go through the background, or power lines, or someone in a shot; that’s a space where the visual effects are even more invisible because no one assumes that they’re even doing visual effects in the first place.”

“Content-Aware Fill originated in Photoshop and now we’ve brought it into After Effects,” remarks Nece. “That lets you take a piece of your image and either fill it in with another piece of your image or, in the case of After Effects, fill it in with something from a different point in time. A colleague calls them ‘time-traveling pixels.’ You have a car go through the background in your shot and it’s a distraction. You want to remove it. You can mask out that car loosely and fill in that hole. You can do it either automatically, and it will guess what belongs there based on the other frames in the shot, or you can give it more information. We always want to make sure it’s not a black box and that these advanced algorithms are things that you can control, art direct, adjust and improve on so the hand of the artist is still there. For instance, you might need to paint in a piece of road, and it will figure out how to move that from frame to frame. And we just put lighting correction in that space. If you had a shot that changed over time in brightness, maybe the sun came out or you had a reflection that shifted, it can now compensate for that. It takes pixels from other points in time to figure out what to put in the space that you’re trying to fill. I look at machine learning as the assistant you wish you could hire rather than the thing that is going to replace you. We don’t want to replace people with robots.”
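Adobe's Sensei internals are not described in this article, but the underlying idea of time-traveling pixels plus brightness compensation can be sketched generically. The NumPy example below is a toy stand-in for that idea, a per-pixel temporal median fill followed by a gain match, and is not Content-Aware Fill itself.

```python
# Toy illustration of "time-traveling pixels": fill a masked region of one
# frame using the temporal median of neighbouring frames, then gain-match
# brightness measured outside the mask. A generic stand-in for the idea
# described above, not Adobe's Content-Aware Fill algorithm.
import numpy as np

def temporal_fill(frames: np.ndarray, target_idx: int, mask: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) float images in [0, 1]; mask: (H, W) bool, True where filling."""
    target = frames[target_idx].copy()
    neighbours = np.delete(frames, target_idx, axis=0)

    # Candidate fill: per-pixel median across the other frames in the shot.
    fill = np.median(neighbours, axis=0)

    # Lighting correction: match mean brightness in the unmasked region so a
    # shifting sun or reflection does not leave a visible seam.
    gain = target[~mask].mean() / max(fill[~mask].mean(), 1e-6)
    target[mask] = np.clip(fill[mask] * gain, 0.0, 1.0)
    return target
```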

“The Foundry approach is to explore machine learning as a means to accelerate repeat image processing tasks, such as up-resing footage, and to assist artists rather than removing the artist entirely,” explains Anzelmo. “We have seen some cases where ML can return lost detail or correct an image that would otherwise have been unusable, which in itself is exciting!

“Today, artists are used to applying an effect and then tweaking its parameters until the effect does exactly what they want. With machine learning, the quality of the result depends highly on how the model is trained, and there isn’t the same ability to tweak the model. Foundry’s research efforts led to the creation of the ML-Server, an open-source client-server tool for training machine learning models directly in Nuke. Based on learnings from the ML-Server, we have some other exciting projects in the works.”

“There have already been examples of AI and machine learning being used in production with Houdini such as Spider-Man: Into the Spider-Verse,” notes Barghiel. “While we don’t see AI and machine learning as widely adopted in the standard media pipeline today, it may someday be a critical part of the creative pipeline. For now, we see the greatest gains being made by using art-directable procedural workflows in conjunction with pipeline automation, to provide artists with powerful tools they can use to create iteratively, leading to the best results from everyone’s time and resource investment.”

The precision of the machine learning is dependent on the quality of the data set being provided. “As we release new versions of Flame, the sophistication and the quality of the solution will go up because they continue to train it on larger and different data sets. That’s one of the cool things about it; it’s not static,” states Fischler. “The Flame team is focused on trying to solve specific problems because those are trainable things, like the sky replacement we came out with in the last version, and face tracking. It’s when you get into the more open-ended challenges that things get tougher. Stay tuned, because there are going to be some more Flame announcements with additional machine learning tools.”

Not all of the innovation is associated with post-production, as the Cinefade system enabled David Fincher and cinematographer Erik Messerschmidt to play with the variable depth-of-field effect on several occasions during principal photography for Mank. “The Cinefade system consists of a variable ND filter that is synced to the iris motor of the camera and controlled via a cmotion cPro lens control system,” explains Oliver Janesh Christiansen, inventor of Cinefade. “The operator varies iris diameter to affect depth of field and the VariND automatically compensates for the change in light transmission, keeping exposure constant. The effect is achieved completely in-camera, giving filmmakers complete control and enabling a novel form of cinematic expression. It allows cinematographers to seamlessly transition between a deep and a shallow depth of field in one shot, resulting in a unique in-camera effect in which the foreground remains sharp while the background gradually becomes blurry, isolating the character and drawing the viewer’s attention. Cinefade VariND, which is controlled remotely, can also be used as a practical exposure tool to hide the transition between an interior and exterior location.” 
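The exposure bookkeeping behind that compensation is easy to sketch: opening the iris by N stops admits 2^N times the light, so the VariND must add roughly N × 0.3 of optical density to hold exposure constant. The short calculation below is illustrative; the T-stop values are example numbers, not settings quoted from Mank.

```python
# Illustrative arithmetic for a Cinefade-style exposure trade: how much ND
# density must be added when the iris opens so that exposure stays constant.
# T-stop values are examples, not settings quoted from any shoot.
import math

DENSITY_PER_STOP = math.log10(2)  # ~0.30 optical density blocks one stop

def nd_compensation(t_closed: float, t_open: float) -> tuple[float, float]:
    """Stops of extra light gained by opening from t_closed to t_open,
    and the ND density needed to cancel it."""
    stops = 2 * math.log2(t_closed / t_open)
    return stops, stops * DENSITY_PER_STOP

stops, density = nd_compensation(t_closed=8.0, t_open=2.0)
print(f"T8 -> T2 opens up {stops:.1f} stops; needs ~ND {density:.1f} to compensate")
# T8 -> T2 is 4 stops, i.e. roughly an ND 1.2 swing on the variable ND filter.
```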

Unreal Engine was utilized to provide previsualization for John Wick: Chapter 3 – Parabellum.

Cinefade allows cinematographers to gradually transition between a deep and a shallow depth of field in one shot at constant exposure.
