VFX Voice



June 8, 2021
Summer 2021 Issue

INVISIBLE EFFECTS: THE ESSENTIAL CRAFT OF HOW NOT TO DRAW ATTENTION

By TREVOR HOGG

The forest is replaced by a mountain range and the fortress walls digitally extended by Image Engine in Mulan. (Images courtesy of Image Engine)

“The photogram technology has evolved to the point that it is solid. [Data wrangler] Rebecca [McKee] blew people’s minds on set by showing them the models of the rooms that we were standing in. It’s a newer tool that has helped with visible and invisible effects. If we need to get rid of the picture on the wall it can be replaced because we have a model of it.”

—Lou Pecora, Visual Effects Supervisor, Zoic Studios

In the past, invisible effects were associated with rig removals, set extensions and environmental clean-up. However, advances in photorealism and drone photography have expanded the scope to include digital doubles and what used to be considered physically impossible shots. The ultimate goal is for audience members to get lost in the storytelling and the world-building rather than notice the work of digital artists. Invisible effects are not going to disappear, as CGI has become an accepted part of cinematic language even for independent productions and television sitcoms. Veterans of film and television projects share various insights into the past, present and future of what remains the bread-and-butter work of the visual effects industry.

 Lou Pecora, Visual Effects Supervisor, Zoic Studios 

“One of the things that I’ve come to rely on are my HDR acquisition tools. Except for specialized cases I don’t hold up production anymore to go out with shiny and grey balls, and a color chart. My camera assistants have the color charts and pop them into the camera at the beginning. Occasionally, I can line up color based on skin tone and other things. I use the Ricoh Theta to do all of my HDRs. I can set my brackets, walk out, everybody clears, and I’m there for a minute at most capturing the full scene and all of the lights in it. If there are light patterns I have them turn the lights on and off. You’re not going to LiDAR every set because with a TV schedule and budget you don’t have that luxury. 

“On Legion, I always knew that we were going to have  surprises in post, so my incredible on-set data wrangler, Rebecca McKee, would walk around and photogram everything. She had a high-powered laptop and used RealityCapture to build models of every set and location. We definitely used them. The photogram technology has evolved to the point that it is solid. Rebecca blew people’s minds on set by showing them the models of the rooms that we were standing in. It’s a newer tool that has helped with visible and invisible effects. If we need to get rid of the picture on the wall it can be replaced because we have a model of it.” 
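
For readers curious about the mechanics, the HDR acquisition Pecora describes (bracketed exposures merged into one high-dynamic-range image for lighting) can be sketched with OpenCV's standard Debevec tools. This is a minimal illustration; the filenames and exposure times below are assumptions, not production data.

```python
import cv2
import numpy as np

# Hypothetical bracketed exposures of the same scene (filenames are illustrative).
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
exposure_times = np.array([1/250.0, 1/60.0, 1/15.0], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the brackets into a
# single linear-radiance HDR image (Debevec & Malik's method).
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Write a Radiance .hdr file usable as an image-based lighting source.
cv2.imwrite("set_lighting.hdr", hdr)
```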

A series of shots needed to be stitched together by MPC as if done in one continuous take for 1917. (Images courtesy of MPC)

World War I gets recreated by Pixomondo for the HBO series Perry Mason. (Images courtesy of Pixomondo)

It was important for MR. X to recreate Mexico City from the memories of filmmaker Alfonso Cuarón, rather than be entirely period-accurate, in Roma. (Images courtesy of MR. X)

“Digital doubles are invisible work like Rachel [in Blade Runner 2049], stage actors who don’t exist anymore or in this time period. When we do CG animals – because they can’t be filmed the way we want them, such as a mouse, and you’re respecting what they do for movements – that’s invisible as well. It’s all in the eye of the beholder.”

—Arundi Asregadoo, Visual Effects Supervisor, MPC Film

Arundi Asregadoo, Visual Effects Supervisor, MPC Film

“When looking at the breakdowns for 1917 and The Revenant, you realize how much work has gone into the background in creating those stitches. The process is unassuming sometimes. The most important thing in the opening sequence of The Revenant, when they’re running through the forest, was to make sure that we weren’t using the same language and devices that would tell the audience we were cutting from one take to the next by a wipe of a tree or a horse running to the front. It was simple things like the arrows, which you wouldn’t think about as an actor within that moment or beat. Digital doubles are invisible work like Rachel [in Blade Runner 2049], stage actors who don’t exist anymore or in this time period. When we do CG animals – because they can’t be filmed the way we want them, such as a mouse, and you’re respecting what they do for movements – that’s invisible as well. It’s all in the eye of the beholder.”

Michael Shelton, Visual Effects Supervisor, Pixomondo 

“What drives the tools that I pick for a project is the amount of time that we have to complete the work. There are some things being done right now that are interesting but not quite mature enough. There is content-aware removal of objects. Adobe has been doing some machine-learning-based things that you can find in Photoshop, and even in After Effects right now. Content-Aware Fill can analyze footage, try to remove a moving object and rebuild what was behind it. A tool like that is exciting because it opens up a lot of possibilities for automation in that area, which is helpful to our process. As with anything that is cutting edge, it works well, but under controlled circumstances. There are people pushing that technology to get it to a place where we can consider it part of our toolset.”
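
In its simplest single-frame form, the kind of object removal Shelton describes can be approximated with OpenCV's classical inpainting. Below is a minimal sketch, assuming a plate frame and a hand-painted mask on disk (both filenames are hypothetical); Adobe's machine-learning tools go further by also borrowing detail from neighboring frames.

```python
import cv2

# Hypothetical inputs: one plate frame, plus a white-on-black mask
# painted over the object to remove (both filenames are illustrative).
frame = cv2.imread("plate_frame.png")
mask = cv2.imread("removal_mask.png", cv2.IMREAD_GRAYSCALE)

# Classical diffusion-based inpainting (Telea's method): fills the
# masked region from surrounding pixels in this frame only.
clean = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)

cv2.imwrite("plate_frame_clean.png", clean)
```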

Carsten Kolve, Facility Digital Supervisor, Image Engine 

“The hardest part of creating invisible effects is adding all the complex imperfections that exist in the real world. If we are unable to acquire this detail as reference right on set or via dedicated shooting/scanning sessions, we rely in the first instance on the skill of our highly specialized artists to make up for that and fill in the missing data with their artistic ability. Primarily due to their experience and ability to adapt related reference imagery, we are able to fill in the blanks where reference data could not be gathered, or we need to create something completely new that still feels true to the real world. 

“While a lot of technological advances have been made in all areas of acquiring reference data and, subsequently, in shot pipelines, we should never forget that our work is primarily guided by artists working closely together with technicians in every discipline, who make creative decisions on how to achieve the vision for an effect given all the source data and tools we have at our disposal. VFX is a craft that can’t be reduced to a simple ‘one-button’ tech commodity; instead it’s artists and technologists working together, creating imagery in support of a story that hopefully touches the audience on an emotional level. Working at this intersection of disciplines is what makes this field so exciting, even if the ultimate validation for your work is to not be noticed.”

Rob Price, Visual Effects Supervisor, Zoic Studios

“It’s been a lot of improving on what’s already existed, such as the fidelity and detail that you can get out of LiDAR and photogrammetry. Photogrammetry takes a lot of processing power, so with refinements in computing, like the cloud, and just having more resources to crunch the numbers, it continually gets a lot better for us.

“In the mid-1990s, when doing a set extension, people would say that the only way possible was with a locked-off camera and a single matte painting. But filmmakers want their big moving cameras, so you need more 3D environments that look photoreal. The set extensions need to be at a higher level of quality and believability, so we have to up our game on how we create them.

“More and more, you need a camera in an environment to help every other department. Even the 2D work is made infinitely easier if we have a digital camera that matches the practical one. A lot of what we do with LiDAR scanning isn’t always to recreate 3D objects. On The Haunting of Bly Manor, all of the interiors were scanned for a dual purpose. We needed to see through the windows from the outside. Also, if we have the geometry for the exact interior, it becomes a lot quicker for that camera track to solve because you have the entire room.”
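
The speed-up Price describes has a concrete core: once the scene geometry is known, solving the camera for a frame reduces to the classic Perspective-n-Point problem rather than a full structure-from-motion reconstruction. A minimal sketch follows, with entirely hypothetical geometry, intrinsics and pose.

```python
import cv2
import numpy as np

# Hypothetical 3D points from a LiDAR scan of a room interior (meters).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [2.0, 1.5, 0.0],
    [0.0, 1.5, 0.0],
    [1.0, 0.75, -3.0],
    [0.5, 0.2, -1.0],
], dtype=np.float64)

# Assumed pinhole intrinsics for an HD plate (illustrative values).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

# Simulate a "tracked" frame by projecting through a known camera pose...
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([-0.8, -0.5, 6.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# ...then recover that pose. With known scene geometry, the per-frame
# camera solve is a direct Perspective-n-Point problem.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print("recovered rotation:", rvec.ravel())     # matches rvec_true
print("recovered translation:", tvec.ravel())  # matches tvec_true
```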

Jeff Campbell, Visual Effects Supervisor/Partner, SPINVFX 

“The finishing resolution has gone up and I hope we have plateaued at 4K. I can’t see any reason to go any higher unless TVs become the size of your living room wall, but they likely will. 

“Now we have physically-based rendering (PBR) pipelines that render images based on the flow of light in the real world. Much of what makes a physically-based pipeline different is a more detailed approach to the behavior of light and surfaces. Working physically-based means that all you have to do is put lights where they’re supposed to be, use the correct materials and you’ll get the real result.

“You have to understand the strengths and weaknesses of what a computer can do. Computers are great at rendering solid objects. Animal fur, hair, cloth and skin have come a long way and can look great. Humans are difficult unless they are further away from camera, like in crowds. Simulated water and pyro look great now too.

“The methods have generally stayed the same – highly detailed assets, photorealistic lighting and real-world references – but the tools have evolved to give us much better and faster results.”
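
The “flow of light in the real world” that Campbell’s physically-based pipeline models is concrete math: a point light’s power spreads over a sphere, so the light arriving at a surface falls off with the squared distance, and an energy-conserving Lambertian surface reflects at most albedo/pi of it. Here is a minimal sketch with illustrative values.

```python
import numpy as np

def lambert_point_light(point, normal, albedo, light_pos, light_watts):
    """Direct lighting from a point light: inverse-square falloff
    plus an energy-conserving Lambertian BRDF."""
    to_light = light_pos - point
    dist2 = np.dot(to_light, to_light)
    wi = to_light / np.sqrt(dist2)

    # Irradiance at the surface: power spread over a sphere,
    # foreshortened by the cosine of the incidence angle.
    cos_theta = max(np.dot(normal, wi), 0.0)
    irradiance = light_watts / (4.0 * np.pi * dist2) * cos_theta

    # The Lambertian BRDF is albedo/pi, keeping reflected energy
    # no greater than what arrives.
    return (albedo / np.pi) * irradiance

# Doubling the light's distance quarters the result, as in the real world.
p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
print(lambert_point_light(p, n, 0.18, np.array([0.0, 2.0, 0.0]), 100.0))
print(lambert_point_light(p, n, 0.18, np.array([0.0, 4.0, 0.0]), 100.0))
```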

“It’s been a lot of improving on what’s already existed, such as the fidelity and detail that you can get out of LiDAR and photogrammetry. Photogrammetry takes a lot of processing power, so with refinements in computing, like the cloud, and just having more resources to crunch the numbers, it continually gets a lot better for us.”

—Rob Price, Visual Effects Supervisor, Zoic Studios 

The hair of Jean Grey (Sophie Turner) was art directed by Soho VFX for the outer space scene in Dark Phoenix. (Images courtesy of Soho VFX)

Shattered glass is created digitally in Extraction by SPINVFX. (Images courtesy of SPINVFX)

A ring of fire is augmented to make it appear more dangerous in Episode 404 of Fargo. (Images courtesy of Zoic Studios)

Hybride makes use of practical camera angles to make their CG work on The Mandalorian believable and grounded. (Images courtesy of Hybride)

“This [shot in Roma] wasn’t meant to be an exact replication, but how Alfonso Cuarón remembered it. While the archival photos were helpful, we still had to aesthetically tweak after that to get the exact look he was going for.”

—Aaron Weintraub, Co-founder, MR. X

Joseph Kasparian, Visual Effects Supervisor, Hybride 

“Some of the great tools used in past projects are connected to RTX graphics cards. In Solo, Han Solo is walking with Lando Calrissian, and we’re seeing the Millennium Falcon for the first time. That whole shot was done in real-time on our side. What happened is we built and rendered that shot in Houdini with the basic lights and the environment with global illumination. After that we took all of the elements of that shot and exported them into Flame. We had a machine with two RTX cards in it. We didn’t have real-time ray tracing on it, but we could do look development in real-time. We could place lights and smoke in real-time. We could play with the atmosphere in real-time. In the end, we did that shot with a real-time renderer and completely redesigned the look development using a real-time engine. It was so fast to iterate. You could create 10 different moods in a day because everything was real-time. With the RTX support that is now part of Unreal Engine, you can put ray tracing in. Everything in the real-time world is going to change because of that graphics card.”

Berj Bannayan, Co-founder, Soho VFX 

“The digital realm of visual effects is basically the same as it was 25 years ago. We can do more with the resources that we have because the computers are faster and the algorithms are better. When we were working on The Incredible Hulk in New York, one LiDAR scan would cover a 25- to 45-degree cone, and that one little section would take 15 minutes. Then you had to move the camera and do it again. Now we have a LiDAR scanner that can do 360 degrees in two minutes. We can be a lot more confident now in our ability to do something. A lot of our industry is evolutionary rather than revolutionary. It’s a steady march towards more realistic visual effects. Even with the same tools we can do more because we have learned more about what we do. The toolmakers have also learned more. I lead our software team on lighting and rendering. Having those photorealistic and physically-based materials, cameras and lights makes the visual effects better, and integrates them into the photographic elements so that they really become invisible.”

Pete Jopling, Executive Visual Effects Supervisor, MPC Episodic 

“If you get your viewer to accept the world you build then that reality is your new reality. But then you could look at it in a binary way with a show like Chernobyl. It was invisible effects, and that typically involved more 2D disciplines such as environment work and disguising where the joins are. You don’t want people to question whether visual effects have been used here. However, something like a giant explosion of a nuclear reactor obviously had to be recreated. How do you make it something that you believe is happening right here, right now? That’s a journey you have to take your viewer on. I remember the director telling us that the smokestacks were another character in the story. It played an important part for him in reminding the viewer that this terrible thing was happening no matter how peripheral it might be. The smokestacks are there fairly constantly throughout a lot of the episodes; it creates a tension throughout the whole series. The explosion itself isn’t quite devastating to you, but everything else that follows is.” 

Extensive environmental extensions were done by Hybride for Episode 208 of Tom Clancy’s Jack Ryan. (Images courtesy of Hybride)

“I remember the director telling us that the smokestacks [in Chernobyl] were another character in the story. It played an important part for him in reminding the viewer that this terrible thing was happening no matter how peripheral it might be. The smokestacks are there fairly constantly throughout a lot of the episodes; it creates a tension throughout the whole series. The explosion itself isn’t quite devastating to you, but everything else that follows is.” 

—Pete Jopling, Executive Visual Effects Supervisor, MPC Episodic 

Aaron Weintraub, Co-founder, MR. X 

“In Roma, we had to reconstruct Mexico City in the 1970s. There is a 1,000-frame-long dolly shot of the main character walking across a main thoroughfare, and there’s a movie theater and neon signs. It’s all digital. Most of the foreground elements were built practically, and then a lot of that was replaced, extended, built up and built off into the background. There was a small bluescreen in the background that helped a little bit. There was a lot of roto and neon reflecting in the puddles on the ground that had to be replaced. They had a practical streetcar, but it wasn’t exactly the period one. It was painted blue for blocking and was then replaced with the period-built, full-CG streetcar. This wasn’t meant to be an exact replication, but how Alfonso Cuarón remembered it. While the archival photos were helpful, we still had to aesthetically tweak after that to get the exact look he was going for.” 

“Nowadays, you can pretty much do anything, but a big part of me still enjoys doing something that nobody expects was treated, thinking that it existed in camera on the day it was shot.” 

—Steve Ramone, Visual Effects Supervisor, SPINVFX 

Steve Ramone, Visual Effects Supervisor, SPINVFX 

“Good compositing tools, plate stabilization, color correction, multiple choices of keyers and good grain tools [are important in achieving invisible effects]. A tool that allows the user to create gizmos or subtools, like Nuke, is where invisible effects can shine. The biggest improvements were years ago, when everything went from Flames and Infernos down to desktop computers and made access easier for everyone. People want to work on big shots, so the need to evolve these tools isn’t as important as, say, a better renderer or other big tools. I come from a time when trying to get one past the audience was the reward because the tech wasn’t as powerful as it is today, and VFX was highly critiqued for how poor the work looked, mostly because the creative overstepped the limitations of the tech. Nowadays, you can pretty much do anything, but a big part of me still enjoys doing something that nobody expects was treated, thinking that it existed in camera on the day it was shot.”
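
The gizmos and subtools Ramone credits are typically built by scripting Nuke’s node graph in Python. Below is a minimal sketch of a cleanup-then-regrain chain meant for Nuke’s Script Editor; the node and knob names are standard Nuke, while the file path, values and wiring are illustrative assumptions.

```python
# Runs inside Nuke's Script Editor (the nuke module only exists there).
import nuke

# Read the plate; the path is a hypothetical example.
read = nuke.createNode("Read")
read["file"].setValue("plate.%04d.exr")

# Stand-in cleanup step: a light blur over the repair area.
denoise = nuke.createNode("Blur")
denoise["size"].setValue(2)
denoise.setInput(0, read)

# Re-grain the result so the fix matches the plate and stays invisible.
# (Node class names can vary between Nuke versions.)
regrain = nuke.createNode("Grain")
regrain.setInput(0, denoise)
```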

