VFX Voice



Web Exclusive
November 29, 2022

WEIRD: THE AL YANKOVIC STORY GETS HEADY ABOUT DEEPFAKES

By TREVOR HOGG

Images courtesy of VFX Los Angeles

Daniel Radcliffe’s face was inserted into the actual “Eat It” music video performance footage of Yankovic via a deepfake face replacement onto the real body of Weird Al. VFX LA provided the effects.

Getting some serious visual effects work from VFX LA and Wolverine VFX is the irreverent biopic Weird: The Al Yankovic Story, directed by Eric Appel and starring Daniel Radcliffe as the real-life accordionist who became famous in the 1980s for his song parodies that include “Like a Surgeon,” “Eat It” and “My Bologna.” The team at VFX LA consisted of a trio of visual effects supervisors – Charles H. Joslain, Izzy Traub and Joseph Sperber – who were responsible for 45 shots, the most challenging being the concert scenes, where there was no actual stadium, and a deepfake face replacement of Radcliffe onto the real body of Yankovic for another sequence.

Daniel Radcliffe stars as the real-life accordionist who became famous in the 1980s for his hit parodies “Like a Surgeon,” “Eat It” and “My Bologna.” Rainn Wilson, right, plays Dr. Demento.

Post-production lasted two months starting in June 2022, and the world premiere occurred in September 2022 as part of the 47th Toronto International Film Festival. “We had about 15 people on our team, and there were certain aspects that we thought were going to be easier and they weren’t,” Traub states. “We had to ask for a week extension, and they had built some time contingencies into their own schedule, so they were able to give it to us.” Visual effects have become a more affordable tool for independent productions. “What can be done now could not have been done five years ago, especially when you take into account having the majority of everything cloud-based. Since the pandemic, an entire [stretch of] time has gone offline, which in turn has enabled us to offer our services at a much better price than we normally could, simply because you’re not paying the overhead that goes along with having a big studio.”

“That was part of the comedy of [the ‘Eat It’ music video]. They wanted the face to be manipulated and for the body itself to be straight-up Weird Al. We trained our model of Daniel [Radcliffe’s] face on the footage of Weird Al and rotoscoped, cleaned everything up, did a bunch of other compositing stuff to it, and added the glasses back in.”

—Izzy Traub, Visual Effects Supervisor

Pre-training footage of Daniel Radcliffe’s face.

Radcliffe’s face inserted onto the original 16mm celluloid footage, with 3D glasses added afterwards.

Numerous references were provided by the director. “The name of the game was to get first drafts in front of Eric as quickly as possible,” Traub notes. “The best examples of look development were the CG stadium and crowd simulations we did. The first draft was, ‘Here’s the framing, layout and camera position,’ then get that signed off, and from there develop the look further with colors, lighting, textures and animation. Eric was thorough with what he wanted, so we had a good idea of the placement of everything. His feedback was extremely concise. We used one of the shots for look development and went through four different revisions, which is not a lot at all. From there, we replicated that look across the other shots and might have gone through one or two revisions.” The aesthetic was determined by the footage rather than by studying previous projects by Appel. “We watched the film beforehand, looked at the lighting, and they had a LUT that was given to us.”

“Normally, we would have had access to Daniel [Radcliffe], had him look into a camera for 20 minutes and do facial expressions, but we had to create the [AI] training set ourselves from the footage in the film. Things were relatively straightforward until we composited it and saw that Daniel’s glasses aren’t working well because he’s turning his head and there’s parallax between the glasses and his actual face. We brought any tracking we could into Maya, lined up the glasses, and then manually hand-tracked those glasses to Daniel’s face. We added additional reflections to the glasses to cover up some of the cleanup work there.”

—Izzy Traub, Visual Effects Supervisor

Special attention had to be paid to the hair of the CG crowd because of concert lighting.

No body alterations were done in CG for the deepfake shots, which saw Radcliffe’s face inserted into the actual “Eat It” music video performance footage of Yankovic. “That was part of the comedy of it,” Traub remarks. “They wanted the face to be manipulated and for the body itself to be straight-up Weird Al. We trained our model of Daniel’s face on the footage of Weird Al and rotoscoped, cleaned everything up, did a bunch of other compositing stuff to it, and added the glasses back in.” A data set had to be created to train the AI. Comments Traub, “Normally, we would have had access to Daniel, and had him look into a camera for 20 minutes and do facial expressions, but we had to create the training set ourselves from the footage in the film. Things were relatively straightforward until we composited it and saw that Daniel’s glasses aren’t working well because he’s turning his head and there’s parallax between the glasses and his actual face. We brought any tracking we could into Maya, lined up the glasses, and then manually hand-tracked those glasses to Daniel’s face. We added additional reflections to the glasses to cover up some of the cleanup work there.”
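Building a training set from the film’s own footage, as Traub describes, essentially means harvesting face crops from scanned plates instead of a dedicated capture session. Below is a minimal, illustrative Python sketch of that kind of harvesting step using OpenCV’s stock face detector; the file names, frame sampling rate and crop size are assumptions for the example, not VFX LA’s actual pipeline.

```python
# Illustrative sketch: harvest face crops from scanned footage to build a
# deepfake training set when no dedicated capture session is available.
# Paths, sampling rate and crop size are hypothetical, not production values.
import os
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def harvest_faces(video_path: str, out_dir: str, every_nth: int = 5) -> int:
    """Detect and save face crops from every Nth frame of the source plate."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    frame_idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_nth == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in CASCADE.detectMultiScale(gray, 1.1, 5):
                crop = cv2.resize(frame[y:y + h, x:x + w], (256, 256))
                cv2.imwrite(os.path.join(out_dir, f"face_{saved:06d}.png"), crop)
                saved += 1
        frame_idx += 1
    cap.release()
    return saved

# Example usage with a hypothetical scan of the performance footage:
# harvest_faces("eat_it_scan.mov", "training_set/source_faces")
```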

“You can see in the deepfake, it struggles with realistic eyes… Depending on the footage, movement, lighting and training data, the neural network will respond differently. You’re doing trial and error. In this particular case, the eyes looked quite big and were difficult to truly clean up. The 3D lenses [on the glasses] helped to integrate the face with our shot.”

—Izzy Traub, Visual Effects Supervisor

Side-by-side comparisons of the original 16mm celluloid footage and the final deepfake versions.

The visual aesthetic was determined by the footage of the movie rather than studying previous projects by director Eric Appel.

One thing that deepfakes do not do well is eyeballs. “You can see in the deepfake, it struggles with realistic eyes, and this is our third deepfake project in 2022,” Traub observes. “Depending on the footage, movement, lighting and training data, the neural network will respond differently. You’re doing trial and error. In this particular case, the eyes looked quite big and were difficult to truly clean up. The 3D lenses [on the glasses] helped to integrate the face with our shot.” VFX LA has its own custom neural network for deepfakes. Explains Traub, “We were using DeepFaceLab on other projects, which is a main tool. Then we built our own neural network. However, a talented artist on this was responsible for using his own tech that he had constructed over the last couple of years. From there, the model outputs a face layer that then goes into After Effects for all of the compositing and cleanup. Sometimes when Weird Al would turn his head, we weren’t actually able to use the deepfake. We were able to grab frames of Daniel [Radcliffe’s] face and paint them onto Weird Al’s face so that when he turned, we could mask that portion of Daniel’s face away to avoid any sort of weird morphing with the deepfake.”
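The workflow Traub outlines – a model that outputs a face layer which is then composited and selectively masked away when the deepfake breaks down – comes down to a per-frame alpha blend. The sketch below is a hypothetical NumPy/OpenCV stand-in for what the team actually did in After Effects; the file paths and the convention of a painted grayscale matte are assumptions for illustration only.

```python
# Illustrative sketch of the final blend step, assuming the neural network has
# already written out a face layer and a matching alpha matte per frame.
# The show itself was comped in After Effects; this only shows the masking idea.
import cv2
import numpy as np

def comp_face_layer(plate_path: str, face_path: str, matte_path: str,
                    out_path: str) -> None:
    """Blend the generated face over the original plate using an alpha matte.

    Where the matte is painted down to zero (e.g. on hard head turns), the
    plate or a painted patch shows through instead of the deepfake, which
    avoids the morphing artifacts described above.
    """
    plate = cv2.imread(plate_path).astype(np.float32)
    face = cv2.imread(face_path).astype(np.float32)
    matte = cv2.imread(matte_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    alpha = matte[..., None]                     # broadcast matte to 3 channels
    comp = face * alpha + plate * (1.0 - alpha)  # straightforward "over" blend
    cv2.imwrite(out_path, comp.astype(np.uint8))

# Hypothetical per-frame call:
# comp_face_layer("plate.0101.png", "face_layer.0101.png",
#                 "matte.0101.png", "comp.0101.png")
```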

The trickiest shots to achieve were the CG crowd shots for the concert.

“We were using DeepFaceLab on other projects, which is a main tool. Then we built our own neural network. However, a talented artist on this was responsible for using his own tech that he had constructed over the last couple of years. From there, the model outputs a face layer that then goes into After Effects for all of the compositing and cleanup. Sometimes when Weird Al would turn his head, we weren’t actually able to use the deepfake. We were able to grab frames of Daniel’s face and paint them onto Weird Al’s face so that when he turned, we could mask that portion of Daniel’s face away to avoid any sort of weird morphing with the deepfake.”

—Izzy Traub, Visual Effects Supervisor

Only one panel was physically constructed for the awards ceremony set, with the remainder built in Photoshop.

Initially, it was thought that the deepfake would be the trickiest element to achieve, but in fact that distinction belonged to the CG crowd shots. “One of our artists recorded themselves dancing in a mocap suit for roughly an hour,” Traub remarks. “We took that animation, spliced it up, applied it to a bunch of different characters that we had in Maya and replicated it.” Variants, such as different hair movements, were built into the CG characters. “We were able to get away with 2K and 4K textures rather than 8K on our characters,” Traub adds. “The hair had to look decent and took more time than we were expecting, simply because we have lights moving across them from time to time, so you do see some of the detail in the hair.” More audience members were added in the background for the awards ceremony. “They had shot a lot of those plates on set, so we were able to use that for the modeling,” Traub states. “We took the plates, combined them and put those people in the back. Only one panel was built, so we had to extend the set. We took what they had for the back plate for the set, made our own in Photoshop, and that was it!”
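The splice-and-vary approach Traub describes – one long mocap take cut into clips and redistributed across many CG characters with small differences – can be sketched in a few lines. The Python below is purely illustrative; the clip names, offset ranges and hair variants are invented for the example and do not reflect the production’s actual assets or tools.

```python
# Illustrative sketch: one hour-long mocap take is cut into clips, and each CG
# crowd agent gets a random clip, time offset and small variation so neighbours
# do not read as identical copies. All names and ranges are hypothetical.
import random
from dataclasses import dataclass

@dataclass
class CrowdAgent:
    clip: str           # which slice of the mocap take this agent plays
    time_offset: float  # seconds of offset so neighbours are out of phase
    scale: float        # slight height variation
    hair_variant: str   # different hair groom / movement per agent

def build_crowd(num_agents: int, clips: list[str],
                hair_variants: list[str], seed: int = 7) -> list[CrowdAgent]:
    rng = random.Random(seed)  # seeded so the crowd layout is reproducible
    return [
        CrowdAgent(
            clip=rng.choice(clips),
            time_offset=rng.uniform(0.0, 4.0),
            scale=rng.uniform(0.95, 1.05),
            hair_variant=rng.choice(hair_variants),
        )
        for _ in range(num_agents)
    ]

# Hypothetical usage:
# crowd = build_crowd(5000, ["dance_A", "dance_B", "clap", "sway"],
#                     ["short", "long", "curly"])
```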

