
November 24, 2020

Web Exclusive

Deep Fakes: Part 1 – A Creative Perspective

By IAN FAILES

New ‘deep fake’ face-swapping videos seem to go viral on the internet with increasing regularity. Perhaps that’s because artists, and the machine learning algorithms they use, are getting so much better at making them. Many have pondered, therefore, whether deep fakes will soon make their mark – or are already doing so – in traditional filmed entertainment.

To look into the state of deep fakes, VFX Voice is launching a special series to explore this emerging art and technology. In part 1, experts from research, video art and visual effects weigh in on how deep fakes work and how they’re impacting filmed entertainment right now.

Who is making deep fakes, and how?

The deep fakes you tend to see in online videos, where a speaking person’s face is typically replaced with that of another (often famous) person, rely on deep learning algorithms and training data. This data is usually video footage or photographs of the other person used to craft a convincing model for face swapping.
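That training data is where much of the effort goes. As a rough illustration of the data-gathering step, the sketch below pulls face crops out of a source video using OpenCV’s stock face detector; the file names, crop size and detector choice are illustrative assumptions, not any particular artist’s or studio’s pipeline.

```python
# Minimal sketch of deep fake data preparation: harvesting face crops from
# footage of the person to be swapped in. Assumes OpenCV is installed;
# paths, crop size and detector are illustrative placeholders.
import os
import cv2

VIDEO_PATH = "subject_footage.mp4"   # hypothetical clip of the target person
OUT_DIR = "face_crops"               # hypothetical output folder
CROP_SIZE = 256                      # square crop size fed to the model

os.makedirs(OUT_DIR, exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the frame and save a resized crop of each one.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        crop = cv2.resize(frame[y:y + h, x:x + w], (CROP_SIZE, CROP_SIZE))
        cv2.imwrite(os.path.join(OUT_DIR, f"face_{saved:06d}.png"), crop)
        saved += 1
cap.release()
print(f"Saved {saved} face crops as training data")
```

Real pipelines add landmark-based alignment, filtering of bad detections and many thousands of frames, but the principle is the same: the swap can only look as convincing as this library of examples.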

A noted researcher in the field is Hao Li, CEO and Co-founder of Pinscreen, a maker of 3D avatar applications that typically rely on machine learning techniques. Pinscreen’s main products focus on creating photoreal virtual assistants and high-end virtual avatars, but the company also offers production-level A.I. face replacement for film production.

“Our latest technology,” states Li, “is the PaGAN 2 neural face renderer, which is integrated into Unreal Engine and allows one to turn any CG face into a photoreal one, using a real-time deep neural network that is trained using a few minutes of a person’s facial performance.”

Li is a close observer of the deep fakes community, and marvels at the impressive work being done by both deep fake researchers and artists. One of those people Li points out specifically is the artist known as ‘ctrl shift face,’ whose YouTube deep fake videos regularly reach into the millions of views.

Hao Li, CEO and Co-founder, Pinscreen

“Until a year ago, deep fakes still suffered from weird image artifacts, blurry results, low resolution output and flickering around lighting. [But now] it is already possible to generate near-perfect deep fakes in most conditions and when sufficient training data is available and of high quality.”

—Hao Li, CEO and Co-founder, Pinscreen

A preview of an upcoming deep fake work by ctrl shift face. (Image courtesy of ctrl shift face)

“The main impact of deep fakes will be the ability for filmmakers to not be limited by the storyline due to the extreme cost for using digital actors. They now have a tool where they can find a double, and use A.I. face replacement to turn their actor into any person they want at a very low cost, or make the person younger.”

—Hao Li, CEO and Co-founder, Pinscreen

Ctrl shift face’s journey into deep fakes, as related to VFX Voice, began as something that was just for fun. When starting out, ctrl shift face says “there was hardly any material to follow at the time and most deep fakes did not look that convincing. So I experimented a lot and through trial and error I found a way to get results I wanted. The software is still improving, so I’m still trying to improve with it.”

Some of the most popular deep fakes made by ctrl shift face include actors doing impressions of other actors, with their faces swapped. Those, in particular, have an extra level of appeal since the voice is already part-way there. “I make videos that I want to see and make me laugh,” ctrl shift face advises, in relation to deep fake video project choices. “Some ideas fail because the technology is not there yet, but most often they fail because of YouTube policies and copyrights.”

Also drawing upon the idea of impressions being a useful starting point to a deep fake video is the VFX studio Framestore, which recently made a demo deep fake for the Cannes LIONS Live virtual creativity festival. Here, the studio filmed comedian and impressionist Lewis Macleod performing as Boris Johnson and then Donald Trump, before swapping the actor’s face with those politicians’ faces.

“Essentially,” explains Framestore Chief Creative Officer Mike McGee, “we trained a neural network with tens of thousands of images of Boris and Donald. The network breaks down the data from a complex set of pixels into a simple version of core information and a network that allows us to reconstruct photographic versions of our protagonists.

“For us,” continues McGee, “the core information we want in a simplified state is the position and movement data. Once that’s separated from the complicated construction of pixels that make up the likeness, it becomes easy for us to adjust it. Tracked from an alternative performance we replace the position and movement data and feed that back through the network in the other direction, thereby creating a new set of pixels and a photographic representation of our subject but now with a new performance.”
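McGee’s description maps onto the widely documented open-source deep fake recipe: a single shared encoder compresses any face into a compact code (the “core information”), a separate decoder per identity learns to rebuild that person’s pixels from the code, and the swap comes from pushing one person’s performance through the other person’s decoder. The PyTorch sketch below shows that scheme in miniature; the layer sizes, 64x64 crops and loss are illustrative assumptions, not Framestore’s actual network.

```python
# Minimal PyTorch sketch of the shared-encoder / two-decoder face-swap
# autoencoder popularized by open-source deep fake tools. All sizes are
# illustrative; real tools use far larger networks and aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),  # the compressed "core information"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training sketch: each decoder learns to reconstruct its own person's faces
# from the shared latent code.
loss_fn = nn.L1Loss()
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(faces_a, faces_b):
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

# The swap itself: encode person A's performance, decode with B's decoder.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)     # stand-in for an aligned face crop of A
    swapped = decoder_b(encoder(frame_a))  # B's likeness, A's performance
```

Because the encoder is shared between both identities, the latent code tends to carry the pose and expression while each decoder supplies the person-specific appearance, which is why decoding A’s performance with B’s decoder yields B’s face giving A’s performance.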

Mike McGee, Chief Creative Officer, Framestore

“For us, the core information we want in a simplified state is the position and movement data. Once that’s separated from the complicated construction of pixels that make up the likeness, it becomes easy for us to adjust it. Tracked from an alternative performance we replace the position and movement data and feed that back through the network in the other direction, thereby creating a new set of pixels and a photographic representation of our subject but now with a new performance.”

—Mike McGee, Chief Creative Officer, Framestore

The process Framestore followed to make their Boris Johnson deep fake video. (Image courtesy of Framestore)

Training data for Framestore’s Boris Johnson deep fake: multiple photographs of the politician from many angles. (Image courtesy of Framestore)

Framestore’s deep fake Boris Johnson. (Image courtesy of Framestore)

How deep fakes are being used in filmed entertainment 

Deep fakes seem to be everywhere, but are these fun face-swapping videos and the tech behind them actually making their way into everyday filmed entertainment? Could deep fakes be used effectively for face replacement, which has previously been the domain of photorealistic CG humans and 2D compositing? And could deep fakes help complete the work of deceased actors? In the case of VFX studios, it’s clear many are certainly looking into deep fakes, or related deep and machine learning techniques, but there remains some hesitation about their use for final shots.

“As far as using deep fakes in VFX goes, they’re still very much in their infancy, but they do offer creatives an exciting new range of storytelling possibilities,” outlines Framestore Executive Creative Director William Bartlett. “We used it for example on a very small part of our work on Pokémon: Detective Pikachu where the film required Bill Nighy’s character to appear younger in an ‘old’ news reel. It required some fixes, but deep fakes suited this need because of the nature of what would ultimately be presented on screen.”

Meanwhile, members of a recent panel of visual effects supervisors who discussed the subject of digital aging and de-aging, as reported on by VFX Voice, noted that while they had often been asked about deep fakes, some of the quality issues and shot-specific requirements meant deep fakes had not yet been directly used for final shots they’d been involved with.

However, like many things in visual effects, the technology behind deep fakes is constantly changing. Li notes, for instance, that “until a year ago, deep fakes still suffered from weird image artifacts, blurry results, low resolution output and flickering around lighting.” But now, Li attests, “it is already possible to generate near-perfect deep fakes in most conditions and when sufficient training data is available and of high quality.”

What VFX studios appear to be doing currently is capitalizing on developments in A.I. and machine/deep learning techniques to aid in character work. For example, Weta Digital and Digital Domain implemented machine or deep learning elements in their respective CG Thanos creations for Avengers: Endgame. In another example, presenters at this year’s VFX Oscar bake-off for Terminator: Dark Fate mentioned that director Tim Miller would review CG actor iterations and, if he felt they needed additional work, would send through deep fake samples to help continue shot refinement. And ILM, which carried out extensive de-aging VFX work for The Irishman, deployed an A.I. solution called Face Finder that relied on huge libraries of images of the film’s actors to compare their de-aging results against.

William Bartlett, Executive Creative Director, Framestore

“As far as using deep fakes in VFX goes, they’re still very much in their infancy, but they do of course offer creatives an exciting new range of storytelling possibilities. We used it for example on a very small part of our work on Pokémon: Detective Pikachu where the film required Bill Nighy’s character to appear younger in an ‘old’ news reel. It required some fixes, but deep fakes suited this need because of the nature of what would ultimately be presented on screen.”

—William Bartlett, Executive Creative Director, Framestore

In addition to the work mentioned on Pokémon: Detective Pikachu, and some additional machine learning techniques used for their Endgame Smart Hulk, Framestore has also been looking at how the technology that sits behind deep fakes can be extended for other purposes. “We’re continuing to do a lot of research and development into A.I. and machine learning,” McGee adds, “looking to adapt and integrate new functionality and ways of working into existing tools like facial capture, performance capture, creature animation and fast rendering tool sets.”

Framestore’s deep fake Donald Trump. (Image courtesy of Framestore)

A simplified version of Pinscreen’s approach to real-time deep fakes. (Image courtesy of Pinscreen)

The impact on talent-for-hire

As CG humans appearing in filmed entertainment became more popular – and more lifelike – many commentators suggested that ‘digital synthespians’ might some day replace actors altogether. Of course, this has not happened, and actors have regularly informed their CG selves and other synthetic characters, especially via body and facial capture.

But could it be deep fakes, rather than completely CG humans, that spell the death knell for actors, or significantly change the game in the talent-for-hire industry?

Li’s view is that well-made deep fakes can become extremely beneficial when there is a need for a digital actor. “The main impact of deep fakes,” he says, “will be the ability for filmmakers to not be limited by the storyline due to the extreme cost for using digital actors. They now have a tool where they can find a double, and use A.I. face replacement to turn their actor into any person they want at a very low cost, or make the person younger.”

“We’ll always need performances from actors, even if it’s for a completely digital character,” offers Bartlett on this subject. “Motion capture and references are invaluable for any VFX team, especially animators. Deep fakes are adding to the topic of VFX and digital humans putting actors out of work, something we’re always asked about. But what we’re applying is digital makeup, not replacing anyone’s performance. So, as much as actors and performers are needed now, they’ll be needed tomorrow even as this technology continues to evolve.”

McGee agrees. “A deep fake A.I. can be a useful tool to do a bit of the heavy-lifting in a shot or sequence that VFX teams can build upon. This is what we did in Detective Pikachu and it worked in that instance because of the ultimate output. But if we were to go into a project where a stunt performer does an entire performance and their face had to be replaced, deep fake isn’t the way to go. At least not yet.”

Watch Framestore showcase and explain its deep fake process in shooting an actor and face swapping Boris Johnson and Donald Trump onto him.
Pinscreen also made a Donald Trump deep fake. For this video, it used its PaGAN neural face renderer for face replacement.
Watch ctrl shift face’s most popular deep fake video featuring face swapping between Bill Hader and Arnold Schwarzenegger.
