VFX Voice

Web Exclusive | February 1, 2022

AND NOW… DIGITAL HUMANS FOR THE REST OF US

By IAN FAILES

The MetaHuman Creator app from Epic Games is cloud-based. (Image courtesy of Epic Games)

Photoreal digital humans were once – and perhaps still are – one of the hardest things to achieve in visual effects and animation. Today, however, there is a swathe of options that allows both novice and experienced artists to ‘jump right in’ and craft digital humans for projects, ranging from incredibly photorealistic models to more stylized but still believable versions.

There are now a number of popular and accessible methods for crafting digital ‘avatars,’ ranging from somewhat traditional 3D digital content creation tools to new real-time approaches, as well as ones that take advantage of machine learning and artificial intelligence techniques.

MetaHuman Creator comes preloaded with many different body types and details, with blending available between them. (Image courtesy of Epic Games)

THE CREATION OF METAHUMAN CREATOR

The cloud-based MetaHuman Creator app from Epic Games has quickly become a go-to for building digital humans. That’s partly due to the explosion of interest in, and the abilities of, real-time tools – in this case, Epic’s Unreal Engine – and partly due to the speed at which users can generate a realistic human with the app. It’s also free.

The ‘MetaHumans’ are fully rigged and ready for animation. (Image courtesy of Epic Games)

“We focused on democratizing the creation of high-fidelity digital humans, and the goal was to take a task that only a handful of domain experts in the world can do in months and simplify it so that anyone could create a believable human in minutes,” outlines Chris Evans, Technical Animation Director, Advanced Character Group at Epic Games. “Key to this was the vision of Vladimir Mastilovic and the unique tools and expertise of his company, 3Lateral. We had worked with Vlad for many years and decided to invite him and his team to join the Epic family with this goal in mind.

“It was a real challenge to develop a system that feels simple and intuitive, yet drives a back end that’s rebuilding a very complex rig and has all the details required to use the character in a game or interactive experience,” adds Evans. “It was especially challenging to create a user interface for the generation of believable human skin. Our skin shader is one of the most complex shaders we have made at Epic, and distilling it down to a few intuitive sliders was a challenge. We had to research a lot of techniques involving machine learning and pragmatic uses of principal component analysis (PCA) and frequency separation to get us there, but we’re happy with the results.”
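
Epic has not published how the MetaHuman skin controls are implemented, but the general idea Evans describes, using PCA to compress a large bank of shader parameters down to a few expressive controls, can be sketched in a few lines. Everything below is illustrative: the dataset, parameter count and number of sliders are made-up placeholders, not Epic’s data or code.

```python
# Minimal sketch of PCA-driven sliders (not Epic's implementation).
# Assumes a hypothetical dataset `skin_params`, where each row is one
# example face's full vector of skin-shader inputs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
skin_params = rng.normal(size=(200, 512))   # 200 example faces x 512 shader inputs (made up)

pca = PCA(n_components=4)                   # keep 4 components -> 4 intuitive UI sliders
pca.fit(skin_params)

def sliders_to_shader_params(slider_values):
    """Map a few slider values (principal-component weights) back to the
    full shader parameter vector."""
    weights = np.asarray(slider_values, dtype=float).reshape(1, -1)
    return pca.inverse_transform(weights)[0]

full_params = sliders_to_shader_params([1.5, -0.3, 0.0, 0.8])
print(full_params.shape)                    # (512,) - a complete parameter set from 4 sliders
```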

The Headshot feature in Reallusion’s Character Creator allows users to transform from a real face to a CG one. (Image courtesy of Reallusion)

“We focused on democratizing the creation of high-fidelity digital humans, and the goal was to take a task that only a handful of domain experts in the world can do in months and simplify it so that anyone could create a believable human in minutes.”

—Chris Evans, Technical Animation Director, Advanced Character Group, Epic Games

To make MetaHuman as accessible as possible, Evans says Epic initially concentrated on the sculpting tools. However, he relates, “even with very intuitive sculpting tools, you still needed to know facial anatomy and construction to use them successfully. For instance, if someone wanted to create a character of Asian descent, they would need to know enough to pull down the eye cover fold, creating a monolid as seen in some Asian facial morphologies. So we added the ability to blend in specific traits of faces from a pool of existing characters. This allows you to browse existing presets and then toss them into a blendspace, and just use the nose from one and the eyes from another – it’s a lot more user-friendly.”
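
Epic has not detailed how the preset blending works internally, but the ‘nose from one, eyes from another’ idea can be illustrated as region-weighted blending between heads that share a common topology. The sketch below is purely conceptual: the mesh data and region masks are random placeholders, and the function names are hypothetical.

```python
# Conceptual sketch of region-weighted face blending (not MetaHuman Creator's
# internals). Assumes all presets share the same vertex topology, and that
# `nose_mask` / `eye_mask` are per-vertex weights in [0, 1] marking regions.
import numpy as np

n_verts = 10_000
rng = np.random.default_rng(1)
base = rng.normal(size=(n_verts, 3))        # stand-in for the base head's vertex positions
preset_a = rng.normal(size=(n_verts, 3))    # preset whose nose we want
preset_b = rng.normal(size=(n_verts, 3))    # preset whose eyes we want
nose_mask = rng.random(n_verts)             # in practice painted/derived from the rig
eye_mask = rng.random(n_verts)

def blend_region(verts, preset, mask, amount):
    """Move `verts` toward `preset` only where `mask` is high."""
    w = (mask * amount)[:, None]
    return verts * (1.0 - w) + preset * w

head = blend_region(base, preset_a, nose_mask, amount=0.8)   # take the nose from A
head = blend_region(head, preset_b, eye_mask, amount=0.6)    # take the eyes from B
```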

A DIGITAL HUMAN WORKFLOW

A screenshot from Reallusion’s iClone toolset, where animation takes place. (Image courtesy of Reallusion)

Another method artists have been adopting to quickly create believable digital humans is Reallusion’s Character Creator (build) and iClone (animation) pipeline. The software, which also incorporates a number of real-time features, contains base models that are highly morphable and fully rigged.

“Digital humans made with Character Creator are able to take on the look of any person or image via the Reallusion Headshot feature, which allows users to import a photo and generate a head model based on the person in the photo,” advises Reallusion Vice President of Marketing John Martin. “James Gunn’s The Suicide Squad used Character Creator and iClone extensively to develop and animate every hero featured in the film for 16 scenes of previs, for example.”

There are tools for crafting specific facial details, too, such as the SkinGen, Hair and Beard apps, with SkinGen enabling micro-details such as pores, blemishes and wrinkles. Once a character is designed, it can be sent to iClone for animation or exported as FBX or USD to many platforms, including NVIDIA’s Omniverse and Unreal Engine. A raft of tools, including real-time ones that capitalize on new approaches to lipsync, facial puppeteering, motion capture and keyframing, can then give the digital human movement.
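
As a rough illustration of what the USD side of such a hand-off can look like, here is a minimal sketch using the open-source pxr module (installable as usd-core). The file name and single-triangle mesh are placeholders, not actual Character Creator output, and a real export would carry the full rigged character.

```python
# Minimal sketch of writing mesh data to a USD file with the open-source
# `pxr` module (pip install usd-core). Placeholder geometry only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("digital_human.usda")
mesh = UsdGeom.Mesh.Define(stage, "/DigitalHuman/Head")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])   # one triangle as a stand-in
mesh.CreateFaceVertexCountsAttr([3])
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])
stage.GetRootLayer().Save()
```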

GENERATING HUMANS WITH HUMAN GENERATOR

Crafting digital humans can be a laborious process, so it helps if there’s a way to kick-start the build. That’s the aim of Human Generator, an add-on for the open-source 3D software Blender made by Alexander Lashko and Oliver J. Post.

Lashko observes that Human Generator was made based on “the growing need for human character creation. Whether it’s for film, games, architectural visualization, video content creation or simply digital artwork, human characters play a major role in the design process. Likewise with the growing potential of Blender software and its community, we thought having a native tool would help creators with their projects.”

The fully rigged digital human characters available in the add-on can be customized to different genders, ethnicities, body shapes, hair types and ages. Clothing presented an interesting challenge for Lashko and Post, owing to interpenetrations. “One solution we came up with,” details Lashko, “was to hide the parts of the body that are covered by the clothes. This also saves on performance, since the rendering engine does not have to calculate the hidden parts.”
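
Human Generator’s own source isn’t shown here, but the ‘hide what the clothes cover’ trick Lashko describes maps naturally onto Blender’s Python API, for example via a Mask modifier. The object name and vertex group name below are hypothetical.

```python
# Sketch of the general idea in Blender's Python API (not Human Generator's
# actual code): hide body geometry under clothing with a Mask modifier.
# Assumes the body object carries a vertex group marking covered vertices.
import bpy

body = bpy.data.objects["HumanBody"]               # hypothetical object name
mod = body.modifiers.new(name="HideCovered", type='MASK')
mod.vertex_group = "covered_by_shirt"              # hypothetical vertex group
mod.invert_vertex_group = True                     # show everything EXCEPT the covered vertices
```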

SkinGen is Reallusion’s tool for adding in human facial details like pores, wrinkles and blemishes. (Image courtesy of Reallusion)

As with many of the apps discussed here, the idea behind Reallusion’s tools is to enable digital humans to be created without the user necessarily having to understand all of the underlying complexity. But that doesn’t mean it’s not complex under the hood.

“Every element of development for digital humans has a hurdle to cross for appearance and movement,” says Martin. “The skin appearance involved an extensive effort to provide users with the balance of ease of application and deeply detailed capabilities. We developed a full, multi-layered system for addressing the skin appearance and how users could edit and customize their looks.

Results of Pinscreen’s algorithm going from a single photo to a digital avatar. (Image courtesy of Pinscreen)

“Wherever we see challenges in the workflow it’s our mission to solve them, especially when you get to the really difficult things like animation. The more we can democratize that, the more we’re going to give capability to storytellers, designers and developers to extend their vision with more powerful, yet accessible tools.”

—Richard Kerris, Vice President, Omniverse Platform Development, NVIDIA

“Starting from basic skin details and micro normals,” continues Martin, “we worked with TexturingXYZ, an industry leader in skin assets, to help overcome some of the compelling challenges of micro details for our layered design. SkinGen became the layered appearance editor developed for users to have access to every element of the skin, resulting in a streamlined tool to control the digital human’s skin from head to toe and from human skin to creature skin, glamorous makeup looks to ghastly wounds.”

PERSONALIZING DIGITAL HUMANS

Further examples of the results of Pinscreen’s photo-based avatar digitization technology. (Image courtesy of Pinscreen)

Another major development in the ‘democratization’ of digital humans is the ability to almost instantly create lifelike 3D likenesses of real people from single photographs. This is an area in which Pinscreen operates, having recently developed several new technologies and solutions for making the creation of photorealistic avatars accessible to consumers, as well as applications that make full use of deployable and personalized virtual humans.

“Our latest photo-based avatar digitization technology consists of a highly robust algorithm to generate normalized 3D avatars from highly unconstrained input photos,” explains Pinscreen CEO and Co-founder Hao Li. “This means that you can take input photos of yourself in very challenging lighting conditions. You can be smiling or side facing and the algorithm can still produce a consistent lighting normalized 3D avatar head with neutral expressions. This is extremely important for rendering fully parametric CG characters in any virtual environment and for performance-driven animation. Our method is based on a variant of StyleGAN2 which can produce 3D textured meshes of a head and a differentiable rendering framework that uses perceptual loss for refinement.”
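
Pinscreen’s full pipeline is proprietary, but the refinement step Li mentions, optimizing the rendered avatar against the photo under a perceptual loss, can be sketched in broad strokes. In the toy loop below the ‘renderer’ is a trivial stand-in (a real system would use a differentiable mesh renderer such as nvdiffrast or PyTorch3D) and the images are random tensors; only the structure of the optimization is meant to be illustrative.

```python
# Toy sketch of perceptual-loss refinement (not Pinscreen's method).
import torch
import torchvision

# Frozen VGG16 feature extractor used as the perceptual metric.
vgg = torchvision.models.vgg16(weights="DEFAULT").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(a, b):
    return torch.nn.functional.l1_loss(vgg(a), vgg(b))

target_photo = torch.rand(1, 3, 224, 224)                   # stand-in for the input photo
texture = torch.rand(1, 3, 224, 224, requires_grad=True)    # avatar parameters being refined
render = lambda tex: tex                                     # placeholder for a differentiable renderer

opt = torch.optim.Adam([texture], lr=1e-2)
for step in range(100):
    opt.zero_grad()
    loss = perceptual_loss(render(texture), target_photo)   # compare render against the photo
    loss.backward()
    opt.step()
```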

Pinscreen is looking to build a VR-based communication system that can go beyond traditional 2D video conferencing. (Image courtesy of Pinscreen)

“Our latest photo-based avatar digitization technology consists of a highly robust algorithm to generate normalized 3D avatars from highly unconstrained input photos. This means that you can take input photos of yourself in very challenging lighting conditions. You can be smiling or side facing and the algorithm can still produce a consistent-lighting normalized 3D avatar head with neutral expressions. This is extremely important for rendering fully parametric CG characters in any virtual environment and for performance-driven animation.”

—Hao Li, CEO/Co-founder, Pinscreen

To showcase its solution, Pinscreen has been building an immersive chat application called PinScape, which was demonstrated at SIGGRAPH 2021’s Real-Time Live! “The idea,” says Li, “is to build a VR-based communication system that can go beyond traditional 2D video conferencing. At the beginning, a user would take a selfie using a webcam, and the system automatically creates a complete full-body avatar from that input photo. The avatar consists of a fully rigged face and body, and also digitizes hair using an upgraded version of the technology we presented a few years ago.”

A screenshot from NVIDIA Omniverse Audio2Face, which takes an audio track and generates matching expressive facial animation. (Image courtesy of NVIDIA)

Pinscreen’s approach to generating a believable human avatar from a single input image isn’t without its challenges. They have to overcome what can be sub-optimal lighting and shadows, although of course the idea is to avoid the need for any kind of controlled studio lighting scenario to produce the avatar. “Another challenge,” mentions Li, “is that people often smile in photographs or may not be front facing the camera. To this end, we developed an algorithm that allows us to extract a consistent likeness of a person regardless of the lighting, facial expression, and pose from a photo, and then generate a 3D model of a normalized avatar.”

The Audio2Face app relies on machine learning techniques to convert audio to expressions. (Image courtesy of NVIDIA)

MAKING YOUR DIGITAL HUMAN EMOTE

Animating your digital human creation is of course just as important as building it. There are myriad facial capture and motion capture options out there to do this, while another method is to take an audio track and, using A.I., generate matching expressive facial animation from that audio source alone. NVIDIA’s Omniverse Audio2Face app enables this, and even has a preloaded 3D character called ‘Digital Mark’ for users to get started with.

Audio2Face comes with a preloaded character. The facial movements can be translated to other CG characters. (Image courtesy of NVIDIA)

The intention behind the app was all about accessibility, explains Simon Yuen, Director, Graphics AI at NVIDIA. “Right now, if you want to create a 3D character, you need to be a domain expert to do it. Audio2Face is designed with a very specific purpose in mind, to help simplify voice-based facial animation while improving the quality of automated solutions of today. The method leverages deep learning and runs in real-time. It supports realistic or stylized characters and neither rigging nor AI training is required to use it. It soon will support a broader range of motion and emotion for the full face. And it’s designed to complement and work with other existing tools and workflows.”
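
NVIDIA hasn’t published Audio2Face’s network here, so the snippet below is only a generic illustration of the category Yuen describes: a small model that regresses per-frame facial (blendshape) weights from a window of audio features. The layer sizes, feature dimensions and blendshape count are assumptions for the sketch.

```python
# Generic illustration of audio-driven facial animation
# (not NVIDIA's Audio2Face architecture).
import torch
import torch.nn as nn

N_MELS, WINDOW, N_BLENDSHAPES = 80, 16, 52   # assumed dimensions; 52 is an ARKit-style shape count

class AudioToFace(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_MELS * WINDOW, 256), nn.ReLU(),
            nn.Linear(256, N_BLENDSHAPES), nn.Sigmoid(),   # blendshape weights in [0, 1]
        )

    def forward(self, mel_window):            # (batch, N_MELS, WINDOW)
        return self.net(mel_window)

model = AudioToFace()
mel = torch.rand(1, N_MELS, WINDOW)           # stand-in for one window of mel-spectrogram features
weights = model(mel)                          # (1, 52) blendshape weights for this frame
```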

For Welcome to Chechnya, stand-in actors were captured to start the face-replacement process. (Image copyright © 2021 Southport Music Box Corporation d/b/a Music Box Films)

“Wherever we see challenges in the workflow it’s our mission to solve them,” adds Richard Kerris, Vice President, Omniverse Platform Development at NVIDIA, “especially when you get to the really difficult things like animation. The more we can democratize that, the more we’re going to give capability to storytellers, designers and developers to extend their vision with more powerful, yet accessible tools.”

The ‘digital humans’ or ‘digital veils’ were ultimately crafted for Welcome to Chechnya with a mix of A.I. and compositing techniques. (Image copyright © 2021 Southport Music Box Corporation d/b/a Music Box Films)

Audio2Face continues to be updated, now with blendshape support, for instance. Users are able to re-target to any 3D human or human-looking face, and even to more alien or animal-esque faces. “We’ve tested from rhinos to aliens to other things,” notes Yuen, “and we do plan to have more pre-trained data available with Audio2Face and better support of a larger variety of voices and languages.”
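
Blendshape support is what makes that retargeting straightforward in principle: any face that exposes a matching set of shapes can be driven by the same weights, since the animated mesh is the neutral mesh plus a weighted sum of per-shape offsets. A minimal sketch with placeholder geometry:

```python
# Blendshape evaluation: neutral mesh + weighted sum of per-shape offsets.
# All geometry here is random placeholder data, not any vendor's rig.
import numpy as np

rng = np.random.default_rng(2)
n_verts, n_shapes = 5_000, 52
neutral = rng.normal(size=(n_verts, 3))              # target character's neutral mesh
deltas = rng.normal(size=(n_shapes, n_verts, 3))     # per-blendshape vertex offsets
weights = rng.random(n_shapes)                        # e.g. one frame of audio-driven weights

animated = neutral + np.tensordot(weights, deltas, axes=1)   # (n_verts, 3) posed mesh
```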

A final shot from the film where the digital veil was purposefully kept somewhat soft in appearance. (Image copyright © 2021 Southport Music Box Corporation d/b/a Music Box Films)

WHEN A VERY DIFFERENT KIND OF DIGITAL HUMAN IS REQUIRED

The face replacements in David France’s Welcome to Chechnya (2020) documentary caught many people’s attention, not only for the delicate way they protected the identities of those featured in the film who risked persecution from the Russian republic of Chechnya, but also for the way they were achieved with A.I. and machine learning techniques. What was ultimately produced were ‘new’ digital humans.

A character crafted in the Human Generator app. (Image courtesy of Human Generator)

“It was especially challenging to create a user interface for the generation of believable human skin. Our skin shader is one of the most complex shaders we have made at Epic, and distilling it down to a few intuitive sliders was a challenge. We had to research a lot of techniques involving machine learning and pragmatic uses of principal component analysis (PCA) and frequency separation to get us there, but we’re happy with the results.”

—Chris Evans, Technical Animation Director, Advanced Character Group, Epic Games

Visual Effects Supervisor Ryan Laney oversaw that work, taking new performers as face doubles and masking them over the original subjects, aided by machine learning and traditional compositing. The approach fit the film’s limited budget and meant that expensive, time-consuming, completely CG photoreal digital avatars were not necessary. Furthermore, it wasn’t quite the same process as the current deepfakes phenomenon – itself now also a widely accessible toolset – although it certainly shares some technology.

The Human Generator interface within Blender. (Image courtesy of Human Generator)

An interesting aspect of the final shots was some noticeable softness or blur in the faces, which in fact was deliberately retained in the digital humans. “We leveraged the visual language of that softness to help us maintain journalistic integrity,” notes Laney. “There was a moment halfway through the production where I wasn’t sure if what was on the screen was the double or the original. So, it really did help us to have those visual indicators, but journalistic integrity was why the soft look stayed in.”

Close-up on a digital avatar made from the many variations of characters within Human Generator. (Image courtesy of Human Generator)

Laney has come to refer to his team’s digital human creation technique, which he is utilizing in other projects, as ‘digital veils.’ “We’re also calling the process ‘automated face replacement,’” he says. “I feel like it fits in with the analogy of automated dialogue replacement or ADR. We capture the data and we do this training session to marry the faces, and then there’s still work involved to lay it in. That automated face replacement is the process that produces digital veils.”
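
Laney’s pipeline isn’t public beyond what he describes, but the final ‘lay it in’ step is conventional compositing, and the deliberately soft veil can be illustrated as blending the double’s face over the original frame through a feathered mask. The frames, face region and blur radius below are all placeholders, not the film’s data.

```python
# Conceptual sketch of the compositing step only (not Ryan Laney's pipeline):
# lay a replacement face over the original frame through a feathered mask,
# which is what keeps the result deliberately soft.
import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 480, 640
original = np.random.rand(h, w, 3)            # stand-in for the original documentary frame
replacement = np.random.rand(h, w, 3)         # stand-in for the rendered double's face, already aligned

mask = np.zeros((h, w))
mask[180:320, 260:380] = 1.0                  # hypothetical face region
mask = gaussian_filter(mask, sigma=15)        # feather the edge -> the soft 'veil' look

composite = original * (1.0 - mask[..., None]) + replacement * mask[..., None]
```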

