VFX Voice

The award-winning definitive authority on all things visual effects in the world of film, TV, gaming, virtual reality, commercials, theme parks, and other new media.

Winner of three prestigious Folio Awards for excellence in publishing.



October 04, 2023

ISSUE: Fall 2023

RAPID EVOLUTION AT THE INTERSECTION OF AI AND VFX

By CHRIS McGOWAN

Wētā adapted new deep learning methodologies and utilized neural networks for Avatar: The Way of Water. (Images courtesy of 20th Century Studios)


Crafty Apes’ AI division came into existence during the pandemic. Company Co-Founder Chris LeDoux recalls, “It all started during COVID with me watching YouTube videos from authors like Bycloud and Two Minute Papers. Then our VFX Supervisor and resident mad scientist Aldo Ruggiero began to show me a number of incredible things he was using AI for on the film he was supervising.” It became clear to LeDoux “that AI was going to shake up our industry in a massive way.” He explains, “Developments in AI/ML seemed like they would create a fundamental shift in how we approached and solved problems as it relates to shot creation and augmentation. I knew we had to make it a top priority.” Since then, Crafty Apes has applied AI to a wide range of VFX projects, reflecting an accelerating implementation of AI technology by the visual effects industry.

LeDoux comments, “I can tell you that we have leveraged machine learning [ML] for tasks like deepfake creations, de-aging effects, facial manipulation, rotoscoping, image and video processing and style transfer, and the list continues to grow.” He notes that once AI tools are integrated into the pipeline, they “speed up the workflow drastically, lower the costs of VFX significantly, and allow the artists to put more time into creativity.”

Machine learning helped Digital Domain meet its deadlines for She-Hulk: Attorney at Law. (Images courtesy of Marvel Studios)

Regarding the teaming up of AI with VFX, “the first challenge is really managing expectations, in both directions,” says Hanno Basse, Chief Technology Officer of Digital Domain. He adds, “We shouldn’t overestimate what AI will be able to do, and there is a lot of hype out there now. At the same time, it will have a significant and immediate impact on all aspects of content creation, and we need to recognize the consequences of that.”

Digital Domain

The industry is looking at “many concepts and implementations for AI and ML that are very promising, and [is] using some of them already today,” according to Basse. Digital Domain has utilized machine learning on high-profile movies such as Avengers: Infinity War and Avengers: Endgame, and the She-Hulk: Attorney at Law limited series for Disney+. It also created a 3D “visual simulation” of famed NFL coach Vince Lombardi – with the help of Charlatan, Digital Domain’s machine learning neural rendering software – for the February 2021 Super Bowl.


TOP TO BOTTOM: Rising Sun Pictures’ machine learning incorporated data from a “learned library of reference material” to help create Baby Thor for Thor: Love and Thunder. An early adopter of AI, RSP used machine learning to give the baby an uncannily lifelike quality while exhibiting behaviors required by the script. (Images courtesy of Marvel Studios)


“AI, and especially its close cousin, machine learning, have been in our toolbox for five years or so. We use it on things like facial animation, face-swapping, cloth simulation and other applications,” Basse says. “Our work on She-Hulk last year made extensive use of this technology. In fact, we don’t believe we could have delivered that many shots without it given the time and resources we had to work on this project. [We also did] some fantastic work with cloth simulation on Blue Beetle. We’re basically using this technology now on virtually any show we get to work on.”

Prior to that, the digital creation of the character Thanos’ face in Avengers: Infinity War was Digital Domain’s first major application of machine learning and utilized the Masquerade facial-capture system. Avengers: Endgame followed close on its heels. “Since then, DD has done a lot more work with this technology,” Basse remarks. “For example, we created an older version of David Beckham for his ‘Malaria Must Die – So Millions Can Live’ campaign and used our ML-based face-swapping technology Charlatan to bring deceased Taiwanese singer Teresa Teng back to life, virtually.”

Basse adds, “In general, machine learning has proven very useful to help create more photorealistic and accurate results. But it’s really the interplay of AI and the craft of our artists – which they acquired over decades, in many cases – that enables us to create believable results.”

Wētā FX

“We have been working with various ML tools and basic AI models for a long time,” says Wētā FX Senior Visual Effects Supervisor Joe Letteri. In fact, Massive software, employed all the way back on The Lord of the Rings, uses primitive fuzzy logic AI to drive its agents. Letteri notes, “Machine learning has also been prevalent in rendering for de-noising for years across the industry. For Gemini Man we used a deep learning solver to help us achieve greater consistency with the muscle activations in our facial system. It helped us streamline the combinations that were involved in complex movements across the face to build a more predictable result.”

Wētā changed its facial animation system for Avatar 2 and adapted new deep learning methodologies. Letteri says, “Our FACS-based facial animation system yielded great results, but we felt we could do better. As our animators and facial modelers got better, we needed increasingly more flexible and complex systems to accommodate their work. So, we took a neural network approach that allowed us to leverage more of what the actor was doing and hide away some of the complexity from the artists while giving them more control. We were also able to get more complex secondary muscle activations right from the start, so the face was working as a complete system, within a given manifold space, much like the human face.”

Letteri and his crew created another neural network to do real-time depth compositing during live-action filming. He explains, “During that setup process, we utilized rendered images to train the deep learning model in addition to photographed elements. This allowed us to gather more reference of different variations and positions than we could feasibly get on set. We could train the system to understand a given set environment and the placement of characters in nearly every position on the set in a wide range of poses – something that would be impractical to do with actors on a working film set.”

Comments Letteri, “VFX pipelines are always evolving, sometimes driven by hardware or software advancements, sometimes through new and innovative techniques. There is no reason to think that we won’t find new ways to deploy AI-enhanced workflows within VFX. Giving artists ways to rapidly iterate and explore many simultaneous outcomes at the same time can be enormously powerful. It also has great potential as a QC or consistency tool, the way many artists are using it now.”


Autodesk

“AI has the potential to be revolutionary for VFX as artists design and make the future,” says Ben Fischler, Director of Product Management, Content Creation at Autodesk. “The Internet took shape over many years, and it took time for it to become a part of our daily lives, and it will be similar with AI. For the visual effects industry, it’s all about integrating it into workflows to make them better. It won’t be an immediate flip of the switch, and while certain areas will be rapid, others will take longer.”

It has been more than two years since Autodesk embraced AI tools in Flame. “Flame puts a sprinkle of AI into an artist’s workflow and supercharges it dramatically. Things like rotoscoping, wire removal and complex face matte creation are processes that go back to the origins of visual effects when we did things optically, not digitally, and they’re still labor intensive. These are the processes where a little AI in the right places goes a long way,” Fischler explains. “In the case of Flame, we can take a process that had an artist grinding away for hours and turn it into a 20-minute process.”

Autodesk recently launched a private beta of Maya Assist in collaboration with Microsoft. “It was developed for new users to Maya and 3D animation and uses voice prompts via ChatGPT to interface with Maya,” Fischler says.

TOP TWO: Data on a real baby was collected from the grandson of a former Disney executive in order to create the digital Baby Thor in Thor: Love and Thunder. (Images courtesy of Marvel Studios) BOTTOM TWO: Soccer icon David Beckham participated in the ‘Malaria Must Die – So Millions Can Live’ campaign. His face was aged into his 70s by Digital Domain’s Charlatan technology. The video was produced by the Ridley Scott Creative Group Amsterdam. (Images courtesy of Digital Domain)


TOP TO BOTTOM: Wētā utilized a “deep learning solver” to achieve greater consistency with the muscle activations in the facial system for Gemini Man. (Images courtesy of Wētā FX and Paramount Pictures)


Rising Sun Pictures

Some five years ago, RSP began collaborating with the Australian Institute for Machine Learning (AIML), which is associated with the University of Adelaide, on ways to incorporate emerging technologies into its visual effects pipeline. AIML post-doctoral researchers John Bastian and Ben Ward saw the potential for AI in filmmaking and joined RSP; they now lead its AI development team with Troy Tobin.

One of the multiple projects that benefited from their work was Marvel’s Thor: Love and Thunder, in which RSP applied data collected from a human baby (the grandson of former Disney CEO Bob Chapek) to a CG infant. Working in tandem with the film’s production team, they were able to “direct” their digital baby to perform specific gestures and exhibit emotions required by the script. According to the Senior VFX Producer on the film, Ian Cope, quoted on the RSP website, “The advantage of this technique over standard ‘deep fake’ methods is that the performance derives from animation enhanced by a learned library of reference material.” The look was honed over many iterations to achieve a digital baby that would seem real to audiences.

“The work we’re doing is not just machine learning,” adds Ward, now Senior Machine Learning Developer at RSP. “Our developers are also responsible for integrating our tools into the pipeline used by the artists. That means production tracking and asset management and providing artists with the control they need from a creative point of view.”

Working together across several projects, the AI and compositing teams have grown in their mutual understanding. Having explored this space early, “we’ve learned a lot about how the two worlds collide and how we can utilize their [AI] tools in our production environment,” observes RSP Lead Compositor Robert Beveridge. “The collaboration has improved with each project, and that’s helped us to one-up what we’ve done before. The quality of the work keeps getting better.”

Jellyfish Pictures

“AI and ML offer exciting opportunities to our workflows, and we are exploring how to best implement them,” says Paul J. Baaske, Jellyfish Pictures Head of Technical Direction. “For example, how we can leverage AI to create cloth and muscle simulation with higher fidelity. This is a really intriguing avenue for us. Other areas are in imaging – from better denoising, roto-masks, to creating textures quicker. But some of the greatest gains we see [are] in areas like data or library management.”

Baaske adds, “The key moving forward will be for studios to look at their output and data through the lens of ‘how can we learn and develop our internal models further?’ Having historic data available for training and cleverly deploying it to gain competitive advantage can help make a difference and empower artists to focus more on the creative than waiting for long calculations.”


Vicon

“One of the most significant ways I think AI is going to impact motion capture is the extent to which it’s going to broaden both its application and the core user base,” comments David “Ed” Edwards, VFX Product Manager for Vicon. “Every approach to motion capture has its respective strengths and shortcomings. What we’ve seen from VFX to date – and certainly during the proliferation of virtual production – is that technical accessibility and suitability to collaboration are driving forces in adoption. AI solutions are showing a great deal of promise in this respect.”

Edwards adds, “The demands and expectations of modern audiences mean content needs to be produced faster than ever and to a consistently high standard. As AI is fast becoming ubiquitous across numerous applications, workflows and pipelines, it’s already making a strong case for itself as a unifier, as much as an effective tool in its own right.”

Studio Lab/Dimension 5

“I think with the use of AI we will see many processes be streamlined, which will allow us to see multiple variations of a unique look,” says Ian Messina, Director of Virtual Production at Studio Lab and owner of real-time production company Dimension 5. Wesley Messina, Dimension 5 Director of Generative AI, says, “Some trailblazers, like Wonder.ai, are pushing the boundaries of technology by developing tools that can turn any actor into a digital character using just video footage. This gets rid of the need for heavy motion-tracking suits and paints a promising picture of what’s to come in animation.”

Wesley Messina adds, “As the technology becomes more widely available, we can expect to see AI tools being used by more and more creators. This will change the way we make movies and other visual content, bringing stories to life in ways we’ve never seen before.”

TOP TWO: Digital Domain created the digital face of Thanos in Avengers: Infinity War with the Masquerade system and machine learning, and then worked their magic again in Avengers: Endgame. (Images courtesy of Marvel Studios) BOTTOM TWO: Digital Domain used their Charlatan technology and machine learning to create a CGI likeness of the late Taiwanese singer Teresa Teng for a virtual concert that mesmerized fans. (Images courtesy of Digital Domain, Prism Entertainment and the Teresa Teng Foundation)


“The Internet took shape over many years, and it took time for it to become a part of our daily lives, and it will be similar with AI. For the visual effects industry, it’s all about integrating it into workflows to make them better. It won’t be an immediate flip of the switch, and while certain areas will be rapid, others will take longer.”

—Ben Fischler, Director of Product Management, Content Creation, Autodesk

TOP: Autodesk’s Maya Assist has a ChatGPT assistant. (Image courtesy of Autodesk) BOTTOM TWO: Autodesk Flame software offers the ability to extract mattes of the human body, head and face with AI-powered tools for color adjustment, relighting and beauty work, as well as to quickly isolate skies and salient objects for grading and VFX work. (Images courtesy of Autodesk)


Perforce and VP

Rod Cope, CTO of Perforce Software, sees AI as having a big impact on virtual production. He explains, “For one, AI is going to let creative teams generate a lot more art assets, especially as text-to-3D AI tools become more sophisticated. That will be key for virtual production and previs. Producers and art directors are going to be able to experiment with a wider array of options, and I think this will spur their creativity in a lot of new ways.”

The Synthetic World

Synthesis AI founder and CEO Yashar Behzadi opines that synthetic data will have a transformative impact on TV and film production in a number of areas, such as virtual sets and environments, pre-visualization and storyboards, virtual characters and creatures, and VFX and post-production.

Behzadi continues, “The vision for Synthesis AI has always been to synthesize the world. Our team consists of people with experience in animation, game design and VFX. Their expertise in this field has enabled Synthesis AI to create and release a library of over 100,000 digital humans, which serves as the training data for our text-to-3D project, Synthesis Labs.”

More on GenAI

“Now, with the emergence of more sophisticated generative AI models and solutions, we’re starting to look at many more ways to use it,” explains Digital Domain’s Basse. “Emerging tools in generative AI, such as ChatGPT, MidJourney, Stable Diffusion and RunwayML, show a lot of promise.”

Basse continues, “GenAI is really good to start the creative process, generating ideas and choices. GenAI does not actually generate art, it creates variants and choices which are based on prior art. But this process can provide great starting points for concept art. But the ultimate product will still come from human artists, as only they really know what they want. Having said that, I have high expectations for the use of GenAI technology in storyboarding and previsualization. I believe we will see a lot of traction with GenAI in those areas very soon.”

Autodesk’s Fischler notes, “Having the ability to generate high-quality assets would be very impactful to content creators in production, but the challenge is making these assets production-ready for film, television or Triple A games. We are seeing potentially useful tools on the lower end, but it’s much harder to have AI generate useful assets when you have a director, creative director and animation supervisor with a creative vision and complex shot sequence to build.”

Wes Messina adds that text-to-3D-model technology “could be a game-changer, moving us away from the hard work of starting from scratch in developing 3D assets.”

LeDoux argues, “However, it’s important to remember that AI-generated concept art isn’t here to replace human creativity. Instead, it’s a rad tool that can add to and improve the artistic process. By using these AI technologies, artists can focus on the creative side of their work and bring the director’s vision to life more effectively, leading to super engaging and visually stunning productions.”

VFX Taskmasters

Overall, AI will help with many tasks. LeDoux comments, “If we divide it up into prep, production, and post-production, and then think of all of the aspects of VFX for each one, you can help wrap your mind around all of the applications. In prep, having generative tools such as Stable Diffusion to help create concept art is obvious, but other tools to help plan, such as language models to help parse the script for VFX-based bidding purposes, as well as planning via storyboarding and previz is massive. In production, having tools to help with digital asset management, stitching and asset building for virtual production is a massive time saver. In post-production, the list is endless from rotoscope assistance to color matching to animation assistance.”

“We think that AI will impact our entire workflow,” Basse says. “There are so many scenarios we can think about: creating 3D models with text prompts, creating complex rigs and animation cycles, but we also see potential applications in layout, lighting, texture and lookdev. There is also an expectation that machine learning will revolutionize rotoscoping, which is a very labor-intensive and tedious part of our workflow today.”

Perforce’s Cope adds, “AI is going to have an impact on quality assurance and workflow as well. I think we will see AI automate some of the more rote tasks in 3D animation, like stitching and UV mapping, and identifying rendering defects – things that take time but don’t require as much creativity. AI is going to accelerate those tasks. And, since AI allows teams to go faster, directors will demand even more with quicker turnarounds. Teams that don’t adopt AI in their workflows will be left behind sooner than later.”

VFX tasks that will benefit from AI also include object removal, matchmoving, color grading, and image upscaling and restoration, according to Synthesis AI’s Behzadi.

AI, VR and Video Games

It’s easy to imagine that AI could give a big boost to video games and VR by vastly increasing interactivity and realism. “Thinking on another level, I think that games as we know them will change,” Cope says. For example, “AI is going to open the doors for more natural and unique interactions with characters in an RPG, and could even lead to completely unique in-game experiences for each player and journey.”

Synthesis AI’s Behzadi comments, “Virtual reality experiences can be greatly enhanced by AI in several ways, including digital human development, enhanced simulations and training, as well as computer vision applications, to name a few.”

Behzadi continues, “AI can generate realistic digital humans or avatars that can interact with users in real-time. These avatars can understand and respond to users’ gestures, facial expressions and voice commands, creating more natural and engaging interactions within virtual environments. When coupled with computer vision techniques, AI has a powerful impact on enhancing the visual quality of VR experiences, including improved graphics rendering, realistic physics simulations, object recognition, and tracking users’ movements within the virtual environment. These advancements ultimately lead to more visually stunning and immersive VR worlds.”

The Road Ahead

Looking ahead, LeDoux opines, “While it’s true that AI, in its essence, is an automation tool with the potential to displace jobs, historical precedents suggest that automation can also stimulate job creation in emerging sectors.” A look back at the last quarter-century provides a good understanding of this trend, he notes. “The VFX industry has seen exponential growth, fueled largely by clients demanding increasingly complex visual effects as technology progresses.” LeDoux adds that AI will bring significant improvements in the quality and accessibility of visual effects, “thereby enhancing our capacity for storytelling and creative expression.”

Letteri comments, “In VFX we are always looking for new ways to help the director tell their story. Sometimes this is developing new tools that enable greater image fidelity or more sophisticated simulations of natural phenomena – and sometimes they are about finding ways to do all of that more efficiently.” Basse concludes, “Not a day goes by where we don’t see an announcement from our tool vendors or new startups touting some new accomplishment relating to content creation using AI and ML. It’s a very exciting time for our industry. For many applications, there is still a lot of work to be done, but this technology is evolving so rapidly that I think we need to measure these major advancements in months, not years.”
