By JIM McCULLAUGH
The VFX industry in 2024 will be shaped by a convergence of advanced hardware and software innovations. From quantum computing and AI-accelerated hardware to real-time ray tracing and AI-driven automation, the tools at the disposal of VFX artists and studios are poised to create visuals that were once deemed impossible. With these trends, we can expect a new era of creativity and realism, redefining the boundaries of what can be achieved in visual effects. Here, a cross-section of technology company thought leaders forecast the new year’s developments.
Rick Champagne, Director, Global Media & Entertainment Industry Marketing and Strategy, NVIDIA
The explosion of AI has taken the media entertainment industry by storm with a myriad of tools to accelerate production and enable new creative options. From initial concepts and content creation to final production and distribution, AI plays a crucial role every step of the way.
Artists are tapping into AI to bring their early concept drawings to life in 3D – or even in fully animated sequences. AI and advanced technology like NVIDIA GPUs can help virtual art departments create camera-ready environments in a fraction of the time, while enabling more creative options live on set. Capabilities like removing objects, changing backgrounds or foregrounds, or simply up-resing content in real-time for LED volumes are simpler than ever. AI is also helping studios dynamically curate and customize content tailored for individual consumers. This new era of personalization will open up new revenue streams to fuel the industry’s future growth.
Customizable generative AI will enable artists, filmmakers and content creators to scale and monetize their unique styles and produce more diverse outputs for broader consumption. For example, speech AI gives people the ability to converse with devices, machines and computers to simplify and augment their lives. It makes consuming content in any language much easier, providing advanced captioning, translation and dubbing while also enabling the use of voice for more natural interaction in immersive experiences. Enterprises and creators can also take their generative AI to the next level with NVIDIA Picasso to run optimized inference on their models, train state-of-the-art generative models on proprietary data or start from pretrained models to generate images, video and 3D content from text or image prompts. Large language models, or LLMs, like NVIDIA NeMo can be trained using data from game lore, comics, books, television shows and cinematic universes to create new opportunities for audiences to engage with content and characters.
Nearly three-fourths of M&E leaders believe AI will be crucial to staying competitive over the next five years. The new era of AI has inspired studios to create new departments to develop game-changing tools using computer vision, machine learning and foundation models. With NVIDIA AI Enterprise, businesses can streamline the development and deployment of production-ready generative AI, computer vision, speech AI and more. To overcome rising costs of production and decreasing budgets, studios are now planning their AI strategies. The upcoming year will reveal the steps they’re considering to move faster and more efficiently.
Mac Moore, Head of Media & Entertainment, CoreWeave
2023 will certainly go down as the year of gen AI across industries; however, in 2024 we’ll start to see its implications. Through experimentation, it has become clear that training foundational models and running gen AI applications require a ton of compute, increasing competition for hardware that’s already relatively scarce. Looking forward, I also expect real-time game engines will continue gaining traction in production, as they allow studios to leverage the same environment for previz as their production renders. Like real-time, AI/ML promises to increase efficiency through automation, reducing redundant and time-consuming tasks in the VFX pipeline. Cloud computing will be the backbone of the AI/ML expansion, as it can be used to train and run models without the financial and logistical challenges of purchasing infrastructure.
David “Ed” Edwards, VFX Product Manager, Vicon
We’ll continue to see AI impact motion capture and VFX in new and improved ways in 2024 and beyond. One of the most notable of those is the extent to which it’s going to broaden both the application and the core user base. Every approach to motion capture has its respective strengths and shortcomings. What we’ve seen from VFX to date – and certainly during the proliferation of virtual production – is that technical accessibility and suitability to collaboration are driving forces in adoption. AI solutions are showing a great deal of promise in this respect.
Matthew Allard, Director, Alliances & Solutions, Dell Technologies
More and more commercial notebook users are transitioning to mobile and fixed workstations to leverage their scalability, performance and reliability. These users want more performance and advanced features from their hardware to run specialized business software and applications, handle heavy office usage, and process complex tasks and workloads. As a result of this increased need for higher-performance systems, workstation technology continues to advance as well. This is leading to two major changes – workstations offering CPUs with increasing core counts, and more options for multiple GPUs in one workstation (up to four). In addition, workstations offer a very efficient and cost-effective option for running workloads locally. Particularly with the growth of AI, workstations are a valuable option for running AI workloads before you need to deploy at scale.
Addy Ghani, Vice President of Virtual Production, disguise
The hardware and software trends in media and entertainment may surprise us in the coming years. Going forward, we are seeing evolving trends toward decentralization and an ever-increasing reliance on high-speed, low-latency networking infrastructure. One area where we may see a huge uptick is real-time graphics performance in near-cloud infrastructure. As more AI SaaS services emerge, powerful GPUs – and even specialized AI cores – will need to be accessed almost instantaneously, or as close to it as possible.
Jeremy Smith, Chief Technologist, HP Inc.
The amount of data that VFX professionals have to work with these days is staggering. When it comes to looking into the crystal ball, the question is how artists and studios are going to work with massive volumes of data, the compute power required to process that data, and how all of that data is going to be managed and stored. All of the exciting creative trends in television and film, like virtual production, AI-augmented workflows and the continuous demand for higher-quality content, are driving this data boom.
So even as computation time and power are shooting up dramatically, the time to create content hasn’t actually changed that much because the quality bar keeps increasing at an almost exponential rate. In order to maximize processing power, we’re seeing developers embrace both the GPU and the CPU in workstations like the HP Z8 Fury, enabling artists to iterate much faster than ever before. We’re also seeing that organizations in VFX continue to operate with teams in a remote environment, leveraging talent pools around the world. All of these forces are driving a massive digital transformation across the industry where studios are looking to technology developers to help devise smarter ways to handle workflow infrastructure, distributed teams, a massive influx of data – and innovative ways to store and manage that data.
Tram Le-Jones, Vice President of Solutions, Backlight
2023 has been a tumultuous year, but there are many reasons to look forward to an exciting and innovation-filled 2024. This year won’t just be about new cutting-edge tools that redefine what’s possible in VFX; I’m also anticipating a transformation in the way we work.
Realizing a more efficient and interconnected future won’t hinge on a single change. Perhaps the most significant objective the industry can have in 2024 will be fostering more collaboration and bringing people together both in VFX and across the entire production pipeline. Workers want to feel connected in the production process – they want to know what’s happening upstream, how that information translates downstream, and how we keep it all connected so teams are referencing the same information efficiently. As we brace for a deluge of work in 2024, the industry will need to optimize for both efficiency and impact. That will start with taking a hard look at our processes and revisiting workflows, production management and especially asset management. Given the sheer volume of data and media we expect to handle, the demand for intuitive, scalable solutions will skyrocket. I believe this need will catalyze a surge in collaboration across the whole production ecosystem, with more software providers partnering up to provide solutions that address entire workflows. Just as we’ve witnessed with virtual production, I anticipate more departments will want to go from script to screen on one production in a more collaborative, flexible way than ever before.
James Knight, Global Director, Media & Entertainment/ Visual Effects, AMD
Those mocap volumes are not going anywhere – they’re here to stay! Television is really starting to embrace virtual production en masse, and motion capture falls under that. Virtual cameras let directors, directors of photography and artists walk around, lens and film environments as if they were really there, and I see that becoming even more ubiquitous than it is. I anticipate more film and TV professionals discovering that process and understanding the power of pairing the volume with motion capture and virtual production – and what that can do for pipelines that didn’t previously involve VP, streamlining the traditional pillars of pre-production, production and post-production into one iterative process.
Eric Bourque, Vice President of Content Creation, Media & Entertainment, Autodesk
While it’s impossible to predict exactly what’s around the corner, we know that our VFX and animation customers are constantly looking for ways to work more efficiently. This has been a driving force behind so much innovation in media and entertainment at Autodesk, including a push toward cloud-driven production, collaborating cross-industry on open source initiatives and exploring the promise of AI for accelerating artist workflows.
Data interoperability poses massive bottlenecks for VFX studios working with distributed teams around the world, and we are investing in open source efforts to tackle these complex challenges. Most recently we came together with Pixar, Adobe, Apple and NVIDIA in the formation of the Alliance for OpenUSD. We are also working with Adobe to help drive a new open surface shading model, OpenPBR, moving toward a reality where files can move seamlessly from one digital content creation system to another.
AI tools in some form or another have played a role in boosting VFX workflows for many years now, and we also see generative AI taking things one step further. For instance, we are integrating AI services developed using NVIDIA Picasso, a foundry for building generative AI models, into Maya, and are also teaming up with Wonder Dynamics to deliver an integration between Maya and Wonder Studio – a tool that harnesses the power of AI for character-driven VFX workflows.
Shawn Frayne, CEO & Co-Founder, Looking Glass Factory
Over the next months – not years or decades – I believe folks will find themselves chatting with AI-powered holograms on a daily basis in stores, theaters, places like the airport or stadium, our offices and eventually in our homes. We see this happening already with a handful of brands we work with, and I think that’s the beginning of something much bigger that’s about to sweep the globe. Not long after that happens, I think a lot of us will find we’re also chatting with each other (by which I mean, fellow humans) in 3D through holographic terminals like the Looking Glass – first in hybrid office setups, but eventually also in our homes – just like what was demonstrated at the recent NVIDIA SIGGRAPH demo.
So, my hunch is the shift from the 2D phones and laptops we use today to spatial interfaces of tomorrow will accelerate dramatically over the next few months, powered by conversational AI characters and AI-powered holographic communication.
Christopher Nichols, Director, Chaos Group Labs
Advance of general-purpose GPU (GPGPU) computing and AI tools: The demand for AI tools will continue to explode, not only externally but internally within organizations. This desire will also raise the demand for GPUs that can handle the AI training models organizations will be creating. However, in terms of M&E/VFX, the vast majority of tools won’t necessarily be based around image generation. Instead, we’re likely to see things like smarter denoising, animation accelerants (auto-rigging, auto-blendshapes) and motion capture workflows that start with a smartphone.
Andrew Sinagra, Co-Founder, NIM Labs
As companies look for ways to create efficiencies across global productions, migrating more services and systems to a cloud-based infrastructure is key. We’re seeing businesses and solutions that facilitate the hybrid (on-prem and cloud) model as well as the full studio-in-the-cloud model rise to the surface of conversations. Over the past several years, we’ve recognized the challenges of a decentralized business model, and solving these issues will be a top priority in 2024.
Dade Orgeron, Vice President of 3D Innovation, Shutterstock
We’re already seeing impressive AI applications for motion capture, character replacement and environment generation, but what’s most exciting are the tools that will break down the barriers to entry into the 3D industry, opening the doors for more creators while simultaneously reducing creation timelines from days to minutes.
Ofir Benovici, CEO, Zero Density
After helping to produce two million hours’ worth of content, we are seeing increasing pressure to adopt open standards. 3D platforms need to work harmoniously together and be compatible with industry protocols. Anything else just makes workflows unnecessarily slow and painful to set up.
Kamal Mistry, CEO, Arcturus
In the next year, we expect to see virtual production continue to accelerate, and hopefully solve a fairly significant problem: adding real people to virtual backgrounds. One of the best aspects of virtual production is that it gives performers the opportunity to fully immerse themselves in the world where their story takes place. Some of the most impressive examples are sci-fi hits like The Mandalorian and Star Trek: Strange New Worlds, which feature imaginative alien worlds. In most cases, though, those worlds are depicted as deserted landscapes, devoid of actual people. Right now, the best way to solve that is for artists to create CG digidoubles, a costly and time-consuming process. There are also limits to how realistic the CG characters are – which is why you often see them in the far background.
In the coming year, volumetric video will play a prominent role in the creation of virtual humans for virtual production. Productions can simply record a real performer in costume carrying out an assigned task, then add them to a virtual production scene. We’ve seen that in films like the Whitney Houston biopic, the latest Matrix, comic book movies and more, but there have been limits due to how much data is needed to display each volumetric character. That is changing, though, and volumetric video developers – including Arcturus – have found new ways to make volumetric recordings lightweight, meaning content creators can add hundreds (if not more) of real performances into virtual production backgrounds as needed.
For creators, it means a new way to deploy relightable 3D assets in a traditional 2D pipeline, leading to better crowd scenes, new VFX options and new possibilities for 3D compositions. For audiences, it means a better viewing experience – and that’s really what it’s all about.
Gretchen Libby, Director, Visual Computing, Amazon Web Services (AWS)
In 2024, cloud computing will continue to serve a vital role in VFX and animation workflows, supporting the increasing pace and scale of production as well as a growing global workforce. As more studios enable their artists to work in the cloud, they will need robust cost reporting and management tools, as well as data management across multiple locations. The cloud provides tremendous scalability and flexibility, but as demand increases in 2024, studios will need to optimize their costs to match their budgets. Interoperability also remains a top priority in content creation workflows.
Christy Anzelmo, Chief Product Officer, Foundry
As the industry continues to evolve in response to new challenges, the drive to deliver high-quality VFX with even greater efficiency continues. Building efficient and flexible pipelines, where skilled artists are empowered by technology, has never been more essential for any VFX project or business. It’s exciting to see the industry’s continued progression toward leveraging USD to enable interoperability between applications and departments without compromising on performance or scale. As we continue to embed USD further within Nuke, Katana, Mari and other products, Foundry looks forward to participating in the recently announced Alliance for OpenUSD and supporting the path toward standardization.
Adrian Jeakins, Director of Engineering, Brompton Technology
We’ll see more and more LED products designed specifically for virtual production as manufacturers better understand the unique requirements of LED for in-camera visual effects (ICVFX). One of the first and most obvious of these is LED panels that deliver better color rendering by using extra emitters to improve spectral quality. A challenge here for panel manufacturers is sourcing LED packages with extra emitters, which are not widely available. As a result, we’ll see coarse panels first – which will be really good for ceilings in a volume – and then finer panels. Processing also faces new challenges, because the algorithms for calibrated control of these extra emitters are a lot more complex than those for standard LED panels.