VFX Voice



January 10, 2022

Winter 2022 Issue

VES HANDBOOK: Game Engines and Real-Time Rendering

By DAVID JOHNSON, VES

Edited for this publication by Jeffrey A. Okun, VES 

Abstracted from The VES Handbook of Visual Effects – 3rd Edition,
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES

Figure 8.1 Screen capture of Call of Duty: Ghosts. (Image courtesy of Activision/Blizzard. Copyright © 2013)


Rendering starts with a block of memory containing the positions of each model in the scene, the definitions of all the lights, and a set of parameters for the post-processing; it ends with the final colored pixels that make up the presented image. In between, a series of steps occurs, the specifics of which vary from engine to engine and with the type of rendering being used.

The starting point for a renderer is the frame buffer: the region of memory representing the canvas that each draw call resolves to. “Double-buffered rendering” is a common technique that lets the renderer draw to one canvas while the previous frame is being “presented” to the player, ensuring that if a frame takes longer to draw than the allowed time, the old frame stays on screen until rendering is resolved. Once the new frame is complete, a pointer is switched, the new frame is presented, and the now-available frame buffer is cleared.

A “depth pre-pass” is rendered without lighting or textures to give the renderer the depth at which every opaque pixel will land. The depth pre-pass serves both as an optimization and as a source for other operations, such as depth of field and depth-fading particles. Next, other pre-render calculations may occur, such as setting up the cascaded shadow maps and primary lights. From there, the scene begins to draw.
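The double-buffered swap can be sketched on the CPU side. This is a minimal illustration of the control flow only; the `FrameBuffer` and `DoubleBuffer` types here are hypothetical, not any engine's API. Drawing goes to the back buffer, and presenting is just flipping an index and clearing the buffer that becomes available:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// A canvas of final colored pixels that draw calls resolve to.
struct FrameBuffer {
    std::vector<uint32_t> pixels;
    explicit FrameBuffer(std::size_t n) : pixels(n, 0) {}
};

// Two canvases: one is presented to the player while the other is drawn.
class DoubleBuffer {
    std::array<FrameBuffer, 2> buffers_;
    int front_ = 0;  // index of the buffer currently presented
public:
    explicit DoubleBuffer(std::size_t n)
        : buffers_{FrameBuffer(n), FrameBuffer(n)} {}

    FrameBuffer&       back()        { return buffers_[1 - front_]; }
    const FrameBuffer& front() const { return buffers_[front_]; }

    // Once the new frame is complete: switch the pointer (no pixel copy),
    // then clear the now-available buffer for the next frame.
    void presentAndClear() {
        front_ = 1 - front_;
        for (auto& p : back().pixels) p = 0;
    }
};
```

If a frame misses its time budget, `presentAndClear()` is simply not called yet, so the old front buffer keeps being shown, which is exactly the stale-frame behavior described above.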

The next steps in rendering will differ, depending on whether one uses a forward or deferred renderer. For forward renderers, each object in the world will rasterize to the frame buffer, one at a time; the lighting, textures and shaders are calculated pixel by pixel, triangle by triangle, using the GPU’s vertex shader and fragment shader operations. The vertex shader largely resolves the vertices of the model from local space through world and camera transforms, and it handles all the UV transformations. The fragment shader handles all the textures, lighting and material logic. Typically, all the static opaque models are resolved in a bucket before moving on to the animated and dynamic objects. Once all the opaque models have resolved, the transparent or emissive models are drawn.
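The bucketed draw order of a forward renderer can be sketched as a simple partitioning pass. The `Model` fields here are hypothetical placeholders (real engines bucket by material and render state as well); the point is the ordering of static opaque, then dynamic opaque, then transparent/emissive:

```cpp
#include <string>
#include <vector>

// Illustrative model record; fields are placeholders, not a real engine API.
struct Model {
    std::string name;
    bool opaque;
    bool dynamic;
};

// A forward renderer rasterizes one object at a time: static opaque models
// first, then animated/dynamic opaque models, then transparent/emissive ones.
std::vector<std::string> forwardDrawOrder(const std::vector<Model>& scene) {
    std::vector<std::string> order;
    auto bucket = [&](auto pred) {
        for (const auto& m : scene)
            if (pred(m)) order.push_back(m.name);
    };
    bucket([](const Model& m) { return m.opaque && !m.dynamic; });  // static opaque
    bucket([](const Model& m) { return m.opaque && m.dynamic;  });  // dynamic opaque
    bucket([](const Model& m) { return !m.opaque; });               // transparent last
    return order;
}
```

Drawing opaque geometry front-loaded lets the depth buffer reject hidden pixels early; transparent surfaces must come last because they blend with whatever is already in the frame buffer.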

Deferred renderers work differently. They build up a series of frame buffers, each holding a different contribution: the albedo, normals, specular responses, etc., of all the objects in the scene are accumulated into independent buffers, and only then does lighting occur. For each light in the scene, its radius, compared against the depth buffer, determines the area of the frame buffer it covers and how many pixels it will touch. Those pixels are all processed to resolve the final output color; the renderer then moves on to the next light and repeats the process until every light is done. Once the entire frame buffer is drawn, the post-processing step can process the image to add color timing, contrast operations, look-up tables, vignettes and other operations that tune the look of the scene for the desired feel. Lastly, the UI is drawn, and the completed result is ready to be presented to the player.
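The per-light loop of deferred shading can be sketched on the CPU with a toy G-buffer. This is an illustrative sketch, not real GPU code: the stored positions stand in for values reconstructed from the depth buffer, grayscale albedo replaces the full set of material buffers, and a linear falloff replaces a physical lighting model.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One G-buffer texel: material data written during the geometry pass.
struct GBufferPixel {
    float albedo;    // grayscale albedo, for simplicity
    float x, y, z;   // view-space position (reconstructed from depth)
};

struct PointLight { float x, y, z, radius, intensity; };

// Lighting pass: each light touches only the pixels inside its radius,
// accumulating its contribution before moving on to the next light.
std::vector<float> deferredLighting(const std::vector<GBufferPixel>& gbuf,
                                    const std::vector<PointLight>& lights) {
    std::vector<float> out(gbuf.size(), 0.0f);
    for (const auto& L : lights) {                       // one pass per light
        for (std::size_t i = 0; i < gbuf.size(); ++i) {
            float dx = gbuf[i].x - L.x;
            float dy = gbuf[i].y - L.y;
            float dz = gbuf[i].z - L.z;
            float d = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (d > L.radius) continue;                  // outside the light's reach
            float falloff = 1.0f - d / L.radius;         // toy linear falloff
            out[i] += gbuf[i].albedo * L.intensity * falloff;
        }
    }
    return out;
}
```

The key property this models is that lighting cost scales with the screen area each light covers rather than with the number of objects, which is why deferred rendering handles many lights well.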

Frame-analysis tools such as PIX (Performance Investigator for Xbox), PS4 Razor or RenderDoc on PC can capture a frame and step through every draw call, report nanosecond-resolution GPU timings for every object rendered, and examine CPU calls to determine how much time was spent on them and how they were threaded across CPU cores.

