VFX Voice



January 10, 2022

Winter 2022 Issue

VES HANDBOOK: Game Engines and Real-Time Rendering

By DAVID JOHNSON, VES

Edited for this publication by Jeffrey A. Okun, VES 

Abstracted from The VES Handbook of Visual Effects – 3rd Edition,
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES

Figure 8.1 Screen capture of Call of Duty: Ghosts. (Image courtesy of Activision/Blizzard. Copyright © 2013)


Rendering starts with a chunk of memory that contains the positions of each model in the scene, definitions for all of the lights in the scene and a set of parameters defining the settings for the post-processing, and it ends with all the final colored pixels that make up the presented image. In the middle of this process, a series of steps occur, the specifics of which vary from engine to engine as well as with the type of rendering being used. The starting point for a renderer is the frame buffer: a chunk of memory that represents the canvas that each draw call will resolve to. “Double-buffered rendering” is a common technique that allows the renderer to draw to one canvas of memory while the previous frame is being “presented” to the player – ensuring that if a frame takes longer to draw than the allowed time, the old frame can remain on screen while rendering is resolved. Once the new frame is complete, a pointer is switched, the new frame is presented and the now-available frame buffer is cleared.

A “depth pre-pass” is rendered without lighting or textures to give the renderer a mechanism for understanding the depth at which all the opaque pixels will land. The depth pre-pass is used both as an optimization and as a source for other operations such as depth of field and depth-fading particles. Next, some other pre-render calculations may occur, such as setting up the cascaded shadow maps and primary lights. From there the scene begins to draw.
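As a rough illustration of the double-buffered flip described above, the following C++ sketch keeps two frame buffers and swaps which one is presented once a frame finishes drawing. All of the types and names here (FrameBuffer, DoubleBufferedRenderer, presentAndFlip) are hypothetical stand-ins, not taken from any particular engine.

    // Minimal sketch of a double-buffered frame loop (hypothetical API).
    // One buffer is being presented to the player while the other is drawn into.
    #include <array>
    #include <cstdint>

    struct FrameBuffer {
        std::array<uint32_t, 1920 * 1080> pixels{};   // the final colored pixels
        void clear(uint32_t color) { pixels.fill(color); }
    };

    class DoubleBufferedRenderer {
    public:
        // The canvas the renderer may draw into this frame.
        FrameBuffer& backBuffer() { return buffers[1 - presentedIndex]; }

        // Once the new frame is complete: switch the pointer so the new frame
        // is presented, then clear the now-available buffer for reuse.
        void presentAndFlip() {
            presentedIndex = 1 - presentedIndex;          // present the new frame
            buffers[1 - presentedIndex].clear(0x000000u); // clear the freed canvas
        }

        // If a frame misses its time budget, simply skip presentAndFlip();
        // the previously presented buffer stays on screen until drawing resolves.
    private:
        std::array<FrameBuffer, 2> buffers{};
        int presentedIndex = 0;
    };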

The next steps in rendering will differ, depending on whether one uses a forward or deferred renderer. For forward renderers, each object in the world will rasterize to the frame buffer, one at a time; the lighting, textures and shaders are calculated pixel by pixel, triangle by triangle, using the GPU’s vertex shader and fragment shader operations. The vertex shader largely resolves the vertices of the model from local space through world and camera transforms, and it handles all the UV transformations. The fragment shader handles all the textures, lighting and material logic. Typically, all the static opaque models are resolved in a bucket before moving on to the animated and dynamic objects. Once all the opaque models have resolved, the transparent or emissive models are drawn.
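The bucketing of draws in a forward renderer might be sketched as follows. Model, RenderContext and drawModel are illustrative placeholders only; a real engine would sort, cull and batch far more aggressively.

    // Minimal sketch of forward-rendering draw order (hypothetical types).
    #include <vector>

    struct Model { bool isStatic; bool isOpaque; };
    struct RenderContext { /* frame buffer, camera, light list, ... */ };

    // Stand-in for issuing the GPU draw call: vertex shader resolves the
    // model's vertices through world and camera transforms, fragment shader
    // handles textures, lighting and material logic.
    void drawModel(RenderContext&, const Model&) {}

    void forwardRenderScene(RenderContext& ctx, const std::vector<Model>& scene) {
        // 1. Static opaque models are resolved in a bucket first.
        for (const Model& m : scene)
            if (m.isOpaque && m.isStatic) drawModel(ctx, m);

        // 2. Animated and dynamic opaque models next.
        for (const Model& m : scene)
            if (m.isOpaque && !m.isStatic) drawModel(ctx, m);

        // 3. Transparent and emissive models last, blending over the resolved opaques.
        for (const Model& m : scene)
            if (!m.isOpaque) drawModel(ctx, m);
    }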

Deferred renderers work differently. They build up a series of frame buffers, each with a different contribution. The albedo, normal, specular responses, etc., of all the objects in the scene are accumulated into independent buffers; then lighting can occur. For each light in the scene, its radius compared against the depth buffer determines the area of the frame buffer it affects and how many pixels will be hit. Those pixels are all processed to resolve the final output color; the renderer then moves on to the next light and repeats the process until complete. Once the entire frame buffer is drawn, the post-processing step can process the image to add color timing, contrast operations, look-up tables, vignettes and other operations that tune the look of the scene for the desired feel. Lastly, the UI is drawn, and the completed results are ready to be presented to the player.
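The pass structure of a deferred renderer can be outlined in the same spirit. Again, the types and function names below (GBuffer, geometryPass, lightingPass, postProcess, deferredRenderFrame) are hypothetical placeholders for an engine's G-buffer layout, light list and post-processing chain.

    // Minimal sketch of a deferred renderer's pass structure (hypothetical names).
    #include <vector>

    struct GBuffer { /* albedo, normal, specular and depth targets */ };
    struct Light   { float radius = 0.0f; /* plus position, color, ... */ };
    struct Image   { /* final color render target */ };

    // Rasterize every object once, accumulating albedo, normal and specular
    // response into the independent G-buffer targets.
    void geometryPass(GBuffer& /*gbuf*/ /*, scene */) {}

    // For each light, compare its radius against the depth buffer to find the
    // screen-space region of pixels it can reach, then shade only those pixels.
    void lightingPass(const GBuffer& /*gbuf*/, const std::vector<Light>& lights,
                      Image& /*frame*/) {
        for (const Light& light : lights) {
            (void)light;  // accumulate this light's contribution, then move on
        }
    }

    // Color timing, contrast operations, look-up tables, vignette, etc.
    void postProcess(Image& /*frame*/) {}

    void deferredRenderFrame(GBuffer& gbuf, const std::vector<Light>& lights,
                             Image& frame) {
        geometryPass(gbuf);                 // build up the G-buffers
        lightingPass(gbuf, lights, frame);  // resolve lighting, light by light
        postProcess(frame);                 // tune the look of the scene
        // drawUI(frame);                   // UI is drawn last, then presented
    }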

Debug tools such as PIX (Performance Investigator for Xbox), PS4 Razor or RenderDoc for PC are frame analysis tools that can capture a frame and step through every draw call, report nanosecond-level GPU timings for every object rendered, and examine CPU calls to determine how much time was spent on them and how they were threaded across CPU cores.


