VFX Voice

The award-winning definitive authority on all things visual effects in the world of film, TV, gaming, virtual reality, commercials, theme parks, and other new media.

Winner of three prestigious Folio Awards for excellence in publishing.



January 10, 2022
ISSUE: Winter 2022

VES HANDBOOK: Game Engines and Real-Time Rendering

By DAVID JOHNSON, VES

Edited for this publication by Jeffrey A. Okun, VES 

Abstracted from The VES Handbook of Visual Effects – 3rd Edition,
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES

Figure 8.1 Screen capture of Call of Duty: Ghosts. (Image courtesy of Activision/Blizzard. Copyright © 2013)


Rendering starts with a chunk of memory that contains the positions of each model in the scene, definitions for all of the lights in the scene and a set of parameters defining the settings for the post-processing, and it ends with all the final colored pixels that make up the presented image. In between, a series of steps occur, the specifics of which vary from engine to engine and with the type of rendering being used.

The starting point for a renderer is the frame buffer: a chunk of memory that represents the canvas that each draw call will resolve to. “Double-buffered rendering” is a common technique that allows the renderer to draw to one canvas of memory while the previous frame is being “presented” to the player – ensuring that if a frame takes longer to draw than the allowed time, the old frame can be maintained on screen while rendering is resolved. Once the new frame is complete, a pointer is switched, the new frame is presented and the now-available frame buffer is cleared.

A “depth pre-pass” is rendered without lighting or textures to give the renderer a mechanism to understand the depth at which all the opaque pixels will land. The depth pre-pass serves both as an optimization and as a source for other operations such as depth of field and depth-fading particles. Next, some other pre-render calculations may occur, such as setting up the cascaded shadow maps and primary lights. From there the scene begins to draw.
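The double-buffered presentation scheme described above can be sketched as a toy model (real engines swap GPU memory pointers; the class and member names here are purely illustrative):

```python
class DoubleBuffer:
    """Toy double buffer: draw into the back canvas while the front is presented."""

    def __init__(self, width, height):
        # Two canvases of pixel memory; one is shown, one is drawn to.
        self._buffers = [[0] * (width * height) for _ in range(2)]
        self._front = 0  # index of the buffer currently presented to the player

    @property
    def back(self):
        # The canvas that the current frame's draw calls resolve to.
        return self._buffers[1 - self._front]

    @property
    def front(self):
        # The canvas currently presented; untouched while drawing continues.
        return self._buffers[self._front]

    def present(self):
        # Only once the new frame is complete: switch the pointer,
        # then clear the newly available buffer for the next frame.
        self._front = 1 - self._front
        back = self.back
        for i in range(len(back)):
            back[i] = 0


fb = DoubleBuffer(4, 4)
fb.back[0] = 0xFF0000  # "draw" a red pixel into the back buffer
fb.present()           # the drawn frame becomes the presented frame
```

If a frame runs long, the renderer simply delays calling `present()`, so the player keeps seeing the old, complete frame rather than a partial one.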

The next steps in rendering will differ, depending on whether one uses a forward or deferred renderer. For forward renderers, each object in the world will rasterize to the frame buffer, one at a time; the lighting, textures and shaders are calculated pixel by pixel, triangle by triangle, using the GPU’s vertex shader and fragment shader operations. The vertex shader largely resolves the vertices of the model from local space through world and camera transforms, and it handles all the UV transformations. The fragment shader handles all the textures, lighting and material logic. Typically, all the static opaque models are resolved in a bucket before moving on to the animated and dynamic objects. Once all the opaque models have resolved, the transparent or emissive models are drawn.
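The forward path above can be reduced to a toy CPU-side sketch (the dictionary keys and rasterization shortcut are assumptions for illustration; a real forward renderer does this per triangle on the GPU):

```python
def forward_render(objects, lights, width, height):
    """Toy forward pass: each object rasterizes in turn, and lighting for
    every covered pixel is computed in that same draw, over every light."""
    frame = [(0.0, 0.0, 0.0)] * (width * height)
    depth = [float("inf")] * (width * height)  # stands in for the depth pre-pass
    for obj in objects:                  # one draw call per object
        for px, z in obj["pixels"]:      # pixels the rasterizer covers
            if z >= depth[px]:
                continue                 # occluded: rejected by the depth test
            depth[px] = z
            r, g, b = obj["albedo"]
            # Fragment-shader step: every light is accumulated per pixel.
            lit = sum(light["intensity"] for light in lights)
            frame[px] = (r * lit, g * lit, b * lit)
    return frame
```

Note that every shaded pixel loops over every light, which is why forward renderers become expensive as light counts grow; this is the cost that deferred rendering restructures.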

Deferred renderers work differently. They build up a series of frame buffers, each with a different contribution. The albedo, normal, specular responses, etc., of all the objects in the scene are accumulated into independent buffers; only then does lighting occur. For each light in the scene, its radius compared against the depth buffer determines the area of the frame buffer it touches and how many pixels will be hit. Those pixels are all processed to resolve the final output color; the renderer then moves on to the next light and repeats the process until complete. Once the entire frame buffer is drawn, the post-processing step can then process the image to add color timing, contrast operations, look-up tables, vignettes and other operations that tune the look of the scene for the desired feel. Lastly, the UI is drawn, and the completed results are ready to be presented to the player.
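The two-phase structure of a deferred renderer can be sketched the same way (a toy model; the buffer names and the precomputed `covered_pixels` list are illustrative assumptions standing in for the light-radius-versus-depth-buffer test):

```python
def geometry_pass(objects, width, height):
    """Toy G-buffer fill: geometry writes material data with no lighting."""
    gbuffer = {
        "albedo": [(0.0, 0.0, 0.0)] * (width * height),
        "depth":  [float("inf")] * (width * height),
    }
    for obj in objects:
        for px, z in obj["pixels"]:
            if z < gbuffer["depth"][px]:       # nearest surface wins
                gbuffer["depth"][px] = z
                gbuffer["albedo"][px] = obj["albedo"]
    return gbuffer


def lighting_pass(gbuffer, lights):
    """Toy lighting: one screen-space pass per light, accumulated additively."""
    frame = [(0.0, 0.0, 0.0)] * len(gbuffer["depth"])
    for light in lights:                        # process one light at a time
        for px in light["covered_pixels"]:      # pixels inside the light's reach
            if gbuffer["depth"][px] == float("inf"):
                continue                        # nothing was drawn here
            r, g, b = gbuffer["albedo"][px]
            cr, cg, cb = frame[px]
            i = light["intensity"]
            frame[px] = (cr + r * i, cg + g * i, cb + b * i)
    return frame
```

The key contrast with the forward sketch: material evaluation happens once per pixel regardless of light count, and each light only touches the pixels it can actually reach.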

Debug tools such as PIX (Performance Investigator for Xbox), PS4 Razor or RenderDoc for PC are frame analysis tools that can capture a frame and let the user step through every draw call, get nanosecond GPU timings on every object rendered, and examine CPU calls to determine how much time was spent on them and how they were threaded across CPU cores.

