VFX Voice

The award-winning definitive authority on all things visual effects in the world of film, TV, gaming, virtual reality, commercials, theme parks, and other new media.

Winner of three prestigious Folio Awards for excellence in publishing.



January 10, 2022

Issue: Winter 2022

VES HANDBOOK: Game Engines and Real-Time Rendering

By DAVID JOHNSON, VES

Edited for this publication by Jeffrey A. Okun, VES 

Abstracted from The VES Handbook of Visual Effects – 3rd Edition,
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES

Figure 8.1 Screen capture of Call of Duty: Ghosts. (Image courtesy of Activision/Blizzard. Copyright © 2013)


Rendering starts with a chunk of memory that contains the positions of each model in the scene, definitions for all of the lights, and a set of parameters defining the settings for the post-processing; it ends with the final colored pixels that make up the presented image. In between, a series of steps occur, the specifics of which vary from engine to engine and depend on the type of rendering being used. The starting point for a renderer is the frame buffer: a chunk of memory that represents the canvas that each draw call resolves to. “Double-buffered rendering” is a common technique that allows the renderer to draw to one canvas of memory while the previous frame is being “presented” to the player – ensuring that if a frame takes longer to draw than the allowed time, the old frame can be maintained while rendering is resolved. Once the new frame is complete, a pointer is switched, the new frame is presented and the available frame buffer is cleared.

A “depth pre-pass” is rendered without lighting or textures to give the renderer a mechanism to understand the depth at which all the opaque pixels will land. The depth pre-pass serves both as an optimization and as a source for other operations such as depth of field and depth-fading particles. Next, other pre-render calculations may occur, such as setting up the cascaded shadow maps and primary lights. From there the scene begins to draw.

The next steps in rendering will differ, depending on whether one uses a forward or deferred renderer. For forward renderers, each object in the world will rasterize to the frame buffer, one at a time; the lighting, textures and shaders are calculated pixel by pixel, triangle by triangle, using the GPU’s vertex shader and fragment shader operations. The vertex shader largely resolves the vertices of the model from local space through world and camera transforms, and it handles all the UV transformations. The fragment shader handles all the textures, lighting and material logic. Typically, all the static opaque models are resolved in a bucket before moving on to the animated and dynamic objects. Once all the opaque models have resolved, the transparent or emissive models are drawn.

Deferred renderers work differently. They build up a series of frame buffers, each with a different contribution. The albedo, normal, specular responses, etc., of all the objects in the scene are accumulated into independent buffers; then lighting can occur. For each light in the scene, its radius, tested against the depth buffer, determines which area of the frame buffer, and therefore how many pixels, will be hit. Those pixels are all processed to resolve the final output color; the renderer then moves on to the next light and repeats the process until complete. Once the entire frame buffer is drawn, the post-processing step can then process the image to add color timing, contrast operations, look-up tables, vignettes and other operations to tune the look of the scene for the desired feel. Lastly, the UI is drawn, and the completed results are ready to be presented to the player.

Debug tools such as PIX (Performance Investigator for Xbox), PS4 Razor or RenderDoc for PC are frame-analysis tools that can capture a frame and step through every draw call, provide nanosecond-level GPU timings for every object rendered, and examine CPU calls to determine how much time was spent on them and how they were threaded across CPU cores.

