By DAVID JOHNSON, VES
Edited for this publication by Jeffrey A. Okun, VES
Abstracted from The VES Handbook of Visual Effects – 3rd Edition,
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES
Rendering starts with a chunk of memory that contains the position of each model in the scene, definitions for all of the lights, and a set of parameters defining the settings for the post-processing; it ends with all the final colored pixels that make up the presented image. In between, a series of steps occur, the specifics of which vary from engine to engine and with the type of rendering being used.
The starting point for a renderer is the frame buffer: a chunk of memory that represents the canvas that each draw call will resolve to. “Double-buffered rendering” is a common technique that allows the renderer to draw to one canvas of memory while the previous frame is being “presented” to the player, ensuring that if a frame takes longer to draw than the allowed time, the old frame can be maintained while rendering is resolved. Once the new frame is complete, a pointer is switched, the new frame is presented, and the now-available frame buffer is cleared (a minimal sketch of this swap appears below).
A “depth pre-pass” is rendered without lighting or textures to give the renderer a mechanism to understand the depth at which all the opaque pixels will land. The depth pre-pass serves both as an optimization and as a source for other operations such as depth of field and depth-fading particles. Next, some other pre-render calculations may occur, such as setting up the cascaded shadow maps and primary lights. From there the scene begins to draw.
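Here is the promised double-buffer sketch: a minimal CPU-side schematic in C++, not any engine’s actual API. The DoubleBuffer type and its members are invented for illustration.

#include <cstdint>
#include <vector>

// Illustrative double buffer: draw into the back canvas while the
// front canvas is being presented to the player.
struct DoubleBuffer {
    std::vector<uint32_t> buffers[2]; // two RGBA8 canvases
    int front = 0;                    // index of the buffer being presented

    explicit DoubleBuffer(std::size_t pixelCount) {
        buffers[0].assign(pixelCount, 0);
        buffers[1].assign(pixelCount, 0);
    }

    std::vector<uint32_t>& back() { return buffers[front ^ 1]; }

    // Called once the new frame has finished drawing: switch the pointer,
    // present the new frame, and clear the newly available buffer.
    void present() {
        front ^= 1;                       // the pointer switch
        back().assign(back().size(), 0);  // clear the freed-up canvas
    }
};

If a frame overruns its time budget, present() is simply not called yet, so the previously presented buffer stays on screen while rendering resolves.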
The next steps in rendering differ depending on whether one uses a forward or a deferred renderer. In a forward renderer, each object in the world rasterizes to the frame buffer one at a time; the lighting, textures, and shaders are calculated pixel by pixel, triangle by triangle, using the GPU’s vertex shader and fragment shader operations. The vertex shader largely resolves the vertices of the model from local space through the world and camera transforms, and it handles all the UV transformations. The fragment shader handles all the textures, lighting, and material logic. Typically, all the static opaque models are resolved in a bucket before moving on to the animated and dynamic objects. Once all the opaque models have resolved, the transparent or emissive models are drawn, as sketched below.
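The following C++ sketch shows the bucketing order just described. It is a schematic of the submission loop only, with invented Model and draw names; the actual per-pixel work happens in the GPU’s vertex and fragment shaders.

#include <algorithm>
#include <vector>

struct Model { bool opaque; bool dynamic; float viewDepth; };

// Illustrative forward-rendering submission: static opaque models first,
// then animated/dynamic opaques, then transparent/emissive models last.
void SubmitForward(const std::vector<Model>& scene) {
    auto draw = [](const Model&) {
        // Vertex shader: local space -> world -> camera transforms, UVs.
        // Fragment shader: textures, lighting, material logic.
    };

    for (const Model& m : scene)
        if (m.opaque && !m.dynamic) draw(m);  // static opaque bucket
    for (const Model& m : scene)
        if (m.opaque && m.dynamic) draw(m);   // animated/dynamic bucket

    // Transparent models blend against pixels already in the frame buffer,
    // so they are drawn after all opaques, typically back to front.
    std::vector<Model> transparent;
    for (const Model& m : scene)
        if (!m.opaque) transparent.push_back(m);
    std::sort(transparent.begin(), transparent.end(),
              [](const Model& a, const Model& b) { return a.viewDepth > b.viewDepth; });
    for (const Model& m : transparent) draw(m);
}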
Deferred renderers work differently. They build up a series of frame buffers, each holding a different contribution: the albedo, normals, specular responses, etc., of all the objects in the scene are accumulated into independent buffers, and only then does lighting occur. For each light in the scene, its radius, tested against the depth buffer, determines the area of the frame buffer it covers and thus how many pixels it will hit. Those pixels are all processed to resolve the final output color; the renderer then moves on to the next light and repeats the process until all lights are complete (the sketch below mimics this per-light loop). Once the entire frame buffer is drawn, the post-processing step can process the image to add color timing, contrast operations, look-up tables, vignettes, and other operations that tune the look of the scene for the desired feel. Lastly, the UI is drawn, and the completed result is ready to be presented to the player.
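To make the deferred split concrete, here is a brute-force CPU sketch of a G-buffer and a per-light loop. Real engines rasterize each light’s volume on the GPU to find the affected pixels; the structures and names here (GBufferPixel, PointLight, LightingPass) are invented for illustration.

#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative G-buffer entry, one per pixel, filled during the geometry
// pass. Normals, specular responses, etc. would be further buffers; each
// field lives in its own independent buffer in a real deferred renderer.
struct GBufferPixel {
    float albedo[3];
    float worldPos[3];  // reconstructed from the depth buffer in practice
};

struct PointLight { float pos[3]; float radius; float color[3]; };

// Lighting pass: for each light, process only the pixels that fall inside
// its radius, accumulating the final output color, then move on to the
// next light until all lights are complete.
void LightingPass(const std::vector<GBufferPixel>& gbuf,
                  const std::vector<PointLight>& lights,
                  std::vector<float>& outRGB) { // 3 floats per pixel
    for (const PointLight& L : lights) {
        for (std::size_t i = 0; i < gbuf.size(); ++i) {
            const GBufferPixel& p = gbuf[i];
            float dx = p.worldPos[0] - L.pos[0];
            float dy = p.worldPos[1] - L.pos[1];
            float dz = p.worldPos[2] - L.pos[2];
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist >= L.radius) continue;       // pixel outside the light
            float atten = 1.0f - dist / L.radius; // simple linear falloff
            for (int c = 0; c < 3; ++c)
                outRGB[i * 3 + c] += p.albedo[c] * L.color[c] * atten;
        }
    }
}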
Debug tools such as PIX (Performance Investigator for Xbox), Razor for PS4, and RenderDoc for PC are frame-analysis tools that can capture a frame and step through every draw call, report nanosecond-resolution GPU timings for every object rendered, and examine CPU calls to determine how much time was spent on them and how the work was threaded across CPU cores.
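RenderDoc, mentioned above, also exposes an in-application API for triggering captures from code. The sketch below is a rough, Linux-flavored example based on RenderDoc’s documented renderdoc_app.h header; it assumes the application was launched under RenderDoc so the library is already loaded in the process.

#include "renderdoc_app.h"  // ships with RenderDoc
#include <dlfcn.h>          // Linux; use GetModuleHandle/GetProcAddress on Windows

static RENDERDOC_API_1_1_2* rdoc = nullptr;

void InitRenderDoc() {
    // Only succeeds if librenderdoc.so is already injected into the process.
    if (void* mod = dlopen("librenderdoc.so", RTLD_NOW | RTLD_NOLOAD)) {
        auto getApi = (pRENDERDOC_GetAPI)dlsym(mod, "RENDERDOC_GetAPI");
        if (getApi)
            getApi(eRENDERDOC_API_Version_1_1_2, (void**)&rdoc);
    }
}

void RenderFrame() {
    if (rdoc) rdoc->StartFrameCapture(nullptr, nullptr);
    // ... issue all draw calls for this frame ...
    if (rdoc) rdoc->EndFrameCapture(nullptr, nullptr);
}

The captured frame can then be opened in RenderDoc’s UI to step through each draw call and inspect its timings.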