By JOHN ROOT
Edited for this publication by Jeffrey A. Okun, VES
Abstracted from The VES Handbook of Visual Effects – 3rd Edition
Edited by Jeffrey A. Okun, VES and Susan Zwerman, VES
Real-time can be a very useful tool on the motion capture stage. Its uses include the following:
Character Identification
It can be invaluable for actors to see their performance as it will look on the character they are portraying. The actor can work with the director to account for the differences between themselves and the character and incorporate those differences into the performance.
Virtual Cinematography
By deploying a virtual camera, the director or DP can fly around the 3D set and experiment with camera placement, composition, and lighting. The freedom of the virtual environment means that cameras can be placed and moved in ways not possible on a live-action set. In fact, the set itself can be rearranged near-instantaneously: composition, lighting, color, time of day, and a variety of other factors remain under the director's control while the performance takes place.
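As a rough illustration of what a virtual camera amounts to under the hood, the sketch below builds a view matrix from a freely chosen eye position and point of interest each frame. The look_at helper, the coordinate convention, and the numbers are all hypothetical; real virtual camera rigs are driven by tracked hardware inside a game engine.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a simple right-handed view matrix for a virtual camera
    (hypothetical helper; conventions vary by engine)."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = view[:3, :3] @ -eye     # move the world opposite the camera
    return view

# The operator "flies" the camera simply by updating these two points per frame.
eye = np.array([2.0, 1.5, 4.0])      # camera position anywhere in the virtual set
target = np.array([0.0, 1.0, 0.0])   # point of interest (e.g. the digital character)
print(look_at(eye, target))
```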
Character Integration
A motion capture stage is often sparse. For this reason, it can be useful for actors to see themselves in the virtual environment, which gives them a sense of space and an awareness of their surroundings.
Live Performances
Aside from the many benefits real-time affords to motion capture and virtual production, it can also be used as a final product. Real-time puppeteering of characters and stage performances is becoming very common.
Real-Time Limitations
Real-time motion capture is an emerging field, and for now significant compromises must be made to achieve it.
Line of Sight
Optical motion capture is computer vision based, so the cameras must have an unobstructed view of the markers. In post, the tracking team routinely cleans up artifacts caused by cameras losing line of sight or markers being occluded. Because real-time MoCap forgoes that post-processing, it is common to reduce the number of actors, limit the props, and constrain the performance to maximize marker visibility.
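As a rough sketch of why occlusion is so costly in real time: an optical system can only triangulate a marker seen by at least two cameras in a given frame. The input format and marker names below are hypothetical; the point is simply that any marker falling under that threshold becomes a gap which, offline, the tracking team would later repair.

```python
def visible_marker_ids(camera_views, min_cameras=2):
    """camera_views: dict of camera_id -> set of marker ids that camera
    currently detects (hypothetical input format). A marker needs at least
    `min_cameras` unobstructed views to be triangulated this frame."""
    counts = {}
    for seen in camera_views.values():
        for marker_id in seen:
            counts[marker_id] = counts.get(marker_id, 0) + 1
    return {m for m, n in counts.items() if n >= min_cameras}

# Example: a prop blocks the left knee marker from all but one camera.
views = {"cam01": {"LSHO", "LKNE"}, "cam02": {"LSHO"}, "cam03": {"LSHO"}}
print(visible_marker_ids(views))   # {'LSHO'} -- LKNE is a real-time gap
```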
Markers
Trying to make sense of the markers on a stage in real time is daunting. Beyond simply using fewer markers, introducing asymmetry into the marker placement makes them easier to identify: when the labeler measures the distances between markers, asymmetric placement keeps those distances distinct, so it is less likely to get confused and swap markers. An asymmetrical marker placement, however, may be less than ideal for solving.
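The miniature sketch below shows the distance-matching idea with a hypothetical three-marker prop and made-up distances: labels are assigned by matching observed pairwise distances against a model, which only works cleanly when those distances are distinct, hence the value of asymmetric placement.

```python
import itertools
import numpy as np

# Expected inter-marker distances (cm) for a three-marker prop. Asymmetric
# placement makes every pairwise distance distinct. (Illustrative values.)
MODEL = {("A", "B"): 12.0, ("A", "C"): 16.0, ("B", "C"): 20.0}
LABELS = ["A", "B", "C"]

def label_markers(points):
    """Assign labels to observed points by brute-force matching of pairwise
    distances against MODEL. Fine for a handful of markers; real labelers
    are far more sophisticated. If two model distances were (nearly) equal,
    two assignments would score (nearly) the same and swaps become likely."""
    best, best_err = None, float("inf")
    for perm in itertools.permutations(range(len(points))):
        err = 0.0
        for (a, b), expected in MODEL.items():
            pa = points[perm[LABELS.index(a)]]
            pb = points[perm[LABELS.index(b)]]
            err += abs(np.linalg.norm(pa - pb) - expected)
        if err < best_err:
            best, best_err = {l: perm[i] for i, l in enumerate(LABELS)}, err
    return best

pts = np.array([[16.0, 0, 0], [0.0, 0, 0], [0.0, 12.0, 0]])
print(label_markers(pts))   # {'A': 1, 'B': 2, 'C': 0}
```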
Solving
Some methods of solving are very processor intensive. Solving methods currently possible in real time often involve approximations and limitations, so the resulting solve does not look as good as an offline solve. Given this, plus the compromises introduced by modified marker placement and altered performances, what comes through in the real-time previsualization may not be indicative of the final quality of the animation.
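One illustrative real-time-friendly shortcut (an assumption for the sake of example, not the method any particular system uses) is a closed-form rigid fit of each body segment's marker cluster, sketched below. It is fast enough to run per frame, but it treats each segment in isolation and ignores joint limits and other constraints a slower offline solve can enforce.

```python
import numpy as np

def rigid_fit(local_pts, world_pts):
    """Closed-form best-fit rigid transform (Kabsch) mapping a segment's
    markers from their calibrated local positions to the observed world
    positions. Returns (R, t) such that world ~= R @ local + t."""
    lc, wc = local_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (local_pts - lc).T @ (world_pts - wc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ lc
    return R, t

# Example: three forearm markers observed after a 90-degree turn plus a shift.
local = np.array([[0.0, 0, 0], [10.0, 0, 0], [0.0, 5.0, 0]])
Rz = np.array([[0.0, -1, 0], [1.0, 0, 0], [0.0, 0, 1]])
world = local @ Rz.T + np.array([1.0, 2.0, 0.0])
R, t = rigid_fit(local, world)
print(np.round(R, 3), np.round(t, 3))   # recovers Rz and (1, 2, 0)
```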
Visualization
Real-time rendering technology is catching up with offline rendering technology at a rapid pace. Some of the games on the shelves look nearly as good as the movies in the theaters. Using real-time game rendering technology to previs a film is common practice, but one that requires significant investment. Expect that real-time rendered characters and environments will come with compromises to visual quality, most notably in lighting and lookdev (look development).