To do that, Vincent explored re-touching the live-action photography. He was looking at the Romantic painter Caspar David Friedrich’s work for inspiration, particularly the painting ‘Two Men Contemplating the Moon.’ Right at that time I’d been working with Kodak’s Cineon software, and they’d just released Cinespeed, which was the first optical flow re-timer. From that, we came up with this wacky idea of doing ‘machine vision’ tracking of entire plates. We did some tests initially with painterly filters, which looked good on still frames, but they became a flickering mess when applied to a series of images. The idea became, why don’t we try to use optical flow to drive a paint system?
Chris and Albert (Cuba Gooding Jr.) come across the Purple Tree, matching a newly painted tree crafted by Chris’s wife, Annie. Digital Domain ‘grew’ the tree with L-systems techniques. The studio was one of several vendors on What Dreams May Come, along with principal vendor Mass.Illusions, as well as POP Film, CIS Hollywood, Radium, Illusion Arts, Mobility, Giant Killer Robots, Shadowcaster and Cinema Production Services. Overall Visual Effects Supervisor Ellen Somers oversaw the production.
VFX Voice: Can you explain what optical flow meant, in terms of the way you wanted to use it?
Brooks: Optical flow comes from machine vision. If you’re putting an autonomous robot on the surface of the moon or Mars, you give it a vision system with a minimum of two, possibly three, possibly five cameras that view the world from slightly different angles. In a way, it’s similar to the way humans, with a stereo pair of optics, i.e., eyes, see the world and are able to understand depth through convergence.
So if you’ve got a robot and you’ve got these cameras, then you need to write the perceptual brain or the perceptual program that works out the differences between these views and creates an idea of depth. Basically, it tries to match each frame to the other frame and gives you a shift ‘per pixel.’ That’s called a vector field, or an optical-flow vector field.
In our world, where we’ve got one camera, you create a vector field that allows you to track pixels from frame one to frame two. If you’re panning to the right for instance, you will get a set of vectors that show the features of that world as values for each of those pixels in terms of x and y shifts.
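As a rough illustration of that per-pixel vector field, here is a minimal sketch in Python using OpenCV’s Farneback dense flow, a modern stand-in for the Cineon/Cinespeed tools Brooks describes; the frame file names and the sample pixel are hypothetical.

```python
import cv2

# Two consecutive frames of a plate (hypothetical file names), as grayscale.
prev_gray = cv2.cvtColor(cv2.imread("plate_0001.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("plate_0002.png"), cv2.COLOR_BGR2GRAY)

# Dense optical flow: one (dx, dy) vector per pixel.
# Positional args: prev, next, flow, pyr_scale, levels, winsize,
#                  iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Follow one pixel from frame one to frame two.
x, y = 320, 240
dx, dy = flow[y, x]                      # flow is indexed [row, column]
print(f"pixel ({x},{y}) moves to about ({x + dx:.1f}, {y + dy:.1f})")
```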
What we realized was that this meant we could generate a paint stroke for every pixel of the image, and we could transform that paint stroke to the next image and to the next image after that, and so on, because the camera is moving and we are basically generating a set of pixels. Then what we needed was a particle system that would take optical flow as transformation information. None of the existing tools had just the right kind of image processing, so we knew we had to build it ourselves.
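The particle tool Digital Domain built is proprietary, but the core idea of taking optical flow as transformation information can be sketched in a few lines; the function name, array shapes and the 1920x1080 plate size below are assumptions for illustration.

```python
import numpy as np

def advect_strokes(strokes_xy, flow):
    """Move paint-stroke anchor points by the optical-flow vector under each one.

    strokes_xy : (N, 2) float array of (x, y) stroke positions in frame t
    flow       : (H, W, 2) array of per-pixel (dx, dy) shifts from frame t to t+1
    Returns the stroke positions for frame t+1.
    """
    h, w = flow.shape[:2]
    # Sample the flow field at each stroke's (rounded, clamped) pixel position.
    xs = np.clip(strokes_xy[:, 0].round().astype(int), 0, w - 1)
    ys = np.clip(strokes_xy[:, 1].round().astype(int), 0, h - 1)
    return strokes_xy + flow[ys, xs]

# One stroke per pixel of a (hypothetical) 1920x1080 plate, as described above.
xx, yy = np.meshgrid(np.arange(1920, dtype=float), np.arange(1080, dtype=float))
strokes = np.stack([xx.ravel(), yy.ravel()], axis=1)
# strokes = advect_strokes(strokes, flow)   # repeat for each frame of the shot
```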
The original plate for a scene of Chris and Katie exploring the afterlife.
The final shot was enabled by extensive development of tracking techniques, optical flow and a specialized particle tool to produce the painterly effects.
VFX Voice: How was this developed further for What Dreams May Come?
Brooks: The first thing we did was get the film studio to give us some money to test this, and we went out and shot some footage of a guy walking through a forest in South Carolina. We selected two shots from that, and we hired a programmer, Pierre Jasmin, who had a particle system he’d written. Pierre was one of the original programmers at Discreet Logic, and he went on to co-found RE:Vision Effects.
Pierre had written a particle system that would take an image and analyze the color per pixel. Then what we did was generate paint strokes. We physically painted a bunch of paint strokes that had white, blue and red paint mixed into them. So you can imagine these slightly Monet-like strokes of different shapes and sizes in which you could see the three pigments. We scanned them all in and we used the color of the pixels – the white, blue and red channels – to drive the movement. For example, on the photography, let’s say it was green grass; it would look at that green pixel and go, okay, the base color is green, and it would do some variations based on that.
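The exact rules inside Pierre’s system aren’t spelled out in the interview; as one hedged sketch of the base-color idea, a stroke placed over green grass can sample that green and drift slightly around it, so neighbouring strokes come out as different greens rather than a flat fill. The function name and the amount of variation are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stroke_colors(plate, strokes_xy, variation=0.08):
    """Pick a color for each paint stroke from the plate, with small variations.

    plate      : (H, W, 3) float RGB image of the live-action photography (0..1)
    strokes_xy : (N, 2) stroke positions in pixels
    variation  : amount of random per-stroke drift around the base color
    """
    h, w = plate.shape[:2]
    xs = np.clip(strokes_xy[:, 0].round().astype(int), 0, w - 1)
    ys = np.clip(strokes_xy[:, 1].round().astype(int), 0, h - 1)
    base = plate[ys, xs]                                   # base color under each stroke
    jitter = rng.normal(0.0, variation, size=base.shape)   # per-stroke variation
    return np.clip(base + jitter, 0.0, 1.0)
```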
In Pierre’s system we generated layers of particles, driven by optical flow, using these paint strokes. We had rules for orientation, depth, and all sorts of different variations. In essence, we would apply a traditional painter’s algorithm, i.e., the way you might paint from the background to the foreground, in terms of how we would paint the sky and then the horizon. And we essentially segmented the image into different alpha channels, so when you look at the image, it would be made up of maybe 10 different maps at different depths.
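The layered, back-to-front compositing Brooks describes is essentially the classic painter’s algorithm; a minimal sketch, assuming each depth segment has already been separated into its own color-plus-alpha layer, might look like this.

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front 'painter's algorithm' composite.

    layers : list of (rgb, alpha) tuples ordered from the farthest layer
             (e.g. sky) to the nearest (e.g. foreground grass), where
             rgb is (H, W, 3) and alpha is (H, W, 1), both in the 0..1 range.
    """
    h, w = layers[0][0].shape[:2]
    canvas = np.zeros((h, w, 3))
    for rgb, alpha in layers:       # paint far layers first, near layers over them
        canvas = rgb * alpha + canvas * (1.0 - alpha)
    return canvas
```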
What Dreams May Come was basically greenlit on the back of this test. Everybody realized it was possible that we could actually film in amazing locations and transform that footage into a moving painting without it looking kitsch, or CG, or over-processed.
Interestingly, we did this What Dreams May Come test – which was really successful – and right after that we did the test for bullet time using a similar technique, but without the particles. Bullet time was more about frame interpolation, where we set up all these multiple cameras and interpolated across the different camera views.
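A very simplified version of flow-based frame interpolation between two neighbouring views can be written as a warp-and-blend; this sketch ignores occlusions and is not the actual bullet-time pipeline, and the function and parameter names are hypothetical.

```python
import cv2
import numpy as np

def interpolate(frame_a, frame_b, flow_ab, t=0.5):
    """Crude flow-based in-between of two neighbouring camera views.

    flow_ab : (H, W, 2) optical flow from frame_a to frame_b.
    Warps frame_a forward by t of the flow and frame_b backward by the
    remainder, then blends them. Real interpolators also handle occlusions.
    """
    h, w = flow_ab.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # remap() pulls pixels from the source, so the flow is subtracted/added here.
    map_a = np.stack([grid_x - t * flow_ab[..., 0],
                      grid_y - t * flow_ab[..., 1]], axis=-1).astype(np.float32)
    map_b = np.stack([grid_x + (1 - t) * flow_ab[..., 0],
                      grid_y + (1 - t) * flow_ab[..., 1]], axis=-1).astype(np.float32)
    warped_a = cv2.remap(frame_a, map_a, None, cv2.INTER_LINEAR)
    warped_b = cv2.remap(frame_b, map_b, None, cv2.INTER_LINEAR)
    return cv2.addWeighted(warped_a, 1 - t, warped_b, t, 0)
```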
VFX Voice: Once you were in production, what things were being done on set to help with the optical flow process later on?
Brooks: We’d also been developing the use of Lidar for visual effects production. We were Lidar scanning the landscapes and getting tracking information and spatial information for the environment, which helped us generate depth maps and 3D data. This seems trivial today, but at the time we were absolutely using Lidar in a way that it hadn’t been used before. At that point it was still used mostly for engineering. We were scanning trees – all sorts of stuff – and learning how to segment all that point-value information, surface it, and abbreviate it so that we could use it in our paintings.
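Reducing (“abbreviating”) a dense Lidar scan so it can feed a paint or tracking pipeline can be sketched today as a simple voxel-grid decimation; this is a modern NumPy-style illustration, not the tools that existed at the time.

```python
import numpy as np

def voxel_downsample(points, voxel=0.25):
    """Reduce a Lidar point cloud by keeping one averaged point per voxel.

    points : (N, 3) array of x, y, z samples (e.g. a scanned tree or hillside)
    voxel  : edge length of the grid cells, in the scan's units
    """
    keys = np.floor(points / voxel).astype(np.int64)           # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)  # group points by voxel
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse)
    np.add.at(sums, inverse, points)                           # sum the points per voxel
    return sums / counts[:, None]                              # centroid of each voxel
```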
With all this information we had, we were able to just keep tweaking the imagery, and we learned so much along the way. One of the little tricks we learned was that, as we were moving paint along, we could kind of accumulate it and sort of smear it. As the camera moved, it would be self-smearing.
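That self-smearing trick amounts to advecting an accumulation buffer with the optical flow before adding each frame’s new paint; here is a hedged sketch of that idea, with the decay factor chosen arbitrarily.

```python
import cv2
import numpy as np

def smear_step(canvas, new_paint, flow, decay=0.9):
    """Advect the accumulated paint by the camera's flow, then add new paint.

    canvas    : (H, W, 3) float image holding the accumulated strokes so far
    new_paint : (H, W, 3) float image of this frame's freshly placed strokes
    flow      : (H, W, 2) optical flow from the previous frame to this one
    decay     : how much of the old paint survives each frame (the 'smear')
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Pull each canvas pixel from where it was in the previous frame.
    src = np.stack([grid_x - flow[..., 0],
                    grid_y - flow[..., 1]], axis=-1).astype(np.float32)
    carried = cv2.remap(canvas.astype(np.float32), src, None, cv2.INTER_LINEAR)
    return carried * decay + new_paint * (1.0 - decay)
```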