Special Effects, Digital Effects and the Digital Intermediate

Visual effects encompass many of the technical processes that constitute the ever-evolving world of computer-generated imagery (CGI), along with older crafts such as optical effects, matte paintings, miniatures and animation. CGI is used for everything from filling out shadows and giving an extra glow to a sunset shot to compositing the beach invasion in Saving Private Ryan. These are the responsibility of the visual effects coordinator/supervisor. Visual effects are such a complex, technical aspect of modern filmmaking that we only touch on them here.

Mechanical effects, which can include everything from bullet squibs, or hits, to exploding geysers duplicating Yellowstone’s Old Faithful, are still commonplace and ever more sophisticated. But for many filmmakers, it is easier to draw on the growing variety of optical effects, digital matte paintings, seamless rear and front projection accomplished on green and blue screens, and sophisticated miniatures that exceed even the high level of miniature craftsmanship of Hollywood’s Golden Age.

Digital compositing eliminates the need for conventional, and artistically hazardous, blue- and green-screen photography. By allowing actors to move within the background instead of simply in front of it, this form of image manipulation is interactive in terms of the director’s involvement: he or she can alter the environment in the world of digital image processing. New motion capture systems render this process convincingly realistic, as the movements of Gollum in Lord of the Rings or of the characters in 300 demonstrate. By integrating matte work that uses the computer as a paint box, or by creating a three-dimensional space in which synthetic characters or other objects can “live,” these effects can be blended with actual filmed material to create the illusion of naturalism within a real three-dimensional environment.

In motion capture sessions, the movements of one or more actors are sampled many times per second. Early techniques used images from multiple cameras to calculate 3D positions; today the purpose of motion capture is usually to record only the movements of the actor, not his or her visual appearance. This animation data is then mapped to a 3D model so that the model performs the same actions as the actor.
Camera movements can also be motion captured: as the actor performs, a camera operator drives a virtual camera that pans, tilts or dollies around the stage, and the system records the camera and props along with the actor’s performance. This lets computer-generated characters, images and sets share the same perspective as the video images from the camera. A computer processes the data and displays the actor’s movements, along with the camera positions relative to the objects in the set.
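For readers who want a concrete sense of what “mapping animation data to a 3D model” means, the following minimal Python sketch shows the underlying idea: each sampled frame of a marker’s position is turned into a displacement from a reference pose and applied to the corresponding joint of the digital character. The function and data names are illustrative assumptions, not any production system’s API, and real pipelines solve joint rotations rather than raw positions.

```python
# Illustrative sketch: retarget sampled motion-capture marker positions
# onto a digital character by applying each frame's displacement from a
# reference pose. (Hypothetical names; real systems solve joint rotations.)

def retarget(ref_marker, marker_frames, model_joint):
    """For each captured frame, move the model joint by the same
    displacement the actor's marker made from its reference position."""
    frames = []
    for mx, my, mz in marker_frames:  # sampled many times per second
        dx = mx - ref_marker[0]
        dy = my - ref_marker[1]
        dz = mz - ref_marker[2]
        frames.append((model_joint[0] + dx,
                       model_joint[1] + dy,
                       model_joint[2] + dz))
    return frames

# Actor's wrist marker, sampled over three frames:
captured = [(0.0, 1.5, 0.0), (0.1, 1.6, 0.0), (0.2, 1.7, 0.1)]
animated = retarget((0.0, 1.5, 0.0), captured, (0.0, 1.4, 0.0))
# The model's wrist now traces the same motion path as the actor's.
```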

Morphing, from the word metamorphosis, also uses CGI to create fluid transformations of one image into another. The possibilities in this area seem endless and have already evoked the specter of artificially created actors replacing live ones. But audiences don’t pay to see artificial stars. That said, films like 300 and Beowulf in 3-D demonstrate that CGI artificiality can enhance certain performances, which are on one level decidedly non-human.
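At the heart of a morph is a weighted cross-dissolve between two images, sketched below in a few lines of Python. This is a deliberately simplified illustration: a real morph also warps the geometry of both images so that features (eyes, mouths, outlines) line up before the blend, and the toy 2x2 “images” here stand in for full frames.

```python
# Illustrative sketch of the cross-dissolve at the heart of a morph:
# each output pixel is a weighted blend of the source and target images.
# (Real morphing also warps the two images' geometry before blending.)

def morph_frame(src, dst, t):
    """Blend two same-sized grayscale images; t runs from 0.0 to 1.0."""
    return [[(1 - t) * s + t * d for s, d in zip(row_s, row_d)]
            for row_s, row_d in zip(src, dst)]

face_a = [[0, 0], [255, 255]]   # toy 2x2 grayscale "images"
face_b = [[255, 255], [0, 0]]
midpoint = morph_frame(face_a, face_b, 0.5)  # halfway through the change
```

Rendering a sequence of frames with t stepping from 0 to 1 produces the familiar fluid transformation on screen.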

Computer animation is probably the most ubiquitous area of CGI transformation of the entire mass media landscape. Using digital input devices, movement can be translated directly into computer images. From there, various complex stages of refinement can be carried out by 3-D animators working interactively in the digital world of computers.

A large and growing array of traditional and computer-generated effects is now in use. No single source can supply them all, and teams of visual effects artists around the globe specialize in one type of effect or environment.
The wealth is definitely spread around in this highly competitive and ever-improving arena, but it must always be remembered that digital technology doesn’t necessarily make the filmmaker more creative. It simply gives the director more opportunity to express his or her vision.

This is particularly true in the digital intermediate process, first developed by Kodak for its Cinesite subsidiary but now used by almost every major film production in one of various patented versions. This continually evolving advancement on the telecine, the traditional process for transferring film to video, has become a key tool in image enhancement, color grading and manipulation.

It starts with digitizing, or scanning, select takes or, when dealing with a finished film, the entire original negative at 2K, 4K or 6K resolution. Using a special scanner, technicians digitize the film to 2K data files and load them onto a device called a Spectre Virtual Datacine for greater flexibility in real-time manipulation.

Then a skilled colorist, working with the director and the DP, grades the shots using a color corrector that gives the cinematographer and the colorist the full range of primary and secondary color correction, a process that once required the filmmaker to make back-and-forth trips to the lab.

The colorist can also control image characteristics such as contrast, sharpness and even film grain. Using “power windows,” the colorist can isolate and manipulate very specific areas of the frame, such as background characters or the landscape, to achieve a desired look. Another program can digitally conform live-action sequences, optical effects and visual effects, and a single scan produces a digital master file for both the film release and all the video formats, all of them color corrected simultaneously.
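The “power window” idea above can be sketched in code: a mask marks the region of the frame to isolate, and the grade is applied only where the mask is set. The function name and the simple brightness gain below are hypothetical illustrations; real color correctors work in far richer color spaces, with soft-edged windows that can be animated to track moving subjects.

```python
# Illustrative sketch of a "power window": a mask isolates one region of
# the frame, and a grade (here a simple gain) is applied only inside it.
# (Hypothetical function; real windows are soft-edged and trackable.)

def grade_window(image, window, gain):
    """Brighten only the pixels where the window mask is set."""
    return [[min(255, int(px * gain)) if inside else px
             for px, inside in zip(row, mask_row)]
            for row, mask_row in zip(image, window)]

frame  = [[100, 100], [100, 100]]          # toy 2x2 grayscale frame
window = [[True, False], [False, False]]   # isolate the top-left region
graded = grade_window(frame, window, 1.5)  # -> [[150, 100], [100, 100]]
```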

Many films still get transferred back to a 35mm negative and master following the digital intermediate. For all the digital effects advances and applications, 35mm film stock is still considered by many to be the richest storage and delivery medium available, as it has been for more than 100 years.

The Bottom Line

Postproduction does not always offer the opportunity to save a movie, but it certainly presents a chance for redemption. Films can always be improved, sharpened, tightened and smoothed by a gifted editor and a talented sound and editorial team. They can also be lost in the welter of too much attempted reconstruction, or too much fiddling about. Some legendary films required more than a year of editing, such as Days of Heaven and Apocalypse Now, but that didn’t necessarily make them better movies.

Goals in postproduction, as in production, are time and budget specific, and if possible represent a consensus opinion. The outline of the movie still remains in the screenplay, and matters have to be in very bad shape before that is finally and totally abandoned.

Most films run too long and could afford to lose 10-15 minutes of screen time without an appreciable loss of story content or character development. In an era when more directors have greater say about the final running time of their films, if not actual final cut, there should be greater attention paid to the merits of a tight running time.

The imperative for the producer is keeping postproduction from turning into an endless meditation on the real meaning of the movie, improving what can be improved, paying special attention to the sound and music work, and praying that the effects list does not grow from 20 shots to 200.