Digital colorization
Computerized colorization began in the 1970s with a process developed by Wilson Markle. Movies colorized using early techniques have soft contrast and fairly pale, flat, washed-out color; however, the technology has improved since the 1980s.
To perform digital colorization, a digitized copy of the best monochrome film print available is needed. Technicians, with the aid of computer software, associate a range of gray levels with each object, and indicate to the computer any movement of the objects within a shot. The software is also capable of sensing variations in the light level from frame to frame and correcting them if necessary. The technician selects a color for each object based on (1) common "memory" colors, such as blue sky, white clouds, flesh tones and green grass, and (2) any known information about the movie; for example, if color publicity photos or props from the movie are available to examine, authentic colors may be applied. (3) In the absence of better information, the technician chooses a color that fits the gray level and that seems consistent with what a director might have chosen for the scene. The computer software then associates a variation of the basic color with each gray level in the object, while keeping intensity levels the same as in the monochrome original. The software then follows each object from frame to frame, applying the same color until the object leaves the frame. As new objects come into the frame, the technician must associate colors with each new object in the same way as described above.[5] This technique was patented in 1991.[6]
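The core idea of applying a chosen base color to an object while keeping the intensity levels of the monochrome original can be sketched as follows. This is only an illustrative sketch, not the patented process: it assumes a YUV-style split between luminance and chrominance (BT.601 weights), takes the chrominance from the chosen base color, and keeps the original gray level as the luminance.

```python
import numpy as np

def colorize_object(gray, mask, base_rgb):
    """Apply the chrominance of base_rgb to the masked object,
    keeping the original gray level as the luminance.

    gray: HxW float array in [0, 1] (the monochrome frame)
    mask: HxW bool array (the object's region, chosen by the technician)
    base_rgb: (r, g, b) tuple in [0, 1] (the chosen base color)
    """
    r, g, b = base_rgb
    # Chrominance (U, V) of the chosen base color, BT.601 weights.
    y_base = 0.299 * r + 0.587 * g + 0.114 * b
    u = (b - y_base) * 0.492
    v = (r - y_base) * 0.877
    # Start from the monochrome image; recolor only the masked object.
    out = np.stack([gray, gray, gray], axis=-1)
    y = gray[mask]  # original intensity is kept unchanged
    out[mask, 0] = np.clip(y + 1.140 * v, 0.0, 1.0)
    out[mask, 1] = np.clip(y - 0.395 * u - 0.581 * v, 0.0, 1.0)
    out[mask, 2] = np.clip(y + 2.032 * u, 0.0, 1.0)
    return out
```

Because each output gray level gets its own variation of the base color (brighter grays yield brighter tints), a single color choice covers the object's whole tonal range, which is what makes the technique practical per object rather than per pixel.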
A major difficulty with this process is how labor-intensive it is. For example, to colorize a still image an artist typically begins by dividing the image into regions, and then assigning a color to each region. This approach, also known as the segmentation method, is time consuming, as the process of dividing the picture into correct segments is painstaking. This problem occurs mainly because no fully automatic algorithm has been able to identify fuzzy or complex region boundaries, such as between a subject's hair and face. Colorization of moving images also requires tracking regions as movement occurs from one frame to the next (motion compensation). Several companies claim to have produced automatic region-tracking algorithms.
Legend Films describes its core technology as pattern recognition and background compositing, which moves and morphs foreground and background masks from frame to frame. In the process, backgrounds are colorized separately in a single composite frame, which functions as a visual database of a cut and includes all offset data for each camera movement. Once the foregrounds are colorized, the background masks are applied frame to frame in a utility process.[clarification needed]
Timebrush describes a process based on neural-net technology which produces saturated, crisp colors with clear lines and no apparent spill-over. The process is claimed to be cost-effective and suitable both for low-budget colorization and for prime-time broadcast-quality or theatrical projection.
A team at the Hebrew University of Jerusalem's Benin School of Computer Science and Engineering describes its method as an interactive process that requires neither precise manual region detection nor accurate tracking, and is based on the simple premise that nearby pixels in space and time that have similar gray levels should also have similar colors. At the University of Minnesota, a color-propagation method was developed that uses geodesic distance.[7]
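The premise that nearby pixels with similar gray levels should receive similar colors can be illustrated with a toy propagation scheme. The sketch below is not the published algorithm (which solves a global optimization): it is a simplified Jacobi-style relaxation in which each pixel repeatedly averages the chroma of its neighbors, weighted by how close their gray levels are, so color spreads from a few hand-placed scribbles and stops at intensity edges. The scribble format and parameters are assumptions for the demo.

```python
import numpy as np

def propagate_colors(gray, scribbles, iters=200, sigma=0.1):
    """Spread chroma hints across an image by similarity-weighted averaging.

    gray: HxW float array in [0, 1]
    scribbles: dict {(row, col): (u, v)} of hand-placed chroma hints
    Weights fall off sharply where neighboring gray levels differ,
    so color propagation effectively stops at intensity edges.
    """
    h, w = gray.shape
    uv = np.zeros((h, w, 2))
    fixed = np.zeros((h, w), bool)
    for (y, x), c in scribbles.items():
        uv[y, x] = c
        fixed[y, x] = True
    for _ in range(iters):
        nxt = uv.copy()
        for y in range(h):
            for x in range(w):
                if fixed[y, x]:
                    continue  # scribbled pixels keep their color
                acc, wsum = np.zeros(2), 0.0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # High weight for neighbors with similar gray level.
                        wgt = np.exp(-(gray[y, x] - gray[ny, nx]) ** 2
                                     / (2 * sigma ** 2))
                        acc += wgt * uv[ny, nx]
                        wsum += wgt
                if wsum > 0:
                    nxt[y, x] = acc / wsum
        uv = nxt
    return uv
```

On an image split into a dark and a bright half, a scribble in each half fills its own half with color while leaking almost nothing across the intensity edge, which is the behavior the premise predicts.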
A highly labor-intensive process, developed by the UK-based film and video colorization artist Stuart Humphryes in conjunction with video restoration company SVS Resources, was employed by the BBC in 2013 for the commercial release of two Doctor Who serials: episode one of The Mind of Evil and newly discovered monochrome footage in the Director's Cut of Terror of the Zygons. For these ventures, approximately 7,000 key-frames (approximately every fifth PAL video frame) were fully colorized by hand, without the use of masks, layers or the segmentation method. SVS Resources then used these key-frames to interpolate the colour across the intervening frames in a part-computerized, part-manual process.[8]
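The computerized half of such a key-frame workflow can be sketched as simple temporal interpolation: given two hand-colorized key-frames, the frames between them are blended linearly. SVS Resources' actual process was part manual and is not public, so this is only an illustration of the general idea, with hypothetical names throughout.

```python
import numpy as np

def interpolate_keyframes(key_a, key_b, n_between):
    """Linearly blend two colorized key-frames to fill the gap between them.

    key_a, key_b: HxWx3 float arrays (hand-colorized key-frames)
    n_between: number of intervening frames to generate
    (with key-frames every 5th frame, n_between would be 4)
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # blend weight advances evenly with time
        frames.append((1 - t) * key_a + t * key_b)
    return frames
```

In practice a purely linear blend smears any motion between key-frames, which is why the interpolation described above still needed manual correction alongside the computerized pass.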