Recording material into a system, such as a nonlinear editor, as a background task. Thus the foreground task may continue uninterrupted, and when one job is completed the next is already loaded, potentially increasing the throughput of the editing system.
A Bayer filter is a matrix of red, green and blue non-co-sited filters placed over an imaging chip (CCD or CMOS) so that it can capture the separate red, green and blue primary colors of a scene and record a color digital image. This greatly simplifies the construction of color cameras, and somewhat mimics the way our single-retina eyes see color. As our eyes resolve more detail in green light than in red or blue, the Bayer filter has twice as many green cells as red or blue. The redundancy of the green samples produces an image that is less noisy and shows finer detail than would be achieved with equal numbers of red, green and blue cells.
For further use, the R, G and B pixels generated by the Bayer-filter-and-imaging-chip combination need to be ‘unmasked’ using a complex algorithm. This process, sometimes called debayering, produces the separate red, green and blue images that together make up the color image.
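Real debayering algorithms are complex and often proprietary, but the underlying idea can be sketched. The following toy example (an illustration, not any camera's actual method) collapses an RGGB mosaic into one RGB pixel per 2×2 block, averaging the two green samples:

```python
import numpy as np

def demosaic_rggb_half(mosaic):
    """Toy 'debayer': collapse each 2x2 RGGB block (R G / G B) into
    one RGB pixel, averaging the two green samples. The result has
    half the linear resolution of the sensor; real algorithms
    interpolate to full resolution instead."""
    h, w = mosaic.shape
    assert h % 2 == 0 and w % 2 == 0, "mosaic must tile into 2x2 blocks"
    r  = mosaic[0::2, 0::2]          # red sites: top-left of each block
    g1 = mosaic[0::2, 1::2]          # green sites: top-right
    g2 = mosaic[1::2, 0::2]          # green sites: bottom-left
    b  = mosaic[1::2, 1::2]          # blue sites: bottom-right
    g = (g1 + g2) / 2.0              # twice as many green samples as R or B
    return np.dstack([r, g, b])
```

Production debayering instead interpolates full-resolution R, G and B planes with edge-aware weighting to avoid color fringing; the half-resolution trick above merely shows how the 2×2 RGGB tiling maps onto RGB pixels.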
Traditionally, professional TV cameras have used three image sensors, one to pick up each primary color. This arrangement requires a three-way light-splitting glass block behind the lens, delivering separate R, G and B images to three sensors that must be accurately registered together. The result is a considerably bulkier and more costly construction than a camera based on a single image-sensor chip with a Bayer filter.
The Bayer filter was patented in 1976, and early use was in consumer stills and video cameras. A number of other variants of RGB filter array are now in use. It was 30 years later that single-chip cameras started to be accepted in the professional video and movie markets, some years after they had been widely used in high-end stills cameras; examples include the ARRI Alexa, Canon, Sony F65/55 and RED cameras.
Charge Coupled Device (CCD): an imager assembled as either a linear or a two-dimensional array of light-sensitive elements. Light is converted to an electrical charge in a linear fashion, proportional to the brightness impinging on each cell. The cells are coupled to a scanning system which, after analog-to-digital conversion, presents the image as a series of binary digits.
Typically the charge created on each cell is shuffled along to the end of its line, where it is measured by an ADC that outputs a digital number corresponding to the charge and hence the brightness. This handling of the charge exposes it to noise, and early CCD arrays were unable to work over a wide range of brightness. Modern devices offer low-noise, high-resolution imaging for television and digital cinematography.
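The linear charge-to-number step can be sketched as an ideal ADC (a simplified model; real converters add noise, offsets and nonlinearity):

```python
def adc_code(charge, full_scale, bits=12):
    """Ideal linear ADC: map a cell's charge, as a fraction of the
    full-scale value, onto an n-bit integer code. Out-of-range
    values are clipped, much as highlights clip in a real sensor."""
    max_code = (1 << bits) - 1                  # e.g. 4095 for 12 bits
    fraction = min(max(charge / full_scale, 0.0), 1.0)
    return round(fraction * max_code)
```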
See also: CMOS
The process of overlaying one video picture, or part of one, over another. The areas of overlay are defined by a specific range of color, or chrominance, in the background video signal, which is used to create a key signal for a chroma keyer. For this to work reliably, the chrominance must have sufficient resolution, or bandwidth. The PAL and NTSC analog coding systems significantly restrict chroma bandwidth and so are of very limited use for making a chroma key, which for many years was restricted to using live RGB camera feeds.
An objective of the ITU-R BT.601 and 709 digital sampling standards was to allow high quality chroma keying in post production. The 4:2:2 sampling system allows far greater bandwidth for chroma than PAL or NTSC and helped chroma keying, and the whole business of layering, to thrive in post production. High signal quality is still important to derive good keys so some high-end operations favor using RGB (4:4:4) for keying – despite the additional storage requirements. Certainly anything but very mild compression tends to result in keying errors appearing – especially at DCT block boundaries.
Chroma keying techniques have continued to advance and use many refinements, to the point where totally convincing composite images can be easily created. You can no longer see the join and it may no longer be possible to distinguish between what is real and what is keyed.
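A minimal sketch of the core operation, with hypothetical tolerance values: derive a soft matte from the distance of each pixel's chrominance from the key color.

```python
import numpy as np

def chroma_key_matte(cb, cr, key_cb, key_cr, tol=20.0, soft=10.0):
    """Soft chroma key: pixels whose (Cb, Cr) lie within `tol` of the
    key color are fully keyed out (matte = 0); the matte ramps up
    linearly over `soft` units for a smooth edge. Thresholds are
    illustrative, not taken from any real keyer."""
    dist = np.hypot(cb - key_cb, cr - key_cr)   # distance in the chroma plane
    return np.clip((dist - tol) / soft, 0.0, 1.0)
```

The composite is then matte × foreground + (1 − matte) × background, per pixel. This also shows why chroma bandwidth matters: if Cb and Cr are soft or noisy, the distance measure, and hence the matte edge, is degraded.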
The normal interpretation of a component video signal is one in which the luminance and chrominance remain as separate components, e.g. analog components in Betacam VTRs, digital components Y, Cr, Cb in ITU-R BT.601 and 709. RGB is also a component signal.
Component video signals retain maximum luminance and chrominance bandwidth and the frames are independent of each other and so can be edited at any frame boundary.
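The relationship between RGB and the Y, Cb, Cr components can be sketched with the ITU-R BT.601 luma coefficients (the full-range form is shown for simplicity; studio coding additionally scales Y into the 16-235 range):

```python
def rgb_to_ycbcr_601(r, g, b):
    """Convert 8-bit R'G'B' to Y', Cb, Cr using the ITU-R BT.601
    luma coefficients (full-range form, no 16-235 studio scaling)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

Note that a neutral gray (R = G = B) yields Cb = Cr = 128: all the picture information sits in Y, which is why chroma can be subsampled (4:2:2, 4:2:0) with little visible loss.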
See also: Cut (edit)
Luminance and chrominance are combined, along with the timing reference (sync) and color burst information, using one of the color coding standards – NTSC, PAL or SECAM – to make composite video. The process, which is an analog form of video compression, restricts the bandwidth (image detail) of the color components. In the composite result, color is literally added to the monochrome (luminance, or Y) information using a visually acceptable technique. As our eyes have far more resolving power for luminance than for color, the sharpness (bandwidth) of the coded color signal is reduced to well below that of the luminance.
This provides a good solution for transmission and viewing but it becomes difficult, if not impossible, to accurately reverse the process (decode) back into pure luminance and chrominance. This limits its use in post production as repetitive decode, recode cycles severely impair the pictures. Deriving keys from composite video gives poor results.
A film-style digital camera from ARRI that is highly modular and uses a single Super 35mm-sized CMOS image sensor with a Bayer filter, producing the same field of view and depth of field as traditional 35mm motion picture film cameras. Like real film cameras it has a detachable optical viewfinder, widely preferred to the electronic versions on other cameras. It is capable of frame rates from 1 to 60 Hz and produces 1080-line images in 4:2:2 or 4:4:4.
See Bayer filter
Introduced in 1993 as a development of the original analog Betacam VTR that records SD component video and audio digitally onto Betacam-style cassettes. It uses mild intra-field compression to reduce the ITU-R BT.601 sampled video data by about 2:1 to provide a good portable and much cheaper alternative to the uncompressed D1 format.
Shooting movies with digital movie cameras, not film. Nearly all movies are now shot digitally with cameras specifically made for movies, differing in many ways from those made for television. The output offers RAW data that includes the full range of brightness captured by the image sensors, allowing color grading, format changes, etc. to be executed as a part of the digital intermediate process. The camera’s image sensor is typically about the size of full frame 35mm film delivering at least 4096 x 2160 (4K) sized images, with the same movie look, including a narrow depth of field.
By contrast, television cameras are designed to work live. They include front-end processing for gamma correction, set-up for the required color look, and clipping to suit home viewing conditions. Sensors are usually smaller, typically half-inch size, delivering the much deeper depth of field that we expect from TV, which is very useful when covering live events such as sports.
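The scale of the data involved is easy to estimate. A back-of-envelope sketch (the bit depth and frame rate below are illustrative; real cameras vary and may apply raw compression):

```python
def raw_data_rate_mbps(width, height, bits_per_photosite, fps):
    """Uncompressed Bayer raw data rate: one sample per photosite,
    i.e. a single plane of width x height samples per frame."""
    return width * height * bits_per_photosite * fps / 1e6

# Illustrative: a 4K (4096 x 2160) sensor, 12 bits per photosite, 24 fps
rate = raw_data_rate_mbps(4096, 2160, 12, 24)   # roughly 2550 Mb/s
```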
See also: VFR
Electronic Newsgathering. Term applied to a small portable outfit with a broadcast-quality TV camera, recorder and/or microwave link, usually used for news. The term originated to distinguish between newsgathering on film and on video tape (electronic), and it lives on in the era of pluggable chip/card storage.
See also: ING
It has been standard practice from the early days of film to shoot at a higher frame rate (over-cranking) and play the result back at normal speed to produce a high-quality slow-motion effect. However, there is a limit to the speed at which film can be run through a camera. Many complex designs for film transports have been tried, but they are limited to around 400 frames per second, depending on film size and framing. Early uses in military applications were picked up by the creative community for dramatic effect.
Electronic imaging systems do not have the problem of moving a mass of film past the lens, and once the problems of high-speed digitization and storage were solved, imaging at up to 1,000,000 frames per second became possible at low resolution. Most modern cameras are capable of some over-cranking to a few hundred frames per second, and beyond that specialist cameras are available offering high resolution at speeds of several thousand frames per second.
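The relationship between capture rate and perceived slow-down is simple arithmetic:

```python
def slow_motion_factor(capture_fps, playback_fps=24.0):
    """Over-cranking: footage shot at capture_fps and replayed at
    playback_fps appears slowed by this factor, so one second of
    real action fills that many seconds of screen time."""
    return capture_fps / playback_fps
```

At the film-transport limit of around 400 f/s, playback at 24 f/s slows the action about 17 times; an electronic camera running at 10,000 f/s slows it more than 400 times.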
Latitude is the capacity of a camera to capture information over a wider brightness range than is needed for the final print. This provides a degree of freedom in post production for adjustment to match other shots, or for creating keys or adding digital effects.
See also: HDR
Data that has not been processed for use. It is often written as ‘RAW’, which may suggest it is an acronym; it is not.
Raw data usually refers to the output of digital cinematography cameras, which can generally deliver images that include the full brightness range their imagers can extract from a scene. The data has not been tampered with: it is not processed for color or to suit any target viewing conditions, such as cinema projection or gamma-corrected home TV viewing. The raw data is as the imager saw it, with debayering applied where needed.
Device for converting film images into video in realtime. The main operational activity here is color grading, which is executed on a shot-by-shot basis and absorbs considerable telecine time. Besides the actual transfer time, this includes the time needed for making grading decisions and involves significant handling of the film, with spooling and cueing that risk film wear and damage. The output of a telecine is digital video (rather than data files).
Digital technology has moved the transfer process on. Now, adding a disk store or server can create a virtual telecine enabling the film-to-digital media transfer to run as one continuous operation. Whole film spools can be scanned in one pass, with useful footage selected by an EDL. In this case the telecine may be termed a Film Scanner – creating image files (rather than digital video) that contain sufficient latitude for downstream grading.
See: Digitizing Time
Variable Frame Rate shooting used to be only possible with film cameras; all electronic cameras worked at fixed frame rates. Panasonic’s HD Varicam was the first to offer variable speeds, originally with frame rates from 4 to 60 f/s in one-frame increments. Sony’s XDCAM HD offered the same range. There are also specialized digital cameras and solid-state recorders able to capture video at frame rates up to 1000 f/s, or more. Instant replay shows an otherwise unseen world of extreme slow motion.
Launched in 2003, Sony’s XDCAM professional camcorder products have evolved with technology. The first model was for SD television and used Sony's Professional Disc (PD), an application of Blu-ray Disc, as the on-board recording medium. The product range included camcorders, mobile and studio decks designed to take advantage of the size, weight, data speed and re-record features of PD technology. It used the DVCAM codec, recording SD 4:1:1 (480-line) or 4:2:0 (576-line) video at 25 Mb/s onto the PD.
XDCAM HD camcorder images were native 1440 x 1080 and recorded as HDV: 1080/59.94I, 50I, 29.97P, 25P, and native 23.98P video using MPEG-2 MP@HL with compression and 4:2:0 sampling. Users could select 35 (HQ), 25 (SP), or 18 (LP) Mb/s bit rates according to picture quality and recording length requirements, ranging from 60 to 120 minutes. There were four channels of 16-bit, 48 kHz audio.
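The quoted recording lengths follow from disc capacity and bit rate. A sketch, assuming a 23.3 GB single-layer Professional Disc (that capacity figure, and the 10% overhead, are assumptions, not stated above):

```python
def recording_minutes(disc_gb, video_mbps, overhead=0.10):
    """Approximate record time: usable capacity divided by the video
    bit rate. `overhead` reserves space for audio, metadata and the
    filesystem; the 10% figure is an illustrative guess."""
    usable_bits = disc_gb * 1e9 * 8 * (1.0 - overhead)
    return usable_bits / (video_mbps * 1e6) / 60.0

# e.g. recording_minutes(23.3, 35) -> roughly 80 minutes at the 35 Mb/s HQ rate
```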
XDCAM EX takes the same ideas but records to solid-state storage in place of Blu-ray disc.
XDCAM HD422 is a family that includes a selection of cameras, recorders again including solid-state, and accessories.
See also: Professional Disc