Tag : Images

10-bit lin

A type of digital sampling of analog images that creates 10-bit (2¹⁰ = 1024 possible levels) numbers to describe the analog brightness levels of an image. Lin, short for ‘linear’, means the levels are assigned evenly across the range of the analog signal they describe, so an LSB change describes the same change in level whether it is in a bright or a dark area of the picture. Most professional HD and some SD television is sampled this way according to ITU-R BT.601 and 709. 10-bit lin sampling allows good quality to be maintained through TV production and post production, where the processes can make particular demands outside the range of normal viewing, and so produces good results for viewers. However, if color grading is required then the wide dynamic range that can be described by 10-bit log would be more useful – or indeed one of the newer high dynamic range formats.

See also: 10-bit log, Gamma, Color Space

10-bit log

This usually refers to a 10-bit sampling system that maps analog values logarithmically rather than linearly. It is widely used when scanning film images which are themselves a logarithmic representation of the film’s exposure. This form of sampling is available directly from some digital cinematography cameras.

The 10-bit data can describe 2¹⁰ = 1024 discrete numbers, or levels: 0-1023, for each of the red, green and blue (RGB) planes of an image. However, all electronic light sensors have a linear response and so produce an output directly proportional to the light they see; when scanning film they therefore represent the transmittance of the film. Usually it is negative film that is scanned, and this means a large portion of the numbers generated describe the scene’s black and dark areas (the clear, high-transmittance areas of the negative), and too few are left for the scene’s light areas (the dense areas of the negative), where ‘banding’ could be a problem – especially after digital processing such as grading and color correction. Transforming the numbers into log (by use of a LUT) gives a better distribution of the digital detail between dark and light areas, and so offers good rendition over the whole brightness range without having to use more bits. A minimum of 13-bit linear sampling converted to 10-bit log sampling stores sufficient detail in the pictures to allow headroom for the downstream grading that is common in film production.
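As a hedged illustration of the principle (real scanner and camera transfer curves differ, and the constants here are assumptions), this sketch maps a 13-bit linear code into a 10-bit log code over a Cineon-like 2.046-decade range:

```python
import math

BITS_LIN, BITS_LOG = 13, 10      # precisions taken from the text above
LIN_MAX = 2**BITS_LIN - 1        # 8191
LOG_MAX = 2**BITS_LOG - 1        # 1023
LOG_RANGE = 2.046                # decades (log10 units), Cineon-like scaling

def lin_to_log(lin):
    """Map a 13-bit linear code to a 10-bit log code (illustrative only)."""
    if lin <= 0:
        return 0
    # Position of the value within LOG_RANGE decades below full scale
    f = 1.0 + math.log10(lin / LIN_MAX) / LOG_RANGE
    return max(0, round(LOG_MAX * f))
```

The point of the log distribution shows up immediately: doubling the linear value always adds the same number of log codes, whether in the shadows or the highlights, which is why it suits grading.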

10-bit log is the basis for sampling in the Cineon and SMPTE DPX formats that are still widely used in the post production and DI industries.

See also: 10-bit lin, LUT


1000/1001

Historically, the nominal 30 frames/60 fields per second of NTSC color television is usually multiplied by 1000/1001 (= 0.999) to produce the slightly reduced rates of 29.97 and 59.94 Hz. This offset gives rise to niceties such as drop-frame timecode (dropping one frame per thousand – every 33.3 seconds) and audio that also has to run in step with the video. Although having strictly analog origins, dating from the very beginning of NTSC color transmissions in 1953 as a fix to avoid a clash of frequencies, the 1000/1001 offset has been extended into the digital, HD and UHD world, where 24 Hz becomes 23.976 Hz and 30 frames/60 fields per second are again changed to 29.97 and 59.94 Hz. Of course, as the frame/field frequency changes, so do the line and color subcarrier frequencies, as they all have to be locked together. Note that this does not apply to PAL color systems, which always use the nominal values (25 Hz frame rate).

The reason for the 1000/1001 offset is based in monochrome legacy. Back in 1953, the NTSC color subcarrier was specified to be half an odd multiple (455) of line frequency to minimize the visibility of the subcarrier on the picture. Then, to minimize the beats between this and the sound carrier, the latter was to be half an even multiple of line frequency, and to ensure compatibility with the millions of existing monochrome TV sets, the sound carrier was kept unchanged – at 4.5 MHz – close to 286 times the line frequency (Fl). Then, in a real tail-wags-dog episode, it was decided to make this exactly 286 times… by slightly altering the line frequency of the color system (and hence that of the color subcarrier and frame rate). Interestingly it is said that the problem could soon have been solved with a little improved engineering, so avoiding the need for this awkward frequency offset and all the many thousands of hours of additional engineering and operational effort this has caused down the years.

Here’s the math.

Fl = frames per second x number of lines per frame
Nominally this is: 30 x 525 = 15,750 Hz

But it was decided that: 286 x Fl = 4.5 MHz
So: Fl = 4,500,000/286 = 15,734.265 Hz
This reduced Fl by the ratio: 15,734.265/15,750 = 1000/1001, or 0.999

As all frequencies in the color system have to be in proportion to each other, this has made:

NTSC subcarrier (Fl x 455/2) = 3.579545 MHz
30 Hz frame rate (Fl/number of lines per frame) = 29.97 Hz
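The arithmetic above can be checked directly; a small sketch with the values derived in the text:

```python
SOUND_CARRIER = 4_500_000        # Hz, kept unchanged from monochrome NTSC
LINES = 525

line_freq = SOUND_CARRIER / 286          # Fl ≈ 15,734.265 Hz
frame_rate = line_freq / LINES           # ≈ 29.97 Hz
subcarrier = line_freq * 455 / 2         # ≈ 3.579545 MHz
offset = line_freq / 15_750              # = 1000/1001
```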

Following on, all digital sampling locked to video is affected so, for example, nominal 48 and 44.1 kHz embedded audio sampling becomes 47.952 and 44.056 kHz respectively.

As the reasons for ‘drop-frame’ were analog, it is not a requirement of digital television but still the frequencies appear in digital TV standards, and they are widely used, even though analog TV transmissions are now switched off in most countries.

See also: Drop-frame timecode, Embedded audio


24P

Refers to 24 frames per second, progressive scan. 24 f/s has been the frame rate of motion picture film since the ‘talkies’ arrived. It is also one of the frame rates allowed for transmission in the DVB and ATSC digital television standards, so they can handle film without needing any frame-rate change (3:2 pull-down for 60 f/s ‘NTSC’ systems, or running the film fast, at 25 f/s, for 50 Hz ‘PAL’ systems). 24P is now accepted as a television production format – usually associated with high definition 1080 lines to give a ‘filmic’ look on 60 Hz TV systems. Drop-frame frequencies (e.g. a 23.976 Hz frame rate) may be used in North America and other previously ‘NTSC’ countries.

A major attraction of 24P is its relatively easy path from this to all major television formats as well as offering direct electronic support for motion picture film and D-cinema. However, the relatively slow frame-refresh rate has drawbacks. For display it needs to be double shuttered – showing each frame twice to avoid excessive flicker, as in cinema projection, and fast pans and movements are not well portrayed. Faster vertical refresh rates are generally preferred for sports and live action.

See also: 24PsF, 25P, 3:2 Pull-down, ATSC, Common Image Format, Digital Cinema, DVB, Versioning

24PsF (segmented frame)

A system for recording 24P images in which each image is segmented: recorded as odd lines followed by even lines. Unlike normal television, the odd and even lines are from an image that represents the same snapshot in time. It is analogous to the scanning of film for television. This way the signal is more compatible (than normal progressive) for use with video systems, e.g. VTRs, SDTI or HD-SDI connections, mixers/switchers etc., which may also handle interlaced scans. Also it can easily be viewed without the need to process the pictures to reduce 24-frame flicker.

See also: Interlace Factor, Progressive


25P

Refers to 25 f/s, progressive scan. Despite the international appeal of 24P, 25P is widely used for HD productions in Europe and other countries using 50 Hz TV systems. This is a direct follow-on from the practice of shooting film for television at 25 f/s.

See also: 24P, 24PsF, Common Image Format, DVB

3:2 Pull-down (a.k.a. 2:3 Pull-down)

A method used to map the 24 or 23.98 f/s of motion picture film onto 30 or 29.97 f/s (60 or 59.94 fields) television, so that one film frame occupies three TV fields, the next two, and so on. It means the two fields of every other TV frame come from different film frames, making operations such as rotoscoping impossible and requiring care in editing. Quantel equipment can unravel the 3:2 sequence to allow clean frame-by-frame treatment and subsequently re-compose the 3:2.

The 3:2 sequence repeats every 1/6th of a second, i.e. every five TV frames or four film frames, the latter identified as A-D. Only film frame A is fully on a TV frame and so exists at one timecode only, making it the only editable point of the video sequence.
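A hedged sketch of the cadence, with frames labeled A-D as in the text (frame A occupies two fields within a single TV frame, so it exists at exactly one timecode):

```python
def pulldown_23(film_frames):
    """Map film frames onto interlaced fields using the 2:3 cadence:
    A gets 2 fields, B gets 3, C gets 2, D gets 3, then the pattern repeats."""
    fields_per_frame = [2, 3]
    fields = []
    for i, frame in enumerate(film_frames):
        fields += [frame] * fields_per_frame[i % 2]
    return fields

fields = pulldown_23(list("ABCD"))       # 10 fields = 5 TV frames
tv_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
# tv_frames -> [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
```

Two of the five TV frames mix fields from different film frames, which is what complicates frame-accurate editing and compression.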

Film-to-TV transfer


3:2 pull-down creates a field-based result in that every other TV frame comprises fields from two different film frames. This makes subsequent compression, which then has to be based on 60 fields/s, less efficient than working with 30 frames/s. This may affect delivery platforms from TV broadcast to DVDs.

3D (graphics)

Applied to graphics, this describes graphics objects that are created and shown as three-dimensional objects. As computer power has increased, so has the ability to cost-effectively produce more and more detailed 3D graphic results, as seen in feature length animations. For television presentation, live 3D computer graphics is now commonplace. The considerable computational power needed for this is generally supplied by GPUs.

3D (stereo)

In television, film or cinema 3D may refer to material that is shot using a set of ‘stereo’, left and right cameras, and shown on the screen as a pair of superimposed stereo images (often ‘decoded’ by the viewer using polarized or shuttered active spectacles). Also known as stereo 3D or stereoscopic 3D.

See also: Stereoscopy

50P and 60P

These indicate a video format that has 50 or 60 progressive frames per second, usually referring to high definition 1920 x 1080 or higher resolutions. The original HD digital television standards only included progressive frame rates above 30 Hz for image sizes up to 720 lines. These rates were later extended to the larger 1920 x 1080 television standards to provide a fast frame refresh rate for the rendition of fast action, and progressive frames for optimum vertical resolution (better than interlaced scans). The baseband signal has twice the data rate of the equivalent interlaced (50I and 60I) formats, pushing up equipment specifications.

See also: SDI (3G, 6G, 12G SDI)


Academy Color Encoding System (ACES)

A color standard proposed by AMPAS (Academy of Motion Picture Arts and Sciences). It provides a means by which all imaging sources can share one common color space, giving a common data set that can be repurposed and reworked without loss.

See also: Color spaces


Aliasing

Undesirable ‘beating’ effects caused by the presence of frequencies that are too high in an analog input signal being converted into digits. Passing the input through a suitable low-pass filter, removing all frequencies above half the analog-to-digital converter’s (ADC) clock rate (the Nyquist frequency), solves the problem. Examples of aliasing include:

1) Temporal aliasing – e.g. wagon wheel spokes apparently reversing, also movement judder seen in the output of standards converters with insufficient temporal filtering.

2) Raster scan aliasing – twinkling effects on sharp boundaries such as horizontal lines. Due to insufficient filtering this vertical aliasing, and its horizontal equivalent, are often seen on the output of lower quality video processing equipment, such as poor DVEs, as detailed images are re-sized.

The appearance of ‘steppiness’ or ‘jaggies’ of poorly filtered images with near horizontal lines in a TV image is also referred to as aliasing.
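The folding of too-high frequencies can be sketched numerically; this hedged example computes the apparent frequency after sampling (the wagon-wheel effect is the temporal case):

```python
def alias_frequency(signal_hz, sample_hz):
    """Apparent frequency after sampling without a low-pass (anti-alias) filter.
    Frequencies above the Nyquist limit (sample_hz / 2) fold back down."""
    folded = signal_hz % sample_hz
    return min(folded, sample_hz - folded)
```

For example, a 26 Hz spoke pattern sampled at 24 frames/s appears as a slow 2 Hz rotation, and a 47 kHz tone sampled at 48 kHz emerges, wrongly, at 1 kHz.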

See also: Anti-aliasing, Interpolation (temporal), Interpolation (spatial), Into digits (Tutorial 1), Nyquist (frequency)


Anti-aliasing

Techniques to smooth the aliasing effects created by poor filtering and other processes. Some video processing or crude character generation may produce poor, aliased pictures; anti-aliasing can then be applied to reduce the effect and improve the look of the image. A better approach is to avoid aliasing in the first place: with good modern technology easily available, serious aliasing should not occur.

See also: Aliasing, Interpolation (spatial), Interpolation (temporal)


Archive

Long-term storage of information. Pictures, sound and metadata stored in digital form can be archived and recovered without loss or distortion. The storage medium must be both reliable and stable and, as large quantities of information need to be stored, low cost is of major importance. Currently many archives use magnetic tape, but there is also ongoing use of optical disks, including the DVD and Blu-ray Disc formats, and further developments are emerging.

Today, the increasingly IP and Ethernet-connected environments involving many video formats, including digital film, mean that data recorders make good sense. Archiving systems built around the current LTO-5 and LTO-6 data recorders are increasingly proving to be efficient and effective for media applications. The formats include backward compatibility to the previous LTO type. And with the tape cartridges offering 1.5 and 2.5 TB for LTO-5 and LTO-6 respectively, there is useful capacity.

Removable CD-size optical discs potentially offer quick access and denser storage as well as long-term reliability. The Archival Disc system, expected in 2015 from Sony and Panasonic, offers 300 GB with a roadmap to 1TB storage per disc.

For archiving stills and graphics there is far less need for strong compression, as the volume of data will typically be much smaller than that for video. CDs and DVDs are convenient and robust, giving near-instant access to all stored pictures.

Traditionally, material is archived after its initial use – at the end of the process. More recently some archiving has moved to the beginning of, or even before, the production process. An example is news where, in some cases, new material is archived as events happen; subsequent editing then accesses this material.

With the worldwide expansion of television channels everywhere, including online and mobile services, archives are increasingly used to help fulfill the huge demand for programming.

See also: AAF, Data recorders, LTO, Optical disks


Artifact

A particular visible effect on an image that is a direct result of some technical limitation. Artifacts are generally not described by traditional methods of signal evaluation; for instance, the visual perception of contouring in a picture cannot be described by a signal-to-noise ratio or linearity measurement.

Aspect Ratio

1. of pictures. The ratio of width to height of pictures. All TV screens used to be 4:3, i.e. four units across to three units in height, but now all new models are widescreen, 16:9. Pictures presented this way are believed to absorb more of our attention and have obvious advantages in certain productions, such as sport. In the change towards 16:9, some in-between ratios were used for transmission, such as 14:9.

2. of pixels. The aspect ratio of the area of a picture described by one pixel. The ITU-R BT.601 digital coding standard for SD defines luminance pixels which are not square. In the 525/60 format there are 486 active lines each with 720 samples of which only 711 may be viewable due to blanking. Therefore the pixel aspect ratios on 4:3 and 16:9 screens are:

486/711 x 4/3 = 0.911 (tall)
486/711 x 16/9 = 1.215 (wide)

For the 625/50 formats there are 576 active lines each with 720 samples of which 702 are viewable so the pixel aspect ratios are:

576/702 x 4/3 = 1.094 (wide)
576/702 x 16/9 = 1.458 (wider)

All HD digital image standards define square pixels.

Account must be taken of pixel aspect ratios when, for example, executing DVE moves such as rotating a circle. The circle must always remain circular and not become elliptical. Another area where pixel aspect ratio is important is in the movement of images between platforms, such as computers and television systems. Computers generally use square pixels so their aspect ratio should be adjusted for SD television-based applications.
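The pixel aspect ratio calculation generalizes to a one-liner; a small sketch using the figures above:

```python
def pixel_aspect(active_lines, viewable_samples, screen_aspect):
    """Pixel width/height: the screen aspect ratio scaled by lines per sample."""
    return active_lines / viewable_samples * screen_aspect

par_525 = pixel_aspect(486, 711, 4 / 3)   # 525/60 on 4:3 -> ≈ 0.911 (tall)
par_625 = pixel_aspect(576, 702, 4 / 3)   # 625/50 on 4:3 -> ≈ 1.094 (wide)
```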

See also: ARC, Pixel

Axis (x,y,z)

Used to describe the three-dimensional axes set at right angles to each other, available in DVE manipulations. Viewing the picture in its original position, x lies across the screen left to right, y up the screen bottom to top and z points into the screen. Depending on the power of the equipment and the complexity of the DVE move, several hierarchical sets of xyz axes may be in use at one time. For example, one set may be referred to the screen, another to the picture, a third offset to some point in space (reference axis) and a fourth global axis controlling any number of objects together.

Axes controlling picture movement


See also: DVE, Keyframe

Bayer filter/mask array


A Bayer filter is a matrix of red, green and blue non co-sited filters placed onto an imaging chip (CCD, CMOS) so that it can capture the separate red, green and blue primary colors needed to record a color digital image. This greatly simplified the construction of color cameras, and somewhat mimics how our (single-retina) eyes see color. As our eyes have more resolution for green light than for red or blue, the Bayer filter on the imaging chip has twice as many green cells as red or blue. The extra green pixels produce an image that is less noisy and has finer detail than would be achieved if there were an equal number of red, green and blue cells.

For further use, the R, G and B pixels generated by the Bayer-filter-and-imaging-chip combination need to be ‘unmasked’ using a complex algorithm. This process, sometimes called Debayering, produces the separate red, green and blue images that together make up the color image.
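A minimal, hedged sketch of the idea (real debayering algorithms are edge-aware and far more sophisticated): given an RGGB mosaic, any color can be estimated at any site by averaging the nearest sites that actually sampled that color.

```python
def bayer_color(x, y):
    """Color sampled at (x, y) in an RGGB Bayer mosaic (twice as many greens)."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(mosaic, x, y, color):
    """Naive debayer: average the wanted color over the 3x3 neighborhood."""
    h, w = len(mosaic), len(mosaic[0])
    samples = [mosaic[j][i]
               for j in range(max(0, y - 1), min(h, y + 2))
               for i in range(max(0, x - 1), min(w, x + 2))
               if bayer_color(i, j) == color]
    return sum(samples) / len(samples)
```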

Traditionally professional TV cameras have used three image sensors, one to pick up each primary color. This arrangement requires that behind the lens there is a three-way light-splitting glass block delivering the separate R, G and B images to three light sensors that must be accurately registered together. This has involved a considerably more bulky construction and greater cost than is needed by those based on a single image-sensor chip complete with a Bayer filter.

The Bayer filter was patented in 1976, and early use was in consumer stills and video cameras. A number of other variants of RGB filter array are now in use. It was some 30 years before single-chip cameras started to be accepted in the professional video and movie markets – years after they had been widely used in high-end stills cameras – for example the ARRI Alexa, Canon, Sony F65/55 and RED cameras.



Browse

A method to help find required image-based material, such as stills, graphics and video clips, by showing many reduced-size images together on a screen, enabling quick selection of the required material. For moving video, a timeline may also be available so clips can be shuttled, allowing the full-sized images to be brought to use pre-cued.

Browse/edit facilities are used in newsroom systems to provide video editing for journalists on their desktops. The material is stored on a browse server and distributed over a network to the many users. Details differ between models but some allow frame-accurate shot selections to be made with the resulting ‘cuts decision lists’ used to conform a broadcast quality version.

Chroma keying

The process of overlaying one video picture, or part of one, over another. The areas of overlay are defined by a specific range of color, or chrominance, on the foreground video signal, which is used to create a key signal for a chroma keyer. For this to work reliably, the chrominance must have sufficient resolution, or bandwidth. The PAL and NTSC analog coding systems significantly restrict chroma bandwidth and so are of very limited use for making a chroma key which, for many years, was restricted to using live RGB camera feeds.

An objective of the ITU-R BT.601 and 709 digital sampling standards was to allow high quality chroma keying in post production. The 4:2:2 sampling system allows far greater bandwidth for chroma than PAL or NTSC and helped chroma keying, and the whole business of layering, to thrive in post production. High signal quality is still important to derive good keys so some high-end operations favor using RGB (4:4:4) for keying – despite the additional storage requirements. Certainly anything but very mild compression tends to result in keying errors appearing – especially at DCT block boundaries.

Chroma keying techniques have continued to advance and use many refinements, to the point where totally convincing composite images can be easily created. You can no longer see the join and it may no longer be possible to distinguish between what is real and what is keyed.

See also: Color space, Digital keying, Photo-real


Chrominance

The color part of a television signal, relating to the hue and saturation but not to the brightness (luminance) of the signal. Thus pure black, gray and white have no chrominance, but any colored signal has both chrominance and luminance. Although imaging equipment registers red, blue and green, television pictures are handled and transmitted as U and V, Cr and Cb, or (R-Y) and (B-Y), which all represent the chrominance information of a signal, together with the pure luminance (Y).

See also: YUV, Y,Cr,Cb, Composite

Cineon (file)

An RGB bitmap file format (extension .cin) developed by Kodak and widely used for storing and transferring digitized film images in production and post production. It accommodates a range of film frame sizes up to full Vista Vision (a rarely used format with a 36 x 18.3 mm frame laid horizontally). In all cases the digital pictures have square pixels and use 10-bit log sampling. The sampling is scaled so that each of the code values from 0-1023 represents a density difference of 0.002, describing a total density range of 2.046, equivalent to an exposure range of around 2,570:1 or about 11.3 stops. Modern negative film can now exceed this range, which has led to the use of higher-precision file formats.
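The arithmetic behind those figures can be sketched (the 0.6 average negative gamma is an assumed typical value, not part of the file format itself):

```python
import math

CODES = 1023                 # 10-bit code values above zero
DENSITY_STEP = 0.002         # density difference per code value
NEG_GAMMA = 0.6              # assumed average gamma of camera negative

density_range = CODES * DENSITY_STEP                 # 2.046
exposure_ratio = 10 ** (density_range / NEG_GAMMA)   # ≈ 2570:1
stops = math.log2(exposure_ratio)                    # ≈ 11.3 stops
```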

The format was partly designed to hold virtually all the useful information contained in negatives and so create a useful ‘digital negative’ suitable as a source for post production processing and creating a digital master of a whole program.

See also: 10-bit log, ACES, Color spaces, DPX

Color cube

A representation of color space by a three-dimensional diagram. For example, all definable colors of an RGB color space can be contained in an RGB color cube where R, G and B are axes at right angles to each other (like x, y and z at the corner of a cube). Different color spaces and interpretations of color are defined by different color cubes.

If the exact spectral values of R, G and B are defined, that cube defines an absolute color space. Such cubes are available from a number of vendors.

Color Decision List (CDL)

The American Society of Cinematographers’ Color Decision List (ASC-CDL) is a proposed metadata interchange format for color correction, developed to provide a means of guiding the next stage in the process with respect to color correction. The CDL applies primary corrections of slope (gain), offset, power (gamma) and saturation, expressed in a normalized manner.
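A hedged sketch of the per-channel CDL math: out = (in x slope + offset)^power, followed by a saturation adjustment about the luma (the Rec. 709 luma weights and normalized 0-1 values here follow common practice, as an assumption):

```python
REC709_LUMA = (0.2126, 0.7152, 0.0722)   # luma weights assumed for saturation

def apply_cdl(rgb, slope=(1, 1, 1), offset=(0, 0, 0), power=(1, 1, 1), sat=1.0):
    """Apply an ASC-CDL-style primary correction to one normalized RGB pixel."""
    sop = [max(0.0, c * s + o) ** p
           for c, s, o, p in zip(rgb, slope, offset, power)]
    luma = sum(c * w for c, w in zip(sop, REC709_LUMA))
    return tuple(luma + sat * (c - luma) for c in sop)
```

With all defaults the pixel passes through unchanged; with sat=0 every channel collapses to the luma, giving a monochrome image.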

Color management

The control of color through a defined process. The idea is that all, or at least some, pictures, monitors or screens should portray the colors as they will appear when shown in their target medium. For example, if working on a DI project, the color management should ensure that the monitors show the colors exactly as an audience in a cinema will see them. Today that may be by use of a release print, in which case the color management has to adjust the monitoring for the film stock to be used. It could also be from a digital cinema distribution master (DCDM) and the color characteristics of that path taken into account as part of the color management.

Color science

Color Science has been an area of scientific research for over 100 years. Brought together in 1931 by the CIE (Commission Internationale de l’Eclairage), this sphere of science studies all aspects of human perception of color and brightness. Early work studied how dyes could be mixed, the issues of color printing, and the effects of different viewing conditions on perception. Large amounts of research and many books have been published on this subject.

The definitions for all our television and cinema viewing standards are rooted in color science. The numeric definitions of R, G and B, and the conversions to Y,Cr,Cb, are examples of the practical use of color science. In today’s multimedia world, media transportability and the maintenance of creative intent would not be possible without color science defining the solid core of math that supports the industry.

See also: Color Space

Color space

The color range between specified references. Typically three reference points are quoted: in television these could be RGB; Y, R-Y, B-Y; or hue, saturation and luminance (HSL). These are all color spaces. In print, cyan, magenta, yellow and black (CMYK) are typically used. Film is RGB, while digital cinema uses X´Y´Z´. Pictures can be moved between these color spaces, but doing so requires careful attention to the accuracy of the processing involved. Operating across media – in print, film and TV, as well as between computers and television – requires color-space conversions to display the correct colors everywhere.

Electronic light sensors detect red, blue and green (RGB) light, but TV signals are usually changed into Y, R-Y and B-Y components as, or very soon after, they enter the electronic realm via camera, scanner or telecine. There is some discussion about which color space is best for post production – the most critical operation being keying. However, with most video storage and infrastructure being component-based, the full RGB signal is usually not available, so any of its advantages can be hard to realize for television-based productions. However, in the Digital Intermediate (DI) process, where movie footage undergoes ‘post production’, RGB color space predominates.
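The RGB-to-component conversion mentioned above has a simple form; a hedged sketch assuming Rec. 709 luma weights and normalized analysis values (no quantization or offsets):

```python
KR, KB = 0.2126, 0.0722          # Rec. 709 luma coefficients
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    """Convert R'G'B' to Y' plus scaled color-difference components."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))    # scaled B-Y
    cr = (r - y) / (2 * (1 - KR))    # scaled R-Y
    return y, cb, cr
```

Neutral grays carry no chrominance: rgb_to_ycbcr(0.5, 0.5, 0.5) gives Y' = 0.5 with Cb and Cr at zero, which is why black, gray and white have no chrominance.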

With the increasing use of disk storage, networking able to carry RGB and digital cameras with RGB outputs, RGB infrastructure and operations are more widely used. Even so, RGB takes up 50 percent more storage and, for most productions, its benefits over component working are rarely noticed. One area that is fixed on RGB use is in 2K and 4K digital film (digital intermediates). Modern digital techniques allow the use of both RGB and Y R-Y B-Y to best suit the requirements of production as well as those of distribution and delivery.

More recently the world has become more complex for color, with a wide diversity of camera, display systems and methodologies for moving from one to the other. The CRT display set the colorimetric standard that still holds today even though CRTs are no longer used and other display technologies, LCD, plasma, micro-mirror and OLED, for example, each have their own characteristics for color and light output transforms.

In its broadest description, color space defines how all parts of the imaging chain respond to light or electrical drive. Cameras separate out the red, green and blue light using filters. The spectral response of these filters, in particular their passband, controls the margins of the range of colors that can be represented. These are usually represented in a diagram of the type shown below.

Color Space

The axes of this diagram are the x and y of the color notation xyY, where the Y component would represent the intensity. The human visible area is infilled with color and, on more detailed depictions of this diagram, the wavelengths of light representing each color are shown. Saturation increases towards the edge of the locus.

The corners of the triangles for the ACES, DCI P3 and Rec. 709 color spaces are at the coordinates of the primary colors used in those displays. Only colors inside the respective triangle can be reproduced by that display. You can see that no display technology covers the full human range for color and that, for example, full green on a Rec. 709 monitor is not the same color as full green on a P3 display. Some colors in nature cannot be represented at their proper saturation using today’s imaging technology.

See also: 2K, Keying

Color Transformation Language (CTL)

Color Transformation Language is a small programming language designed to serve as a building block for digital color management systems. It allows users to describe color transforms in a concise and unambiguous way by expressing them as programs that describe the transform that can be applied to pixel data. It is designed to run fast, operating on many pixels at one time.

See also: OpenEXR

Website: http://ampasctl.sourceforge.net

Constant bit rate (CBR) compression

Compression systems that are used to create a fixed rate of output data. This is usually to fit within a given bandwidth such as that available on a video tape recorder or a constant bit rate transmission channel. With video, the amount of useful information contained in the material varies widely both spatially and temporally – with movement. For example, a football match with crowds and grass texture as well as fast panning cameras typically contains far more information than a largely static head-and-shoulders shot of a newsreader. Using constant bit rate means that the video quality may suffer in order to meet a specified maximum bit rate. In the football case, the texture of the grass may go ‘flat’ during a camera pan, and reappear when the camera is still.

As overflowing the available bit rate budget could have disastrous results with bits being lost, the aim is always to use just under the available bit rate. The degree of success in almost filling the available space (not easily done live) is a measure of the quality and efficiency of the compression system.

See also: Variable bit rate


Contouring

An unwanted artifact, similar to posterization. Digital video systems exhibit contouring when an insufficient number of quantizing levels are used, or when inappropriate processes are applied, such as crude truncation that simply discards the LSBs. The result is that the picture’s brightness changes in steps that are too large, which become visible over relatively even-brightness areas – like the sky.
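The difference between crude truncation and a dithered reduction can be sketched (the plain random dither here is purely illustrative; Dynamic Rounding itself is a more refined Quantel technique):

```python
import random

def truncate(value, drop_bits):
    """Discard LSBs outright: nearby input levels collapse to the same output,
    which is what produces visible contouring."""
    return value >> drop_bits

def dithered_reduce(value, drop_bits, rng=random.Random(1)):
    """Add noise below the new LSB before truncating, so the *average* output
    still tracks the input level and banding is broken up."""
    return (value + rng.randrange(1 << drop_bits)) >> drop_bits
```

Truncating the value 1003 by 3 bits always yields 125, but the dithered version averages 125.375 (= 1003/8), preserving the sub-LSB level across an area of pixels.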

See also: Dynamic Rounding


DAM

Digital Asset Management is about managing and controlling the receiving, cataloging, storage, retrieval, distribution, archiving and deletion or removal of digital assets. In the media world these assets are typically digital video, audio and images. DAM can help to access materials needed to make a new program. The program is then an asset in itself, which can be handled by MAM – media asset management. With thousands of TV channels now broadcasting or narrowcasting, TV programs made for one channel may well be useful to others.

Digital Asset Management (DAM)

Content is worthless if you cannot find it. If you can find it easily and have rights to use or sell it, it has value. The content will vary in size from a whole movie, to a few frames of news footage. Digital Asset Management (a.k.a. media asset management or digital asset warehousing) is about the storage and use of both digital content and its metadata. The latter comprises descriptions of the content, such as text and thumbnail images, stored in a database for easy searching and management. The metadata is linked to the content files, images or video to allow retrieval.

Digital cinematography

Shooting movies with digital movie cameras, not film. Nearly all movies are now shot digitally with cameras specifically made for movies, differing in many ways from those made for television. The output offers RAW data that includes the full range of brightness captured by the image sensors, allowing color grading, format changes, etc. to be executed as a part of the digital intermediate process. The camera’s image sensor is typically about the size of full frame 35mm film delivering at least 4096 x 2160 (4K) sized images, with the same movie look, including a narrow depth of field.

By contrast, television cameras are designed to work live. They include front-end processing for gamma correction, set-up for the required color look, and clipping to suit home viewing conditions. Sensors are usually smaller, typically half-inch size, delivering the much deeper depth of field that we expect from TV, which is very useful when covering live events such as sports.

See also: VFR


Discontinuous 2:3

It is common for electronic editing to be performed after telecine. When editing is performed on 2:3 material there is potential for disruptions to the 2:3 sequence: 3-field sequences adjacent to other 3-field sequences, and 2-field sequences adjacent to other 2-field sequences. There are also cases where single fields are present that are not part of any sequence (orphan fields). These disruptions caused by editing create a “broken 2:3 sequence”.
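
A clean 2:3 (pulldown) sequence maps successive film frames alternately to 2 and then 3 video fields. A minimal sketch of the cadence that editing can break (the function name and frame labels are illustrative, not from any standard):

```python
# Generate the field cadence of a clean 2:3 pulldown: film frames are
# mapped alternately to 2 fields and 3 fields (24 film frames -> 60 fields).
def pulldown_cadence(frames):
    fields = []
    for i, frame in enumerate(frames):
        count = 2 if i % 2 == 0 else 3
        fields.extend([frame] * count)
    return fields

# Four film frames A, B, C, D become ten fields: AA BBB CC DDD.
print("".join(pulldown_cadence("ABCD")))  # AABBBCCDDD
```

Editing such material on arbitrary field boundaries is what produces the adjacent 3-field sequences, adjacent 2-field sequences and orphan fields described above.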

Display resolutions

The computer industry has developed a series of display resolutions which span television’s SD, HD and UHD; a selection is listed below. The availability of hardware to support these resolutions has benefited, and will continue to benefit, television and digital film.

All use square pixels and none correspond exactly to television formats so attention to size and aspect ratio is needed when using computer images on TV and vice versa.

Type      Size          M Pixels   Aspect ratio
VGA       640 x 480     0.31       4:3      1.33
SVGA      800 x 600     0.48       4:3      1.33
XGA       1024 x 768    0.79       4:3      1.33
SXGA      1280 x 1024   1.31       5:4      1.25
UXGA      1600 x 1280   2.05       5:4      1.25
WUXGA     1920 x 1200   2.31       16:10    1.60
QXGA      2048 x 1536   3.15       4:3      1.33
QSXGA     2560 x 2048   5.24       5:4      1.25
WQSXGA    3200 x 2048   6.55       25:16    1.56
WQUXGA    3840 x 2400   9.22       16:10    1.60
4K        4096 x 2304   9.44       16:9     1.78
HXGA      4096 x 3072   12.58      4:3      1.33
WHUXGA    7680 x 4800   36.86      16:10    1.60
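
The megapixel and decimal aspect-ratio columns above can be derived directly from the pixel dimensions. A minimal sketch (the function name is ours):

```python
# Derive the megapixel count and decimal aspect ratio of a display mode.
def mode_stats(width, height):
    mpixels = round(width * height / 1e6, 2)
    ratio = round(width / height, 2)
    return mpixels, ratio

print(mode_stats(640, 480))    # (0.31, 1.33) - VGA
print(mode_stats(1600, 1280))  # (2.05, 1.25) - UXGA as listed above
```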

Video SDTV                 720 x 576 (not square pixels)
720 x 480 (not square pixels)

Video HDTV                1920 x 1080
2K DCI cinema            2048 x 1080
2K digital film             2048 x 1556*
4K TV (UHD1)              3840 x 2160
4K DCI cinema            4096 x 2160
4K digital film             4096 x 3112*
8K TV (UHD2)              7680 x 4320

*The image area of Full Frame 35mm film is usually scanned to occupy 2048 x 1536 pixels (4K – 4096 x 3072). The extra 20 (40) lines scan the black strip between successive frames, which only carries image information if the film is shot with an open gate.

See also: 2K, Aspect ratio

Website: www.fourmilab.ch/documents/howmanydots

Dynamic range

For images: the measurement of the range of brightness in a scene, expressed as a ratio or as the log10 of the ratio. Typically a lighting cameraman will try to keep a scene to less than 40:1 (log = 1.6) to avoid loss of detail in the print. A 100:1 (log = 2) contrast range in the scene is a typical maximum.

Modern cameras and film stocks have dynamic ranges in excess of 13 stops, which is greater than 8000:1.
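
Each photographic stop doubles the light, so n stops correspond to a contrast ratio of 2^n : 1. A quick check of the figures above (function name is ours):

```python
import math

# Each stop doubles the light level, so n stops span a ratio of 2**n : 1.
def stops_to_ratio(stops):
    return 2 ** stops

print(stops_to_ratio(13))         # 8192, i.e. greater than 8000:1
print(round(math.log10(100), 1))  # 2.0 - the 'log' of a 100:1 range
```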

See also: Cineon file, Stop

Dynamic Rounding

Dynamic Rounding is a mathematical technique devised by Quantel for truncating the binary word length (the number of 1s and 0s) of pixels to a specified number of bits. Rather than simply ignoring the lower bits, it uses their information to control, via a randomizer, the dither of the LSB of the truncated result. This effectively removes the artifacts, such as banding, that could otherwise be visible. Dynamic Rounding is non-cumulative on any number of passes and produces statistically correct results. Earlier attempts at a solution have involved increasing the number of bits (e.g. from 8 bits to 10 bits), making the size of LSBs smaller but not removing the inherent problem.

Some form of limiting the number of bits is required as there are many instances in digital systems where a number, representing a pixel value, uses more bits than the system normally accommodates. For example, a nominally 12-bit system might have a problem handling a 24-bit word. This has to be rectified in a way that will keep as much information as possible and not cause noticeable defects even after many processes. A common example arises in image processing which often involves multiplying the values of co-located pixels in two different images, as in digital mixing. Assuming the equipment is nominally 12-bit, the mixing produces a 24-bit result from two original 12-bit numbers. At some point this has to be truncated, or rounded, back to 12-bits, either to fit within the structure of the equipment or to be accepted by external interfaces. Simply dropping the lower bits can result in unwanted visible artifacts, especially when handling pure, noise-free, computer generated pictures.
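
A generic dithered truncation can be sketched as follows. This only illustrates the idea of using the discarded lower bits to control the dither of the LSB; it is not Quantel’s actual (licensable) Dynamic Rounding algorithm, and the names are ours:

```python
import random

# Truncate a 16-bit value to 8 bits: the discarded lower bits set the
# probability of rounding up, so their information survives on average
# rather than being thrown away.
def dither_truncate(value16, rng=random.random):
    upper = value16 >> 8       # the 8 bits we keep
    lower = value16 & 0xFF     # the 8 bits we would otherwise drop
    if rng() < lower / 256 and upper < 255:
        upper += 1             # probabilistic carry of the lost detail
    return upper

# 0x1280 has upper byte 18 and lower byte 128, so over many calls the
# result averages about 18.5 instead of always truncating to 18.
random.seed(42)
samples = [dither_truncate(0x1280) for _ in range(10000)]
print(round(sum(samples) / len(samples), 1))  # close to 18.5
```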

[Diagram: Dynamic Rounding – truncating 16 bits to 8 bits]

Dynamic Rounding is licensable from Quantel and is used in a growing number of digital products both from Quantel and other manufacturers.

See also: Digital mixing

Entry point

A point in a coded video bit stream from which a complete picture can be decoded without first having to store data from earlier pictures. In the MPEG-2 frame sequence this can only be at an I-frame, the only frames encoded with no reference to others.

Essence

The material that television programs are made of. In other words, the video, audio and any other material such as graphics and captions that are added to make up the final result.

See also: AAF

Exposure

Exposure refers to the total amount of light that falls on a film or light sensor. In a camera this is controlled by both the shutter open time and the effective lens aperture, referred to as the F-stop or T-stop number.

See also: Density, Stop

Frame

A whole television picture. A frame has a shape: its aspect ratio. Today all new TV frames have a 16:9 aspect ratio. Some motion pictures are presented on TV with a wider aspect ratio, typically with black borders above and below. A frame has a specific resolution and uses either interlaced (I) or progressive (P) scanning. Most productions now originate in an HD format of either 1280 x 720P or 1920 x 1080 (I or P) pixels. Some still use SD with 720 x 576I or 720 x 480I frames. These two SD standards do not have square pixels; all other DTV frames do. In UHD a frame has 3840 x 2160 (4K) or 7680 x 4320 (8K) pixels. UHD only uses progressive scanning. Interlace makes a relatively low frame rate of 25 or 30 f/s (shown as 50 or 60 fields/s) suitable for portraying motion quite well but, without further processing, stop-motion freezes can look poor.

Another property of a frame is its color gamut, as defined in its standard. As TV video standards have progressed, the associated color gamut has expanded; some say this is the most striking change from HD to UHD. UHD frames may also have a higher dynamic range (HDR) – again enhancing the look of the pictures. A frame also has a specific duration, usually 1/25 or 1/30 of a second. Larger frame formats, especially 4K and 8K, require faster frame rates to portray smooth movement reasonably on a big screen. See ‘Frame rate’ below.

See also: Interlace

Frame rate

The number of whole pictures per unit time, usually frames per second, f/s or Hz. There is a wide range of frame rates used with modern media, typically from 23.976 to 60 Hz. An even higher frame rate of 120 Hz is included in the ITU-R BT.2020 recommendation for UHD, because large 4K and 8K video requires high frame rates to provide good smoothness of motion on very big pictures.

Frame-rate conversion

The process of changing the vertical rate of moving pictures. This has become far more important with HD and UHD, whose standards include many frame rates: 23.98P, 24P, 25P, 29.97P, 30P, 50I, 50P, 59.94I, 59.94P, 60I and 60P Hz. Changing from one rate to another, including between interlaced and progressive scans, requires considerable technology to produce high-quality results. Further frequencies are used in other media markets, such as mobile and some web-based areas with more restricted bandwidth, where lower rates such as 15 f/s are found.

Conversions that simply drop or repeat fields or frames produce poor results – especially where the video includes a lot of motion, e.g. sports. For better results some form of motion compensation is used that analyzes the motion and uses this information to create new images at output times that fall between the input frames.

Gamma

Gamma has several meanings. In the video world, a television screen’s brightness is not linearly proportional to the amplitude of the picture signal; the relationship is approximated by a power law, the power being referred to as gamma. For a CRT (cathode ray tube) the gamma is generally calculated to be 2.6. This is compensated for in TV cameras by a gamma of 0.45, giving an overall gamma of 0.45 x 2.6 = 1.17 – adding overall contrast to help compensate for domestic viewing conditions. Today most viewers have panel screens that use one of several different technologies; usually there is a menu somewhere for selecting a gamma setting to suit your screen for the correct gamma and gamut.

In film, gamma describes the average slope of the D/Log E curve over its most linear region. For negative stocks this is approximately 0.6, for intermediate stocks this is 1.0 and for print stocks 3.0. This gives a system gamma of 0.6 x 1 x 3 = 1.8. This overall boost in contrast is much reduced due to flare and auditorium lighting conditions of cinemas.

With video now available on a wide range of devices there may be a need to revisit the gamma settings. For example, a digital film master may be versioned for mobile phones, for home TV (where viewers have LED, plasma and a few CRT displays) and for digital and film cinemas. This can be achieved by applying a suitable LUT for each version.
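
Such a camera-gamma LUT can be sketched in a few lines. This assumes a pure power law; real transfer functions such as Rec 709’s add a linear segment near black, and the function name here is ours:

```python
# Build an 8-bit LUT applying camera gamma correction (power 0.45).
def gamma_lut(gamma=0.45, size=256):
    scale = size - 1
    return [round(((i / scale) ** gamma) * scale) for i in range(size)]

lut = gamma_lut()
print(lut[0], lut[64], lut[255])  # 0 137 255 - dark tones are lifted
```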

See also: Internegative, Interpositive

Website: www.myperfectpicture.tv

Gamut

In image media this usually refers to the available range of colors from a display device, as in Color Gamut. This varies according to the color space used; YCrCb, Rec709 (HDTV), Rec2020 (UHD), DCI-P3 (digital cinema) and CMYK (paper print colors) all have different color gamuts.

See also: Illegal colors

Grading (a.k.a. color timing)

Grading is the process of adjusting the color of a clip to get the best out of the material or to match shots perhaps taken at different times or in different lighting conditions. With film, grading was traditionally performed when going from internegative to print film by controlling the exposure of the film. In television it was traditionally done off the telecine for commercials or tape-to-tape for longform programs. Either way, both processes were, by their nature, linear.

The advent of non-linear grading systems (such as Quantel’s Pablo Rio) has changed the rules for color grading and correction. While there is still a requirement for an initial technical scan for film-originated material, from this point on grading can – and often does – happen at multiple stages in the post production process. For example, color correcting individual layers within multilayer composite shots (which may be shot under different lighting conditions) to ensure that the result is harmonious within itself. In addition, non-linear editing means that scene-to-scene comparisons and corrections can be made as the edit unfolds.

This eases the final grading process when the finished work is reviewed interactively with the director/client.

Secondary color correction is aimed at controlling a particular color or a narrow range of colors – such as those on a car or product. Here typically the hue, gain and saturation can be changed. There are also several methods available for defining the object, area or ‘window’ that requires color correction such as using wipe-pattern shapes, drawing an electronic mask by hand or a combination of automatic and by-hand methods. Some of the most sophisticated tools are provided by media workstations such as Quantel’s Pablo Rio.

See also: Film scanner, Telecine

Grain management

Controlling the amount of ‘film’ grain visible on a film or digital movie. Its appearance is considered by some to add a certain look to the production. Modern DI equipment can include grain management that can increase or decrease its visibility on film or digitally originated material. Aside from aesthetics, grain affects compression systems as they see it as extra movement and so can waste bandwidth by coding it – adding another reason for controlling the amount of grain according to the different coding requirements for, say, digital cinema and mobile reception.

Granularity

Term describing limits of accuracy or resolution. For example, in editing the granularity of uncompressed component video is one frame; it can be cut on any frame boundary. The granularity of long GOP MPEG-2 is about half a second; about 12 or 15 frames for 25 or 30 f/s television. In a digital imaging system the granularity of brightness is the minimum change per sample, corresponding to the effect of one LSB change.
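
The brightness granularity of an n-bit system can be expressed as one LSB as a fraction of full scale (the function name is ours):

```python
# One LSB as a fraction of full scale in an n-bit system.
def lsb_step(bits):
    return 1 / (2 ** bits - 1)

print(round(lsb_step(8), 6))   # 0.003922 - one part in 255
print(round(lsb_step(10), 6))  # 0.000978 - one part in 1023
```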

HDR

High Dynamic Range (Imaging) techniques allow a greater dynamic range of exposure (stops) than normally possible, with the intention of accurately representing the wide brightness range of real scenes ranging from direct sunlight to deep shadows. This is sometimes used with computer-generated images or photography (often by taking several pictures of a scene, each with a different exposure setting) and it can provide a large amount of headroom for the adjustment of images in post production.

HDR can have a significant effect on the viewer experience. Some consider it the best achievable next step beyond the normal experience of HD – placing it above 4K and HFR.

High Speed Camera Systems

It has been standard practice from the early days of film to shoot at a higher frame rate (over-cranking) and play back at normal speed to produce a high-quality slow-motion effect. However, there is a limit to the speed at which film can be run through a camera. Many complex designs for film transports have been tried but they are limited to around 400 frames per second, depending on film size and framing. Early uses in military applications were picked up by the creative community for dramatic effect.

Electronic imaging systems do not have the problem of moving a mass of film past the lens, and once the challenges of high-speed digitization and storage were solved, imaging at up to 1,000,000 frames per second became possible at low resolution. Most modern cameras are capable of some over-cranking to a few hundred frames per second, and beyond that specialist cameras are available offering high resolution at speeds of several thousand frames per second.

HSM

Hierarchical Storage Management is a scheme responsible for the movement of files between the archive and the other storage systems that make up a hierarchical storage architecture. Typically there may be three layers of storage – online, near-line and offline – that make up the hierarchy that HSM manages. Managing these layers helps to run the archive and provide the required speed of access to all stored material.

Interpolation (spatial)

Defining the value of a new pixel from those of its near neighbors. For example, when re-positioning or re-sizing a digital image, for dramatic effect or to change picture format, more, fewer or different pixels are required from those in the original image. Simply repeating or removing pixels causes unwanted artifacts. For far better results the new pixels have to be interpolated – calculated by making suitably weighted averages of adjacent input pixels – to produce a more accurate result. The quality depends on the technique used: bi-cubic interpolation is generally accepted as good, and the number of pixels (points) taken into account (hence 16-point interpolation), i.e. the area of the original picture used to calculate each new pixel, also affects the quality of the result.
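
Bilinear interpolation, a simpler relative of bi-cubic, illustrates the weighted-average idea using the four nearest input pixels (names are ours):

```python
# Sample an image at a fractional position by bilinear interpolation:
# a weighted average of the four surrounding input pixels. Bi-cubic
# interpolation extends this to a 4 x 4 (16-point) neighborhood.
def bilinear(img, x, y):
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # 100.0 - midway between all four pixels
```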

See also: Anti-aliasing, Interpolation (temporal), Sub-pixel

Interpolation (temporal)

Interpolation between the same point in space (pixel) on successive frames. It can be used to provide motion smoothing and is extensively used in standards converters to reduce the judder caused by changes of field, or frame, rates such as between 50 and 60 Hz. The technique can also be adapted to create frame averaging for special effects and slow motion. Various qualities of processing are used. It can be very complex, attempting to work out how each element in successive pictures is moving in order to synthesize ‘between’ images (e.g. to convert 50 pictures into 60 pictures while still showing smooth motion).
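
The simplest (non-motion-compensated) form is a weighted blend of co-sited pixels in adjacent frames; motion-compensated converters shift pixels along estimated motion paths before blending. A sketch (names are ours):

```python
# Blend co-sited pixels of two adjacent frames; t=0.0 gives frame_a,
# t=1.0 gives frame_b. This is frame averaging, with no motion estimation.
def blend_frames(frame_a, frame_b, t):
    return [a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)]

# An output frame falling midway between two inputs (t = 0.5):
print(blend_frames([10, 20], [30, 40], 0.5))  # [20.0, 30.0]
```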

Luminance

A component of video: the black and white, or brightness, element of an image. It is often represented as Y; so the Y in Y,B-Y,R-Y, YUV, YIQ and Y,Cr,Cb is the luminance information of the signal.

In a color TV system the luminance signal is usually derived from the RGB signals originating from cameras, by a matrix or summation of approximately:

Y = 0.3R + 0.6G + 0.1B (based on ITU-R BT.601)

There are other similar equations from different TV standards. The precise values depend on the color primaries of the display standard used.
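
As an illustration of how the weights are applied per pixel (function name is ours; the Rec 709 weights shown are that standard’s published luma coefficients):

```python
# Luminance as a weighted sum of R, G and B, defaulting to the
# approximate ITU-R BT.601 weights quoted above.
def luminance(r, g, b, weights=(0.3, 0.6, 0.1)):
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

print(round(luminance(1.0, 1.0, 1.0), 4))  # 1.0 - white maps to full level
# Rec 709 (HD) uses different weights for its different primaries:
print(round(luminance(0.5, 0.5, 0.5, (0.2126, 0.7152, 0.0722)), 4))  # 0.5
```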

See also: RGB, Y (B-Y) (R-Y), Y,Cr,Cb, YUV

LUT

LUT is short for ‘Look-Up Table’. In its simplest form it consists of a finite number of positive integer input values which map to new output values. Typically a LUT will take 2^n (64, 128, 256…) input values from 0 to 2^n – 1 and map each input to a new output, although a LUT can be any size required. For example, a small LUT of size 16 might look like this when describing the math function output = input^2:

Input:   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Output:  0   1   4   9  16  25  36  49  64  81 100 121 144 169 196 225

The example shows positive integer address (input) producing an integer Output. This need not be the case; the output could be floating point or any number form needed.

Although the sample LUT above is a mathematical model, this does not have to be the case: the input can refer to any value the user desires, so LUTs are very convenient where the connection between the input values and the output is indistinct or very complex.

LUTs are very common in hardware-based systems as they can be used to simplify a mathematical process, mapping input address to output very efficiently. With today’s high-performance GPU math, the developer needs to consider whether a LUT is as efficient as just doing the math on each value.

Inputs to LUTs are typically positive integers, but it is possible to engineer a more complex form of LUT which takes a floating-point input. Here the LUT is used to look up the integer values above and below the input value and the final result is interpolated from there. For example:

Input value = 3.4

For LUT input value 3 the output = 27.5 (these values come from some other complex process)

For LUT input value 4 the output = 29.5

The final output will be 27.5 + (29.5 – 27.5) * 0.4 = 28.3.

Thus a complicated math process can be approximated by a LUT and a simple interpolation.
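
The worked example above in code (names are ours; the two LUT entries stand in for the output of some other complex process):

```python
# Look up a float input by interpolating between the integer entries
# either side of it.
def lut_lookup(lut, x):
    i = int(x)
    frac = x - i
    return lut[i] + (lut[i + 1] - lut[i]) * frac

lut = {3: 27.5, 4: 29.5}
print(round(lut_lookup(lut, 3.4), 1))  # 28.3, as in the example above
```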

This process can be expanded further into multi-dimensional LUTs where the Output is a result of multiple Input values.

In image processing systems LUTs can be used, for example, for gamma correction, color space conversion (RGB to XYZ, YUV to RGB) and color correction.

See also: Color cube

MAM

Media Asset Management is used in modern broadcast and post production that increasingly depends on file-based operation rather than the use of tape. MAM can track and manage all aspects of the use and repurposing of media assets so it is clear which assets have been used where. This can help both in technical operations such as adjusting an edit, as well as commercial requirements such as billing.

Moiré

A distortion that appears as patterning on images where two similar fine patterns overlap; for example, two fine gratings can appear to produce diagonal lines. The effect can appear even when one of the patterns is normally invisible, such as the sampling structure of the image. In a good imaging system this should be avoided by the use of filters but, for instance, the fine detail of a grid pattern may suddenly collapse and appear as curves or diagonal lines as the camera zooms in and the pattern detail nears the digital sampling frequency.

The occurrence of Moiré patterning changes with the picture scanning system: the higher the picture’s resolution, the finer the pattern that will cause Moiré. For example, quite a small dog-tooth pattern worn by a news presenter could cause it in SD, but with HD the pattern has to be twice as fine.

Motion Adaptive Processing

Motion adaptive processing is responsive to the output of a motion detector.  The motion detector may work on a global or a local basis in the picture, and may be binary or may measure on a continuous scale the amount of motion, or the confidence that an object or region is moving.  Motion adaptive processing controls a mix between processing that is optimized for static detail and processing that is optimized for moving areas.

Motion Compensated Processing

Motion compensated processing is responsive to the output of a motion estimator.  A motion estimator usually works on a local basis in the picture and measures not only the existence of motion but also its speed and direction.  Motion compensated processing typically controls spatiotemporal filters that track the motion of each part of the picture.

MTF

The Modulation Transfer Function is a measure of spatial resolving power. It can refer to a medium, such as film, or a lens, or any part of the scene-to-screen chain. It is akin to frequency response in electronic images. To assess the MTF of film, it is exposed to special test images comprising sine-wave bars of successively higher frequencies. The results on the processed film are assessed by measuring its density over microscopically small areas to obtain peak-to-trough values for the different frequencies. These results should then be corrected to allow for the response of the lens, the test film itself and any D/Log E non-linearities.

In a practical film system, the film images pass through many components including the camera lens, intermediate stocks and contact printing to the projection lens. Each of these has its own MTF and the system MTF can be calculated as follows.

MTFsystem = MTF1 x MTF2 x MTF3 etc
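
A one-line illustration of cascading component MTFs (the function name and values are ours, purely illustrative):

```python
# System MTF at a given spatial frequency is the product of the
# component MTFs at that frequency.
def system_mtf(*component_mtfs):
    result = 1.0
    for mtf in component_mtfs:
        result *= mtf
    return result

# e.g. camera lens 0.9, negative stock 0.8, print stock 0.7:
print(round(system_mtf(0.9, 0.8, 0.7), 3))  # 0.504 - detail contrast halved
```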

See also: Resolving power

PAM

Production Asset Management could be considered a subset of MAM. It is intended to coordinate the work of all stages involved in post production, and so ensure it runs efficiently and meets planned deadlines and budgets. It can enhance collaboration, manage down to a detailed level for edit suites and media storage, and manage those contributing other elements, such as special effects. Management of items such as the original video, audio, EDLs, proxies, versioning, content protection and backup can be included. But PAM is not tightly defined, and some say that it ends when the media file is ‘flattened’ – when all the effects, corrections, cuts, mixes, etc. are applied and resolved to produce one layer of video and the finished tracks of audio.

Restoration

Hiding or removing the defects acquired by old (archive) material and content. Digital technology has enabled many new and easy-to-use procedures to provide fast and affordable restoration. These range from fully automated systems – that depend on recognizing generic faults and treating them – to hands-on operations that offer access to appropriate toolsets – often presented as ‘brushes’.

These techniques have been applied to both television and film, and have succeeded in making many old archives available for the ever-hungry TV channels.

Sequence Detection

This is the act of finding film frame boundaries. For “perfect” PsF or 2:3 sequences this will produce a regular pattern of frames; for “non-perfect” sequences the pattern will not be regular and might have discontinuities, at edit points for example.


Temporal conversion without the use of interpolation. If the input and output field or frame rates are not identical, then field or frame drops or repeats must occur.

X´Y´Z´

A mathematically defined absolute color space, CIE X´Y´Z´, also known as the CIE 1931 color space, was created by the International Commission on Illumination (CIE) in 1931. It was little heard of in the digital media industry until X´Y´Z´ was selected by DCI as the color space for digital cinema.