Category Archives: D


D-ILA

Digital Direct Drive Image Light Amplifier. Technology developed by Hughes-JVC for video projection at up to large screen sizes for home theaters. Digital images are displayed by a CMOS chip with a reflective liquid-crystal surface, to which the electronic signals are directly addressed. The image pixel information is addressed to a matrix of ‘transistor’ cells beneath the liquid crystal, which responds directly to voltage level; the gray scale is determined by the voltage set on each pixel. The reflection of the projector lamp light focused on the chip produces an image. The technology has been used up to cinema-screen size and can display 4K resolution.


D1

A digital video tape recording format working to the ITU-R BT.601 4:2:2 standard using 8-bit sampling. The tape is 19 mm wide and allows up to 94 minutes to be recorded on a cassette.

Introduced in 1986, Sony’s D1 VTR set a benchmark as it was the first uncompressed component digital tape format. It offered very high quality, only small degradation over many re-record generations and, with its high chrominance bandwidth, allowed excellent chroma keying in post production. Despite the advantages, D1 use was limited by high cost and is rarely found today. However the term ‘D1’ is still occasionally used to imply uncompressed component digital recording – ‘D1’ quality.

See also: D2, DVTR


D10

This refers to Sony’s MPEG IMX VTRs that record I-frame-only, 4:2:2-sampled MPEG-2 SD video at 50 Mb/s onto half-inch tape. In bit rate, IMX sits between Betacam SX and Digital Betacam. A Gigabit Ethernet card is available, which has caused some to dub it the eVTR, as it can then be considered more as a ‘storage medium’ for digital operations.


D2

A VTR standard for digital composite (coded) PAL or NTSC signals. It uses 19 mm tape and records up to 208 minutes on a single cassette. Neither cassettes nor recording formats are compatible with D1. Being relatively costly and not offering the advantages of component operation, the format has fallen from favor; VTRs have not been manufactured for many years.

See also: Component, D1, D3, DVTR


A film-style digital camera from Arri that is highly modular and uses a single Super35mm-sized CMOS image sensor with a Bayer filter, producing the same field of view and depth of field as traditional 35mm motion picture film cameras. Like film cameras it uses a detachable optical viewfinder, widely preferred to the electronic versions on other cameras. It is capable of frame rates from 1-60 Hz and produces 1080-line images in 4:2:2 or 4:4:4.



D3

A VTR standard using half-inch tape cassettes for recording digitized composite (coded) PAL or NTSC signals sampled at 8 bits. Cassettes record 50 to 245 minutes. Since this uses a composite PAL or NTSC signal, the characteristics are generally as for D2, except that the half-inch cassette size allowed a full family of VTR equipment to be realized in one format, including a camcorder. D3 is rarely used today.


D4

There is no D4. Most DVTR formats hail from Japan, where 4 is regarded as an unlucky number.


D5

A VTR format, introduced in 1994 by Panasonic, that uses the same cassette as D3 but records uncompressed component signals sampled to ITU-R BT.601 recommendations at 10-bit resolution. With internal decoding, D5 VTRs can play back D3 tapes and provide component outputs.

D5 offers all the performance benefits of D1, making it suitable for high-end post production as well as more general studio use. Besides servicing the current 625 and 525 line TV standards the format extends to HDTV recording by use of about 4:1 compression (HD-D5).


D6

A little-used digital tape format which uses a 19mm helical-scan cassette tape to record uncompressed HDTV material. The Thomson VooDoo Media Recorder was the only VTR based on D6 technology. The format has passed into history.


DAM

Digital Asset Management is about managing and controlling the receiving, cataloging, storage, retrieval, distribution, archiving and deletion of digital assets. In the media world these assets are typically digital video, audio and images. DAM can help users access the materials needed to make a new program. The program is then an asset in itself, which can be handled by MAM – media asset management. With thousands of TV channels now broadcasting or narrowcasting, TV programs made for one channel may well be useful to others.


DAS

Direct Attached Storage, typically on hard disks, is available only to a single user, as opposed to NAS, which can be available to everyone on the network. Typically DAS uses the SCSI, SAS or Fibre Channel protocol and provides add-on storage for servers that maintains a high data rate and fast access.


DASH

Dynamic Adaptive Streaming over HTTP, a.k.a. MPEG-DASH, makes use of standard HTTP (Hypertext Transfer Protocol – as on the internet) web servers to provide a high quality adaptive bit-rate streaming video service. As video or audio content is usually quite long, and a lot of data, DASH divides the content into small segments which are sent as a series of small HTTP files. The server makes the content available in a range of bit rates so the receiver can select the highest quality version that still provides continuous video or audio – no freezes or breaks. This also means the service can adapt to fit the available bandwidth as the speed of the internet connection varies.
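The receiver-side choice described above can be sketched as a simple rate-selection rule. This is an illustrative sketch only – the function and variable names are invented, and real DASH clients use more elaborate buffer- and throughput-based heuristics:

```python
# Illustrative sketch of DASH-style adaptive bit-rate selection (not a real
# client): pick the highest advertised representation that fits the measured
# network throughput, with a safety margin so playback does not stall.
def select_representation(available_kbps, measured_kbps, margin=0.8):
    """Return the highest bit rate (kb/s) no greater than the usable bandwidth."""
    usable = measured_kbps * margin
    candidates = [r for r in sorted(available_kbps) if r <= usable]
    # Fall back to the lowest rate rather than stopping playback entirely.
    return candidates[-1] if candidates else min(available_kbps)

# Example: the server offers segments at several rates; the link measures 4 Mb/s.
rates = [500, 1200, 2500, 5000, 8000]      # kb/s
print(select_representation(rates, 4000))  # -> 2500 (4000 * 0.8 = 3200 usable)
```

In a real player the selection is re-evaluated for every segment, which is what lets the stream adapt as connection speed varies.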

DASH-IF is the DASH Industry Forum that is made up of 67 industry members from around the world.

DASH-PG is the Promoters’ Group. Its membership includes manufacturers, content owners, operators, and more. The goal is to promote DASH as a widely available solution for adaptive streaming.

DASH-VLC is a VideoLAN (VLC) media player designed to work with DASH. VLC players are generally available as free downloads and can play a wide range of video formats.


Data carousel

This is a file system that rotates and delivers its content into a network at a defined point in a cycle, for example, teletext pages. It is a method to make a large amount of information or data files available within a reasonably short time following a request. The data is inserted into the digital broadcast transport stream.

See also: IP over DVB

Data recorders

Machines designed to record and replay data. They usually include a high degree of error correction to ensure that the output data is absolutely correct and, due to their recording format, the data is not easily editable. This contrasts with digital video recorders, which conceal missing or incorrect data by repeating adjacent areas of picture, and which are designed to allow direct access to every frame for editing. Where data recorders are used for recording video there has to be an attendant ‘workstation’ to produce signals for video and audio monitoring, whereas VTRs produce these signals directly. Although many data recorders are based on VTR designs (and vice versa), VTRs are more efficient for handling pictures and sound while data recorders are more appropriate for data. Data recorders are useful for archiving and, being format-independent, can be used in multi-format environments.

See also: LTO


An HD-SDI dual link arrangement configured to carry live uncompressed DCI-sized 2K footage: 2048×1080 pixel images at 24P with 12-bit 4:4:4 sampling in X´Y´Z´ color space. This requires a constant data rate of at least 1913 Mb/s – too much for a single HD-SDI link, which is designed for the 1248 Mb/s of 1080/60I/30P 10-bit 4:2:2 video plus audio. Hence the use of a dual link.
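The figures quoted can be checked with some back-of-envelope arithmetic. Note this counts active picture only, so the results fall slightly short of the quoted totals, which include overheads such as audio and ancillary data:

```python
# Back-of-envelope check of the data rates quoted above (active picture only;
# audio and ancillary data are excluded, so figures are slightly below the
# quoted totals).
def active_rate_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e6

# DCI 2K, 24P, 12-bit 4:4:4 -> 3 x 12 = 36 bits per pixel
dci_2k = active_rate_mbps(2048, 1080, 24, 36)
# HD 1080/30P (or 60I), 10-bit 4:2:2 -> 20 bits per pixel on average
hd = active_rate_mbps(1920, 1080, 30, 20)
print(round(dci_2k))  # 1911 Mb/s - more than a single HD-SDI payload
print(round(hd))      # 1244 Mb/s
```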

See also: Dual link


DC28

The SMPTE Task Force on Digital Cinema, intended to aid digital cinema development by determining standards for picture formats, audio standards, compression, etc.


DCI

Digital Cinema Initiatives, LLC was formed in 2002 with members including Disney, Fox, MGM, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios. Its purpose was to establish and document specifications for an open architecture for Digital Cinema components that ensures a uniform and high level of technical performance, reliability and quality control. It published the Digital Cinema System Specification in July 2005 (freely available at their website) and established a set of technical specifications that allowed the industry to roll out Digital Cinema. It is a measure of the DCI’s success that now well over half of the world’s cinemas are digital.

There are three levels of images, all with a 1:1 pixel aspect ratio and 12-bit 4:4:4 sampling in X´Y´Z´ color space:

Level   Picture Size    Aspect Ratio   Frame Rate
1       4096 x 2160     1.90:1         24 fps
2       2048 x 1080     1.90:1         48 fps
3       2048 x 1080     1.90:1         24 fps

The specification includes requirements for JPEG 2000 image compression, X´Y´Z´ color space and a maximum playout bit rate of 250 Mb/s. To prevent piracy by copying the media files there is AES 128 encryption (Advanced Encryption Standard able to use keys of 128, 192, and 256 bits to encrypt and decrypt data in blocks of 128 bits). There is also forensic marking to deter and trace the bootlegger’s camcorder pointed at the screen. Such schemes include Philips’ forensic watermarking or Thomson’s NexGuard watermarking.

DSM → DCDM → DCP → DCDM* → Image and Sound

DCI describes a workflow from the output of the feature post production or DI, termed the Digital Source Master (DSM), to the screen. The Digital Cinema Distribution Master (DCDM) is derived from the DSM by a digital cinema post production process, and played directly into a digital cinema projector and audio system for evaluation and approval.

The approved DCDM is then compressed, encrypted and packaged for distribution as the Digital Cinema Package (DCP). At the theater, it is unpackaged, decrypted and decompressed to create a DCDM* with images visually indistinguishable from those of the original DCDM.



DCT (compression)

Discrete Cosine Transform. As a basic operation of MPEG video compression it is widely used as the first stage of compressing digital video pictures. DCT operates on blocks (hence DCT blocks) of the picture (usually 8 x 8 pixels), resolving their content into frequencies and giving an amplitude for each frequency component. In itself DCT may not reduce the amount of data, but it prepares the data for the following processes that will. Besides MPEG, the JPEG, VC-9, WM9 and DV compression schemes all depend on DCT. The use of blocks can lead to visible blocking artifacts on screen where data rates are too low.
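A direct (slow but clear) implementation of the 8×8 2-D DCT-II shows how a block of pixels resolves into frequency coefficients; production codecs use fast factored versions of the same transform:

```python
import math

# A minimal 8x8 2-D DCT-II, as applied to picture blocks in DCT-based codecs.
# Real encoders use fast factored algorithms; this direct form is for clarity.
N = 8

def c(k):
    # Normalization factor: sqrt(1/N) for the DC term, sqrt(2/N) otherwise.
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct_8x8(block):
    """block: 8x8 list of pixel values -> 8x8 list of frequency coefficients."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat (constant) block has all its energy in the DC coefficient:
flat = [[100] * N for _ in range(N)]
coeffs = dct_8x8(flat)
print(round(coeffs[0][0]))  # 800 (8 x the constant value); all others ~0
```

Note that no data has been discarded at this point; the reduction comes later, when the mostly near-zero higher-frequency coefficients are quantized and entropy coded.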

See also: DV, ETSI, JPEG, MPEG-2, MPEG-4, Wavelet

Decibel (dB)

Units of measurement expressing ratios of power that use logarithmic scales to give results related to human aural or visual perception. Many different attributes are given to a reference point termed 0 dB, for example a standard level of sound or power with subsequent measurements then being relative to that reference. Many performance levels are quoted in dB, for example signal to noise ratio (S/N).

Decibels are given by the expression:

10 log10 P1/P2

where power levels 1 and 2 could be audio, video or any other appropriate values.
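As a worked example of the expression above (the helper name is invented for illustration):

```python
import math

# Power ratio to decibels, as in the expression above: 10 * log10(P1/P2).
def db(p1, p2):
    return 10 * math.log10(p1 / p2)

print(db(100, 1))  # 20.0 dB: a 100:1 power ratio
print(db(2, 1))    # ~3.01 dB: doubling the power adds about 3 dB
```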


Deliverables are the TV programs, promotions, advertising and other material that are delivered to broadcasters for distribution on their networks: TV channels, agencies, Web, DVD, mobile phones, etc. The business of making deliverables has hugely expanded with the widening scope of digital media platforms. Today it is a big and complex business.

As industry bodies like the UK’s DPP race ahead with the implementation of AS-11 for broadcast TV program delivery, the Advanced Media Workflow Association (AMWA) has released an update to the MXF Commercial Delivery specification, AS-12.

Traditionally deliverables have been made by copying from the edited masters of programs, commercials, and any other material ready for transmission. This process involved copying the edited master to the customer’s required video format and recording medium for delivery to the broadcaster or publisher.

In the UK the industry has avoided dealing with a potentially huge selection of formats by the implementation of the DPP (Digital Production Partnership) standard which all users accept. This is an implementation of the AMWA AS-11 format.

Making deliverables can involve treatments such as pan and scan, color grading and standards conversion. If they are for use on mobile phones then different aspect ratios and sizes may apply, requiring processes such as conversion and image re-framing. Some internet video such as the BBC’s iPlayer catch-up service uses a lower frame rate to help reduce the data rate.

To provide the best quality for all users the original material should be produced in the highest required viewing resolution. Also the frame rate should be high enough to work well with the largest required format. For instance the 1080/24P or 25 HD format can be used to make high quality versions for most television formats. This top-down approach preserves quality as the HD image size means any resizing will be downward, making big pictures smaller, rather than up-res’d blow-ups from smaller pictures. For frame-rate conversion, over half a century of running movies on TV has established straightforward ways to fast play 24 f/s material at 25 f/s and to map it to 60 Hz vertical rates using 3:2 pull-down for television. However, increases in the size and brightness of consumer displays mean that the resulting motion judder is becoming less acceptable, so more sophisticated frame rate conversion is becoming necessary.

Combinations of fast replay (24 to 25 f/s), 3:2 pull-down, down-res and ARC are applied to output the required image format, vertical rate and aspect ratio. For example, fast play of 1080/24P at 104.16 percent speed produces 1080/25P. Down-res produces 16:9 images in 576 lines and then the 25 progressive frames are read as 50 interlaced fields to create the 576/50I TV format widely used in Europe and the old ‘PAL’ countries. ARC is applied for 4:3 output. Changing from 24P to 60I vertical scans is achieved using 3:2 pull-down. Increasingly, 104.16% fast play is becoming less acceptable because of the audio pitch shift, so people are demanding high quality 24 to 25 frame rate conversion.
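The fast-play factor and its audio side effect are simple to compute. 25/24 is 104.1666…%, which the text truncates to 104.16%:

```python
import math

# The 24 -> 25 f/s 'fast play' factor and its side effect on audio pitch.
speed = 25 / 24                    # ~1.0417, i.e. about 104.17% play speed
semitones = 12 * math.log2(speed)  # pitch shift if the audio is not corrected
print(round(speed * 100, 2))  # 104.17 (%)
print(round(semitones, 2))    # 0.71 - nearly three-quarters of a semitone sharp
```

That fraction of a semitone is the pitch shift that makes uncorrected fast play objectionable to critical listeners.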

Today a majority of movies are produced in the DCI 4K (4096 x 2160) standard for distribution. This is close in size to the UHDTV-1 4K TV standard and can be re-sized to 2K, which is only slightly wider than 1080-line HD (2048 against 1920 pixels per line) with the same 1080 lines. However the sampling of TV is 4:2:2 whereas digital cinema uses 4:4:4, meaning that the TV material will not look quite as sharp as the 2K (or 4K!) made-for-movie content. Also TV and cinema use different color spaces.


But that material may need further editing, for example, a commercial for showing in a different country may require a new soundtrack and text for pricing. There may be censorship issues so shots need adjusting or replacing. Also the growth of digital media platforms means that more work may be required for a wider deliverables market – with escalating numbers of versions required. Some scenes of a digital film master may need re-grading for domestic TV viewing or further processing to fit the bandwidth and screen limitations of mobile viewing.

This type of work may be best undertaken with the finished program in uncommitted form, where the source material and all the tools and their settings are available, so that any part of the program can be re-accessed, changed and the whole program re-output exactly as required for the target medium and without compromising the quality of other deliverables.

See also: Digital Production Partnership, 3:2 Pull-down, ARC, Down-res, Up-res

Delta editing

A Quantel term for a form of server-based nonlinear editing where only the change information – the EDL along with any new video frames and audio created in the editing process – is sent back to the server from a connected edit station. For cuts-only editing, the stored result would simply be an EDL. If transitions are created, such as dissolves, wipes, DVE moves, etc., these represent new frames that are processed by the editing workstation and sent to the server to be included as part of the EDL. Delta editing contrasts with dub editing and some NLE technology where every frame in the final edit has to be copied to a new file.

See also: FrameMagic

Densitometer (Film)

An instrument used to measure the optical density of film, usually over small areas of images. The instrument operates by measuring the light passing through the film. When measuring movie film density, two sets of color filters are used: Status M density for camera negative and intermediate stocks (orange/yellow-based) and Status A for print film, to correctly align with the sensitometric requirements of the stocks.


Density (Film)

The density (D) of a film is expressed as the log of its opacity (O).

D = Log10 O

Using a logarithmic expression is convenient as film opacity has a very wide range and the human sense of brightness is also logarithmic.
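A quick worked example of the formula (helper name invented for illustration):

```python
import math

# Film density as the log of opacity: D = log10(O).
def density(opacity):
    return math.log10(opacity)

# A density range of 0-4 covers opacities from 1:1 to 10,000:1 -
# this compression of a huge range is why the log scale is convenient.
print(density(100))    # 2.0
print(density(10000))  # 4.0
```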

See also: Film basics (Tutorial 2)


Depth grading (Stereoscopic)

A post production process where negative and positive parallax convergence are adjusted. This is not only a creative tool used to place objects on the Z axis but also a way to ensure that stereoscopic content can be comfortably watched on the screen size it is intended for. For example, in a post suite the director may be viewing a film on a small projection screen but the final delivery format may be a large theater or IMAX.

In practice the eyes have little ability to diverge (up to one degree is considered the rule of thumb) and this is especially a consideration in depth grading for very large screens with positive parallax images, where the distance between the left and right representations of an image may be very widely spaced.

Sometimes the term Depth Budget is used to refer to the allowed combined value of positive and negative parallax and expressed as a percentage of screen width.
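As an illustration, a depth budget expressed as a percentage of screen width translates directly into pixel parallax for a given delivery resolution. The 2% figure below is purely hypothetical, not a recommendation, and the function name is invented:

```python
# Convert a depth budget (percentage of screen width) into the combined
# positive-plus-negative parallax in pixels for a given image width.
def parallax_pixels(budget_percent, image_width):
    return image_width * budget_percent / 100

# A hypothetical 2% budget on a 1920-pixel-wide image:
print(parallax_pixels(2, 1920))  # 38.4 pixels of combined parallax
```

The same percentage represents a much larger physical separation on a big theater screen than on a monitor, which is why content must be depth graded for its intended screen size.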

See also: Parallax


Distributed File System. Used to build a hierarchical view of multiple file servers and shared files on a network. Instead of having to think of a specific machine name for each set of files, users only need to remember one name as the key to a list of shared files on multiple servers. DFS can route a client to the closest available file server and can also be installed on a cluster for even better performance and reliability. Medium-to-large sized organizations are most likely to benefit from using DFS while smaller ones should be fine with ordinary file servers.


Diagnostics

Tests to check the correct operation of hardware and software. As digital systems continue to become more complex, built-in automated testing becomes an essential part of the equipment for tests during both manufacture and operation. This involves some extra hardware and software to make the tests operate. Digital systems with such provisions can often be quickly assessed by a trained service engineer, so speeding repair.

Remote diagnostics can make use of an Internet connection to monitor and test a product at the customer’s site while working from a central service base. Thus expert attention can be used immediately on site.

Interdependent multiple systems, such as a video server and its clients, may require simultaneous diagnostics of all major equipment. Here, combining data links from a number of pieces of networked equipment, as with Quantel’s R-MON, effectively extends the Remote Diagnostics to larger and more complex situations.

See also: Diagnostics

Digital Asset Management (DAM)

Content is worthless if you cannot find it. If you can find it easily and have rights to use or sell it, it has value. The content will vary in size from a whole movie, to a few frames of news footage. Digital Asset Management (a.k.a. media asset management or digital asset warehousing) is about the storage and use of both digital content and its metadata. The latter comprises descriptions of the content, such as text and thumbnail images, stored in a database for easy searching and management. The metadata is linked to the content files, images or video to allow retrieval.

Digital Betacam

Introduced in 1993 as a development of the original analog Betacam VTR, it records SD component video and audio digitally onto Betacam-style cassettes. It uses mild intra-field compression to reduce the ITU-R BT.601 sampled video data by about 2:1, providing a good portable and much cheaper alternative to the uncompressed D1 format.

Digital Cinema

Refers to the digital distribution and projection of cinema material. With virtually all films now using the DI process, the next step is to distribute and replay digital material. Thanks to the DCI’s Digital Cinema System Specification (July 2005), a set of standards is in place and the majority of cinemas worldwide are now converted to digital.

The digital cinema chain includes DCI-compliant equipment for mastering, which generally uses JPEG 2000 compression encoding and high-strength encryption. At the cinema, players and digital projectors handle decryption and image decoding, with predominantly DLP or D-ILA technologies displaying the images and providing consistently high quality viewing on large screens – no matter how many times the movie is shown.

Digital Cinema offers efficient duplication and distribution, high-level security and greater flexibility. In addition, stereo cinema (a.k.a. 3D) is easy to set up and present using just one projector (not two), along with a left and right eye selective viewing system.

Many digital cinemas can also show live (or recorded) footage originating in HDTV. These presentations typically include live sports, big events and theatrical performances from around the world, creating new markets for cinemas.

See also: DCI, DLP-cinema, D-ILA, SXRD

Digital cinematography

Shooting movies with digital movie cameras, not film. Nearly all movies are now shot digitally with cameras specifically made for movies, differing in many ways from those made for television. The output offers RAW data that includes the full range of brightness captured by the image sensors, allowing color grading, format changes, etc. to be executed as a part of the digital intermediate process. The camera’s image sensor is typically about the size of full frame 35mm film delivering at least 4096 x 2160 (4K) sized images, with the same movie look, including a narrow depth of field.

By contrast, television cameras are designed to work live. They include front-end processing for gamma correction, set-up for the required color look and clipping to suit home viewing conditions. Sensors are usually smaller, half-inch size, delivering the much deeper depth of field that we expect from TV, which is very useful when covering live events such as sports.

See also: VFR


Digital disk recorder (DDR)

Computer disk systems that record digital video and generally intended as drop-in replacements for VTRs or as video caches to provide extra digital video sources for far less cost than a DVTR. They have the advantages of not requiring pre-rolls or spooling but they are not necessarily able to randomly access all video frames in realtime. DDRs can also offer the higher data rates needed for uncompressed recordings at an economic price, for SD as well as HD and 2K/4K (film) resolutions.

See also: Linear, True random access

Digital intermediate (DI)

The DI refers to the process that accepts exposed film or digital footage and eventually delivers edited and graded masters – today usually the Digital Source Master (DSM), though these can also be film internegatives for production labs to generate large numbers of release prints. Initially the term arose to describe a digital version of the traditional chemical intermediate lab, where film is graded, cut and copied from camera negative to several interpositives, then to many internegatives, which are distributed to the production labs to make the release prints for cinemas. These processes can create thousands of release prints from a single set of camera negatives.

Although the boundaries may vary, generally the DI ‘lab’ accepts data from digital movie cameras, and outputs an edited and graded DSM or, possibly, an internegative master for a whole or part of a feature. However, the operation and decision-making processes of the digital workflow greatly differ from the traditional film lab, not least because of the interactive nature of the digital operation.

In the DI lab, decisions become on-screen reality and are seen in full context as they are prepared – no waiting for the ‘chemical’ lab. Grading, dissolves, cuts and effects can be seen immediately – on a big screen if needed. The interactive process allows more creativity and gives complete confidence that the decisions work well. Also grading can take place after the footage is cut together, so the shots are seen, as graded, in context.

For those still using film, whole movies can be sent for output to the digital lab’s film recorder, exposing 1000ft reels at a time with no final grading required. For digital cinemas, the DSM – digital cinema’s equivalent of the internegative – enters a short process to create the Digital Cinema Package (DCP) for distribution.

See also: DCI

Digital keying and chroma keying

Digital keying differs from analog chroma keying in that it can key uniquely from any one of the billion colors of component digital video. It is then possible to key from relatively subdued colors, rather than relying on highly saturated colors which can cause color-spill problems on the foreground.

A high quality digital chroma keyer examines each of the three components Y, B-Y, R-Y or R, G, B of the picture and generates a linear key for each. These are then combined into a linear key for the final keying operation. The use of three keys allows much greater subtlety of selection than is possible with a chrominance-only key.
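A minimal sketch of a linear key (illustrative only, not any product's algorithm): the key value ramps linearly with distance from the backing color, here measured in just the two chroma components, with invented names and thresholds:

```python
# Illustrative linear keyer (a sketch, not any product's algorithm): the key
# value ramps linearly between two distance thresholds from the backing color,
# here measured in (Cb, Cr). 0 = fully background, 1 = fully foreground.
def linear_key(cb, cr, key_cb, key_cr, inner=10.0, outer=40.0):
    dist = ((cb - key_cb) ** 2 + (cr - key_cr) ** 2) ** 0.5
    if dist <= inner:
        return 0.0  # close to the backing color: keyed out
    if dist >= outer:
        return 1.0  # far from it: solid foreground
    return (dist - inner) / (outer - inner)  # linear ramp in between

# A pixel exactly on the backing color, and one half-way up the ramp:
print(linear_key(100, 140, 100, 140))  # 0.0
print(linear_key(125, 140, 100, 140))  # 0.5 (distance 25, mid-ramp)
```

The linear ramp is what allows soft edges and semi-transparent areas (smoke, glass, hair) to composite naturally, rather than the hard on/off switching of early analog keyers.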

See also: Chroma keying, Keying

Digital Leader and Digital Projection Verifier

Digital Leader and the Digital PROjection VErifier (DPROVE) are two products that are based on SMPTE RP 428-6-2009. The Digital Leader is aimed at digital movie post production and cinemas. In post it can be added as a leader and/or footer (end) of Digital Cinema Distribution Master (DCDM) ‘reels’ so allowing a quick quality check.

DPROVE is a set of Digital Cinema Packages (DCPs) that help check projector performance and alignment, as well as the sound’s synchronization with the pictures.

See also: DCI

Digital mixing

Digital mixing requires ‘scaling’ each of two digital signals and then adding them. A and B represent the two TV signals and K the positional coefficient or value at any point of the mix between them (i.e. equivalent to the position of the transition arm on a switcher desk). In a digital system, K will also be a number, assumed here as 10-bit resolution to provide a smooth mix or dissolve.

Mathematically this can be shown as:

A x K = (Mix)1
B x (1-K) = (Mix)2
Result = (Mix)1 + (Mix)2

Note that such math also applies to soft-edge keys and any transparency created between two images. As such it is a fundamental part of video processing and good quality results are essential.

When two 10-bit numbers are multiplied together, the result is a 20-bit number (see Binary). When mixing, it is important to add the two 20-bit numbers to obtain an accurate result. This result must then be truncated or rounded to 10 bits for transmission to other parts of the digital system.

Truncation by simply dropping the lower bits of the partial result (Mix)1 or (Mix)2, to 12 bits, or even 14 bits, will introduce inaccuracies. Hence it is important that all partial results, e.g. (Mix)1 and (Mix)2, maintain 20-bit resolution. The final rounding of the result to 10 bits can reveal visible 1-bit artifacts – but these can be avoided with careful rounding techniques such as Dynamic Rounding.
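The mix arithmetic can be sketched as follows. Note that the plain round() at the end stands in for the more sophisticated techniques, such as Dynamic Rounding, needed to avoid visible 1-bit artifacts; the function name is invented:

```python
# A sketch of the 10-bit mix described above: products are kept at full
# precision and only the final result is rounded back to 10 bits.
def mix_10bit(a, b, k):
    """a, b, k are 10-bit values (0-1023); k is the mix position."""
    full = a * k + b * (1023 - k)  # 20-bit partial results, summed intact
    return round(full / 1023)      # a single, final rounding to 10 bits

print(mix_10bit(1023, 0, 1023))  # 1023: fully on A
print(mix_10bit(0, 1023, 0))     # 1023: fully on B
print(mix_10bit(100, 200, 512))  # 150: near the mid-point of the mix
```

Truncating either partial product before the addition would lose precision that the final rounding can never recover, which is the point made above.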

See also: Binary, Dynamic Rounding

Digital negative

Digital image data that contains all the detail (spatial and dynamic/latitude) held in original camera negative (OCN) film. This allows all latitude headroom to be included on the material for use in a DI process, so adjustments of color and exposure can be made to the same degree as with film.

See also: Camera negative

Digital Production Partnership (DPP)

The Digital Production Partnership was formed by leading UK public service broadcasters to help the television industry maximize the opportunities, and be aware of the challenges, of digital television production. It works in two areas: technology, and shared thinking, information and best practice.

In 2011 it created common technical standards for the delivery of SD and HD video to the major broadcasters. In 2012 a common format, structure and wrapper for the delivery of programs by digital file, including metadata, was agreed. The DPP file-based delivery standard became the UK’s standard in late 2014. For this the DPP worked with the AMWA to create a subset of its AS-11 file specification that can be edited for breaks, re-timed, and can have additional language tracks, captions and subtitles wrapped into the file container.


Digitizer

A system which converts an analog input to a digital representation. Examples include analog to digital converters (ADCs) for television signals, touch tablets and mice. Some of these – mouse and touch tablet, for example – are systems that take a spatial measurement and present it to a computer in a digital format.

See also: A/D, Into digits (Tutorial 1), GUI

Digitizing time

Time taken to record existing footage into a disk-based editing system. The name suggests the material is being played from an analog source, which is now rare. A better term is ‘loading’. Today the use of high-speed networking can enable background loading – eliminating digitizing time at the edit suite.

Digitizing time is often regarded as dead time, but it need not be. It can be reduced if some initial selection of footage has been made – for example by logging. Also, footage can be marked while loading and so be instantly available as a rough cut on completion, so speeding the final edit. The process is sometimes referred to as Triage, particularly where it is used to select and pre-edit clips from a live feed.

Discontinuous 2:3

It is common for the electronic editing process to be performed post telecine. When editing is performed on 2:3 material there is a potential for disruptions in the 2:3 sequence. These can be 3-field sequences adjacent to other 3-field sequences, and 2-field sequences adjacent to other 2-field sequences. There are also cases where single fields are present that are not part of any sequence (orphan fields). These disruptions caused by editing create a ‘broken 2:3 sequence’.
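A broken sequence can be detected mechanically. This sketch (with an invented function name) scans a list of field counts per frame for adjacent equal runs and orphan fields:

```python
# Sketch of broken 2:3 cadence detection: after clean telecine, the field
# counts per frame alternate 2,3,2,3,... Adjacent equal counts or a stray
# single field ('orphan') indicate a break introduced by editing.
def find_breaks(cadence):
    breaks = []
    for i in range(1, len(cadence)):       # (index 0 is not checked here)
        if cadence[i] == cadence[i - 1]:   # 3 next to 3, or 2 next to 2
            breaks.append(i)
        if cadence[i] == 1:                # orphan field
            breaks.append(i)
    return breaks

print(find_breaks([2, 3, 2, 3, 2, 3]))  # [] - clean sequence
print(find_breaks([2, 3, 3, 2, 1, 3]))  # [2, 4] - adjacent 3s and an orphan
```

Real cadence detectors in standards converters and deinterlacers work on field differences in the pictures themselves, since the editing metadata is usually long gone by the time the material is processed.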

Discrete 5.1 Audio

Often referred to as ‘5.1’, this reproduces six separate (discrete) channels – Left, Center, Right, Left Rear, Right Rear, and sub-woofer (the .1). All five main channels have full frequency response which, together with the separate low-frequency sub-woofer, creates a three-dimensional effect. Only one sub-woofer is required because the human ear is not directional at low frequencies. Discrete 7.1 Audio is similar but includes more speakers.

Discrete 5.1 audio is made available with many HD television broadcasts and is specified on HD DVD and BD media.

See also: Dolby Digital

Display resolutions

The computer industry has developed a series of display resolutions which span television’s SD, HD and UHD; a selection is listed below. The availability of hardware to support these resolutions has benefited, and will continue to benefit, television and digital film.

All use square pixels and none correspond exactly to television formats so attention to size and aspect ratio is needed when using computer images on TV and vice versa.

Type      Size           M Pixels   Aspect ratio
VGA       640 x 480      0.31       4:3      1.33
SVGA      800 x 600      0.48       4:3      1.33
XGA       1024 x 768     0.79       4:3      1.33
SXGA      1280 x 1024    1.31       5:4      1.25
UXGA      1600 x 1280    2.05       5:4      1.25
WUXGA     1920 x 1200    2.31       16:10    1.60
QXGA      2048 x 1536    3.15       4:3      1.33
QSXGA     2560 x 2048    5.24       5:4      1.25
WQSXGA    3200 x 2048    6.55       25:16    1.56
WQUXGA    3840 x 2400    9.22       16:10    1.60
4K        4096 x 2304    9.44       16:9     1.77
HXGA      4096 x 3072    12.58      4:3      1.33
WHUXGA    7680 x 4800    36.86      16:10    1.60

Video SDTV                 720 x 576 (not square pixels)
720 x 480 (not square pixels)

Video HDTV                1920 x 1080
2K DCI cinema            2048 x 1080
2K digital film*            2048 x 1556
4K TV (UHD1)              3840 x 2160
4K DCI cinema            4096 x 2160
4K digital film*            4096 x 3112
8K TV (UHD2)              7680 x 4320

*The image area of Full Frame 35 mm film is usually scanned to occupy 2048 x 1536 pixels (at 4K, 4096 x 3072). The extra 20 (or 40) lines scan the black strip between successive frames, which carries image information only if the film is shot with an open gate.
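The ‘M Pixels’ and aspect-ratio columns in the table above follow directly from the pixel dimensions. A short script (format names and sizes taken from the table; the aspect ratio is reduced with the GCD of the two dimensions) can derive them:

```python
from math import gcd

# Sizes from the display-resolutions table above.
formats = {
    "VGA":   (640, 480),
    "SXGA":  (1280, 1024),
    "QSXGA": (2560, 2048),
    "4K":    (4096, 2304),
}

for name, (w, h) in formats.items():
    g = gcd(w, h)  # reduce the ratio to lowest terms
    print(f"{name:6s} {w} x {h}  {w * h / 1e6:.2f} Mpixel  "
          f"{w // g}:{h // g} ({w / h:.2f})")
```

Note that some marketing names do not use the fully reduced ratio – e.g. WUXGA’s 1920 x 1200 reduces to 8:5 but is conventionally quoted as 16:10.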

See also: 2K, Aspect ratio



DisplayPort

A digital display interface developed by the Video Electronics Standards Association (VESA). At first glance the connectors look like HDMI, but DisplayPort is designed to cover a wide range of image standards, carries packetized data and typically connects a computer to a screen. It can also carry audio and other data. In 2006, version 1 offered up to 8.64 Gb/s of data over a 2-meter cable. Capabilities have grown and version 1.3 offers an overall bandwidth of 32.4 Gb/s – enough to carry up to 5K (5120 x 2880) RGB, and 8K UHD at 60 Hz with 4:2:0 sampling. It also has an HDMI 2.0 compatibility mode.



Dither

In digital television, analog original pictures are converted to digits: a continuous range of luminance and chrominance values is translated into a range of finite numbers. While some analog values will correspond exactly to numbers, inevitably others will fall in between. Given that there will always be some degree of noise in the original analog signal the numbers may dither by one Least Significant Bit (LSB) between the two nearest values. This has the advantage of providing a means by which the digital system can describe analog values between LSBs to give a very accurate digital rendition of the analog world.


If an image is produced by a computer, or is the result of digital processing, it may have virtually no noise, so the digital dither may not exist – which can lead to contouring effects. One cure is to use greater bit depth, e.g. 10 bits instead of 8; this reduces the size of the problem but may not solve it. Another approach is offered by Dynamic Rounding. Invented by Quantel, this can intelligently add dither to pictures to give more accurate, better-looking results.


DivX

A video codec created by DivX, Inc. which can compress long video segments into relatively small data spaces while maintaining reasonable picture quality. It uses MPEG-4 or H.264/AVC compression to balance quality against file size, and is commonly associated with transferring DVD audio and video to hard disks.


DLNA

Digital Living Network Alliance, a nonprofit organization founded by Sony in 2003 that aims to deliver an interoperability framework of design guidelines, based on open industry standards, to achieve cross-industry digital convergence.

The resulting ‘digital home’ should then be a network of consumer electronic, mobile and PC devices that transparently co-operate to deliver simple, seamless interoperability that enhances and enriches users’ experience.



DLP

(Texas Instruments Inc.) Digital Light Processing is the projection and display technology which uses digital micromirror devices (DMD) as its light modulator. It is a collection of electronic and optical subsystems which enable picture information to be decoded and projected as high-resolution digital color images. DLP technology enables the making of very compact, high brightness projectors. Over 100,000 movie theater screens worldwide use its DLP Cinema technology, and many more DLP-driven systems are used by consumers.

The technology also allows for higher frame rates (beyond 24f/s) and stereo 3D presentations.

See also: DMD



DMB

Digital Multimedia Broadcasting. Developed and first adopted in South Korea (2005), DMB is a digital transmission system for television, radio and datacasting to mobile devices/phones and can operate over satellite (S-DMB) or terrestrially (T-DMB). DMB is based on the Eureka 147 Digital Audio Broadcasting (DAB) standard, and has similarities with DVB-H, a competing mobile TV standard.

T-DMB (ETSI standard TS 102 427 and TS 102 428) uses MPEG-4 H.264 for video and HE-AAC V2 for the audio, together encapsulated in an MPEG-2 transport stream (TS). The encoded TS is broadcast on DAB in data stream mode. Application devices include mobile phones, portable TV, and PDAs as well as data/radio for cars.

See also: DVB-H, MediaFLO


DMD

(Texas Instruments Inc.) Digital Micromirror Device. A silicon CMOS integrated circuit used to modulate light in a wide variety of applications. The most common use is in digital projection systems where one or more devices are used to create high quality color images.

The device is a memory circuit whose elements are arranged in a display-format array. Each element has a minute square hinged aluminum mirror on its front which can tilt about its diagonal axis. The electrical charge in the memory cell causes the mirror to deflect from one tilt position to the other. By changing the memory data, the mirror can be switched very rapidly (at about 15 kHz) to create pulses of light whose duration causes the pixel to appear at a particular brightness, so producing the display of gray scales. DMDs are produced at different sizes according to the resolution required. The smallest contains over 500,000 mirrors. Devices for 2K (2048 x 1080) and 4K (4096 x 2160) DCI formats are widely used in digital cinema projectors as well as in home cinemas.
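The gray-scale mechanism described above amounts to binary-weighted pulse-width modulation: each bit of the pixel value gates a light pulse proportional to that bit’s weight, and the eye integrates the pulses. A small sketch (an illustration of the principle only, not TI’s actual drive scheme) shows how an 8-bit value maps to the fraction of frame time the mirror is ‘on’:

```python
def on_time_fraction(value, bits=8):
    """Fraction of the frame time a mirror spends 'on' for a gray value.

    Each set bit contributes a pulse whose duration is proportional to
    the bit's weight. (Illustration only - not TI's actual drive scheme.)
    """
    assert 0 <= value < 2 ** bits
    total = 2 ** bits - 1  # sum of all bit-plane durations (255 for 8 bits)
    on = sum(2 ** b for b in range(bits) if value & (1 << b))
    return on / total

print(on_time_fraction(0))              # black: 0.0
print(on_time_fraction(255))            # full white: 1.0
print(round(on_time_fraction(128), 3))  # mid gray: 128/255 ≈ 0.502
```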

See also: DLP



DNxHD

Avid’s Digital Nonlinear Extensible High Definition codec uses intra-frame compression and is designed for multi-generation compositing with reduced storage and bandwidth requirements. It has four levels to match quality requirements and manageable data volumes: 145 and 220 Mb/s at 8 bits, and 220 Mb/s at 10 bits, all at 4:2:2. There is also a 36 Mb/s version for HD offline. This offers HD post at SD data rates, or less, meaning that infrastructure and storage requirements can be the same as for uncompressed SD. DNxHD is standardized by SMPTE as VC-3.



DOCSIS

Data Over Cable Service Interface Specification enables the addition of high-bandwidth data transfers to existing cable-TV (CATV) systems. Developed by CableLabs and others, it was first released in 1997, and is now widely used by cable television operators to provide Internet access over their existing hybrid fiber-coaxial (HFC) infrastructure. The communication is via the free spectrum that is otherwise used for TV channels: 6 MHz in North America and 8 MHz in Europe – where EuroDOCSIS, a variation of DOCSIS, offers greater data rates. DOCSIS 3.0, introduced in 2011, has provided services up to 100 Mb/s, and beyond. That same year UK broadband provider Virgin Media announced trials with a download bandwidth of 1.5 Gb/s and upload of 150 Mb/s. DOCSIS 3.1 (2013) introduced an OFDM modulation system (as used in DVB TV) using smaller 20 kHz to 50 kHz-wide subcarriers which can be aggregated into a block of up to 200 MHz.



Dolby Digital

Dolby Digital (DD/AC-3) is a digital audio compression system that uses auditory masking for compression. It works with anything from 1 to 5.1 channels of audio and can carry Dolby Surround coded two-channel material. It applies audio masking over all channels and dynamically allocates bandwidth from a ‘common pool’. Dolby Digital is a constant bit rate system supporting rates from 64 kb/s to 640 kb/s; typically 64 kb/s for mono, 192 kb/s for two-channel, 320 kb/s for 35 mm cinema 5.1, 384 kb/s for Laserdisc/DVD 5.1 and 448 kb/s for DVD 5.1.

DVD players and ATSC receivers with Dolby Digital capability can provide a backward-compatible mix-down by extracting the five main channels and coding them into analog Dolby Surround for Pro Logic playback.

Dolby Digital Plus offers more channels at better quality and supports data rates up to 6 Mb/s. It is backward-compatible with Dolby Digital players and is offered as 7.1 channels on HD DVD and Blu-ray with data rates up to 3 and 1.7 Mb/s respectively.

Dolby E is an audio compression scheme which can encode/decode up to eight channels plus metadata – typically 5.1 mix (six channels) and Rt/Lt (Right Total/Left Total surround) or stereo two-channel mix, etc – onto two AES/EBU bitstreams at 1.92 Mb/s (20-bit audio at 48 kHz). Thus video recorders, typically with four channels, can support the greater channel requirements of DVD and some DTV systems (e.g. ATSC). With audio frames matching video frames, Dolby E is a professional distribution coding system for broadcast and post production which maintains quality up to 10 code/recode cycles.

Dolby E is widely used in HD production to carry 5.1 sound. As it is locked to video frames it has to be decoded and re-coded to work with a frame-rate conversion process.

Dolby Surround (a.k.a. Dolby Stereo, Dolby 4:2:4 Matrix) offers analog coding of four audio channels – Left, Center, Right, Surround (LCRS), into two channels referred to as Right Total and Left Total (Rt, Lt). On playback, a Dolby Surround Pro Logic decoder converts the two channels to LCRS and, optionally, a sub-woofer channel. The Pro Logic circuits steer the audio and increase channel separation. The Dolby Surround system, originally developed for the cinema, is a method of getting more audio channels but suffers from poor channel separation, a mono limited bandwidth surround channel and other limitations. A Dolby Surround track can be carried by analog audio or linear PCM, Dolby Digital and MPEG compression systems.

Dolby TrueHD is a lossless compression system designed for high-definition disk-based media; decoded output is claimed to be bit-for-bit identical to the studio master. Running at up to 18 Mb/s, it supports up to eight 24-bit/96 kHz channels on the HD DVD and Blu-ray Disc standards, and is expected to feature in A/V receivers and downloadable media. It can connect over HDMI.

See also: Auditory masking, ATSC, Discrete 5.1



Dominance

Field dominance defines whether field type 1 or field type 2 represents the start of a new interlaced TV frame. Usually it is field 1, but there is no fixed rule. Dominance may go unnoticed until flash fields occur at edits made on existing cuts. Replay dominance set opposite to that of the recording can cause a juddery image display. Much equipment, including Quantel’s, allows the selection of field dominance and can handle either.


Down-res

Decreasing the size of video images to fit another format. Typically this reduces an HD format to an SD format and, as the input images represent over-sampled versions of the output, the final quality should be excellent – often better than an SD-shot original. Moving from 1080/60I to 480/60I is down-resing. Technically the process involves spatial interpolation to reduce size while retaining quality, color correction to compensate for the difference between HD and SD color standards, and possibly re-framing to fit 16:9 HD onto 4:3 SD. Note that down-res does not include any change of frame rate.
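As a toy illustration of the spatial-interpolation step (real down-converters use multi-tap polyphase filters, not this crude box average), a 2:1 size reduction can be sketched as:

```python
def box_downscale_2x(image):
    """Halve each dimension of an image by averaging 2x2 pixel blocks.

    image: list of rows of luma values. A simple box filter - shown only
    to illustrate the idea; production down-res uses far better filters.
    """
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            s = (image[y][x] + image[y][x + 1] +
                 image[y + 1][x] + image[y + 1][x + 1])
            row.append(s // 4)  # average of the 2x2 block
        out.append(row)
    return out

src = [[10, 20, 30, 40],
       [10, 20, 30, 40]]
print(box_downscale_2x(src))   # [[15, 35]]
```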

See also: Down conversion, Format conversion, Standards conversion, Up-res, Versioning

Drop-frame timecode

Alteration of timecode to match the 1000/1001 speed offset of NTSC transmissions and of many newer HD and UHD video formats used in ‘NTSC’ countries, including the USA, Canada and Japan. 525-line NTSC at a nominal 30 f/s actually runs at 29.97 f/s, and 720 and 1080-line HD, as well as 2K and 4K UHD, all include 1000/1001-offset versions of the nominal 24, 30 and 60 Hz frame rates. So even 24 f/s movies run at 23.98 f/s.

With drop-frame timecode, the timecode is locked to the video and must make up the 1-in-1001 difference. It does this by skipping two frame counts every minute, while the video itself remains continuous: 10:35:59:29 advances to 10:36:00:02. At every tenth minute the skip is omitted. This brings the timecode almost exactly into step with the video.
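The counting rule above can be expressed compactly. The sketch below (a common formulation of the arithmetic, not any standard’s reference code) converts a running frame count at 30000/1001 f/s into drop-frame timecode:

```python
def frames_to_dropframe(frame_count):
    """Convert a frame count (at 30000/1001 f/s) to drop-frame timecode.

    Two frame numbers are skipped at the start of every minute except
    each tenth minute, so 17982 frames elapse per 10 timecode minutes
    and 1798 per 'dropped' minute.
    """
    tens, rem = divmod(frame_count, 17982)
    if rem > 1:
        frame_count += 18 * tens + 2 * ((rem - 2) // 1798)
    else:
        frame_count += 18 * tens
    ff = frame_count % 30
    ss = (frame_count // 30) % 60
    mm = (frame_count // 1800) % 60
    hh = frame_count // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_dropframe(1799))   # 00:00:59;29
print(frames_to_dropframe(1800))   # 00:01:00;02 (frames ;00 and ;01 skipped)
print(frames_to_dropframe(17982))  # 00:10:00;00 (no skip at the tenth minute)
```

The semicolon separator before the frames field is the usual convention for marking timecode as drop-frame.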

Timecode that does not use drop-frame is called non-drop-frame timecode. Confusion arises when the wrong one is used!

See also: 1000/1001



DSL

Digital Subscriber Line. A general term for a number of techniques for delivering data over the telephone local loop (between exchange and user) – the copper wires that make up the so-called ‘last mile’. Referred to generically as xDSL, these offer much greater data speeds than modems on analog lines – up to 32 Mb/s downstream to the computer and 1 Mb/s or more upstream.

See also: ADSL


DSLAM

Digital Subscriber Line Access Multiplexer. Usually located at the local telephone exchange, it connects multiple customer DSL lines to a high-speed ATM internet backbone line. It is the device that communicates with subscribers’ ADSL (and SDSL) modems, creating a network similar to a LAN but without Ethernet distance restrictions, to provide an Internet connection for subscribers.

Dual link

The bandwidth of SDI and HD-SDI links allows the transport of uncompressed 4:2:2-sampled video and embedded digital audio. Dual links are often used to carry greater payloads, such as video with key (4:2:2:4), RGB (4:4:4) and RGB with key (4:4:4:4). Dual link for SD is defined in ITU-R BT.799-2 and RP 175-1997. Dual link at HD is used for stereo 3D and 50/60P.

A dual link is arranged to allow some meaningful monitoring of each of the two links with standard equipment. So RGB is sent with Link A carrying full bandwidth G, half R and B (4:2:2). Link B is just half bandwidth R and B (0:2:2). RGB + Key is sent as (4:2:2) and (4:2:2).

See also: 0:2:2, 4:2:2, HD-SDI, SDI, Y Cr Cb



Duplex

(Full duplex) refers to communications that are simultaneously two-way (send and receive), like the telephone. Half-duplex systems switch between send and receive.


DV

This digital VCR format was developed jointly by Hitachi, JVC, Sony, Matsushita, Mitsubishi, Philips, Sanyo, Sharp, Thomson and Toshiba. It uses 6.35 mm (quarter-inch) tape in cassettes, used in a range of products to record 525/60 or 625/50 video for the consumer (DV) and professional markets (Panasonic’s DVCPRO and Sony’s DVCAM).

All recorders use digital intra-field DCT-based ‘DV’ compression (about 5:1) to record 8-bit component digital video based on 13.5 MHz luminance sampling. The consumer versions and DVCAM sample video at 4:1:1 (525/60) or 4:2:0 (625/50) and provide two 16-bit/48 or 44.1 kHz, or four 12-bit/32 kHz, audio channels onto a 4-hour 30-minute standard cassette (125 x 78 x 14.6 mm) or smaller 1-hour ‘mini’ cassettes (66 x 48 x 12.2 mm). The data rate is 25 Mb/s. The professional DVCPRO models make use of DV’s hierarchical design, being x2 and x4 versions of the basic 25 Mb/s format.

DVCAM is Sony’s professional variant of DV which records 15-micron tracks on a metal evaporated (ME) tape. As stated, video sampling is 4:2:0 for 625/50 (PAL) and 4:1:1 for 525/60 (NTSC). Audio is four 12-bit, 32 kHz channels, or two 16-bit 48 kHz channels.

DVCPRO, introduced in 1995, is Panasonic’s development of native DV which records 18-micron tracks onto metal particle tape. It uses native DV compression at 5:1 from a 4:1:1, 8-bit sampled source. There are 12 tracks per frame for 625/50 and 10 tracks per frame for 525/60; tape speed is 33.8 mm/s and the data rate 25 Mb/s. It includes two 16-bit digital audio channels sampled at 48 kHz and an analog cue track. Both linear (LTC) and Vertical Interval Timecode (VITC) are supported.

DVCPRO 50 is a x2 variant of DVCPRO with a tape speed of 67.7 mm/s, a data rate of 50 Mb/s and using 3.3:1 video compression, it is aimed at the studio/higher quality end of the market. Sampling is 4:2:2 to give enhanced chroma resolution, useful in post production processes (e.g. chroma keying). Four 16-bit audio tracks are provided.

DVCPRO HD is a series of HD VTRs that are x2 variants of DVCPRO 50: tape speed 135.4 mm/s and a total data rate of 100 Mb/s. Sampling is 4:2:2. There are eight 16-bit, 48 kHz audio tracks. Formats supported include 1080I and 720P. This tape format has been assigned as D12. Later versions use AVC-Intra compression.

DVCPRO P2 is a DVCPRO recording system. It records the DV data in MXF format onto P2 memory cards – PC plug-in cards each carrying four SD (Secure Data) memory chips to combine capacity and aggregate transfer speeds, currently up to 640 Mb/s, many times realtime. The original P2 card offered 2 GB but the capacity keeps doubling; the range now goes from 4 GB up to 64 GB. These are able to hold from 16 – 256 minutes of SD DVCPRO (25 Mb/s) down to 4 – 64 minutes of HD DVCPRO 100 (100 Mb/s). Multiple cards can be used together, and swapped during continuous recording.

Recent developments include the expressP2 card, to accommodate high frame-rate 1080 HD AVC-ULTRA recording (above 60 f/s), as well as 4K.

See also: AVC-Intra



DVB

Digital Video Broadcasting: the group, with over 200 members in 25 countries, which developed the preferred scheme for digital broadcasting in Europe. Initially the DVB Group put together a portfolio of broadcast standards, the major ones including a satellite system, DVB-S, and now the more efficient DVB-S2; a matching cable system, DVB-C (and now DVB-C2); and a digital terrestrial system, DVB-T (and now DVB-T2). DVB-H is a newer broadcast standard designed for terrestrial operation with hand-held devices, typically mobile TVs, phones and tablets, where power must be conserved.

DVB-S (1995) is the original DVB forward error coding and modulation standard for satellite television. DVB-S is used for both broadcast network feeds and for direct broadcast satellite services.

DVB-S2 (2003) is used for all new European digital satellite multiplexes, and satellite receivers will be equipped to decode both DVB-S and DVB-S2. Currently its main use is to distribute HDTV. DVB-S2 is based on DVB-S adding two key features: allowing changing encoding parameters in realtime (VCM, Variable Coding and Modulation) and ACM (Adaptive Coding and Modulation) to optimize the transmission parameters for various users for a claimed net performance gain of 30 percent (ie, more data transmitted for more channels).

DVB-T is a transmission scheme for digital terrestrial television (DTT). Its specification was approved by ETSI in February 1997 and DVB-T services started in the UK in autumn 1998.
As with the other DVB standards, MPEG-2 sound and vision coding are used. It uses Coded Orthogonal Frequency Division Multiplexing (COFDM) modulation, which enables effective operation in very strong multipath environments (those that cause picture ‘ghosting’ in analog TV reception), meaning it can operate an overlapping network of transmitting stations using the same frequency. In the areas of overlap, the weaker received signals are rejected. Where transmitters carry the same programming, the overlapping signals provide more reliable reception; this is known as a single-frequency network (SFN).

DVB-T2 (2009). The DVB TM-T2 technical group worked on a more advanced DTT standard focusing on modulation, channel coding and signal layout. The resulting DVB-T2 offers a 50 percent increase in payload capacity under similar reception circumstances. Its error correction coding, shared with DVB-S2 and DVB-C2, combines LDPC (Low Density Parity Check) coding with BCH (Bose-Chaudhuri-Hocquenghem) coding, offering a very robust signal. Along with other changes it is more flexible, supporting SD, HD, UHD, mobile TV, radio, or any combination thereof.

DVB-C (1994) for digital transmission via cable transmits an MPEG-2 or MPEG-4 family digital audio/digital video stream, using a QAM modulation with channel coding.

DVB-C2 (2010) almost doubles the payload so relieving the many cable networks that were running at near capacity.

The DVB digital TV standards are used around the world with notable exceptions being ATSC in the USA and Canada, ISDB in Japan, DMB-T/H (Digital Multimedia Broadcast-Terrestrial/ Handheld) in China, and T-DMB in South Korea.

There are several additional DVB transmission standards that can be found on the DVB website. These include DVB-RCS2, which provides an air interface specification for low-cost two-way satellite broadband VSAT (very small aperture terminal) systems to provide dynamic, demand-assigned transmission capacity for a wide range of users. It provides a broadband Internet connection with no need of local terrestrial infrastructure. Data speeds of several tens of Mb/s down to terminals, and upload speeds of 10 Mb/s or more, can be achieved.

DVB-CPCM DVB Content Protection and Copy Management is a digital rights management standard which is under development. This is intended as a practical rights management system primarily for European digital television; but other countries may adopt it.

CPCM allows adding information to digital content, such as TV programs, that shows how content may be used by other CPCM-enabled devices. Content providers can store flags with the content to indicate how it may be used. All CPCM-enabled devices should obey these flags, allowing or denying its movement, copying to other CPCM devices, controlling use on other equipment, and observing time limits.

The full technical specification of DVB-CPCM is available for free downloading at the DVB website.


See also: COFDM, IP, IP over DVB


DVD

Digital Versatile Disk (aka digital video disk) – invented and developed by Philips, Sony, Toshiba, and Panasonic in 1995 as a high-density development of the compact disk. It is the same size as a CD, 12 cm in diameter, but stores upwards of 4.38 GB of actual data (seven times CD capacity) on a single-sided, single-layer disk. DVDs can also be double-sided or dual-layer – storing even more data.

The capacities commonly available are:

DVD-5              Single-side, single-layer         4.7 GB
DVD-9              Single-side, dual-layer           8.5 – 8.7 GB
DVD-10            Double-sided, single-layer     9.4 GB
DVD-18            Double-sided, dual-layer       17.08 GB

DVD-5 and DVD-9 are widely used. However the double-sided disks are quite rare, partly because they are more difficult to make and they cannot carry a label.

There are various types of DVD including:

DVD-R – recordable DVDs with a data capacity of 4.38 GB are popular and low priced.

DVD+R – a competing recordable format; its dual-layer variant has roughly the total capacity of two DVD-Rs.

DVD-RAM – re-recordable DVD, re-usable up to around 100,000 times, with a capacity of 4.38 GB (single-sided). Some camcorders have used these – they offer instant access to shot material and loop-recording features, useful when waiting for an event, like a goal, to happen. At home it can provide a removable-media alternative. A particular feature is that it can record and replay at the same time.

DVD-Video – combines the DVD with MPEG-2 video compression, with multichannel audio, subtitles and copy protection capability.

To maximize quality and playing time DVD-Video uses variable bit rate (VBR) MPEG-2 coding, where the bit rate varies with the demands of the material. Typically a 24 f/s movie in the 525/60 TV format would use an average bit rate of 3.5 Mb/s, but for sections with a great deal of movement it could peak at 8 or 9 Mb/s. Only the 24 f/s are coded onto the disk, the 3:2 pull-down conversion to 30 f/s taking place in the player. This allows a 120-minute 24 f/s movie to fit on a DVD-5. To store video (not film) with 50 or 60 discrete fields per second, the bit rate tends to average around 6 or 7 Mb/s, but again this depends on the running time, original material and desired picture quality.

CBHD China Blue High-Definition or China High Definition DVD is a variant of HD DVD which is only used within China.

Multi-channel audio: DVD-Video supports PCM, MPEG and Dolby Digital audio, for anything from mono, stereo and Dolby Surround to 5.1 channels. Digital Theatre Sound (DTS) and Sony Dynamic Digital Sound (SDDS) are options. Up to eight separate audio streams can be supported, allowing multiple languages, audio description, director’s commentary, etc. For example, a release may have 5.1 Dolby Digital English, two-channel Dolby Digital Spanish with Dolby Surround, and mono French.

Region coding: Disks can be region-coded so they only play in a particular region (as defined in the player), a set of regions, or be ‘code-free’.

The region numbers are:

  1. Canada, US, US Territories
  2. Japan, Europe, South Africa, Middle East (including Egypt)
  3. Southeast Asia, East Asia (including Hong Kong)
  4. Australia, New Zealand, Pacific Islands, Central America, South America, Caribbean
  5. Former Soviet Union, Indian Subcontinent, Africa (also North Korea, Mongolia)
  6. China

See also: HD DVD, Blu-ray Disc, Dolby Digital, Dolby Surround, MPEG-2, MPEG-4


DVE

Digital Video Effects refers to the manipulation of video to re-size images or objects in 2D and 3D space. Originally such facilities were supplied as separate machines (boxes) but now they are included as an integral part of systems such as vision mixers/switchers, editing systems and graphics equipment. The list of effects varies but may well include zoom and position, rotation, 3D perspective, page turns, picture bending and curls, blurs, and more. Picture quality and control also vary widely depending on the processing techniques used.

See also: Axis, Global


DVTR

Digital Video Tape Recorder. Sony showed the first DVTR for commercial use in 1986, working to the ITU-R BT.601 component digital video standard and the associated D1 standard for DVTRs. It used 19 mm cassettes recording 34, 78 or (using thinner tape) 94 minutes of uncompressed video. Due to its relatively high cost, its use was somewhat limited to high-budget projects. Other formats, D2 and D3, which recorded digitized composite PAL and NTSC, offered lower-cost solutions at the price of composite (‘coded’) quality.

Thomson’s VooDoo Media Recorder (designated as D6) was the last uncompressed digital videotape recording system.

All subsequent DVTRs record compressed component video. Provided there is no re-coding or transcoding, DVTRs do not suffer ‘generation loss’ from re-recording, as tape noise, moiré, etc. are absent and dropouts are mostly invisible thanks to sophisticated error correction and concealment techniques. However, tape is subject to wear and tear, and the resulting errors and dropouts necessitate complex error-concealment circuitry. In extreme cases multiple passes can introduce cumulative texturing or other artifacts.

Today, videotape is rarely used as hard disks and solid-state memory are widely used in acquisition, post production and program distribution. However much video archiving makes use of computer tape formats.

See also: Betacam SX, D5, D9, DV, DVCPRO, HDCAM, LTO

Dynamic range

For images: the measurement of the range of brightness in a scene expressed as a ratio, or the Log10 of the ratio. Typically a lighting cameraman will try to keep a scene to less than 40:1 (Log = 1.6) to avoid loss of detail in the print. A 100:1 (Log = 2) contrast range in the scene is a typical maximum.

Modern cameras and film stocks have dynamic ranges in excess of 13 stops, which is greater than 8000:1.
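The relationship between stops and contrast ratio is simply a power of two, and the Log10 figures quoted above follow directly:

```python
from math import log10

def stops_to_ratio(stops):
    # Each stop doubles the light level, so the ratio is 2^stops.
    return 2 ** stops

print(stops_to_ratio(13))    # 8192 -> a >13-stop camera exceeds 8000:1
print(round(log10(40), 1))   # a 40:1 scene range as Log10 -> 1.6
print(round(log10(100), 1))  # a 100:1 scene range as Log10 -> 2.0
```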

See also: Cineon file, Stop

Dynamic Rounding

Dynamic Rounding is a mathematical technique devised by Quantel for truncating the binary word length (the number of 1s and 0s) of pixels to a specified number of bits. Rather than simply ignoring the lower bits, it uses their information to control, via a randomizer, the dither of the LSB of the truncated result. This effectively removes the artifacts, such as banding, that could otherwise be visible. Dynamic Rounding is non-cumulative on any number of passes and produces statistically correct results. Earlier attempts at a solution have involved increasing the number of bits (e.g. from 8 bits to 10 bits), making the size of LSBs smaller but not removing the inherent problem.

Some form of limiting the number of bits is required as there are many instances in digital systems where a number, representing a pixel value, uses more bits than the system normally accommodates. For example, a nominally 12-bit system might have a problem handling a 24-bit word. This has to be rectified in a way that will keep as much information as possible and not cause noticeable defects even after many processes. A common example arises in image processing which often involves multiplying the values of co-located pixels in two different images, as in digital mixing. Assuming the equipment is nominally 12-bit, the mixing produces a 24-bit result from two original 12-bit numbers. At some point this has to be truncated, or rounded, back to 12-bits, either to fit within the structure of the equipment or to be accepted by external interfaces. Simply dropping the lower bits can result in unwanted visible artifacts, especially when handling pure, noise-free, computer generated pictures.
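As a simplified 16-to-8-bit illustration of the idea (Quantel’s actual, patented algorithm differs in detail), the discarded low byte can be used as the probability of rounding the truncated LSB up, so that sub-LSB information survives on average rather than banding:

```python
import random

def dynamic_round(value16, rng=random.random):
    """Truncate a 16-bit value to 8 bits, letting the discarded low byte
    set the probability of bumping the LSB.

    Simplified illustration only - not Quantel's exact method.
    """
    high, low = value16 >> 8, value16 & 0xFF
    if high < 255 and rng() < low / 256:
        high += 1
    return high

# A value halfway between two 8-bit levels (low byte 0x80) rounds up
# about 50% of the time, so averaged over many pixels the half-LSB
# information is preserved instead of producing a visible band.
random.seed(0)
avg = sum(dynamic_round(0x1280) for _ in range(10000)) / 10000
print(round(avg, 1))   # ≈ 18.5, i.e. midway between output levels 18 and 19
```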

[Diagram: Dynamic Rounding – 16 bits to 8 bits]

Dynamic Rounding is licensable from Quantel and is used in a growing number of digital products both from Quantel and other manufacturers.

See also: Digital mixing