
1.125 MHz

This is a common frequency that can be derived from 625/50 PAL (and 525/60 NTSC) and that runs through the SD, HD and UHD digital television standards using 4:4:4 or 4:2:2 sampling, including 1080-line HD and UHD at 25, 30 and 60 Hz frame rates. It can be created from the old analog (PAL or NTSC) black-and-burst signal. Because 1.125 MHz is such a common frequency, black and burst was widely used as a timing reference signal in the early days of HD.

See also: 13.5 MHz, Tri-level sync

13.5 MHz

This is the sampling frequency of luminance in SD digital television systems as defined by the ITU. It is represented by the 4 in 4:2:2. The use of the number 4 is pure nostalgia: 13.5 MHz is in the region of 14.3 MHz, the sampling rate of 4 x the NTSC color subcarrier (3.58 MHz) used at the very genesis of digital television equipment.

Reasons for the choice of 13.5 MHz belong to politics, physics and legacy. Politically it had to be global and work for both 525/60 (NTSC) and 625/50 (PAL) systems. The physics is the easy part: sampling had to be comfortably above the Nyquist rate, twice the highest luminance frequency (5.5 MHz for 625-line PAL systems), so that the signal could be faithfully reproduced from the sampled digits, i.e. sampling in excess of 11 MHz, but not so high as to produce unnecessary, wasteful amounts of data. Some math is required to understand the legacy.

The sampling frequency had to produce a static pattern on both 525 and 625-line standards, otherwise it would be very complicated to handle and, possibly, restrictive in use. In other words, the frequency must be a whole multiple of the line frequencies of both standards.

The line frequency of the 625/50 system is simply: 625 x 25 = 15,625 Hz
(NB 50 fields/s makes 25 frames/s)
So line length is 1/15,625 = 0.000064 s, or 64µs

The line frequency of the 525/60 NTSC system is complicated by its offset factor of 1000/1001 to avoid interference when transmitted. The line frequency is 525 x 30 x 1000/1001 = 15,734.265 Hz. This makes line length 1/15,734.265 = 63.5555µs

The difference between the two line lengths is 64 – 63.5555 = 0.4444µs

This time divides into 64µs exactly 144 times, and into 63.5555µs exactly 143 times. This means the lowest common frequency that would create a static pattern on both standards is 1/0.4444µs, which is 2.25 MHz.

Now, back to the physics. The sampling frequency has to be well above 11 MHz, so 11.25 MHz (5 x 2.25) is not enough. 6 x 2.25 gives the adopted luminance sampling frequency of 13.5 MHz.

Similar arguments have been applied to the derivation of sampling for HD. Here 74.25 MHz (33 x 2.25) is used for luminance sampling.
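
The arithmetic is easily checked; the short Python sketch below (purely illustrative) reproduces the figures quoted above:

```python
# Reproducing the arithmetic above: find the lowest frequency that is a
# whole multiple of both the 625/50 and 525/60 line frequencies.
line_625 = 625 * 25                  # 15,625 Hz
line_525 = 525 * 30 * 1000 / 1001    # ~15,734.265 Hz

diff = 1 / line_625 - 1 / line_525   # ~0.4444 us between line lengths
common = 1 / diff                    # lowest common frequency

print(round(common / 1e6, 4))        # 2.25 (MHz)
print(round(common / line_625, 3))   # 144.0 cycles per 625/50 line
print(round(common / line_525, 3))   # 143.0 cycles per 525/60 line
print(6 * round(common / 1e6, 4))    # 13.5  - SD luminance sampling (MHz)
print(33 * round(common / 1e6, 4))   # 74.25 - HD luminance sampling (MHz)
```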

See also: 1.125 MHz, 4:1:1, 4:2:0, 4:2:2, 4:4:4, 4fsc, Nyquist (frequency)

2.25 MHz

This is a common frequency that can be derived from 625/50 PAL (and 525/60 NTSC). 2.25 MHz, or a multiple of it, runs through all the major SD, HD and UHD digital television standards, including 1080-line HD at 25 and 30 fps, and these can all be locked to the old analog black-and-burst signal.

See also: 13.5 MHz, Tri-level sync

4:1:1

This describes a set of sampling frequencies in the ratio 4:1:1, used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. For SDTV the 4 represents the 13.5 MHz (74.25 MHz for HDTV) sampling frequency of the Y (luminance) signal, and the 1s are each 3.375 MHz for SD (18.5625 MHz for HD) for the R-Y and B-Y color difference signals (i.e. R-Y and B-Y are each sampled once for every four samples of Y).

With the color information sampled at half the rate of the 4:2:2 system, this is used as a more economic form of sampling that may be used where smaller data rates are required. Both luminance and color difference are still sampled on every line but the latter has half the horizontal resolution of 4:2:2 while the vertical resolution of the color information is maintained. 4:1:1 sampling is used in DVCPRO (625 and 525 formats), DVCAM (525/NTSC) and others.


See also: 4:2:0, 4:2:2, DV (DVCAM and DVCPRO)

4:2:0

A sampling system used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. The 4 represents the 13.5 MHz (74.25 MHz at HD) sampling frequency of Y while the R-Y and B-Y are sampled at 6.75 MHz (37.125 MHz); effectively on every other line only (ie one line is sampled at 4:0:0, luminance only, and the next at 4:2:2).

This is used in some 625-line systems where the video data rate needs to be reduced. It decreases the overall data by 25 percent against 4:2:2 sampling (see the sketch below), and the color information has a reasonably even resolution in both the vertical and horizontal directions. 4:2:0 is widely used in MPEG-2 coding, meaning that the broadcast and DVD digital video seen at home is usually sampled this way. 625 DV and DVCAM coding also use 4:2:0. However, the different H and V chroma bandwidths make it inappropriate for post applications.
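
The 25 percent figure can be verified by counting samples over a block four pixels wide and two lines high; the following is a small illustrative sketch rather than anything from a standard:

```python
# Y, Cb and Cr samples in a block 4 pixels wide and 2 lines high.
samples = {
    "4:4:4": 8 + 8 + 8,  # full chroma bandwidth on both lines
    "4:2:2": 8 + 4 + 4,  # chroma halved horizontally, every line
    "4:1:1": 8 + 2 + 2,  # chroma quartered horizontally, every line
    "4:2:0": 8 + 2 + 2,  # chroma halved horizontally, every other line
}
base = samples["4:2:2"]
for name, count in samples.items():
    print(f"{name}: {count / base:.2f}x the data of 4:2:2")
# 4:2:0 and 4:1:1 both come out at 0.75x, i.e. the 25 percent saving
# over 4:2:2, arrived at in two different ways.
```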

See also: 4:1:1, 4:2:2, DV (DVCAM), MPEG-2

4:2:2

Refers to a ratio of sampling frequencies used to digitize the luminance (Y) and color difference components (R-Y, B-Y) of an image signal. The term 4:2:2 denotes that for every four samples of the Y luminance, there are two samples each of R-Y and B-Y color difference, allowing less chrominance (color) bandwidth in relation to luminance. This compares with 4:4:4 sampling where the same full bandwidth is given to all three channels, in this case usually sampled as RGB.

The term 4:2:2 originated from the ITU-R BT.601 digital video sampling standard, where 4:2:2 sampling is the standard for digital studio equipment. The terms ‘4:2:2’ and ‘601’ are commonly (but technically incorrectly) used synonymously in TV. For SD the sampling frequency of Y is 13.5 MHz and that of R-Y and B-Y is each 6.75 MHz, providing a maximum color bandwidth of 3.375 MHz – enough for high quality chroma keying. For HD the sampling rates are 5.5 times greater: 74.25 MHz for Y, and 37.125 MHz for each of R-Y and B-Y.

The origin of the term is steeped in digital history and should strictly only be used to describe a specific format of standard definition digital television sampling. However, it is widely used to describe the sampling frequency ratios of image components (Y, B-Y, R-Y) of HD (1080-line HD is 5.5x SD, so 22:11:11 would be appropriate) and many other image formats including UHD.

See also: 13.5 MHz, Co-sited sampling, Digital keying, ITU-R BT.601, ITU-R BT.709, ITU-R BT.2020, Nyquist

4:2:2:4

This is the same as 4:2:2 but with the key signal (alpha channel) included as the fourth component, also sampled at 13.5 MHz (74.25 MHz at HD).

See also: Dual link

4:4:4

One of the sampling frequency ratios used to digitize the luminance and color difference components (Y, B-Y, R-Y) or, more usually, the RGB components of a video signal. An equal number of samples is taken for each of the three components. It is commonly used in standard computer platform-based equipment, and for the highest quality results in high-end post production as well as in the production of movies.
For movies, the images are kept in the RGB form all the way through the DI process to create the Digital Source Master (DSM) that can be used to produce a Digital Cinema Distribution Master (DCDM). For distribution the DCI recommends the use of X´Y´Z´ chromaticity which can be derived from RGB using a 3D LUT.

See also: 2K, 4K, DCI, Digital intermediate, Dual link, LUT, X´Y´Z´

4:4:4:4

As 4:4:4, except that the key (matte, or alpha channel) is included as a fourth component. All four components are sampled at the same rate.

See also: Dual link

4fsc

A sampling rate locked to four times the frequency of the color subcarrier (fsc) of the analog coded PAL or NTSC TV systems – about 17.7 MHz for PAL and 14.3 MHz for NTSC. It was used in D2 and D3 VTRs but has long been superseded by component sampling systems such as ITU-R BT.601.

See also: Component Video

Anamorphic

Generally refers to the use of 16:9 aspect ratio pictures in a 4:3 SDTV system. For example, anamorphic supplementary lenses are used to change the proportions of the captured image to 16:9. These horizontally squashed images can then fit onto the surface of a 4:3 sensor. Outputs from 16:9 cameras and telecines produce an ‘anamorphic’ signal which is electrically the same as when working with 4:3 images, but will appear horizontally squashed if displayed at 4:3 aspect ratio.

The alternative way of carrying 16:9 pictures within 4:3 systems is letterbox. Letterbox has the advantage of showing the correct 16:9 aspect ratio on 4:3 displays; however, the vertical resolution is then less than when using 16:9 anamorphic.

Cinema film is sometimes printed with anamorphic frames, allowing widescreen presentations from, typically, 4:3 images projected via a suitable anamorphic lens.

The major use of anamorphic in TV occurred when 4:3 SD cameras were used to capture 16:9 images. Now that 16:9 cameras are widely available, the use of anamorphic techniques is increasingly rare.

See also: Aspect ratio – of pictures

AVC-Intra

A family of two HD codecs from Panasonic that were designed to be compliant with H.264/MPEG-4 AVC, and use only intra-frame coding (GOP of 1), making the coded material easily editable at every frame. AVC-Intra was aimed at professional users and was adopted by Panasonic for its P2 cameras (AVC-Intra P2), offering considerably more efficient compression than the original DVCPRO HD codec – maybe by as much as 2:1. This was at a time when long GOP coding was being used in products including HDV and XDCAM HD. With increased coding efficiency some believed the use of long GOP coding in professional recorders would fade.

There are two classes: AVC-Intra 50 and AVC-Intra 100. The former produces a nominal 50 Mb/s for 1920 x 1080 and 1280 x 720 formats using 4:2:0 10-bit sampling, with the frames horizontally reduced to 0.75 of the original line length. AVC-Intra 100 produces up to 100 Mb/s with 4:2:2 sampling for the same two frame formats, but without any size reduction. Both codecs offer a range of popular frame rates, and both are now included in Panasonic’s AVC-Ultra range.

See also: DVCPRO P2, MPEG-4, XAVC

AVCHD

Advanced Video Codec High Definition, a joint development between Panasonic and Sony, applies MPEG-4’s AVC video coding and Dolby Digital (AC-3) or linear PCM audio coding, to meet the needs of the high definition consumer market with 1080i and 720p formats. The use of AVC provides at least twice the efficiency of MPEG-2 coding, used in HDV and MiniDV, to offer longer recording times or better pictures – or both. Possible recording media include standard DVD disks, flash memory and hard drives.

Further developments have expanded the applications of AVCHD technology. The AVCHD Format Version 2.0 adds specifications for 3D and 1080/60P and 50P and supporting trademarks; AVCHD 3D, AVCHD Progressive and AVCHD 3D/Progressive.

Website: www.avchd-info.org

AVI (.avi)

Audio Video Interleave, a Microsoft multimedia container format introduced in 1992 as part of its Video for Windows technology. AVI files can hold audio and video data in a standard container and provide synchronous video/audio replay. Most AVI files also use the OpenDML file format extensions, forming AVI 2.0 files.

Some consider AVI outdated, as there are significant overheads using it with popular MPEG-4 codecs that seemingly unduly increase file sizes. Despite that, it remains popular among file-sharing communities, probably due to its high compatibility with existing video editing and playback software, such as Windows Media Player.

AVS and AVS+

Proposed as a national standard in 2004, Audio Video Standard (AVS) is an audio and video compression system developed by the Audio Video Coding Standard Workgroup of China. It was designed to replace AAC audio and H.264/MPEG-4 AVC video at a lower cost than the systems commonly used in the rest of the world, but it is not much used outside China. AVS+ was designed to provide similar performance with reduced complexity.

In 2013 work started on AVS2, designed to compete with HEVC (H.265).

Bandwidth

The amount of information (data) that can be passed in a given time. In television a large bandwidth is needed to show sharp picture detail in realtime, and so is a factor in the quality of recorded and transmitted images. For example, ITU-R BT.601 and SMPTE RP 125 allow analog luminance bandwidth of 5.5 MHz and chrominance bandwidth of 2.75 MHz for standard definition video. 1080-line HD has a luminance bandwidth of 30 MHz (ITU-R BT.709).

Digital image systems generally require large bandwidths, which is why many storage and transmission systems resort to compression techniques to accommodate the signal.

Betacam SX (1996)

A digital tape recording format that uses a constrained version of MPEG-2 compression at the 4:2:2 profile, Main Level (422P@ML). The compression is 10:1 and uses a 2-frame GOP (one I and one B frame), making it slightly more difficult to edit. It uses half-inch tape cassettes.

See also: MPEG-2

Blu-ray Disc Association (BDA)

Founded in 2005, three years after the introduction of the Blu-ray Disc system, the BDA is a voluntary membership group for those interested in creating, manufacturing, or promoting the BD formats and products, as well as those seeking more information about the format as it evolves.

The BDA aims to develop BD specifications, ensure products are correctly implemented, promote wide adoption of the formats and provide useful information to those interested in supporting those formats.

Website: www.blu-raydisc.com

See also: Blu-ray Disc

Blu-ray Disc (BD)

This optical disk, designed for HD, can hold 25 GB on a single-layer CD-sized (12cm) disk using 405 nanometer blue-violet lasers. Dual layer disks hold up to 50 GB. Also available are triple layer (100 GB) and quadruple layer (128 GB) disks, which may accommodate 4K UHD video. The companies that established the basic specifications were: Hitachi Ltd., LG Electronics Inc., Matsushita Electric Industrial Co. Ltd., Pioneer Corporation, Royal Philips Electronics, Samsung Electronics Co. Ltd., Sharp Corporation, Sony Corporation, and Thomson Multimedia.

Players must be able to decode MPEG-2, H.264/AVC (MPEG-4 part 10) and SMPTE VC-1 coded material. MPEG-2 offers backward compatibility for DVDs while the other two more modern codecs are at least 50 percent more efficient, using less disk space or producing higher quality results. Audio codecs supported are Linear PCM, Dolby Digital, Dolby Digital Plus, Dolby TrueHD, DTS Digital Surround, DTS-HD.

The baseline data rate is 36 Mb/s – giving over one-and-a-half hours recording of HD material on a single layer, or about 13 hours of SD. For Blu-ray Disc movies (BD-ROM) the maximum transfer rate is 54 Mb/s for audio and video, with a maximum of 40 Mb/s for video. Random access allows easy video editing and simultaneous record and playback.
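
The quoted times follow directly from the capacity and data rate, as this quick sanity check shows:

```python
# Single-layer BD: 25 GB played out at the 36 Mb/s baseline rate.
capacity_bits = 25e9 * 8
seconds = capacity_bits / 36e6
print(round(seconds / 3600, 2))   # ~1.54 - "over one-and-a-half hours" of HD
```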

Ultra HD Blu-ray is the 4K Blu-ray format, expected for delivery Christmas 2015. Handling 4K UHD video at up to 60 f/s, the specification includes HEVC (H.265) video compression, a wider color gamut (than HD) as well as High Dynamic Range (HDR) and 10-bit video sampling. Disc capacities are set at 66 GB (dual layer) and 100 GB (triple layer). The system will also be able to play legacy standards including Blu-ray, DVD and CD. The final specification is expected in mid-2015.

See also: DVD, HD DVD, Optical disks, Professional Disc

Website: www.blu-raydisc.com

Cineon (file)

An RGB bitmap file format (extension .cin) developed by Kodak and widely used for storing and transferring digitized film images in production and post production. It accommodates a range of film frame sizes up to full Vista Vision (a rarely used format with a 36 x 18.3mm frame laid horizontally). In all cases the digital pictures have square pixels and use 10-bit log sampling. The sampling is scaled so that each of the code values from 0-1023 represents a density difference of 0.002, describing a total density range of 2.046, equivalent to an exposure range of around 2,570:1 or about 11.3 stops. Note that modern negative film can now exceed this range, which has led to the use of higher-precision file formats.
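
The numbers can be reconstructed as follows; note that the negative-film gamma of about 0.6 used to convert density range to exposure range is an assumption for illustration and is not stated above:

```python
import math

code_values = 1023
density_step = 0.002
density_range = code_values * density_step      # 2.046
gamma = 0.6                                     # assumed typical negative gamma
exposure_ratio = 10 ** (density_range / gamma)  # ~2,570:1
stops = math.log2(exposure_ratio)               # ~11.3
print(density_range, round(exposure_ratio), round(stops, 1))
```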

The format was partly designed to hold virtually all the useful information contained in negatives and so create a useful ‘digital negative’ suitable as a source for post production processing and creating a digital master of a whole program.

See also: 10-bit log, ACES, Color spaces, DPX

Component video

The normal interpretation of a component video signal is one in which the luminance and chrominance remain as separate components, e.g. analog components in Betacam VTRs, digital components Y, Cr, Cb in ITU-R BT.601 and 709. RGB is also a component signal.

Component video signals retain maximum luminance and chrominance bandwidth and the frames are independent of each other and so can be edited at any frame boundary.

See also: Cut (edit)

Composite Video

Luminance and chrominance are combined along with the timing reference sync and color burst information using one of the color coding standards – NTSC, PAL or SECAM – to make composite video. The process, which is an analog form of video compression, restricts the bandwidths (image detail) of the color components. In the composite result color is literally added to the monochrome (luminance or Y) information using a visually acceptable technique. As our eyes have far more luminance resolving power than for color, the sharpness (bandwidth) of the coded color signal is reduced to well below that of the luminance.

This provides a good solution for transmission and viewing but it becomes difficult, if not impossible, to accurately reverse the process (decode) back into pure luminance and chrominance. This limits its use in post production as repetitive decode, recode cycles severely impair the pictures. Deriving keys from composite video gives poor results.

See also: 4fsc, D2, D3

Compression ratio

The ratio of the amount of data in the non-compressed digital video signal to that in the compressed version. Modern compression techniques start with component television signals but a variety of sampling systems are used: 4:2:2 (‘Studio’ MPEG-2), 4:2:0 (MPEG-2), 4:1:1 (NTSC, DVCPRO), etc. The compression ratio should not be used as the only means to assess the quality of a compressed signal. For a given technique, greater compression can be expected to result in lower picture quality, but different techniques give widely differing quality of results for the same compression ratio. The more modern technologies, MPEG-4 (H.264), VC-1, JPEG 2000 and the latest HEVC (H.265), are more efficient than MPEG-2. The only sure method of judgment is to make a very close inspection of the resulting pictures – where appropriate, re-assessing their quality after onward video processing.

See also: Concatenation, DV, ETSI, JPEG, JPEG 2000, MPEG

Compression (video)

The process of reducing the bandwidth or data rate required to transport a video or audio stream. The old analog broadcast standards, PAL, NTSC and SECAM are, in fact, compression systems which greatly reduce the information that was present in the original RGB sources.

Digital compression systems analyze their sources, such as pictures and sound, to find and remove redundancy both within and, in many cases, between frames. Some of the techniques were primarily developed for digital data transmission but have been adopted as a means of reducing broadcast transmission bandwidths and storage requirements on disks and VTRs.

A number of compression systems are in regular use for moving images. These include ETSI, JPEG, Motion JPEG, JPEG 2000, DV, MPEG-1, MPEG-2, MPEG-4, AVC, H.264, Windows Media, Real and many more. Where different techniques are used in the same workflow, quality problems can occur and picture quality/ audio can suffer more than if the same method is used throughout.

The MPEG-2 family of compression schemes, which was originally designed for program transmission, has been adapted for studio use in Betacam SX and IMX recorders.
While there is much debate, and new technologies continue to be developed, it remains true that the best compressed results are produced from the highest quality source pictures. Poor quality inputs do not compress well. Noise, which may be interpreted as important picture detail, is the enemy of compression.

The need for ever more efficient compression systems is partly driven by the bigger TV picture formats; HD images are about five times the area of SD. More recently, UHDTV-1 (4K) and UHDTV-2 (8K) are, respectively, 4 and 16 times the picture area of 1920 x 1080 HD. New HEVC/H.265 coding is expected to be twice as efficient as MPEG-4, which is already twice as efficient as MPEG-2. This efficiency, combined with the twice-as-efficient DVB-T2 and DVB-S2, means that 32 SD channels can now be broadcast in the space that used to carry one analog TV channel. It also means that the same space could carry one 4K TV channel.

See also: Compression ratio, Concatenation, Digital Betacam, ETSI, HEVC, JPEG, JPEG 2000, MPEG-2, MPEG-4, Windows Media

Digital Betacam

Introduced in 1993 as a development of the original analog Betacam VTR that records SD component video and audio digitally onto Betacam-style cassettes. It uses mild intra-field compression to reduce the ITU-R BT.601 sampled video data by about 2:1 to provide a good portable and much cheaper alternative to the uncompressed D1 format.

DivX

A video codec created by DivX, Inc. which can compress long video segments into relatively small data spaces while maintaining reasonable picture quality. It uses MPEG-4 or H.264/AVC compression to balance quality against file size, and is commonly associated with transferring DVD audio and video to hard disks.

DNxHD

Avid’s Digital Nonlinear Extensible High Definition codec with intra-frame compression is designed for multi-generation compositing with reduced storage and bandwidth requirements. It has four levels to match quality requirements against manageable data volumes: 145 or 220 Mb/s at 8 bits and 220 Mb/s at 10 bits, all at 4:2:2, plus a 36 Mb/s version for HD offline. This offers HD post at SD data rates, or less, meaning that infrastructure and storage requirements can be as for uncompressed SD. DNxHD is assigned as VC-3 by SMPTE.

Website: www.avid.com/dnxhd

DV

This digital VCR format was developed jointly by Hitachi, JVC, Sony, Matsushita, Mitsubishi, Philips, Sanyo, Sharp, Thomson and Toshiba. It uses 6.35 mm (quarter-inch) wide tape in cassettes and is used in a range of products to record 525/60 or 625/50 video for the consumer (DV) and professional markets (Panasonic’s DVCPRO and Sony’s DVCAM).

All recorders use digital intra-field DCT-based ‘DV’ compression (about 5:1) to record 8-bit component digital video based on 13.5 MHz luminance sampling. The consumer versions and DVCAM sample video at 4:1:1 (525/60) or 4:2:0 (625/50) and provide two 16-bit/48 or 44.1 kHz, or four 12-bit/32 kHz, audio channels onto a 4-hour 30-minute standard cassette (125 x 78 x 14.6 mm) or smaller 1-hour ‘mini’ cassettes (66 x 48 x 12.2 mm). The data rate is 25 Mb/s. The professional DVCPRO models make use of DV’s hierarchical design, being x2 and x4 versions of the basic 25 Mb/s version.

DVCAM is Sony’s professional variant of DV which records 15-micron tracks on a metal evaporated (ME) tape. As stated, video sampling is 4:2:0 for 625/50 (PAL) and 4:1:1 for 525/60 (NTSC). Audio is four 12-bit, 32 kHz channels, or two 16-bit 48 kHz channels.

DVCPRO, introduced in 1995, is Panasonic’s development of native DV which records 18-micron tracks onto metal particle tape. It uses native DV compression at 5:1 from a 4:1:1, 8-bit sampled source. There are 12 tracks per frame for 625/50 and 10 tracks per frame for 525/60, tape speed is 33.8 mm/s and the data rate 25 Mb/s. It includes two 16-bit digital audio channels sampled at 48 kHz and an analog cue track. Both linear (LTC) and Vertical Interval Timecode (VITC) are supported.

DVCPRO 50 is a x2 variant of DVCPRO with a tape speed of 67.7 mm/s, a data rate of 50 Mb/s and 3.3:1 video compression; it is aimed at the studio/higher quality end of the market. Sampling is 4:2:2 to give enhanced chroma resolution, useful in post production processes (e.g. chroma keying). Four 16-bit audio tracks are provided.

DVCPRO HD is a series of HD VTRs that are x2 variants of DVCPRO 50: tape speed 135.4 mm/s and a total data rate of 100 Mb/s. Sampling is 4:2:2. There are eight 16-bit, 48 kHz audio tracks. Formats supported include 1080I and 720P. This tape format has been assigned as D12. Later versions use AVC-Intra compression.

DVCPRO P2 is a DVCPRO recording system. It records the DV data in MXF format onto P2 memory cards, PC plug-in cards each carrying four SD (Secure Digital) memory chips to combine capacity and aggregate transfer speeds – currently up to 640 Mb/s, many times realtime. The original P2 card offered 2 GB but the capacity keeps doubling; now the range goes from 4 GB up to 64 GB. These are able to hold from 16 – 256 minutes of SD DVCPRO (25 Mb/s) to 4 – 64 minutes of HD DVCPRO 100 (100 Mb/s). Multiple cards can be used together, and swapped during continuous recording.

Recent developments include the expressP2 card, to accommodate high frame-rate 1080 HD AVC-ULTRA recording (above 60 f/s), as well as 4K.

See also: AVC-Intra

Website: http://panasonic-broadcast.com

Fibre Channel (FC)

An integrated set of standards developed by ANSI, designed to improve data speeds between workstations, supercomputers, storage devices and displays while providing a single standard for networking, storage and data transfer. It can be used point-to-point, switched or in an arbitrated loop (FC-AL) connecting up to 126 devices.

Planned in 1997 to run on fiber-optic or twisted-pair cable at an initial data rate of 1 Gb/s, it has been consistently upgraded to make 2, 4, 8 and 14 Gb/s (14GFC) available, with both 28 and 4 x 28 Gb/s expected in 2015. There is a roadmap sketched to 2028 with the possibility of about an eight-fold further increase in speed. These are nominal wire speeds, but 8b/10b encoding is used to improve transmission characteristics, provide more accuracy and better error handling. With every 8-bit data byte for transmission converted into a 10-bit Transmission Character, the useful data rate is reduced by 20 percent.
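
The 20 percent overhead follows directly from the coding scheme, as a trivial sketch shows: every 8 data bits occupy 10 bits on the wire.

```python
# Usable throughput after 8b/10b line coding.
for wire_gbps in (1, 2, 4, 8):
    usable = wire_gbps * 8 / 10          # 8 data bits per 10 wire bits
    print(f"{wire_gbps} Gb/s wire rate -> {usable:.1f} Gb/s of data")
```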

Because of its close association with disk drives, its TV application is mostly, but not always, in the creation of storage networking. It can interface with the SCSI disk interface, which is key to its operation in storage networking such as SAN.

See also: SAN

Website: www.fibrechannel.org

File-based (media)

‘File-based’ generally refers to storing media in files rather than as continuous streams like video tape. The term is more widely used to describe IT-based environments for handling and processing digital media. Media production is increasingly becoming file-based.

HD D5

A D5 VTR (1994) adapted to handle high definition signals. It uses around 5:1 compression and connects via an HD-SDI link. HD D5 can be multi-format, operating at both SD and HD TV standards, and can replay 525-line D5 as well as HD D5 cassettes. Formats include 480/60I, 1080/24P, 1080/60I, 1080/50I, 1035/59.94I and 720/60P. The recorder can also slew between 24 and 25 Hz frame rates for PAL program duplication from a 1080/24P master.

Cassette recording times vary according to format, the longest is 155 minutes for 1080/24P.

Website: www.panasonic-broadcast.com

HD RGB

This refers to HDTV signals in RGB form rather than Y,Cr,Cb form. The difference is that HD RGB is a 4:4:4 signal that can carry the full bandwidth of each of the R, G and B channels, whereas HD (TV) is normally considered to be in 4:2:2 form where the color difference signals have a more restricted bandwidth. Generally, the 4:2:2 form of HD is sufficient for most television applications and can be carried in its uncompressed form by a single SDI connection. HD RGB is often used for critical keying shots for television, and for digital cinematography. The availability of suitable recorders has made working with the format generally more affordable.

HDCAM

Also known as D11, this series of Sony VTRs, introduced in 1997, was based on the Betacam principles for recording HD video on a tape format which uses the same style of cassette shell as Digital Betacam, although with a different tape formulation. The technology supports 1080-line standards. Various methods are used to reduce the video data including pre-filtering, DCT-based intra-frame compression and sampling at around 3:1:1. Together these provide data reduction of between 7 and 10:1. Four non-compressed audio channels sampled at 48 kHz, 20 bits per sample, are also supported. One variation, CineAlta, is aimed at addressing the needs of digital cinematography.

HDCAM SR, introduced in 2003, was a further extension of Betacam recorders using mild MPEG-4 Simple Studio Profile (SStP) intra-frame compression to store full-bandwidth 4:4:4 HD RGB 1080- and 720-line video, offering more headroom for digital cinema users, as well as 4:2:2 Y,Pr,Pb component video for television. It offers video data rates of 440 Mb/s and 880 Mb/s, and more audio channels. It can cover SD, HD and film-resolution data (10 or 12 bit), and color resolution (component or RGB). The Sony F950 camera provided suitable RGB sources for HDCAM SR, including undercranked (shot at a lower than normal frame rate) footage. The close-to-raw state of RGB material is suited to the needs of digital cinematography as the full latitude and bandwidth of the pictures is preserved through recording.

See also: 24PsF, CineAlta, D11

Website: www.sonybiz.net/hdcamsr

HDR (HDRI)

High Dynamic Range (Imaging) techniques allow a greater dynamic range of exposure (stops) than normally possible, with the intention of accurately representing the wide brightness range of real scenes ranging from direct sunlight to deep shadows. This is sometimes used with computer-generated images or photography (often by taking several pictures of a scene, each with a different exposure setting) and it can provide a large amount of headroom for the adjustment of images in post production.

HDR can have a significant effect on the viewer experience. Some consider it as the best achievable next step beyond the normal experience of HD – placing it above 4K and HFR.

HDTV

High Definition Television. A television format with higher definition than SDTV. While DTV at 625 (576) or 525 (480) lines is usually superior to analog PAL and NTSC, it is generally accepted that 720-line and upward is HD. This also has a picture aspect ratio of 16:9.

While there are many HDTV picture formats, there is a consensus that 1920 x 1080 is a practical standard for the global exchange of television material; a common image format. Many productions are made in this format.

See also: 24P, ATSC, Common Image Format, DVB, Table 3

HDV

High definition DV is a tape format that stores long GOP MPEG-2 encoded HD video on DV or MiniDV tape cassettes. There are two standards. One is 1280 x 720 pixels at 60, 50, 30 and 25P frame rates with a target compressed video rate of 19 Mb/s. The other is 1440 x 1080 pixels at 50I and 60I interlaced vertical rates with a target bit rate of 25 Mb/s. All video sampling is 8-bit 4:2:0 with a 16:9 aspect ratio, so the 1080-line format does not use square pixels. Audio is two channels of 48 kHz, 16-bit, and uses MPEG-1 Layer 2 compression producing 384 kb/s total.

At its introduction in 2004, HDV represented a huge price drop for HD camcorders. The quality is ‘prosumer’, but it opened up a new layer of operations for HD, and the downconverted SD output is better than the usual SD DV results. The use of long GOP coding, however, impedes frame-accurate editing.

See also: AVC-Intra

HEVC

HEVC (High Efficiency Video Coding) is a new-generation video codec designed to reduce the bandwidth needed, specifically to support 4K and 8K UHDTV video programming. It is hoped this will be up to 50% more efficient than MPEG-4.

See also: Display Resolution

HFR

High Frame Rate – a frame rate higher than normal. For instance, movies (films) are normally shot at 24 f/s but some have been shot at 48 f/s – HFR. Some audiences say they do not like it as it’s too real and does not look like film.

It has been observed that when viewing UHD, motion judder is often very apparent and so a higher frame rate (say 48 f/s) is recommended by some. When shooting fast-action sports, such as football, then the UHD result would look better using, say, 50 or 60 f/s. In fact the UHD standard Rec 2020 includes frame rates up to 120 f/s.

InfiniBand

InfiniBand defines an input/output architecture that can connect servers, communications infrastructure equipment, storage and embedded systems. It can achieve very high data transfers, up to 120 Gb/s, over copper and optical fiber connections, with the benefits of low latency and a low processing overhead. It is used in many data centers, in high-performance computing clusters connecting supercomputers, and in embedded applications that scale from two nodes up to a single cluster interconnecting thousands of nodes.

Inter-frame (compression)

Video compression which involves more than one frame to code and decode. Inter-frame compression compares consecutive frames to remove common elements and arrive at ‘difference’ information describing the frames between the (complete) I-frames. MPEG-2 and MPEG-4 use two types of inter-frame processed pictures: the ‘P’ (predictive) and ‘B’ (bi-directional) frames. As ‘P’ and ‘B’ frames are not complete in themselves but relate to other adjacent frames, they cannot be edited independently.

See also: Cut edit, I-frame, MPEG-2, MPEG-4

Intra-frame (compression)

Compression that uses just one picture. The compression process is designed only to remove what it considers to be redundant and visually less significant information from within the frame itself. No account is taken of other frames. JPEG and the ‘I’ frames of MPEG-2 are coded in this way and use DCT. In an MPEG-2 sequence of frames, editing can only take place at I-frames as they are the only independent frames.

See also: DCT, I-frame only, JPEG, MPEG-2

ITU IPTV standard Rec J.700

ITU-T SG 9 (integrated broadband cable networks and television and sound transmission), at its meeting of October/November 2007, gave consent for a draft new recommendation on IPTV to go through the Alternative Approval Process (AAP). The draft new recommendation J.700, titled “IPTV Service Requirements and Framework for Secondary Distribution”, is now at the Last Call Judgement (LJ) stage. It describes the service requirements and functional framework architecture for the support of IPTV services. Requirements for network elements and customer premises equipment (CPE) are covered. The draft new recommendation also leverages existing deployed technologies to provide a smooth path for operators to integrate IPTV technology into their networks.

Website: http://tel_archives.ofca.gov.hk/zh/ad-comm/tsac/cc-paper/ccs2008p03.pdf

ITU-R BT.2020

This defines the parameters of UHDTV (Ultra High Definition Television), including display resolution, frame rate, chroma sub-sampling, bit depth, color space and audio system. The image sizes are 4K (3840 x 2160) and 8K (7680 x 4320), with frame rates of 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 and 120 Hz. All scans are progressive. The system offers a wider dynamic range, with the images’ colorimetry including a wider gamut than HDTV, which is already wider than SD. Sampling may be 10 or 12-bit and 4:4:4, 4:2:2 or 4:2:0 to suit the application.

ITU-R BT.601

This standard defines the digital encoding parameters of SD television for studios. It is the international standard for digitizing component television video in both 525 and 625 line systems and is derived from SMPTE RP125. ITU-R BT.601 deals with both color difference (Y, R-Y, B-Y) and RGB component video and defines sampling systems, RGB/Y, R-Y, B-Y matrix values and filter characteristics. It does not actually define the electro-mechanical interface; see ITU-R BT. 656.

ITU-R BT.601 is normally taken to refer to color difference component digital video (rather than RGB), for which it defines 4:2:2 sampling at 13.5 MHz with 720 (4) luminance samples per active line. The color difference signals R-Y and B-Y are sampled at 6.75 MHz with 360 (2) samples per active line. Its depth may be 8 or 10 bits.

Some headroom is allowed so, with 10-bit sampling, black level is at 64 (not 0) and white at level 940 (not 1023) – to minimize clipping of noise and overshoots. With 2^10 levels each for Y (luminance), Cr and Cb (the digitized color difference signals), 2^30 – over a billion – unique colors can be defined.
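
A couple of lines of Python illustrate the quoted figures (illustrative only):

```python
# 10-bit ITU-R BT.601 coding ranges as described above.
print(2 ** 30)            # 1,073,741,824 - "over a billion" Y,Cr,Cb triplets
black, white = 64, 940    # nominal 10-bit black and white levels
print(white - black)      # 876 luma code values between black and white
```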

The sampling frequency of 13.5 MHz was chosen to provide a politically acceptable common sampling standard between 525/59.94 and 625/50 systems, being a multiple of 2.25 MHz, the lowest common frequency to provide a static sampling pattern for both.

See also: 13.5 MHz, 4:2:2, Frequency, Into digits (Tutorial 1)

ITU-R BT.656

The international standard for interconnecting digital television equipment operating to the 4:2:2 standard defined in ITU-R BT.601. It defines blanking, embedded sync words, the video multiplexing formats used by both the parallel (now rare) and serial interfaces (SDI), the electrical characteristics of the interface and the mechanical details of the connectors.

ITU-R BT.709

In 2000, ITU-R BT.709-4 recommended the 1080 active-line high definition television standard for 50 and 60 Hz interlace scanning with sampling at 4:2:2 and 4:4:4. Actual sampling rates are 74.25 MHz for luminance Y, or R, G, B and 37.125 MHz for color difference Cb and Cr, all at 8 bits or 10 bits, and these should be used for all new productions. It also defines these 1080-line square-pixel standards as common image formats (CIF) for international exchange.

The original ITU-R BT.709 recommendation was for 1125/60 and 1250/50 (1035 and 1152 active lines) HDTV formats defining values and a ‘4:2:2’ and ‘4:4:4’ sampling structure that is 5.5 times that of ITU-R BT.601. Note that this is an ‘expanded’ form of 601 and so uses non-square pixels.

See also: Common Image Format

JPEG 2000 (.JP2)

This is another image compression system from the Joint Photographic Experts Group (ISO/ITU-T). JPEG 2000 is very different from the original JPEG; whereas JPEG is DCT-based and examines images in a series of 8 x 8 pixel blocks, JPEG 2000 is wavelet-based, using the Discrete Wavelet Transform (DWT) to analyze the detail of pictures in a different way. Both coding and decoding require far more processing than JPEG, MPEG-2 or MPEG-4. Also, JPEG 2000 is intra-frame only; there are no predictive frames (as in MPEG). Whereas MPEG tends to show macroblocks as it starts to fail, and the original JPEG shows ‘mosquito wings’ or ringing effects, JPEG 2000 failure shows as a softening of picture areas, which is far less noticeable. There are two file-name extensions: .JP2 for ISO/IEC 15444-1 files and .JPX for ISO/IEC 15444-2 files.

JPEG 2000 is about twice as efficient as the equivalent I-only MPEG-2, and excels at high bit rates. It is used at up to 250 Mb/s for DCI Digital Cinema applications, usually showing 24 pictures per second in 2K and 4K formats. It lends itself to a wide range of uses from portable digital cameras through to advanced pre-press and television acquisition – as well as Digital Cinema. Some favor it for use in TV distribution. The company intoPix, a specialist in JPEG 2000 technology, offers a video-over-IP solution using JPEG 2000 for HD and 4K UHD via 1 Gb/s media networks with 10ms of latency. Its further technology developments are aimed at expanding the use of JPEG 2000 in TV.

See also: Compression, Compression ratio, DCT, Huffman coding, MPEG

Website: www.jpeg.org

JPEG (.JPG)

Joint Photographic Experts Group (ISO/ITU-T). It has defined many types of image compression. JPEG is a DCT-based data compression standard for individual pictures (intra-frame). It offers compression of between two and 100 times and has three levels of processing which are defined as: baseline, extended and lossless encoding.

JPEG baseline compression coding, which is overwhelmingly the most common in both the broadcast and computer environments, starts with applying DCT to 8 x 8 pixel blocks of the picture, transforming them into frequency and amplitude data. This itself may not reduce data but then the generally less visible high frequencies can be divided by a high ‘quantizing’ factor (reducing many to zero), and the more visible low frequencies by a much lower factor. The ‘quantizing’ factor can be set according to data size (for constant bit rate) or picture quality (constant quality) requirements – effectively adjusting the compression ratio. The final stage is Huffman coding which is lossless but can further reduce data by 2:1 or more.
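
As a toy illustration of the DCT-and-quantize step (this is only a sketch: real JPEG uses per-coefficient quantization tables, zig-zag scanning and Huffman coding, none of which appear here):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of an 8x8 block."""
    n = 8
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2 / n))
    scale[0] = np.sqrt(1 / n)
    m = scale[:, None] * basis
    return m @ block @ m.T

# A fairly flat 8x8 block of pixel values, level-shifted as JPEG does.
block = np.random.default_rng(0).integers(100, 140, (8, 8)).astype(float)
coeffs = dct2(block - 128)
quantized = np.round(coeffs / 16)   # one flat quantizing factor for the demo
print(int((quantized == 0).sum()), "of 64 coefficients quantized to zero")
```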

Baseline JPEG coding creates .jpg files and it is very similar to the I-frames of MPEG, the main difference being they use slightly dissimilar Huffman tables.

See also: Motion JPEG

KLV

KLV is a data encoding protocol (SMPTE 336M). The Key is a unique, registered sequence of bits that identifies the type of content that follows (video, audio, EDL, etc), the Length gives the number of bytes in the Value, and the Value is the content ‘payload’ itself. Compliance with KLV means that a wider range of equipment and applications can understand each other’s files.
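
A minimal sketch of reading one KLV triplet is shown below; this is illustrative only, though it reflects the common case of a 16-byte SMPTE Universal Label key and a BER-coded length:

```python
def read_klv(buf, offset=0):
    """Read one Key-Length-Value triplet; return (key, value, next offset)."""
    key = buf[offset:offset + 16]               # 16-byte Universal Label
    first = buf[offset + 16]
    if first < 0x80:                            # BER short-form length
        length, pos = first, offset + 17
    else:                                       # BER long-form length
        n = first & 0x7F
        length = int.from_bytes(buf[offset + 17:offset + 17 + n], "big")
        pos = offset + 17 + n
    return key, buf[pos:pos + length], pos + length

packet = bytes(16) + b"\x05" + b"hello"         # dummy key, length 5, payload
key, value, nxt = read_klv(packet)
print(value, nxt)                               # b'hello' 22
```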

See also: AAF, MXF

MADI

Multichannel Audio Digital Interface, widely used among audio professionals, defines the data format and electrical characteristics of an interface carrying multiple digital audio channels, as in the Audio Engineering Society’s AES10-2008. It is popular for its large channel capacity: 28, 56, or 64 channels at up to 96 kHz, 24 bits per channel, and up to 3000m connections over optical fiber (or 100m over coax).

MediaFLO

A mobile TV/multimedia system once offered by Qualcomm. It was the technology behind FLO TV which went out of service in 2011.

MHP

Multimedia Home Platform – DVB-MHP is open middleware from the DVB project for interactive television. It enables the reception and execution of interactive, Java-based applications on a TV set that can be delivered over a broadcast channel, together with the audio and video streams. The applications can provide information services such as games, interactive voting, e-mail, SMS and shopping. Some may require using an IP return channel.

Early deployments included DVB-T in Italy, DVB-S in Korea and Poland and DVB-C in Belgium. There have also been trials in other countries.

MOS

Media Object Server (protocol) – a communications protocol for newsroom computer systems (NCS) and broadcast production equipment. It is a collaborative effort between many companies to enable journalists to see, use, and control a variety of devices from their desktop computers, effectively allowing access to all work from one screen. Such devices include video and audio servers and editors, still stores, character generators and special effects machines.

MOS uses a TCP/IP-based protocol and is designed to allow integration of production equipment from multiple vendors with newsroom computers via LANs, WANs and the Internet. It uses a ‘one-to-many’ connection strategy – multiple MOSs can be connected to a single NCS, or a single MOS to many NCSs.

Website: www.mosprotocol.com

Motion JPEG

This is JPEG compression applied to digital video, where each frame or field (half frame) is individually compressed using JPEG encoding to reduce the amount of data. This is widely used in areas including video cameras, some webcams and in non-linear editing systems.

See also: JPEG, Field

MP3

A high-performance, perceptual audio compression coding scheme which exploits the properties of the human ear and brain while trying to maintain perceived sound quality. MPEG-1 and 2 define a family of three audio coding systems of increasing complexity and performance: Layer-1, Layer-2 and Layer-3. MP3 is shorthand for Layer-3 coding. MPEG defines the bitstream and the decoder but, to allow for future improvements, not an encoder. MP3 is claimed to achieve ‘CD quality’ at 112-128 kb/s – a compression of between 10 and 12:1. Not all listeners agree with that.
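
The claimed ratio is easily checked against the CD source rate:

```python
# CD audio: 44.1 kHz, 16 bits, 2 channels.
cd_rate = 44_100 * 16 * 2                  # 1,411,200 b/s
for mp3_rate in (128_000, 112_000):
    print(f"{mp3_rate // 1000} kb/s -> {cd_rate / mp3_rate:.1f}:1")
# 128 kb/s -> 11.0:1 and 112 kb/s -> 12.6:1, the "10 and 12:1" quoted above
```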

See also: Auditory masking

Website: www.mp3-tech.org

MPEG-1

A compression scheme designed to work at 1.2 Mb/s, the basic data rate of CD-ROMs, so that video could be played from CDs. Its quality is not up to modern standards and it is not much used.

MPEG-2

ISO/IEC 13818. A family of inter- and intra-frame compression systems designed to cover a wide range of requirements from ‘VHS quality’ all the way to HDTV through a series of compression algorithm ‘profiles’ and image resolution ‘levels’. With data rates from below 4 to 100 Mb/s, this family includes the compression system that currently delivers digital TV to homes and that puts SD video onto DVDs as well as putting HD onto 6.35mm videotape for HDV.

In all cases MPEG-2 coding starts with analyzing 8 x 8-pixel DCT blocks and applying quantizing to achieve intra-frame compression that is very similar to JPEG. This compression is referred to as I-frame only MPEG-2. Producing much higher compression involves analyzing the frame-to-frame movement of 16 x 16-pixel ‘macroblocks’ to produce vectors that show the distance and direction of macroblock movement. The correctness of these vectors is a major factor in a coder’s quality and efficiency. This vector data is carried in the P (predictive) and B (bi-directional predictive) frames that exist between I-frames (see diagram). SDTV transmissions and DVDs typically contain two I-frames per second, using about 4 Mb/s or less – a big difference from the 180 Mb/s of uncompressed SD video. The set of images between I-frames is a Group of Pictures (GOP) – usually about 12 for 576/50I and 15 for 480/60I transmissions. These are called ‘long GOP’. The GOP length can vary during transmission; an I-frame may be forced at the start of a new sequence, such as after a video cut, or on other occasions where there is a big change at the input.
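
As a sketch of the long GOP structure just described (illustrative; real encoders vary the pattern dynamically, as noted above):

```python
def gop_pattern(length=12, anchor_spacing=3):
    """Display-order frame types for one 'long GOP': I first, then a P
    every anchor_spacing frames, with B frames in between."""
    return "".join(
        "I" if i == 0 else ("P" if i % anchor_spacing == 0 else "B")
        for i in range(length)
    )

print(gop_pattern(12))   # IBBPBBPBBPBB    - the 12-frame GOP for 576/50I
print(gop_pattern(15))   # IBBPBBPBBPBBPBB - the 15-frame GOP for 480/60I
```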

[Diagram: MPEG-2 12-frame GOP]

*Note: for transmission the last ‘I’ frame is played out ahead of the last two ‘B’ frames, forming the sequence I1, B1, B2, P1, B3, B4, P2, B5, B6, P3, I2, B7, B8

Levels and profiles: MPEG-2 is a single compression standard that can operate on many different levels (picture source formats ranging from about VCR quality to full HDTV) and profiles (collections of compression tools that make up a coding system). Interest centers on Main Profile @ Main Level (MP@ML), covering 525/60 and 625/50 broadcast television as well as DVD-video, and Main Profile @ High Level (MP@HL) for HDTV. Besides the transmission/delivery applications which use 4:2:0 sampling, the 422 Profile (4:2:2 sampling) was designed for studio use and offers greater chrominance bandwidth, which is useful for post production.

Blocking and ‘blockiness’: MPEG-2 artifacts generally show as momentary rectangular areas of picture with distinct boundaries. Their appearance generally depends on the amount of compression, the quality and nature of the original pictures as well as the quality of the coder. The visible blocks may be 8 x 8 DCT blocks or, most likely, ‘misplaced blocks’ – 16 x 16 pixel macroblocks, due to the failure of motion prediction/estimation in an MPEG coder or other motion vector system, e.g. a standards converter.

Audio: Digital audio compression uses auditory masking techniques. MPEG-1 audio specifies mono or two-channel audio, which may be Dolby Surround coded, at bit rates from 32 kb/s to 384 kb/s. MPEG-2 audio specifies up to 7.1 channels (but 5.1 is more common), rates up to 1 Mb/s, and supports variable bit-rate as well as constant bit-rate coding. MPEG-2 handles backward compatibility by encoding a two-channel MPEG-1 stream, then adding the 5.1/7.1 audio as an extension.

See also: Discrete 5.1 Audio, MP3

Websites:
www.mpeg.org
www.chiariglione.org/mpeg

MPEG-21

MPEG-21 (.m21 or .mp21), standardized as ISO/IEC 21000, creates descriptions for a multimedia framework to provide a ‘big picture’ of how the system elements relate to each other and fit together. The resulting open framework for multimedia delivery and consumption includes content creators and content consumers as focal points, giving creators and service providers equal opportunities in an MPEG-21 open market. This can also give consumers access to a large variety of content in a practical manner. MPEG-21 defines a Digital Item as a basic unit of transaction. It is a structured digital object, including a standard representation, identification and metadata.

Website: www.chiariglione.org/mpeg

MPEG-4

ISO/IEC 14496. MPEG-4 covers three areas: digital television, interactive graphics applications (synthetic content) and interactive multimedia (Web distribution and access to content). It provides the standardized technological elements enabling the integration of the production, distribution and content access of the three fields.

Since its first publication in 1999, MPEG-4 video compression has achieved its quality targets with ever-lower bit rates. Like MPEG-2 the compression is DCT-based and uses inter- and intra-frame compression, but it implements many refinements, such as a choice of block sizes and motion compensation accuracy of one-eighth of a pixel against MPEG-2’s half pixel.

MPEG-4 is guilty of generating too many names and versions. The highest quality MPEG compression technology is known by ISO and IEC as MPEG-4 AVC (Advanced Video Coding). It is also known by the ITU-T as H.264, or as MPEG-4 part 10. Notable predecessors are MPEG-4 part 2 (ASP) and H.263. Significantly, MPEG-4 AVC achieves up to a 64 percent bit rate reduction over MPEG-2 for the same quality, which opened possibilities for HD DVDs and transmission, etc., as well as room to offer more SD DTV channels, or more quality. MPEG-4 also specifies low bit rates (5-64 kb/s) for mobile and Internet applications, with frame rates up to 15 Hz and images up to 352 x 288 pixels.

MPEG-4 AVC video coding and decoding are far more complex than MPEG-2 but Moore’s Law absorbed that technical challenge. QuickTime and RealPlayer were among the early adopters of MPEG-4. While established services tend to stick with their MPEG-2, most if not all later video services use MPEG-4.

The interactive multimedia side of MPEG-4 includes storage, access and communication as well as viewer interaction and 3D broadcasting. Aural and visual objects (AVOs) represent the content which may be natural – from cameras or microphones, or synthetic – generated by computers. Their composition is described by the Binary Format for Scene description (BIFS) – scene construction information to form composite audiovisual scenes from the AVOs. Hence, a weather forecast could require relatively little data – a fixed background image with a number of cloud, sun, etc, symbols appearing and moving, audio objects to describe the action and a video ‘talking head’ all composed and choreographed as defined by the BIFS. Viewer interactivity is provided by the selection and movement of objects or the overall point of view – both visually and aurally.

Audio: This builds on previous MPEG standards and includes High Efficiency Advanced Audio Coding (HE-AAC). This nearly doubled the efficiency of MPEG-4 Audio, improving on the original AAC and offering better quality than the ubiquitous MP3 codec (from MPEG-2) at the same bit rate. Stereo CD-quality at 48 kb/s and excellent quality at 32 kb/s are reported. This is not a replacement for AAC, but rather a superset which extends the reach of high-quality MPEG-4 audio to much lower bit rates. High Efficiency AAC decoders will decode both types of AAC for backward compatibility.

DVB has approved two MPEG-4 codecs for use for broadcast transport streams: H.264/AVC video codec (MPEG-4 Part 10) and the High Efficiency Advanced Audio Coding (HE-AAC) audio codec. This mandates support of Main Profile for H.264/AVC SDTV receivers, with an option for the use of High Profile. The support of High Profile is mandated for H.264/AVC HDTV receivers.

Websites:
www.chiariglione.org/mpeg
www.m4if.org

MPEG-7

The value of information often depends on how easily it can be found, retrieved, accessed, filtered and managed. MPEG-7, formally named ‘Multimedia Content Description Interface’, provides a rich set of standardized tools to describe multimedia content. Both human users and automatic systems that process audiovisual information are within its scope. It was intended to be the standard for description and search of large volumes of audio and visual content – including that from private databases, broadcast and via the Web. Applications include database retrieval from digital libraries and other libraries, areas like broadcast channel selection, multimedia editing and multimedia directory services.

MPEG-7 offers a set of audiovisual Description Tools (the metadata elements, their structure and relationships that are defined as Descriptors and Description Schemes). It specifies a Description Definition Language (DDL) so that material with associated MPEG-7 data can be indexed, allowing fast and efficient searches. These searches permit not only text-based inquiries, but also searches based on scene, motion and visual content. Material may include stills, graphics, 3D models, audio, speech and video as well as information about how these elements are combined. Besides uses in program-making, MPEG-7 could help viewers by enhancing EPGs and program selection.

Website: www.chiariglione.org/mpeg

MPEG

Moving Picture Experts Group. This is a working group of ISO/IEC for the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. It has also extended into metadata. Four MPEG standards were originally planned but the accommodation of HDTV within MPEG-2 has meant that MPEG-3 is now redundant. MPEG-4 is very broad and extends into multimedia applications. MPEG-7 is about metadata and MPEG-21 describes a ‘big picture’ multimedia framework.

Websites:
www.chiariglione.org/mpeg
www.mpeg.org

MPEG-HEVC H.265

MPEG High Efficiency Video Coding was developed to achieve twice the efficiency of MPEG-4 AVC. Apart from having the potential to halve the bandwidth currently used to transmit HDTV services, it also halves the data needed to be transmitted for UHD. That means that a 4K UHD channel can fit into one DVB-T2 multiplex – the bandwidth that was used for one analog PAL TV channel.

Beyond helping to enable terrestrial, satellite and cable 4K transmissions, it is also a part of the Ultra HD Blu-ray specification.

See also: Display Resolution

MXF

The Material eXchange Format, SMPTE 377M, is aimed at the exchange of program material between file servers, video recorders, editing systems, tape streamers and digital archives, etc. It is a container, or wrapper, format that supports a variety of differently coded streams of essence (images and sound), together with metadata describing the material in the MXF file.

There are ten Operational Patterns defined by SMPTE standards. The main nine are the intersections of two axes, describing Item (segment) complexity versus Package complexity. This gives the combinations of 1) Single Item, 2) Playlist Items and 3) Edit Items against a) Single Package, b) Ganged Packages and c) Alternate Packages, as laid out in the sketch below. By far the most common is OP-1a – Single Item, Single Package – which contains a single flat clip. The tenth operational pattern is OP-Atom, which is used by Panasonic P2.
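
The nine main patterns can be generated as the 3 x 3 grid described; a small illustrative sketch:

```python
items = ["Single Item", "Playlist Items", "Edit Items"]
packages = ["Single Package", "Ganged Packages", "Alternate Packages"]
for i, item in enumerate(items, start=1):
    for letter, package in zip("abc", packages):
        print(f"OP-{i}{letter}: {item} / {package}")
# OP-1a (Single Item / Single Package) is by far the most common.
```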

Bridging file and streaming transfers, MXF helps move material between AAF file-based post production and streaming program replay using standard networks. This set-up extends the reliable essence and metadata pathways of both formats to reach from content creation to playout. The MXF body carries the content. It can include compressed formats such as MPEG and DV as well as uncompressed video and can contain an interleaved sequence of picture frames, each with audio and data essence plus frame-based metadata. There are other scenarios where, for example, P2 MXF files contain a single essence track with no interleaving and no frame-based metadata. Also Panasonic AVC Long-G MXF files do interleave but not per-frame; instead they interleave partitions (chunks) of a constant duration, typically about 10 seconds. It is perfectly valid for MXF files to be metadata-only and this has no bearing on the operational pattern; for example, AS-02 bundles always contain a metadata-only version MXF file that references separate essence MXF files.

See also: AAF, AMWA

Website: www.pro-mpeg.org

NTFS

New Technology File System – the standard file system of Windows NT and its descendants Windows 2000 through to Windows 8. It replaced Microsoft’s FAT file system used in MS-DOS and earlier Windows versions. Advantages include improved metadata support, advanced data structures, reliability, disk space use and extensions such as security access control lists (who can access), permitted operations and file system journaling that logs file changes. Full details are a Microsoft trade secret.

NTSC (television standard)

The analog color television system used in the USA, Canada, Mexico, Japan and more, where NTSC M is the broadcast standard (M defining the 525/60 line and field format). It was defined by the NTSC in 1953. With the use of digital TV systems NTSC has become part of broadcasting history.

Note that ‘NTSC’ is often incorrectly used to describe the 525/60 television format even when it is in component or digital form.

OFDMA

Orthogonal Frequency Division Multiple Access is a technology used for terrestrial radio return channels. Generally it provides several users with simultaneous low data rate transmission for fixed and mobile applications. Transmit power can be less than a watt, yet signals can travel considerable distances. OFDMA is used in the IEEE 802.16-2005 Mobile WiMAX broadband wireless access standard, enabling mobile DSL services and mobile telephony to provide customers with enhanced voice and data services.

Website: www.runcom.com

OMFI

Open Media Framework Interchange is an open standard developed by Avid, for post production interchange of digital media among applications and across platforms. It describes a file format and supports video, audio, graphics, animation and effects as well as comprehensive edit decision information. Transfers may be by removable media or over a high-speed network.

Website: www.avid.com

OpenEXR

OpenEXR is a file format developed by Industrial Light & Magic for high dynamic range (HDR) images that are used in computer imaging applications. It is used by ILM on motion pictures, has become ILM’s main image file format, and is released as free software. It offers a higher dynamic range than 8 or 10-bit pictures, support for 16-bit floating-point, 32-bit floating-point, and 32-bit integer pixels, up to 2:1 lossless compression algorithms, and extensibility to add new compression codecs and image types. It forms the container for the ACES color format.

See also: ACES, Color Transform Language