Category Archives: A

AAC

Advanced Audio Coding is a digital audio coding system that delivers lossy compression offering about a 25 percent efficiency improvement over popular MP3 coding at similar bit rates. However, this performance was later topped by aacPlus, also known as High Efficiency AAC (HE-AAC). HE-AAC is included in MPEG-4 and delivers ‘CD quality’ stereo at 48 kb/s and 5.1 surround sound at 128 kb/s. It has also been adopted in DAB+ and Digital Radio Mondiale, as well as the DVB-H and ATSC-M/H mobile applications.

AAF

The Advanced Authoring Format was an industry initiative, launched in 1998, to create a file interchange standard for the easy sharing of media data and metadata among digital production tools and content creation applications, regardless of platform. It includes EBU/SMPTE metadata and management of pluggable effects and codecs. It allows open connections between equipment where video, audio and metadata, including information on how the content is composed, where it came from, etc., are transferred. It can fulfill the role of an all-embracing EDL or offer the basis for a media archive that any AAF-enabled system can use. Quantel products make extensive use of AAF.

In 2007 AAF Association, Inc. changed its name to the Advanced Media Workflow Association (AMWA), with the tag ‘Putting AAF and MXF to work’. Today it is a container (or wrapper) format, with focus on file-based workflows including MXF and other formats. It was involved with the MXF Mastering Format Project that provided real-world solutions for key workflows, focusing on creating a single MXF master file from which multiple versions of a program may be created.

Website: www.amwa.tv

Accommodation (Stereoscopic)

The ability of our eyes to refocus onto a new point of interest. In normal vision, the processes of focusing on objects at different distances (accommodation) and convergence/divergence (the angle between the lines of sight of our eyes) are linked by muscle reflex; a change in one creates a complementary change in the other. However, watching a stereoscopic film or TV program requires the viewer to break this link by accommodating at a fixed distance (the screen) while dynamically varying eye convergence and divergence to view objects at different stereoscopic distances. This is something we do not do in everyday life, and it can quickly lead to headaches if overused in stereo 3D.

ACES

ACES (Academy Color Encoding System) is a color standard proposed by AMPAS (the Academy of Motion Picture Arts and Sciences). It provides a means by which all imaging sources can share one common color space, giving a common data set that can be repurposed and reworked without loss.

See also: Color spaces

Active line

The part of a television line that carries picture information. The remainder of the whole line time is mainly reserved to allow scans to reset to the start of the next line in camera tubes and CRT screens. Although the imaging and display technologies have moved on to chips and panels, there remains a break (line blanking) in the sampling of digital TV as in ITU-R BT.601 and ITU-R BT.709. These ‘spaces’ carry data for the start of lines and pictures, as well as other information such as embedded audio tracks.

See also: Active picture

Active picture

The area of a TV frame that carries picture information. Outside the active area there are line and field, or frame, blanking which roughly, but not exactly, correspond to the areas defined for the original 525- and 625-line analog systems. In digital versions of these, the blanked/active areas are defined by ITU-R BT.601, SMPTE RP125 and EBU-E.

For 1125-line HDTV (1080 active lines), which may have 60, 30, 25 or 24 Hz frame rates (and more), the active lines are always the same length: 1920 pixel samples at 74.25 MHz, a time of 25.86 microseconds, defined in SMPTE 274M and ITU-R BT.709-4. Only their line blanking differs, so the active portion may be mapped pixel-for-pixel between these formats.

DTV standards tend to be quoted by only their active picture content, e.g. 1920 x 1080, 1280 x 720, 720 x 576, as opposed to analog where the whole active and blanked areas are included, such as 525 and 625 lines. For both 625 and 525-line formats the active line length is 720 luminance samples at 13.5 MHz = 53.3 microseconds. In digital video there are no half lines as there are in analog. The table below shows blanking for SD and some popular HD standards.

Analog Format    625/50    525/60    1125/60I    1125/50I    1125/24P
Active lines     576       487       1080        1080        1080
Field 1 lines    24        19        22          22          45/frame
Field 2 lines    25        19        23          23          -
Line Blanking    12μs      10.5μs    3.8μs       9.7μs       11.5μs
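
As a quick cross-check of the active-line durations quoted above, each figure is simply the number of active samples divided by the sampling frequency; a minimal sketch:

```python
# Active-line duration = active samples / sampling frequency.
print(1920 / 74.25e6 * 1e6)   # HD: about 25.86 microseconds
print(720 / 13.5e6 * 1e6)     # SD: about 53.33 microseconds
```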


Adaptive bit-rate streaming

The streaming of media often occupies a large part of a network’s available capacity, especially if the media is video and the network is the Internet, where video streaming is growing fast. The data speed available to Internet users varies all the time, but video and audio are constant flows, with video requiring substantial bandwidth to deliver good pictures. A stream sent at a constant bit rate can exceed the available bandwidth, in which case the video freezes while the player waits for the next frames, a pause commonly known as buffering.

A way around this is to vary the bit rate according to the available capacity of the network connection. This is adaptive bit-rate streaming. There are several versions in use. Generally these involve the sender detecting the receiver’s available bit rate and CPU power, and then adjusting the sending bit rate accordingly by varying the amount of compression applied to the media. In practice this requires the sender’s coder to create a set of streams simultaneously, typically three, each with a different bit rate. These are made available as a series of files containing short sections of video, typically between 2 and 10 seconds. This way those with a fast connection see good quality video, and those with a slow one should still see acceptable results. In practice, streaming starts with sending a manifest of the files, and then low bit-rate video files. Then, if the receiver sees there is room for a better quality level, it will ask for it. If bandwidth is getting too tight, it will switch down.
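
As a rough sketch of the switching logic described above (illustrative only: the rendition bit rates, the safety margin and the choose_rendition() helper are hypothetical, not taken from any particular streaming specification):

```python
# Hypothetical rendition bit rates as listed in a manifest (bits per second).
RENDITIONS = [400_000, 1_200_000, 3_500_000]   # low, medium, high

def choose_rendition(measured_throughput_bps, safety_margin=0.8):
    """Pick the highest bit rate that fits within the measured network
    throughput, leaving some headroom; fall back to the lowest otherwise."""
    budget = measured_throughput_bps * safety_margin
    candidates = [r for r in RENDITIONS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS)

# A connection currently delivering about 2 Mb/s gets the medium stream.
print(choose_rendition(2_000_000))   # -> 1200000
```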

Quantel has developed this concept further in its QTube technology using virtualization to create variable bit rate data on the fly to suit the speed of the connection as it changes over time.

See also: Buffering, HTTP Live Streaming, HTTP Smooth Streaming, MPEG-DASH

ADC or A/D

Analog to Digital Conversion (or Converter). Also referred to as digitization or quantization. The conversion of analog signals into digital data, normally for subsequent use in digital equipment. For TV, samples of audio and video are taken, the accuracy of the process depending on both the sampling frequency and the resolution of the analog amplitude information: how many bits are used to describe the analog levels. For TV pictures with modern cameras 14 bits or higher is normally used behind the image sensor; for sound, 16, 20 or 24 bits are common. The ITU-R BT.601 standard defines the sampling of SD video components based on 13.5 MHz, and AES/EBU defines sampling of 44.1 (used for CDs) and 48 kHz for audio. For pictures the samples are called pixels, which contain data for brightness and color. HD and UHD video formats use higher sampling rates and generally more bit depth.
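
A minimal sketch of the quantization step only (illustrative; real converters also involve anti-alias filtering and dithering):

```python
def quantize(level, bits):
    """Map an analog level in the range 0.0-1.0 to an integer code of the
    given bit depth - the 'resolution' part of A/D conversion."""
    max_code = (1 << bits) - 1        # 255 for 8 bits, 65535 for 16 bits
    return round(level * max_code)

print(quantize(0.5, 8))     # -> 128
print(quantize(0.5, 16))    # -> 32768
```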

See also: AES/EBU, Binary, Bit, Into digits (Tutorial 1), Pixel

ADSL

Asymmetrical Digital Subscriber Line. Working on the copper ‘local loop’ originally installed to connect phones to the exchange for voice calls, ADSL adds a broadband downstream channel (to the user) of up to 8 Mb/s and a narrower upstream channel (from the user) of 128-1024 kb/s, according to class. As the upstream and downstream speeds are different, the service is called ‘asymmetrical’, a widely used approach that keeps the customers’ equipment technically simpler and cheaper, with the larger, more complex equipment at the supplier (e.g. telephone and TV services, MPEG compression, etc). Exactly how fast an ADSL circuit can run ultimately depends on the performance of the line (including the customer’s own wiring), and is often dictated by the distance from the telephone exchange where the DSLAM terminates the line; the highest speeds are usually only available within 1.5 km of the DSLAM. The service is normally always-on, with no need to dial up. Its uses include high-speed Internet connections and streaming video.

ADSL-2 can run at up to 12 Mb/s over up to 2.5 km, and ADSL-2+ can deliver 24 Mb/s over up to 1.5 km; putting two services together (bonding) effectively doubles these rates (all distances are approximate). These speeds are sufficient to carry live SD or HD provided that the service is continuous, or the content can be recorded before viewing.

See also: Broadband, DSL, DSLAM

AES/EBU

The Audio Engineering Society (AES) and the EBU (European Broadcasting Union) together have defined a standard for Digital Audio, now adopted by ANSI (American National Standards Institute). Commonly referred to as ‘AES/EBU’ and officially as AES3, this digital audio standard permits a variety of sampling frequencies, for example CDs at 44.1 kHz, or DATs and digital VTRs at 48 kHz. 48 kHz is widely used in broadcast TV production although 32-192 kHz are allowed. One cable and connector, usually an XLR, carries two channels of digital audio.
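
As a worked example of the data rate involved, a two-channel AES3 stream at 48 kHz carries two 32-bit subframes per sample period, which the quick sketch below illustrates:

```python
sampling_rate = 48_000        # Hz, the common broadcast audio rate
bits_per_subframe = 32        # 24 audio bits plus preamble and status bits
channels = 2                  # one AES3 connection carries a stereo pair

line_rate = sampling_rate * bits_per_subframe * channels
print(line_rate)              # -> 3072000 bits/s, i.e. 3.072 Mb/s on the cable
```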

See also: Word clock

Website: www.aes.org

Aliasing

Undesirable ‘beating’ effects caused by the presence of frequencies in an analog input signal that are too high for the sampling rate used when converting it into digits. Passing the input through a suitable low-pass filter, removing all frequencies above half the analog-to-digital converter’s (ADC) clock rate (the Nyquist frequency), solves the problem. Examples of aliasing include:

1) Temporal aliasing – e.g. wagon wheel spokes apparently reversing, also movement judder seen in the output of standards converters with insufficient temporal filtering.

2) Raster scan aliasing – twinkling effects on sharp boundaries such as horizontal lines. Due to insufficient filtering this vertical aliasing, and its horizontal equivalent, are often seen on the output of lower quality video processing equipment, such as poor DVEs, as detailed images are re-sized.

The appearance of ‘steppiness’ or ‘jaggies’ on poorly filtered images with near-horizontal lines in a TV picture is also referred to as aliasing.
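
A small sketch of how an under-sampled tone ‘folds’ back below the Nyquist limit (illustrative arithmetic only; the aliased_frequency() helper is hypothetical):

```python
def aliased_frequency(f_in, f_sample):
    """Frequency that appears after sampling a pure tone of f_in Hz at
    f_sample Hz without an anti-alias (low-pass) filter in front."""
    return abs(f_in - f_sample * round(f_in / f_sample))

# A 40 kHz tone sampled at 48 kHz (Nyquist limit 24 kHz) folds back to 8 kHz.
print(aliased_frequency(40_000, 48_000))   # -> 8000
# Sampled fast enough, the same tone is unaffected.
print(aliased_frequency(40_000, 96_000))   # -> 40000
```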

See also: Anti-aliasing, Interpolation (temporal), Interpolation (spatial), Into digits (Tutorial 1), Nyquist (frequency)

AMWA

The Advanced Media Workflow Association is an open, community-driven forum that creates specifications and technologies for networked media workflows. These cover both business aspects and the development of standards for better technical interoperability. It focuses on file-based workflows to help content creators and distributors in the film, television, advertising, Internet and post-production industries.

In 2014 AMWA, working with the UK’s Digital Production Partnership, created a standard specification for file delivery from post-production to the broadcaster. The resulting AS-11 contribution application specification defines a constrained MXF file, video and audio codecs, and the required metadata.

See also: DPP

Anaglyph (Stereoscopic)

A type of stereoscopy in which the left eye and right eye images are separated by color filtering and then superimposed as a single image rather than two separate images. Each eye sees only the required image through the use of complementary colored filters (e.g. red and green or red and cyan). Anaglyph glasses have been popular over the years for viewing 3D comics and some 3D films (particularly on VHS and DVD).
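
A minimal sketch of the idea, assuming NumPy and two equal-sized RGB frames (real anaglyph encoders apply more carefully tuned color matrices):

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine left/right RGB frames (H x W x 3 uint8 arrays) into one image:
    red from the left eye, green and blue from the right eye, to be viewed
    through red/cyan glasses."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # take the red channel from the left eye
    return out

# Usage (hypothetical frames): frame = red_cyan_anaglyph(left_frame, right_frame)
```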

See also: 3D

Anamorphic

Generally refers to the use of 16:9 aspect ratio pictures in a 4:3 SDTV system. For example, anamorphic supplementary lenses are used to change the proportions of the captured image to 16:9. These horizontally squashed images can then fit onto the surface of a 4:3 sensor. Outputs from 16:9 cameras and telecines produce an ‘anamorphic’ signal which is electrically the same as when working with 4:3 images, but will appear horizontally squashed if displayed at 4:3 aspect ratio.

The alternative way of carrying 16:9 pictures within 4:3 systems is letterbox. Letterbox has the advantage of showing the correct 16:9 aspect ratio on 4:3 displays, however the vertical resolution is then less than when using 16:9 anamorphic.

Cinema film is sometimes printed with anamorphic frames, allowing widescreen presentations from, typically, 4:3 images projected via a suitable anamorphic lens.

The major use of anamorphic in TV occurred when 4:3 SD cameras were used to capture 16:9 images. Now 16:9 cameras are widely available, the use of anamorphic techniques is increasingly rare.

See also: Aspect ratio – of pictures

Answer print

The answer print, also called the first trial print, is the first print made from edited film and sound track. It includes fades, dissolves and other effects. It is used as the last check before running off the release prints from the internegatives.

Anti-aliasing

Techniques to smooth the aliasing effects created by poor filtering or other shortcuts; for example, crude character generation or low-quality video processing may produce aliased pictures. Anti-aliasing can then be applied to reduce the effect and improve the look of the image. A better approach is to avoid aliasing in the first place; with good modern technology readily available, serious aliasing should not occur.

See also: Aliasing, Interpolation (spatial), Interpolation (temporal)

API

Application Programming Interface; a set of interface definitions (functions, subroutines, data structures or class descriptions) which provide a convenient interface to the functions of a subsystem. They also simplify interfacing work by insulating application programmers from minutiae of the implementation.

Arbitrated Loop (AL)

A technique used on computer networks to ensure that the network is clear before a fresh message is sent. When it is not carrying data frames, the loop carries ‘keep-alive’ frames. Any node that wants to transmit places its own ID into a ‘keep-alive’ frame. When it receives that frame back it knows that the loop is clear and that it can send its message.
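
A toy sketch of the arbitration idea (much simplified; the real Fibre Channel arbitrated loop uses defined primitive signals and priority rules rather than this hypothetical helper):

```python
from collections import deque

def clear_to_send(loop, node_id):
    """Toy model: the node stamps its ID into a circulating 'keep-alive'
    frame; when that frame arrives back unchanged, the loop is idle and
    the node may transmit its message."""
    loop.append({"type": "keep-alive", "owner": node_id})
    frame = loop.popleft()                  # the frame returns to this node
    return frame.get("owner") == node_id    # True means the loop is clear

print(clear_to_send(deque(), "node-7"))     # -> True on an otherwise idle loop
```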

See also: Fibre Channel

ARC

Aspect Ratio Converters change picture aspect ratio, usually between 16:9 and 4:3. Other aspect ratios, such as 14:9, may also be catered for, and custom values are often available. Technically, the operation involves independent horizontal and vertical resizing, and there are a number of choices for the display of 4:3 originals on 16:9 screens and vice versa (e.g. letterbox, pillarbox, full height and full width). Whilst changing the aspect ratio of pictures, the objects within them should retain their original shape, with the horizontal and vertical axes expanded or contracted equally.
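
A small sketch of the geometry behind letterbox and pillarbox display (hypothetical helper; it simply applies equal scaling on both axes so objects keep their shape):

```python
def fit_without_distortion(src_w, src_h, dst_w, dst_h):
    """Scale a picture to fit a destination raster while keeping objects
    their original shape (equal scaling on both axes); the unused area is
    filled with black bars - letterbox or pillarbox."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 16:9 original (1024 x 576 square pixels) letterboxed on a 4:3 raster
# of 768 x 576 square pixels.
print(fit_without_distortion(1024, 576, 768, 576))   # -> (768, 432)
```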

See also: Aspect ratio

Archive

Long-term storage of information. Pictures, sound and metadata stored in digital form can be archived and recovered without loss or distortion. The storage medium must be both reliable and stable and, as large quantities of information need to be stored, low cost is of major importance. Currently many are using magnetic tape. However there is also ongoing use of optical disks including DVD and Blu-Ray Disc formats and further developments are emerging.

Today, the increasingly IP and Ethernet-connected environments involving many video formats, including digital film, mean that data recorders make good sense. Archiving systems built around the current LTO-5 and LTO-6 data recorders are increasingly proving to be efficient and effective for media applications. The formats include backward compatibility to the previous LTO type. And with the tape cartridges offering 1.5 and 2.5 TB for LTO-5 and LTO-6 respectively, there is useful capacity.

Removable CD-size optical discs potentially offer quick access and denser storage as well as long-term reliability. The Archival Disc system, expected in 2015 from Sony and Panasonic, offers 300 GB with a roadmap to 1TB storage per disc.

For archiving stills and graphics there is far less need for strong compression, as the volume of data will typically be much smaller than that for video. CDs and DVDs are convenient and robust, giving near-instant access to all stored pictures.

Traditionally, material is archived after its initial use, at the end of the process. More recently some archiving has moved to the beginning of, or even before, the production process. An example is news where, in some cases, new material is archived as events happen and subsequent editing then accesses this archived material.

With the worldwide expansion of television channels, including online and mobile services, archives are increasingly used to help fulfill the huge demand for programming.

See also: AAF, Data recorders, LTO, Optical disks

Areal density

The density of data held on an area of a recording medium’s surface. This is one of the parameters that manufacturers of disk drives and tape recorders constantly strive to increase. For example, one currently available high-capacity drive from Seagate achieves around 1 Tb/square inch. Compared to the 130 Gb figure reported here in the 2008 edition of this book, this shows not only the continuing high rate of development of disk-drive technology, but also that seven years is too long between editions of the Digital Fact Book! With development continuing apace, yet greater capacities can be expected in future editions.

See also: Hard disk drives

Website: www.seagate.com

ARPU

Average Revenue Per Unit, usually used by telecoms companies to describe the money made from each ‘unit’, or customer!

Artifact

Particular visible effects on images which are a direct result of some technical limitation. Artifacts are generally not described by traditional methods of signal evaluation. For instance, the visual perception of contouring in a picture cannot be described by a signal-to-noise ratio or linearity measurement.

ASCII

American Standard Code for Information Interchange. This is a standard computer character set used throughout the industry to represent keyboard characters as digital information. The basic ASCII table contains 128 codes (0-127), covering the upper and lower case letters, digits, punctuation and non-displayed controls such as carriage return, line feed, etc. Variations and extensions of the basic code are used in special applications.
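
For example, ‘A’ is code 65, ‘a’ is 97 and ‘0’ is 48, as most programming languages will confirm directly:

```python
# The ASCII codes behind a few familiar characters and controls.
print(ord('A'), ord('a'), ord('0'))     # -> 65 97 48
print(ord('\r'), ord('\n'))             # carriage return 13, line feed 10
```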

ASIC

Application Specific Integrated Circuit. A custom-designed integrated circuit with functions specifically tailored to an application. ASICs replace the many discrete devices that could otherwise do the job, but work up to ten times faster with reduced power consumption and increased reliability. They are now only viable for very large-scale, high-volume products due to high start-up costs and their inflexibility; other programmable devices, such as FPGAs (field programmable gate arrays), offer more flexible and cheaper options for small-to-medium production volumes.

See also: PLD

Aspect Ratio

1. of pictures. The ratio of length to height of pictures. All TV screens used to be 4:3, i.e. four units across to three units in height, but now all new models are widescreen, 16:9. Pictures presented this way are believed to absorb more of our attention and have obvious advantages in certain productions, such as sport. In the change towards 16:9 some in-between ratios were used for transmission, such as 14:9.

2. of pixels. The aspect ratio of the area of a picture described by one pixel. The ITU-R BT.601 digital coding standard for SD defines luminance pixels which are not square. In the 525/60 format there are 486 active lines each with 720 samples of which only 711 may be viewable due to blanking. Therefore the pixel aspect ratios on 4:3 and 16:9 screens are:

486/711 x 4/3 = 0.911 (tall)
486/711 x 16/9 = 1.215 (wide)

For the 625/50 formats there are 576 active lines each with 720 samples of which 702 are viewable so the pixel aspect ratios are:

576/702 x 4/3 = 1.094 (wide)
576/702 x 16/9 = 1.458 (wider)

All HD digital image standards define square pixels.

Account must be taken of pixel aspect ratios when, for example, executing DVE moves such as rotating a circle. The circle must always remain circular and not become elliptical. Another area where pixel aspect ratio is important is in the movement of images between platforms, such as computers and television systems. Computers generally use square pixels so their aspect ratio should be adjusted for SD television-based applications.
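
The pixel aspect ratios quoted above can be reproduced with a small helper (a sketch only; the 711 and 702 viewable-sample counts are those given in this entry):

```python
def pixel_aspect(active_lines, viewable_samples, screen_aspect):
    """Width-to-height ratio of one pixel for an SD raster of the given
    size shown at the given screen aspect ratio."""
    return active_lines / viewable_samples * screen_aspect

print(round(pixel_aspect(486, 711, 4 / 3), 3))   # -> 0.911  (525/60 at 4:3)
print(round(pixel_aspect(576, 702, 4 / 3), 3))   # -> 1.094  (625/50 at 4:3)
```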

See also: ARC, Pixel

Asynchronous (data transfer)

Carrying no separate timing information. There is no guarantee of the time a transfer will take, but it uses only modest resources as these are shared with many other users. A transfer is ‘stop-go’, depending on handshakes to check data is being received before more is sent. Ethernet is asynchronous. Being indeterminate, asynchronous transfers of video files are suited to moves between storage devices, such as disks, but are not ideal for ‘live’ operations.

See also: Ethernet, Isochronous, Synchronous

ATM

1) Automatic Teller Machine (aka hole in the wall): a place to get cash.

2) Asynchronous Transfer Mode provides connections for the reliable transfer of streaming data, such as television. With speeds ranging up to 10Gb/s it is mostly used by telcos. 155 and 622Mb/s are most appropriate for television operations. Unlike Ethernet and Fibre Channel, ATM is connection-based: offering good Quality of Service (QoS) by establishing a path through the system before data is sent.

Sophisticated lower-level ATM Adaptation Layers (AAL) offer connections for the higher layers of the protocol to run on. AAL1 supports constant bit-rate, time-dependent traffic such as voice and video. AAL3/4 supports variable bit-rate, delay-tolerant data traffic requiring some sequencing and/or error detection. AAL5 supports variable bit-rate, delay-tolerant, connection-oriented data traffic.

Website: www.broadband-forum.org

ATSC

The (US) Advanced Television Systems Committee. Established in 1982 to co-ordinate the development of voluntary national technical standards for the generation, distribution and reception of high definition television. In 1995 the ATSC published “The Digital Television Standard” which describes the US Advanced Television System, referred to as ATSC A/53. This uses MPEG-2 compression for the video and AC-3 for the audio and includes a range of video resolutions (as described in ATSC Table 3) and audio services (Table 2). It uses 8 and 16 VSB modulation respectively for terrestrial and cable transmission.

ATSC M/H (mobile/handheld) allows digital TV to be received by mobile devices. Using 8-VSB modulation, it is subject to the inherent problems of Doppler shift and multipath interference, so additional channel coding has been added.

ATSC 2.0 is a backward-compatible revision of the standard, enabling interactive and hybrid television by connecting the TV with the Internet and allowing interactive elements. Video compression uses the more efficient AVC. There is also audience measurement, enhanced programming guides, video-on-demand services, and the ability to store information on new receivers, including non-realtime content.

ATSC 3.0 is work in progress. Many ideas are being examined, including the use of OFDM modulation (as used in DVB) in place of VSB, HEVC video compression, and the ability to deliver HD, 4K and 3D, and probably SD too. Watch this space!

See also: VSB, Table 3, Dolby Digital (DD/AC-3), MPEG-2

Website: www.atsc.org

Auditory masking

The psycho-acoustic phenomenon of human hearing where what can be heard is affected by the components of the sound. For example, a loud sound will mask a soft sound close to it in frequency. Audio compression systems such as Dolby Digital and MP3 audio make use of auditory masking as their basis and only code what can be heard by the human ear.

See also: Dolby Digital, MP3

Autostereoscopic

Screens that allow viewers to see 3D images without wearing special 3D glasses are referred to as autostereoscopic, providing so-called ‘glasses-free’ 3D viewing. Typically the displays use a lenticular filter on the front of the screen, rather like those sometimes used on postcards found in tourist shops. The filter is designed so that our left eye sees the left image on the screen and our right eye the right image. With this type of system there are sweet spots, typically called ‘zones’ or ‘views’, where the 3D can be appreciated; outside those areas the 3D is not seen, so you have to pick your spot and stay there. A way to improve matters is to offer more sweet spots: today six or eight is common, and some screens offer more. Philips and Dolby have been working together; their autostereoscopic screen provides 14 views, which some say is so many that you can always see the 3D. Another solution is to add a camera to the screen so it can see where its viewers are and adjust the left and right images so that each viewer sees the 3D. This can work for several viewers.

AVB

Audio Video Bridging; since 2012 the work has continued as the Time-Sensitive Networking Task Group, which aims to provide specifications that allow time-synchronized, low-latency streaming services through IEEE 802 networks.

See also: Ethernet

AVC-Intra

A family of two HD codecs from Panasonic that were designed to be compliant with H.264/MPEG-4 AVC, and use only intra-frame coding (GOP of 1), making the coded material easily editable at every frame. AVC-Intra was aimed at professional users and was adopted by Panasonic for its P2 cameras (AVC-Intra P2), offering considerably more efficient compression than the original DVCPRO HD codec – maybe by as much as 2:1. This was at a time when long GOP coding was being used in products including HDV and XDCAM HD. With increased coding efficiency some believed the use of long GOP coding in professional recorders would fade.

There are two classes: AVC-Intra 50 and AVC-Intra 100. The former produces a nominal 50 Mb/s for 1920 x 1080 and 1280 x 720 formats using 4:2:0 10-bit sampling, with the frames horizontally reduced to 0.75 of the original line length. AVC-Intra 100 produces up to 100 Mb/s with 4:2:2 sampling for the same two frame formats, but without any size reduction. Both codecs offer a range of popular frame rates and are now included in Panasonic’s AVC-Ultra range.

See also: DVCPRO P2, MPEG-4, XAVC

Website: www.avchd-info.org

AVCHD

Advanced Video Codec High Definition, a joint development between Panasonic and Sony, applies MPEG-4’s AVC video coding and Dolby Digital (AC-3) or linear PCM audio coding, to meet the needs of the high definition consumer market with 1080i and 720p formats. The use of AVC provides at least twice the efficiency of MPEG-2 coding, used in HDV and MiniDV, to offer longer recording times or better pictures – or both. Possible recording media include standard DVD disks, flash memory and hard drives.

Further developments have expanded the applications of AVCHD technology. The AVCHD Format Version 2.0 adds specifications for 3D and for 1080/60P and 50P, along with the supporting trademarks AVCHD 3D, AVCHD Progressive and AVCHD 3D/Progressive.

AVI (.avi)

Audio Video Interleave, a Microsoft multimedia container format introduced in 1992 as part of its Video for Windows technology. AVI files can hold audio and video data in a standard container and provide synchronous video/audio replay. Most AVI files also use the OpenDML file format extensions, forming AVI 2.0 files.

Some consider AVI outdated, as there are significant overheads using it with popular MPEG-4 codecs that seemingly unduly increase file sizes. Despite that, it remains popular among file-sharing communities, probably due to its high compatibility with existing video editing and playback software, such as Windows Media Player.

AVS and AVS+

Proposed as a national standard in 2004, Audio Video Standard (AVS) is an audio and video compression system developed by the Audio Video Coding Standard Workgroup of China. It was designed to replace AAC audio and H.264/MPEG-4 AVC video at a lower cost than the systems commonly used in the rest of the world, but it is not much used outside China. AVS+ was designed to provide improved performance with reduced complexity.

In 2013 work started on AVS2, designed to compete with HEVC (H.265).

Axis (x,y,z)

Used to describe the three-dimensional axes set at right angles to each other, available in DVE manipulations. Viewing the picture in its original position, x lies across the screen left to right, y up the screen bottom to top and z points into the screen. Depending on the power of the equipment and the complexity of the DVE move, several hierarchical sets of xyz axes may be in use at one time. For example, one set may be referred to the screen, another to the picture, a third offset to some point in space (reference axis) and a fourth global axis controlling any number of objects together.
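
A minimal sketch of how such hierarchical axes can be composed (illustrative only, using plain rotation and translation matrices; the helper names are hypothetical):

```python
import math

def rot_z(degrees):
    """4x4 matrix rotating about the z axis (the axis pointing into the screen)."""
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def translate(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Spin a picture 30 degrees about a reference axis offset to (100, 50, 0):
# move that point to the origin, rotate, move it back. A global transform
# could then be applied on top to move several such objects together.
m = matmul(translate(100, 50, 0), matmul(rot_z(30), translate(-100, -50, 0)))
```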

[Diagram: axes controlling picture movement]

See also: DVE, Keyframe