Category Archives: M

MADI

Multichannel Audio Digital Interface, widely used among audio professionals, defines the data format and electrical characteristics of an interface carrying multiple digital audio channels, as specified in the Audio Engineering Society’s AES10-2008. It is popular for its large channel capacity: 28, 56, or 64 channels at up to 96 kHz, 24 bits per channel, and connections of up to 3000m over optical fiber (or 100m over coax).
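
As a sanity check on those capacity figures, the raw audio payload can be worked out directly. The sketch below uses 64 channels at 48 kHz; the figures are illustrative and link framing overhead is ignored.

```python
# Rough MADI payload arithmetic (illustrative; ignores link framing overhead).
def audio_payload_mbps(channels, sample_rate_hz, bits_per_sample):
    """Raw audio data rate in megabits per second."""
    return channels * sample_rate_hz * bits_per_sample / 1e6

# 64 channels of 24-bit audio at 48 kHz:
print(audio_payload_mbps(64, 48_000, 24))  # 73.728 Mb/s of raw audio data
```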

MAM

Media Asset Management is used in modern broadcast and post production that increasingly depends on file-based operation rather than the use of tape. MAM can track and manage all aspects of the use and repurposing of media assets so it is clear which assets have been used where. This can help both in technical operations such as adjusting an edit, as well as commercial requirements such as billing.

Master

The master of an edited program holds the material that is used as the source for making all deliverables and other versions (for language, subtitles etc.). A master of sufficient quality to meet the needs of all target customers helps ensure that every deliverable made from it is good.

For international distribution, 1920 x 1080 24P (the same frame rate as movies) is often regarded as the best choice to ensure good quality deliverables for both HD and SD requirements. A high-cost production aimed at a world market may require a 4K UHD master. Also, while 24P can be a good frame rate for drama, many other genres, including sport, require higher frame rates such as 50 or 60 Hz interlaced or progressive.

Supplying the best to all media platforms now often requires more than the traditional master can provide. A better form of master may be an uncommitted one, where all the original source material and all the tools used and their settings, are available so that any aspect of editing and finishing can be revisited to make the right deliverables for everyone.

See also: Deliverables

MediaFLO

A mobile TV/multimedia system once offered by Qualcomm. It was the technology behind FLO TV which went out of service in 2011.

Metadata

Data about data. In the media world it is data about the video and audio, but not the actual video or audio themselves. This is important for labeling and finding data – either in a ‘live’ data stream or an archive. Within studios and in transmission, digital technology allows information to be added. Some believe metadata will revolutionize every aspect of production and distribution. Metadata existed long before digital networks; video timecode and film frame numbers are but two examples. Today the metadata can also include detail about the editing, color correction and effects work. Such history information allows a more open choice of equipment and the ability to retrace the detail of post production – should any changes be required.

See also: AAF, Content, Essence, History, MXF

MHEG-5

MHEG is the Multimedia and Hypermedia Experts Group. MHEG-5 is an open standard for TV middleware – or application program interface (API) – that enables broadcasters to offer interactive / hybrid services with a wide audience appeal, as well as video. Its TV variant, the MHEG Interaction Channel (MHEG-IC), is used by the UK’s Freeview and Freesat and is also specified by Freeview NZ and Freeview Australia.

Website: www.impala.org

MHP

Multimedia Home Platform – DVB-MHP is open middleware from the DVB project for interactive television. It enables the reception and execution of interactive, Java-based applications on a TV set that can be delivered over a broadcast channel, together with the audio and video streams. The applications can provide information services such as games, interactive voting, e-mail, SMS and shopping. Some may require using an IP return channel.

Early deployments included DVB-T in Italy, DVB-S in Korea and Poland and DVB-C in Belgium. There have also been trials in other countries.

Middleware

Software, not hardware. This exists above an operating system to provide a middle layer offering APIs for applications programmers but it is not an application itself. An example is Multimedia Home Platform (MHP) which is widely used in set-top boxes.

Miniaturization (Stereoscopic)

Confusing visual cues in a stereoscopic scene that can make an object appear to be the ‘wrong’ size, i.e. give the impression of being strangely shrunk. This is caused by choosing a camera interaxial distance greater than 63.5 mm relative to the focal length of the camera lenses, e.g. shooting with very widely spaced cameras. Subjectively this makes the audience feel like a giant looking at tiny objects, which is why miniaturization is sometimes referred to as Lilliputianism.

See also: Gigantism, Interocular

Mobile TV

This is where broadcasters and mobile (cell) telcos come together to provide consumers with access to video content on their mobile phones and tablet computers. This includes downloads to flash memory, 3G and 4G streaming and mobile on-demand broadcast TV. The landscape is complex as there are many competing formats including DVB-H, DVB-SH, MediaFLO, ISDB-T, S-DMB/T-DMB in different regions and backed by different hardware manufacturers, technology suppliers, content providers and mobile operators. Also there are any number of screen resolutions and aspect ratios to be catered for. China is adding its homegrown China Multimedia Mobile Broadcasting (CMMB). In Europe, the European Union has decided to support the DVB-H standard for mobile TV. DVB-H uses a separate broadcast network, rather than a phone network, to send TV content to phones or mobile devices.

Modem

Short for modulator/demodulator, it is a two-way communications interface working between a communications channel, such as a DSL line, and a machine such as a computer. That is how billions of people access the internet. Television itself is distributed live via a modulator in the transmission chain, and a demodulator at the receiving end.

The efficiency of modern modems over analog is worth noting. In analog days television generally used PAL or NTSC ‘coding’, which compressed the three channels R, G and B into one signal that was transmitted in one TV channel. Today there are very efficient compression systems such as MPEG-2, MPEG-4, H.264 and H.265 used in DVB, ATSC and other digital systems. The combination of the latest video compression, H.265, and the efficiency of DVB-T2 modulation allows up to 32 SDTV channels, or 4 HD channels, or one 4K UHD channel, to be delivered in the space (bandwidth) that was occupied by one analog PAL 625-line channel.
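
The channel-count arithmetic above is a simple division of multiplex payload by per-channel bit rate. The payload and per-channel figures below are assumptions for illustration, since real multiplex capacity depends on the modulation parameters chosen.

```python
# Illustrative only: real DVB-T2 payload depends on modulation parameters.
def channels_per_mux(mux_capacity_mbps, per_channel_mbps):
    """How many channels of a given bit rate fit in a multiplex payload."""
    return int(mux_capacity_mbps // per_channel_mbps)

MUX_MBPS = 40.0                          # assumed DVB-T2 multiplex payload
print(channels_per_mux(MUX_MBPS, 1.25))  # 32 SD channels at ~1.25 Mb/s (H.265)
print(channels_per_mux(MUX_MBPS, 10.0))  # 4 HD channels at ~10 Mb/s
```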

Moiré

Refers to a distortion that appears as patterning where two similar fine patterns overlap; for example, two fine gratings can appear to produce diagonal lines. The effect can appear even when one of the patterns is normally invisible, such as the sampling frequency of the image. In a good image system this should be avoided by use of filters but, for instance, the fine detail of a grid pattern may suddenly collapse and appear as curves or diagonal lines as the camera zooms in and the pattern detail nears the digital sampling frequency.

The occurrence of Moiré patterning changes with the picture scanning system. The higher the picture’s resolution, the finer the pattern that will cause Moiré. For example, a quite small dog-tooth pattern worn by a news presenter could cause it in SD, but in HD the pattern has to be about twice as fine to do so.
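
The way fine detail folds back into a coarser pattern once it nears the sampling frequency can be modeled with the standard aliasing formula. This is a simplified one-dimensional sketch; frequencies are in arbitrary units relative to the sampling rate.

```python
def alias_frequency(f, fs):
    """Apparent frequency after sampling detail of frequency f at rate fs.
    Detail beyond fs/2 (the Nyquist limit) folds back as a coarser pattern."""
    return abs(f - fs * round(f / fs))

print(alias_frequency(0.45, 1.0))  # below Nyquist: reproduced correctly as 0.45
print(alias_frequency(0.90, 1.0))  # beyond Nyquist: folds back as a ~0.1 Moire beat
```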

Moore’s Law

A prediction for the rate of development of modern electronics. This has been expressed in a number of ways but in general states that the density of information storable in silicon roughly doubles every year; or that the performance of silicon will double every eighteen months, with proportional decreases in cost. For more than two decades this prediction has held true. Moore’s Law initially talked about silicon but it can be applied to other areas, such as disk drive capacity, which has doubled every two years, or faster, since 1980 and still continues unabated.
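
The doubling periods quoted above compound quickly; a two-line sketch makes the arithmetic concrete (the periods are the commonly quoted approximations, not exact figures).

```python
def growth_factor(years, doubling_period_years):
    """Total growth after repeated doubling, e.g. of capacity or density."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(10, 2))    # 32.0 - a 32x capacity increase in a decade
print(growth_factor(18, 1.5))  # 4096.0 - doubling every 18 months for 18 years
```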

See: Disk drives

MOS

Media Object Server (protocol) – a communications protocol for newsroom computer systems (NCS) and broadcast production equipment. It is a collaborative effort between many companies to enable journalists to see, use, and control a variety of devices from their desktop computers, effectively allowing access to all work from one screen. Such devices include video and audio servers and editors, still stores, character generators and special effects machines.

MOS uses a TCP/IP-based protocol and is designed to allow integration of production equipment from multiple vendors with newsroom computers via LANs, WANs and the Internet. It uses a ‘one-to-many’ connection strategy – multiple MOSs can be connected to a single NCS, or a single MOS to many NCSs.
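
MOS messages are XML documents exchanged over TCP/IP. The sketch below builds and parses a simplified heartbeat-style message; the element names follow the MOS style but are illustrative here, not taken verbatim from the specification at mosprotocol.com.

```python
import xml.etree.ElementTree as ET

def build_heartbeat(mos_id, ncs_id, timestamp):
    """Assemble a MOS-style XML message (element names are illustrative)."""
    root = ET.Element("mos")
    ET.SubElement(root, "mosID").text = mos_id
    ET.SubElement(root, "ncsID").text = ncs_id
    beat = ET.SubElement(root, "heartbeat")
    ET.SubElement(beat, "time").text = timestamp
    return ET.tostring(root, encoding="unicode")

msg = build_heartbeat("mos.example.com", "ncs.example.com", "2015-01-01T12:00:00")
parsed = ET.fromstring(msg)       # the receiving end parses it back
print(parsed.find("mosID").text)  # mos.example.com
```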

Website: www.mosprotocol.com

Motion Adaptive Processing

Motion adaptive processing is responsive to the output of a motion detector.  The motion detector may work on a global or a local basis in the picture, and may be binary or may measure on a continuous scale the amount of motion, or the confidence that an object or region is moving.  Motion adaptive processing controls a mix between processing that is optimized for static detail and processing that is optimized for moving areas.
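
The static/moving mix described above can be sketched as a per-pixel blend controlled by the detector’s motion confidence. This is a toy model with scalar pixel values; real systems operate on whole pictures.

```python
def motion_adaptive_mix(static_value, moving_value, motion_confidence):
    """Blend two processed pixel values by motion confidence in [0, 1]:
    0 = trust the static-detail path, 1 = trust the motion-optimized path."""
    k = max(0.0, min(1.0, motion_confidence))
    return (1.0 - k) * static_value + k * moving_value

print(motion_adaptive_mix(100.0, 80.0, 0.0))  # 100.0 - static area
print(motion_adaptive_mix(100.0, 80.0, 1.0))  # 80.0  - clearly moving
print(motion_adaptive_mix(100.0, 80.0, 0.5))  # 90.0  - uncertain, mid mix
```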

Motion Compensated Processing

Motion compensated processing is responsive to the output of a motion estimator.  A motion estimator usually works on a local basis in the picture and measures not only the existence of motion but also its speed and direction.  Motion compensated processing typically controls spatiotemporal filters that track the motion of each part of the picture.
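
A minimal sketch of the idea: use the estimated vector to fetch the previous-frame pixel that the current pixel moved from, then filter along that trajectory. Whole-pixel vectors and a two-frame average are assumed here; real systems use sub-pixel vectors and longer filters.

```python
def mc_temporal_filter(prev_frame, curr_frame, x, y, vector):
    """Average the current pixel with the previous-frame pixel it came from,
    following the estimated motion vector (dx, dy) in whole pixels."""
    dx, dy = vector
    px = min(max(x - dx, 0), len(curr_frame[0]) - 1)  # clamp to picture edges
    py = min(max(y - dy, 0), len(curr_frame) - 1)
    return (curr_frame[y][x] + prev_frame[py][px]) / 2.0

prev = [[10, 20, 30],
        [40, 50, 60]]
curr = [[20, 30, 10],
        [50, 60, 40]]
# The detail at (0, 0) moved two pixels right, so pixel (2, 0) filters with it:
print(mc_temporal_filter(prev, curr, 2, 0, (2, 0)))  # 10.0
```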

Motion JPG

This is JPEG compression applied to digital video, where each frame or field (half frame) is individually compressed using JPEG encoding to reduce the amount of data. This is widely used in areas including video cameras, some webcams and in non-linear editing systems.

See also: JPEG, Field

MP3

A high-performance, perceptual audio compression coding scheme which exploits the properties of the human ear and brain while trying to maintain perceived sound quality. MPEG-1 and 2 define a family of three audio coding systems of increasing complexity and performance – Layer-1, Layer-2 and Layer-3. MP3 is shorthand for Layer-3 coding. MPEG defines the bitstream and the decoder but, to allow for future improvements, not an encoder. MP3 is claimed to achieve ‘CD quality’ at 112-128 kb/s – a compression of between 10 and 12:1. Not all listeners agree with that.
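
The quoted ratio follows from simple arithmetic against the CD source rate (16-bit stereo at 44.1 kHz); the sketch below reproduces it.

```python
def compression_ratio(source_kbps, coded_kbps):
    """How many times smaller the coded stream is than the source."""
    return source_kbps / coded_kbps

cd_kbps = 2 * 44_100 * 16 / 1000                  # stereo 16-bit 44.1 kHz = 1411.2 kb/s
print(round(compression_ratio(cd_kbps, 128), 1))  # 11.0 - about 11:1 at 128 kb/s
print(round(compression_ratio(cd_kbps, 112), 1))  # 12.6 - about 12.6:1 at 112 kb/s
```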

See also: Auditory masking

Website: www.mp3-tech.org

MPEG-1

A compression scheme designed to work at 1.2 Mb/s, the basic data rate of CD-ROMs, so that video could be played from CDs. Its quality is not up to modern standards and it is not much used.

MPEG-2

ISO/IEC 13818. A family of inter- and intra-frame compression systems designed to cover a wide range of requirements from ‘VHS quality’ all the way to HDTV through a series of compression algorithm ‘profiles’ and image resolution ‘levels’. With data rates from below 4 to 100 Mb/s, this family includes the compression system that currently delivers digital TV to homes and that puts SD video onto DVDs as well as putting HD onto 6.35mm videotape for HDV.

In all cases MPEG-2 coding starts with analyzing 8×8-pixel DCT blocks and applying quantizing to achieve intra-frame compression that is very similar to JPEG. This compression is referred to as I-frame only MPEG-2. Producing much higher compression involves analyzing the frame-to-frame movement of 16×16-pixel ‘macroblocks’ to produce vectors that show the distance and direction of macroblock movement. The accuracy of these vectors depends on the quality and efficiency of the coder. This vector data is carried in the P (predictive) and B (bi-directional predictive) frames that exist between I frames (see diagram). SDTV transmissions and DVDs typically contain two I-frames per second and use about 4 Mb/s or less – a big difference from the 180 Mb/s of uncompressed SD video. The set of images between I-frames is a Group of Pictures (GOP) – usually about 12 for 576/50I and 15 for 480/60I transmissions. These are called ‘long GOP’. The GOP length can vary during transmission – an I-frame may be forced at the start of a new sequence, such as after a video cut, or on other occasions where there is a big change at the input.

MPEG-2 12 frame GOP

*Note: for transmission the last ‘I’ frame is played out ahead of the last two ‘B’ frames to form the sequence I1, B1, B2, P1, B3, B4, P2, B5, B6, P3, I2, B7, B8
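
The reordering in the note can be sketched as follows. This models only the simplified sequence shown in the note (moving each I-frame ahead of the two B-frames that precede it); a real coder reorders every anchor frame ahead of the B-frames that depend on it.

```python
def reorder_for_transmission(display_order):
    """Move each 'I' frame ahead of the two 'B' frames that precede it,
    matching the simplified transmitted sequence in the note above."""
    out = []
    for frame in display_order:
        if frame.startswith("I") and [f[0] for f in out[-2:]] == ["B", "B"]:
            out.insert(len(out) - 2, frame)  # send the I before the two Bs
        else:
            out.append(frame)
    return out

gop = ["I1", "B1", "B2", "P1", "B3", "B4", "P2", "B5", "B6", "P3", "B7", "B8", "I2"]
print(reorder_for_transmission(gop))
# ['I1', 'B1', 'B2', 'P1', 'B3', 'B4', 'P2', 'B5', 'B6', 'P3', 'I2', 'B7', 'B8']
```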

Levels and profiles: MPEG-2 is a single compression standard that can operate on many different levels – picture source formats ranging from about VCR quality to full HDTV, and profiles – a collection of compression tools that make up a coding system. Current interest includes the Main Profile @ Main Level (MP@ML) covering current 525/60 and 625/50 broadcast television as well as DVD-video and Main Profile @ High Level (MP@HL) for HDTV. Besides the transmission/delivery applications which use 4:2:0 sampling, the 422 Profile (4:2:2 sampling) was designed for studio use and offers greater chrominance bandwidth which is useful for post production.

Blocking and ‘blockiness’: MPEG-2 artifacts generally show as momentary rectangular areas of picture with distinct boundaries. Their appearance generally depends on the amount of compression, the quality and nature of the original pictures as well as the quality of the coder. The visible blocks may be 8 x 8 DCT blocks or, most likely, ‘misplaced blocks’ – 16 x 16 pixel macroblocks, due to the failure of motion prediction/estimation in an MPEG coder or other motion vector system, e.g. a standards converter.

Audio: Digital audio compression uses auditory masking techniques. MPEG-1 audio specifies mono or two-channel audio, which may be Dolby Surround coded, at bit rates from 32 kb/s to 384 kb/s. MPEG-2 audio specifies up to 7.1 channels (but 5.1 is more common), rates up to 1 Mb/s, and supports variable bit-rate as well as constant bit-rate coding. MPEG-2 handles backward compatibility by encoding a two-channel MPEG-1 stream, then adding the 5.1/7.1 audio as an extension.

See also: Discrete 5.1 Audio, MP3

Websites:
www.mpeg.org
www.chiariglione.org/mpeg

MPEG-21

MPEG-21 (.m21 or .mp21), standardized as ISO/IEC 21000, creates descriptions for a multimedia framework to provide a ‘big picture’ of how the system elements relate to each other and fit together. The resulting open framework for multimedia delivery and consumption includes content creators and content consumers as focal points, to give creators and service providers equal opportunities in an MPEG-21 open market. This can also give consumers access to a large variety of content in a practical manner. MPEG-21 defines a Digital Item as a basic unit of transaction. It is a structured digital object, including a standard representation, identification and metadata.

Website: www.chiariglione.org/mpeg

MPEG-4

ISO/IEC 14496. MPEG-4 covers three areas: digital television, interactive graphics applications (synthetic content) and interactive multimedia (Web distribution and access to content). It provides the standardized technological elements enabling the integration of the production, distribution and content access of the three fields.

Since its first publication in 1999, MPEG-4 video compression has achieved its quality targets at ever-lower bit rates. Like MPEG-2, the compression is DCT-based and uses inter- and intra-frame compression, but it implements many refinements, such as a choice of block sizes and motion compensation accuracy of one-eighth of a pixel against MPEG-2’s half pixel.

MPEG-4 is guilty of generating too many names and versions. The highest quality MPEG compression technology is known by ISO and IEC as MPEG-4 AVC (Advanced Video Coding). It is also known by the ITU-T as H.264, or as MPEG-4 part 10. Notable predecessors are MPEG-4 part 2 (ASP) and H.263. Significantly, MPEG-4 AVC achieves up to a 64 percent bit rate reduction over MPEG-2 for the same quality and it opened possibilities for HD DVDs and transmission, etc., as well as room to offer more SD DTV channels, or more quality. MPEG-4 also specifies low bit rates (5-64 kb/s) for mobile and Internet applications with frame rates up to 15 Hz, and images up to 352 x 288 pixels.

MPEG-4 AVC video coding and decoding are far more complex than MPEG-2 but Moore’s Law absorbed that technical challenge. QuickTime and RealPlayer were among early adopters of MPEG-4. While established systems need to stick to their MPEG-2, most if not all later video services use MPEG-4.

The interactive multimedia side of MPEG-4 includes storage, access and communication as well as viewer interaction and 3D broadcasting. Aural and visual objects (AVOs) represent the content which may be natural – from cameras or microphones, or synthetic – generated by computers. Their composition is described by the Binary Format for Scene description (BIFS) – scene construction information to form composite audiovisual scenes from the AVOs. Hence, a weather forecast could require relatively little data – a fixed background image with a number of cloud, sun, etc, symbols appearing and moving, audio objects to describe the action and a video ‘talking head’ all composed and choreographed as defined by the BIFS. Viewer interactivity is provided by the selection and movement of objects or the overall point of view – both visually and aurally.

Audio: This builds on previous MPEG standards and includes High Efficiency Advanced Audio Coding (HE-AAC). This nearly doubled the efficiency of MPEG-4 Audio, improving on the original AAC and offers better quality for the same bit rate as the ubiquitous MP3 codec (from MPEG-2). Stereo CD-quality at 48 kb/s and excellent quality at 32 kb/s is reported. This is not a replacement for AAC, but rather a superset which extends the reach of high-quality MPEG-4 audio to much lower bit rates. High Efficiency AAC decoders will decode both types of AAC for backward compatibility.

DVB has approved two MPEG-4 codecs for use for broadcast transport streams: H.264/AVC video codec (MPEG-4 Part 10) and the High Efficiency Advanced Audio Coding (HE-AAC) audio codec. This mandates support of Main Profile for H.264/AVC SDTV receivers, with an option for the use of High Profile. The support of High Profile is mandated for H.264/AVC HDTV receivers.

Websites:
www.chiariglione.org/mpeg
www.m4if.org

MPEG-7

The value of information often depends on how easily it can be found, retrieved, accessed, filtered and managed. MPEG-7, formally named ‘Multimedia Content Description Interface’, provides a rich set of standardized tools to describe multimedia content. Both human users and automatic systems that process audiovisual information are within its scope. It was intended to be the standard for description and search of large volumes of audio and visual content – including that from private databases, broadcast and the Web. Applications include retrieval from digital and other libraries, broadcast channel selection, multimedia editing and multimedia directory services.

MPEG-7 offers a set of audiovisual Description Tools (the metadata elements, their structure and relationships, defined as Descriptors and Description Schemes). It specifies a Description Definition Language (DDL) so that material with associated MPEG-7 data can be indexed to allow fast and efficient searches. These searches permit not only text-based inquiries but also searches based on scene, motion and visual content. Material may include stills, graphics, 3D models, audio, speech and video as well as information about how these elements are combined. Besides uses in program-making, MPEG-7 could help viewers by enhancing EPGs and program selection.

Website: www.chiariglione.org/mpeg

MPEG

Moving Picture Experts Group. This is a working group of ISO/IEC for the development of international standards for compression, decompression, processing, and coded representation of moving pictures, audio and their combination. It has also extended into metadata. Four MPEG standards were originally planned but the accommodation of HDTV within MPEG-2 has meant that MPEG-3 is now redundant. MPEG-4 is very broad and extends into multimedia applications. MPEG-7 is about metadata and MPEG-21 describes a ‘big picture’ multimedia framework.

Websites:
www.chiariglione.org/mpeg
www.mpeg.org

MPEG-HEVC H.265

MPEG High Efficiency Video Coding was developed to achieve twice the efficiency of MPEG-4 AVC. Apart from having the potential to halve the bandwidth currently used to transmit HDTV services, it also halves the data needed to be transmitted for UHD. That means that a 4K UHD channel can fit into one DVB-T2 multiplex – the bandwidth that was used for one analog PAL TV channel.

Beyond helping to enable terrestrial, satellite and cable 4K transmissions, it is also a part of the Ultra HD Blu-ray specification.

 

See also: Display Resolution

MTBF

Mean Time Between Failure. A statistical assessment of the average time taken for something to fail – a measure of predicted reliability. The MTBF of a piece of equipment depends on the reliability of each of its components. Generally the more components, the lower the MTBF, so packing more into one integrated circuit can reduce the component count and so increase reliability. Modern digital components are highly reliable. Even complex electro-mechanical assemblies such as hard disk drives now offer MTBFs of up to a million hours; some 114 years! Note this does not mean a drive has been run for 114 years and failed just once, nor that it is expected to run for this period without failure, but it does indicate the average failure rate of many components of the same type.
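
The million-hour figure converts to years, and to an expected failure count across a population of identical units, with simple arithmetic:

```python
HOURS_PER_YEAR = 24 * 365

def mtbf_years(mtbf_hours):
    """Convert an MTBF in hours to years."""
    return mtbf_hours / HOURS_PER_YEAR

def expected_failures_per_year(mtbf_hours, population):
    """Average failures per year across many identical units."""
    return population * HOURS_PER_YEAR / mtbf_hours

print(round(mtbf_years(1_000_000)))                 # 114 years
print(expected_failures_per_year(1_000_000, 1000))  # 8.76 per year in a 1000-drive store
```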

MTF

The Modulation Transfer Function is a measure of spatial resolving power. It can refer to a medium, such as film, or a lens, or any part of the scene-to-screen chain. It is akin to frequency response in electronic images. To assess the MTF of film, it is exposed to special test images comprising sine-wave bars of successively higher frequencies. The results on the processed film are assessed by measuring its density over microscopically small areas to obtain peak-to-trough values for the different frequencies. These results should then be corrected to allow for the response of the lens, the test film itself and any D/Log E non-linearities.

In a practical film system, the film images pass through many components including the camera lens, intermediate stocks and contact printing to the projection lens. Each of these has its own MTF and the system MTF can be calculated as follows.

MTFsystem = MTF1 x MTF2 x MTF3 etc
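
The cascade above is a straightforward product, which also shows why each extra stage erodes sharpness; the component values below are illustrative.

```python
from functools import reduce

def system_mtf(*component_mtfs):
    """Overall MTF of a chain is the product of its components' MTFs."""
    return reduce(lambda a, b: a * b, component_mtfs, 1.0)

# Camera lens, camera negative, print stock, projection lens at one frequency:
print(round(system_mtf(0.9, 0.8, 0.9, 0.85), 4))  # 0.5508 - well below any one stage
```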

See also: Resolving power

Multimedia

The presentation of more than one medium. Strictly speaking TV is multimedia (if you have the sound turned up). More typically it is pictures (moving and still), sound and often text combined in an interactive environment. This implies the use of computers and the Internet. One of the first applications was in education; now it is commonly seen at home via the Internet or DVDs.

Multimedia has a wide meaning. Another example is in the production of material which is published in many forms. For example pictures from television productions can be transferred to print for listings magazines, to EPGs and to advertising. Such transfers are commonly handled through network connections.

See also: ATM, MPEG, MPEG-4, MPEG-21

Multiplex (Mux)

Generally describes a collection of communications channels bundled into one transport system. For example, voice and data co-exist on a phone line carrying conversation and Internet access. In digital television ‘a multiplex’ describes a group of video channels multiplexed into a single transmission stream occupying the space of one analog terrestrial TV channel. The term ‘Bouquet’ has also been used in this context.

MXF

The Material eXchange Format, SMPTE 377M, is aimed at the exchange of program material between file servers, video recorders, editing systems, tape streamers and digital archives, etc. It is a container, or wrapper, format that supports a variety of differently coded streams of essence (images and sound), together with metadata describing the material in the MXF file.

There are ten Operational Patterns defined by SMPTE standards. The main nine are the intersections along two axes: item complexity versus package complexity. This gives the combinations of 1) Single Item, 2) Playlist Items and 3) Edit Items against a) Single Package, b) Ganged Packages and c) Alternate Packages. By far the most common is OP-1a – Single Item, Single Package – which contains a single flat clip. The tenth operational pattern is OP-Atom, which is used by Panasonic P2.

Bridging file and streaming transfers, MXF helps move material between AAF file-based post production and streaming program replay using standard networks. This set-up extends the reliable essence and metadata pathways of both formats to reach from content creation to playout. The MXF body carries the content. It can include compressed formats such as MPEG and DV as well as uncompressed video and can contain an interleaved sequence of picture frames, each with audio and data essence plus frame-based metadata. There are other scenarios where, for example, P2 MXF files contain a single essence track with no interleaving and no frame-based metadata. Also Panasonic AVC Long-G MXF files do interleave but not per-frame; instead they interleave partitions (chunks) of a constant duration, typically about 10 seconds. It is perfectly valid for MXF files to be metadata-only and this has no bearing on the operational pattern; for example, AS-02 bundles always contain a metadata-only version MXF file that references separate essence MXF files.

See also: AAF, AMWA

Website: www.pro-mpeg.org