A ratio of light amounts, where one stop represents a x2 change – a doubling or halving of the amount of light. The operating range of film and electronic light sensors, such as CCDs and CMOS, is quoted in stops. Typically, a camera’s shutter speed and the lens’s aperture setting restrict the light arriving at the sensor/film so that the mid brightness of the required scene corresponds to the middle of the sensor’s or film’s sensitivity range.
Stops are simply the expression of a ratio, not absolute values. As they represent doubling or halving of light, they are actually powers of 2. So
1 stop = x 2
2 stops = x 4
3 stops = x 8
4 stops = x 16 etc.
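Because stops are powers of 2, the conversion between stops and light ratios is a one-line calculation. A minimal sketch, with illustrative function names:

```python
import math

def stops_to_ratio(stops: float) -> float:
    """Light ratio for a number of stops (each stop doubles the light)."""
    return 2.0 ** stops

def ratio_to_stops(ratio: float) -> float:
    """Number of stops corresponding to a given light ratio."""
    return math.log2(ratio)
```

So `stops_to_ratio(3)` gives 8, matching the table above, and fractional stops (e.g. 1/3-stop increments) fall out of the same formula.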
The F stop is a simple calculation taking focal length and aperture into account. It does not fully describe how much light reaches the sensor/film, as lenses are not 100% clear. The T stop takes the transmissive quality of the lens into account; this is affected by the glass used to make the lens and any additional anti-reflective and conditioning coatings applied to the lens elements.
Use the formula: T-stop = F-stop/√(transmission fraction), where the transmission fraction ranges from 0 (opaque) to 1 (perfectly clear). The square root arises because exposure varies with the square of the stop number.
For a perfect lens the calculation is T = F/√1, so T = F.
For a lens with 80% transmission (0.8), T = F/√0.8.
So for an F stop setting of 4 the real measure of light reaching the sensor is equivalent to 4/√0.8 ≈ 4.5.
Note that the depth of field will still be calculated from the F stop setting – only the exposure is set using the T stop.
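As a sketch, this relationship can be computed directly; `t_stop` is an illustrative name, and the function uses the standard definition T = F/√t, where t is the transmission fraction:

```python
import math

def t_stop(f_stop: float, transmission: float) -> float:
    """T-stop from an F-stop and the lens transmission fraction (0..1).

    Exposure scales as transmission / N**2, so a lens with transmission t
    at f-number N passes the same light as a perfect lens at N / sqrt(t).
    """
    if not 0 < transmission <= 1:
        raise ValueError("transmission must be in (0, 1]")
    return f_stop / math.sqrt(transmission)
```

A perfectly clear lens gives `t_stop(4, 1.0) == 4.0`; any real lens gives a T-stop number slightly larger than its F-stop.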
The amount of stereo image available to the viewer is dictated by the frame surrounding a stereoscopic image, e.g. the size of TV or projection screen. This boundary is called the Stereo Window. Depending on their parallax, objects will appear either in front, at or behind this window. IMAX has the largest window.
The process of making and presenting images using ‘left eye’ and ‘right eye’ cameras. The resulting ‘left eye’ and ‘right eye’ stereo images allow audiences to perceive depth into and out of the screen. Although the technique can add greatly to the viewing experience and is often referred to as ‘3D’, viewers cannot look around objects – as would be the case with real 3D. Stereo television and film can create a 3D illusion, but it is not real 3D.
In stereoscopy, presenting objects from the left and right eyes’ point of view in the same way that our eyes would look at them in the real world, creates the depth effect. If the left and right eye images of an object are coincident at the screen, then it appears to be at the distance of the screen. If the left and right images on the screen are crossed over, with the right image on the left and the left image on the right, then the object appears to be in front of the screen as our eyes converge on the images. If the left and right images are not crossed over but closer together than the distance between our eyes (interocular distance generally taken as 63.5 mm for the average adult), then the object appears to be behind the screen as our eyes converge less. To show an object at infinity left and right images are shown spaced by the interocular distance.
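The geometry above follows from similar triangles between the eyes and the screen plane, and can be sketched as a small calculation. This is an illustrative function, not a production stereography tool; the sign convention (positive = uncrossed, behind the screen) is an assumption made here:

```python
def screen_disparity(object_dist_m: float, screen_dist_m: float,
                     interocular_m: float = 0.0635) -> float:
    """On-screen separation (m) of left/right images for an object
    perceived at object_dist_m, viewed on a screen at screen_dist_m.

    Positive = uncrossed images (object behind the screen),
    zero     = coincident images (object at the screen),
    negative = crossed images (object in front of the screen).
    """
    return interocular_m * (object_dist_m - screen_dist_m) / object_dist_m
```

As the object distance tends to infinity the disparity tends to the interocular distance (63.5 mm), exactly as described above.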
HD video and DCI digital movies sparked development of new shooting technologies that can make live 3D TV. But generally post production is needed to correct unwanted differences between left and right cameras, and to finesse the point of view and perspective. Exhibition became far easier with digital cinema, where one projector can sequence left and right images replayed from one player. This removes the nightmare of aligning two projectors and running two films in sync and registration; even then weave, scratches and sparkles can lower the quality of presentation. Now most cinemas are 3D-capable and have a viewing system to sequence the left and right images into the correct eyes – such as Real D, Dolby or McNaughton. These require wearing glasses that are passive polarized (Real D), passive frequency-based (Dolby) or active switched (McNaughton). Live shooting and easy exhibition mean that live events can be shown on cinema screens – giving audiences a new experience and theaters a potential new revenue stream.
For television, 3D screens and viewing systems have been developed but usually require viewers to wear 3D glasses (active or passive depending on the system) to sequence the left and right images to the correct eyes. Probably the greatest domestic application has been for use in the virtual world of computer games.
See also: 3D
Websites: Dolby www.dolby.com/professional/motion_picture/solutions_d3ddc.html
A keying signal used in graphics systems – such as the Quantel Paintbox that is built into all the company’s editing systems. It can be drawn, derived from picture information, or both. It can be used to define the area of an object, obscure part or all of an object, making it transparent or partially transparent, and used to control the application of paint… and more.
In 2004 the Standard Evaluation Material (StEM) was created by the ASC (American Society of Cinematographers) and DCI to assess the quality of possible digital cinema picture compression systems and formats against the best that film can offer. It is about 25 minutes of material from multiple film formats. At that time the most widely used compression system for video was MPEG-2, and some said the StEM film was an ‘MPEG breaker’, with smoky scenes and movement that would challenge that compression system. The next year DCI recommended JPEG 2000, an entirely different compression system from the established DCT-based JPEG.
This increases the overall efficiency of a multi-channel digital television transmission multiplex by varying the bit-rate of each of its channels to take only that share of the total multiplex bit-rate it needs at any one time. The share apportioned to each channel is predicted statistically with reference to its current and recent-past demands. For example, football, generally with much action and detail (grass and crowds), would use a higher data rate than a chat show with close-ups and far less movement. The data streams for each program are monitored and their bit rates varied accordingly to fit the bit rate of the whole multiplex.
See also: Variable bit rate
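The core idea – sharing a fixed multiplex rate in proportion to each channel's predicted demand – can be sketched in a few lines. This is a minimal illustration with hypothetical names; real stat-mux controllers also enforce per-channel minimum/maximum rates and track recent-past demand history:

```python
def allocate_bitrates(demands_mbps, total_mbps):
    """Share a fixed multiplex bit-rate among channels in proportion to
    each channel's current (statistically predicted) demand in Mb/s."""
    total_demand = sum(demands_mbps)
    return [total_mbps * d / total_demand for d in demands_mbps]
```

For a 20 Mb/s multiplex carrying a fast-moving football match (demand 8) and a static chat show (demand 2), the football channel would get 16 Mb/s and the chat show 4 Mb/s.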
Changing the standard of existing television material, which may involve two processes (four if going from and to analog coded systems such as PAL and NTSC). The two main processes are format conversion, to change the spatial (horizontal and vertical) size of the pictures, and changing the vertical scan rate – the number of pictures per second. For broadcast applications this needs to be completed while retaining the maximum possible fidelity of the input content. The re-sizing process involves the relatively straightforward task of spatial interpolation – spreading the information from the original pixel array over a different pixel structure. Note that the crude method of dropping or repeating lines/pixels gives very poor results, and the detail of the interpolation process used is important for best results.
The second process is far more complex as, changing the number of frames or fields per second (temporal conversion) means creating new ones or removing some – preferably without upsetting any movement shown in the pictures, so simply repeating or dropping fields or frames will not do. For this the movement within the pictures has to be analyzed so that ‘in-between’ pictures can be synthesized. This is a very specialized area and there are highly developed techniques used on the best modern standards converters that do this very well, but never perfectly.
See also: Format (conversion), Frame-rate conversion
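The spatial interpolation step can be illustrated on a single scanline. This sketch uses simple linear interpolation between neighboring pixels; as the text notes, broadcast converters use far better (multi-tap) filters, so treat this as a toy model only:

```python
def resample_line(src, new_len):
    """Linearly interpolate one scanline of pixel values to a new length.

    A minimal sketch of spatial interpolation - far better than dropping
    or repeating pixels, but cruder than a real converter's filters.
    """
    if new_len == 1:
        return [src[0]]
    out = []
    step = (len(src) - 1) / (new_len - 1)
    for i in range(new_len):
        pos = i * step
        j = min(int(pos), len(src) - 2)   # left-hand source pixel
        frac = pos - j                    # distance into the gap
        out.append(src[j] * (1 - frac) + src[j + 1] * frac)
    return out
```

Resampling a two-pixel ramp `[0, 10]` to three pixels yields `[0, 5, 10]` – new pixel values spread across the new structure rather than duplicated.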
A digital television standard defines the picture format (pixels per line and active lines), vertical refresh rate and whether the vertical scan is interlaced or progressive. For example, European SD digital television is 720 x 576/50I, and an HD standard is 1920 x 1080/30P.
See also: Format (television)
A computer and operating system built for general-purpose use. It cannot be used on its own but must be fitted with any, or many, of the very wide range of available specific application software and additional hardware packages. For example, the same standard platform may be used for accounting, word processing and graphics but each runs from a different software applications package and may need special hardware.
The term has become somewhat confusing in that a standard platform can be anything from a PC to a supercomputer. Also, some applications are mutually exclusive: when the computer’s hardware is configured for one, it has to be re-configured to run another. It is then arguable whether this is still a standard platform or whether it has metamorphosed into a dedicated system.
Simple Network Management Protocol is the Internet standard protocol developed to manage nodes (servers, workstations, routers, switches, hubs, etc.) on IP networks. It enables network administrators to manage network performance, find and solve network problems, and plan for network growth. SNMP works by sending Protocol Data Unit (PDU) messages to different parts of a network. Agents – SNMP-compliant devices – store data about themselves in Management Information Bases (MIBs) and return this data to SNMP requesters.
Society of Motion Picture and Television Engineers. A United States organization, with international branches, which includes representatives of broadcasters, manufacturers and individuals working in the film and television industry. Within its structure it has a number of committees that make recommendations (RP 125 for example) to the ITU-R and to ANSI in the USA.
Here are just a few of the many standards and recommendations issued by SMPTE:
259M SDI for 4:2:2 and 4fsc
272M AES/EBU audio and aux data into video ancillary space
294M TV format, 720 x 483 at 59.94P Hz
305M SDTI Serial Data Transport Interface
424M 3 Gb/s signal/data serial interface
A group of transmitters all working on the same frequency within range of each other. Usually these will interfere, causing serious problems with reception. However, with DVB-T2 transmission and the TV channels all running on the same locked frequencies over all transmitters, any interference is constructive, adding to the quality of the received signal. SFNs are very efficient in their use of radio frequency spectrum as, for example, all transmissions of a group of TV channels around a country can use the same channel frequency throughout a network. The constructive interference also makes it easier to get good reception, sometimes even with a modest antenna.
Describes access on a video server where each of its realtime video connections (clients) can access any sequence of stored frames regardless of the demands of other video connections, with no copying of material needed to achieve this. Such access makes operation and control of video servers much more straightforward, and allows many independent operations to take place at the same time, even on the same video.
See also: FrameMagic, True random access
The term used to describe the simultaneous transmission of a program over more than one channel: for example one SD, one HD, a third on the Internet as well as services to mobile devices. All versions are transmitted at the same time but due to coding times and system latency they are unlikely to be synchronous.
The ratio of the wanted signal (image, sound…) information to noise, usually expressed in dB. Noise can be high frequency, making pictures look grainy or adding a hiss to sound. Digitally generated images or sounds are theoretically capable of being pure – noise-free – having an infinite signal-to-noise ratio. But for pictures, their purity may cause contouring artifacts if they are processed without enough care; a reason for Dynamic Rounding.
A rule of thumb to express the realistic signal-to-noise capability of a digital system is given by the expression:
S/N (dB) = 6N + 6
where N is the number of bits. Hence an 8-bit system has 54 dB S/N and a 10-bit system 66 dB. This would be the noise level of continuous LSB dither and would only be produced over the whole picture by digitizing a flat field (i.e. the same gray over the whole picture) set at a level to lie midway between two LSBs. If it were set exactly on a digital level, there would be no noise. Other test methods give a variety of results, mostly producing higher S/N figures.
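The rule of thumb is trivially calculable; a one-line sketch with an illustrative function name:

```python
def snr_db(bits: int) -> float:
    """Realistic S/N rule of thumb for an N-bit digital system: 6N + 6 dB."""
    return 6 * bits + 6
```

Hence `snr_db(8)` gives 54 dB and `snr_db(10)` gives 66 dB, matching the figures above.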
See also: Contouring, Decibel, Dither, Dynamic Rounding, MPEG-2
In 1948, C. E. Shannon’s article ‘A Mathematical Theory of Communication’ established Information Theory, which allows determination of the theoretical limit of any channel’s information-carrying capacity. Information Theory made possible the development of digital systems; without it, much of modern communications, including the Internet, would not exist.
A storage system that provides audio and video storage for a network of clients. Those used in professional and broadcast applications are based on hard disk storage. Aside from those used for video on demand (VOD), video servers are applied in three areas of television operations: transmission, post production and news. Compared to general-purpose file servers, video servers must handle far more data, their files are larger and content must be delivered continuously.
There is no general specification for video servers and so the performance between models varies greatly according to storage capacity, number of realtime video channels, protection level (RAID), compression codec and ratio, and speed of access to stored material – the latter having a profound influence.
Store sizes are large, typically from about 500 GB up to many terabytes. Operation depends on connected devices: edit suites, automation systems, secondary servers, etc. The effectiveness of the server’s remote control and video networking is vital to success.
A storage system that provides data files to all connected users of a local network. Typically the file server is a computer with large disk storage which is able to record or send files as requested by the other connected (client) computers, the file server often appearing as another disk on their systems.
The data files are typically around a few kB in size and are expected to be delivered within moments of request.
Video and audio editing that takes place within a server rather than in a workstation.
See also: In-server editing
Serial Digital Interface carries uncompressed video, multiple tracks of embedded audio and ancillary data usually over the ubiquitous 75-ohm coax cable, terminated in a BNC connector. As the demands of television have grown so SDI has risen to the challenge to continue providing a reliable plug-and-play connection. Today there are many types to fit with the demands for bigger and faster video formats.
SDI SMPTE 259M – for SD 4:2:2 digital television, is based on a 270 Mb/s transfer rate. This is a 10-bit, scrambled, polarity-independent interface, with common scrambling for both component ITU-R BT.601 and composite digital video and four groups each of four channels of embedded digital audio. Most broadcast video equipment includes SDI which greatly simplifies its installation and signal distribution. It can transmit the signal up to 350 meters (depending on cable quality and connected equipment, specifically distribution amplifiers and routers).
HD-SDI, standardized as SMPTE 292M, is for 4:2:2 HD television. The serial bit-stream runs at 1.485 Gb/s to carry up to 10-bit Y,Cr,Cb component video as well as embedded audio and ancillary data. The interface is also specified for fiber for distances up to 2 km.
3G-SDI (SMPTE 424M) operates at 2.97 Gb/s, twice the clock rate of HD-SDI, and is designed to carry high-bandwidth HD television such as 1080/50P, 1080/60P and HD RGB, as well as 2K DI images.
6G-SDI (not yet standardized) operates at 5.94 Gb/s and can carry 4K at up to 30P.
12G-SDI (not yet standardized) operates at 11.88 Gb/s and can carry 4K at up to a 60P frame rate.
The higher bit rates work only over shorter lengths of coax; fiber optic versions can be used for the larger/faster frame-rate formats over longer distances.
See also: Dual link, Embedded audio, UHDTV, SDTV, SMPTE
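A rough payload check shows why each format needs its interface generation. This sketch counts only active-picture data for 4:2:2 video (an assumption made here for simplicity – it ignores blanking and ancillary data, which is why real SDI link rates are higher):

```python
def active_payload_gbps(width, height, fps, bits=10, samples_per_pixel=2):
    """Approximate active-picture payload for 4:2:2 video in Gb/s.

    4:2:2 averages two samples per pixel: one luma plus an alternating
    colour-difference sample. Blanking and ancillary data are ignored.
    """
    return width * height * fps * samples_per_pixel * bits / 1e9
```

For 1080/50P 10-bit 4:2:2 this gives about 2.07 Gb/s of active payload – more than HD-SDI's 1.485 Gb/s total, which is why that format calls for the 2.97 Gb/s 3G-SDI link.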
The time taken for the read/write heads of a disk drive to be positioned over a required track. Average seek time is the time to reach any track from the center track. Maximum seek/positioning time is the time to reach any track from any track. A high-performance modern hard disk offers around 4 ms average seek time and typically twice that for the maximum. Minimum seek time to adjacent tracks is as low as 0.2 ms for read, 0.4 ms for write. These times are critical to disk performance, especially when operating with the very high data rates associated with video and digital film. On a fragmented disk, many long seeks can degrade performance, slowing access to data.
Solid State Drives also have seek times, and their performance can degrade with fragmentation. However SSD technology has very much lower seek times, and generally provides very speedy performance.
See: Disk drives, FrameMagic
Surface-conduction Electron-emitter Display technology was gaining attention in the mid-2000s. It is a mix of old and new, coupling Cathode Ray Tube (CRT) and LCD technologies. SED screens comprise millions of minute electron emitters – hence the CRT connection. The screens are thin and many thought the technology would take over a large part of the screen market, but it faded in 2009. All SED and done!
A non-volatile memory card format that is widely used in portable / hand-held devices, including digital cameras, tablet computers, smart phones, etc. SD cards are specified with speeds described in multiples of the standard CD data speed – 150 kB/s, and capacity.
The original SDSC (Standard Capacity) cards offer up to 2 GB of storage and a bus speed of 25 MB/s. Next up, SDHC (High Capacity) provides from 2 GB to 32 GB of storage and support for the FAT32 file system (as used on many PCs). SDXC, introduced in 2009, supports capacities beyond 32 GB up to 2 TB and a maximum data speed of 300 MB/s. For yet more performance there is now the Ultra High Speed (UHS) bus. UHS-I can have a clock speed of 100 MHz, handling 50 MB/s in 4-bit mode. UHS104 has a 208 MHz clock and can transfer 104 MB/s. UHS-II raises the transfer rate to a possible 156 MB/s full duplex (312 MB/s half duplex).
Currently the highest capacity on the market is a SDXC from SanDisk, offering 512 GB. This was introduced at the IBC 2014 trade show where many of the visitors and exhibitors are constantly seeking higher capacities and speeds to handle ever larger TV formats and higher frame rates.
Markings on the cards indicate their performance. A ‘C’ refers to a speed Class: the number inside the big C is the minimum sustained transfer speed in MB/s; typically 2, 4, 6, 8 and 10 are used. Above that you will see a ‘U’ on the card: a ‘1’ inside the U denotes UHS Speed Class 1, a minimum of 10 MB/s, and a ‘3’ denotes UHS Speed Class 3, 30 MB/s.
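Decoding the markings is a simple lookup; a sketch with a hypothetical function name, covering only the Speed Class ('C') and UHS Speed Class ('U') markings discussed here:

```python
def min_write_speed_mbps(marking: str) -> int:
    """Minimum sustained transfer speed (MB/s) implied by an SD card marking.

    'C2'..'C10' are the original Speed Classes (the number is the MB/s);
    'U1' and 'U3' are UHS Speed Classes (1 -> 10 MB/s, 3 -> 30 MB/s).
    """
    if marking.startswith("C"):
        return int(marking[1:])
    if marking.startswith("U"):
        return int(marking[1:]) * 10
    raise ValueError(f"unknown marking: {marking}")
```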
SD cards have transformed video storage in both amateur and professional video cameras. Panasonic was an early adopter with its professional camcorders using P2 cards. Each card includes SD chips, a microcomputer and a RAID controller. There are no moving parts and no noise; the cards consume very little power and are compact and robust – a long way from tape and disk-based recording. P2 cards can read at 8x realtime for quick transfers to editing equipment, and can be used as edit stores in themselves.
See also: Flash memory, ING, P2, RAM
Website: aframe.com/blog, www.sdcard.org
Primary color correction, or grading, is applied to the whole image. Secondary correction is applied only to selected areas of the image, the area being defined by a pattern generator (e.g. a circle or rectangle), by curved lines or derived from the object itself using chromakey techniques or any combination of all these. This way, for example, the color of a car in a clip could be changed from say, red to green, while the rest of every image is unchanged.
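The essence of secondary correction – apply a change only inside a defined region – can be sketched as a keyed operation. This is a toy illustration with hypothetical names; real graders work on keys, shapes and curves with soft edges, not a hard per-pixel predicate:

```python
def secondary_correct(pixels, is_selected, correct):
    """Apply a correction only to selected pixels.

    pixels: list of (r, g, b) tuples; is_selected: predicate standing in
    for the key/shape/curve defining the region; correct: per-pixel change.
    Unselected pixels pass through untouched, as in primary-vs-secondary
    grading described above.
    """
    return [correct(p) if is_selected(p) else p for p in pixels]

# e.g. turn strongly red pixels (the 'red car') green, leave the rest alone:
# secondary_correct(frame,
#                   lambda p: p[0] > 200 and p[1] < 80 and p[2] < 80,
#                   lambda p: (p[1], p[0], p[2]))
```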
Standard Definition Television. Digital television systems that operate using standard definition video formats, i.e. 720 x 480/59.94I or 720 x 576/50I. Both these may carry 4:3 or 16:9 images, and in all cases, the pixels are not square. All HDTV and UHDTV digital standards describe square pixels.
Serial Digital Transport Interface (SMPTE 305M). Based on the 270 Mb/s standard definition SDI, this provides realtime streaming transfers. It does not define the format of the signals carried but brings the possibility to create a number of packetized data formats for broadcast use. There are direct mappings for SDTI to carry Sony SX, HDCAM, DV-DIFF (DVCAM, DVCPRO 25/50, Digital-S) and MPEG TS. There is also an HD version. Standardized as SMPTE 348M this provides a 1.5 Gb/s data link.
Software Developers Kit. Typically a software and documentation package to facilitate the development of applications to run on a given operating system or other application. It provides another layer on top of an API, often including shortcuts and pre-built routines to make development easier and final operation faster.
The Small Computer System Interface started as a high data-rate, general-purpose parallel interface, introduced in 1979, allowing up to eight devices to be connected to one bus (now 16 for Wide SCSI). This could comprise a controller and up to seven disks or devices of different sorts – hard disks, optical disks, tape drives, scanners, etc. – and may be shared between several computers.
Since then SCSI has hugely increased in performance, but it is now used mainly on high-performance workstations and RAIDs on servers, while lower-cost interfaces such as USB 2 and IEEE 1394 connect external devices and SATA is used for many hard disks.
The original SCSI specified a cabling standard (50-way, which had to be kept short), a protocol for sending and receiving commands, and their formats. It is intended as a device-independent interface, so the host computer needs no details about the peripherals it controls. SCSI’s continued development has resulted in ever faster data transfer rates. Currently 16GFC (Fibre Channel) has a maximum transfer rate of 1600 MB/s (12.6 Gb/s).
There are many other SCSI interfaces besides Fibre Channel. iSCSI (Internet Small Computer System Interface) can run over any physical transport capable of transporting Internet Protocol (IP). This gets much support as developments in Ethernet outpace those in FC. The performance of this though is network-dependent.
Serial SCSI variants – SSA (Serial Storage Architecture), FC-AL, IEEE 1394 and Serial Attached SCSI (SAS) – break away from parallel cabling to offer data transfers currently up to 1200 MB/s. SAS is popular with many hard disk drives.
See also: Disk drives
Replay of audio tracks at a speed and pitch corresponding to jog speed – as heard with analog audio tape ‘scrubbing’ backwards and forwards past an audio replay head. This feature, which is natural for analog fixed-head recorders, may be provided on a digital system recording on disks to help set up cues.
A collection of tables and constraints that describe the structure of a database. It provides a level of security as no one else can interpret the stored database without the schema; it is just a collection of figures. The schema organizes the database to allow scalability for expansion and defines efficient operation to suit a particular application.
Serial ATA (Advanced Technology Attachment) is designed to transfer data between disk drives (hard and optical) and computer hardware, and is the successor to ATA. SATA adapters and devices communicate over a high-speed serial link, originally specified in SATA I at 1.5 Gb/s, then SATA 2 at 3 Gb/s and SATA 3 at 6 Gb/s; the latest SATA 3.2 offers a transport speed of 16 Gb/s.
The serial interface means the connector is smaller (than ATA) and can run faster, because fast parallel data starts to suffer from skewing with some channels of the parallel connection being faster/slower than others. Serial data cannot skew. SATA does not just serialize ATA. For example, SATA 2 added native command queueing, originally a feature of SCSI, that allows handling multiple pending transactions rather than just one at a time. Also disk drives can organize the transactions and so offer faster operation.
Storage Area Network is a well-established method of providing shared video storage and can offer platform-independent storage that may be accessed from, say, Windows, Linux and Mac OS workstations. It allows applications direct access to shared storage by cutting out the usual client-server ‘middle men’ to provide improved workflow and better work sharing on a common store.
The design recognizes that moving large amounts of data (video) is inconsistent with normal-network general-data traffic. Therefore it forms a separate network to connect data-hungry workstations to a large, fast array of disks. Although any network technology could be used, Fibre Channel predominates with current speeds of up to 14 Gb/s. SANs are scalable but additions may be complex to implement. Currently, expansion is ultimately limited by architecture and management considerations. However, in practice it can be difficult to sustain multiple high bandwidth (e.g. for HD or 4K) streams from a SAN. Quantel’s Genetic Engineering is designed to solve this problem and is capable of playing back multiple 2K and 4K streams simultaneously.
See also: FAN, Fibre Channel, NAS
A standard for sampling analog waveforms to convert them into digital data. The official sampling standard for 625/50 and 525/60 television is ITU-R BT.601. ITU-R BT.709 and SMPTE 274M specify sampling for HD formats. They are generally written as the number of pixels per line x number of lines per frame/vertical refresh rate (in Hz) progressive or interlaced (P or I), e.g. 1920 x 1080/50I. Sometimes the pixel count of the lines is omitted (but understood), making the example 1080/50I.
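The notation described above is regular enough to parse mechanically. A sketch with a hypothetical function name; it handles only the two forms shown here (full `1920 x 1080/50I` and the abbreviated `1080/50I`):

```python
def parse_tv_format(fmt: str):
    """Parse notation like '1920 x 1080/50I' into (width, height, rate, scan).

    If the pixel count per line is omitted (e.g. '1080/50I'),
    width is returned as None. Illustrative only.
    """
    size, _, rate = fmt.partition("/")
    scan = rate[-1].upper()      # 'I' (interlaced) or 'P' (progressive)
    hz = float(rate[:-1])        # vertical refresh rate in Hz
    if "x" in size:
        w, h = (int(s) for s in size.split("x"))
    else:
        w, h = None, int(size)
    return w, h, hz, scan
```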
The area of picture into which it is considered safe to place material, graphics, text or action, so that it will be viewable when received at home. Initially this was necessary with 4:3 aspect ratio CRT screens as they were always overscanned to avoid showing the ‘black’ that surrounded the active picture. Typically 5% in from the edges was considered safe. Today most TV screens are flat panels, and the pictures are 16:9. There are controls for zooming or changing the aspect ratio, but usually all the active picture is displayed on the screen.
See also: Aspect ratio (of pictures)
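The CRT-era 5% rule translates directly into a rectangle calculation. A minimal sketch with an illustrative name; the rounding to whole pixels is an assumption made here:

```python
def safe_area(width, height, margin=0.05):
    """Return (x, y, w, h) of the area judged safe for graphics and action,
    inset by `margin` from each edge (5% was the CRT-era rule of thumb)."""
    x = round(width * margin)
    y = round(height * margin)
    return x, y, width - 2 * x, height - 2 * y
```

For a 720 x 576 SD frame this leaves a 648 x 518 safe rectangle.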
A system for compressing data. The principle is to store a pixel value along with a message detailing the number of adjacent pixels with that same value. This gives a very efficient way of storing large areas of flat color and text but is not so efficient with pictures from a camera, where the random nature of the information, including noise, may actually mean that more data is produced than was needed for the original picture.
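The principle – a value plus a count of adjacent identical values – is run-length encoding, sketched here with illustrative function names:

```python
def rle_encode(values):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

A flat stretch like `[0, 0, 0, 7, 7]` compresses to two pairs, but noisy camera data where neighbors rarely repeat produces one pair per pixel – more data than the original, exactly the weakness noted above.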
Real Soon Now. A phrase coined by Jerry Pournelle to satirize the tendency in the computer industry to discuss (and even offer for sale) things that are not actually yet available.
Not to be confused with 4:2:2 sampling or 422P MPEG, this is a standard for serial data communications defined by EIA standard RS-422. It uses balanced (differential) signaling with a twisted pair of conductors per channel, two pairs for bi-directional operation. It is more costly than RS-232 but has a high level of immunity to interference and can operate over reasonably long distances: up to 300 m/1000 ft. RS-422 is widely used for control links around production and post areas for a wide range of equipment including mixers, lighting, etc.
A standard for serial data communications defined by EIA standard RS-232 that is designed for short distances only: up to 10 meters. It uses single-ended signaling with one conductor per channel plus a common ground, which is relatively cheap and easy to arrange but susceptible to interference – hence the distance limitation.
The practice of using frames of live footage as reference for painting animated sequences. Today, the meaning has extended to cover a whole range of manual retouching techniques. While the painting will always depend on the skill of the artist, modern graphics equipment integrated with a video disk or RAM store makes rotoscoping, or any graphical treatment of video frames, relatively quick and easy. This has led to many new designs and looks appearing on television as well as more mundane practices such as image repair.
The abbreviation for the Red, Green and Blue signals, the primary colors of television. Cameras have red, green and blue receptors, the TV screen has red, green and blue phosphors or LEDs. RGB is digitized with 4:4:4 sampling which generates 50% more data than 4:2:2.
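The data-rate comparison follows from the sampling notation itself: the three numbers give the relative sample counts, so samples per pixel is their sum divided by the first. A sketch with an illustrative function name:

```python
def samples_per_pixel(scheme: str) -> float:
    """Average samples per pixel for a Y:Cb:Cr-style sampling scheme."""
    a, b, c = (int(s) for s in scheme.split(":"))
    return (a + b + c) / a   # e.g. 4:4:4 -> 3.0, 4:2:2 -> 2.0
```

4:4:4 carries 3.0 samples per pixel against 2.0 for 4:2:2 – the 50% extra data mentioned above.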
Return control is needed for interactive television. Typically it needs to offer only quite a low data rate but with little latency, as action should be followed as soon as possible by reaction. DVB includes methods for return paths for cable (DVB-RCC), satellite (DVB-RCS) and terrestrial (DVB-RCT) services. While the cable and terrestrial systems are devised to operate economically for individual viewers, the satellite solution is more appropriate for head-ends or groups – due to cost. Interestingly, DVB-RCS has been adopted by many companies operating in the general telecoms world.
See also: WiMax
Hiding or removing the defects acquired by old (archive) material and content. Digital technology has enabled many new and easy-to-use procedures to provide fast and affordable restoration. These range from fully automated systems – that depend on recognizing generic faults and treating them – to hands-on operations that offer access to appropriate toolsets – often presented as ‘brushes’.
These have been applied to both television and film, and have succeeded in making many old archives available for the ever-hungry TV channels.
The resolving power of an imaging medium is a measure of its maximum spatial resolution. For digital media the pixel count dictates the maximum possible resolving power. For film it is assessed by exposing it to special test images comprising sine wave bars of successively higher frequencies. The results on the processed film are then judged by a panel of viewers, making the result somewhat subjective.
See also: MTF
A term to describe equipment that can operate at more than one resolution, though not necessarily at the same time. Historically, most dedicated television equipment was designed to operate at a single resolution although some equipment, especially that using the ITU-R BT.601 standard, could switch between the specific formats and aspect ratios of 525/60 and 625/50. More recently, the advent of the multiple formats of HDTV and UHD has encouraged new equipment able to operate with many, or all, of the video standards.
In today’s converged media world the gamut of digital ‘video’ formats now includes motion picture formats up to 4K and mobile TV operating in many formats down to 320×240, or 176×144 on some phones.
By their nature computers can handle files of almost any size so, when used for images, they can be termed ‘resolution independent’. However, as larger images require more processing, more storage and more bandwidth so, for a given platform, the speed of operation will slow as the resolution increases.
Other considerations when changing between video image resolutions may include the need to reformat or partition disks, check for sufficient RAM, allow extra time for RAM/disk caching and to select an appropriate display.
See also: Resolution co-existence
Term coined by Quantel to describe equipment able to operate with several moving image formats at the same time. For example, an editing system able to store and operate with any DTV production format material, making transitions between shots, composing layers originating from more than one format (resolution) and outputting in any chosen format.
Good equipment will be designed for fast operation at the largest specified TV format, e.g. 4K (3840 x 2160 UHD), and so may operate faster with smaller images, but also may be able to handle larger images.
See also: Resolution independent
A measure of the finest detail that can be seen, or resolved, in a reproduced image. Whilst it is influenced by the number of pixels in the display (e.g. high definition 1920 x 1080, broadcast SDTV 720 x 576 or 720 x 480), note that the pixel count does not define the overall resolution but merely that of one part of the equipment chain. The quality of lenses, picture displays, edit systems and film scanners, etc., in fact any element in the program stream (from scene to screen), must be taken into account in assessing overall system resolution. Like a chain, it is only as good as its weakest link.
For decades television was only available in one analogue resolution in any one country. That improved with digital standard definition services (SDTV). Then HD at 1920 x 1080, with four times the picture area, doubled the horizontal and vertical resolution. More recently 4K UHD doubles HD with 3840 x 2160 sized images. 8K UHD doubles those figures again.
See also: Concatenation, MTF, Viewing distance
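The format arithmetic in the entry above can be checked with a short sketch (illustrative Python only; the pixel figures are those quoted in the entry):

```python
# Pixel counts for the formats quoted in the entry above (width, height).
formats = {
    "HD":     (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

def pixel_count(name):
    width, height = formats[name]
    return width * height

# Doubling both dimensions quadruples the pixel count at each step.
ratio_hd_to_4k = pixel_count("4K UHD") / pixel_count("HD")
print(ratio_hd_to_4k)  # 4.0
```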
Data that has not been processed for use. It is often written as ‘RAW’, which may make you think it is an acronym. It is not.
Raw data usually applies to the output of digital cinematography cameras that can generally deliver images that include the full brightness range that its imager can extract from a scene. The data has not been tampered with, not processed for color or to suit any target viewing conditions, such as cinema or gamma corrected for home TV viewing. The raw data is as the imager saw it with debayering applied where needed.
Random Access Memory – cost-effective memory chips (integrated circuits) used extensively in computers to give fast access (compared to disks, tapes etc. – RAM has no moving parts) and very high data rates. RAM is available in several different forms and has been subjected to Moore’s Law for over three decades. When RAM chips first arrived they had a huge impact and, as they have grown in capacity and speed while unit price remains reasonably stable, their applications and importance have multiplied.
DRAM – Dynamic RAM. DRAM chips provide high density memories which must be powered and clocked to retain data. DRAM offers very speedy access to data and is vital for fast computer performance, and there is a large industry dedicated to the development and manufacture of ever faster and bigger DRAM chips. Synchronous DRAM (SDRAM) is faster than DRAM, and now DDR SDRAM (Double Data Rate) technology is increasing the performance of many of the newer PC and graphics products. Currently DDR3 clocking at 100 MHz can achieve data transfer rates of 6400 MB/s and allows storage capacity of 8 Gb per chip. DDR4 is on the way. Along with these performance breakthroughs, power consumption is being reduced.
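As a rough plausibility check of the DDR3 figure above (a sketch assuming a standard 64-bit module data bus and DDR3's 8n prefetch; not a datasheet calculation):

```python
# DDR3 with a 100 MHz core clock transfers 8 bits per cell per clock
# (8n prefetch combined with double-data-rate signalling), i.e. 800 MT/s.
memory_clock_mhz = 100          # DDR3-800 core clock
transfers_per_clock = 8         # 8n prefetch
bus_width_bytes = 64 // 8       # assumed 64-bit module data bus

rate_mb_s = memory_clock_mhz * transfers_per_clock * bus_width_bytes
print(rate_mb_s)  # 6400 (MB/s), matching the figure quoted in the entry
```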
There are many more variations and versions of RAM to suit specific applications.
SRAM – Static RAM memory chips in general behave like dynamic RAM (DRAM) except that static RAMs retain data in a six-transistor cell needing only power to operate (DRAMs require clocks as well). Because of this, currently available capacity is lower than DRAM – and costs are higher, but speed is also greater. Note that it is flash memory, rather than SRAM, that has become increasingly popular as an alternative to hard disk drives in computers to achieve faster operation.
See also: Flash Memory
No redundancy – benefits only of speed and capacity – generated by combining a number of disks.
Complete mirror system – two sets of disks both reading and writing the same data. This has the benefits of level 0 plus the security of full redundancy – but at twice the cost. Some performance advantage can be gained in read because only one copy need be read so two reads can be occurring simultaneously.
An array of nine disks. Each byte is recorded with one bit on each of eight disks and a parity bit recorded to the ninth. This level is rarely, if ever, used.
An array of n+1 disks recording 512-byte sectors on each of the n disks to create n x 512 ‘super sectors’ + 1 x 512 parity sector on the additional disk, which is used to check the data. The minimum unit of transfer is a whole super sector. This is most suitable for systems in which large amounts of sequential data are transferred – such as for audio and video. For these it is the most efficient RAID level since it is never necessary to read/modify/write the parity block. It is less suitable for database types of access in which small amounts of data need to be transferred at random.
As level 3 but individual blocks can be transferred. When data is written it is necessary to read the old data and parity blocks before writing the new data as well as the updated parity block, which reduces performance.
As level 4, but the role of the parity disk is rotated for each block. In level 4 the parity disk receives excessive load for writes and no load for reads. In level 5 the load is balanced across the disks.
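The parity scheme behind levels 3, 4 and 5 is a simple XOR across the data disks. This minimal sketch (illustrative only, not how a real controller is built) shows how a failed disk's contents can be reconstructed:

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks, as used by RAID 3/4/5."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three 'disks' worth of data plus one computed parity block.
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
p = parity([d0, d1, d2])

# If disk 1 fails, its contents are recovered by XOR-ing the surviving
# data disks with the parity block.
recovered = parity([d0, d2, p])
print(recovered == d1)  # True
```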
A RAID system implemented by low level software in the host system instead of a dedicated RAID controller. While saving on hardware, operation consumes some of the host’s power.
Redundant Array of Independent Disks. A grouping of standard disk drives together with a RAID controller to create storage that acts as one disk to provide performance beyond that available from individual drives. Primarily designed for operation with computers, RAIDs can offer very high capacities, fast data transfer rates and much increased reliability of data. The latter can be achieved through disk redundancy so that disk errors or failures can be detected and corrected.
A series of RAID configurations is defined by levels and, being designed by computer people, they start counting from zero. Different levels are suited to different applications.
QuickTime is a multimedia framework developed by Apple Inc. capable of handling various formats of digital video, media clips, sound, text, animation, music, and several types of interactive panoramic images. Running on either OSX or Windows operating systems, QuickTime supports software packages including iTunes, QuickTime Player and Safari.
See also: ProRes
The process of sampling an analog signal to provide packets of digital information to represent the original analog signal.
See also: Into digits (Tutorial 1)
Factor applied to DCT coefficients as a part of the process of achieving a required amount of video compression. The coefficients relating to the least noticeable aspects of picture detail, e.g. high frequencies with low amplitude, are progressively reduced so that the final data will fit into the specified data file space. This space is often fixed and relates directly to the quoted compression ratio for I-frame only schemes such as DV. Note that the required quantization will vary according to scene content. Given that too much data would cause problems by overflowing the allotted capacity of the file, compression schemes are cautious and designed to undershoot the file limit. The extent to which the files are filled is a measure of the quality of a compression scheme – a reason why the quoted ‘compression ratio’ does not tell the whole story.
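As a minimal illustration of the principle (the coefficient values and quantizer steps here are invented for the example, not taken from any real codec's tables):

```python
# Each DCT coefficient is divided by a quantizer step and rounded;
# coarser steps at higher frequencies discard low-amplitude detail.
coeffs = [240, -96, 31, 12, 5, 2, 1, 0]     # example zig-zag ordered values
steps  = [16, 16, 24, 32, 48, 64, 96, 128]  # coarser steps at high frequency

quantized = [round(c / s) for c, s in zip(coeffs, steps)]
print(quantized)  # [15, -6, 1, 0, 0, 0, 0, 0] -> the run of zeros codes compactly
```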
Apt name for the world leaders in digital television equipment – abbreviated from QUANtized TELevision. Quantel has over 40 years’ experience of digital television techniques – significantly more than any other manufacturer.
Hardware and software built for a specific task (e.g. a DVE), not general purpose (computer). Purpose-built hardware was able to provide much improved processing speeds, between 10 and 100 fold, over systems using the same technology applied to general-purpose architecture and operating system software. This became important in image processing where tasks require a great deal of power, especially as the demands increase in proportion to the picture area – significant for working with HDTV and UHD.
However, as standard/general-purpose platforms continue to become ever more powerful, it can make sense to swap out some purpose-built hardware, which tends to be more costly, for software solutions. This ability to swap is part of Quantel’s generationQ architecture, which uses GPUs to provide accelerated image processing.
Horizontal motion that can be interpreted as binocular depth. A stereo effect which is produced when 2D images moving laterally on a single plane are viewed at slightly different times by each eye.
If a stereoscopic signal is reversed (e.g. each eye is being fed the opposite eye signal) a strange ‘punched in’ effect appears. This is also referred to as inverted stereo or reversed stereo.
A range of lossy video compression systems developed by Apple Inc. Designed for editing, it is described as ‘intermediate’ – between uncompressed and more highly compressed delivery quality. ProRes 422 can work with formats from SD to 5K using 10-bit, I-frame only, variable bit rate coding. For HD 60i there are three qualities: Normal 147 Mb/s, High-Quality 220 Mb/s and ProRes (LT) 100 Mb/s. There is also ProRes Proxy at 36 Mb/s for offline. For SD there are Normal 42 Mb/s and High-Quality 63 Mb/s.
ProRes 4444 adds support for a 16-bit alpha (key) channel, and video sampling up to 12-bit.
See also: QuickTime
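A quick storage estimate from the bit rates quoted above (a sketch using decimal megabytes; real file sizes vary because the coding is variable bit rate):

```python
# One hour of HD ProRes 422 High-Quality at the quoted 220 Mb/s.
bit_rate_mb_s = 220           # megabits per second
seconds = 60 * 60             # one hour

gigabytes = bit_rate_mb_s * seconds / 8 / 1000   # Mb -> MB -> GB (decimal)
print(round(gigabytes))  # 99 GB per hour
```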
Digital projectors input digital images and project them onto cinema-sized screens. Huge advances in this technology in recent years have been one of the driving forces behind digital cinema. For post production or DI, many houses offer big screens for customers to see what the final cinema experience will look like. Among the prominent projection technologies in the large projector area are D-ILA from JVC, SXRD from Sony and DLP from Texas Instruments. These projectors work by shining the projector light at reflective chips that display the image, so modulating the light that is reflected towards the projector’s lens. Movies are mostly made using 4K resolution, delivering great detail without film’s scratches, dirt and weave, so the audience is treated to consistently high quality results. Many digital cinema players and projectors can deliver 3D.
There is a wide range of digital projectors now available that are used for business presentations as well as for home cinema.
See also: Color management, Digital Cinema, DLP, D-ILA, SXRD
Method of scanning lines down a screen where all the lines of a picture are displayed in one continuous vertical scan (progression): 1, 2, 3, 4… etc. There are no fields or half pictures as with interlaced scans. Progressive scanning has become more common as it is used with computer displays and all panel displays – LCD, LED and plasma – and is used in television formats, e.g. 1080/24P, 720/60P, 1080/30P. The ‘P’ denotes progressive. A high picture refresh rate is required to give good movement portrayal, such as for fast action sports and camera pans. Higher frame rates such as 1080/50P and 1080/60P are in use; they require infrastructure capable of 3 Gb/s, known as 3G SDI. With ITU-R Recommendation BT.2020 for UHDTV standards describing only progressive scans for its 4K and 8K formats, it looks as though the future is progressive.
It has been noted that rapid movement displayed on the larger screens needed to fully appreciate the contents of the big pictures, requires higher frame rates to smoothly display movement. Thus BT.2020 includes frame rates extending up to 120 f/s. This presents considerable challenges for handling the resulting vast data.
See also: 24P, Interlace, Interlace Factor
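A back-of-envelope check of why 1080/50P and 1080/60P need 3G SDI (assuming 10-bit 4:2:2 sampling, i.e. 20 bits per pixel of active picture; a real SDI payload also carries blanking and ancillary data):

```python
# Active-picture data rate for 1080/60P at 10-bit 4:2:2
# (10 bits of luma plus 10 bits of shared color difference per pixel).
width, height, fps, bits_per_pixel = 1920, 1080, 60, 20

gb_s = width * height * fps * bits_per_pixel / 1e9
print(round(gb_s, 2))  # ~2.49 Gb/s: beyond 1.485 Gb/s HD-SDI, hence 3G
```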
Sony’s name for their blue laser disk technology used in XDCAM products and for data recording. It has many similarities to the Blu-ray Disc: with CD/DVD dimensions, it is housed in a cartridge, weighs 90 g and offers 23.3 GB storage on one side with a data rate of 9 MB/s (72 Mb/s). It can support 70 minutes of 50 Mb/s MPEG IMX and faster-than-realtime transfers. Good for around 10,000 re-record cycles, it is more compact, faster and more versatile than tape.
Further development has produced a dual-layer model with 50 GB, then a triple with 100 GB and a quad version with 128 GB, providing up to four hours of recording.
See also: Optical disks
The illumination used to expose film in a processing laboratory. ‘White’ light is passed through red, blue and green filters so that the exposure to each can be individually controlled. Film is contact printed, placing the new film stock against the processed film that carries the images. The amount of light can be varied to provide the required exposure to show more detail in the highlights or the shadows, or to keep to the mid-range of the scene brightness. Printing an overexposed negative requires higher printer-light values; an underexposed one, lower values. A change of 1 in the value represents 1/12th of a stop adjustment in exposure. Differential adjustments of the values provide basic color correction (timing). The values for the lights are recorded as grading (timing) numbers onto disk or paper tape.
See also: Color timing, Film Basics (Tutorial 2), One-light pass, Timing
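Since one printer-light point represents 1/12th of a stop, a change of n points scales exposure by 2 to the power n/12 (a small illustrative sketch, not lab software):

```python
def exposure_ratio(points):
    """Exposure change for a printer-light adjustment of `points`,
    given that one point is 1/12 of a stop (a stop being a x2 change)."""
    return 2 ** (points / 12)

print(round(exposure_ratio(12), 3))  # 2.0   -> 12 points = one full stop
print(round(exposure_ratio(1), 3))   # 1.059 -> a single point
```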
Plain Old Telephone Service. This is the analog connection that many people still speak on, or connect their modems or fax machines to. Its applications have gone far beyond its initial aims.
Software, usually from a third party, that brings more functions to a computer application. For post production this may add highly specialized aspects to digital effects.
Programmable Logic Device. This is a family of devices that has included PROMs (Programmable Read Only Memories), PLAs (Programmable Logic Arrays) and PALs (Programmable Array Logic). Today FPGAs (Field Programmable Gate Arrays) are the main interest. These range in size and complexity from a few dozen up to millions of gates to provide a compact and efficient means of implementing complex non-standard logic functions. They are widely used in Quantel equipment where FPGAs also offer a fast track for the implementation of new improvements and ideas.
See also: Moore’s Law
A shortened version of ‘Picture cell’ or ‘Picture element’. The name given to one sample of picture information. Pixel can refer to an individual sample of R, G, B, luminance or chrominance, or sometimes to a collection of such samples if they are co-sited and together produce one picture element.
See also: Aspect ratio – of pixels, Sub-pixel
Video and film effects that are constructed in such a way that they look totally real, and not synthetic, are referred to as photo-real effects. This use of effects has rapidly increased and so changed the way many productions are shot and post produced – leading to lower budgets and better looking results.
Achieving photo-real results requires careful planning from before the shoot and computer imagery through to compositing in post production. Excellence in keying, so there are no telltale blue screen halos or color spill, is among the many techniques required for successful results.
See also: Compositing, Digital keying
Short for perforations. It is a way to describe some information about the format of images on 35mm film by how many of the perforations, or sprocket holes, are used per image. For example, Full Frame is 4 perf.
This data interface is widely available on PCs. Unlike its predecessor, PCI, which used parallel connections, PCIe provides fast serial connections which are more reliable for handling fast data. A range of cards offer 1, 4, 8 or 16 full duplex (same speed both ways) data ‘lanes’. These are written as 1x, 4x, etc. A larger 32-lane card is available but rarely used. The cards are hot swappable.
PCIe is commonly used to connect storage such as optical and hard discs and SSD drives. The available data speed has increased over three versions so v3.0 offers a data speed up to 985 MB/s per lane.
See also: www.hardwaresecrets.com
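The per-lane figure quoted above follows from PCIe 3.0's 8 GT/s signalling rate and its 128b/130b line coding (a sketch of the arithmetic, not a spec-accurate model):

```python
# PCIe 3.0: 8 GT/s per lane, with 128 of every 130 transferred bits
# carrying payload (128b/130b line coding).
raw_transfers_per_s = 8e9
coding_efficiency = 128 / 130

mb_s = raw_transfers_per_s * coding_efficiency / 8 / 1e6  # bits -> bytes -> MB
print(round(mb_s))  # 985 MB/s per lane, as quoted in the entry
```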
This refers to the separation of the left and right images on the projection device or display screen. Positive Parallax puts an object behind the screen (on screen objects in the left eye image are to the left of the same objects in the right eye image). Negative parallax puts an object in front of the screen (on screen objects in the left eye image are to the right of the same objects in the right eye image).
Zero or neutral parallax puts an object on the screen (on screen objects in the left eye image are overlaid on the same objects in the right eye image).
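The sign conventions above can be captured in a tiny helper (an illustrative sketch; the `parallax` function and its screen-space x-coordinates, increasing to the right, are assumptions for the example):

```python
def parallax(x_left_eye, x_right_eye):
    """Classify parallax from an object's horizontal screen position in
    the left-eye and right-eye images (x increases to the right)."""
    p = x_right_eye - x_left_eye
    if p > 0:
        return "positive (object appears behind the screen)"
    if p < 0:
        return "negative (object appears in front of the screen)"
    return "zero (object appears at the screen)"

print(parallax(100, 110))  # left-eye image left of right-eye image -> positive
print(parallax(110, 100))  # the reverse -> negative
```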
The only difference between stereo cameras should be parallax or angle between the axes of the lenses, as in Camera Convergence – anything else can disturb the stereo viewing experience. This requires close attention, so that the cameras are set-up the same and with the same filters. Color differences, skewing, vertical misalignment, differential weave and hop, lens flares, poor VFX fixes, scratches and dirt can all cause problems.
Fast cuts between shots with strong positive and strong negative parallax can be unsettling in some circumstances. This is because the eyes and brain are being asked to jump uncomfortably quickly between positions and then make sense of the result. This can be mitigated by the use of ‘handing off’ – dynamically changing the convergence of an outgoing shot in relation to an incoming shot. Another method of dealing with this is trying wherever possible to cut between shots that are somewhat close in parallax.
Vertical parallax is a vertical offset between stereo images and is very uncomfortable to watch – so it is necessary to remove it during post production if there has been camera misalignment during shooting.
Note: The term ‘Parallax’ is sometimes used interchangeably with ‘Congruence’ or ‘Disparity’.
Using several processors simultaneously with the aim of increasing speed over single processor performance. It often refers to array processor computer hardware that carries out multiple, often identical, mathematical computations at the same time. Generally array processors are designed with specific tasks in mind and so are not suitable for running complex operational software. Due to system administration and the fact that not all processors will complete their tasks at the same moment, causing waiting time, the increase in speed gained by sharing the task may not be proportional to the number of channels available.
Due to the very different structure of a parallel processing computer, software designed to run on a single processor system may well need significant changes to take full advantage of a parallel system. The current expansion of popular PC CPUs to offer two, four or more processor CPUs generally works with established applications by running the various applications on separate core processors, rather than one application on multiple processors.
Personal area network used for communicating between computer-based devices, such as phones, mobile TVs and PDAs as well as fixed computers, all within a few meters, and through to other networks such as the Internet. PANs can be wired, via busses such as USB or IEEE 1394, or wireless (WPAN) using technologies such as Bluetooth.
Production Asset Management could be considered a subset of MAM. It is intended to co-ordinate the work of all stages involved with post production, and so ensure it runs efficiently and meets planned deadlines and budgets. It can enhance collaboration, manage down to a detailed level for edit suites and media storage, and manage those contributing other elements, such as special effects. Management of items such as the original video, audio, EDLs, proxies, versioning, content protection and backup can be included. But PAM is not tightly defined, and some say that it ends when the media file is ‘flattened’ – when all the effects, corrections, cuts, mixes, etc. are applied and resolved to produce one layer of video and the finished tracks of audio.
A version of the PAL color coding system, but using a 525-line 60-field structure. It was only used in parts of South America (Brazil).
Phase Alternating Line. The analog color coding system that effectively combined the red, green and blue signals sent from the camera or scanner into one composite color signal. It was widely used for television in Europe and in many countries around the world, always with the 625/50 (lines per picture/interlaced fields per second) system, except in Brazil (see PAL-M). The engineers designing PAL had the advantage of seeing the NTSC system, and managed to improve on it by swinging the phase of the reference color ‘burst’ on alternate lines – hence Phase Alternating Line. NTSC had no way of automatically correcting the drift of the subcarrier that carried the color information, so as the phase drifted, so did the color’s hue – hence the system’s other name, ‘Never Twice the Same Color’! As PAL switched the subcarrier phase by 90 degrees line by line (+45 and -45 degrees from the color axis), the phase, and so the hue of the colors, could be averaged, rendering the correct hue at the receiver. This required every decoder to have a one TV-line delay, which was expensive; however the price quickly dropped as sales took off in the mid 1960s.
Bandwidth for the PAL-I system (here ‘I’ indicates 625/50 line/field scanning) is typically 5.5 MHz luminance, and 1.3 MHz for each of the color difference signals, U and V which are coded with the luminance. Note that the PAL term is frequently used to describe any 625/50I analog format even if it is component, or in the 576/50I digital television system where PAL coding is not used.
A set of clips, mattes, DVE settings, color corrector, keyer, etc., that are used together to make a video layer of a composited picture. Quantel equipment allows packs to be saved and archived so they can be used later for any re-works.
Sampling information at a higher resolution than is required for the output format. For example, an HD picture can be regarded as an over sampled version of SD. SD pictures created from down sized HD are generally clearer, cleaner (lower noise) and sharper than those made directly in SD. This is because the size reduction process tends to lower noise and the output pictures are derived from more information than is available in a direct SD scan. An increasing amount of SD material is originated this way. Similarly, 35 mm film provides an over sampled source for SD and HD.
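The noise-reduction effect can be demonstrated with a one-dimensional sketch (illustrative only; real downsizing uses proper filtering rather than plain block averaging):

```python
import random

# Simulate an over-sampled signal: a flat mid-grey with random noise.
random.seed(42)
hd_samples = [0.5 + random.gauss(0, 0.1) for _ in range(10000)]

# 'Downsize' by averaging groups of four samples, as a crude stand-in
# for a 2x2 size reduction.
sd_samples = [sum(hd_samples[i:i + 4]) / 4
              for i in range(0, len(hd_samples), 4)]

def noise(xs):
    """Standard deviation, used here as a simple noise measure."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

# Averaging four independent samples roughly halves the random noise.
print(noise(sd_samples) < noise(hd_samples))  # True
```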
Over-The-Top (content) refers to video and audio that is delivered over the broadband internet without involving broadcasters of cable, satellite or terrestrial television systems. The provider may be aware of the content but is not responsible for it, or in control of it.
See also: IPTV
A one-to-one condition where what is being displayed is the same as the ‘real world’. For example IMAX 3D is often shot with parallel cameras spaced at the average human adult interocular distance (approx 63.5 mm) and with wide angle lenses that closely match an audience member’s view of the screen.
Disks that use optical techniques for recording and replay of material without the read/write heads touching the disk. These offer large storage capacities on ‘CD sized’ 5.25-inch (12 cm) polycarbonate disks with technologies including CD, DVD and Blu-ray Disc, with capacities of 700 MB for CD, 4.37–15.90 GB for DVD and 25–128 GB for Blu-ray Disc. The CD format was designed for audio, DVD for SD video and Blu-ray Disc for HD video. All have another life in data storage, and all are available in ROM and read/write forms. The next development is expected to be a disk able to store a 4K UHD movie, which will also be useful for storing large quantities of data.
See also: Blu-ray Disc, DVD, Professional Disc, XDCAM
The base program that manages a computer and gives control of the functions designed for general purpose usage, not for specific applications. Common examples are MS-DOS, Windows and Linux for PCs, OSX for Apple Macintosh and UNIX. The operating system provides the connections to keyboards, networks, storage and displays. For actual use, such as word processing, there are specific applications software packages that run on top of the operating system. For high performance applications such as video editing the developer needs to minimise the use of the OS to ensure that the most compute cycles can be available for the application. Efficient use of the OS and its facilities are key to producing smooth and interactive interfaces between operator and machine.
See also: Threading
OpenEXR is a file format developed by Industrial Light & Magic for high dynamic range (HDR) images that are used in computer imaging applications. It is used by ILM on motion pictures, has become ILM’s main image file format, and is released as free software. It offers a higher dynamic range than 8 or 10-bit pictures, support for 16-bit floating-point, 32-bit floating-point, and 32-bit integer pixels, up to 2:1 lossless compression algorithms, and extensibility to add new compression codecs and image types. It forms the container for the ACES color format.
See also: ACES, Color Transform Language
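A small sketch of why 16-bit floating point gives a higher dynamic range than 8- or 10-bit integer pixels, using Python's stdlib half-precision packing as a stand-in for OpenEXR's 16-bit float type:

```python
import struct

def roundtrip_half(x):
    """Pack and unpack a value as an IEEE half-precision (16-bit) float."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# A highlight value well above 'white' (1.0) survives the round trip,
# whereas an integer pipeline normalised to 0..1 would have clipped it.
print(roundtrip_half(64.0))   # 64.0
# Tiny shadow detail is also representable (to limited precision).
print(roundtrip_half(0.0001) > 0)  # True
```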
The Open Systems Interconnection (OSI) Basic Reference Model describes a general structure for communications, defined by the ISO, which comprises seven layers and forms a framework for the coordination of current and future standards, but not defining the standards themselves.
Production of the complete, final edit performed at full program quality; the buck stops here! Being higher quality than offline editing, online time costs more, but the difference has narrowed as the cost of equipment has reduced. Preparation in an offline suite will help save time and money in the online. To produce the finished edit, online has to include a wide range of tools, offer flexibility to try ideas and accommodate late changes, and work fast to maintain the creative flow and handle pressured situations.
A one-light pass refers to a film-processing lab giving the same exposure to a defined length of film, during printing. This is the simplest, quickest and cheapest way to print all the film and the results are typically used for making rushes, dailies, etc. These are often subsequently telecined and recorded to videotape as a reference for the offline decision-making process.
See also: Best light
Open Media Framework Interchange is an open standard developed by Avid, for post production interchange of digital media among applications and across platforms. It describes a file format and supports video, audio, graphics, animation and effects as well as comprehensive edit decision information. Transfers may be by removable media or over a high-speed network.
Organic Light Emitting Diode technology is used to make displays for video, computer screens, mobile devices, and more. Light is generated when a voltage is applied across an emissive electroluminescent layer of an organic (containing carbon) semiconductor compound via two electrode layers. The brightness of the light from each diode is varied according to the video input – unlike ‘LED’ screens that are predominant in the current screen market. Here the LEDs provide a white backlight behind an LCD (liquid crystal display) that works as a red, green and blue light filter for each pixel that is controlled by the input image (video, etc).
The OLED technology offers a number of benefits over LED screen technology. As there is no backlight, when the OLEDs are switched off there is no light emitted: black really looks black, and the screens are specified with a very high contrast ratio. They also offer a faster response time, wider viewing angles and larger color gamuts. As a result the images look very good and they are often used as monitors. As it is easy to put the OLEDs very close together, it is possible to make very small screens that deliver full resolution images – useful for viewfinders, handheld devices, phones, etc. An OLED screen’s power consumption is less than that of an equivalent LED screen, as almost all the light generated is visible, with none absorbed in filters. The screens can be very thin – an early 50-inch consumer model boasted a thickness of just 6 mm. The screens can be bent, and some manufacturers are offering curved screens. Perhaps OLEDs may even be used to make a roll-up screen.
On the downside, OLED screens are still offered at a substantial price premium over LED versions. Also it is said that the life expectancy of OLED screens is considerably less than that of LEDs, mainly due to the relatively short life of the blue OLEDs. Work continues to improve that area.
OLED screens have been produced for a number of years but only recently have been widely available in the consumer market.
See also: Quantum dot, SED
Open IPTV Forum. This has become a part of the HbbTV Association, and so is very much involved with accelerating the deployment of IPTV services.
See also: AAF
A decision-making process using low-cost equipment to produce an EDL or a rough cut which can then be conformed or referred to in a high quality online suite – so reducing decision-making time in the more expensive online environment. Most offline suites enable shot selection and the defining of basic transitions such as cuts and dissolves which are carried by EDLs. AAF provides an open standard for transporting a much wider range of decisions, including DVE, color corrections, as well as other metadata, between systems from different manufacturers.
With on-line video quality now available on low-cost computers there is a choice of skipping offline and editing broadcast quality footage very cost-effectively, or continuing to use off-line, depending on the requirements of the project and budgets.
Orthogonal Frequency Division Multiple Access is a technology used for terrestrial radio return channels. Generally it provides several users with simultaneous low data rate transmission for fixed and mobile applications. Power can be less than a Watt and will communicate over considerable distances. OFDMA is used in the IEEE 802.16-2005 Mobile WiMAX broadband wireless access standard, enabling mobile DSL services and mobile telephony to provide customers with enhanced voice and data services.
Ofcom describes itself as the ‘Independent regulator and competition authority for the UK communications industries’. Its responsibilities include telecommunications and radio and television broadcasting – ranging from licensing radio and TV channels to customer complaints.