Category Archives: C

C-mode

In videotape editing, operating in C-mode allows the shots on the source footage reels to be copied to the edit tape in the order they are recorded on the source tapes, rather than the order required for the finished program. So all the selected material from one source reel can be copied before loading another source reel – thus saving spooling and reel changing time. This assumes that none of the edits will need to be changed along the way!

The term lives on in the disk-based editing world where the editing disk store can randomly access the footage recorded from the source tapes. So C-mode editing on disks not only gives the advantages of faster and more convenient transfers from source tapes, but also allows the edits to be easily altered thanks to the random access of disk stores.

Camera negative (film)

Camera negative film is designed to capture as much detail as possible from scenes. This refers not only to its spatial resolution but also to its dynamic resolution. Modern camera negative stock has almost 10 stops (over 1,000:1) of exposure range and so is able to record detail in both the low-lights and the highlights, well beyond the range that can be shown on the final print film. This provides latitude to compensate for over- or under-exposure during the shoot, or to change the look of a scene. The latitude is engineered into the film stock by giving it a very low gamma of around 0.6.
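As a rough, illustrative check of the arithmetic behind those figures (not from the original text): each stop doubles the exposure, and a gamma of 0.6 compresses the captured log-exposure range into a smaller density range on the negative.

    import math

    stops = 10
    exposure_ratio = 2 ** stops                      # 1024, i.e. 'over 1,000:1'
    log_exposure_range = math.log10(exposure_ratio)  # about 3.0
    # With a gamma of 0.6, density changes by only 0.6 of the change in
    # log exposure - the engineered latitude described above.
    density_range = 0.6 * log_exposure_range         # about 1.8 on the negative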

Exposed and developed camera color negative film has an orange tint and is low in contrast, differing greatly from the un-tinted, high-contrast print film. As not only the blue but also the red and green layers of the film are sensitive to blue light, the orange layer is added below the blue layer to stop blue light going further. All types of film stock use the orange dye, but for print films it is bleached away during processing.

There are numerous stocks available. High-speed stocks work well in low light but tend to be grainier; the opposite is true of low-speed stocks.

Cardboarding (Stereoscopic)

Lack of a true 3D feel to a shot, making solid objects look as if they are made from cardboard cutouts, i.e. they have no depth. This is also referred to as the Cutout Planar Effect. It is caused by inadequate depth resolution, due to a mismatch between the focal length of the recording lens (or CGI camera) and the interocular distance between the cameras. Typically, big zooms on distant objects can cause the effect if the left and right cameras are not sufficiently far apart.

See also: Interocular

CCD

Charge Coupled Device (CCD): assembled as either a linear or a two-dimensional array of light-sensitive elements. Light is converted to an electrical charge in a linear fashion, proportional to the brightness impinging on each cell. The cells are coupled to a scanning system which, after analog-to-digital conversion, presents the image as a series of binary digits.

Typically the charge created on each cell is shuffled along to the end of its line, where it is measured by an ADC that outputs a digital number corresponding to the charge and hence the brightness. This handling of the charge exposes it to noise, and early CCD arrays were unable to work over a wide range of brightness. Modern CCDs offer low-noise, high-resolution imaging for television and digital cinematography.

See also: CMOS

CCIR

The Comité Consultatif International des Radiocommunications is now history, having been absorbed into the ITU under the ITU-R prefix.

See also: ITU

CCITT

International Telegraph and Telephone Consultative Committee. As the name suggests, this was initially set up to establish standards for the telephone industry in Europe. It has since been superseded by ITU-T, putting both radio frequency matters (ITU-R) and telecommunications under one overall United Nations body.

See also: ITU

Channel in a box

A single solution that automates the compilation and playout of program, commercial and promotional material to be sent to the transmitter or delivered via any other broadcast medium, such as the internet. As the name implies, the purpose of channel-in-a-box (CiaB) is to provide all the functions needed for placing a program on air in a single ‘personal computer’ box running a standard operating system. The long-term aim was to create a software solution running on commercial off-the-shelf (COTS) hardware, providing broadcast performance at a fraction of the usual cost.

At the start, in the 1990s, personal computers had a small fraction of the power they provide today. Initially CiaB users were generally small or start-up TV channels, or disaster recovery operations. Some functions, such as graphics and video input and output, had to be handled by special cards within the box. Working with the many different file types used for delivering material also meant that playing it out from the box was often a challenge.

Today, the long-term aim has been reached: computers offer the power to handle all the required functions in software, except the basics of video in and out, for multiple channels of HD if required. CiaB offerings range from simple solutions providing automation and playout of video, channel branding and captions, to the most fully featured systems with rich graphics, comprehensive audio handling (including Dolby) and insertion of ancillary and signaling data for use downstream by other devices, as well as many other features found more traditionally in the broadcast chain.

The CiaB customer base has also expanded from a small market of low-cost start-ups, which it still serves, to include the mainstream business of serving broadcasters of all sizes and types.

Website: http://www.snellgroup.com/products/channel-in-a-box/ice-family/ice-overview/

Checksum

A simple check value for a block of data, intended to detect when data bits have been wrongly presented. It is calculated by adding all the bytes in a block. It is fairly easily fooled by typical errors in data transmission systems, so for most applications a more sophisticated system such as CRC is preferred.
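A minimal Python sketch of the idea, showing how easily a simple additive checksum is fooled – here, two transposed bytes go undetected:

    def checksum(data: bytes) -> int:
        # Simple additive checksum: the sum of all bytes, modulo 256.
        return sum(data) % 256

    msg_a = b"\x01\x02\x03"
    msg_b = b"\x03\x02\x01"                      # same bytes, transposed...
    assert checksum(msg_a) == checksum(msg_b)    # ...yet the checksums match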

See also: CRC

Chroma keying

The process of overlaying one video picture, or part of one, over another. The areas of overlay are defined by a specific range of color, or chrominance – typically the colored backing behind the foreground subject – which is used to create a key signal for a chroma keyer. For this to work reliably, the chrominance must have sufficient resolution, or bandwidth. The PAL and NTSC analog coding systems significantly restrict chroma bandwidth and so are of very limited use for making a chroma key, which for many years was restricted to using live RGB camera feeds.
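As a much-simplified sketch of key generation (real keyers are far more sophisticated), a key can be derived from each pixel's chrominance distance from the sampled backing color. The array names and thresholds here are illustrative assumptions:

    import numpy as np

    def chroma_key(cb, cr, key_cb, key_cr, tolerance=0.08, softness=0.04):
        # cb, cr: color difference planes of the foreground image (floats).
        # key_cb, key_cr: the sampled backing color to be keyed out.
        distance = np.hypot(cb - key_cb, cr - key_cr)
        # 0.0 = show background, 1.0 = keep foreground, with a soft edge.
        key = (distance - tolerance) / softness
        return np.clip(key, 0.0, 1.0)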

An objective of the ITU-R BT.601 and 709 digital sampling standards was to allow high-quality chroma keying in post production. The 4:2:2 sampling system allows far greater bandwidth for chroma than PAL or NTSC and helped chroma keying, and the whole business of layering, to thrive in post production. High signal quality is still important for deriving good keys, so some high-end operations favor using RGB (4:4:4) for keying – despite the additional storage requirements. Certainly anything but very mild compression tends to produce visible keying errors, especially at DCT block boundaries.

Chroma keying techniques have continued to advance and use many refinements, to the point where totally convincing composite images can be easily created. You can no longer see the join and it may no longer be possible to distinguish between what is real and what is keyed.

See also: Color space, Digital keying, Photo-real

Chrominance

The color part of a television signal, relating to the hue and saturation but not to the brightness (luminance) of the signal. Thus pure black, gray and white have no chrominance, but any colored signal has both chrominance and luminance. Although imaging equipment registers red, blue and green, television pictures are handled and transmitted as U and V, Cr and Cb, or (R-Y) and (B-Y), which all represent the chrominance information of a signal, and the pure luminance (Y).

See also: YUV, Y,Cr,Cb, Composite

CIE

International Commission on Illumination (Commission Internationale de l’Eclairage) is devoted to international cooperation and exchange of information among its member countries on all matters relating to the science and art of lighting. It is a technical, scientific and cultural, non-profit autonomous organization that has grown out of the interests of individuals working in illumination – lighting. It is recognized by ISO as an international standardization body.

See also: X´Y´Z´

Website: www.cie.co.at/cie

CIFS

Common Internet File System is a platform-independent file-sharing system that supports rich, collaborative applications over the internet which could be useful for collaborative post production workflows. It defines a standard remote file-system access protocol, enabling groups of users to work together and share documents via the Internet or within intranets. CIFS is an open, cross-platform technology based on native file-sharing protocols in Windows and other popular PC operating systems, and is supported on other platforms, so users can open and share remote files on the Internet without installing new software or changing work methods.

CIFS allows multiple clients to access and update the same file, while preventing conflicts by using sophisticated file-sharing and locking semantics. These mechanisms also permit aggressive caching and read-ahead/write-behind without loss of cache coherency.

CIFS also supports fault tolerance in the face of network and server failures.

In Quantel’s Genetic Engineering teamworking infrastructure, the Sam data server virtualizes media on-the-fly to give third-party applications instant access to all stored media using the CIFS protocol for no-API, out-of-the-box connectivity.

Website: www.microsoft.com/mind/1196/cifs.asp

Cineon (file)

An RGB bitmap file format (extension .cin) developed by Kodak and widely used for storing and transferring digitized film images in production and post production. It accommodates a range of film frame sizes up to full Vista Vision (a rarely used format with a 36 x 18.3 mm frame laid horizontally). In all cases the digital pictures have square pixels and use 10-bit log sampling. The sampling is scaled so that each of the code values from 0-1023 represents a density difference of 0.002, describing a total density range of 2.046 – equivalent to an exposure range of around 2,570:1, or about 11.3 stops. Note that this is no longer beyond the range of modern negative film, which has led to the use of higher-precision file formats.
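The quoted exposure range follows from the density range and the gamma of camera negative (about 0.6, see Camera negative above); as an illustrative check of the arithmetic:

    import math

    density_range = 0.002 * 1023               # 2.046, over code values 0-1023
    log_exposure_range = density_range / 0.6   # about 3.41 in log10 exposure
    exposure_ratio = 10 ** log_exposure_range  # about 2,570:1
    stops = math.log2(exposure_ratio)          # about 11.3 stops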

The format was partly designed to hold virtually all the useful information contained in negatives and so create a useful ‘digital negative’ suitable as a source for post production processing and creating a digital master of a whole program.

See also: 10-bit log, ACES, Color spaces, DPX

Clip

The name is taken from the film industry and refers to a segment of sequential frames made during the filming of a scene. In television terms a clip is the same but represents a segment of sequential video frames. In Quantel editing systems, a clip can be a single video segment or a series of video segments spliced together. A video clip can also be recorded with audio or have audio added to it.

Clone

An exact copy, indistinguishable from the original, as in copying recorded material – e.g. a copy of a non-compressed recording to another non-compressed recording. If attempting to clone compressed material, care must be taken not to decompress it as part of the process, or the result will not be a clone. Today, cloning is best done by simply copying the file.

Cloud

For years computer network engineers have drawn a cloud in their diagrams, implying a distant reach of the network – usually somewhere more storage can be accessed. Today, with reasonably high-speed internet widely available, that fluffy thing has achieved front-of-stage importance, with a rapid expansion of cloud computing service providers offering both storage and processing in their clouds. You can even have your own private cloud if you want.

Note that this is more than remote storage in a data center, which would still require your computer to have the processing power and programs (apps) to alter and edit your video or audio. Cloud computing can be highly asymmetric: your computer only has to run a browser and upload video to the cloud, and in theory all the rest can be done in the cloud, depending on the tools available and how you want to work.

Typically these clouds are accessed via the internet and offer access to any amount of your uploaded data from any internet connection. There is a very wide range of services available, such as payroll, tax returns, sales coordination, etc. Some providers offer specific services for the media industry, specializing in video storage, viewing and editing via their cloud. Uploading newly shot footage and editing can be done from anywhere with an internet connection. Once in the cloud, everyone with permission can access the footage by logging onto the website. This has major benefits for location shoots, not least being able to deliver valuable new footage to a safe place without moving. It also helps with studio productions, as all required personnel can contribute to the production and post production without having to move from their laptops.

Technically cloud services divide into three recognized types:

Software as a Service (SaaS) is popular as it only requires a modest computer on the ground, which basically just has to support a web browser, because all the software – and probably quite a bit of the storage – is in the cloud. So, for example, freshly shot video footage could be stored in the cloud, then viewed, edited and reviewed. Another powerful use is planning and organizing production, so everyone is kept up to date.

With Platform as a Service (PaaS), users can run their own applications, provided they fit with what the cloud service operator offers in terms of platforms. When using Infrastructure as a Service (IaaS), you can run whatever programs you like on the cloud's available hardware.

CMOS

Complementary Metal-Oxide Semiconductor technology is very widely used to manufacture a wide range of electronic integrated circuits (chips). CMOS digital applications include microprocessors and both static and dynamic RAM. There are also a variety of analog applications.

CMOS devices are favored for their immunity to high levels of noise, low static power drain, with significant power only drawn while the transistors switch, and high density packing of logic functions. Being so widely used, the technology is relatively cheap to manufacture.

CMOS imaging sensors are potentially cheaper to make than the alternative CCDs. They also consume less power, can be more light-sensitive (and so faster, less noisy and better in low light), have less image lag and can include image-processing functions at each photosite (cell) on the chip. Each photocell can have its own ADC, so the charge created by the light falling on it is converted into digits on site before the data is passed to the highways. With this on-site digitizing, CMOS imaging sensors have a much lower noise level and so can work in lower light conditions than CCDs.

CMOS technology also plays a vital role in digital projection where DMD (Digital Micromirror Device) chips make images from data.

See also: DMD

CMTS

Cable Modem Termination System: equipment located at a cable headend or hub that connects to customers' lines and provides fast data services, including internet access, to those customers.

Co-sited sampling

This is a sampling technique applied to color difference component video signals (Y, Cr, Cb) in which the color difference signals, Cr and Cb, are sampled at a sub-multiple of the luminance (Y) sampling frequency – as in 4:2:2, for example. With co-sited sampling, the two color difference signals are sampled at the same instant, simultaneously with a luminance sample. Co-sited sampling is the ‘norm’ for component video as it ensures the luminance and chrominance digital information is coincident, minimizing chroma/luma delay.

[Diagram: 4:2:2 co-sited sampling]
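A minimal sketch of the sampling pattern, assuming full-resolution y, cr and cb arrays of shape (height, width):

    def cosited_422(y, cr, cb):
        # Y keeps every sample; Cr and Cb keep every second sample, taken
        # at the same instants (positions) as the even-numbered Y samples
        # - co-sited, rather than offset between luminance samples.
        return y, cr[:, ::2], cb[:, ::2]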

Codec

Originally short for a combination of a coder and decoder but now often used to describe just one or the other. Mostly codec refers to a compression coder or decoder such as JPEG, MPEG or JPEG 2000.

COFDM

Coded Orthogonal Frequency Division Multiplexing – a modulation scheme used by the DVB digital television system. It allows the use of either 1705 carriers (usually known as ‘2K’) or 6817 carriers (‘8K’). Concatenated error correction is used. The ‘2K’ mode is suitable for single-transmitter operation and for relatively small single-frequency networks with limited transmitter power. The ‘8K’ mode can be used both for single-transmitter operation and for large-area single-frequency networks (SFN). The guard interval is selectable. The ‘8K’ system is compatible with the ‘2K’ system. At the start of DVB transmissions in the United Kingdom in November 1998 only the 2K mode was available; 8K became possible soon after.

There has been much discussion about the relative merits of COFDM versus the 8-VSB scheme used in the ATSC standard (OFDM may be considered for the upcoming ATSC 3). The Japanese ISDB system uses a similar scheme, OFDM, and the Chinese have developed their own transmission scheme, the DMB-T/H standard – not to be confused with the Korean T-DMB modulation standard – both being significantly different COFDM implementations.

See also: DVB, ISDB, VSB

Website: www.dvb.org

Color cube

A representation of color space by a three-dimensional diagram. For example, all definable colors of an RGB color space can be contained in an RGB color cube where R, G and B are axes at right angles to each other (like x, y and z at the corner of a cube). Different color spaces and interpretations of color are defined by different color cubes.

If the exact spectral values of R, G and B are defined, that cube defines an absolute color space. Such cubes are available from a number of vendors.

Color Decision List (CDL)

The American Society of Cinematographers’ Color Decision List (ASC-CDL) is a proposed metadata interchange format for color correction, developed to provide a means of guiding the next stage in the process with respect to color correction. The CDL applies primary corrections of slope (gain), offset, power (gamma) and saturation, expressed in a normalized manner.
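As a hedged sketch, the widely published CDL per-channel function combines the first three corrections as out = (in x slope + offset) ^ power on normalized code values, with saturation applied afterwards across R, G and B:

    def asc_cdl_channel(value, slope, offset, power):
        # One normalized (0.0-1.0) component through the CDL
        # slope-offset-power correction; negative results are clamped
        # before the power (gamma-like) term is applied.
        out = value * slope + offset
        return max(out, 0.0) ** power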

Color management

The control of color through a defined process. The idea is that all, or at least some, pictures, monitors or screens should portray the colors as they will appear when shown in their target medium. For example, if working on a DI project, the color management should ensure that the monitors show the colors exactly as an audience in a cinema will see them. Today that may be by use of a release print, in which case the color management has to adjust the monitoring for the film stock to be used. It could also be from a digital cinema distribution master (DCDM) and the color characteristics of that path taken into account as part of the color management.

Color science

Color science has been an area of scientific research for over 100 years. Brought together in 1931 by the CIE (Commission Internationale de l’Eclairage), this sphere of science studies all aspects of human perception of color and brightness. Early uses included studying how dyes could be mixed, the issues of color printing, and the effects of different viewing conditions on perception. Large amounts of research and many books have been published on the subject.

The definitions for all our television and cinema viewing standards are rooted in color science. The numeric definition of R, G, B and the conversions to Y,Cr,Cb are examples of the practical use of color science. In today’s multimedia world, media transportability and the maintenance of creative intent would not be possible without color science defining the solid core of math that supports the industry.
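For example, the Rec.709 luma weighting and color difference scaling are direct products of color science; a minimal Python sketch on normalized non-linear R', G', B' values:

    def rec709_ycbcr(r, g, b):
        # Luma: the weights reflect the eye's differing sensitivity
        # to red, green and blue.
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        # Color difference signals, scaled to the nominal Rec.709 ranges.
        cb = (b - y) / 1.8556
        cr = (r - y) / 1.5748
        return y, cb, cr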

See also: Color Space

Color space

The color range between specified references. Typically three reference points are quoted: in television these could be RGB; Y, R-Y, B-Y; or hue, saturation and luminance (HSL). These are all color spaces. In print, cyan, magenta, yellow and black (CMYK) are typically used. Film is RGB, while digital cinema uses X´Y´Z´. Pictures can be moved between these color spaces, but doing so requires careful attention to the accuracy of the processing involved. Operating across media – in print, film and TV, as well as between computers and television – requires color space conversions to display the correct colors everywhere.

Electronic light sensors detect red, blue and green (RGB) light, but TV signals are usually changed into Y, R-Y and B-Y components as, or very soon after, they enter the electronic realm via camera, scanner or telecine. There is some discussion about which color space is best for post production, the most critical operation being keying. However, with most video storage and infrastructure being component-based, the full RGB signal is usually not available, so any of its advantages can be hard to realize for television-based productions. In the Digital Intermediate (DI) process, though, where movie footage undergoes ‘post production’, RGB color space predominates.

With the increasing use of disk storage, networking able to carry RGB and digital cameras with RGB outputs, RGB infrastructure and operations are more widely used. Even so, RGB takes up 50 percent more storage and, for most productions, its benefits over component working are rarely noticed. One area that is fixed on RGB use is in 2K and 4K digital film (digital intermediates). Modern digital techniques allow the use of both RGB and Y R-Y B-Y to best suit the requirements of production as well as those of distribution and delivery.

More recently the world of color has become more complex, with a wide diversity of cameras, display systems and methodologies for moving from one to the other. The CRT display set the colorimetric standard that still holds today, even though CRTs are no longer used and other display technologies – LCD, plasma, micro-mirror and OLED, for example – each have their own characteristics for color and light output transforms.

In its broadest television description, color space defines how all parts of the imaging chain respond to light or electrical drive. Cameras separate out the red, green and blue light using filters. The spectral response of these filters – in particular their passbands – controls the limits of the range of colors that can be represented. These are usually shown in a diagram of the type below.

[Diagram: CIE xy chromaticity diagram with the visible color locus and the ACES, DCI P3 and Rec.709 gamut triangles]

The axes of this diagram are the x and y of the xyY color notation, where the Y component represents intensity. The area visible to humans is infilled with color and, on more detailed depictions of this diagram, the wavelengths of light representing each color are marked around the edge. Saturation increases towards the edge of the locus.

The corners of the triangles for the ACES, DCI P3 and Rec.709 color spaces are at the coordinates of the primary colors used by those displays. Only colors inside the respective triangle can be reproduced by that display. You can see that no display technology covers the full human range of color and that, for example, full green on a Rec.709 monitor is not the same color as full green on a P3 display. Some colors in nature cannot be represented at their proper saturation using today’s imaging technology.

See also: 2K, Keying

Color timing (a.k.a. Grading)

The color of film exposed and processed in a laboratory is controlled by separately altering the amount of time that the red, green and blue lights are used to expose the film. This is referred to as color timing, and its effect is to alter the contrast of R, G and B to create the required color balance.

In a lab, color timing is usually applied at the point where the edited negative is copied to the master interpositive, but it can be done later at other points if required. This contrasts with the digital intermediate process, where color timing can be applied at any stage. In addition there is far more flexibility for color control – gamma, hue, luminance and saturation, as well as secondary color correction. The results can be seen immediately, projected onto a large cinema screen, and further adjusted if required. The images have precise color settings to show the results as if output via film, or digitally.

See also: Grading, Timing

Color Transformation Language (CTL)

Color Transformation Language is a small programming language designed to serve as a building block for digital color management systems. It allows users to describe color transforms in a concise and unambiguous way, expressing them as programs that can be applied to pixel data. It is designed to run fast, operating on many pixels at one time.

See also: OpenEXR

Website: http://ampasctl.sourceforge.net

Common Image Format (CIF)

The ITU has defined common image formats. A standard definition image of 352 x 240 pixels is described for computers. For HDTV production the HD-CIF preferred format is defined in ITU-R BT.709-4 as 1920 x 1080 pixels, 16:9 aspect ratio with progressive frame rates of 24, 25 and 30 Hz (including segmented scan) and interlace field rates of 50 and 60 Hz. This has helped to secure the 1920 x 1080 format as the basis for international program exchange.

See also: ITU-R BT.709

Component video

The normal interpretation of a component video signal is one in which the luminance and chrominance remain as separate components, e.g. analog components in Betacam VTRs, digital components Y, Cr, Cb in ITU-R BT.601 and 709. RGB is also a component signal.

Component video signals retain maximum luminance and chrominance bandwidth and the frames are independent of each other and so can be edited at any frame boundary.

See also: Cut (edit)

Composite Video

Luminance and chrominance are combined, along with the timing reference sync and color burst information, using one of the color coding standards – NTSC, PAL or SECAM – to make composite video. The process, which is an analog form of video compression, restricts the bandwidths (image detail) of the color components. In the composite result, color is literally added to the monochrome (luminance or Y) information using a visually acceptable technique. As our eyes have far more resolving power for luminance than for color, the sharpness (bandwidth) of the coded color signal is reduced to well below that of the luminance.

This provides a good solution for transmission and viewing, but it becomes difficult, if not impossible, to accurately reverse the process (decode) back into pure luminance and chrominance. This limits its use in post production, as repeated decode/recode cycles severely impair the pictures. Deriving keys from composite video gives poor results.

See also: 4fsc, D2, D3

Compositing

Multi-layering for moving pictures. Modern composites often use many techniques together, such as painting, retouching, rotoscoping, keying/matting, digital effects and color correction as well as multi-layering to create complex animations and ‘opticals’ for promotions, title sequences and commercials as well as in program content. Besides the creative element there are other important applications for compositing equipment such as image repair, glass painting and wire removal, especially in motion pictures.

The quality of the finished work, and therefore the equipment, can be crucial especially where seamless results are demanded. For example, adding a foreground convincingly over a background – such as placing an actor into a scene – without any telltale blue edges or other signs that the scene is composed.

See also: Digital keying, Photo-real

Compression (audio)

Reduction of bandwidth or data rate for audio. Many digital schemes are in use, all of which make use of the way the ear hears (e.g. that a loud sound will tend to mask a quieter one) to reduce the information sent. Generally this is of benefit in areas where bandwidth or storage is limited, such as in delivery systems to the home, hand-held players, etc. Generally the terms ‘coding’ or ‘codec’ refer to a compression scheme, such as MP3.

See also: Auditory masking, Dolby Digital

Compression ratio

The ratio of the amount of data in the non-compressed digital video signal to that in the compressed version. Modern compression techniques start with component television signals, but a variety of sampling systems are used: 4:2:2 (‘Studio’ MPEG-2), 4:2:0 (MPEG-2), 4:1:1 (NTSC DV, DVCPRO), etc. The compression ratio should not be used as the only means of assessing the quality of a compressed signal. For a given technique, greater compression can be expected to result in lower picture quality, but different techniques give widely differing quality of results for the same compression ratio. The more modern technologies – MPEG-4 AVC (H.264), VC-1, JPEG 2000 and the latest HEVC (H.265) – are more efficient than MPEG-2. The only sure method of judgment is to make a very close inspection of the resulting pictures, where appropriate re-assessing their quality after onward video processing.
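A simple illustrative calculation (the function and figures are assumptions for the example): a 1080p25, 4:2:2, 10-bit signal coded at 50 Mb/s is compressed by roughly 20:1.

    def compression_ratio(width, height, bit_depth, fps, coded_mbps):
        # 4:2:2 carries half a Cb and half a Cr sample per Y sample,
        # i.e. two samples per pixel in total.
        samples_per_frame = width * height * 2
        uncompressed_bps = samples_per_frame * bit_depth * fps
        return uncompressed_bps / (coded_mbps * 1e6)

    print(compression_ratio(1920, 1080, 10, 25, 50))   # about 20.7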

See also: Concatenation, DV, ETSI, JPEG, JPEG 2000, MPEG

Compression (video)

The process of reducing the bandwidth or data rate required to transport a video or audio stream. The old analog broadcast standards, PAL, NTSC and SECAM are, in fact, compression systems which greatly reduce the information that was present in the original RGB sources.

Digital compression systems analyze their sources, such as pictures and sound, to find and remove redundancy both within and between images. Some of the techniques were primarily developed for digital data transmission but have been adopted as a means of reducing broadcast transmission bandwidths and storage requirements on disks and VTRs.

A number of compression systems are in regular use for moving images. These include ETSI, JPEG, Motion JPEG, JPEG 2000, DV, MPEG-1, MPEG-2, MPEG-4, AVC, H.264, Windows Media, Real and many more. Where different techniques are used in the same workflow, quality problems can occur, and picture and audio quality can suffer more than if the same method were used throughout.

The MPEG-2 family of compression schemes, which was originally designed for program transmission, has been adapted for studio use in Betacam SX and IMX recorders.

While there is much debate, and new technologies continue to be developed, it remains true that the best compressed results are produced from the highest quality source pictures. Poor quality inputs do not compress well. Noise, which may be interpreted as important picture detail, is the enemy of compression.

The need for ever more efficient compression systems is partly driven by bigger TV picture formats: HD images have about five times the area of SD, and UHDTV-1 (4K) and UHDTV-2 (8K) are respectively 4 and 16 times the picture area of 1920 x 1080 HD. The new HEVC (H.265) coding is expected to be twice as efficient as MPEG-4 AVC, which is itself about twice as efficient as MPEG-2. That efficiency, combined with the roughly twice-as-efficient DVB-T2 and DVB-S2 transmission systems, means that around 32 SD channels can now be broadcast in the space that used to carry one analog TV channel. It also means that the channel could carry one 4K TV service.
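A back-of-envelope version of that claim, with the starting figure an assumption for illustration:

    # All factors are approximate round numbers.
    sd_per_mux = 4       # assumed: SD channels per analog-channel slot,
                         # using MPEG-2 over first-generation DVB-T
    hevc_vs_mpeg4 = 2    # HEVC about twice as efficient as MPEG-4 AVC
    mpeg4_vs_mpeg2 = 2   # MPEG-4 AVC about twice as efficient as MPEG-2
    dvbt2_vs_dvbt = 2    # DVB-T2 carries about twice the payload of DVB-T
    print(sd_per_mux * hevc_vs_mpeg4 * mpeg4_vs_mpeg2 * dvbt2_vs_dvbt)  # 32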

See also: Compression ratio, Concatenation, Digital Betacam, ETSI, HEVC, JPEG, JPEG 2000, MPEG-2, MPEG-4, Windows Media

Concatenation

The linking together of systems in a linear manner. In digital television this often refers to the concatenation of compression systems, which is a subject of concern because any compression beyond about 2:1 generally results in the removal of information that cannot be recovered. As the use of compression increases, so too does the likelihood that material will undergo a number of compressions between acquisition and transmission. Although the effects of one compression cycle might not be very noticeable, the impact of multiple decompressions and recompressions (a.k.a. decoding and recoding), with the material returned to baseband in between, can cause considerable damage. The damage is likely to be greatest where different compression schemes are concatenated in a particular signal path.

Conform

Cutting together recorded material according to a prepared scheme, such as a rough cut or EDL. EDLs can be used to directly control conforming in an online edit suite (auto-conforming). The time taken to conform varies widely: a tape-based suite takes much longer than the finished program’s running time, while a nonlinear online suite with true random access to all material reduces loading time by transferring material in C-mode (see below); the conform itself then takes only a moment and still allows any subsequent adjustments to be easily made.

Note that with in-server editing, material may be loaded onto the server as an independent task, rather than involving the edit equipment itself. This circumvents the so-called loading time and further reduces the total time to produce the finished program. The same is also true of nonlinear edit systems with the bandwidth to support loading as a background task.

There are a number of established ways, or modes, of approaching video editing.

A-mode is a simple sequential method, starting from the beginning of the program and going through to the end.

B-mode uses all shots from one particular source before the next source is used (i.e. checkerboarding).

C-mode is similar to B-mode but works sequentially through the playback source rather than the record tape.

D-mode is similar to A-mode except that dissolves and other transitions are performed last, at the end of the conform.

E-mode is similar to C-mode but with transitions at the end, as in D-mode.

Note that the logic of the above modes is based on the characteristics of videotape. As tape is increasingly replaced by other recording media, network conform is becoming increasingly used. Part of the work of the Advanced Media Workflow Association (AMWA) is to provide new industry standards for this important area.

See also: Delta editing, In-server editing, Uncommitted editing

Website: www.aafassociation.org

Connected TV

A television service that typically allows viewing of recent programs on demand as well as the internet in general. Early versions made use of a set-top box with an internet connection; today it can be accessed with an all-in-one smart TV.

See also: Smart TV

Consolidation

Clearing continuous space on a disk store to allow consistent recording. This generally involves moving data on the disks into one area, leaving the remainder free so that recording can proceed track-to-track without having to make random accesses. The larger the amount of data stored, the longer consolidation may take. Careful consideration must be given to this in large-capacity multi-user systems, such as video servers, especially when used for transmission or on-air playout.

The need for consolidation arises because of a store’s inability to continuously record television frames randomly at video rate. This is taken care of in Quantel’s FrameMagic, where recording can take place over small, scattered areas of the store, so there is no need for consolidation.

See also: Defragmentation, FrameMagic, True random access

Constant bit rate (CBR) compression

Compression systems that are used to create a fixed rate of output data. This is usually to fit within a given bandwidth such as that available on a video tape recorder or a constant bit rate transmission channel. With video, the amount of useful information contained in the material varies widely both spatially and temporally – with movement. For example, a football match with crowds and grass texture as well as fast panning cameras typically contains far more information than a largely static head-and-shoulders shot of a newsreader. Using constant bit rate means that the video quality may suffer in order to meet a specified maximum bit rate. In the football case, the texture of the grass may go ‘flat’ during a camera pan, and reappear when the camera is still.

As overflowing the available bit rate budget could have disastrous results, with bits being lost, the aim is always to use just under the available bit rate. The degree of success in almost filling the available space (not easily done live) is a measure of the quality and efficiency of the compression system.
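A toy sketch of the idea behind CBR rate control (purely illustrative; real encoders predict rather than re-encode): coarsen the quantizer until the frame fits a per-frame budget.

    def choose_quantizer(encode_frame, budget_bits, q_min=1, q_max=51):
        # encode_frame(q) is assumed to return the bits a frame costs at
        # quantizer q, where a coarser q gives fewer bits (lower quality).
        for q in range(q_min, q_max + 1):
            if encode_frame(q) <= budget_bits:
                return q
        return q_max   # even the coarsest setting may overshoot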

See also: Variable bit rate

Contouring

An unwanted artifact, similar to posterization. Digital video systems exhibit contouring when an insufficient number of quantizing levels is used, or when inappropriate processing, such as poor truncation (dropping LSBs rather than rounding), introduces errors. The result is that the picture’s brightness changes in steps that are too large, which become visible over relatively even-brightness areas – like the sky.
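The effect is easy to reproduce; as an illustrative sketch, quantizing a smooth brightness ramp to too few levels turns it into visible steps:

    import numpy as np

    ramp = np.linspace(0.0, 1.0, 1920)   # a smooth brightness gradient
    levels = 32                          # far too few quantizing levels
    contoured = np.round(ramp * (levels - 1)) / (levels - 1)
    # 'contoured' now changes in 32 visible steps instead of smoothly.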

See also: Dynamic Rounding

Convergence (Stereoscopic)

In human vision, convergence is the ability of our eyes to divert their optical axes so they meet at the object we are looking at. The convergence ‘near point’ is the closest point at which it is still possible to perceive one image. This can easily be found by moving your finger towards your nose; when you start to see it as two fingers, that is the near point. Our eyes can easily converge inward but have much less ability to diverge outward; it is something we normally do not do, except when looking at 3D images whose positive parallax (requiring the eyes to diverge, pointing outwards) is wider than the distance between our eyes (the interocular).

In cameras, convergence is the ‘toeing in’ of the cameras (to simulate the eyes converging) to focus on a depth point in a scene, either in front of, behind or at the point of interest. The ‘convergence point’ is where the axes of the toed-in cameras align on the Z-axis. Convergence can be adjusted in post production by horizontal movement. Note that sometimes the term ‘vergence’ is used to cover both convergence and divergence.

Convergence pullers are camera-crew members on a stereoscopic shoot responsible for setting up and shifting the convergence during a shot.

See also: Parallax


Corner pinning

A technique for controlling the position and rotation of pictures in a DVE by dragging their corners to fit a background scene: for example, to fit a (DVE) picture into a frame hanging on a wall. Corner pinning was developed by Quantel as a practical alternative to precisely setting the many parameters needed to accurately position a picture in 3D space.

This works well with graphical user interfaces (GUIs), e.g. pen and tablet. It can also be combined with data derived from four-point image tracking to substitute objects in moving images, for example replacing or obscuring the license plate on a moving vehicle.

See also: Tracking

COTS

Commercial off-the-shelf, usually referring to the use of common components from the computer market to create a product or application. For instance, COTS components and, say, a Windows platform can be used to make a channel-in-a-box integrated video playout system that replaces a host of dedicated hardware. Typically, to meet broadcasters’ expectations of reliability and performance, considerable attention to the precise choice of platform (not all computers are the same), along with considerable software development and testing, is needed. The use of COTS in television and movie production is now commonplace; high-end editing runs on COTS-based platforms, with the big development effort in software, not hardware!

CRC

Cyclic Redundancy Check: an advanced checksum technique used to recognize errors in digital data. It uses a check value calculated for a data stream by feeding it through a shift register with feedback terms ‘EXORed’ back in. It performs the same function as a checksum but is considerably harder to fool.

A CRC can detect errors but not repair them, unlike an ECC. A CRC is attached to almost any burst of data that might possibly be corrupted. On disks, any error detected by a CRC is corrected by an ECC. ITU-R BT.601 and 709 data is subjected to CRCs; if an error is found, the data is concealed by repeating appropriate adjacent data. Ethernet packets also use CRCs.
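A minimal sketch of the shift-register-with-feedback idea, here using the common CRC-16/CCITT polynomial (0x1021) in Python:

    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:                           # feedback term...
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF   # ...EXORed back in
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc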

See also: Asynchronous, Checksum, ECC, EXOR

Cut (edit)

A transition at a frame boundary from one clip or shot to another. On videotape, a cut edit is performed by recording (dubbing) the new clip at the out-point of the previous clip. This contrasts with modern techniques such as FrameMagic, where true random access storage provides real-time access to all stored frames, so no re-recording is required; there is simply an instruction to read frames in a new order. Simple nonlinear disk systems may need to shuffle, or de-fragment, their recorded data in order to achieve the required frame-to-frame access for continuous real-time replay.

The editable frame boundaries may be restricted by video coding systems such as MPEG-2, MPEG-4, VC-1, HEVC, etc. Non-compressed component video and video that is compressed using I-frame only compression (e.g. DV, motion JPEG, motion JPEG 2000 or I-only MPEG-2) can be edited on any frame boundary without additional processing.

See also: Fragmentation, I-frame only