Tag : Editing

2:2 F1/F2

A film frame being transported as 2:2 (pSF) is placed into two consecutive video fields. F1/F2 denotes that the film frame is carried in field one and the following field two. This is commonly referred to as “normal dominance” or “perfect cadence”.

See also: pSF

2:2 F2/F1

A film frame being transported as 2:2 (pSF) is placed into two consecutive video fields. F2/F1 denotes that the film frame is carried in field two and the following field one. This is commonly referred to as “reverse dominance” or “reverse cadence”.

See also: pSF


BITC

Burnt-in Timecode. Timecode that is displayed on the video to which it refers. This is often recorded to provide precise frame references for those viewing on platforms not supplied with timecode readers.


C-mode

In videotape editing, operating in C-mode allows the shots on the source footage reels to be copied to the edit tape in the order they are recorded on the source tapes, rather than the order required for the finished program. So all the selected material from one source reel can be copied before loading another source reel – thus saving spooling and reel changing time. This assumes that none of the edits will need to be changed along the way!

The term lives on in the disk-based editing world where the editing disk store can randomly access the footage recorded from the source tapes. So C-mode editing on disks not only gives the advantages of faster and more convenient transfers from source tapes, but also allows the edits to be easily altered thanks to the random access of disk stores.


Clip

The name is taken from the film industry and refers to a segment of sequential frames made during the filming of a scene. In television terms a clip is the same but represents a segment of sequential video frames. In Quantel editing systems, a clip can be a single video segment or a series of video segments spliced together. A video clip can also be recorded with audio or have audio added to it.

Color Decision List (CDL)

The American Society of Cinematographers’ Color Decision List (ASC-CDL) is a proposed metadata interchange format for color correction, developed to pass primary grading decisions on to the next stage in the process. The CDL applies primary corrections of slope (gain), offset and power (gamma) per channel, plus overall saturation, expressed in a normalized manner.
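The CDL’s primary corrections can be sketched in a few lines. The following is a minimal illustration of the slope-offset-power transform and the saturation step, assuming values normalized to 0.0–1.0; it is not taken from any particular implementation.

```python
# Minimal illustration of the ASC-CDL primary corrections, assuming
# values normalized to 0.0-1.0; not taken from any particular
# implementation. Slope, offset and power are per-channel triples.

REC709_LUMA = (0.2126, 0.7152, 0.0722)   # luma weights used for saturation

def apply_cdl(rgb, slope, offset, power, saturation):
    # Per-channel: out = (in * slope + offset) ** power
    sop = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = max(v * s + o, 0.0)          # clamp negatives before the power step
        sop.append(v ** p)
    # Saturation is applied around the luma of the corrected pixel
    luma = sum(w * v for w, v in zip(REC709_LUMA, sop))
    return tuple(luma + saturation * (v - luma) for v in sop)

# Identity settings leave the pixel unchanged
print(apply_cdl((0.5, 0.25, 0.75), (1, 1, 1), (0, 0, 0), (1, 1, 1), 1.0))
```

Because every correction reduces to these ten numbers, the same look can be carried between grading systems as plain metadata.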

Color management

The control of color through a defined process. The idea is that all, or at least some, pictures, monitors or screens should portray the colors as they will appear when shown in their target medium. For example, if working on a DI project, the color management should ensure that the monitors show the colors exactly as an audience in a cinema will see them. Today that may be by use of a release print, in which case the color management has to adjust the monitoring for the film stock to be used. It could also be from a digital cinema distribution master (DCDM) and the color characteristics of that path taken into account as part of the color management.

Color timing (a.k.a. Grading)

The color of film exposed and processed in a laboratory is controlled by separately altering the amount of time that the red, blue and green lights are used to expose the film. This is referred to as color timing and its effect is to alter the contrast of R, G and B to create a required color balance.

In a lab, color timing is usually applied at the point where the edited negative is copied to the master interpositive, but can be done later at other points, if required. This contrasts with the digital intermediate process where color timing can be applied at any required time. In addition there is far more flexibility for color control with gamma, hue, luminance and saturation as well as secondary color correction. Also the results can be seen immediately, projected onto a large cinema screen if needed, and further adjusted if required. The images are shown with precise color settings so that the results appear as they will when output via film, or digitally.

See also: Grading, Timing


Compositing

Multi-layering for moving pictures. Modern composites often use many techniques together, such as painting, retouching, rotoscoping, keying/matting, digital effects and color correction as well as multi-layering to create complex animations and ‘opticals’ for promotions, title sequences and commercials as well as in program content. Besides the creative element there are other important applications for compositing equipment such as image repair, glass painting and wire removal, especially in motion pictures.

The quality of the finished work, and therefore the equipment, can be crucial especially where seamless results are demanded. For example, adding a foreground convincingly over a background – such as placing an actor into a scene – without any telltale blue edges or other signs that the scene is composed.

See also: Digital keying, Photo-real


Conform

Cutting together recorded material according to a prepared scheme such as a rough cut or EDL. EDLs can be used to directly control conforming in an online edit suite (auto-conforming). The time to conform varies widely: a tape-based suite takes much longer than the finished program’s running time, while a nonlinear online suite with true random access to all material reduces time by loading material in C-mode (see below); the conforming itself then takes only a moment and still allows any subsequent adjustments to be easily made.

Note that with in-server editing, material may be loaded onto the server as an independent task, rather than involving the edit equipment itself. This circumvents so-called loading time and further reduces the total time to produce the finished program. The same is also true of nonlinear edit systems with the bandwidth to support loading as a background task.

There are a number of established ways, or modes, of approaching video editing.

A-mode is a simple sequential method, starting from the beginning of the program and going through to the end.

B-mode uses all shots from one particular source before the next source is used (i.e. checkerboarding).

C-mode is similar to B-mode but works sequentially through the playback source rather than the record tape.

D-mode is similar to A-mode except that dissolves and other transitions are performed last, at the end of the conform.

E-mode is similar to C-mode but with transitions at the end, as in D-mode.

Note that the logic of the above modes is based on the characteristics of videotape. As tape is increasingly replaced by other recording media, network conform is becoming increasingly used. Part of the work of the Advanced Media Workflow Association (AMWA) is to provide new industry standards for this important area.

See also: Delta editing, In-server editing, Uncommitted editing

Website: www.aafassociation.org

Corner pinning

A technique for controlling the position and rotation of pictures in a DVE by dragging their corners to fit a background scene: for example, to fit a (DVE) picture into a frame hanging on a wall. Corner pinning was developed by Quantel as a practical alternative to precisely setting the many parameters needed to accurately position a picture in 3D space.

This works well with graphical user interfaces (GUI), e.g. pen and tablet. It can also be combined with the data derived from four-point image tracking to substitute objects in moving images, for example replacing, or obscuring, the license plate on a moving vehicle.
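As an illustration of what corner pinning computes, the sketch below solves for the 3x3 projective transform (homography) that maps the four source corners to the four dragged destination corners, so that every pixel – not just the corners – can be placed. This is a generic construction with illustrative names, not any product’s actual implementation.

```python
# Generic corner-pin math: solve for the 3x3 homography that maps four
# source corners to four dragged destination corners. Pure-Python
# solver; corner order is assumed TL, TR, BR, BL.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for the 8x8 system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def corner_pin(src, dst):
    """Homography H (3x3, bottom-right fixed at 1) sending src corners to dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    u, v, w = (r[0] * x + r[1] * y + r[2] for r in H)
    return u / w, v / w

# Pin the unit square into an arbitrary quadrilateral
H = corner_pin([(0, 0), (1, 0), (1, 1), (0, 1)],
               [(10, 5), (90, 12), (85, 70), (12, 60)])
```

Dragging a corner simply changes one destination point and re-solves, which is why the interaction feels direct compared with adjusting 3D position, rotation and perspective parameters separately.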

See also: Tracking

Cut (edit)

A transition at a frame boundary from one clip or shot to another. On videotape a cut edit is performed by recording (dubbing) the new clip at the out-point of the previous clip. This contrasts with modern techniques such as FrameMagic where true random access storage provides real-time access to all stored frames and so no re-recording is required; there is simply an instruction to read frames in a new order. Simple nonlinear disk systems may need to shuffle, or de-fragment their recorded data in order to achieve the required frame-to-frame access for continuous real-time replay.

The editable frame boundaries may be restricted by video coding systems such as MPEG-2, MPEG-4, VC-1, HEVC, etc. Non-compressed component video and video that is compressed using I-frame only compression (e.g. DV, motion JPEG, motion JPEG 2000 or I-only MPEG-2) can be edited on any frame boundary without additional processing.


See also: Fragmentation, I-frame only

Delta editing

A Quantel term for a form of server-based nonlinear editing where only the change information (the EDL along with any new video frames and audio created in the editing process) is sent back to the server from a connected edit station. For cuts-only editing, the stored result would simply be an EDL. If transitions are created, such as dissolves, wipes, DVE moves etc, these represent new frames that are processed by the editing workstation and sent to the server to be included as part of the EDL. Delta editing contrasts with dub editing or some NLE technology where every frame in the final edit has to be copied to a new file.

See also: FrameMagic

Digital intermediate (DI)

The DI refers to the process that accepts exposed film or digital footage and eventually delivers edited and graded masters, which may be film internegatives for production labs to generate large numbers of release prints, but today are usually a Digital Source Master (DSM). Initially the term arose to describe a digital version of the traditional chemical intermediate lab where film is graded, cut and copied from camera negative to several interpositives, then to many internegatives, which are distributed to the production labs to make the release prints for cinemas. These processes can create thousands of release prints from a single set of camera negatives.

Although the boundaries may vary, generally the DI ‘lab’ accepts data from digital movie cameras, and outputs an edited and graded DSM or, possibly, an internegative master for a whole or part of a feature. However, the operation and decision-making processes of the digital workflow greatly differ from the traditional film lab, not least because of the interactive nature of the digital operation.

In the DI lab, decisions become on-screen reality and are seen in full context as they are prepared – no waiting for the ‘chemical’ lab. Grading, dissolves, cuts and effects can be seen immediately – on a big screen if needed. The interactive process allows more creativity and gives complete confidence that the decisions work well. Also grading can take place after the footage is cut together, so the shots are seen, as graded, in context.

For those still using film, whole movies can be sent for output to the digital lab’s film recorder, exposing 1000ft reels at a time with no final grading required. For digital cinemas, the DSM – digital cinema’s equivalent of internegatives – enters a short process to create the Digital Cinema Package (DCP) for distribution.

See also: DCI

Discontinuous 2:3

It is common for electronic editing to be performed after telecine. When editing is performed on 2:3 material there is a potential for disruptions in the 2:3 sequence. These can be three-field sequences adjacent to other three-field sequences, and two-field sequences adjacent to other two-field sequences. There are also cases where single fields are present that are not part of any sequence (orphan fields). These disruptions caused by editing create a “broken 2:3 sequence”.
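The disruptions described above can be spotted mechanically. The sketch below is a hypothetical check, not from any real product: it takes per-film-frame field counts as recovered by sequence detection and flags positions where the 2:3 alternation breaks.

```python
# Hypothetical broken-cadence check (illustrative, not from any real
# product). Input: per-film-frame field counts from sequence detection.
# A clean 2:3 pulldown alternates 2, 3, 2, 3, ...; a count of 1 marks
# an orphan field.

def cadence_breaks(field_counts):
    """Return indices where the 2:3 alternation is disrupted."""
    breaks = []
    for i in range(1, len(field_counts)):
        prev, cur = field_counts[i - 1], field_counts[i]
        # adjacent 3:3 or 2:2 sequences, or an orphan field, break the cadence
        if prev == cur or 1 in (prev, cur):
            breaks.append(i)
    return breaks

print(cadence_breaks([2, 3, 2, 3, 2, 3]))   # clean cadence: []
print(cadence_breaks([2, 3, 3, 1, 2, 3]))   # 3:3 pair plus an orphan field
```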

Drop-frame timecode

Alteration of timecode to match the 1000/1001 speed offset of NTSC transmissions and many newer HD and UHD video formats used in ‘NTSC’ countries – including the USA, Canada and Japan. 525-line NTSC at a nominal 30 f/s actually runs at 29.97 f/s, and 720 and 1080-line HD as well as 2K and 4K UHD all include the 1000/1001 offset frequencies of the nominal 24, 30 and 60 Hz frame rates. So even 24 f/s movies run at 23.976 Hz.

With drop-frame timecode, the timecode is locked to the video and has to make up the 1-in-1001 shortfall. It does this by skipping two frame numbers every minute while the video itself remains continuous. So 10:35:59:29 advances to 10:36:00:02. At every ten-minute point, however, the jump is not made. This brings the timecode almost exactly into step with real time.

Timecode that does not use drop-frame is called non-drop-frame timecode. Confusion arises when the wrong one is used!
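The minute/ten-minute rule above can be captured in a short conversion from a continuous frame count to a drop-frame timecode label. This is a minimal sketch for 30000/1001 (29.97) material; the function name and interface are illustrative only.

```python
# Minimal sketch of 29.97 drop-frame numbering: frame numbers ;00 and
# ;01 are skipped at the start of every minute, except every tenth
# minute. Function name and interface are illustrative.

FPS = 30                                  # nominal timebase
FRAMES_PER_MIN = FPS * 60 - 2             # 1798 after the two skipped numbers
FRAMES_PER_10MIN = FPS * 600 - 2 * 9      # 17982: minute 0 of each block skips nothing

def df_timecode(frame_count):
    """Convert a continuous frame count to a drop-frame timecode string."""
    tens, rem = divmod(frame_count, FRAMES_PER_10MIN)
    if rem < FPS * 60:
        extra = 0                         # within the non-skipping minute
    else:
        extra = 2 * ((rem - FPS * 60) // FRAMES_PER_MIN + 1)
    count = frame_count + 18 * tens + extra
    ff = count % FPS
    ss = (count // FPS) % 60
    mm = (count // (FPS * 60)) % 60
    hh = count // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"   # ';' marks drop-frame

print(df_timecode(1799))    # 00:00:59;29
print(df_timecode(1800))    # 00:01:00;02 -- ;00 and ;01 skipped
print(df_timecode(17982))   # 00:10:00;00 -- no skip at the ten-minute point
```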

See also: 1000/1001

Website: www.dropframetimecode.org


DVE

Digital Video Effects refers to the manipulation of video to re-size images or objects in 2D and 3D space. Originally such facilities were supplied as separate machines (boxes) but now they are included as an integral part of systems such as vision mixers/switchers, editing systems and graphics equipment. The list of effects varies but may well include zoom and position, rotation, 3D perspective, page turns, picture bending and curls, blurs, and more. Picture quality and control also vary widely depending on the processing techniques used.

See also: Axis, Global


EDL

Edit Decision List. A text file describing the make-up of a program by defining the shots to be used, their in-points, durations and sequence. EDLs can be produced during an off-line editing session and passed to the on-line suite to control the conforming of the final edit. In order to work across a range of equipment there are some widely adopted standards such as CMX 3400 and 3600. News journalists working with integrated news production systems, such as Quantel’s Enterprise sQ news systems, can effectively create EDLs at their desktops.

EDLs have not kept pace with the continued development of post production. They do not carry information on DVEs, complex color correction, layering, keying etc., or other data about ownership, rights, etc. The development of AAF has filled these gaps. There has, however, been some extension of the original format to allow simple color corrections to be defined: the CDL combines the edit timings of the EDL with basic color controls for Offset, Gain, Gamma and Saturation for RGB data – 10 parameters in total.

See also: AAF, CDL, Conform, OMFI

Effects (digital)

Digital effects processes have replaced the traditional film optical effects. With their faster, more accurate interactive workflow, digital effects are widely used, typically in drama productions for movies and for television. With many effects being completed within the main post production workflow, it is only the more complex shots that are completed at separate workstations.

Digital effects can retain the full original image quality irrespective of the number of layers used, provided that the images remain in an uncompressed format. The results can usually be seen immediately, making the process interactive and so allowing quick refinements to complete the desired result. All this is available to be seen in context with the rest of the production, providing immediate checks for continuity. As digital effects are relatively quick and accessible, they now play a much larger role in productions.


Fettle

The fettle/curves function provides various YUV, RGB and HSL graphs from which individual channel curves (i.e. red, green, blue, yellow, cyan, hue and saturation) can be modified.

Each graph indicates the curve of input color (the horizontal axis) in relation to the output color (the vertical axis).

Brightness, contrast, hue, etc. can be controlled by independently adjusting the interaction of YUV, RGB or HSL using the 27 grids provided. Multiple control nodes can be added and the spline curve dragged to affect multiple colors within a single secondary correction.
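To illustrate what such a channel curve does, the sketch below maps an input level through a few control nodes. Piecewise-linear interpolation stands in for the spline used by real curve tools, and all names are illustrative.

```python
# Piecewise-linear stand-in for a spline channel curve (illustrative
# names): sorted (input, output) control nodes map an input level to
# an output level, as on the horizontal/vertical axes of each graph.
import bisect

def apply_curve(value, nodes):
    xs = [x for x, _ in nodes]
    ys = [y for _, y in nodes]
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, value)
    t = (value - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Identity curve with one node dragged up to lift the midtones
curve = [(0.0, 0.0), (0.5, 0.65), (1.0, 1.0)]
print(apply_curve(0.25, curve))   # 0.325 -- shadows lifted toward the node
```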


Global (control)

The top level of control in a multi-channel DVE system. A number of objects (channels) can be controlled at one time, for example to alter their opacity or to move them all together relative to a global axis, one which may be quite separate from the objects themselves. This way the viewing point of all the assembled objects can be changed. For example, a cube assembled from six channels could be moved in 3D space as a single action from a global control.

See also: Axis

Grading (a.k.a. color timing)

Grading is the process of adjusting the color of a clip to get the best out of the material or to match shots perhaps taken at different times or in different lighting conditions. With film, grading was traditionally performed when going from internegative to print film by controlling the exposure of the film. In television it was traditionally done off the telecine for commercials or tape-to-tape for longform programs. Either way, both processes were, by their nature, linear.

The advent of non-linear grading systems (such as Quantel’s Pablo Rio) has changed the rules for color grading and correction. While there is still a requirement for an initial technical scan for film-originated material, from this point on grading can – and often does – happen at multiple stages in the post production process. For example, color correcting individual layers within multilayer composite shots (which may be shot under different lighting conditions) to ensure that the result is harmonious within itself. In addition, non-linear editing means that scene-to-scene comparisons and corrections can be made as the edit unfolds.

This eases the final grading process when the finished work is reviewed interactively with the director/client.

Secondary color correction is aimed at controlling a particular color or a narrow range of colors – such as those on a car or product. Here typically the hue, gain and saturation can be changed. There are also several methods available for defining the object, area or ‘window’ that requires color correction such as using wipe-pattern shapes, drawing an electronic mask by hand or a combination of automatic and by-hand methods. Some of the most sophisticated tools are provided by media workstations such as Quantel’s Pablo Rio.

See also: Film scanner, Telecine


GUI

Graphical User Interface. A means of controlling or operating a system through the use of interactive graphics displayed on a screen. Examples in the computer world are the Apple Macintosh and Microsoft Windows, both designed for general-purpose use and usually operated with a mouse as the pointing device.

In 1981 Quantel introduced Paintbox with its on-screen menu system operated from a pressure sensitive pen and touch tablet. This purpose-built control has been further developed to cover a wide range of operations including DVEs, editing, VTR control, color grading and audio, and today is applied to the whole range of Quantel products. Besides its success in offering fast and effective control, the GUI also enables easy updates to accommodate new facilities.

History (editing)

A Quantel term describing the ability to instantly recall original material in uncommitted form along with the associated editing data. This allows any late changes to be made quickly and easily. For example, a shadow could be softened or moved in a multi-layered commercial without having to manually find the original material or recalling the set-up data. Archiving a program containing History means that the operator no longer needs to remember to save packs or set-ups associated with the job as all the material and set-ups will be automatically included within the history archive.

Illegal colors

Colors that lie outside the limits, or gamut, of a particular defined color space. These can be generated when transferring images from one color space to another, as they all have different boundaries, or as the result of color processing. For example, removing the luminance from a high intensity blue or adding luminance to a strong yellow in a paint system may well send a subsequent video signal too high or low, producing at least inferior results and maybe causing technical problems. Out-of-gamut detectors can be used to warn of possible problems and correction is also available. Some broadcasters reject material with illegal colors.

In-server editing

Editing at a workstation which directly edits material stored in a server. For this the workstation does not need large-scale video and audio storage but depends totally on the server store. The arrangement allows background loading of new material to the server, via several ports if required, and playout of finished results, while avoiding any need to duplicate storage or transfer material to/from the workstation and allowing any number of connected workstations to share work. The efficiency of in-server editing allows fast throughput and is especially attractive to news as well as to post production where jobs can be instantly available in rooms, or moved between rooms.

This depends on using a server that can act as an edit store and perform reliable video replay and record functions. It also requires a powerful interface to the edit workstation.

Quantel’s edit workstations with sQ servers operate this way. The workstation/server connection is by Gigabit Ethernet.


See also: Delta editing


Keyframe

A set of parameters defining a point in a transition, e.g. of a DVE effect. For example a keyframe may define a picture size, position and rotation. Any digital effect must have a minimum of two keyframes, start and finish, although complex moves will use more, maybe hundreds.

Increasingly, more parameters are becoming ‘keyframeable’, i.e. they can be programmed to transition between two, or more, states. Examples are color correction to make a steady change of color, and keyer settings, perhaps to make an object slowly appear or disappear.

See also: DVE


Keying

The process of selectively overlaying an area of one picture (or clip) onto another. If the switch between the overlaid and background pictures is simply ‘hard’, this can lead to jagged edges of the overlaid, or keyed, pictures. They are usually subjected to further processing to produce ramped, slightly soft, key edges to give a cleaner, more convincing result. The whole technology of deriving key signals from image data, and of color correcting keyed image edges, has greatly expanded through the use of digital technology, so that many operations may be used together, e.g. softening the key, color correcting key spill areas, and much more.

See also: Chroma keying, Digital keying, Linear keying


Layering

A collection, or ‘pack’, of video clip layers can be assembled to form a composite layered clip. Layers may be background video or foreground video with their associated matte run. The ability to compose many layers simultaneously means the result can be seen as it is composed and adjustments made as necessary.

Linear (editing)

The process of editing footage that can only be accessed or played in the sequence it was recorded. Tape and film are linear: they have to be spooled for access to any particular material and can only play pictures in the order they were recorded.

With spooling, jogging and pre-rolls – the so-called ‘mechanical considerations’ – absorbing upwards of 40 percent of the time in a VTR edit suite, linear editing is slow for everyday work. The imposition of having to record items to an edit master tape in sequence limits flexibility for later adjustments: e.g. inserting shots between existing material may involve either starting the job again or re-dubbing the complete piece. For simple changes, however, linear suites are still fast for tape-based material, but random access storage, solid state chips or hard disk drives, provides a far faster and more flexible platform for editing.

See also: C-mode, Digital disk recorder, True random access

Linear (keying)

In linear keying the ratio of foreground to background pictures at any point on the screen is determined on a linear scale by the level of the key (control) signal. This form of keying provides the best possible control of key edge detail and anti-aliasing. It is also essential for the realistic keying of semi-transparent effects such as transparent shadows, through-window shots and partial reflections.
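The proportional mix described above can be written as a one-line sketch, here for a single normalized sample (names illustrative):

```python
# One-line sketch of a linear key for a single normalized sample
# (illustrative names): the key level directly sets the mix ratio,
# which is what allows transparent shadows and through-window shots.

def linear_key(fg, bg, key):
    """key = 1.0 gives pure foreground, key = 0.0 pure background."""
    return key * fg + (1.0 - key) * bg

print(linear_key(0.8, 0.2, 1.0))   # opaque foreground: 0.8
print(linear_key(0.8, 0.2, 0.5))   # 50% transparency: 0.5
```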

See also: Keying


LTC

Longitudinal Timecode. Traditionally, timecode has been recorded along a linear track on videotape. It was recorded and read by a static head so it could easily be read when the tape was moving forwards or backwards, but nothing could be read during a freeze frame – when VITC, timecode recorded with the picture material, was still working. Today, with tape use falling, actual linear TC is not so often used. However timecode is still in wide use to identify and access video frames.

See also: VITC


Master

The master of an edited program holds the material that is used as the source for making all deliverables and other versions (for language, subtitles etc.). Making a master of sufficiently high quality to fulfill the needs of all target customers ensures that the deliverables will be good.

For international distribution the use of 1920 x 1080 24P (the same frame rate as movies) is often regarded as the best way to ensure good quality deliverables for HD and SD requirements. If it is a high-cost production aimed at a world market, a 4K UHD master may be required. Also, 24P can be a good frame rate for drama, but many other genres, including sport, require higher rates of 50 or 60 Hz, interlaced or progressive.

Supplying the best to all media platforms now often requires more than the traditional master can provide. A better form of master may be an uncommitted one, where all the original source material and all the tools used and their settings, are available so that any aspect of editing and finishing can be revisited to make the right deliverables for everyone.

See also: Deliverables

Non-additive mix

A mix of two pictures which is controlled by their luminance levels relative to each other, as well as a mix value K (between 0 and 1): e.g. the position of a switcher lever arm. A and B sources are scaled by factors K and 1-K but the output signal is switched to that which has the greatest instantaneous product of the scaling and the luminance values. The output of any pixel is either signal A or B but not a mix of each. So if K = 0.5 (lever arm moved half way), in areas where picture A is brighter than B, then only A will be seen. Thus two clips of single subjects shot against a black background can be placed in one picture.
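Following the description above, a per-pixel NAM can be sketched as a switch to whichever scaled source has the greater product of scaling and luminance. This is a generic illustration with values normalized to 0.0–1.0, not any switcher’s actual implementation.

```python
# Generic per-pixel non-additive mix sketch (illustrative, values
# normalized 0.0-1.0): the output is the scaled source with the
# greater instantaneous product, never a blend of the two.

def non_additive_mix(a, b, k):
    """a, b: pixel luminance of sources A and B; k: lever-arm position."""
    return max(k * a, (1.0 - k) * b)

# With the lever arm half way (k = 0.5) the brighter source wins
print(non_additive_mix(0.9, 0.3, 0.5))   # source A selected: 0.45
print(non_additive_mix(0.1, 0.6, 0.5))   # source B selected: 0.3
```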

The term has also come to encompass some of the more exotic types of picture mixing available today: for example to describe a mix that could add smoke to a foreground picture – perhaps better termed an additive mix.

See also: Digital mixing

Non drop-frame timecode

Timecode that does not use drop-frame and always counts 30 frames per second. The timecode running time will not exactly match real time unless the video actually runs at exactly 30 f/s. The mismatch is about 1:1000, amounting to an 18-frame overrun every 10 minutes. This applies where 59.94, 29.97 or 23.976 Hz frame rates are used in video systems, the practice in many ex-NTSC countries including the USA and Canada.

See also: 1000/1001, Drop-frame timecode

Nonlinear (editing)

Nonlinear means not linear: the recording medium is not tape and editing can be performed in any order, not necessarily the sequence of the program. It describes editing with quick access to source clips and recording space, usually using computer disks to store footage. This removes the spooling and pre-rolls of VTR operations, greatly increasing the speed of work. Yet greater speed and flexibility are possible with realtime random access to any frame (true random access).

See also: FrameMagic, Linear (editing), On-line (editing), True random access


NRCS

Newsroom Computer System. The name sprang up when the only computer in a TV news area was used for storing and editing the text available from news services. It also created the running order for the bulletin and was interfaced to many other devices around the production studio. Today the NRCS lives on… but it is no longer the only computer around the newsroom! All journalists can access the NRCS, which stores all the video, audio and text needed for them to edit their story at their workstation, which could be anywhere. Rundowns, prompter scripts and more tasks are also undertaken by the modern NRCS, which may link to more systems, such as video editing.

See also: MOS

Offline (editing)

A decision-making process using low-cost equipment to produce an EDL or a rough cut which can then be conformed or referred to in a high quality online suite – so reducing decision-making time in the more expensive online environment. Most offline suites enable shot selection and the defining of basic transitions such as cuts and dissolves which are carried by EDLs. AAF provides an open standard for transporting a much wider range of decisions, including DVE, color corrections, as well as other metadata, between systems from different manufacturers.

With on-line video quality now available on low-cost computers there is a choice of skipping offline and editing broadcast quality footage very cost-effectively, or continuing to use off-line, depending on the requirements of the project and budgets.

Online (editing)

Production of the complete, final edit performed at full program quality; the buck stops here! Because the quality is higher than in offline editing, time costs more, but the difference has narrowed as the cost of equipment has reduced. Preparation in an offline suite will help save time and money in the online. To produce the finished edit, online has to include a wide range of tools, offer flexibility to try ideas and accommodate late changes, and work fast to maintain the creative flow and to handle pressured situations.


Pack

A set of clips, mattes, DVE settings, color corrector, keyer, etc., that are used together to make a video layer of a composited picture. Quantel equipment allows packs to be saved and archived so they can be used later for any re-works.


Photo-real

Video and film effects that are constructed in such a way that they look totally real, and not synthetic, are referred to as photo-real effects. This use of effects has rapidly increased and so changed the way many productions are shot and post produced – leading to lower budgets and better looking results.

Achieving photo-real results requires careful planning, from before the shoot and computer imagery through to compositing in post production. Excellence in keying, so that there are no telltale blue-screen halos or color spill, is among the many techniques required for successful results.

See also: Compositing, Digital keying


pSF

A Progressive Segmented Frame (pSF) format splits a progressive image into two sequential fields. It is identical to 2:2 in terms of motion profile.
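The segmentation is simple to illustrate: odd lines go to one field, even lines to the other, and reassembly is lossless. A sketch, with frames modeled as lists of lines and illustrative names:

```python
# Illustrative pSF segmentation: a progressive frame, modeled as a
# list of lines, is split into two sequential fields and reassembled
# losslessly -- hence a motion profile identical to 2:2.

def to_psf(frame):
    return frame[0::2], frame[1::2]    # field one: odd lines; field two: even lines

def from_psf(field1, field2):
    frame = []
    for a, b in zip(field1, field2):
        frame.extend([a, b])
    return frame

lines = ["line1", "line2", "line3", "line4"]
f1, f2 = to_psf(lines)
print(f1, f2)                          # ['line1', 'line3'] ['line2', 'line4']
assert from_psf(f1, f2) == lines       # lossless round trip
```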


Rotoscoping

The practice of using frames of live footage as reference for painting animated sequences. Today, the meaning has extended to cover a whole range of manual retouching techniques. While the painting will always depend on the skill of the artist, modern graphics equipment integrated with a video disk or RAM store makes rotoscoping, or any graphical treatment of video frames, relatively quick and easy. This has led to many new designs and looks appearing on television as well as more mundane practices such as image repair.

Scrub (audio)

Replay of audio tracks at a speed and pitch corresponding to jog speed – as heard with analog audio tape ‘scrubbing’ backwards and forwards past an audio replay head. This feature, which is natural for analog fixed-head recorders, may be provided on a digital system recording on disks to help set up cues.

Secondary color correction

Primary color correction, or grading, is applied to the whole image. Secondary correction is applied only to selected areas of the image, the area being defined by a pattern generator (e.g. a circle or rectangle), by curved lines or derived from the object itself using chromakey techniques or any combination of all these. This way, for example, the color of a car in a clip could be changed from say, red to green, while the rest of every image is unchanged.

Sequence Detection

This is the act of finding film frame boundaries. For “perfect” pSF or 2:3 sequences this will produce a regular pattern of frames; for “non-perfect” sequences the pattern will not be regular and might have discontinuities, at edit points for example.
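
The logic can be sketched as follows. For simplicity the sketch assumes each field is already labelled with the film frame it came from; in practice the labels would be inferred by comparing field content:

```python
# Sketch: finding film-frame boundaries in a field sequence. A "perfect"
# 2:3 sequence yields a regular 2,3,2,3... pattern of fields per film
# frame; edits break the pattern.

def field_counts(fields):
    """Group consecutive identical labels; return fields per film frame."""
    counts = []
    for label in fields:
        if counts and label == prev:
            counts[-1] += 1       # same film frame continues
        else:
            counts.append(1)      # film frame boundary found
        prev = label
    return counts

def is_perfect_23(counts):
    """True if the counts alternate 2,3,2,3... (or 3,2,3,2...)."""
    return all(c in (2, 3) for c in counts) and \
           all(a != b for a, b in zip(counts, counts[1:]))
```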

Server (file)

A storage system that provides data files to all connected users of a local network. Typically the file server is a computer with large disk storage that can store or send files as requested by the other connected (client) computers, the file server often appearing as another disk on their systems.

The data files are typically only a few kB in size and are expected to be delivered within moments of the request.

Server (video)

A storage system that provides audio and video storage for a network of clients. Those used in professional and broadcast applications are based on hard disk storage. Aside from those used for video on demand (VOD), video servers are applied in three areas of television operations: transmission, post production and news. Compared to general-purpose file servers, video servers must handle far more data, files are larger and must be continuously delivered.

There is no general specification for video servers and so the performance between models varies greatly according to storage capacity, number of realtime video channels, protection level (RAID), compression codec and ratio, and speed of access to stored material – the latter having a profound influence.

Store sizes are large, typically from about 500 GB up to many terabytes. Operation depends on connected devices: edit suites, automation systems, secondary servers, etc. The effectiveness of the server’s remote control and video networking is vital to success.
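
Back-of-envelope sizing shows why video servers differ from file servers. The figures below are illustrative assumptions, not a specification:

```python
# Sketch: rough video server sizing - storage needed for a given codec
# bitrate and duration, and the sustained data rate for several
# simultaneous realtime channels.

def storage_gb(bitrate_mbps, hours):
    """Decimal gigabytes needed for `hours` of video at `bitrate_mbps`."""
    return bitrate_mbps * 1e6 * hours * 3600 / 8 / 1e9

def aggregate_mbps(bitrate_mbps, channels):
    """Sustained data rate the store must deliver for realtime playout."""
    return bitrate_mbps * channels
```

For example, one hour at a 50 Mb/s codec rate occupies 22.5 GB, and four realtime channels at that rate require a continuous 200 Mb/s from the store – delivery that, unlike a file server's, cannot pause.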

Stencil

A keying signal used in graphics systems – such as the Quantel Paintbox that is built into all the company’s editing systems. It can be drawn, derived from picture information, or both. It can be used to define the area of an object, to obscure part or all of an object by making it transparent or partially transparent, to control the application of paint… and more.
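
At its core this is per-pixel mixing under key control, which a minimal sketch (an illustration, not Quantel's implementation) makes concrete:

```python
# Sketch: a stencil as a per-pixel key controlling transparency.
# Values are 0.0-1.0: 1.0 keeps the foreground (e.g. paint), 0.0 shows
# the background, in-between values give partial transparency.

def apply_stencil(fore, back, stencil):
    """Mix foreground over background under stencil control, per pixel."""
    return [f * s + b * (1.0 - s)
            for f, b, s in zip(fore, back, stencil)]
```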

Timeline

A graphical representation of events, such as those that occur in editing, compositing, grading or other processes, usually as a horizontal line. This works well with disk-based and solid-state based operations providing instant access to any part of the process and, hopefully, to all the footage, decisions, associated tools and their settings.

Tracking (image)

Following a defined point, or points, in the pictures of a clip. Initially this was performed by hand, using a DVE, but it was laborious, difficult and limited to pixel accuracy. Now image tracking is widely used, thanks to the availability of automatic point tracking operating to sub-pixel accuracy. The tracking data can be applied to control DVE picture moves for such applications as the removal of film weave, replacing 3D objects in moving video, wire removal, etc.

Advanced multiple point tracking is sometimes used to analyze images in 3D, so allowing a whole raft of computer-generated material to be move-matched for compositing into live scenes, blurring the boundaries of live and synthetic imagery.
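
The core of automatic point tracking can be sketched as block matching: a small reference patch from one frame is searched for in the next by minimising the sum of absolute differences (SAD). This sketch stops at whole-pixel accuracy; real trackers refine the winning offset to sub-pixel accuracy by interpolating around it:

```python
# Sketch: point tracking by block matching. Frames are 2D lists of
# brightness values; the patch is the reference block being followed.

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def track(frame, patch, y0, x0, search):
    """Find `patch` near (y0, x0) in `frame`; return the (dy, dx) of
    the best match within +/- `search` pixels."""
    ph, pw = len(patch), len(patch[0])
    best, best_off = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + ph > len(frame) or x + pw > len(frame[0]):
                continue  # candidate window falls outside the frame
            window = [row[x:x + pw] for row in frame[y:y + ph]]
            score = sad(window, patch)
            if best is None or score < best:
                best, best_off = score, (dy, dx)
    return best_off
```

Applied frame after frame, the accumulated offsets give the motion path used to drive a DVE move or a composite.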

See also: Corner pinning, Sub-pixel

Uncommitted editing

Editing where the decisions are made and the edits completed but any can still easily be changed. This is possible in an edit suite with FrameMagic that includes true random access editing – where the edits need only comprise the original footage and the edit instructions. Nothing is re-recorded so nothing is committed. This way, decisions about any aspect of the edit can be changed at any point during the session, regardless of where the changes are required. Where new frames are generated, such as in mixes, dissolves and compositing, all the tools and their settings are available – preferably on the edit timeline.

See also: True random access

VITC

Vertical Interval Timecode (pronounced ‘vitsy’). Timecode information in digital form, added into the vertical blanking of a TV signal. This can be read by the video heads from videotape at any time pictures are displayed, even during jogging and freeze, but not during spooling. This effectively complements LTC, ensuring timecode can be read at any time.
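
Once read, timecode is just frame arithmetic. A minimal sketch (non-drop-frame, with the frame rate as an assumed parameter; the actual VITC bit layout is defined by the SMPTE timecode standard) converts between hh:mm:ss:ff and a running frame count:

```python
# Sketch: timecode arithmetic of the kind a VITC reader feeds.
# Non-drop-frame only; `fps` is an integer frame rate (25 here).

def tc_to_frames(tc, fps=25):
    """'hh:mm:ss:ff' -> total frame count since 00:00:00:00."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(n, fps=25):
    """Total frame count -> 'hh:mm:ss:ff'."""
    ff = n % fps
    ss = (n // fps) % 60
    mm = (n // (fps * 60)) % 60
    hh = n // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```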

See also: LTC