Category Archives: F


File Area Network (FAN)

File Area Networks are a shared storage concept that stores shared files in multiple locations. However, the user is not aware of where the files are located; they are simply accessed as if they were local or single-site storage. The IT industry is actively pursuing this concept in order to provide organizations with strategically central data sources that are geographically agnostic, which can lead to considerable resource savings.

FANs also have potentially broad applications in post production and broadcast. It is not impossible to imagine a post house with a ‘front door’ in the city where clients can come and watch their job progress, driven by a creative in his country retreat – perhaps with the storage itself in a third location. Broadcasters with multiple sites (perhaps local stations or subsidiaries) are also looking at FANs with great interest.



Fettle

The fettle/curves function provides various YUV, RGB and HSL graphs from which individual channel curves (i.e. red, green, blue, yellow, cyan, hue and saturation) can be modified.

Each graph indicates the curve of input colour (the horizontal axis) in relation to the output colour (the vertical axis).

Control brightness, contrast, hue, etc. by independently adjusting the interaction of YUV, RGB or HSL using the 27 grids provided. Multiple control nodes can be added and the spline curve dragged to affect multiple colours within a single secondary.
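The node-and-curve mechanism can be sketched in code. This is a hypothetical illustration, not Quantel’s implementation: control nodes are (input, output) pairs, and plain linear interpolation stands in for the smooth spline a grading tool would use.

```python
# Sketch of a channel curve: control nodes map input levels to output
# levels; values between nodes are interpolated. A real grading tool
# uses smooth splines; linear interpolation stands in for that here.
from bisect import bisect_right

def apply_curve(value, nodes):
    """nodes: sorted (input, output) pairs defining the curve."""
    xs = [n[0] for n in nodes]
    ys = [n[1] for n in nodes]
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, value) - 1
    t = (value - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# Identity curve with the middle node dragged up: mid-tones brightened.
curve = [(0, 0), (128, 160), (255, 255)]
print(apply_curve(64, curve))   # input 64 lifted to 80.0
```

Dragging the node at (128, 160) affects every value on the segments either side of it, which is why adding more nodes gives finer control over a narrower range of colours.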


Fibre Channel (FC)

An integrated set of standards developed by ANSI designed to improve data speeds between workstations, supercomputers, storage devices and displays while providing a single standard for networking storage and data transfer. It can be used point-to-point, switched or in an arbitrated loop (FC-AL) connecting up to 126 devices.

Planned in 1997 to run on fiber-optic or twisted-pair cable at an initial data rate of 1 Gb/s, it has been consistently upgraded to make 2, 4, 8 and 14 Gb/s (14GFC) available. Expect both 28 and 4 x 28 Gb/s in 2015. There is a road map sketched to 2028 with the possibility of about an 8-fold further increase in speed. These are nominal wire speeds, but 8b/10b encoding is used to improve transmission characteristics, provide more accuracy and better error handling. With every 8-bit data byte for transmission converted into a 10-bit Transmission Character, the useful data rate is reduced by 20 percent.
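The 20 percent overhead is easy to verify. The sketch below applies only to the 8b/10b-coded rates described here (the fastest FC generations later moved to more efficient line codes, which are not modeled):

```python
# Sketch: effective Fibre Channel payload rate under 8b/10b encoding.
# Each 8-bit byte is carried as a 10-bit Transmission Character, so
# only 8/10 of the nominal wire rate carries user data.

def payload_rate_gbps(wire_rate_gbps: float) -> float:
    """Useful data rate for an 8b/10b-encoded link."""
    return wire_rate_gbps * 8 / 10

for speed in (1, 2, 4, 8):
    print(f"{speed} Gb/s wire rate -> {payload_rate_gbps(speed)} Gb/s payload")
```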

Because of its close association with disk drives, its TV application is mostly, but not always, in the creation of storage networking. It can interface with the SCSI disk interface, which is key to its operation in storage networking such as SAN.

See also: SAN


File-based (media)

‘File-based’ generally refers to storing media in files rather than as continuous streams like video tape. The term is more widely used to describe IT-based environments for handling and processing digital media. Media production is increasingly becoming file-based.

File transfer

A discontinuous transfer process which treats each transferred item as a single block, neither divisible into smaller, independent elements nor part of a larger whole. As the transfer process has a recognizable beginning and end (unlike streaming) it is possible for the complete transfer to be checked and any errors corrected. This is not possible with a streaming process.

File transfer requires material to be complete and clearly identifiable. When handling time-based material, such as video and audio, the complete file has to be available before transfer can start. If the whole clip is a single file, this cannot be transferred until all the material is complete. However, if the clip is sub-divided, for example into frames, the transfer can start immediately after the first frame is completed. This becomes important in time-sensitive applications such as the live editing of highlights while the event is still taking place.
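A minimal sketch of the idea, with the ‘transfer’ simulated in memory and hypothetical helper names: each frame travels as its own checked unit, so verification (and onward use) can begin before the whole clip exists.

```python
# Sketch: transferring a clip frame-by-frame, each frame verified on
# arrival. Frame payloads and the transfer itself are simulated in
# memory; the names and sizes are illustrative only.
import hashlib

def send_frame(payload: bytes) -> tuple[bytes, str]:
    """Simulate sending one frame: the data plus its checksum."""
    return payload, hashlib.sha256(payload).hexdigest()

def receive_frame(payload: bytes, checksum: str) -> bytes:
    """Verify a received frame; a real system would re-request on failure."""
    if hashlib.sha256(payload).hexdigest() != checksum:
        raise IOError("frame corrupted in transit")
    return payload

frames = [bytes([i]) * 16 for i in range(3)]   # stand-in frame data
received = [receive_frame(*send_frame(f)) for f in frames]
assert received == frames
```

Because the process has a defined beginning and end for every frame, an error in one frame can be caught and that frame re-sent, which is exactly what a continuous stream cannot do.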

See also: Streaming

Film formats

Unlike pre-HD television, which had only two image formats, 525/60I and 625/50I, 35 mm film has many. Of these the most common are Full Frame, which occupies the largest possible area of the film, Academy and Cinemascope. The scanning for these is defined in the DPX file specification as follows:

Scanning resolution   Full frame    Academy       Cinemascope
4K                    4096 x 3112   3656 x 2664   3656 x 3112
2K                    2048 x 1556   1828 x 1332   1828 x 1556
1K                    1024 x 778    914 x 666     914 x 778
Aspect Ratio          1.316         1.372         1.175

These scan sizes generally represent the valid image size within the total frame size indicated by full frame. It is generally considered that all scanning is done at full frame size as this avoids the complexity of adjusting the scanner optics or raster size with risks associated with repeatability and stability. Although these digital image sizes came about as formats for scanning film, new digital cinematography cameras are also using them, exactly or nearly. In the file-based world of DI the exact size does not matter, as long as it’s managed correctly and, most importantly, able to produce high quality output for release prints and digital cinema – where the DCI specifies exact sizes.

2K has 3.19 Mpixels and a 1.316:1 aspect ratio. It is used for digitizing full frame 35mm motion picture film sampled in 4:4:4 RGB color space – making each image 12 MB. Sampling is usually at 10-bit resolution, linear or log depending on the application, and the images are progressively scanned.

Note that the sampling includes 20 lines of black between frames because of the use of a full frame camera aperture. Thus the actual ‘active’ picture area is 2048 x 1536, has a 4:3 aspect ratio and is exactly QXGA computer resolution. Removing the aperture creates an ‘open gate’ format which may have no black bar between frames – then all 1556 lines carry picture information.

4K is a x4-area version of 2K, with 12.76 Mpixels. Once again the format includes ‘black’ lines – 40 this time, so the actual full frame image is 4096 x 3072. Historically many aspects of handling 4K have been problematic – not least due to the large data rate (over 1.1 GB/s) and the amount of data produced – about 4 TB/h. However modern technologies applied all the way from scene to screen have now made 4K far more readily accessible. For some time, 4K has been the format of choice for some complex effects shots where it was felt these needed extra quality (over 2K) to still look good after all the necessary processes are completed, specifically where the finished shots are inter-cut with the original negative. Now 4K is increasingly being used for whole movies.
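The sizes and rates quoted here follow from simple arithmetic, sketched below assuming 10-bit 4:4:4 RGB sampling (three channels at 10 bits per sample):

```python
# Sketch: storage arithmetic behind the figures quoted above, assuming
# 10-bit 4:4:4 RGB sampling (three channels, 10 bits each).

def frame_bytes(width: int, height: int, bits: int = 10, channels: int = 3) -> float:
    """Bytes needed for one uncompressed frame."""
    return width * height * channels * bits / 8

print(f"2K frame: {frame_bytes(2048, 1556) / 1e6:.2f} MB")   # ~12 MB
print(f"4K frame: {frame_bytes(4096, 3112) / 1e6:.2f} MB")
rate = frame_bytes(4096, 3112) * 24        # bytes per second at 24 f/s
print(f"4K at 24 f/s: {rate / 1e9:.2f} GB/s, {rate * 3600 / 1e12:.1f} TB/h")
```

Running this reproduces the roughly 12 MB per 2K frame, over 1.1 GB/s and about 4 TB/h quoted for full frame 4K.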

DCI 4K and 2K sizes for exhibition in digital cinemas are not the same as the DPX values above. They are 2K (2048 x 1080) and 4K (4096 x 2160), a 1.896:1 aspect ratio, quite close to the 1.85:1 widescreen (‘flat’) presentation format.

In addition, different camera apertures can be used to shoot at different aspect ratios. All these (below) are ‘four perf’ (a measure of the length of film used) and so all consume the same amount of stock per frame. Note that scanners (and telecines) typically change scan size to maintain full 2K or 4K images regardless of aspect ratio. It is no longer normal for work to be scanned at a fixed full frame size.

Format        Width (mm)   Height (mm)
Full Frame    24.92        18.67

There are many more 35 mm formats in use but in general the use of film is rapidly diminishing as digital alternatives have become easier and more cost-effective. Still, some people want to produce the ‘film’ look.

For lower-budget movies Super 16 is sometimes used.

See also: MTF

Film recorder (digits to film)

Equipment which inputs digital images and outputs exposed negative film. For this, CRT, laser-based and D-ILA (LCOS) imaging-device recorders expose high-resolution images onto film. Here there is some emphasis on speed, taking a few seconds per frame, as well as image quality. Laser-based models can expose a 35mm frame in about 2s at 2K (4s at 4K), CRT-based recorders in 1s (3.5s), and D-ILA imagers can expose 3 f/s at 2K (4K not known).

While the use of film for acquisition and presentation has greatly diminished, in one respect the workflow has reversed as film recorders are needed to create film copies of digital motion pictures for presentation in the many thousands of remaining film-based cinemas. For archiving, many believe that film has a far greater shelf life than digital media, is immune to the digital world’s technology creep and obsolescence, and proven to survive well for many decades.


Film scanner (film to digits)

A general term for a device that creates a digital representation of film for direct use in digital television or for digital intermediate work. For television, film scanners replaced traditional telecines that had to work in realtime. For use in digital film production, they should capture the full detail of the film so that, when transferred back to film, the film-digital-film chain can appear as an essentially lossless process. For this, film scanners are able to operate at greater than HD resolutions (1920 x 1080); 4K is now predominant in the movie business. The output is data files rather than the digital video that would be expected from the old traditional telecines.

For movie production the output must retain as much of the negative’s latitude as possible, so the material is transferred with a best-light pass and recorded by linear electronic light sensors, typically of CMOS technology, to a high accuracy – at least 13 bits (describing 8192 possible levels). Using a LUT, this can be converted into 10-bit log data which holds as much of the useful information but does not ‘waste’ data by assigning more digital levels to the brightest areas of pictures than the eye can distinguish.
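One way to picture the linear-to-log conversion is to build the LUT directly. The log law below is a simple normalized base-10 curve chosen purely for illustration – real scanners use printing-density transfer functions such as the Cineon curve – but it shows how 8192 linear levels fold into 1024 log code values:

```python
# Sketch: a LUT mapping 13-bit linear scanner output to 10-bit log
# code values. The log law here (normalized base-10 log) is
# illustrative only; real film work uses printing-density curves.
import math

LINEAR_LEVELS = 2 ** 13   # 8192 linear input levels
LOG_LEVELS = 2 ** 10      # 1024 log output code values

lut = []
for code in range(LINEAR_LEVELS):
    # Normalize; clamp the zero entry to avoid log(0).
    lin = max(code, 1) / (LINEAR_LEVELS - 1)
    logval = 1 + math.log10(lin) / math.log10(LINEAR_LEVELS - 1)  # 0..1
    lut.append(round(max(0.0, logval) * (LOG_LEVELS - 1)))

print(lut[0], lut[LINEAR_LEVELS // 2], lut[-1])   # dark, mid, white codes
```

Note how the top half of the linear range is squeezed into relatively few log codes, while shadow detail keeps proportionally more of them.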

Note that this is different from using a telecine to transfer film to video where, normally, the film is graded as the transfer takes place. Additional latitude is not required in this digital state so 10- or 8-bit linear coding is sufficient.


FIMS

Framework for Interoperable Media Services (FIMS) aims to define standards which enable building media systems using a Service Orientated Architecture (SOA), to provide flexibility, efficiency and scalability that have not been possible with traditional architectures. FIMS is a task force jointly managed by AMWA and the EBU. It has over 100 participants that are mostly broadcast industry manufacturers.

FIMS recognizes that the television business has radically changed. It aims to change the rigid, hard-wired program production chains of today to a new environment where each process is considered to be a “service” which can be used when needed and, when released, is available for others. It has set out to provide a solution which is flexible, cost-effective, reliable, expandable and future-proof, enables best-of-breed products to be employed, and allows integration with media business systems. Potentially this could offer a very efficient and comprehensive service.



Flash (double, triple)

Double flash is commonly used in film projectors so that each of the 24 f/s is shown twice: a total of 48 flashes per second. This means the movie presentation has less flicker. Triple flash is better still, with a flash rate of 72 per second.

When presenting 3D cinema, the left and right eyes need to see motion and parallax at the same time, but the sequential frame presentation of 3D, often using a single projector, naturally offsets motion timing. Double, or better, triple flash improves the motion portrayal. Here total frame rates are double that of 2D, so:
single flash is 48 f/s
L1, R1, L2, R2, etc.

double flash is 96 f/s
L1, R1, L1, R1, L2, R2, L2, R2 etc

triple flash is 144 f/s
L1, R1, L1, R1, L1, R1, L2, R2, L2, R2, L2, R2 etc.

Note that the cine player offers only 24 left and right frames/s. It is the job of the projector to present each frame two or three times. Of course, the projector has to be capable of clearly showing frames at that rate.
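The flash sequences above can be generated mechanically. This is an illustrative sketch, not any projector’s actual scheduler:

```python
# Sketch: generating the projector's stereo flash sequence.
# flashes = 1, 2 or 3 gives single, double or triple flash as listed.

def flash_sequence(n_frames: int, flashes: int) -> list[str]:
    seq = []
    for i in range(1, n_frames + 1):
        # Each left/right pair is presented 'flashes' times in a row.
        seq.extend([f"L{i}", f"R{i}"] * flashes)
    return seq

print(flash_sequence(2, 2))
# ['L1', 'R1', 'L1', 'R1', 'L2', 'R2', 'L2', 'R2'] - double flash
print(24 * 2 * 3, "flashes per second for triple flash")   # 144
```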

Flash Memory

Non-volatile solid-state memory that offers fast write and read times, but not as fast as DRAM, and can withstand considerable shock (G-force). Generally known as CF (compact flash), it is popular storage on portable / hand-held devices including professional and consumer video and stills cameras. Also when packaged in a memory card or a ‘pen drive’ case, it is enormously durable, withstanding intense pressure, extremes of temperature and even water immersion.

There are many technologies applied to make CF cards and their development continues apace with increasing capacities: 2005 – 1GB, 2GB; 2006 – 4GB, 64GB; and later 128GB and 256GB (2010). Transfer rates are quoted in different ways. With the ‘x’ rating, multiples of the standard Compact Disc data transfer rate of 150 kB/s (= x1) are shown. So a x600 memory card offers up to 90 MB/s – fast enough to record only slightly compressed HD. ‘Class’ is another rating: Class 6 indicates a (minimum) transfer rate of 6 MB/s. UHS (Ultra High Speed) is a third speed rating system, with UHS-I supporting up to 104 MB/s.
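The ‘x’ rating converts to absolute speed with simple arithmetic (a small sketch; the 150 kB/s ‘x1’ base rate is the Compact Disc rate mentioned above):

```python
# Sketch: converting the 'x' speed rating of a memory card to MB/s.
CD_RATE_KB_S = 150   # 'x1' = one CD-speed, 150 kB/s

def x_rating_to_mb_s(x: int) -> float:
    return x * CD_RATE_KB_S / 1000

print(x_rating_to_mb_s(600), "MB/s")   # 90.0, as quoted for a x600 card
# By contrast, 'Class 6' guarantees a minimum of 6 MB/s sustained,
# and UHS-I raises the bus ceiling to 104 MB/s.
```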

In the professional market manufacturers tend to offer memory cards that bundle several CF chips to provide a particular high performance to match the needs of their products – typically high-end cameras and computers. Currently these offer up to a few TB of storage. In 2010 SanDisk, Sony and Nikon proposed developing a new 500 MB/s memory card with over 2TB capacity.

Flash memory is increasingly used to replace computer hard drives – with solid-state drives (SSD) – providing much higher data transfer speeds. Current offerings of up to 1 TB, with faster read and write rates than disks, low power consumption and high robustness, have many advantages. Cost prohibits a large-scale defection from the traditional rotating mass PC storage device – at the moment.


Flop

1) A failure or disappointment.

2) A floating point operation per second. Flops are a measure of processing power and today this runs into Gigaflops. For example, the powerful Cell processor used in PlayStation 3 is rated at somewhere between 218 and 250 Gigaflops.

Format conversion

Changing the format of television or other moving media images without changing the vertical (frame or field) refresh rate. Starting with 1920 x 1080/50I and converting it to 720 x 576/50I is a format conversion. This only alters the format spatially, changing the vertical and horizontal size which technically is a relatively straightforward task. Note that this may include aspect ratio conversion (ARC) typically to account for the use of 16:9 and 4:3 displays.
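As a hypothetical illustration of the spatial arithmetic involved, here is the letterbox calculation for fitting a wide picture into a narrower raster. It deliberately ignores the non-square pixels of real SD, which a broadcast ARC must also correct for:

```python
# Sketch: letterboxing a 16:9 source into a 4:3 raster, assuming square
# pixels for simplicity. Real SD pixels are non-square, so a broadcast
# ARC also corrects for pixel aspect ratio.

def letterbox_height(target_width: int, source_aspect: float) -> int:
    """Active picture height when a wide source fills the target width."""
    return round(target_width / source_aspect)

active = letterbox_height(720, 16 / 9)
print(f"16:9 in a 720-wide raster: 720 x {active} active picture,"
      f" {576 - active} lines of black bars")
```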

See also: ARC, Down-res, Standards conversion, Up-res


Fragmentation

The scattering of data over a (disk) store caused by many recording and deletion operations. Generally this will eventually result in store access becoming slow – a situation that is not acceptable for video recording or replay. The slowing is caused by the increased time needed to access randomly distributed data or free space. With such stores de-fragmentation routines re-arrange the data (by copying from one part of the disk to another) so that it is quickly accessible in the required order for replay. Clearly any change in replay, be it a transmission running order or the revision of an edit, could cause further fragmentation.


Stores capable of true random access, such as Quantel’s sQ server, are able to play frames in any order at video rate, and so never need de-fragmentation.

See also: Consolidation, FrameMagic


Frame

A whole television picture. A frame has shape: its aspect ratio. Today all new TV frames have a 16:9 aspect ratio. Some motion pictures are presented on TV with a wider aspect ratio, typically with a black border above and below. A frame has a specific resolution and uses either interlaced (I) or progressive (P) scans. Most productions now originate in an HD format of either 1280 x 720P or 1920 x 1080 (I or P) pixels. Some still use SD with 720 x 576I or 720 x 480I frames. These two SD standards do not have square pixels; all other DTV frames do. In UHD a frame could have 3840 x 2160 (4K) or 7680 x 4320 (8K) pixels. UHD only uses progressive scans. Interlace makes a relatively low frame rate of 25 or 30 f/s (shown as 50 or 60 fields/s) suitable for portraying motion quite well but, without further processing, stop-motion freezes can look poor.

Another property of a frame is its color gamut, as defined in its standard. As TV video standards have progressed, so the associated color gamut has expanded. Some say this is the most striking change from HD to UHD. UHD frames may also have a higher dynamic range (HDR) – again enhancing the look of the pictures. A frame has a specific time, usually 1/25 or 1/30 of a second. Larger frame formats, especially 4K and 8K, require faster frame rates to reasonably portray smooth movement on a big screen. See ‘Frame rate’ below.

See also: Interlace

Frame rate

The number of whole pictures per unit time, usually frames per second, f/s or Hz. There is a wide range of frame rates used with modern media, typically from 23.976 to 60 Hz. An even higher frame rate of 120 Hz is included in the ITU-R BT.2020 recommendation for UHD. This is because large 4K and 8K images require high frame rates to provide good smoothness of motion when looking at very big pictures.

Frame-rate conversion

The process of changing the vertical rate of moving pictures. This has become far more important with the use of HD and UHD, whose standards include many frame rates: 23.98P, 24P, 25P, 29.97P, 30P, 50I, 50P, 59.94I, 59.94P, 60I and 60P Hz. Changing from one rate to another, including between interlaced and progressive scans, requires considerable technology to produce high quality results. Further frequencies are used in other media markets, such as mobile and some web-based areas with more restricted bandwidth, where lower rates such as 15 f/s are found.

Conversions that simply drop or repeat fields or frames produce poor results – especially where the video includes a lot of motion, e.g. sports. For better results some form of motion compensation is used that analyzes the motion and uses this information to create new images in between those that are coincident between the input and output.
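The crude drop/repeat approach the paragraph warns about is easy to sketch (a hypothetical illustration; real converters synthesize motion-compensated in-between frames rather than reusing existing ones):

```python
# Sketch of the crude approach: converting 25 f/s to 30 f/s by
# repeating one frame in every five. This causes the judder on motion
# that motion-compensated converters exist to avoid.

def repeat_convert(frames: list, src_rate: int, dst_rate: int) -> list:
    """Nearest-frame resampling - no new frames are created."""
    n_out = round(len(frames) * dst_rate / src_rate)
    return [frames[int(i * src_rate / dst_rate)] for i in range(n_out)]

out = repeat_convert(list(range(25)), 25, 30)
print(len(out), "output frames from 25 inputs")   # 30
```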


FrameMagic

Quantel term describing an advanced form of the management of video in a server. This covers much ground but basically offers the practical goals of guaranteeing realtime access to any frame for all video connections (simultaneous true random access) and avoiding the deletion of any material by one user that is partly or wholly used by another.

This is achieved by implementing a number of basic design criteria, including realtime random access to any frame, storing video material as a series of individual frames, rather than longer video files, as well as an on-board realtime database management system which, among other things, tracks who is using what material. FrameMagic is implemented in Quantel’s sQ servers.

See also: Delta Editing


Framestore

The name, coined by Quantel, given to solid-state video storage, usually built with DRAMs. Technically it implies storage of one complete frame or picture, but the term is also used more generically to encompass the storage of anything from a few lines to many frames. With large DRAM capacities available, framestores may hold many frames or a whole video clip. They are widely used to enhance equipment performance by providing instant access to video material.

See also: DRAM


Frequency

The number of times an event occurs over a given period of time (usually one second). In most cases frequency relates to a regularly occurring event of a cyclic nature. Frequency is measured in Hertz (Hz), the SI unit defining cycles per second, named after the German physicist Heinrich Hertz (not, we understand, the founder of a car rental company). A 440 Hz tone describes the pitch of the A4 audible note. In electronic terms we often talk in terms of MHz (10^6) and GHz (10^9) cycles/second. For example, specific frequencies define the clock rates in digital systems. Here are some of the more common ones used in TV:

PAL subcarrier: 4.43 MHz
NTSC subcarrier: 3.58 MHz
ITU-R BT.601 luminance sampling rate: 13.5 MHz (SD)
ITU-R BT.601 chrominance sampling rate: 6.75 MHz (for 4:2:2 SD sampling)
ITU-R BT.709 luminance sampling rate: 74.25 MHz (HD)
ITU-R BT.709 chrominance sampling rate: 37.125 MHz (for 4:2:2 HD sampling)
ITU-R BT.2020 luminance sampling rate: 297 MHz (4K UHD)
ITU-R BT.2020 chrominance sampling rate: 148.50 MHz (for 4:2:2 UHD sampling)

Although not appearing in any prominent headline, 2.25 MHz is significant: it is the lowest frequency that is an integer multiple of the line rates of both the 525/59.94 and 625/50 systems, and the luminance sampling rates above are all multiples of it, meaning they are all related.
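The relationship can be checked with exact rational arithmetic. The line rates used below are the standard values (15 625 Hz for 625/50; 4.5 MHz / 286 for 525/59.94):

```python
# Sketch: why 2.25 MHz matters. It divides exactly into a whole number
# of cycles per line in BOTH SD scanning standards, which is why
# 13.5 MHz (6 x 2.25 MHz) samples either standard with a whole number
# of samples per line.
from fractions import Fraction

line_625 = Fraction(15625)             # 625/50 line rate, Hz
line_525 = Fraction(4_500_000, 286)    # 525/59.94 line rate, ~15734.266 Hz
base = Fraction(2_250_000)             # 2.25 MHz

print(base / line_625)   # 144 - whole number of cycles per line
print(base / line_525)   # 143 - whole number of cycles per line
```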

See also: 13.5 MHz


FTP

File Transfer Protocol. The high level Internet standard protocol for transferring files from one machine to another. FTP is usually implemented at the application level.

See also: FXP


FTTX

A generic acronym for Fiber To The X – a broadband network that uses optical fiber to provide all or part of the local loop used for the ‘last mile’ connection to the customer. X can be H – home, N – neighborhood or node (the street cabinet), D – desktop, C – cabinet, P – premises, etc. Fiber can typically operate over quite long distances – typically 10 km or more – and is capable of delivering digital services, often broadband internet, at speeds of 1 Gb/s or more. However, the internet provider may well not be delivering the service at such a high speed.


Full HD

Full HD is a term describing video at 1920 x 1080 resolution. Generally this appears as one of the many logos stuck on a new flat panel TV set or screen that can display all the pixels of 1920 x 1080 images supplied from HD sources such as broadcast transmissions and HD DVD and Blu-ray Discs.

The panels only use progressive scans so they are showing 50P and 59.94P frame rates. The logo can also appear on other consumer products such as camcorders using the 1920 x 1080 image size.


FXP

File eXchange Protocol uses FTP to transfer data directly from one remote server to another. This inter-server exchange operates without the data being routed to the client. Whereas normal FTP involves data transfers only between a client and a server, with FXP the client uses an FTP connection to the two servers to command the data transfer. This can be particularly useful when moving large files, such as video, as it is only the connection between the two servers that needs to be fast; the client’s command operation requires only low bandwidth.
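The control-channel exchange behind FXP can be sketched as follows. This is an illustrative fragment, not a complete client: only the parsing of the source server’s PASV reply into the argument for a PORT command to the destination server is shown, and no servers are contacted.

```python
# Sketch of the FXP control exchange: the client asks the source server
# for a passive data port (PASV) and hands that address to the
# destination server (PORT), so the data flows server-to-server.
import re

def pasv_to_port_args(pasv_reply: str) -> str:
    """Turn a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply
    into the argument string for a PORT command to the other server."""
    numbers = re.search(r"\((\d+(?:,\d+){5})\)", pasv_reply)
    if not numbers:
        raise ValueError("malformed PASV reply")
    return numbers.group(1)

reply = "227 Entering Passive Mode (192,168,0,9,19,137)"
print("PORT " + pasv_to_port_args(reply))
# The client would then issue STOR on the destination and RETR on the
# source; the file itself travels directly between the two servers.
```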