The ability of our eyes to refocus onto a new point of interest. In normal vision, the processes of focusing on objects at different distances (accommodation) and convergence/divergence (the angle between the lines of sight of our eyes) are linked by muscle reflex: a change in one produces a complementary change in the other. However, watching a stereoscopic film or TV program requires the viewer to break this link by accommodating at a fixed distance (the screen) while dynamically varying eye convergence and divergence to view objects at different stereoscopic distances – something we don’t do in everyday life, and which can quickly lead to headaches if over-used in stereo3D.
A type of stereoscopy in which the left eye and right eye images are separated by color filtering and then superimposed as a single image rather than two separate images. Each eye sees only the required image through the use of complementary colored filters (e.g. red and green or red and cyan). Anaglyph glasses have been popular over the years for viewing 3D comics and some 3D films (particularly on VHS and DVD).
See also: 3D
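The color-filter separation described above can be sketched in a few lines. This is a minimal illustration (not any particular product's method), assuming simple red-cyan anaglyph mixing and images represented as rows of (r, g, b) pixel tuples:

```python
def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right images (rows of (r, g, b) pixel tuples) into a
    single red-cyan anaglyph: the red channel comes from the left-eye
    image, green and blue from the right-eye image, so red/cyan filter
    glasses route each picture to the correct eye."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left_rgb, right_rgb)
    ]

# Tiny synthetic example: a 2x2 red-ish left image, blue-ish right image
left = [[(200, 50, 50)] * 2] * 2
right = [[(50, 100, 150)] * 2] * 2
print(make_anaglyph(left, right)[0][0])  # -> (200, 100, 150)
```

Real anaglyph encoders weight and mix the channels more carefully to reduce retinal rivalry, but the principle is this per-channel split.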
Screens that allow viewers to see 3D images without wearing special 3D glasses are referred to as autostereoscopic, providing so-called ‘glasses-free’ 3D viewing. Typically the displays use a lenticular filter on the front of the screen – rather like those sometimes used on postcards found in tourist shops. The filter is designed so that our left eye sees the left image on the screen and our right eye the right image. With this type of system there are sweet spots, typically called ‘zones’ or ‘views’, where you can appreciate the 3D; outside those areas the 3D is not seen, so you have to pick your spot and stay there. A way to improve matters is to offer more sweet spots. Today six or eight is common, and some screens offer more: Philips and Dolby have been working together on an autostereoscopic screen that provides 14 views – some say so many that you can always see the 3D. Another solution is to add a camera to the screen so it can see where its viewers are and adjust the left and right images so that each viewer sees the 3D. This can work for several viewers.
Stereo objects in front of the screen plane (negative parallax) are problematic if they intersect the edge of frame, as contradictory depth cues are sent to the viewer. Essentially one cue is saying that the object is in front of the screen and another is saying that the object is behind it.
This problem can be reduced in post production by a technique known as a ‘floating window’. This involves applying a partially transparent mask to reduce the strength of the cues on whichever side the object is breaking frame (or on both sides simultaneously if objects break frame both left and right).
Another kind of issue is caused by objects moving backwards and forwards over the edge of frame. As an object moves off the edge of a screen one stereo camera signal is lost before the other. The result is that the stereo signal temporarily ‘switches off’. This can sometimes be solved by sizing up both images in post, causing the object to move off screen altogether.
Lack of true 3D feel to a shot, making solid objects look as if they are made from cardboard cutouts, i.e. they have no depth. This is also referred to as Cutout Planar Effect. It is caused by inadequate depth resolution due to a mismatch between the focal length of the recording lens (or CGI camera) and the interaxial distance between the cameras. Typically, big zooms on distant objects can cause the effect if the left and right cameras are not sufficiently far apart.
See also: Interocular
In human vision, convergence is the ability of our eyes to pivot their optical axes so they meet at the object we are looking at. The convergence ‘near point’ is the closest point at which it is still possible to perceive a single image. This can easily be found by moving your finger towards your nose; when you start to see it as two fingers, that is the near point. Our eyes can easily converge inward but have much less ability to diverge outward; divergence is something we don’t normally do, except when looking at 3D images with positive parallax (requiring the eyes to point outward) wider than the distance between our eyes (interocular).
In cameras, convergence is the ‘toeing-in’ of the cameras (to simulate the eyes converging) so that their axes meet at a chosen depth point in the scene, either in front of, behind, or at the point of interest. The ‘convergence point’ is where the axes of the toed-in cameras align on the Z-axis. Convergence can be adjusted in post production by horizontal movement of one image relative to the other. Note that the term ‘vergence’ is sometimes used to describe both convergence and divergence.
Convergence pullers are camera-crew members on a stereoscopic shoot responsible for setting up and shifting the convergence during a shot.
See also: Parallax
A post production process in which negative and positive parallax are adjusted. This is not only a creative tool used to place objects on the Z-axis but also a way to ensure that stereoscopic content can be comfortably watched on the screen size it is intended for. For example, in a post suite the director may be viewing a film on a small projection screen, while the final delivery may be to a large theater or IMAX screen.
In practice the eyes have little ability to diverge (up to one degree is considered the rule of thumb) and this is especially a consideration in depth grading for very large screens with positive parallax images, where the distance between the left and right representations of an image may be very widely spaced.
Sometimes the term Depth Budget is used to refer to the allowed combined value of positive and negative parallax, expressed as a percentage of screen width.
See also: Parallax
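The one-degree divergence rule of thumb can be turned into a quick back-of-envelope check when depth grading. The sketch below is illustrative only – the viewing distances, screen widths and the simple flat-screen geometry are assumptions, not fixed standards:

```python
import math

# Illustrative sketch: how much positive on-screen parallax a viewer
# can tolerate before eye divergence exceeds ~1 degree (the rule of
# thumb for depth grading). Flat-screen geometry, all values assumed.
INTEROCULAR_MM = 63.5
DIVERGENCE_LIMIT_DEG = 1.0

def max_positive_parallax_mm(viewing_distance_mm):
    """On-screen left/right separation at which divergence reaches the
    limit: the interocular distance plus the extra separation subtended
    by the divergence angle at the viewing distance."""
    extra = viewing_distance_mm * math.tan(math.radians(DIVERGENCE_LIMIT_DEG))
    return INTEROCULAR_MM + extra

def parallax_as_percent_of_width(parallax_mm, screen_width_mm):
    """Express parallax as a percentage of screen width, the usual way
    a depth budget is quoted."""
    return 100.0 * parallax_mm / screen_width_mm

# A living-room TV: ~1.2 m wide screen viewed from ~3 m
p = max_positive_parallax_mm(3000)
print(round(p, 1), "mm =", round(parallax_as_percent_of_width(p, 1200), 1), "% of width")
```

Running the same numbers for a 10 m cinema screen viewed from 15 m gives a much smaller allowable percentage of screen width, which is exactly why a grade made on a small suite monitor cannot simply be replayed on a large theater screen.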
Double flash is commonly used in film projectors so that each of the 24 frames per second is shown twice, for a total of 48 flashes per second. This reduces visible flicker in the presentation. Triple flash is better still, with a flash rate of 72 per second.
When presenting 3D cinema, the left and right eyes should see motion and parallax at the same time, but the sequential frame presentation of 3D, often using a single projector, naturally offsets the motion timing between eyes. Double, or better triple, flash improves the motion portrayal. Here total frame rates are double those of 2D, so:
single flash is 48 f/s
L1, R1, L2, R2, etc.
double flash is 96 f/s
L1, R1, L1, R1, L2, R2, L2, R2 etc
triple flash is 144 f/s
L1, R1, L1, R1, L1, R1, L2, R2, L2, R2, L2, R2 etc.
Note that the cine player supplies only 24 left and 24 right frames/s; it is the job of the projector to present each frame two or three times. Of course, the projector has to be capable of clearly showing frames at that rate.
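The presentation orders listed above can be generated mechanically. This small sketch (illustrative, not any projector's actual firmware) builds the flash sequence and the resulting flash rates:

```python
def flash_sequence(num_frames, flash):
    """Build the projector presentation order for `flash`-times flashing
    of `num_frames` stereo frame pairs, matching the listings above
    (e.g. L1, R1, L1, R1, L2, R2, L2, R2 for double flash)."""
    seq = []
    for n in range(1, num_frames + 1):
        seq.extend([f"L{n}", f"R{n}"] * flash)
    return seq

print(flash_sequence(2, 2))  # double flash of two frame pairs

# Presentation rate: 24 frame pairs x 2 eyes x flash count
for flash, name in [(1, "single"), (2, "double"), (3, "triple")]:
    print(name, "flash:", 24 * 2 * flash, "f/s")
```

The rate loop reproduces the 48, 96 and 144 f/s figures given in the entry.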
Artifacts typically caused by signal leakage (crosstalk) between the two ‘eyes’. A secondary ‘ghost’ image can be seen. There are several possible causes that can introduce the problem during acquisition, post production and display. One reason can be high contrast levels between an object and its background.
Confusing visual cues in a stereoscopic scene that can make an object appear the ‘wrong’ size – here, strangely enlarged. It is caused by the choice of interaxial distance relative to the focal length of the camera lenses: shooting with an interaxial distance much less than the interocular distance of adult human eyes can make a figure appear to be a giant.
Using widely spaced cameras (e.g. beyond 70mm interaxial) which record more stereo effect than the eyes can see. Such a large interaxial distance can produce the effect of miniaturization. Also used in order to achieve the effect of more stereo depth and less scale in a scene.
For stereo effects on very long shots (e.g. landscapes), interaxial camera set-ups of several meters have been used (hyperstereo). One extreme example of hyperstereo is using cameras on each side of the earth to record the sun in 3D.
Using closely spaced cameras (e.g. less than 50 mm interaxial) which record less stereo effect than the eyes can see. Such a small interaxial distance can produce the effect of gigantism. If standard cameras are used, the minimum interaxial distance is typically limited by the thickness of the cameras, so a mirror or beam splitter system is often used, enabling interaxials down to millimeters.
See also: Gigantism
The distance between the centers of the lenses of two recording cameras. A typical distance would be 63.5 mm (approximating average adult eye spacing). The term ‘interaxial’ is sometimes also used interchangeably with ‘interocular’ (when referring to eyesight, ‘interpupillary’ is often used).
The distance between the centers of the eyes. A typical distance for humans would be 63.5 mm (approximating average adult eye spacing). The term ‘interaxial’ is sometimes also used interchangeably with ‘interocular’ (when referring to eyesight, ‘interpupillary’ is also occasionally used).
This can occur when the two cameras are ‘toed-in’ so that the images of the object of interest coincide when viewed; there can then be some mismatching of the outlines or borders of the two images. Techniques such as corner pinning can be used to help correct this.
Confusing visual cues in a stereoscopic scene that can make an object appear the ‘wrong’ size – here, strangely shrunk. It is caused by choosing an interaxial distance greater than 63.5 mm relative to the focal length of the camera lenses, e.g. shooting with very widely spaced cameras. Subjectively this makes the audience feel like a giant looking at tiny objects, which is why miniaturization is sometimes referred to as Lilliputianism.
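A rough rule of thumb ties the gigantism and miniaturization entries together: for an otherwise matched shooting and viewing geometry, the scene appears scaled by roughly interocular divided by interaxial. This is an approximation for illustration, not an exact perceptual model:

```python
# Rough rule of thumb (an approximation, not an exact model): with
# otherwise matched geometry, apparent scale ~ interocular / interaxial.
# Interaxial narrower than the eyes exaggerates size (gigantism);
# wider than the eyes shrinks it (miniaturization / Lilliputianism).
INTEROCULAR_MM = 63.5

def apparent_scale(interaxial_mm):
    return INTEROCULAR_MM / interaxial_mm

print(round(apparent_scale(30.0), 2))   # narrow rig: objects look ~2x larger
print(round(apparent_scale(127.0), 2))  # wide hyperstereo rig: ~0.5x, miniaturized
```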
A one-to-one condition where what is being displayed is the same as the ‘real world’. For example IMAX 3D is often shot with parallel cameras spaced at the average human adult interocular distance (approx 63.5 mm) and with wide angle lenses that closely match an audience member’s view of the screen.
This refers to the separation of the left and right images on the projection device or display screen. Positive Parallax puts an object behind the screen (on screen objects in the left eye image are to the left of the same objects in the right eye image). Negative parallax puts an object in front of the screen (on screen objects in the left eye image are to the right of the same objects in the right eye image).
Zero or neutral parallax puts an object on the screen (on screen objects in the left eye image are overlaid on the same objects in the right eye image).
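The three cases above follow from simple similar-triangles geometry. The sketch below assumes an idealized flat-screen model (ignoring accommodation and display specifics): with eye separation e, viewing distance d, and on-screen parallax p (positive behind the screen), the perceived distance is z = e·d / (e − p):

```python
# Simple similar-triangles model of perceived depth from parallax.
# Idealized flat-screen geometry; e and d are assumed example values.
E_MM = 63.5  # interocular distance

def perceived_distance_mm(parallax_mm, viewing_distance_mm):
    """Perceived distance from the viewer to an object shown with the
    given on-screen parallax (positive = behind screen, negative = in
    front, zero = at the screen)."""
    if parallax_mm >= E_MM:
        return float("inf")  # at or beyond interocular the eyes diverge
    return E_MM * viewing_distance_mm / (E_MM - parallax_mm)

d = 2000.0  # viewer 2 m from the screen
print(perceived_distance_mm(0.0, d))    # zero parallax: at the screen, 2000.0
print(perceived_distance_mm(31.75, d))  # positive (half interocular): 4000.0
print(perceived_distance_mm(-63.5, d))  # negative (crossed): 1000.0, in front
```

Note how parallax equal to the interocular distance places the object at infinity, which is why anything wider forces divergence.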
The only difference between stereo cameras should be parallax or the angle between the axes of the lenses, as in Camera Convergence – anything else can disturb the stereo viewing experience. This requires close attention, so that the cameras are set up identically and with the same filters. Color differences, skewing, vertical misalignment, differential weave and hop, lens flares, poor VFX fixes, scratches and dirt can all cause problems.
Fast cuts between shots with strong positive and strong negative parallax can be unsettling in some circumstances. This is because the eyes and brain are being asked to jump uncomfortably quickly between positions and then make sense of the result. This can be mitigated by the use of ‘handing off’ – dynamically changing the convergence of an outgoing shot in relation to an incoming shot. Another method of dealing with this is trying wherever possible to cut between shots that are somewhat close in parallax.
Vertical parallax is a vertical offset between stereo images and is very uncomfortable to watch – so it is necessary to remove it during post production if there has been camera misalignment during shooting.
Note: The term ‘Parallax’ is sometimes used interchangeably with ‘Congruence’ or ‘Disparity’.
If a stereoscopic signal is reversed (e.g. each eye is being fed the opposite eye signal) a strange ‘punched in’ effect appears. This is also referred to as inverted stereo or reversed stereo.
A stereo effect produced when 2D images moving laterally on a single plane are viewed at slightly different times by each eye, so that horizontal motion is interpreted as binocular depth.
The amount of stereo image available to the viewer is dictated by the frame surrounding a stereoscopic image, e.g. the size of TV or projection screen. This boundary is called the Stereo Window. Depending on their parallax, objects will appear either in front, at or behind this window. IMAX has the largest window.
The process of making and presenting images using ‘left eye’ and ‘right eye’ cameras. The resulting ‘left eye’ and ‘right eye’ stereo images allow audiences to perceive depth into and out of the screen. Although the technique can add greatly to the viewing experience and is often referred to as ‘3D’, viewers cannot look around objects – as would be the case with real 3D. Stereo television and film can create a 3D illusion, but it is not real 3D.
In stereoscopy, presenting objects from the left and right eyes’ points of view, in the same way that our eyes would see them in the real world, creates the depth effect. If the left and right eye images of an object are coincident at the screen, then it appears to be at the distance of the screen. If the left and right images on the screen are crossed over, with the right image on the left and the left image on the right, then the object appears to be in front of the screen as our eyes converge on the images. If the left and right images are not crossed over but are closer together than the distance between our eyes (interocular distance, generally taken as 63.5 mm for the average adult), then the object appears to be behind the screen as our eyes converge less. To show an object at infinity, the left and right images are spaced by the interocular distance.
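The infinity rule above also shows why a stereo grade is tied to screen size: parallax in the delivered images is effectively a fraction of image width, so the same material produces very different physical separations on different screens. The numbers below are illustrative assumptions:

```python
# Why the same stereo grade does not transfer between screen sizes.
# Objects at infinity need a left/right separation equal to the
# interocular distance (~63.5 mm); parallax scales with the screen.
INTEROCULAR_MM = 63.5

def infinity_parallax_percent(screen_width_mm):
    """Separation needed for 'infinity', as a percentage of screen width."""
    return 100.0 * INTEROCULAR_MM / screen_width_mm

tv_width, cinema_width = 1200.0, 10000.0  # assumed: 1.2 m TV vs 10 m cinema screen
pct = infinity_parallax_percent(tv_width)
print(round(pct, 2), "% of width places infinity correctly on the TV")

# The same percentage replayed on the cinema screen becomes:
print(round(pct / 100.0 * cinema_width, 1), "mm separation - far beyond 63.5 mm")
```

A grade made for the TV would therefore force strong divergence in the cinema, which is the problem depth grading exists to solve.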
HD video and DCI digital movies sparked the development of new shooting technologies that make live 3D TV possible, but generally post production is still needed to correct unwanted differences between the left and right cameras, and to finesse the point of view and perspective. Exhibition became far easier with digital cinema, where one projector can sequence left and right images replayed from a single player. This removes the nightmare of aligning two projectors and keeping two films running in sync and registration; even then, weave, scratches and sparkles could lower the quality of presentation. Now most cinemas are 3D-capable and have a viewing system to sequence the left and right images to the correct eyes – such as Real D, Dolby or McNaughton. These require wearing glasses that are passive polarized (Real D), passive frequency-based (Dolby) or active switched (McNaughton). Live shooting and easy exhibition mean that live events can be shown on cinema screens – giving audiences a new experience and theaters a potential new revenue stream.
For television, 3D screens and viewing systems have been developed but usually require viewers to wear 3D glasses (active or passive depending on the system) to sequence the left and right images to the correct eyes. Probably the greatest domestic application has been for use in the virtual world of computer games.
See also: 3D
Websites: Dolby www.dolby.com/professional/motion_picture/solutions_d3ddc.html