Temporal anti-aliasing seeks to reduce or remove the effects of temporal aliasing. Temporal aliasing occurs when the sampling rate of a scene (i.e. the number of frames per second) is too low relative to the speed at which objects in the scene move or change; objects then appear to jump from one position to the next, or to flicker between locations, rather than appearing to move smoothly. To avoid aliasing artifacts altogether, the sampling rate must be at least twice the highest temporal frequency present in the scene.[1] The shutter behavior of the sampling system (typically a camera) strongly influences aliasing, because the overall shape of the exposure over time determines how the signal is band-limited before sampling, an important factor in aliasing. A temporal anti-aliasing filter can be applied to a camera to achieve better band-limiting.[2] A common example of temporal aliasing in film is the appearance of vehicle wheels travelling backwards, the so-called wagon-wheel effect. Temporal anti-aliasing can also help to reduce jaggies, making images appear softer.[3]
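The wagon-wheel effect can be predicted by folding the temporal frequency of the spoke pattern against the frame rate. The following sketch is illustrative only (the wheel and frame-rate values are arbitrary, and the function name is not from any of the cited sources); it shows how a spoke frequency just below the frame rate aliases to a slow negative frequency, i.e. apparent backward rotation.

    # Illustrative sketch: aliasing of a spinning wheel's spoke pattern.
    def apparent_spoke_frequency(rotation_hz, num_spokes, frame_rate):
        """Return the aliased spoke-pattern frequency (Hz) seen at the given frame rate.
        A negative result means the wheel appears to rotate backwards."""
        true_freq = rotation_hz * num_spokes  # spoke-pattern frequency in Hz
        # Fold the true frequency into the range (-frame_rate/2, frame_rate/2].
        return true_freq - round(true_freq / frame_rate) * frame_rate

    # A 5-spoke wheel turning 4.6 times per second filmed at 24 fps:
    # the 23 Hz spoke pattern exceeds the 12 Hz Nyquist limit and aliases to -1 Hz,
    # so the wheel appears to rotate slowly backwards.
    print(apparent_spoke_frequency(4.6, 5, 24))  # -> -1.0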
In cel animation
In cel animation, animators can either add motion lines or create an object trail to give the impression of movement. To counteract the wagon-wheel effect without changing the sampling rate or wheel speed, animators could add a broken or discolored spoke to force the viewer's visual system to make the correct connections between frames.
In computer generated imagery
To perform temporal anti-aliasing in computer graphics, the system requires a key piece of information: which objects cover which pixels at any given time in the animation.
One approach is to derive a high-resolution (i.e. finer than the output image) temporal intensity function from the object attributes, which can then be convolved with an averaging filter to compute the final anti-aliased image.
In this approach, there are two methods available for computing the temporal intensity function. The first is to compute the position of each object as a continuous function of time and then use that function to determine which pixels the object covers in the scene. The second is to use traditional rendering techniques to supersample the moving scene, producing a discrete approximation of the object positions.[4]
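A minimal sketch of the second (supersampling) method for a single pixel is shown below: the pixel's intensity is sampled at several sub-frame times and the samples are averaged with a box filter, which is the simplest choice of averaging filter. The shade_pixel callback is hypothetical and stands in for whatever shading or rendering routine is actually in use.

    import numpy as np

    def temporal_box_filter(shade_pixel, frame_start, frame_end, num_samples=8):
        """Average a pixel's intensity over sub-frame times (box filter)."""
        times = np.linspace(frame_start, frame_end, num_samples, endpoint=False)
        samples = np.array([shade_pixel(t) for t in times])
        return samples.mean()  # box (averaging) filter over the frame interval

    # Example: a bright object covers the pixel during the first half of the frame,
    # so the anti-aliased pixel value is 0.5 (motion blur rather than a hard edge).
    value = temporal_box_filter(lambda t: 1.0 if t < 0.5 else 0.0, 0.0, 1.0)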
One algorithm proposed for computing the temporal intensity function is:[4]
For each image frame:
    For each object in the frame:
        Calculate the temporal transformation function for each dynamic attribute
        Determine the areas the object covers during the filtered interval
    For each pixel:
        Determine which objects cover this pixel at some time in the sampled interval
        Determine the subintervals of time during which each object projects onto this pixel
        Perform hidden surface removal by removing subintervals of occluded objects
        Determine the pixel intensity function from the remaining subintervals and the objects' temporal transformation functions
        Filter the resulting pixel intensity function
Note: The "temporal transformation function" in the above algorithm is simply the function describing how a dynamic attribute changes over time (for example, the position of an object moving over the duration of a frame).
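The sketch below illustrates the analytic (first) method for a single pixel in a deliberately simplified setting: one opaque object moving in one dimension at constant velocity, with no occlusion. The function name, the 1-D simplification, and the constant-velocity assumption are illustrative only and are not the full algorithm of Korein and Badler.

    def pixel_intensity(pixel_left, pixel_right, obj_left0, obj_width, velocity,
                        frame_start=0.0, frame_end=1.0, obj_intensity=1.0):
        """Box-filtered intensity: the fraction of the frame during which the
        moving object overlaps the pixel, times the object's intensity."""
        # The object covers the pixel while obj_left(t) < pixel_right and
        # obj_left(t) + obj_width > pixel_left, where obj_left(t) = obj_left0 + velocity*t.
        if velocity == 0.0:
            covered = obj_left0 < pixel_right and obj_left0 + obj_width > pixel_left
            return obj_intensity if covered else 0.0
        t_enter = (pixel_left - obj_width - obj_left0) / velocity
        t_exit = (pixel_right - obj_left0) / velocity
        t0, t1 = min(t_enter, t_exit), max(t_enter, t_exit)
        # Clip the coverage subinterval to the frame's filtered interval.
        overlap = max(0.0, min(t1, frame_end) - max(t0, frame_start))
        return obj_intensity * overlap / (frame_end - frame_start)

    # Example: a unit-wide object starting at x = -1 and moving 4 units per frame
    # covers the pixel [0, 1] for half of the frame, giving an intensity of 0.5.
    print(pixel_intensity(0.0, 1.0, -1.0, 1.0, 4.0))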
In cases where object attributes (shape, color, position, etc.) are not explicitly defined, or are too complex for efficient analysis, interpolation between sampled values may be used. To obtain results closest to the source data, B-splines can be used to interpolate the attributes; in cases where speed is a major concern, linear interpolation may be a better choice.
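A minimal sketch of the two interpolation choices is shown below, using SciPy's B-spline interpolator and NumPy's linear interpolation on a sampled attribute (here, an object's x-position at four key times; the sample values are arbitrary).

    import numpy as np
    from scipy.interpolate import make_interp_spline

    sample_times = np.array([0.0, 1.0, 2.0, 3.0])
    sample_xpos  = np.array([0.0, 2.0, 3.0, 3.5])

    # Cubic B-spline interpolation: smoother, closer to the underlying motion.
    bspline = make_interp_spline(sample_times, sample_xpos, k=3)

    query_times = np.linspace(0.0, 3.0, 7)
    smooth_xpos = bspline(query_times)
    # Linear interpolation: cheaper, adequate when speed matters most.
    linear_xpos = np.interp(query_times, sample_times, sample_xpos)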
Temporal anti-aliasing can be applied in image space for simple objects (such as a circle or disk), but more complex polygons may require some or all of the calculations in the above algorithm to be performed in object space.
In spatial anti-aliasing it is possible to determine the image intensity function by supersampling. Supersampling is also a valid approach in temporal anti-aliasing: the animation system can generate multiple pixel intensity buffers (instead of just one) for a single output frame. The primary advantage of supersampling is that it works with any image, independent of which objects are displayed or which rendering system is used.
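The per-pixel box filter sketched earlier extends directly to whole frames: render several sub-frame buffers within one frame interval and average them into the output frame. The render_frame callback below is hypothetical (assumed to return an H x W x 3 float array) and stands in for whatever rendering system is actually in use.

    import numpy as np

    def supersample_frame(render_frame, frame_start, frame_end, num_subframes=8):
        """Average several sub-frame renders into one anti-aliased output frame."""
        times = np.linspace(frame_start, frame_end, num_subframes, endpoint=False)
        buffers = np.stack([render_frame(t) for t in times])
        return buffers.mean(axis=0)  # box filter across the sub-frame buffers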
References
1. Grant, C. (1985). "Integrated analytic spatial and temporal anti-aliasing for polyhedra in 4-space". SIGGRAPH Computer Graphics, 19(3): 79-84.
2. Tessive, LLC (2010). "Time Filter Technical Explanation".
3. NVIDIA. "Temporal Anti-Aliasing Technology (TXAA)".
4. Korein, J. and Badler, N. (1983). "Temporal anti-aliasing in computer generated animation". SIGGRAPH Computer Graphics, 17(3): 377-388.