Temporal anti-aliasing



From Wikipedia, the free encyclopedia

Temporal anti-aliasing seeks to reduce or remove the effects of temporal aliasing. Temporal aliasing is caused by the sampling rate (i.e. the number of frames per second) of a scene being too low relative to the speed at which objects transform within the scene; this causes objects to appear to jump to a new position instead of giving the impression of moving smoothly toward it. To avoid aliasing artifacts altogether, the sampling rate must be at least twice the highest frequency of change in the scene.[1] The shutter behavior of the sampling system (typically a camera) strongly influences aliasing: the overall shape of the exposure over time determines how well the system band-limits the signal before sampling, an important factor in aliasing. A temporal anti-aliasing filter can be applied to a camera to achieve better band-limiting.[2] A common example of temporal aliasing in film is the appearance of vehicle wheels travelling backwards, the so-called wagon-wheel effect. Temporal anti-aliasing can also help to reduce jaggies, making images appear softer.[3]
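As a concrete illustration of the wagon-wheel effect (this example is mine, not from the article), the apparent rotation rate of a sampled wheel can be computed by folding its true rate into the band the sampler can represent:

```python
def apparent_rate(true_hz, sample_hz):
    """Fold a true rotation rate into the baseband [-fs/2, fs/2)
    that a sampler running at sample_hz can represent (aliasing)."""
    half = sample_hz / 2.0
    return (true_hz + half) % sample_hz - half

# A wheel spinning at 22 Hz filmed at 24 fps appears to rotate
# slowly backwards at -2 Hz: the wagon-wheel effect.
print(apparent_rate(22.0, 24.0))  # -2.0
```

Rates below half the sampling rate pass through unchanged, which is exactly the "at least twice as fast" criterion above.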

In cel animation

In cel animation, animators can either add motion lines or create an object trail to give the impression of movement. To solve the wagon-wheel effect without changing the sampling rate or wheel speed, animators could add a broken or discolored spoke to force the viewer's visual system to make the correct connections between frames.

In computer generated imagery

To perform anti-aliasing in computer graphics, the anti-aliasing system requires a key piece of information: which objects cover specific pixels at any given time in the animation.

One approach used is to derive a high resolution (i.e. larger than the output image) temporal intensity function from object attributes which can then be convolved with an averaging filter to compute the final anti-aliased image.
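A minimal sketch of the filtering step (the function name and uniform sampling are my assumptions, not from the cited source): given intensity samples of one pixel taken at many instants within a frame, convolving with a box (averaging) filter reduces to a mean over the frame interval:

```python
def box_filter(samples):
    """Average temporal intensity samples over one frame interval
    (a box filter whose width equals the frame time)."""
    return sum(samples) / len(samples)

# A pixel that a white object covers for 1/4 of the frame time:
samples = [1.0] * 2 + [0.0] * 6   # 8 uniformly spaced temporal samples
print(box_filter(samples))         # 0.25
```

The output pixel takes an intermediate value proportional to how long it was covered, which is what produces motion blur instead of a hard jump.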

In this approach, there are two methods for computing the temporal intensity function. The first is to compute the position of each object as a continuous function of time and then use that function to determine which pixels the object covers in the scene. The second uses traditional rendering techniques to supersample the moving scene and build a discrete approximation of object positions.[4]

One algorithm proposed for computing the temporal intensity function is:[4]

For each image frame:
        For each object in the frame:
                Calculate the temporal transformation function for each dynamic attribute
                Determine the areas the object covers during the filtered interval
        For each pixel:
                Determine which objects are covering this pixel at some time in the sampled interval
                Determine the subintervals of time during which each object projects onto this pixel
                Perform hidden surface removal by removing subintervals of occluded objects
                Determine pixel intensity function based on the remaining subintervals and the object's temporal transformation function
        Filter resulting pixel intensity function

Note: The "temporal transformation function" in the above algorithm is simply the function describing how a dynamic attribute changes (for example, the position of an object moving over the duration of a frame).
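To make the per-pixel steps concrete, here is a small one-dimensional sketch (all names, and the constant-velocity, single-object assumptions, are mine): an object of width w moves at constant speed across a row of pixels, the subinterval during which it covers a given pixel is solved analytically, and the pixel intensity is the covered fraction of the frame time.

```python
def coverage_fraction(x0, v, w, pixel, t0=0.0, t1=1.0):
    """Fraction of the frame interval [t0, t1] during which an object
    spanning [x0 + v*t, x0 + v*t + w] overlaps the unit pixel
    [pixel, pixel + 1]  (1-D, constant velocity, no occlusion)."""
    if v == 0.0:
        return 1.0 if (x0 < pixel + 1 and x0 + w > pixel) else 0.0
    # Overlap holds while  pixel - w < x0 + v*t < pixel + 1
    ta = (pixel - w - x0) / v
    tb = (pixel + 1 - x0) / v
    enter, leave = min(ta, tb), max(ta, tb)
    start, end = max(enter, t0), min(leave, t1)
    return max(0.0, end - start) / (t1 - t0)

# An object of width 1 starting at x=0 and moving 4 pixels per frame
# covers pixel 2 while 1 < 4t < 3, i.e. for half of the frame:
print(coverage_fraction(0.0, 4.0, 1.0, 2))  # 0.5
```

With a single unoccluded object, the filtered pixel intensity is just the object's intensity times this fraction; the hidden-surface step in the algorithm would instead clip each object's subintervals against those of nearer objects.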

In cases where object attributes (shape, color, position, etc.) are either not explicitly defined or are too complex for efficient analysis, interpolation between the sampled values may be used. To obtain results closest to the source data, B-splines can be used to interpolate the attributes; where speed is a major concern, linear interpolation may be the better choice.
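The linear-interpolation fallback can be sketched as follows (helper names are illustrative): given attribute values sampled at frame times, the value at an intermediate time is recovered by lerping between the two surrounding samples.

```python
def lerp_attribute(times, values, t):
    """Linearly interpolate a sampled attribute (e.g. an object's
    x position) at time t from surrounding (time, value) samples."""
    for (ta, va), (tb, vb) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        if ta <= t <= tb:
            u = (t - ta) / (tb - ta)
            return va + u * (vb - va)
    raise ValueError("t outside sampled range")

# Position sampled at frames 0, 1 and 2:
print(lerp_attribute([0, 1, 2], [0.0, 10.0, 40.0], 1.5))  # 25.0
```

A cubic B-spline fitted through the same samples would smooth out the velocity jump at frame 1, at somewhat higher evaluation cost.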

Temporal anti-aliasing can be applied in image space for simple objects (such as a circle or disk) but more complex polygons could require some or all calculations for the above algorithm to be performed in object space.

In spatial anti-aliasing it is possible to determine the image intensity function by supersampling. Supersampling is also a valid approach in temporal anti-aliasing: the animation system can generate multiple (instead of just one) pixel intensity buffers for a single output frame. The primary advantage of supersampling is that it works with any image, independent of which objects are displayed or which rendering system is used.
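The supersampling approach can be sketched as follows (the render_frame callback, the sample count, and the toy one-row renderer are illustrative assumptions): render the scene at several instants inside the frame's exposure interval and average the resulting buffers.

```python
def temporal_supersample(render_frame, t0, t1, n=8):
    """Average n pixel buffers rendered at uniformly spaced instants
    inside the exposure interval [t0, t1]."""
    buffers = []
    for i in range(n):
        t = t0 + (i + 0.5) / n * (t1 - t0)   # midpoint of each sub-interval
        buffers.append(render_frame(t))
    width = len(buffers[0])
    return [sum(b[p] for b in buffers) / n for p in range(width)]

# Toy renderer: a one-pixel-wide white object crossing a 4-pixel row
# over the course of one frame.
def render_frame(t, width=4):
    return [1.0 if int(t * 4) == p else 0.0 for p in range(width)]

print(temporal_supersample(render_frame, 0.0, 1.0, n=8))
```

The fast-moving object is smeared evenly across the row, each pixel receiving a quarter of its intensity, rather than landing sharply in a single pixel as a one-sample render would produce.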


