Repost: Depth of Field

Depth of Field

by Kyle Schouviller




Depth of field refers to the distance in front of and beyond the subject that is in focus. The effect helps draw attention to the subject and away from the background (and foreground) elements. For example, if we were watching a person walk down the sidewalk, the people passing in front of and behind the subject might be blurry to show that they’re not the focus of this scene.



Fortunately, this isn’t a difficult effect to achieve in games, and it can add a great deal of atmosphere – but only if used correctly (read Loyd Case's article, Ten Tired Gaming Clichés, for more on overusing graphics tricks). In this tutorial, we’re going to create depth of field using a post-processing effect. We’ll do this by blending a blurred image with a sharp image, using a depth image to modulate between the two.



What follows is an overview of the method used to render depth of field. Following that is a short tutorial on implementing the effect, complete with code for use in the FlatRedBall engine. The engine includes color, depth and position buffering, and includes a post-processing engine in which you can easily add your own effects (and which makes the non-depth-of-field parts of this tutorial trivial - letting us focus on the meat of the issue).



Render Buffers

To perform depth of field, you need three full-screen images: the color buffer (your scene, fully rendered, with lighting), the depth buffer (a one-channel image that stores depth) and a blur buffer (the color buffer, blurred).



Color Buffer

The color buffer should contain the rendered scene – what it would look like without depth of field applied. To produce this, just render your scene as normal within a render target, then dump the result to a texture.



Depth Buffer

The depth buffer (not the built-in depth buffer, but our own "buffer" to hold depth information) stores the transformed Z position of each pixel. Combined with UV coordinates in a post-processing effect, this can be used to recover world position and determine an individual pixel’s distance from the camera. However, for this tutorial I suggest you stick with a three- or four-channel surface format and just store the world position of each pixel. It’ll save you some math headaches, and you can always change it back to a one-channel depth buffer later if you need to improve memory performance.
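Storing world position makes the distance calculation trivial in the post-process pass. As a minimal illustration (Python here stands in for the per-pixel shader math; the function name is mine, not from the article), the pixel's distance from the camera is just the length of the camera-to-pixel vector:

```python
import math

def pixel_distance(world_pos, camera_pos):
    """Distance from the camera to a pixel's stored world position.

    Equivalent to length(worldPos - cameraPos) in a shader.
    """
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(world_pos, camera_pos)))
```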



Post-processing

After the scene has been rendered to the appropriate buffers, it’s time for post-processing. Here is where we’ll generate the blur buffer, and then use all three buffers to create the depth of field effect.



Blur

For the blur buffer, we’re going to perform a Gaussian blur on the color buffer. To do this, take several texture samples around the current pixel, and then weight those samples according to a Gaussian distribution (i.e. bell curve). The formula for a Gaussian distribution is as follows:


f(x) = (1 / (SIGMA * sqrt(2 * PI))) * e^(-0.5 * (((x - MEAN) / SIGMA)^2))


This formula produces a Gaussian (normal) distribution – a bell curve. SIGMA (σ) represents the standard deviation of the curve – how spread out it is. MEAN (µ) represents the mean of the curve – where it is centered (for our purposes this will always be 0). For example, here are some normal distributions with standard deviations of 0.2 and 0.5:


Gaussian distributions 0.2 and 0.5
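To make the weighting concrete, here is a small Python sketch (not from the original article – FlatRedBall's sample precalculates these weights in C#, but the math is identical) that computes normalized Gaussian weights for three samples on each side of the center pixel:

```python
import math

def gaussian(x, sigma, mean=0.0):
    """Height of the Gaussian (normal) curve at x."""
    return (1.0 / (sigma * math.sqrt(2.0 * math.pi))) * \
        math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Weights for 3 samples on each side of the center pixel.
sigma = 1.0
offsets = range(-3, 4)
weights = [gaussian(o, sigma) for o in offsets]

# Normalize so the weights sum to 1 and the blur doesn't darken the image.
total = sum(weights)
weights = [w / total for w in weights]
```

Normalizing is important: without it, the blurred image would be slightly darker than the original, since the raw weights do not quite sum to 1 over a finite number of samples.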


To perform the blur, take a number of samples horizontally around the current pixel and blend them according to a Gaussian distribution: for each sample, find its distance from the center sampling point, look up the height of the Gaussian curve at that distance, and add the sample multiplied by that Gaussian value. When this is done for several samples around a center point, each sample is weighted by the Gaussian curve, blending them with the center sample to create the blurring effect. Like so:


Blending example


As you can see, three pixels on each side of the center are sampled, weighted according to a Gaussian curve, and combined to produce the final color. After the horizontal pass, the horizontally-blurred image is blurred vertically to produce the final, Gaussian-blurred image.


Blurred image (blur buffer)
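The two-pass (separable) idea can be sketched in Python on a plain list-of-lists image; the shader performs the same per-pixel work on the GPU, and the helper names here (`blur_1d`, `gaussian_blur`) are mine, not from the article's Dof.fx:

```python
def blur_1d(row, weights):
    """Weighted sum of neighbors for each element of a 1D sequence (edge-clamped)."""
    r = len(weights) // 2
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(weights):
            j = min(max(i + k - r, 0), n - 1)  # clamp sample index at the borders
            acc += row[j] * w
        out.append(acc)
    return out

def gaussian_blur(image, weights):
    """Horizontal pass, then vertical pass, over a list-of-lists grayscale image."""
    horiz = [blur_1d(row, weights) for row in image]
    # Transpose, blur the "rows" (which are now columns), then transpose back.
    cols = [blur_1d(list(col), weights) for col in zip(*horiz)]
    return [list(row) for row in zip(*cols)]
```

Doing the blur in two 1D passes needs only 2n samples per pixel instead of n² for a full 2D kernel, which is exactly why the tutorial blurs horizontally and then vertically.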


A great example of how varying sigma changes the shape of the curve can be found here: http://stat.wvu.edu/SRS/Modules/Normal/normal.html (Java required). Here is an image demonstrating the change in blur related to the change in sigma:


Blur comparison


Using the Buffers in an HLSL Effect to Create Depth of Field

Here’s where it all comes together. Set all three textures on the graphics device, then combine them in the pixel shader. First, sample each texture. Next, convert the depth value into world coordinates and map the pixel’s distance from the camera to a value between 0 and 1, based on how far it falls outside the focal distance. The focal distance is the distance from the camera at which you want the image to be sharp, and the focal range is the distance over which the image fades from entirely sharp to entirely blurred; outside of FocalDistance ± FocalRange, use the blur texture only. Finally, use this "FocalAmount" value to interpolate between the two sampled colors (sharp and blurred), and output the result as the screen color.


Combining color, blur, depth buffers to create depth of field
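The combination step can be sketched in Python (in the actual Dof.fx shader this is per-pixel HLSL using lerp/saturate; the parameter names `focal_distance` and `focal_range` mirror the FocalDistance/FocalRange values described above):

```python
def focal_amount(pixel_distance, focal_distance, focal_range):
    """0 = fully sharp (at the focal distance), 1 = fully blurred.

    Equivalent to saturate(abs(d - focalDistance) / focalRange) in HLSL.
    """
    t = abs(pixel_distance - focal_distance) / focal_range
    return min(max(t, 0.0), 1.0)

def lerp(a, b, t):
    """Linear interpolation, matching HLSL's lerp intrinsic."""
    return a + (b - a) * t

def final_color(sharp, blurred, pixel_distance, focal_distance, focal_range):
    """Blend the sharp and blurred samples by the focal amount."""
    t = focal_amount(pixel_distance, focal_distance, focal_range)
    return tuple(lerp(s, b, t) for s, b in zip(sharp, blurred))
```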


That's all there is to it!



Improving the Effect

This is obviously just a starting point, and you might notice that a linear region of focus doesn’t look quite like you’d want. You might try using a better curve to determine if a pixel is in focus or not – you might want to just have a square region of focus, and have everything outside that blurred. Maybe you want to use several blur buffers, so the image is gradually more blurry as it gets farther away from the focus. This is all determined by how you calculate your focal amount, and is pretty easy to do in the final combination shader. Go ahead and try these on the sample provided below!
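Each of these variations only changes how the focal amount is computed. As a hedged sketch (these functions are my illustrations, not code from the sample), the "square region of focus" and a smoother fade might look like:

```python
def focal_amount_square(pixel_distance, focal_distance, focal_range):
    """Fully sharp inside the region, fully blurred outside - no gradual fade."""
    return 0.0 if abs(pixel_distance - focal_distance) <= focal_range else 1.0

def focal_amount_smooth(pixel_distance, focal_distance, focal_range):
    """Smoothstep fade: eases in and out instead of a linear ramp."""
    t = abs(pixel_distance - focal_distance) / focal_range
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```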



Source: http://www.ziggyware.com/readarticle.php?article_id=157

In FlatRedBall

Up until now, I haven’t provided any code – not even algorithms. This is for two reasons. First, you should understand the algorithms before you try to code anything – especially with something like depth of field – because it makes problems a lot easier to debug as you code. Second, I didn’t want to break the code up; it’s better viewed as a whole:


The Code File: DepthOfField.cs
The Effect File: Dof.fx

These two files demonstrate an implementation of Depth of Field in the FlatRedBall engine. By using FlatRedBall, this demo can focus on just the important parts – the blurring algorithm and the combination of the different buffers. In fact, most of this is done in the shader – with the exception of the calculation of blur weightings (precalculated in the C# code). If you’d like to implement this sample in (or without) another engine, you’ll just need to have access to the image buffer and depth buffer (or render them yourself).



For instructions on how to add the effect to a FlatRedBall game, read the tutorial on adding post-processing effects here: Advanced Post-processing Effects in FRB.

Reposted from: https://www.cnblogs.com/jerryhong/articles/1082402.html
