Advanced Rendering: Depth of Field

https://catlikecoding.com/unity/tutorials/advanced-rendering/depth-of-field/

1 setting the scene

we perceive light because we sense photons hitting our retinas. likewise, cameras can record light because photons hit their film or image sensor. in all cases, light gets focused to produce sharp images, but not everything can be in focus at the same time. only things at a certain distance are in focus, while everything nearer or farther appears out of focus. this visual effect is known as depth-of-field. the specifics of how the out-of-focus projection looks is known as bokeh, which is Japanese for blur.

usually, we do not notice depth-of-field with our own eyes, because we are paying attention to what we are focusing on, not what lies outside our focus. it can be much more obvious in photos and videos, because we can look at parts of the image that were not in the camera's focus. although it is a physical limitation, bokeh can be used to great effect to guide the viewer's attention. thus, it is an artistic tool.

2 circle of confusion

the simplest form of camera is a perfect pinhole camera. like all cameras, it has an image plane on which the light is projected and recorded. in front of the image plane is a tiny hole, known as the aperture, just large enough to allow a single light ray to pass through it. things in front of the camera emit or reflect light in many directions, producing a lot of light rays. for every point, only a single ray is able to pass through the hole and gets recorded.

[figure]
isn't the image upside down?

indeed it is. all images recorded with cameras, including your eyes, are flipped. the image is flipped again during further processing, so you do not need to worry about it.

because only a single ray of light is captured per point, the image is always sharp. unfortunately, a single light ray is not very bright, so the resulting image is hardly visible. you would have to wait a while for enough light to accumulate to get a clear image, which means that this camera requires a long exposure time. this is not a problem when the scene is static, but everything that moves even a little will produce a lot of motion blur. so it is not a practical camera and cannot be used to record sharp videos.

to be able to reduce the exposure time, light has to accumulate more quickly. the only way to do this is by recording multiple light rays at the same time. this can be done by increasing the radius of the aperture. assuming the hole is round, this means that each point will be projected on the image plane with a cone of light instead of a line. so we receive more light, but it no longer ends up at a single point. instead, it gets projected as a disc. how large an area is covered depends on the distance between the point, hole, and image plane. the result is a brighter but unfocused image.

[figure]

to focus the light again, we have to somehow take an incoming cone of light and bring it back to a single point. this is done by putting a lens in the camera's hole. the lens bends light in such a way that dispersing light gets focused again. this can produce a bright and sharp projection, but only for points at a fixed distance from the camera. the light from points further away will not be focused enough, while the light from points too close to the camera is focused too much. in both cases we again end up projecting points as discs, their size depending on how much out of focus they are. this unfocused projection is known as the circle of confusion, CoC for short.

[figure]

2.1 visualizing the CoC

the radius of the circle of confusion is a measure of how out-of-focus the projection of a point is. (the "confusion" here simply means blur: the radius measures how far a point's projection is from being focused.)
let us begin by visualizing this value. add a constant to DepthOfFieldEffect to indicate our first pass, the circle-of-confusion pass. explicitly use this pass when blitting.

	const int circleOfConfusionPass = 0;

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		…

		Graphics.Blit(source, destination, dofMaterial, circleOfConfusionPass);
	}

(on the C# side we add a constant and use it when blitting.)

because the CoC depends on the distance from the camera, we need to read from the depth buffer. in fact, the depth buffer is exactly what we need, because a camera’s focus region is a plane parallel to the camera, assuming the lens and image plane are aligned and perfect. so sample from the depth texture, convert to linear depth and render that.

CGINCLUDE
		#include "UnityCG.cginc"

		sampler2D _MainTex, _CameraDepthTexture;
		…

	ENDCG

	SubShader {
		…

		Pass { // 0 circleOfConfusionPass
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
//					return tex2D(_MainTex, i.uv);
					half depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
					depth = LinearEyeDepth(depth);
					return depth;
				}
			ENDCG
		}
	}

2.2 choosing a simple CoC

we are not interested in the raw depth value, but the CoC value. to determine this, we have to decide on a focus distance. this is the distance between the camera and the focus plane, where everything is perfectly sharp. add a public field for it to our effect component, using a range like 0.1-100 and a default of 10.

[Range(0.1f, 100f)]
public float focusDistance = 10f;

[figure]

the size of the CoC increases with the distance of a point to the focus plane. the exact relationship depends on the camera and its configuration, which can get rather complex. it is possible to simulate real cameras, but we will use a simple focus range so it is easier to understand and control. our CoC will go from zero to its maximum inside this range, relative to the focus distance. give it a field with a range like 0.1-10 and a default of 3.

the physically based CoC formula is rather complex; see https://blog.csdn.net/wodownload2/article/details/95066148 for details.

[figure]

to simplify, we use coc = (d - f) / r, where

d is the depth of the point from the camera;
f is the distance from the camera to the focus plane;
r is the focus range.

to recap the end of section 2: we enlarged the hole to let more light rays in. assuming the hole is round, each point (the red, green and blue points in the figure) projects a cone of light onto the image plane instead of a single ray, so it lands as a disc rather than a point. how large the disc is depends on the relationship between the point, the hole, and the image plane. more light gets in, but the image is no longer focused.

[figure]

how do we focus it again? the hole is no longer empty but holds a convex lens, which converges light. in the figure only the light from the green point converges exactly on the image plane; the red point's light has not yet converged when it reaches the plane, and the blue point's light converged before the plane and spread out again. in both cases the point is projected as a circle, whose size depends on the lens's focus. this unfocused projection is called the circle of confusion, CoC for short.

[figure]

in summary, we only want a simple formula for the size of the CoC, relating three quantities: depth, focusDistance, and focusRange.

[Range(0.1f, 10f)]
public float focusRange = 3f;

These configuration options are needed by our shader, so set them before we blit.

dofMaterial.SetFloat("_FocusDistance", focusDistance);
dofMaterial.SetFloat("_FocusRange", focusRange);

Graphics.Blit(source, destination, dofMaterial, circleOfConfusionPass);

Add the required variables to the shader. As we use them to work with the depth buffer, we’ll use float as their type when calculating the CoC. As we’re using half-based HDR buffers, we’ll stick to half for other values, though this doesn’t matter on desktop hardware.

sampler2D _MainTex, _CameraDepthTexture;
float4 _MainTex_TexelSize;

float _FocusDistance, _FocusRange;

using the depth d, focus distance f, and focus range r, we can find the CoC via (d-f)/r

half4 FragmentProgram (Interpolators i) : SV_Target {
	float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
	depth = LinearEyeDepth(depth);
	float coc = (depth - _FocusDistance) / _FocusRange;
	return coc;
}

this results in positive CoC values for points beyond the focus distance, and negative CoC values for points in front of it. the values -1 and 1 represent the maximum CoC, so we should make sure the CoC values do not exceed that, by clamping.

float coc = (depth - _FocusDistance) / _FocusRange;
coc = clamp(coc, -1, 1);
return coc;
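The clamped formula is easy to sanity-check numerically. A minimal Python sketch of the same math (the function name and the sample values are hypothetical, not from the tutorial scene):

```python
def coc(depth, focus_distance, focus_range):
    # simplified circle of confusion: (d - f) / r, clamped to [-1, 1]
    c = (depth - focus_distance) / focus_range
    return max(-1.0, min(1.0, c))

# with focusDistance = 10 and focusRange = 3:
print(coc(10.0, 10.0, 3.0))  # 0.0  -> exactly on the focus plane
print(coc(11.5, 10.0, 3.0))  # 0.5  -> halfway out of focus, behind the plane
print(coc(4.0, 10.0, 3.0))   # -1.0 -> fully out of focus, in front of it
```

Positive values mark background points and negative ones foreground points, matching the shader's convention.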

we keep the negative CoC values so we can distinguish between foreground and background points. to see the negative CoC values, you could color them, let us say with red.

coc = clamp(coc, -1, 1);
if(coc < 0)
{
	return coc * -half4(1, 0, 0, 1);
}
return coc;

2.3 buffering the CoC

we will need the CoC to scale the point's projection, but we will do that in another pass. so we will store the CoC values in a temporary buffer. because we only need to store a single value, we can suffice with a single-channel texture, using RenderTextureFormat.RHalf. also, this buffer contains CoC data, not color values, so it should always be treated as linear data. let us make this explicit, even though we assume that we are rendering in linear space.

blit to the CoC buffer and then add a new blit to copy that buffer to the destination. finally, release the buffer.

RenderTexture coc = RenderTexture.GetTemporary(
			source.width, source.height, 0,
			RenderTextureFormat.RHalf, RenderTextureReadWrite.Linear
		);

		Graphics.Blit(source, coc, dofMaterial, circleOfConfusionPass);
		Graphics.Blit(coc, destination);

		RenderTexture.ReleaseTemporary(coc);

because we are using a texture that only has an R channel, the entire CoC visualization is now red. we need to store the actual CoC values, so remove the coloration of the negative ones. also, we can change the return type of the fragment function to a single value. (note: yes, a fragment function can return a single number instead of a four-component vector.)

//				half4
				half FragmentProgram (Interpolators i) : SV_Target {
					float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
					depth = LinearEyeDepth(depth);

					float coc = (depth - _FocusDistance) / _FocusRange;
					coc = clamp(coc, -1, 1);
//					if (coc < 0) {
//						return coc * -half4(1, 0, 0, 1);
//					}
					return coc;
				}

the purpose of the code above is to store the CoC values in a texture; later we will use the CoC to scale each point's projection. in unity, buffering a value like this is done with render-to-texture: the Graphics.Blit call processes source with the circleOfConfusionPass of the dofMaterial material and writes the result to coc, a texture with the same width and height as the screen.

how does this pass compute each pixel's value? it samples the depth buffer and converts the raw value with unity's built-in LinearEyeDepth function into the point's actual linear z depth, which lies between the camera's near and far planes. it then applies the simplified CoC formula above, keeps the result in the -1..1 range with clamp, and stores the value in the texture for later use.

3 bokeh

while the CoC determines the strength of the bokeh effect per point, it is the aperture that determines how it looks. basically, an image is composed of many projections of the aperture's shape on the image plane. so one way to create bokeh is to render a sprite for each texel using its color, with size and opacity based on its CoC. this approach is actually used in some cases, but it can be very expensive due to the massive amount of overdraw.

another approach is to work in the opposite direction. instead of projecting a single fragment onto many, each fragment accumulates colors from all texels that might influence it. this technique does not require the generation of extra geometry, but requires taking many texture samples. we will use this approach.

(translator's note: apologies, but I do not fully understand the paragraph above.)

3.1 accumulating the bokeh

create a new pass for the generation of the bokeh effect. start by simply passing through the colors of the main texture. we do not care about its alpha channel.

Pass { // 0 circleOfConfusionPass
			…
		}

		Pass { // 1 bokehPass
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					half3 color = tex2D(_MainTex, i.uv).rgb;
					return half4(color, 1);
				}
			ENDCG
		}

use this pass for the second and final blit, with the source texture as input. we will ignore the CoC data for now, assuming that the entire image is fully out of focus.

const int circleOfConfusionPass = 0;
	const int bokehPass = 1;

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		…

		Graphics.Blit(source, coc, dofMaterial, circleOfConfusionPass);
		Graphics.Blit(source, destination, dofMaterial, bokehPass);
//		Graphics.Blit(coc, destination);

		RenderTexture.ReleaseTemporary(coc);
	}

to create the bokeh effect, we have to average the colors around the fragment that we are working on. we will start by taking the average of a block of 9x9 texels centered on the current fragment. this requires 81 samples in total.

half4 FragmentProgram (Interpolators i) : SV_Target {
//					half3 color = tex2D(_MainTex, i.uv).rgb;
					half3 color = 0;
					for (int u = -4; u <= 4; u++) {
						for (int v = -4; v <= 4; v++) {
							float2 o = float2(u, v) * _MainTex_TexelSize.xy;
							color += tex2D(_MainTex, i.uv + o).rgb;
						}
					}
					color *= 1.0 / 81;
					return half4(color, 1);
				}

the code above accumulates the 9x9 block of pixels around the current pixel and averages them. this is simply a blur.
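The same averaging can be sketched outside the shader. A Python version of the 9x9 box blur for a single pixel (the nested-list image representation and the border clamping are my own assumptions; the shader just samples past the edge):

```python
def box_blur_at(image, x, y, radius=4):
    # average the (2 * radius + 1)^2 block centered on (x, y),
    # clamping sample coordinates at the image border
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for v in range(-radius, radius + 1):
        for u in range(-radius, radius + 1):
            sx = min(max(x + u, 0), w - 1)
            sy = min(max(y + v, 0), h - 1)
            total += image[sy][sx]
            count += 1
    return total / count  # count is 81 for radius 4

# a uniform image is unchanged by a box blur
flat = [[0.5] * 16 for _ in range(16)]
print(box_blur_at(flat, 8, 8))  # 0.5
```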

the result is a blockier image. effectively, we are using a square aperture. the image also became brighter, because the influence of very bright fragments gets spread out over a larger area. this is more like bloom than depth-of-field, but an exaggerated effect makes it easier to see what’s going on. so we will keep it too bright for now and tone it down later.


because our simple bokeh approach is based on the texel size, its visual size depends on the target resolution. lowering the resolution increases the texel size, which increases the aperture and the bokeh effect. for the rest of this tutorial, I will use half-resolution screenshots to make individual texels easier to see. as a result, the dimensions of the bokeh shape are doubled.

gathering samples in a 9x9 texel block already requires 81 samples, which is a lot. if we wanted to double the bokeh dimensions, we would need to increase that to a 17x17 block. that would require 289 samples per fragment, which is far too much. but we can double the sampling area without increasing the amount of samples, by simply doubling the sample offset.

float2 o = float2(u, v) * _MainTex_TexelSize.xy * 2;

we have now doubled the radius of the aperture, but we are taking too few samples to cover it entirely. the resulting undersampling breaks up the bokeh.

3.2 round bokeh

[figure]

the ideal aperture is round, not square, and produces a bokeh consisting of many overlapping discs. we can turn our bokeh shape into a disc with a radius of four steps by simply discarding those samples that have too large an offset. how many samples are included determines the weight of the accumulated color, which we can use to normalize it.

half3 color = 0;
					float weight = 0;
					for (int u = -4; u <= 4; u++) {
						for (int v = -4; v <= 4; v++) {
//							float2 o = float2(u, v) * _MainTex_TexelSize.xy * 2;
							float2 o = float2(u, v);
							if (length(o) <= 4) {
								o *= _MainTex_TexelSize.xy * 2;
								color += tex2D(_MainTex, i.uv + o).rgb;
								weight += 1;
							}
						}
					}
					color *= 1.0 / weight;
					return half4(color, 1);
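Discarding offsets longer than four steps keeps 49 of the 81 grid samples, so the accumulated weight ends up at 49 for most fragments. A quick Python check (the count is derived here; the tutorial does not state it):

```python
import math

# offsets of the 9x9 grid that fall inside a disc of radius 4
inside = [(u, v)
          for u in range(-4, 5)
          for v in range(-4, 5)
          if math.hypot(u, v) <= 4]
print(len(inside))  # 49 of the 81 samples survive
```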

instead of checking whether each sample is valid, we can define an array with all offsets that matter and loop through it. this also means that we are not constrained to a regular grid. as we are sampling a disc, it makes more sense to use a pattern of spirals or concentric circles. instead of coming up with a pattern ourselves, let us use one of the sampling kernels of unity's post effect stack v2, defined in DiskKernels.hlsl. these kernels contain offsets within the unit circle. the smallest kernel consists of 16 samples: a center point with a ring of 5 samples around it and another ring of 10 samples around that.

we sample along concentric circles: the 16-sample kernel consists of the center point itself, a first ring of 5 points, and a second ring of 10 points.

// From https://github.com/Unity-Technologies/PostProcessing/
				// blob/v2/PostProcessing/Shaders/Builtins/DiskKernels.hlsl
				static const int kernelSampleCount = 16;
				static const float2 kernel[kernelSampleCount] = {
					float2(0, 0),
					float2(0.54545456, 0),
					float2(0.16855472, 0.5187581),
					float2(-0.44128203, 0.3206101),
					float2(-0.44128197, -0.3206102),
					float2(0.1685548, -0.5187581),
					float2(1, 0),
					float2(0.809017, 0.58778524),
					float2(0.30901697, 0.95105654),
					float2(-0.30901703, 0.9510565),
					float2(-0.80901706, 0.5877852),
					float2(-1, 0),
					float2(-0.80901694, -0.58778536),
					float2(-0.30901664, -0.9510566),
					float2(0.30901712, -0.9510565),
					float2(0.80901694, -0.5877853),
				};

				half4 FragmentProgram (Interpolators i) : SV_Target { … }
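The structure of this small kernel, a center point plus rings of 5 and 10 samples, can be verified from the offsets themselves. A Python sketch using the same coordinates (the ring radii, roughly 6/11 and 1, are inferred from the values, not stated by the tutorial):

```python
import math

kernel = [
    (0.0, 0.0),
    (0.54545456, 0.0), (0.16855472, 0.5187581), (-0.44128203, 0.3206101),
    (-0.44128197, -0.3206102), (0.1685548, -0.5187581),
    (1.0, 0.0), (0.809017, 0.58778524), (0.30901697, 0.95105654),
    (-0.30901703, 0.9510565), (-0.80901706, 0.5877852), (-1.0, 0.0),
    (-0.80901694, -0.58778536), (-0.30901664, -0.9510566),
    (0.30901712, -0.9510565), (0.80901694, -0.5877853),
]

lengths = [math.hypot(x, y) for x, y in kernel]
center = sum(1 for l in lengths if l < 1e-6)                   # the single center sample
inner = sum(1 for l in lengths if abs(l - 6.0 / 11.0) < 1e-3)  # first ring
outer = sum(1 for l in lengths if abs(l - 1.0) < 1e-4)         # second ring
print(center, inner, outer)  # 1 5 10
```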

loop through these offsets and use them to accumulate samples. to keep the same disk radius, multiply the offsets by 8.

half4 FragmentProgram (Interpolators i) : SV_Target {
					half3 color = 0;
//					float weight = 0;
//					for (int u = -4; u <= 4; u++) {
//						for (int v = -4; v <= 4; v++) {
//							…
//						}
//					}
					for (int k = 0; k < kernelSampleCount; k++) {
						float2 o = kernel[k];
						o *= _MainTex_TexelSize.xy * 8;
						color += tex2D(_MainTex, i.uv + o).rgb;
					}
					color *= 1.0 / kernelSampleCount;
					return half4(color, 1);
				}

the more samples a kernel has, the higher its quality. DiskKernels.hlsl contains a few and you should copy them all and compare them. for this tutorial, I will use the medium kernel, which also has two rings but a total of 22 samples.

#define BOKEH_KERNEL_MEDIUM

				// From https://github.com/Unity-Technologies/PostProcessing/
				// blob/v2/PostProcessing/Shaders/Builtins/DiskKernels.hlsl
				#if defined(BOKEH_KERNEL_SMALL)
					static const int kernelSampleCount = 16;
					static const float2 kernel[kernelSampleCount] = { … };
				#elif defined (BOKEH_KERNEL_MEDIUM)
					static const int kernelSampleCount = 22;
					static const float2 kernel[kernelSampleCount] = {
						float2(0, 0),
						float2(0.53333336, 0),
						float2(0.3325279, 0.4169768),
						float2(-0.11867785, 0.5199616),
						float2(-0.48051673, 0.2314047),
						float2(-0.48051673, -0.23140468),
						float2(-0.11867763, -0.51996166),
						float2(0.33252785, -0.4169769),
						float2(1, 0),
						float2(0.90096885, 0.43388376),
						float2(0.6234898, 0.7818315),
						float2(0.22252098, 0.9749279),
						float2(-0.22252095, 0.9749279),
						float2(-0.62349, 0.7818314),
						float2(-0.90096885, 0.43388382),
						float2(-1, 0),
						float2(-0.90096885, -0.43388376),
						float2(-0.6234896, -0.7818316),
						float2(-0.22252055, -0.974928),
						float2(0.2225215, -0.9749278),
						float2(0.6234897, -0.7818316),
						float2(0.90096885, -0.43388376),
					};
				#endif

this gives us a higher-quality kernel, while it is still easy to distinguish between the two sample rings, so we can see how our effect works.

3.3 blurring the bokeh

although a specialized sampling kernel is better than using a regular grid, it still requires a lot of samples to get a decent bokeh. to cover more ground with the same amount of samples, we can create the effect at half resolution, just like the bloom effect. this will blur the bokeh somewhat, but that is an acceptable price to pay.


to work at half resolution, we have to first blit to a half-size texture, create the bokeh at that resolution, then blit back to full resolution. this requires two additional temporary textures.

int width = source.width / 2;
int height = source.height / 2;
RenderTextureFormat format = source.format;
RenderTexture dof0 = RenderTexture.GetTemporary(width, height, 0, format);
RenderTexture dof1 = RenderTexture.GetTemporary(width, height, 0, format);

Graphics.Blit(source, coc, dofMaterial, circleOfConfusionPass);
Graphics.Blit(source, dof0); // down-sample to half resolution
Graphics.Blit(dof0, dof1, dofMaterial, bokehPass); // bokeh at half resolution, result in dof1
Graphics.Blit(dof1, destination); // scale back up to full resolution
RenderTexture.ReleaseTemporary(coc);
RenderTexture.ReleaseTemporary(dof0);
RenderTexture.ReleaseTemporary(dof1);

to keep the same bokeh size, we also have to halve the sampling offset.

o *= _MainTex_TexelSize.xy * 4;

[figure]
we are getting more solid bokeh, but the samples are still not fully connected. we either have to reduce the bokeh size or blur more. instead of down-sampling further, we will add an extra blur pass after creating the bokeh: a postfilter pass.

const int postFilterPass = 2;

we perform the postfilter pass at half resolution, for which we can reuse the first temporary half-size texture: dof1 is filtered back into dof0.

Graphics.Blit(source, coc, dofMaterial, circleOfConfusionPass);
Graphics.Blit(source, dof0);
Graphics.Blit(dof0, dof1, dofMaterial, bokehPass);
Graphics.Blit(dof1, dof0, dofMaterial, postFilterPass);
Graphics.Blit(dof0, destination);

the post-filter pass will perform a small Gaussian blur while staying at the same resolution, by using a box filter with a half-texel offset. this leads to overlapping samples, creating a 3x3 kernel known as a tent filter, after its tent-like shape.

[figure]

Pass { // 2 postFilterPass
	CGPROGRAM
		#pragma vertex VertexProgram
		#pragma fragment FragmentProgram

		half4 FragmentProgram (Interpolators i) : SV_Target {
			float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;
			half4 s =
				tex2D(_MainTex, i.uv + o.xy) +
				tex2D(_MainTex, i.uv + o.zy) +
				tex2D(_MainTex, i.uv + o.xw) +
				tex2D(_MainTex, i.uv + o.zw);
			return s * 0.25;
		}
	ENDCG
}

note that on its own this is just an average of four samples taken at diagonal half-texel offsets, not a true Gaussian blur. because each bilinear sample itself averages a 2x2 block of texels, the four footprints overlap and together weight a 3x3 block like a tent. sampling nine points with explicit weights would be closer to a proper Gaussian blur; the half-texel trick gets a similar result with only four samples.

[figure]
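The tent weights can be derived explicitly. A Python sketch that expands the four half-texel-offset samples into per-texel weights (assuming bilinear filtering, so a sample landing exactly on a texel corner averages the four surrounding texels):

```python
from collections import defaultdict

weights = defaultdict(float)
for sx in (-0.5, 0.5):
    for sy in (-0.5, 0.5):
        # each of the four samples is averaged with weight 0.25, and the
        # bilinear fetch spreads it over a 2x2 block with weight 0.25 each
        for tx in (int(sx - 0.5), int(sx + 0.5)):
            for ty in (int(sy - 0.5), int(sy + 0.5)):
                weights[(tx, ty)] += 0.25 * 0.25

print([weights[(x, -1)] for x in (-1, 0, 1)])  # [0.0625, 0.125, 0.0625]
print(weights[(0, 0)])  # 0.25, the center of the 3x3 tent
```

The resulting weights form the 1-2-1 tent pattern in both directions: a 3x3 kernel of 1/16 at the corners, 2/16 on the edges, and 4/16 in the center.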

3.4 bokeh size

thanks to the postfilter pass, our bokeh looks acceptable at a radius of 4 half-resolution texels. (the radius of 4 is the factor in o *= _MainTex_TexelSize.xy * 4; in the shader's bokehPass, which scales the sample offsets.)

the bokeh is not perfectly smooth, which can be interpreted as the effect of an imperfect or dirty lens. but this is only really visible for very bright projections on top of darker backgrounds, and we are currently greatly exaggerating the effect. you might prefer a higher-quality bokeh at reduced size, or an even larger but worse one. so let's make the bokeh radius configurable via a field, with range 1-10 and a default of 4, expressed in half-resolution texels. we should not use a radius smaller than 1, because then we mostly end up with simply the blur effect from the downsampling instead of a bokeh effect.

[Range(1f, 10f)]
public float bokehRadius = 4f;

pass the radius to the shader.

dofMaterial.SetFloat("_BokehRadius", bokehRadius);

add a shader variable for it, again using a float as it is used for texture sampling.

float _BokehRadius, _FocusDistance, _FocusRange;

finally, use the configurable radius instead of the fixed value 4.

float2 o = kernel[k];
o *= _MainTex_TexelSize.xy * _BokehRadius;

4 focusing

by now we can determine the size of the circle of confusion and we can create the bokeh effect at maximum size. the next step is to combine these to render a variable bokeh, simulating camera focus.

4.1 downsampling CoC

because we are creating bokeh at half resolution, we also need the CoC data at half resolution. a default blit or texture sample simply averages adjacent texels, which does not make sense for depth values or things derived from them, like the CoC. so we will have to down-sample ourselves, with a custom prefilter pass.


const int circleOfConfusionPass = 0;
const int preFilterPass = 1;
const int bokehPass = 2;
const int postFilterPass = 3;

besides the source texture, the prefilter pass also needs to read from the CoC texture. so pass it to the shader before down-sampling.

the prefilter pass needs two inputs: the source image and the CoC data. the CoC data is passed to the shader with SetTexture. the prefilter result is stored in dof0, the bokeh pass then writes to dof1, and finally the postfilter pass blurs dof1 back into dof0.

dofMaterial.SetTexture("_CoCTex", coc);

Graphics.Blit(source, coc, dofMaterial, circleOfConfusionPass);
Graphics.Blit(source, dof0, dofMaterial, preFilterPass);
Graphics.Blit(dof0, dof1, dofMaterial, bokehPass);
Graphics.Blit(dof1, dof0, dofMaterial, postFilterPass);
Graphics.Blit(dof0, destination);

add the corresponding texture variable to the shader.

sampler2D _MainTex,  _CameraDepthTexture, _CoCTex;

next, create a new pass to perform the down-sampling. the color data can be retrieved with a single texture sample, but the CoC value needs special care. begin by sampling from the four high-resolution texels corresponding to the low-resolution texel, and average them. store the result in the alpha channel.

Pass { // 1 preFilterPass
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					float4 o = _MainTex_TexelSize.xyxy * float2(-0.5, 0.5).xxyy;
					half coc0 = tex2D(_CoCTex, i.uv + o.xy).r;
					half coc1 = tex2D(_CoCTex, i.uv + o.zy).r;
					half coc2 = tex2D(_CoCTex, i.uv + o.xw).r;
					half coc3 = tex2D(_CoCTex, i.uv + o.zw).r;
					
					half coc = (coc0 + coc1 + coc2 + coc3) * 0.25;

					return half4(tex2D(_MainTex, i.uv).rgb, coc);
				}
			ENDCG
		}

		Pass { // 2 bokehPass
			…
		}

		Pass { // 3 postFilterPass
			…
		}

this would be a regular down-sample, but we do not want that. instead, we will just take the most extreme CoC value of the four texels, either positive or negative.

//					half coc = (coc0 + coc1 + coc2 + coc3) * 0.25;
					half cocMin = min(min(min(coc0, coc1), coc2), coc3);
					half cocMax = max(max(max(coc0, coc1), coc2), coc3);
					half coc = cocMax >= -cocMin ? cocMax : cocMin;

for source we take a plain sample and store it in rgb; for the CoC, instead of averaging the four diagonal texels, we keep the most extreme of the four values and store it in the a channel.
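The ternary in the shader picks whichever of the four CoC values has the largest magnitude while preserving its sign. A Python sketch of the same selection (the sample values are hypothetical):

```python
def downsample_coc(coc0, coc1, coc2, coc3):
    # keep the most extreme CoC of the four texels, positive or negative,
    # instead of averaging them
    coc_min = min(coc0, coc1, coc2, coc3)
    coc_max = max(coc0, coc1, coc2, coc3)
    return coc_max if coc_max >= -coc_min else coc_min

print(downsample_coc(0.1, 0.2, -0.9, 0.3))  # -0.9, the foreground disc wins
print(downsample_coc(0.1, 0.8, -0.2, 0.0))  # 0.8, the background disc wins
```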

4.2 using the correct CoC
to use the correct CoC radius, we have to scale the CoC value by the bokeh radius when we calculate it in the first pass.

float coc = (depth - _FocusDistance) / _FocusRange;
coc = clamp(coc, -1, 1) * _BokehRadius;

to determine whether a kernel sample contributes to the bokeh of a fragment, we have to check whether the CoC of that sample overlaps this fragment. we need to know the kernel radius used for this sample, which is simply the length of its offset, so compute it. we measure this in texels, so we have to do this before compensating for the texel size.

for (int k = 0; k < kernelSampleCount; k++) 
{
	float2 o = kernel[k] * _BokehRadius;
//	o *= _MainTex_TexelSize.xy * _BokehRadius;
	half radius = length(o);
	o *= _MainTex_TexelSize.xy;
	color += tex2D(_MainTex, i.uv + o).rgb;
}

if the sample’s CoC is at least as large as the kernel radius used for its offset, then that point’s projection ends up overlapping the fragment. if not, then that point does not influence this fragment and should be skipped. this means that we must once again keep track of the accumulated color’s weight to normalize it.

we decide per sample, based on its CoC value, whether it blurs into the current fragment. because the CoC is derived from depth, depth now controls how much each point is blurred.

half3 color = 0;
half weight = 0;
for (int k = 0; k < kernelSampleCount; k++) {
	float2 o = kernel[k] * _BokehRadius;
	half radius = length(o);
	o *= _MainTex_TexelSize.xy;
//	color += tex2D(_MainTex, i.uv + o).rgb;
	half4 s = tex2D(_MainTex, i.uv + o);

	if (abs(s.a) >= radius) {
		color += s.rgb;
		weight += 1;
	}
}
color *= 1.0 / weight;

(translator's note: this part is not easy to understand.)
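The rejection test can be restated outside the shader: a kernel sample only contributes if its own CoC disc is large enough to reach back to the fragment being shaded. A Python sketch with hypothetical samples (each is an offset in texels plus the color and CoC read at that offset; the CoC is already scaled by the bokeh radius, as in the shader):

```python
import math

# (offset from the fragment, color, coc) for three kernel samples
samples = [
    ((0.0, 0.0), 1.0, 4.0),  # the fragment itself, strongly blurred
    ((2.0, 0.0), 0.5, 4.0),  # blurred neighbor: its disc of radius 4 covers offset 2
    ((3.0, 0.0), 9.0, 0.5),  # bright in-focus neighbor: its disc is far too small
]

color, weight = 0.0, 0.0
for offset, sample_color, sample_coc in samples:
    radius = math.hypot(*offset)
    if abs(sample_coc) >= radius:  # does the sample's disc overlap this fragment?
        color += sample_color
        weight += 1.0
print(color / weight)  # 0.75: the in-focus neighbor does not bleed in
```

This is why a sharp, bright point no longer leaks into the blurred background around it.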

4.3 smoothing the sampling

our image now contains bokeh discs of varying sizes, but the transition between sizes is abrupt. to see why this happens, increase the bokeh size so you can see the individual samples, and play with the focus range or distance.
