advanced rendering: bloom

https://catlikecoding.com/unity/tutorials/advanced-rendering/bloom/

1 setting the scene

the amount of light that a display can produce is limited. it can go from black to full brightness, which in shaders correspond to RGB values 0 and 1. this is known as the low dynamic range (LDR) for light. how bright a fully white pixel is varies per display and can be adjusted by the user, but it is never going to be blinding.

real life is not limited to LDR light. there is no maximum brightness. the more photons arrive at the same time, the brighter something appears, until it becomes painful to look at or even blinding. directly looking at the sun will damage your eyes.

to represent very bright colors, we can go beyond LDR into the high dynamic range (HDR). this simply means that we do not enforce a maximum of 1. shaders have no trouble working with HDR colors, as long as the input and output formats can store values greater than 1. however, displays cannot go beyond their maximum brightness, so the final color is still clamped to LDR.

to make HDR colors visible, they have to be mapped to LDR, which is known as tonemapping. this boils down to nonlinearly darkening the image, so it becomes possible to distinguish between originally HDR colors. this is somewhat analogous to how our eyes adapt to deal with bright scenes, although tonemapping is constant. there is also the auto-exposure technique, which adjusts the image brightness dynamically. both can be used together. but our eyes are not always able to adapt sufficiently. some scenes are simply too bright, which makes it harder for us to see. how could we show this effect, while limited to LDR displays?
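
as an aside, the simplest tonemapping operators are exactly such darkening curves. for illustration only (this tutorial will not use it), the classic reinhard operator maps a color c to c / (1 + c):

		// illustration only, not part of this tutorial: reinhard tonemapping.
		// it maps c to c / (1 + c), so 0.1 becomes ~0.09 while 10 becomes ~0.91:
		// dark values are barely touched, arbitrarily bright ones stay below 1.
		half3 ReinhardTonemap (half3 c) {
			return c / (1 + c);
		}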

bloom is an effect which messes up an image by making a pixel's color bleed into adjacent pixels. it is like blurring an image, but based on brightness. this way, we could communicate overbright colors via blurring. it is somewhat similar to how light can diffuse inside our eyes, which can become noticeable in case of high brightness, but it is mostly a nonrealistic effect.

many people dislike bloom because it messes up otherwise crisp images and makes things appear to glow unrealistically. this is not an inherent fault of bloom; it is simply how it happens to be used a lot. if you are aiming for realism, use bloom in moderation, when it makes sense. bloom can also be used artistically for nonrealistic effects. examples are dream sequences, to indicate wooziness, or for creative scene transitions.

1.1 bloom scene

we are going to create our own bloom effect via a camera post-effect component, similar to how we created the deferred fog effect in rendering 14, fog. while you can start with a new project or continue from that tutorial, i used the previous advanced rendering tutorial, surface displacement, as the basis for this project.

create a new scene with default lighting. put a bunch of bright objects inside it, on a dark background. i used a black plane with a bunch of solid white, yellow, green, and red cubes and spheres of varying sizes. make sure that the camera has HDR enabled. also set the project to use linear color space, so we can best see the effect.

normally, you would apply tonemapping to a scene with linear HDR rendering. you could do auto-exposure first, then apply bloom, and then perform the final tonemapping. but in this tutorial we will focus on bloom exclusively and will not apply any other effects. this means that all colors that end up beyond LDR will be clamped in the final image.

1.2 bloom effect
create a new BloomEffect component. just like DeferredFogEffect, have it execute in edit mode and give it an OnRenderImage method. initially it does nothing extra, so just blit from the source to the destination render texture.

using UnityEngine;
using System;

[ExecuteInEditMode]
public class BloomEffect : MonoBehaviour {

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		Graphics.Blit(source, destination);
	}
}

let us also apply this effect to the scene view, so it is easier to see the effect from varying points of view. this is done by adding the ImageEffectAllowedInSceneView attribute to the class.

[ExecuteInEditMode, ImageEffectAllowedInSceneView]
public class BloomEffect : MonoBehaviour { … }

add this component as the only effect to the camera object. this completes our test scene.

2 blurring

the bloom effect is created by taking the original image, blurring it somehow, then combining the result with the original image. so to create bloom, we must first be able to blur an image.

2.1 rendering to another texture

applying an effect is done via rendering from one render texture to another. if we could perform all the work in a single pass, then we could simply blit from the source to the destination, using an appropriate shader. but blurring is a lot of work, so let us introduce an intermediate step. we first blit from the source to a temporary texture, then from that texture to the final destination.

getting hold of a temporary render texture is best done by invoking RenderTexture.GetTemporary. this method takes care of managing temporary textures for us, creating, caching, and destroying them as unity sees fit. at minimum, we have to specify the texture’s dimensions. we will start with the same size as the source texture.

void OnRenderImage (RenderTexture source, RenderTexture destination) {
		RenderTexture r = RenderTexture.GetTemporary(
			source.width, source.height
		);
		
		Graphics.Blit(source, destination);
	}

as we are going to blur the image, we are not going to do anything with the depth buffer. to indicate that, use 0 as the third parameter.

RenderTexture r = RenderTexture.GetTemporary(
			source.width, source.height, 0
		);

because we are using HDR, we have to use an appropriate texture format. as the camera should have HDR enabled, the source’s format will be correct, so we can use that. it is most likely ARGBHalf, but another format may be used.

RenderTexture r = RenderTexture.GetTemporary(
			source.width, source.height, 0, source.format
		);

instead of blitting from source to destination directly, now first blit from the source to the temporary texture, then from that to the destination.

//		Graphics.Blit(source, destination);
		Graphics.Blit(source, r);
		Graphics.Blit(r, destination);

after that, we no longer need the temporary texture. to make it available for reuse, release it by invoking RenderTexture.ReleaseTemporary.

Graphics.Blit(source, r);
		Graphics.Blit(r, destination);
		RenderTexture.ReleaseTemporary(r);

although the result still looks the same, we are now moving it through a temporary texture.

2.2 downsampling

blurring an image is done by averaging pixels. for each pixel, we have to decide on a bunch of nearby pixels to combine. which pixels are included defines the filter kernel used for the effect. a little blurring can be done by averaging only a few pixels, which means a small kernel. a lot of blurring would require a large kernel, combining many pixels.

the more pixels there are in the kernel, the more times we have to sample the input texture. as this is per pixel, a large kernel can require a huge amount of sampling work. so let us keep it as simple as possible.
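
to put a number on it: a 9×9 kernel needs 81 samples per pixel, so at 1920×1080 a single pass would already cost 2,073,600 × 81 ≈ 168 million texture reads.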

the simplest and quickest way to average pixels is to take advantage of the bilinear filtering built into the GPU. if we halve the resolution of the temporary texture, then we end up with one pixel for each group of four source pixels. the lower-resolution pixel will be sampled exactly in between the original four, so we end up with their average. we do not even have to use a custom shader for that.


int width = source.width / 2;
int height = source.height / 2;
RenderTextureFormat format = source.format;
RenderTexture r = RenderTexture.GetTemporary(width, height, 0, format);

to make this work, first refactor-rename r to currentDestination. after the first blit, add an explicit currentSource variable and assign currentDestination to it, then use that for the final blit and release it.

RenderTexture currentDestination = RenderTexture.GetTemporary(width, height, 0, format);
Graphics.Blit(source, currentDestination);
RenderTexture currentSource = currentDestination;
Graphics.Blit(currentSource, destination);
RenderTexture.ReleaseTemporary(currentSource);

now we can put a loop in between, blitting from the current source to progressively smaller temporary textures, halving the resolution each iteration, before the final blit to the destination.
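
the snippet for this loop is not included in this post; a sketch of what it presumably looks like, assuming an iterations field that controls how many times we halve the resolution:

[Range(1, 16)]
public int iterations = 4;

void OnRenderImage (RenderTexture source, RenderTexture destination) {
	int width = source.width / 2;
	int height = source.height / 2;
	RenderTextureFormat format = source.format;

	RenderTexture currentDestination =
		RenderTexture.GetTemporary(width, height, 0, format);
	Graphics.Blit(source, currentDestination);
	RenderTexture currentSource = currentDestination;

	// each further iteration halves the resolution again, widening the blur
	for (int i = 1; i < iterations; i++) {
		width /= 2;
		height /= 2;
		currentDestination = RenderTexture.GetTemporary(width, height, 0, format);
		Graphics.Blit(currentSource, currentDestination);
		RenderTexture.ReleaseTemporary(currentSource);
		currentSource = currentDestination;
	}

	Graphics.Blit(currentSource, destination);
	RenderTexture.ReleaseTemporary(currentSource);
}

each bilinear down-sample doubles the effective kernel size, so a few iterations already produce a strong blur.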

2.5 custom shading
to improve our blurring, we have to switch to a more advanced filter kernel than simple bilinear filtering.
this requires us to use a custom shader, so create a new bloom shader. just like the deferred fog shader, begin with a simple shader that has a _MainTex property, has no culling, and does not use the depth buffer. give it a single pass with a vertex and fragment program.

Shader "Custom/Bloom" {
	Properties {
		_MainTex ("Texture", 2D) = "white" {}
	}

	SubShader {
		Cull Off
		ZTest Always
		ZWrite Off

		Pass {
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram
			ENDCG
		}
	}
}

the vertex program is even simpler than the one for the fog effect. it only has to transform the vertex position to clip space and pass through the texture coordinates of the full-screen quad. because we will end up with multiple passes, everything except the fragment program can be shared and defined in a CGINCLUDE block. (i did not quite get what the vertices and uv are here at first: when blitting, unity renders a quad that covers the whole target, so the vertices are its four corners and the uv coordinates run from 0 to 1 across it.)

	Properties {
		_MainTex ("Texture", 2D) = "white" {}
	}

	CGINCLUDE
		#include "UnityCG.cginc"

		sampler2D _MainTex;

		struct VertexData {
			float4 vertex : POSITION;
			float2 uv : TEXCOORD0;
		};

		struct Interpolators {
			float4 pos : SV_POSITION;
			float2 uv : TEXCOORD0;
		};

		Interpolators VertexProgram (VertexData v) {
			Interpolators i;
			i.pos = UnityObjectToClipPos(v.vertex);
			i.uv = v.uv;
			return i;
		}
	ENDCG

	SubShader { … }

we will define the FragmentProgram function in the pass itself. initially, sample the source texture and use that as the result, multiplied so that only the red channel survives, to verify that we are using our custom shader. typically HDR colors are stored in half-precision format, so let us be explicit and use half instead of float, even though this makes no difference for non-mobile platforms.

Pass {
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					return tex2D(_MainTex, i.uv) * half4(1, 0, 0, 0);
				}
			ENDCG
		}

add a public field to our effect to hold a reference to this shader, and hook it up in the inspector.

public Shader bloomShader;


add a field to hold the material that will use this shader, which does not need to be serialized. before rendering, check whether we have this material and if not create it. we do not need to see it in the hierarchy and neither do we need to save it, so set its hideFlags accordingly.

	[NonSerialized]
	Material bloom;

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		if (bloom == null) {
			bloom = new Material(bloomShader);
			bloom.hideFlags = HideFlags.HideAndDontSave;
		}
		…
	}

note the use of the NonSerialized attribute here.

each time we blit, it should be done with this material instead of the default.

void OnRenderImage (RenderTexture source, RenderTexture destination) {
		…
		Graphics.Blit(source, currentDestination, bloom);

2.6 box sampling

we are going to adjust our shader so it uses a different sampling method than plain bilinear filtering. because sampling depends on the pixel size, add the magic float4 _MainTex_TexelSize variable to the CGINCLUDE block. keep in mind that this corresponds to the texel size of the source texture, not the destination.

a note about this variable:
https://forum.unity.com/threads/_maintex_texelsize-whats-the-meaning.110278/
neither the unity documentation nor the shader code under the unity install path shows how it is used, but the forum thread above does:

_MainTex_TexelSize = Vector4(1 / width, 1 / height, width, height)

where width and height are the width and height of the texture. below we will see why we need this variable.

sampler2D _MainTex;
float4 _MainTex_TexelSize;

as we are always sampling the main texture and only care about the RGB channels, let us create a convenient minimal Sample function.

half3 Sample (float2 uv) {
			return tex2D(_MainTex, uv).rgb;
		}

instead of relying on a bilinear filter only, we will use a simple box filter kernel. it takes four samples instead of one, diagonally positioned so we get the averages of four adjacent 2×2 pixel blocks. sum these samples and divide by four, and we end up with the average of a 4×4 pixel block, doubling our kernel size.

this may look confusing at first (it did to me, until it suddenly clicked): the sampling is diagonal. each output pixel is the average of four bilinear samples taken to its upper left, lower left, upper right, and lower right.

half3 SampleBox (float2 uv) {
			float4 o = _MainTex_TexelSize.xyxy * float2(-1, 1).xxyy;
			half3 s =
				Sample(uv + o.xy) + Sample(uv + o.zy) +
				Sample(uv + o.xw) + Sample(uv + o.zw);
			return s * 0.25f;
		}

if _MainTex_TexelSize.xyxy * float2(-1, 1).xxyy looks cryptic, work it out component by component. the first two components of _MainTex_TexelSize hold the size of one texel, so the .xyxy swizzle gives (sizeX, sizeY, sizeX, sizeY). likewise, float2(-1, 1).xxyy is (-1, -1, 1, 1). multiplying them componentwise yields

(sizeX, sizeY, sizeX, sizeY) * (-1, -1, 1, 1) = (-sizeX, -sizeY, sizeX, sizeY)

so the four points sampled via uv + o.xy, uv + o.zy, uv + o.xw, and uv + o.zw lie one texel away from uv in each diagonal direction.

use this sampling function in our fragment program.

half4 FragmentProgram (Interpolators i) : SV_Target {
//					return tex2D(_MainTex, i.uv) * half4(1, 0, 0, 0);
					return half4(SampleBox(i.uv), 1);
				}

3 creating bloom
blurring the original image is the first step of creating a bloom effect. the second step is to combine the blurred image with the original, brightening it. however, we will not just use the final blurred result, as that would produce a rather uniform smudging. instead, lower amounts of blurring should contribute more to the result than higher amounts of blurring. we can do this by accumulating the intermediate results, adding to the old data as we up-sample.

3.1 additive blending
adding to what we already have at some intermediate level can be done by using additive blending, instead of replacing the texture’s contents. all we have to do is set the blend mode of the up-sampling pass to One One.

	Pass { // 1
			Blend One One

			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					return half4(SampleBox(i.uv, 0.5), 1);
				}
			ENDCG
		}
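
note that this pass samples with SampleBox(i.uv, 0.5), while the SampleBox we defined earlier only takes a uv. this post skips the tutorial step that extends SampleBox with a delta parameter scaling the sample offsets in texels; presumably it becomes the following, with the down-sampling fragment programs switching to SampleBox(i.uv, 1):

half3 SampleBox (float2 uv, float delta) {
			// offsets scaled by delta: one texel when down-sampling,
			// half a texel when up-sampling
			float4 o = _MainTex_TexelSize.xyxy * float2(-delta, delta).xxyy;
			half3 s =
				Sample(uv + o.xy) + Sample(uv + o.zy) +
				Sample(uv + o.xw) + Sample(uv + o.zw);
			return s * 0.25f;
		}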

this simple approach works fine for all intermediate passes, but goes wrong when we render to the final destination, because we have not rendered to it yet. we would likely end up accumulating light each frame, blowing out the image, or something else, depending on how the textures are reused by unity. to solve this we have to create a separate pass for the last up-sample, in which we combine the original source texture with the last intermediate texture. so we need a shader variable for the source.

sampler2D _MainTex, _SourceTex;

in other words, we need to add the result of the final up-sample on top of the original image.
add a third pass that is a duplicate of the second pass, except that it uses the default blend mode and adds the box sample to a sample of the source texture.

Pass { // 2
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					half4 c = tex2D(_SourceTex, i.uv); // sample the original image
					c.rgb += SampleBox(i.uv, 0.5); // add the final up-sample
					return c;
				}
			ENDCG
		}

define a constant for this pass, which applies the bloom to the original image.

const int BoxDownPass = 0;
const int BoxUpPass = 1;
const int ApplyBloomPass = 2;

the last blit has to use this pass, with the correct source texture.

//		Graphics.Blit(currentSource, destination, bloom, BoxUpPass);
		bloom.SetTexture("_SourceTex", source);
		Graphics.Blit(currentSource, destination, bloom, ApplyBloomPass);
		RenderTexture.ReleaseTemporary(currentSource);
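
for orientation, here is a sketch of the entire OnRenderImage at this point, reconstructed from the snippets above. the iterations field, the 16-entry textures array (which a later snippet refers to as textures[0]), and the height guard are assumptions carried over from the skipped progressive-downsampling step; the first blit will switch to a prefilter pass in the next section.

	[Range(1, 16)]
	public int iterations = 4;

	RenderTexture[] textures = new RenderTexture[16];

	void OnRenderImage (RenderTexture source, RenderTexture destination) {
		if (bloom == null) {
			bloom = new Material(bloomShader);
			bloom.hideFlags = HideFlags.HideAndDontSave;
		}

		int width = source.width / 2;
		int height = source.height / 2;
		RenderTextureFormat format = source.format;

		// down-sample, keeping every intermediate texture for the way back up
		RenderTexture currentDestination = textures[0] =
			RenderTexture.GetTemporary(width, height, 0, format);
		Graphics.Blit(source, currentDestination, bloom, BoxDownPass);
		RenderTexture currentSource = currentDestination;

		int i = 1;
		for (; i < iterations; i++) {
			width /= 2;
			height /= 2;
			if (height < 2) {
				break; // stop before the texture degenerates
			}
			currentDestination = textures[i] =
				RenderTexture.GetTemporary(width, height, 0, format);
			Graphics.Blit(currentSource, currentDestination, bloom, BoxDownPass);
			currentSource = currentDestination;
		}

		// up-sample back through the stored textures, blending additively
		for (i -= 2; i >= 0; i--) {
			currentDestination = textures[i];
			textures[i] = null;
			Graphics.Blit(currentSource, currentDestination, bloom, BoxUpPass);
			RenderTexture.ReleaseTemporary(currentSource);
			currentSource = currentDestination;
		}

		// finally, apply the accumulated bloom on top of the original image
		bloom.SetTexture("_SourceTex", source);
		Graphics.Blit(currentSource, destination, bloom, ApplyBloomPass);
		RenderTexture.ReleaseTemporary(currentSource);
	}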

3.2 bloom threshold

right now we are still blurring the entire image. it is just most obvious for bright pixels. but one of the uses of bloom is to apply it only to very bright pixels. to make this possible, we have to introduce a brightness threshold. add a public field for this, with a slider range from 0 to some very bright value, like 10. let us use a default threshold of 1, excluding LDR pixels.

[Range(0, 10)]
public float threshold = 1;

the threshold determines which pixels contribute to the bloom effect. if they are not bright enough, they should not be included during the down-sampling and up-sampling process. simply converting them to black will do this, which has to be done by the shader. so set the material’s _Threshold variable before we blit.

		if (bloom == null) {
			…
		}

		bloom.SetFloat("_Threshold", threshold);

a note: this differs from the final source code, because this is only the first refinement; the published source is the final version after several more rounds of refinement, and there is no separate source for each intermediate step.

add this variable to the shader’s CGINCLUDE block as well, again using the half type.

	half _Threshold;

we will use the threshold to filter out pixels that we do not wish to include. as we do this at the start of the blurring process, it is known as a pre-filter step. create a function for this, which takes a color and outputs the filtered one.

	half3 Prefilter (half3 c) {
			return c;
		}

we will use the color’s maximum component to determine its brightness, so b = max(r, max(g, b)).
we can determine the contribution factor of the color by subtracting the threshold from its brightness, then dividing that by the brightness, c = (b - t) / b, where t is the threshold. the result is always 1 when t = 0, which leaves the color unchanged. as t increases, the brightness curve bends downward so it drops to zero at b = t. because of the curve’s shape, it is known as a knee. as we do not want negative factors, we have to make sure that b - t does not drop below zero, leading to c = max(0, b - t) / b.
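
for example, with t = 1 a pixel whose brightest component is 4 keeps max(0, 4 - 1) / 4 = 0.75 of its color, while one with a brightest component of 0.8 gets max(0, 0.8 - 1) / 0.8 = 0 and drops out of the bloom entirely.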


to avoid divisions by zero in the shader, make sure that the divisor has a small value at minimum, like 0.00001. then use the result to modulate the color.

half3 Prefilter (half3 c) {
			half brightness = max(c.r, max(c.g, c.b));
			half contribution = max(0, brightness - _Threshold);
			contribution /= max(brightness, 0.00001);
			return c * contribution;
		}

the filter is applied in the first pass only. so duplicate the first pass, putting it at the top as pass 0. apply the filter to the result of the box sample.

	Pass { // 0
			CGPROGRAM
				#pragma vertex VertexProgram
				#pragma fragment FragmentProgram

				half4 FragmentProgram (Interpolators i) : SV_Target {
					return half4(Prefilter(SampleBox(i.uv, 1)), 1);
				}
			ENDCG
		}

add a constant for this new pass, and increase the indices of all later passes by one.

const int BoxDownPrefilterPass = 0;
const int BoxDownPass = 1;
const int BoxUpPass = 2;
const int ApplyBloomPass = 3;

use the new pass for the first blit.

RenderTexture currentDestination = textures[0] =
	RenderTexture.GetTemporary(width, height, 0, format);
Graphics.Blit(source, currentDestination, bloom, BoxDownPrefilterPass);
RenderTexture currentSource = currentDestination;

at this point, with the threshold set to 1, you will likely see little or no bloom, assuming the lights and materials used produce no HDR values. to make bloom appear, you could increase the light contribution of some of the materials. for example, i made the yellow material emissive, which together with the reflected light pushes the yellow pixels into HDR.
