Image Quality: Render Scale, MSAA, and HDR

https://catlikecoding.com/unity/tutorials/scriptable-render-pipeline/image-quality/

1 render scale
the camera determines the width and height of the image that gets rendered; that is out of the pipeline's control.
but we can do whatever we want before rendering to the camera's target.
we can render to intermediate textures, which we can give any size we like.

for example, we could render everything to a small texture,
followed by a final blit to the camera’s target to scale it up to the desired size.

that reduces image quality, but speeds up rendering because there are fewer fragments to process. at render scale 0.5, for example, a 1920×1080 target is rendered at 960×540, only a quarter of the fragments.

the lightweight/universal pipeline has a Render Scale option to support this, so let us add it to our own pipeline as well.

bool scaledRendering =
			renderScale < 1f && camera.cameraType == CameraType.Game;
		
		int renderWidth = camera.pixelWidth;
		int renderHeight = camera.pixelHeight;
		if (scaledRendering) {
			renderWidth = (int)(renderWidth * renderScale);
			renderHeight = (int)(renderHeight * renderScale);
		}
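
the renderScale used above is a field of MyPipeline, handed over by MyPipelineAsset when it creates the pipeline instance. a minimal sketch of that plumbing, with the constructor details assumed here rather than copied from the tutorial:

float renderScale;

	// hypothetical constructor; the real one also receives the other
	// configuration options the asset exposes.
	public MyPipeline (float renderScale) {
		this.renderScale = renderScale;
	}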

1.2 rendering to a scaled texture
we must now render to an intermediate texture when either scaled rendering or post-processing is used.

bool renderToTexture = scaledRendering || activeStack;

		if (renderToTexture) {
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight, 0,
				FilterMode.Bilinear
			);
			cameraBuffer.GetTemporaryRT(
				cameraDepthTextureId, renderWidth, renderHeight, 24,
				FilterMode.Point, RenderTextureFormat.Depth
			);
		}
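
getting the textures alone is not enough; they also have to be bound as the render target before clearing. that call carries over from the earlier post-processing setup, with the same load and store actions that appear later in this section:

cameraBuffer.SetRenderTarget(
			cameraColorTextureId,
			RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
			cameraDepthTextureId,
			RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
		);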

from now on the adjusted width and height must be passed to the active stack when RenderAfterOpaque gets invoked.

activeStack.RenderAfterOpaque(
				postProcessingBuffer, cameraColorTextureId, cameraDepthTextureId,
				renderWidth, renderHeight
			);
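
on the stack side this means the methods gain the two dimension parameters. a sketch of RenderAfterOpaque in MyPostProcessingStack, with the body elided; the samples parameter shown later in the MSAA section gets added on top of this:

public void RenderAfterOpaque (
		CommandBuffer cb, int cameraColorId, int cameraDepthId,
		int width, int height
	) { … }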

the same is true for RenderAfterTransparent.
now we must always release the textures when we are rendering to them, but only invoke RenderAfterTransparent when a stack is in use.
if there is no stack we can use a regular blit to copy the scaled texture to the camera's target.

if (renderToTexture) {
			if (activeStack) {
				activeStack.RenderAfterTransparent(
					postProcessingBuffer, cameraColorTextureId,
					cameraDepthTextureId, renderWidth, renderHeight
				);
				context.ExecuteCommandBuffer(postProcessingBuffer);
				postProcessingBuffer.Clear();
			}
			else {
				cameraBuffer.Blit(
					cameraColorTextureId, BuiltinRenderTextureType.CameraTarget
				);
			}
			cameraBuffer.ReleaseTemporaryRT(cameraColorTextureId);
			cameraBuffer.ReleaseTemporaryRT(cameraDepthTextureId);
		}

adjusting the render scale affects everything that our pipeline renders, except shadows, as they have their own size.
a slight reduction of the render scale seems to apply a bit of anti-aliasing, although haphazardly.
but further reduction makes it clear that this is just a loss of detail that gets smudged by bilinear interpolation when blitting to the final render target.
(background on bilinear interpolation: https://blog.csdn.net/xbinworld/article/details/65660665)
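
as a refresher on that interpolation: a bilinear sample blends the four nearest texels, first along x and then along y, weighted by the fractional sample position. a small self-contained sketch, not pipeline code:

// bilinear blend of the four texels surrounding a sample position;
// tx and ty are the fractional offsets inside the texel quad, in [0, 1].
static float Bilinear (
	float c00, float c10, float c01, float c11, float tx, float ty
) {
	float bottom = c00 + (c10 - c00) * tx; // lerp along x, bottom row
	float top = c01 + (c11 - c01) * tx; // lerp along x, top row
	return bottom + (top - bottom) * ty; // lerp along y
}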

1.3 scaling up
we can scale down to improve performance at the cost of image quality.
we can do the opposite as well:
scale up to improve image quality at the cost of performance.

to make this possible, increase the maximum render scale to 2 in MyPipelineAsset, and change the scaledRendering check in Render from renderScale < 1f to renderScale != 1f so that upscaling also triggers intermediate rendering.

[SerializeField, Range(0.25f, 2f)]
	float renderScale = 1f;
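
for reference, the asset forwards the value when Unity asks it for a pipeline instance. roughly, using the 2018 SRP method name, and omitting the other constructor arguments the asset already passes along:

protected override IRenderPipeline InternalCreatePipeline () {
		// forward the configured render scale to the pipeline
		return new MyPipeline(renderScale);
	}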

2.2 multisampled render textures
MSAA support is set per camera, so keep track of the samples used for rendering in Render and force it to 1 if the camera does not have MSAA enabled.
then, if we end up with more than one sample per pixel,
we have to render to intermediate multisampled textures, MS textures for short.

int renderSamples = camera.allowMSAA ? msaaSamples : 1;
bool renderToTexture =
			scaledRendering || renderSamples > 1 || activeStack;
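
the msaaSamples value itself comes from the pipeline asset. a reasonable way to sanity-check it in the MyPipeline constructor is to round-trip it through QualitySettings, which lets Unity clamp it to a supported level (assuming an unsupported request reads back as 0, hence the max with 1):

QualitySettings.antiAliasing = msaaSamples;
		msaaSamples = Mathf.Max(QualitySettings.antiAliasing, 1);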

to configure the render textures correctly we have to add two more arguments to GetTemporaryRT.
first the read-write mode, which is Default for the color buffer and Linear for the depth buffer.
the second is the sample count.

if (renderToTexture) {
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight, 0,
				FilterMode.Bilinear, RenderTextureFormat.Default,
				RenderTextureReadWrite.Default, renderSamples
			);
			cameraBuffer.GetTemporaryRT(
				cameraDepthTextureId, renderWidth, renderHeight, 24,
				FilterMode.Point, RenderTextureFormat.Depth,
				RenderTextureReadWrite.Linear, renderSamples
			);
		}

2.3 resolving MS textures
while we can render directly to MS textures,
we cannot directly read from them the normal way.
if we want to sample a pixel it must first be resolved, which means averaging all of its samples to arrive at the final value.
the resolve happens for the entire texture at once, in a special Resolve Color pass
that gets inserted automatically before a pass that samples it.
[figure] Resolving color before final blit.

resolving the MS texture creates a temporary regular texture, which remains valid until something new gets rendered to the MS texture.
so if we alternate between sampling from and rendering to the MS texture,
we end up with extra resolve passes for the same texture.

you can see this when activating a post-effect stack with blurring enabled.
at strength 5 we get three resolve passes.
[figure] Resolving three times with blur strength 5.

the additional resolve passes are useless, because our full-screen effects do not benefit from MSAA.
to avoid needlessly rendering to an MS texture, we can blit to an intermediate texture once and then use that instead of the camera target.
to make this possible, add a samples parameter to the RenderAfterOpaque and RenderAfterTransparent methods in MyPostProcessingStack. if blurring is enabled and MSAA is used, copy to a resolved texture and pass that to Blur.

static int resolvedTexId =
		Shader.PropertyToID("_MyPostProcessingStackResolvedTex");
	
	…
	
	public void RenderAfterOpaque (
		CommandBuffer cb, int cameraColorId, int cameraDepthId,
		int width, int height, int samples
	) { … }
	
	public void RenderAfterTransparent (
		CommandBuffer cb, int cameraColorId, int cameraDepthId,
		int width, int height, int samples
	) {
		if (blurStrength > 0) {
			if (samples > 1) {
				cb.GetTemporaryRT(
					resolvedTexId, width, height, 0, FilterMode.Bilinear
				); // this temporary RT has a sample count of 1, so it never triggers a resolve
				Blit(cb, cameraColorId, resolvedTexId);
				Blur(cb, resolvedTexId, width, height);
				cb.ReleaseTemporaryRT(resolvedTexId);
			}
			else {
				Blur(cb, cameraColorId, width, height);
			}
		}
		else {
			Blit(cb, cameraColorId, BuiltinRenderTextureType.CameraTarget);
		}
	}
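
for context, the Blit helper invoked above comes from the earlier post-processing tutorial: it draws a fullscreen triangle with the stack's material instead of using CommandBuffer.Blit. roughly, with mainTexId, material, and the Pass enum assumed from that tutorial:

void Blit (
		CommandBuffer cb,
		RenderTargetIdentifier sourceId, RenderTargetIdentifier destinationId,
		Pass pass = Pass.Copy
	) {
		cb.SetGlobalTexture(mainTexId, sourceId);
		cb.SetRenderTarget(
			destinationId,
			RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
		);
		// fullscreen triangle, no mesh needed
		cb.DrawProcedural(
			Matrix4x4.identity, material, (int)pass, MeshTopology.Triangles, 3
		);
	}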

2.4 no depth resolve
color samples are resolved by averaging them, but that does not work for the depth buffer.
averaging adjacent depth values makes no sense: the average of a foreground and a background depth describes a point where no surface exists. as there is no universal approach that works,
multisampled depth does not get resolved at all.
as a result, the depth stripes effect does not work when MSAA is enabled.

the naive approach to get the effect working again is to simply not apply MSAA to the depth texture when depth stripes are enabled.
first add a getter property to MyPostProcessingStack that indicates whether it needs to read from a depth texture.
this is only required when the depth stripes effect is used.

public bool NeedsDepth {
		get {
			return depthStripes;
		}
	}

now we can keep track of whether we need an accessible depth texture in MyPipeline.Render.
only when we need depth do we have to get a separate depth texture; otherwise we can make do with setting the depth bits of the color texture.
and if we do need a depth texture, let us explicitly set its sample count to 1, disabling MSAA for it.

bool needsDepth = activeStack && activeStack.NeedsDepth;

		if (renderToTexture) {
			cameraBuffer.GetTemporaryRT(
				cameraColorTextureId, renderWidth, renderHeight,
				needsDepth ? 0 : 24,
				FilterMode.Bilinear, RenderTextureFormat.Default,
				RenderTextureReadWrite.Default, renderSamples
			);
			if (needsDepth) {
				cameraBuffer.GetTemporaryRT(
					cameraDepthTextureId, renderWidth, renderHeight, 24,
					FilterMode.Point, RenderTextureFormat.Depth,
					RenderTextureReadWrite.Linear, 1
				);
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
					cameraDepthTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
				);
			}
			else {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
				);
			}
		}

this also affects setting the render target after drawing opaque effects.

context.DrawSkybox(camera);

		if (activeStack) {
			if (needsDepth) {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store,
					cameraDepthTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store
				);
			}
			else {
				cameraBuffer.SetRenderTarget(
					cameraColorTextureId,
					RenderBufferLoadAction.Load, RenderBufferStoreAction.Store
				);
			}
			context.ExecuteCommandBuffer(cameraBuffer);
			cameraBuffer.Clear();
		}

it also changes which textures need to get released at the end.

DrawDefaultPipeline(context, camera);

		if (renderToTexture) {
			…
			cameraBuffer.ReleaseTemporaryRT(cameraColorTextureId);
			if (needsDepth) {
				cameraBuffer.ReleaseTemporaryRT(cameraDepthTextureId);
			}
		}