Multiple Cameras

https://catlikecoding.com/unity/tutorials/custom-srp/multiple-cameras/
Git repository: https://bitbucket.org/catlikecodingunitytutorials/custom-srp-14-multiple-cameras/src/master/
https://blog.theknightsofunity.com/using-multiple-unity-cameras-why-this-may-be-important/
https://zhuanlan.zhihu.com/p/60325181 — From Dither Pattern (仿色图案) to Halftone (半调)

Skybox replaces the color buffer with your skybox and completely clears the depth buffer.
Solid Color does the same, but the color buffer becomes a solid color.
Depth Only leaves the color buffer as is, but your depth buffer is cleared.
Don't Clear doesn't clear anything.
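
As a rough sketch, these flags could map to a clear like the one below in a custom renderer's setup step. It assumes a command buffer named buffer and the current camera, and relies on the ordering of the CameraClearFlags enum (Skybox, Color, Depth, Nothing); the exact placement inside the renderer is an assumption here:

CameraClearFlags flags = camera.clearFlags;
buffer.ClearRenderTarget(
	flags <= CameraClearFlags.Depth,    // clear depth for Skybox, Solid Color, and Depth Only
	flags == CameraClearFlags.Color,    // clear color only for Solid Color
	flags == CameraClearFlags.Color ?
		camera.backgroundColor.linear : Color.clear
);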

1 Combining Cameras
Because culling, light processing, and shadow rendering are performed per camera, it is a good idea to render as few cameras as possible per frame, ideally only one.
But sometimes we do have to render multiple different points of view at the same time.
Examples include split-screen multiplayer, rear-view mirrors, a top-down overlay, an in-game camera, and 3D character portraits.

1.1 Split Screen
Let us begin by considering a split-screen scenario, consisting of two side-by-side cameras.
The left camera has the width of its viewport rect set to 0.5.
The right camera also has a width of 0.5 and has its X position set to 0.5.
If we do not use post FX this works as expected.
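
For illustration only, a minimal script that would set up the same side-by-side split from code; the component and field names are assumptions, since the tutorial configures the viewport rects in the inspector:

using UnityEngine;

// Hypothetical helper: splits two existing cameras into left and right halves.
public class SplitScreenSetup : MonoBehaviour {

	[SerializeField] Camera leftCamera, rightCamera;

	void Awake () {
		// Viewport rects are in normalized 0-1 screen coordinates.
		leftCamera.rect = new Rect(0f, 0f, 0.5f, 1f);
		rightCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
	}
}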

But if we enable post FX it fails: both cameras render at the correct size but end up covering the entire camera target buffer,
with only the last one visible.

this happens because an invocation of SetRenderTarget also resets the viewport to cover the entire target.
To apply the viewport to the final post FX pass we have to set the viewport after setting the target and before drawing.
Let’s do this by duplicating PostFXStack.Draw, renaming it to DrawFinal and invoking SetViewport on the buffer directly after SetRenderTarget, with the camera’s pixelRect as an argument. As this is the final draw we can replace all but the source parameter with hard-coded values.

void DrawFinal (RenderTargetIdentifier from) {
	buffer.SetGlobalTexture(fxSourceId, from);
	buffer.SetRenderTarget(
		BuiltinRenderTextureType.CameraTarget,
		RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store
	);
	buffer.SetViewport(camera.pixelRect);
	buffer.DrawProcedural(
		Matrix4x4.identity, settings.Material,
		(int)Pass.Final, MeshTopology.Triangles, 3
	);
}

Invoke the new method instead of the regular Draw at the end of DoColorGradingAndToneMapping.

void DoColorGradingAndToneMapping (int sourceId) {
	…
	//Draw(…)
	DrawFinal(sourceId);
	buffer.ReleaseTemporaryRT(colorGradingLUTId);
}

1.2 Layering Cameras
Besides rendering to separate areas we can also make camera viewports overlap. The simplest example is to use a regular main camera that covers the entire screen and then add a second camera that renders later with the same view but a smaller viewport. I reduced the second viewport to half size and centered it by setting its XY position to 0.25.
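
The same overlay configuration could also be done from a script; again the component is hypothetical and only illustrates which camera property is involved:

using UnityEngine;

// Hypothetical helper that shrinks and centers an overlay camera's viewport.
public class OverlayCameraSetup : MonoBehaviour {

	[SerializeField] Camera overlayCamera;

	void Awake () {
		// Half-size viewport centered on the screen (XY position 0.25).
		overlayCamera.rect = new Rect(0.25f, 0.25f, 0.5f, 0.5f);
	}
}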


If we are not using post FX then we can turn the top camera layer into a partially-transparent overlay by setting it to clear depth only. This removes its skybox, revealing the layer below.
But this does not work when post FX are used, because then we force it to CameraClearFlags.Color, so we will see the camera's background color instead, which is dark blue by default.
one thing we could do to make layer transparency work with post FX is change the PostFXStack shader’s final pass
so it performs alpha blending instead of the default One Zero mode.

Pass {
			Name "Final"
			Blend SrcAlpha OneMinusSrcAlpha
			HLSLPROGRAM
				#pragma target 3.5
				#pragma vertex DefaultPassVertex
				#pragma fragment FinalPassFragment
			ENDHLSL
		}

This does require us to always load the target buffer in DrawFinal.

void DrawFinal (RenderTargetIdentifier from) {
	buffer.SetGlobalTexture(fxSourceId, from);
	buffer.SetRenderTarget(
		BuiltinRenderTextureType.CameraTarget,
		RenderBufferLoadAction.Load, RenderBufferStoreAction.Store
	);
	…
}

Code analysis:

An example cube with the following per-object material properties component attached:

using UnityEngine;

[DisallowMultipleComponent]
public class PerObjectMaterialProperties : MonoBehaviour {

	static int
		baseColorId = Shader.PropertyToID("_BaseColor"),
		cutoffId = Shader.PropertyToID("_Cutoff"),
		metallicId = Shader.PropertyToID("_Metallic"),
		smoothnessId = Shader.PropertyToID("_Smoothness"),
		emissionColorId = Shader.PropertyToID("_EmissionColor");

	static MaterialPropertyBlock block;

	[SerializeField]
	Color baseColor = Color.white;

	[SerializeField, Range(0f, 1f)]
	float alphaCutoff = 0.5f, metallic = 0f, smoothness = 0.5f;

	[SerializeField, ColorUsage(false, true)]
	Color emissionColor = Color.black;

	void Awake () {
		OnValidate();
	}

	void OnValidate () {
		if (block == null) {
			block = new MaterialPropertyBlock();
		}
		block.SetColor(baseColorId, baseColor);
		block.SetFloat(cutoffId, alphaCutoff);
		block.SetFloat(metallicId, metallic);
		block.SetFloat(smoothnessId, smoothness);
		block.SetColor(emissionColorId, emissionColor);
		GetComponent<Renderer>().SetPropertyBlock(block);
	}
}

The material's shader, Custom RP/Lit:

Shader "Custom RP/Lit" {
	
	Properties {
		_BaseMap("Texture", 2D) = "white" {}
		_BaseColor("Color", Color) = (0.5, 0.5, 0.5, 1.0)
		_Cutoff ("Alpha Cutoff", Range(0.0, 1.0)) = 0.5
		[Toggle(_CLIPPING)] _Clipping ("Alpha Clipping", Float) = 0
		[Toggle(_RECEIVE_SHADOWS)] _ReceiveShadows ("Receive Shadows", Float) = 1
		[KeywordEnum(On, Clip, Dither, Off)] _Shadows ("Shadows", Float) = 0

		[Toggle(_MASK_MAP)] _MaskMapToggle ("Mask Map", Float) = 0
		[NoScaleOffset] _MaskMap("Mask (MODS)", 2D) = "white" {}
		_Metallic ("Metallic", Range(0, 1)) = 0
		_Occlusion ("Occlusion", Range(0, 1)) = 1
		_Smoothness ("Smoothness", Range(0, 1)) = 0.5
		_Fresnel ("Fresnel", Range(0, 1)) = 1

		[Toggle(_NORMAL_MAP)] _NormalMapToggle ("Normal Map", Float) = 0
		[NoScaleOffset] _NormalMap("Normals", 2D) = "bump" {}
		_NormalScale("Normal Scale", Range(0, 1)) = 1

		[NoScaleOffset] _EmissionMap("Emission", 2D) = "white" {}
		[HDR] _EmissionColor("Emission", Color) = (0.0, 0.0, 0.0, 0.0)

		[Toggle(_DETAIL_MAP)] _DetailMapToggle ("Detail Maps", Float) = 0
		_DetailMap("Details", 2D) = "linearGrey" {}
		[NoScaleOffset] _DetailNormalMap("Detail Normals", 2D) = "bump" {}
		_DetailAlbedo("Detail Albedo", Range(0, 1)) = 1
		_DetailSmoothness("Detail Smoothness", Range(0, 1)) = 1
		_DetailNormalScale("Detail Normal Scale", Range(0, 1)) = 1
		
		[Toggle(_PREMULTIPLY_ALPHA)] _PremulAlpha ("Premultiply Alpha", Float) = 0

		[Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend ("Src Blend", Float) = 1
		[Enum(UnityEngine.Rendering.BlendMode)] _DstBlend ("Dst Blend", Float) = 0
		[Enum(Off, 0, On, 1)] _ZWrite ("Z Write", Float) = 1

		[HideInInspector] _MainTex("Texture for Lightmap", 2D) = "white" {}
		[HideInInspector] _Color("Color for Lightmap", Color) = (0.5, 0.5, 0.5, 1.0)
	}
	
	SubShader {
		HLSLINCLUDE
		#include "../ShaderLibrary/Common.hlsl"
		#include "LitInput.hlsl"
		ENDHLSL

		Pass {
			Tags {
				"LightMode" = "CustomLit"
			}

			Blend [_SrcBlend] [_DstBlend], One OneMinusSrcAlpha
			ZWrite [_ZWrite]

			HLSLPROGRAM
			#pragma target 3.5
			#pragma shader_feature _CLIPPING
			#pragma shader_feature _RECEIVE_SHADOWS
			#pragma shader_feature _PREMULTIPLY_ALPHA
			#pragma shader_feature _MASK_MAP
			#pragma shader_feature _NORMAL_MAP
			#pragma shader_feature _DETAIL_MAP
			#pragma multi_compile _ _DIRECTIONAL_PCF3 _DIRECTIONAL_PCF5 _DIRECTIONAL_PCF7
			#pragma multi_compile _ _OTHER_PCF3 _OTHER_PCF5 _OTHER_PCF7
			#pragma multi_compile _ _CASCADE_BLEND_SOFT _CASCADE_BLEND_DITHER
			#pragma multi_compile _ _SHADOW_MASK_ALWAYS _SHADOW_MASK_DISTANCE
			#pragma multi_compile _ _LIGHTS_PER_OBJECT
			#pragma multi_compile _ LIGHTMAP_ON
			#pragma multi_compile _ LOD_FADE_CROSSFADE
			#pragma multi_compile_instancing
			#pragma vertex LitPassVertex
			#pragma fragment LitPassFragment
			#include "LitPass.hlsl"
			ENDHLSL
		}

		Pass {
			Tags {
				"LightMode" = "ShadowCaster"
			}

			ColorMask 0
			
			HLSLPROGRAM
			#pragma target 3.5
			#pragma shader_feature _ _SHADOWS_CLIP _SHADOWS_DITHER
			#pragma multi_compile _ LOD_FADE_CROSSFADE
			#pragma multi_compile_instancing
			#pragma vertex ShadowCasterPassVertex
			#pragma fragment ShadowCasterPassFragment
			#include "ShadowCasterPass.hlsl"
			ENDHLSL
		}

		Pass {
			Tags {
				"LightMode" = "Meta"
			}

			Cull Off

			HLSLPROGRAM
			#pragma target 3.5
			#pragma vertex MetaPassVertex
			#pragma fragment MetaPassFragment
			#include "MetaPass.hlsl"
			ENDHLSL
		}
	}

	CustomEditor "CustomShaderGUI"
}

ShadowCasterPass.hlsl

#ifndef CUSTOM_SHADOW_CASTER_PASS_INCLUDED
#define CUSTOM_SHADOW_CASTER_PASS_INCLUDED

struct Attributes {
	float3 positionOS : POSITION;
	float2 baseUV : TEXCOORD0;
	UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings {
	float4 positionCS : SV_POSITION;
	float2 baseUV : VAR_BASE_UV;
	UNITY_VERTEX_INPUT_INSTANCE_ID
};

bool _ShadowPancaking;

Varyings ShadowCasterPassVertex (Attributes input) {
	Varyings output;
	UNITY_SETUP_INSTANCE_ID(input);
	UNITY_TRANSFER_INSTANCE_ID(input, output);
	float3 positionWS = TransformObjectToWorld(input.positionOS);
	output.positionCS = TransformWorldToHClip(positionWS);

	if (_ShadowPancaking) {
		#if UNITY_REVERSED_Z
			output.positionCS.z = min(
				output.positionCS.z, output.positionCS.w * UNITY_NEAR_CLIP_VALUE
			);
		#else
			output.positionCS.z = max(
				output.positionCS.z, output.positionCS.w * UNITY_NEAR_CLIP_VALUE
			);
		#endif
	}

	output.baseUV = TransformBaseUV(input.baseUV);
	return output;
}

void ShadowCasterPassFragment (Varyings input) {
	UNITY_SETUP_INSTANCE_ID(input);
	ClipLOD(input.positionCS.xy, unity_LODFade.x); // what does ClipLOD do? (explained below)

	InputConfig config = GetInputConfig(input.baseUV);
	float4 base = GetBase(config);
	#if defined(_SHADOWS_CLIP)
		clip(base.a - GetCutoff(config));
	#elif defined(_SHADOWS_DITHER)
		float dither = InterleavedGradientNoise(input.positionCS.xy, 0);
		clip(base.a - dither);
	#endif
}

#endif

ClipLOD (defined in Common.hlsl):

void ClipLOD (float2 positionCS, float fade) {
	#if defined(LOD_FADE_CROSSFADE)
		float dither = InterleavedGradientNoise(positionCS.xy, 0);
		clip(fade + (fade < 0.0 ? dither : -dither));
	#endif
}

There are multiple ways to generate a dither value in LitPassFragment.
The easiest is to use the InterleavedGradientNoise function from the Core RP Library, which generates a rotated tiled dither pattern given a screen-space XY position. In the fragment function that is equal to the clip-space XY position.
It also requires a second argument which is used to animate it, which we do not need and can leave at zero.

when cross-fading is active both LOD levels get rendered at the same time.
it’s up to the shader to blend them somehow.
unity picks a shader variant for the LOD_FADE_CROSSFADE keyword, so add a multi-compile directive for it to our Lit shader.

#pragma multi_compile _ LOD_FADE_CROSSFADE

How much an object is faded gets communicated via the unity_LODFade vector of the UnityPerDraw buffer, which we have already defined.
Its X component contains the fade factor. Its Y component contains the same factor, but quantized to sixteen steps, which we will not use.

GetInputConfig returns a struct whose fields are waiting to be filled in:

struct InputConfig {
	float2 baseUV;
	float2 detailUV;
	bool useMask;
	bool useDetail;
};

InputConfig GetInputConfig (float2 baseUV, float2 detailUV = 0.0) {
	InputConfig c;
	c.baseUV = baseUV;
	c.detailUV = detailUV;
	c.useMask = false;
	c.useDetail = false;
	return c;
}

For the unlit pass: sample the base map and multiply it by the base color.

float4 GetBase (InputConfig c) 
{
	float4 baseMap = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, c.baseUV);
	float4 baseColor = INPUT_PROP(_BaseColor);
	return baseMap * baseColor;
}

GetCutoff returns the alpha cutoff value:

float GetCutoff (InputConfig c) {
	return INPUT_PROP(_Cutoff);
}

The base and detail UV transform functions:

float2 TransformBaseUV (float2 baseUV) {
	float4 baseST = INPUT_PROP(_BaseMap_ST);
	return baseUV * baseST.xy + baseST.zw;
}

float2 TransformDetailUV (float2 detailUV) {
	float4 detailST = INPUT_PROP(_DetailMap_ST);
	return detailUV * detailST.xy + detailST.zw;
}

Lighting initialization:

// Step 1: culling.
bool Cull (float maxShadowDistance) {
	if (camera.TryGetCullingParameters(out ScriptableCullingParameters p)) {
		p.shadowDistance = Mathf.Min(maxShadowDistance, camera.farClipPlane);
		cullingResults = context.Cull(ref p);
		return true;
	}
	return false;
}

// Step 2: light setup.
lighting.Setup(
	context, cullingResults, shadowSettings, useLightsPerObject,
	cameraSettings.maskLights ? cameraSettings.renderingLayerMask : -1
);

// Step 3: post FX setup.
postFXStack.Setup(
	context, camera, postFXSettings, useHDR, colorLUTResolution,
	cameraSettings.finalBlendMode
);

Setting up the lights:

	public void Setup (
		ScriptableRenderContext context, CullingResults cullingResults,
		ShadowSettings shadowSettings, bool useLightsPerObject, int renderingLayerMask
	) {
		this.cullingResults = cullingResults;
		buffer.BeginSample(bufferName);
		shadows.Setup(context, cullingResults, shadowSettings); // initialize the shadow settings
		SetupLights(useLightsPerObject, renderingLayerMask); // set up the lights
		shadows.Render();
		buffer.EndSample(bufferName);
		context.ExecuteCommandBuffer(buffer);
		buffer.Clear();
	}

Initializing the lights:

void SetupLights (bool useLightsPerObject, int renderingLayerMask) 
{
		NativeArray<int> indexMap = useLightsPerObject ?
			cullingResults.GetLightIndexMap(Allocator.Temp) : default;
		NativeArray<VisibleLight> visibleLights = cullingResults.visibleLights;
		int dirLightCount = 0, otherLightCount = 0;
		int i;
		for (i = 0; i < visibleLights.Length; i++) {
			int newIndex = -1;
			VisibleLight visibleLight = visibleLights[i];
			Light light = visibleLight.light;
			if ((light.renderingLayerMask & renderingLayerMask) != 0) {
				switch (visibleLight.lightType) {
					case LightType.Directional:
						if (dirLightCount < maxDirLightCount) {
							SetupDirectionalLight(
								dirLightCount++, i, ref visibleLight, light
							);
						}
						break;
					case LightType.Point:
						if (otherLightCount < maxOtherLightCount) {
							newIndex = otherLightCount;
							SetupPointLight(
								otherLightCount++, i, ref visibleLight, light
							);
						}
						break;
					case LightType.Spot:
						if (otherLightCount < maxOtherLightCount) {
							newIndex = otherLightCount;
							SetupSpotLight(otherLightCount++, i, ref visibleLight, light);
						}
						break;
				}
			}
			if (useLightsPerObject) {
				indexMap[i] = newIndex;
			}
		}

		if (useLightsPerObject) {
			for (; i < indexMap.Length; i++) {
				indexMap[i] = -1;
			}
			cullingResults.SetLightIndexMap(indexMap);
			indexMap.Dispose();
			Shader.EnableKeyword(lightsPerObjectKeyword);
		}
		else {
			Shader.DisableKeyword(lightsPerObjectKeyword);
		}

		buffer.SetGlobalInt(dirLightCountId, dirLightCount);
		if (dirLightCount > 0) {
			buffer.SetGlobalVectorArray(dirLightColorsId, dirLightColors);
			buffer.SetGlobalVectorArray(
				dirLightDirectionsAndMasksId, dirLightDirectionsAndMasks
			);
			buffer.SetGlobalVectorArray(dirLightShadowDataId, dirLightShadowData);
		}

		buffer.SetGlobalInt(otherLightCountId, otherLightCount);
		if (otherLightCount > 0) {
			buffer.SetGlobalVectorArray(otherLightColorsId, otherLightColors);
			buffer.SetGlobalVectorArray(
				otherLightPositionsId, otherLightPositions
			);
			buffer.SetGlobalVectorArray(
				otherLightDirectionsAndMasksId, otherLightDirectionsAndMasks
			);
			buffer.SetGlobalVectorArray(
				otherLightSpotAnglesId, otherLightSpotAngles
			);
			buffer.SetGlobalVectorArray(
				otherLightShadowDataId, otherLightShadowData
			);
		}
	}

4 Lights Per Object
Excerpted from: https://catlikecoding.com/unity/tutorials/custom-srp/point-and-spot-lights/

Currently all visible lights are evaluated for every fragment that gets rendered.
This is fine for directional lights, which affect every fragment,
but it is unnecessary work for other lights that are out of range of a fragment: if a light's range doesn't reach the fragment, that light contributes nothing and shouldn't be evaluated there.
Usually each point or spot light only affects a small portion of all fragments,
so there is a lot of work done for nothing, which can affect performance significantly. In order to support many lights with good performance we have to somehow reduce the amount of lights evaluated per fragment. There are multiple approaches for this (many ways exist to decide which lights affect a fragment; honestly I don't know any of them),
of which the simplest is to use Unity's per-object light indices.

The idea is that Unity determines which lights affect each object and sends this information to the GPU.
Then we can evaluate only the relevant lights when rendering each object, ignoring the rest. Thus the lights are determined on a per-object basis, not per fragment.

This usually works fine for small objects but is not ideal for large ones, because if a light only affects a small portion of an object it will get evaluated for its entire surface.
Also, there is a limit to how many lights can affect each object,
so large objects are more prone to lack some lighting.

because per-object light indices are not ideal and can miss some lighting we will make it optional.
that way it is also possible to easily compare both visuals and performance.
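
A rough sketch of how such a toggle could be exposed on the pipeline asset and forwarded to the renderer; the useLightsPerObject field matches the tutorial's flag, but the constructor signature shown here is an assumption:

using UnityEngine;
using UnityEngine.Rendering;

[CreateAssetMenu(menuName = "Rendering/Custom Render Pipeline")]
public class CustomRenderPipelineAsset : RenderPipelineAsset {

	[SerializeField]
	bool useLightsPerObject = true;

	// Other settings (batching, shadows, post FX) omitted for brevity.
	protected override RenderPipeline CreatePipeline () {
		// The flag is passed down to the camera renderer and eventually
		// reaches Lighting.Setup / SetupLights.
		return new CustomRenderPipeline(useLightsPerObject);
	}
}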

It can even cause crashes????
Didn't Unity's per-object light index code break a lot?
Yes, it has been broken a few times since Unity 2018, sometimes for months, causing many bugs.
That is another reason to make it optional.

4.2 Sanitizing Light Indices
Unity simply creates a list of all active lights per object, roughly sorted by their importance.
This list includes all lights regardless of their visibility and also contains directional lights.
We have to sanitize these lists so only the indices of visible non-directional lights remain. We do this in Lighting.SetupLights, so add a lights-per-object parameter to that method, and to Lighting.Setup to pass it along.

public void Setup (
	ScriptableRenderContext context, CullingResults cullingResults,
	ShadowSettings shadowSettings, bool useLightsPerObject
) {
	…
	SetupLights(useLightsPerObject);
	…
}

void SetupLights (bool useLightsPerObject) {
	…
}

then add the mode as an argument for Setup in CameraRenderer.Render.

lighting.Setup(
			context, cullingResults, shadowSettings, useLightsPerObject
		);

In Lighting.SetupLights, before we loop over the visible lights, retrieve
the light index map from the culling results.
This is done by invoking GetLightIndexMap with Allocator.Temp as an argument, which gives us a temporary NativeArray<int> that contains light indices, matching the visible light indices plus all other active lights in the scene.

NativeArray<int> indexMap =
			cullingResults.GetLightIndexMap(Allocator.Temp);
		NativeArray<VisibleLight> visibleLights = cullingResults.visibleLights;

We only need to retrieve this data when we use lights per object.
As the native array is a struct we initialize it to its default value otherwise, which does not allocate anything.

NativeArray<int> indexMap = useLightsPerObject ?
			cullingResults.GetLightIndexMap(Allocator.Temp) : default;

We only need the indices for the point and spot lights that we include;
all other lights should be skipped.
We communicate this to Unity by setting the indices of all other lights to -1.
We also have to change the indices of the remaining lights to match ours.
Set the new index only if we retrieved the map.

for (int i = 0; i < visibleLights.Length; i++) {
	int newIndex = -1;
	VisibleLight visibleLight = visibleLights[i];
	switch (visibleLight.lightType) {
		…
		case LightType.Point:
			if (otherLightCount < maxOtherLightCount) {
				newIndex = otherLightCount;
				SetupPointLight(otherLightCount++, ref visibleLight);
			}
			break;
		case LightType.Spot:
			if (otherLightCount < maxOtherLightCount) {
				newIndex = otherLightCount;
				SetupSpotLight(otherLightCount++, ref visibleLight);
			}
			break;
	}
	if (useLightsPerObject) {
		indexMap[i] = newIndex;
	}
}

we also have to eliminate the indices of all lights that are not visible.
do this with a second loop that continues after the first one, if we use lights per object.

int i;
		for (i = 0; i < visibleLights.Length; i++) {}

		if (useLightsPerObject) {
			for (; i < indexMap.Length; i++) {
				indexMap[i] = -1;
			}
		}

when we are done we have to send the adjusted index map back to unity, by invoking SetLightIndexMap on the culling results.
the index map is no longer needed after that, so we should deallocate it by invoking Dispose on it.

	if (useLightsPerObject) {
			for (; i < indexMap.Length; i++) 
			{
				indexMap[i] = -1;
			}
			cullingResults.SetLightIndexMap(indexMap);
			indexMap.Dispose();
		}

Finally, we’ll use a different shader variant when lights per object are used. We signal this by enabling or disabling the _LIGHTS_PER_OBJECT shader keyword, as appropriate.

static string lightsPerObjectKeyword = "_LIGHTS_PER_OBJECT";

void SetupLights (bool useLightsPerObject) {
	…
	if (useLightsPerObject) {
		for (; i < indexMap.Length; i++) {
			indexMap[i] = -1;
		}
		cullingResults.SetLightIndexMap(indexMap);
		indexMap.Dispose();
		Shader.EnableKeyword(lightsPerObjectKeyword);
	}
	else {
		Shader.DisableKeyword(lightsPerObjectKeyword);
	}
	…
}

4.3 Using the Indices
To use the light indices, add the relevant multi-compile pragma to the CustomLit pass of our Lit shader.

#pragma multi_compile _ _LIGHTS_PER_OBJECT

The required data is part of the UnityPerDraw buffer and consists of two real4 values that must be defined directly after unity_WorldTransformParams. (Do they really have to be defined right after it?)
first is unity_LightData, which contains the amount of lights in its Y component.
after that comes unity_LightIndices, which is an array of length two.
each channel of the two vectors contains a light index, so up to eight are supported per object.

real4 unity_WorldTransformParams;
real4 unity_LightData;
real4 unity_LightIndices[2];

use an alternative loop for the other lights in GetLighting if _LIGHTS_PER_OBJECT is defined.
in this case the amount of lights is found via unity_LightData.y and the light index has to be retrieved from the appropriate element and component of unity_LightIndices.
we can get the correct vector by dividing the iterator by 4 and the correct component via modulo 4.

#if defined(_LIGHTS_PER_OBJECT)
		for (int j = 0; j < unity_LightData.y; j++) {
			int lightIndex = unity_LightIndices[j / 4][j % 4];
			Light light = GetOtherLight(lightIndex, surfaceWS, shadowData);
			color += GetLighting(surfaceWS, brdf, light);
		}
	#else
		for (int j = 0; j < GetOtherLightCount(); j++) {
			Light light = GetOtherLight(j, surfaceWS, shadowData);
			color += GetLighting(surfaceWS, brdf, light);
		}
	#endif

however, although only up to eight light indices are available the provided light count does not take this limit into consideration. so we have to limit the loop to eight iterations explicitly.

for (int j = 0; j < min(unity_LightData.y, 8); j++) {}

So lights-per-object supports at most eight lights per object?
Isn't there a buffered approach that isn't limited to eight lights per object?
There was, but that code has been disabled since Unity 2018.3 and has been partially removed from the Universal RP.
It has been dead code for more than a year, so I would not rely on it.

Note that with lights-per-object enabled GPU instancing is less efficient, because only objects whose light counts and index lists match are grouped.
The SRP batcher is not affected, because each object still gets its own optimized draw call.

It's fun reading other people's code comments. Below is a snippet from URP; it even contains a :( sad face.

// We store 8 light indices in float4 unity_LightIndices[2];
// Due to memory alignment unity doesn't support int[] or float[]
// Even trying to reinterpret cast the unity_LightIndices to float[] won't work
// it will cast to float4[] and create extra register pressure. :(

#elif !defined(SHADER_API_GLES)
    // since index is uint shader compiler will implement
    // div & mod as bitfield ops (shift and mask).
    
    // TODO: Can we index a float4? Currently compiler is
    // replacing unity_LightIndicesX[i] with a dp4 with identity matrix.
    // u_xlat16_40 = dot(unity_LightIndices[int(u_xlatu13)], ImmCB_0_0_0[u_xlati1]);
    // This increases both arithmetic and register pressure.
    return unity_LightIndices[index / 4][index % 4];

1.5 Render Textures
Besides creating a split-screen display or directly layering cameras, it is also common to use a camera for an in-game display,
or as part of the GUI.
In these cases the camera's target has to be a render texture (RT),
either an asset or one created at runtime.
As an example I created a 200×100 render texture via Assets / Create / Render Texture.
I gave it no depth buffer because I render a camera with post FX to it, which creates its own
intermediate render texture with a depth buffer.

Render texture asset.

I then created a camera that renders the scene to this texture, by hooking it up to the camera's
Target Texture property.
Camera with its Target Texture set.

As with regular rendering, the bottom camera has to use One Zero for its final blend mode.
The editor will initially present a clear black texture, but after that the render texture will contain
whatever was last rendered to it.
Multiple cameras can render to the same render texture, with any viewport, as normal.
The only difference is that Unity automatically renders cameras with render-texture targets before
those that render to a display:
first cameras with target textures are rendered in order of increasing depth, then those without.
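
If the render texture is created at runtime instead of as an asset, the setup might look like the following sketch; the component and field names are assumptions, but RenderTexture, targetTexture, and Release are standard Unity APIs:

using UnityEngine;

// Hypothetical helper that renders a camera into a runtime render texture
// and shows the result on an in-game display.
public class InGameDisplay : MonoBehaviour {

	[SerializeField] Camera displayCamera;
	[SerializeField] Renderer screenRenderer;

	RenderTexture rt;

	void OnEnable () {
		// 200×100 with no depth buffer; the post FX stack renders to its own
		// intermediate texture that has one.
		rt = new RenderTexture(200, 100, 0);
		displayCamera.targetTexture = rt;
		screenRenderer.material.mainTexture = rt;
	}

	void OnDisable () {
		displayCamera.targetTexture = null;
		rt.Release();
		Destroy(rt);
	}
}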
