1.UnityObjectToClipPos(float4 pos)
Transforms pos from object space to clip space.
2.Transforming a normal from object space to world space
(1)UnityObjectToWorldNormal(float3 normal)
(2)mul(i.normal, (float3x3)unity_WorldToObject);
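Why option (2) works: transforming a normal with the model matrix itself breaks under non-uniform scale; the correct transform is the inverse-transpose of the object-to-world matrix, and multiplying the normal on the left of worldToObject is exactly that. A minimal numpy sketch (illustrative only, not Unity API):

```python
import numpy as np

# Object-to-world matrix with a non-uniform scale (x is stretched 2x).
M = np.diag([2.0, 1.0, 1.0])

# A surface direction (tangent) and its perpendicular normal in object space.
tangent = np.array([1.0, 1.0, 0.0])
normal = np.array([1.0, -1.0, 0.0])

# Naive transform: the result is no longer perpendicular to the surface.
naive = M @ normal                      # (2, -1, 0)

# Inverse-transpose transform: perpendicularity is preserved. This is what
# mul(i.normal, (float3x3)unity_WorldToObject) computes, since multiplying a
# row vector by worldToObject equals applying (M^-1)^T to a column vector.
correct = np.linalg.inv(M).T @ normal   # (0.5, -1, 0)

print(np.dot(M @ tangent, naive))       # 3.0 -- broken, not perpendicular
print(np.dot(M @ tangent, correct))     # 0.0 -- still perpendicular
```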
3.Fragment --> stencil test --> depth test --> blending --> color buffer
4.Alpha test
If a fragment's alpha fails the test condition, the fragment is discarded. A discarded fragment receives no further processing, i.e. none of the stages after the fragment stage listed in point 3. Use the clip function to perform the test.
5.WorldSpaceViewDir
Takes an object-space position and returns the world-space view direction from that point to the camera.
6.ObjSpaceViewDir
Takes an object-space position and returns the object-space view direction from that point to the camera.
7.WorldSpaceLightDir
Takes an object-space position and returns the world-space light direction from that point to the light source.
8.ObjSpaceLightDir
Takes an object-space position and returns the object-space light direction from that point to the light source.
9.UnityWorldSpaceViewDir
Takes a world-space position and returns the world-space view direction from that point to the camera.
10.UnityWorldSpaceLightDir
Takes a world-space position and returns the world-space light direction from that point to the light source.
11.UnityObjectToWorldDir
Transforms a direction vector from object space to world space.
12.UnityWorldToObjectDir
Transforms a direction vector from world space to object space.
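A direction transform differs from a point transform in that translation must be ignored: in homogeneous coordinates that means w = 0 instead of w = 1 (equivalently, using only the upper-left 3x3). A minimal numpy sketch of the difference (Unity's helper additionally normalizes the result):

```python
import numpy as np

# Object-to-world matrix: rotate 90 degrees about Z, then translate by (5, 0, 0).
M = np.array([
    [0.0, -1.0, 0.0, 5.0],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
])

p = np.array([1.0, 0.0, 0.0])

# Transforming a POINT uses w = 1, so the translation applies.
point_ws = (M @ np.append(p, 1.0))[:3]

# Transforming a DIRECTION uses w = 0, so the translation is ignored --
# this is the behavior UnityObjectToWorldDir relies on.
dir_ws = (M @ np.append(p, 0.0))[:3]

print(point_ws)  # [5. 1. 0.]
print(dir_ws)    # [0. 1. 0.]
```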
13.In Unity the uv origin is at the bottom-left corner.
14.Ambient light color
UNITY_LIGHTMODEL_AMBIENT
15.How to write depth into the screen-space depth texture
(1)The shader has a pass with LightMode = ShadowCaster; the pass can be defined explicitly or come from the Fallback.
(2)Render Queue <= 2500
(Note: for alpha-tested (clip) shaders, define a custom ShadowCaster pass that discards the corresponding fragments, so they are not written into the depth texture.)
16._WorldSpaceCameraPos(float3)
World-space position of the camera.
17._ProjectionParams
x is 1.0 (or -1.0 if currently rendering with a flipped projection matrix)
y is the camera's near clip plane
z is the camera's far clip plane
w is 1/farplane
18._ScreenParams
x is the width of the camera's render target in pixels
y is the height of the camera's render target in pixels
z 1.0 + 1.0/width
w 1.0 + 1.0/height
19.
(1)In Cg, matrix types such as float4x4 and float3x3 are stored row-major.
Example:
float3x3 m = float3x3(
1.1, 1.2, 1.3, // first row
2.1, 2.2, 2.3, // second row
3.1, 3.2, 3.3 // third row
);
float3 row2 = m[2]; // = float3(3.1, 3.2, 3.3)
float m20 = m[2][0]; // = 3.1
float m21 = m[2].y; // = 3.2
(2)In GLSL, matrix types such as mat3 are stored column-major.
mat3 m = mat3(v1, v2, v3) // v1, v2, v3 become the columns
Example:
Section 5.6 of the GLSL reference manual says you can access mat4 array elements using operator[][] style syntax in the following way:
mat4 m;
m[1] = vec4(2.0); // sets the second column to all 2.0
m[0][0] = 1.0; // sets the upper left element to 1.0
m[2][3] = 2.0; // sets the 4th element of the third column to 2.0
Remember, OpenGL defaults to column major matrices, which means access is of the format mat[col][row]. In the example, m[2][3] sets the 4th ROW (index 3) of the 3rd COLUMN (index 2) to 2.0. In the example m[1] = vec4(2.0), an entire column is set at once, because when only ONE index is used it means that COLUMN (m[1] refers to the SECOND COLUMN VECTOR).
(3)In HLSL, matrix types such as float3x4 are stored row-major.
Example (row-major):
float3x3 mat = float3x3(A,B,C)
mat._11_12_13 = A.xyz
Example:
float2x2 fMatrix;
temp = fMatrix[0] // read the first row
(4)Unity's Matrix4x4 (C#) is stored column-major.
Example:
Vector4 v1 = new Vector4(0.0f, 0.0f, 0.0f, 0.0f);
Vector4 v2 = new Vector4(0.0f, 0.0f, 0.0f, 0.0f);
Vector4 v3 = new Vector4(0.5f, 0.0f, 0.0f, 0.0f);
Vector4 v4 = new Vector4(0.0f, 0.0f, 0.0f, 0.0f);
Matrix4x4 m = new Matrix4x4(v1, v2, v3, v4); // v1 is the first column, v2 the second, etc.
float v = m[0, 2]; // value is 0.5f
mMaterial.SetMatrix("_M", m);
In the shader, how do we read this 0.5 value?
_M[0][2] gives 0.5: the shader's float4x4 is indexed row-major, so _M[0] is the first row and _M[0][2] is the element at row 1, column 3.
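The index conventions on both sides can be checked with a small numpy sketch (np.column_stack stands in for the Matrix4x4 constructor, which takes columns):

```python
import numpy as np

# Build the same matrix as the C# snippet: v1..v4 are COLUMNS.
v1 = [0.0, 0.0, 0.0, 0.0]
v2 = [0.0, 0.0, 0.0, 0.0]
v3 = [0.5, 0.0, 0.0, 0.0]
v4 = [0.0, 0.0, 0.0, 0.0]
M = np.column_stack([v1, v2, v3, v4])  # like Matrix4x4(v1, v2, v3, v4)

# C# side: m[0, 2] indexes (row 0, column 2).
print(M[0, 2])   # 0.5

# HLSL side: _M[0] is the FIRST ROW, so _M[0][2] is the same element.
row0 = M[0]
print(row0[2])   # 0.5
```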
20.Rotation and reflection matrices are orthogonal, so their inverse is simply their transpose.
21.About the UNITY_UV_STARTS_AT_TOP macro
In post-processing, when using more than two textures at the same time with anti-aliasing enabled, pay attention to flipping the y value of the uv coordinates.
In my own tests, Unity 5.4.1 needs the flip, while Unity 5.6.3 and Unity 2017.4.1 no longer do; presumably Unity adjusted this internally. Since Unity is a black box, chasing these version-specific details is of limited value; what matters is understanding the underlying cause:
Render Texture coordinates
Vertical Texture coordinate conventions differ between two types of platforms: Direct3D-like and OpenGL-like.
- Direct3D-like: The coordinate is 0 at the top and increases downward. This applies to Direct3D, Metal and consoles.
- OpenGL-like: The coordinate is 0 at the bottom and increases upward. This applies to OpenGL and OpenGL ES.
This difference tends not to have any effect on your project, other than when rendering into a Render Texture. When rendering into a Texture on a Direct3D-like platform, Unity internally flips rendering upside down. This makes the conventions match between platforms, with the OpenGL-like platform convention as the standard.
Unity internally follows the OpenGL convention!
Usage:
#if UNITY_UV_STARTS_AT_TOP
if(_MainTex_TexelSize.y < 0)
{
o.uv.w = 1 - o.uv.w;
}
#endif
22.smoothstep(a,b,x)
Assuming a < b:
(1)if x < a, the return value is 0
(2)if x > b, the return value is 1
(3)implementation:
float smoothstep(float a, float b, float x)
{
float t = saturate((x - a)/(b - a));
return t*t*(3.0 - (2.0*t));
}
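The same implementation ported to Python, as a quick check of the boundary and midpoint behavior:

```python
def saturate(x):
    # Cg's saturate: clamp to [0, 1].
    return min(max(x, 0.0), 1.0)

def smoothstep(a, b, x):
    t = saturate((x - a) / (b - a))
    return t * t * (3.0 - 2.0 * t)

print(smoothstep(0.0, 1.0, -0.5))  # 0.0  (x < a)
print(smoothstep(0.0, 1.0, 1.5))   # 1.0  (x > b)
print(smoothstep(0.0, 1.0, 0.5))   # 0.5  (midpoint: 0.25 * (3 - 1))
```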
23.unity_ObjectToWorld
o.world_pos = mul(unity_ObjectToWorld, i.pos);
Transforms an object-space vertex position to world space.
24.tex2Dproj(sampler2D samp, float4 s)
http://developer.download.nvidia.com/cg/tex2Dproj.html
Samples the samp texture at the coordinates float2 uv = s.xy / s.w (a projective divide).
25.Definition of UNITY_PROJ_COORD
#if defined(SHADER_API_PSP2)
#define UNITY_BUGGY_TEX2DPROJ4
#define UNITY_PROJ_COORD(a) (a).xyw
#else
#define UNITY_PROJ_COORD(a) a
#endif
26.Implementation of ComputeScreenPos
inline float4 ComputeNonStereoScreenPos(float4 pos) {
float4 o = pos * 0.5f;
o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
o.zw = pos.zw;
return o;
}
inline float4 ComputeScreenPos(float4 pos) {
float4 o = ComputeNonStereoScreenPos(pos);
#if defined(UNITY_SINGLE_PASS_STEREO)
o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
#endif
return o;
}
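What this computes, sketched in Python (proj_x stands in for _ProjectionParams.x; the division by w happens later, typically via tex2Dproj or an explicit divide in the fragment shader):

```python
def compute_screen_pos(pos, proj_x=1.0):
    # Mirrors ComputeNonStereoScreenPos:
    # o.xy = pos.xy * 0.5 * (1, proj_x) + 0.5 * pos.w, o.zw = pos.zw
    x, y, z, w = pos
    return (x * 0.5 + 0.5 * w, y * 0.5 * proj_x + 0.5 * w, z, w)

# Clip-space point on the left edge (x/w = -1) and top edge (y/w = +1), w = 2:
o = compute_screen_pos((-2.0, 2.0, 0.0, 2.0))
uv = (o[0] / o[3], o[1] / o[3])  # the per-fragment divide by w
print(uv)  # (0.0, 1.0) -- clip-space [-1, 1] remapped to screen uv [0, 1]
```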
27.Implementation of ComputeGrabScreenPos
inline float4 ComputeGrabScreenPos (float4 pos) {
#if UNITY_UV_STARTS_AT_TOP
float scale = -1.0;
#else
float scale = 1.0;
#endif
float4 o = pos * 0.5f;
o.xy = float2(o.x, o.y*scale) + o.w;
#ifdef UNITY_SINGLE_PASS_STEREO
o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
#endif
o.zw = pos.zw;
return o;
}
28.How to render an object into the depth texture (_CameraDepthTexture)
(1)The object's shader, or its Fallback, has a pass with LightMode = ShadowCaster.
(2)The object's render Queue is <= 2500.
29.How to render an object into the depth+normals texture (_CameraDepthNormalsTexture)
(1)The object must have the correct RenderType:
A.Alpha-tested (clip) shaders: RenderType is TransparentCutout, so the depth rendered into _CameraDepthNormalsTexture is also clipped.
B.Transparent shaders: RenderType is Transparent; these are not rendered into _CameraDepthNormalsTexture.
C.Opaque shaders: RenderType is Opaque; these are rendered fully into _CameraDepthNormalsTexture.
30.Bug when multiplying a matrix and a vector of different dimensions
A bug showed up on a low-end device (Honor 4X) but not on a high-end one (Samsung S8).
Cause: mul(UNITY_MATRIX_MV, i.normal) looks harmless, but it multiplies a float4x4 matrix by a float3 vector. The mismatched dimensions broke the shader on that device; changing it to mul((float3x3)UNITY_MATRIX_MV, i.normal) fixed everything.
31.Which vertex winding order is the front face of a Unity Mesh?
Vertices wound clockwise form the front face, counter-clockwise the back face; by default only front faces are rendered.
32.Post-processing passes must set ZTest Always and ZWrite Off
We ran into a case in a project where the effect failed on mobile without them.
33.UnpackNormal
inline fixed3 UnpackNormalDXT5nm (fixed4 packednormal)
{
fixed3 normal;
normal.xy = packednormal.wy * 2 - 1;
normal.z = sqrt(1 - saturate(dot(normal.xy, normal.xy)));
return normal;
}
inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(UNITY_NO_DXT5nm)
return packednormal.xyz * 2 - 1;
#else
return UnpackNormalDXT5nm(packednormal);
#endif
}
// Example
Note: when using UnpackNormal, the Texture Type of _NormalTex must be set to Normal Map. Unity then knows it is a normal map and encodes (compresses) it on import, and UnpackNormal decodes it accordingly.
Usage: UnpackNormal(tex2D(_NormalTex, i.uv))
Or:
Leave _NormalTex as a regular Texture type; Unity does not encode it, so no decoding is needed:
fixed3 normal = tex2D(_NormalTex, i.uv).xyz;
normal = normal * 2 - 1;
34.How shader property values are provided
Shader property values are found and provided to shaders from these places:
- Per-Renderer values set in MaterialPropertyBlock. This is typically "per-instance" data (e.g. a customized tint color for many objects that all share the same material).
- Values set in the Material that is used on the rendered object.
- Global shader properties, set either by Unity rendering code itself (see built-in shader variables) or from your own scripts (e.g. Shader.SetGlobalTexture).
The order of precedence is as above: per-instance data overrides everything; then Material data is used; and finally, if the shader property does not exist in either of those places, the global property value is used. If there is no shader property value defined anywhere, a "default" value is provided (zero for floats, black for colors, an empty white texture for textures).
35.xxx_TexelSize
Value: (1 / width, 1 / height, width, height)
36.xxx_ST
Value: (uvXScale, uvYScale, uvXOffset, uvYOffset), i.e. uv tiling in xy and offset in zw.
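This is what Unity's TRANSFORM_TEX macro applies (sketched here as a hypothetical Python helper): uv * _ST.xy + _ST.zw.

```python
def transform_tex(uv, st):
    # What TRANSFORM_TEX expands to: uv * name_ST.xy + name_ST.zw
    return (uv[0] * st[0] + st[2], uv[1] * st[1] + st[3])

st = (2.0, 2.0, 0.25, 0.0)  # tiling (2, 2), offset (0.25, 0)
print(transform_tex((0.5, 0.5), st))  # (1.25, 1.0)
```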
37.Most common blend types:
Blend SrcAlpha OneMinusSrcAlpha // Traditional transparency
Blend One OneMinusSrcAlpha // Premultiplied transparency
Blend One One // Additive
Blend OneMinusDstColor One // Soft Additive
Blend DstColor Zero // Multiplicative
Blend DstColor SrcColor // 2x Multiplicative
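Fixed-function blending always computes result = src * srcFactor + dst * dstFactor per channel; the Blend line just picks the two factors. A minimal Python model (blend is an illustrative helper, not Unity API), shown for traditional transparency:

```python
def blend(src, dst, src_factor, dst_factor):
    # result = src * srcFactor + dst * dstFactor, per channel
    return tuple(s * sf + d * df
                 for s, d, sf, df in zip(src, dst, src_factor, dst_factor))

src = (1.0, 0.0, 0.0)   # fragment color (red)
dst = (0.0, 0.0, 1.0)   # color already in the buffer (blue)
alpha = 0.25

# Blend SrcAlpha OneMinusSrcAlpha (traditional transparency)
sf = (alpha,) * 3
df = (1.0 - alpha,) * 3
print(blend(src, dst, sf, df))  # (0.25, 0.0, 0.75)
```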
38.
(1)Rotation matrices are orthogonal.
(2)Scale matrices are not orthogonal (except when every scale factor is +/-1).
(3)Translation matrices are not orthogonal.
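These claims (and point 20's inverse-equals-transpose property) are easy to verify numerically with numpy:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix
S = np.diag([2.0, 3.0])                          # non-uniform scale matrix

# Orthogonal: R @ R.T == I, hence inverse(R) == transpose(R).
print(np.allclose(R @ R.T, np.eye(2)))           # True
print(np.allclose(np.linalg.inv(R), R.T))        # True

# Scale is not orthogonal (unless every factor is +/-1): S @ S.T = diag(4, 9).
print(np.allclose(S @ S.T, np.eye(2)))           # False
```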