Rendering 6 Bumpiness

https://catlikecoding.com/unity/tutorials/rendering/part-6/

rendering bumpiness
perturb normals to simulate bumps.
compute normals from a height field.
sample and blend normal maps.
convert from tangent space to world space.

this is the sixth part of a tutorial series about rendering. the previous part added support for more complex lighting. this time, we will create the illusion of more complex surfaces.

this tutorial was made with unity 5.4.0f3.

it does not look like a smooth sphere anymore.

1 bump mapping
we can use textures to create materials with complex color patterns. we can use normals to adjust the apparent surface curvature. with these tools, we can produce all kinds of surfaces. however, the surface of a single triangle will always be smooth. it can only interpolate between three normal vectors, so it cannot represent a rough or varied surface. this becomes obvious when forsaking an albedo texture and using only a solid color.

a good example of this flatness is a simple quad. add one to the scene and make it point upward, by rotating it 90° around the X axis. give it our lighting material, without textures and with a fully white tint.


because the default skybox is very bright, it is hard to see the contribution of the other lights. so let us turn it off for this tutorial. you can do so by decreasing the Ambient Intensity to zero in the lighting settings. then only enable the main directional light. find a good point of view in the scene view so you can see some light differences on the quad.

no ambient, only the main directional light.

how could we make this quad appear non-flat? we could fake roughness by baking shading into the albedo texture. however, that would be completely static. if the lights change, or the objects move, so should the shading. if it does not, the illusion will be broken. and in case of specular reflections, even the camera is not allowed to move.

we can change the normals to create the illusion of a curving surface. but there are only four normals per quad, one for each vertex. this can only produce smooth transitions. if we want a varied and rough surface, we need more normals.

we could subdivide our quad into smaller quads. this gives us more normals to work with. in fact, once we have more vertices, we can also move them around. then we do not need the illusion of roughness, we can make an actual rough surface! but the sub-quads still have the same problem. are we going to subdivide those too? that will lead to huge meshes with an enormous amount of triangles. that is fine when creating 3D models, but is not feasible for real-time use in games.

1.1 height maps
a rough surface has a non-uniform elevation, compared to a flat surface. if we store this elevation data in a texture, we might be able to use it to generate normal vectors per fragment, instead of per vertex. this idea is known as bump mapping, and was first formulated by James Blinn.

here is a height map to accompany our marble texture. it is an RGB texture with each channel set to the same value. import it into your project, with the default import settings.

height map for marble.

add a _HeightMap texture property to My First Lighting Shader. as it will use the same UV as our albedo texture, it does not need its own scale and offset parameters. the default texture does not really matter, as long as it’s uniform. Gray will do.

Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Albedo", 2D) = "white" {}
		[NoScaleOffset] _HeightMap ("Heights", 2D) = "gray" {}
		[Gamma] _Metallic ("Metallic", Range(0, 1)) = 0
		_Smoothness ("Smoothness", Range(0, 1)) = 0.1
	}

Material with height map.

add the matching variable to the My Lighting include file, so we can access the texture. let us see how it looks, by factoring it into the albedo.

float4 _Tint;
sampler2D _MainTex;
float4 _MainTex_ST;

sampler2D _HeightMap;

…

float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
	i.normal = normalize(i.normal);

	float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);

	float3 albedo = tex2D(_MainTex, i.uv).rgb * _Tint.rgb;
	albedo *= tex2D(_HeightMap, i.uv);

	…
}

Using heights as colors.

1.2 adjusting normals
Because our fragment normals are going to become more complex, let's move their initialization to a separate function. Also, get rid of the height map test code.

void InitializeFragmentNormal(inout Interpolators i) {
	i.normal = normalize(i.normal);
}

float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
	InitializeFragmentNormal(i);

	float3 viewDir = normalize(_WorldSpaceCameraPos - i.worldPos);

	float3 albedo = tex2D(_MainTex, i.uv).rgb * _Tint.rgb;
//	albedo *= tex2D(_HeightMap, i.uv);

	…
}

Because we’re currently working with a quad that lies in the XZ plane, its normal vector is always (0, 1, 0). So we can use a constant normal, ignoring the vertex data. Let’s do that for now, and worry about different orientations later.

void InitializeFragmentNormal(inout Interpolators i) {
	i.normal = float3(0, 1, 0);
	i.normal = normalize(i.normal);
}

how do we include the height data in this? a naive approach is to use the height as the normal’s Y component, before normalizing.

void InitializeFragmentNormal(inout Interpolators i) {
	float h = tex2D(_HeightMap, i.uv);
	i.normal = float3(0, h, 0);
	i.normal = normalize(i.normal);
}

Using heights as normals.

this does not work, because normalization converts every vector back to (0,1,0). the black lines appear where the heights are zero, because normalization fails in those areas. we need a different method.

1.3 finite difference
because we are working with texture data, we have two-dimensional data. there are the U and V dimensions. the heights can be thought of as going in a third dimension, upward. we could say that the texture represents a function, f(u, v) = h. let us begin by limiting ourselves to only the U dimension, so the function is reduced to f(u) = h. can we derive normal vectors from this function?

if we knew the slope of the function, then we could use it to compute its normal at any point. the slope is defined by the rate of change of h. this is its derivative, h’. because h is the result of a function, h’ is the result of a function as well. so we have the derivative function f’(u)=h’.

unfortunately, we do not know what these functions are. but we can approximate them. we can compare the heights at two different points in our texture. for example, at the extreme ends, using U coordinates 0 and 1. the difference between those two samples is the rate of change between those coordinates. expressed as a function, that is f(1) − f(0). we can use this to construct a tangent vector,
t = (1, f(1) − f(0), 0)

the red vector is the tangent vector.

that is of course a very crude approximation of the real tangent vector. it treats the entire texture as a linear slope. we can do better by sampling two points that are closer together. for example, U coordinates 0 and 1/2. the rate of change between those two points is f(1/2) − f(0), per half a unit of U.

Because it is easier to deal with the rate of change per whole unit, we divide it by the distance between the points, so we get (f(1/2) − f(0)) / (1/2) = 2(f(1/2) − f(0)).
that gives us the tangent vector t = (1, 2(f(1/2) − f(0)), 0).
in general, we have to do this relative to the U coordinate of every fragment that we render. the distance to the next point is defined by a constant δ. so the derivative function is approximated by f'(u) ≈ (f(u + δ) − f(u)) / δ.
The smaller δ becomes, the better we approximate the true derivative function. Of course it cannot become zero, but when taken to its theoretical limit, you get f'(u) = lim(δ→0) (f(u + δ) − f(u)) / δ.
This method of approximating a derivative is known as the finite difference method. With that, we can construct tangent vectors at any point, t = (1, (f(u + δ) − f(u)) / δ, 0).
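to see this approximation at work outside the shader, here is a quick Python sketch. the height function h(u) = sin u is a made-up example, not our marble heights; the point is only that the forward difference approaches the true slope as δ shrinks.

```python
import math

def h(u):
    # hypothetical height function standing in for the height map
    return math.sin(u)

def forward_tangent(u, delta):
    # t = (1, (f(u + d) - f(u)) / d, 0)
    slope = (h(u + delta) - h(u)) / delta
    return (1.0, slope, 0.0)

# the true slope of sin at u = 0 is cos(0) = 1
for delta in (0.5, 0.1, 0.001):
    print(delta, forward_tangent(0.0, delta)[1])  # slope approaches 1
```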

1.4 from tangent to normal
what value could we use for δ in our shader? the smallest sensible difference would cover a single texel of our texture. we can retrieve this information in the shader via a float4 variable with the _TexelSize suffix. unity sets those variables, similar to _ST variables.

sampler2D _HeightMap;
float4 _HeightMap_TexelSize;

What is stored in _TexelSize variables?
Its first two components contain the texel sizes, as fractions of U and V. The other two components contain the pixel dimensions. For example, in the case of a 256×128 texture, it will contain (0.00390625, 0.0078125, 256, 128).
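as a tiny illustration of that layout (plain Python, nothing Unity-specific), the texel sizes are just the reciprocals of the pixel dimensions:

```python
def texel_size(width, height):
    # mirrors the _TexelSize layout: (1/width, 1/height, width, height)
    return (1.0 / width, 1.0 / height, float(width), float(height))

print(texel_size(256, 128))  # (0.00390625, 0.0078125, 256.0, 128.0)
```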

Now we can sample the texture twice, compute the height derivative, and construct a tangent vector. Let’s directly use that as our normal vector.

	float2 delta = float2(_HeightMap_TexelSize.x, 0);
	float h1 = tex2D(_HeightMap, i.uv);
	float h2 = tex2D(_HeightMap, i.uv + delta);
	i.normal = float3(1, (h2 - h1) / delta.x, 0);

	i.normal = normalize(i.normal);

Actually, because we’re normalizing anyway, we can scale our tangent vector by δ. This eliminates a division and improves precision.

i.normal = float3(delta.x, h2 - h1, 0);

Using tangents as normals.

we get a very pronounced result. that is because the heights have a range of one unit, which produces very steep slopes. as the perturbed normals do not actually change the surface, we do not want such huge differences. we can scale the heights by an arbitrary factor. let us reduce the range to a single texel. we can do that by multiplying the height difference by δ, or by simply replacing δ with 1 in the tangent.

i.normal = float3(1, h2 - h1, 0);

Scaled heights.

this is starting to look good, but the lighting is wrong. it is far too dark. that is because we are directly using the tangent as a normal. to turn it into an upward-pointing normal vector, we have to rotate the tangent 90° around the Z axis.

i.normal = float3(h1 - h2, 1, 0);

Using actual normals.

1.5 central difference
we have used finite difference approximations to create normal vectors. specifically, by using the forward difference method. we take a point, and then look in one direction to determine the slope. as a result, the normal is biased in that direction. to get a better approximation of the normal, we can instead offset the sample points in both directions. this centers the linear approximation on the current point, and is known as the central difference method. this changes the derivative function to
f'(u) ≈ (f(u + δ/2) − f(u − δ/2)) / δ

	float2 delta = float2(_HeightMap_TexelSize.x * 0.5, 0);
	float h1 = tex2D(_HeightMap, i.uv - delta);
	float h2 = tex2D(_HeightMap, i.uv + delta);
	i.normal = float3(h1 - h2, 1, 0);

this shifts the bumps slightly, so they are better aligned with the height field. besides that, their shape does not change.
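the accuracy difference is easy to demonstrate outside the shader. in this Python sketch, with a made-up height function h(u) = sin u, the central difference gets much closer to the true slope than the forward difference, for the same sample spacing:

```python
import math

def h(u):
    return math.sin(u)  # hypothetical height function

def forward_diff(u, delta):
    # sample at u and one full delta ahead
    return (h(u + delta) - h(u)) / delta

def central_diff(u, delta):
    # sample half a delta in both directions, centered on u
    return (h(u + delta / 2) - h(u - delta / 2)) / delta

u, delta = 0.3, 0.1
true_slope = math.cos(u)
print(abs(forward_diff(u, delta) - true_slope))  # error shrinks linearly with delta
print(abs(central_diff(u, delta) - true_slope))  # error shrinks quadratically, much smaller
```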

1.6 using both dimensions
the normals that we have created only take the change along U into account. we have been using the partial derivative of the function f(u, v) with respect to u. that is ∂f/∂u, or just fu' for short. we can also create normals along V, by using fv'. in that case, the tangent vector is

t = (0, fv', 1)

and the normal vector is n = (0, 1, −fv').

	float2 du = float2(_HeightMap_TexelSize.x * 0.5, 0);
	float u1 = tex2D(_HeightMap, i.uv - du);
	float u2 = tex2D(_HeightMap, i.uv + du);
	i.normal = float3(u1 - u2, 1, 0);

	float2 dv = float2(0, _HeightMap_TexelSize.y * 0.5);
	float v1 = tex2D(_HeightMap, i.uv - dv);
	float v2 = tex2D(_HeightMap, i.uv + dv);
	i.normal = float3(0, 1, v1 - v2);
	i.normal = normalize(i.normal);

Normals along V.

we now have access to both the U and V tangents. together, these vectors describe the surface of the height field at our fragment. by computing their cross product, we find the normal vector of the 2D height field.

	float2 du = float2(_HeightMap_TexelSize.x * 0.5, 0);
	float u1 = tex2D(_HeightMap, i.uv - du);
	float u2 = tex2D(_HeightMap, i.uv + du);
	float3 tu = float3(1, u2 - u1, 0);

	float2 dv = float2(0, _HeightMap_TexelSize.y * 0.5);
	float v1 = tex2D(_HeightMap, i.uv - dv);
	float v2 = tex2D(_HeightMap, i.uv + dv);
	float3 tv = float3(0, v2 - v1, 1);

	i.normal = cross(tv, tu);
	i.normal = normalize(i.normal);

Complete normals.

when you calculate the cross product of the tangent vectors, you will see that cross(tv, tu) = (−(u2 − u1), 1, −(v2 − v1)) = (u1 − u2, 1, v1 − v2). so we can construct the vector directly, instead of having to rely on the cross function.

void InitializeFragmentNormal(inout Interpolators i) {
	float2 du = float2(_HeightMap_TexelSize.x * 0.5, 0);
	float u1 = tex2D(_HeightMap, i.uv - du);
	float u2 = tex2D(_HeightMap, i.uv + du);
//	float3 tu = float3(1, u2 - u1, 0);

	float2 dv = float2(0, _HeightMap_TexelSize.y * 0.5);
	float v1 = tex2D(_HeightMap, i.uv - dv);
	float v2 = tex2D(_HeightMap, i.uv + dv);
//	float3 tv = float3(0, v2 - v1, 1);

//	i.normal = cross(tv, tu);
	i.normal = float3(u1 - u2, 1, v1 - v2);
	i.normal = normalize(i.normal);
}
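if you want to double-check that shortcut, here is a small Python sketch with made-up height differences. the cross product and the direct construction produce the same vector:

```python
def cross(a, b):
    # standard 3D cross product
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# made-up height samples along U and V
u1, u2 = 0.2, 0.5
v1, v2 = 0.7, 0.3

tu = (1.0, u2 - u1, 0.0)  # tangent along U
tv = (0.0, v2 - v1, 1.0)  # tangent along V

print(cross(tv, tu))            # matches (u1 - u2, 1, v1 - v2)
print((u1 - u2, 1.0, v1 - v2))  # the direct construction
```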

2 normal mapping
while bump mapping works, we have to perform multiple texture samples and finite difference calculations. this seems like a waste, as the resulting normal should always be the same. why do all this work every frame? we can do it once and store the normals in a texture.

this means that we need a normal map. i could provide one, but we can let unity do the work for us. change the Texture Type of the height map to Normal Map. unity automatically switches the texture to use trilinear filtering, and assumes that we want to use the grayscale image data to generate a normal map. this is exactly what we want, but change the Bumpiness to a much lower value, like 0.05.

after applying the import settings, unity will compute the normal map. the original height map still exists, but unity internally uses the generated map.

like we did when visualizing normals as colors, they have to be adjusted to fit inside the 0–1 range. so they are stored as (N + 1) / 2. this would suggest that flat areas will appear light green. however, they appear light blue instead. that is because the most common convention for normal maps is to store the up direction in the Z component. the Y and Z coordinates are swapped, from unity's point of view.
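a small Python sketch of this encoding (the vectors are illustrative, not sampled from a real map) shows why flat areas look light blue rather than light green:

```python
def encode(n):
    # store a unit normal as a color: (N + 1) / 2 per component
    return tuple((c + 1.0) / 2.0 for c in n)

def decode(color):
    # back to the -1..1 range: 2C - 1
    return tuple(2.0 * c - 1.0 for c in color)

flat = (0.0, 1.0, 0.0)                 # straight up, in Unity's Y-up convention
print(encode(flat))                    # (0.5, 1.0, 0.5), light green
swapped = (flat[0], flat[2], flat[1])  # normal maps store up in Z instead
print(encode(swapped))                 # (0.5, 0.5, 1.0), the familiar light blue
```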

2.1 Sampling the Normal Map
Because a normal map is quite different than a height map, rename the shader property accordingly.

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Albedo", 2D) = "white" {}
		[NoScaleOffset] _NormalMap ("Normals", 2D) = "bump" {}
//		[NoScaleOffset] _HeightMap ("Heights", 2D) = "gray" {}
		[Gamma] _Metallic ("Metallic", Range(0, 1)) = 0
		_Smoothness ("Smoothness", Range(0, 1)) = 0.1
	}

Now using a normal map.

We can remove all the height map code and replace it with a single texture sample, followed by a normalization.

sampler2D _NormalMap;

//sampler2D _HeightMap;
//float4 _HeightMap_TexelSize;

void InitializeFragmentNormal(inout Interpolators i) {
	i.normal = tex2D(_NormalMap, i.uv).rgb;
	i.normal = normalize(i.normal);
}

Of course, we have to convert the normals back to their original −1 to 1 range, by computing 2N − 1.

	i.normal = tex2D(_NormalMap, i.uv).xyz * 2 - 1;

Also, make sure to swap Y and Z.

i.normal = tex2D(_NormalMap, i.uv).xyz * 2 - 1;
i.normal = i.normal.xzy;

Using a normal map.

2.2 DXT5nm

there is definitely something wrong with our normals. that is because unity ended up encoding the normals in a different way than we expected. even though the texture preview shows RGB encoding, unity actually uses DXT5nm.

the DXT5nm format only stores the X and Y components of the normal. its Z component is discarded. The Y component is stored in the G channel, as you might expect. However, the X component is stored in the A channel. The R and B channels are not used.

so when using DXT5nm, we can only retrieve the first two components of our normal.

i.normal.xy = tex2D(_NormalMap, i.uv).wy * 2 - 1;

We have to infer the third component from the other two. Because normals are unit vectors, x² + y² + z² = 1, which gives us z = √(1 − x² − y²).

i.normal.xy = tex2D(_NormalMap, i.uv).wy * 2 - 1;
i.normal.z = sqrt(1 - dot(i.normal.xy, i.normal.xy));
i.normal = i.normal.xzy;

Theoretically, the result should be equal to the original Z component. However, because the texture has limited precision, and because of texture filtering, the result will often be different. It’s close enough, though.

Also, because of precision limitations, it is possible that 1 − x² − y² ends up slightly negative. make sure that this does not happen, by clamping the dot product.

i.normal.z = sqrt(1 - saturate(dot(i.normal.xy, i.normal.xy)));

Decoded DXT5nm normals.
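the same reconstruction can be sketched in Python, with the clamp playing the role of saturate; the sample values here are made up:

```python
import math

def reconstruct_z(x, y):
    # z = sqrt(1 - x^2 - y^2), with the squared length clamped to 0..1
    # so precision errors cannot push the value under the root below zero
    return math.sqrt(1.0 - min(1.0, max(0.0, x * x + y * y)))

print(reconstruct_z(0.0, 0.0))  # flat normal: z = 1
print(reconstruct_z(0.6, 0.8))  # on the unit circle: z is (nearly) 0
print(reconstruct_z(0.9, 0.9))  # out of bounds, clamped to z = 0 instead of NaN
```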

2.3 Scaling Bumpiness
Because we bake the normals into a texture, we cannot scale them in the fragment shader. Or can we?

We can scale the normal’s X and Y components before computing Z. If we decrease X and Y, then Z will become larger, resulting in a flatter surface. The opposite will happen if we increase them. So we can adjust the bumpiness that way. As we’re already clamping the squares of X and Y, we’ll never end up with invalid normals.
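here is the idea as a Python sketch, with hypothetical stored X and Y values:

```python
import math

def scaled_normal(x, y, scale):
    # scale the stored X and Y, then rebuild Z and normalize
    x *= scale
    y *= scale
    z = math.sqrt(1.0 - min(1.0, max(0.0, x * x + y * y)))
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

x, y = 0.3, 0.4
print(scaled_normal(x, y, 1.0))   # unscaled
print(scaled_normal(x, y, 0.25))  # flatter: z grows toward 1
print(scaled_normal(x, y, 0.0))   # completely flat: (0, 0, 1)
```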

Let’s add a bump scale property to our shader, just like Unity’s standard shader.

	Properties {
		_Tint ("Tint", Color) = (1, 1, 1, 1)
		_MainTex ("Albedo", 2D) = "white" {}
		[NoScaleOffset] _NormalMap ("Normals", 2D) = "bump" {}
		_BumpScale ("Bump Scale", Float) = 1
		[Gamma] _Metallic ("Metallic", Range(0, 1)) = 0
		_Smoothness ("Smoothness", Range(0, 1)) = 0.1
	}

Incorporate this scale into our normal calculations.

sampler2D _NormalMap;
float _BumpScale;

void InitializeFragmentNormal(inout Interpolators i) {
	i.normal.xy = tex2D(_NormalMap, i.uv).wy * 2 - 1;
	i.normal.xy *= _BumpScale;
	i.normal.z = sqrt(1 - saturate(dot(i.normal.xy, i.normal.xy)));
	i.normal = i.normal.xzy;
	i.normal = normalize(i.normal);
}

to get bumps of about the same strength as we got while using the height map, reduce the scale to something like 0.25.

Scaled bumps.

UnityStandardUtils contains the UnpackScaleNormal function. It automatically uses the correct decoding for normal maps, and scales normals as well. So let’s take advantage of that convenient function.

void InitializeFragmentNormal(inout Interpolators i) {
//	i.normal.xy = tex2D(_NormalMap, i.uv).wy * 2 - 1;
//	i.normal.xy *= _BumpScale;
//	i.normal.z = sqrt(1 - saturate(dot(i.normal.xy, i.normal.xy)));
	i.normal = UnpackScaleNormal(tex2D(_NormalMap, i.uv), _BumpScale);
	i.normal = i.normal.xzy;
	i.normal = normalize(i.normal);
}

2.4 combining albedo and bumps
Now that we have a functional normal map, you can check the difference it makes. When only using the marble albedo texture, our quad looks like perfectly polished stone. Add the normal map, and it becomes a much more interesting surface.
Without vs. with bumps.

3 tangent space
up to this point, we have assumed that we are shading a flat surface that is aligned with the XZ plane. but for this technique to be of any use, it must work for arbitrary geometry.


one of the faces of a cube can be aligned so that it matches our assumptions. we could support the other sides, by swapping and flipping dimensions. but this assumes a cube that is axis-aligned. when the cube has an arbitrary rotation, it becomes more complex. we have to transform the results of our bump mapping code so it matches the real orientation of the face.

can we know the orientation of a face? for that, we need vectors that define the u and v axes. those two, plus the normal vector, define a 3D space which matches our assumptions. once we have that space, we can use it to transform the bumps to world space.

as we already have the normal vector N, we only require one additional vector. the cross product of those two vectors defines the third one.

the additional vector is provided as part of the mesh's vertex data. As it lies in the plane defined by the surface normal, it is known as the tangent vector T. By convention, this vector matches the U axis, pointing to the right.

the third vector is known as B, the bitangent, or the binormal. as unity refers to it as the binormal, so will i. this vector defines the V axis, pointing forward. the standard way to derive the bitangent is via B = N × T. however, this will produce a vector that points backward, not forward. to correct this, the result has to be multiplied by −1. this factor is stored as an extra fourth component of T.
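the sign flip can be sketched in Python, using made-up vectors for a quad lying in the XZ plane (normal up, tangent along +X):

```python
def cross(a, b):
    # standard 3D cross product
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

normal = (0.0, 1.0, 0.0)
tangent = (1.0, 0.0, 0.0, -1.0)  # w = -1 stores the correction factor

raw = cross(normal, tangent[:3])
print(raw)  # points backward, along -Z

binormal = tuple(c * tangent[3] for c in raw)
print(binormal)  # points forward, along +Z, matching the V axis
```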

so we can use the vertex normal and tangent to construct a 3D space that matches the mesh surface. this space is known as tangent space, the tangent basis, or TBN space. in the case of a cube, tangent space is uniform per face. in the case of a sphere, tangent space wraps around its surface.

in order to construct this space, the mesh has to contain tangent vectors. fortunately, unity's default meshes contain this data. when importing a mesh into unity, you either import your own tangents, or have unity generate them for you.

3.1 Visualizing Tangent Space
To get an idea of how tangent space works, let’s code a quick visualization of it. Create a TangentSpaceVisualizer component with an OnDrawGizmos method.

using UnityEngine;

public class TangentSpaceVisualizer : MonoBehaviour {

	void OnDrawGizmos () {
	}
}

Each time gizmos are drawn, grab the mesh from the game object's mesh filter, and use it to show its tangent space. Of course this only works if there actually is a mesh. Grab the sharedMesh, not the mesh. The first gives us a reference to the mesh asset, while the second would create a copy.

void OnDrawGizmos () {
		MeshFilter filter = GetComponent<MeshFilter>();
		if (filter) {
			Mesh mesh = filter.sharedMesh;
			if (mesh) {
				ShowTangentSpace(mesh);
			}
		}
	}

	void ShowTangentSpace (Mesh mesh) {
	}