flat and wireframe shading

https://catlikecoding.com/unity/tutorials/advanced-rendering/flat-and-wireframe-shading/

this tutorial covers how to add support for flat shading and for showing the wireframe of a mesh. it uses advanced rendering techniques and assumes you are familiar with the material covered in the rendering series.

1 flat shading

meshes consist of triangles, which are flat by definition. we use surface normal vectors to add the illusion of curvature. this makes it possible to create meshes that represent seemingly smooth surfaces. however, sometimes you actually want to display flat triangles, either for style or to better see the mesh’s topology.

to make the triangles appear as flat as they really are, we have to use the surface normals of the actual triangles. this gives meshes a faceted appearance, known as flat shading. it can be done by making the normal vectors of a triangle’s three vertices equal to the triangle’s normal vector. this makes it impossible to share vertices between triangles, because then they would share normals as well, so we end up with more mesh data. it would be convenient if we could keep sharing vertices. also, it would be nice if we could use a flat-shading material with any mesh, overriding its original normals, if any.

besides flat shading, it can also be useful or stylish to show a mesh’s wireframe. this makes the topology of the mesh even more obvious. ideally, we can do both flat shading and wireframe rendering with a custom material, in a single pass, for any mesh. to create such a material, we need a new shader. we will use the final shader from part 20 of the rendering series as our base. duplicate My First Lighting Shader and change its name to Flat Wireframe.

Shader "Custom/Flat Wireframe" {}

can’t we already see the wireframe in the editor??

we can indeed see the wireframe in the scene view, but not in the game view, and not in builds. so if you want to see the wireframe outside the scene view, you have to use a custom solution. also, the scene view only displays the wireframe of the original mesh, regardless of whether the shader renders something else. so it does not work with vertex displacement or tessellation.

1.1 derivative instructions

because triangles are flat, their surface normal is the same at every point on their surface. hence, each fragment rendered for a triangle should use the same normal vector. but we currently do not know what this vector is. in the vertex program, we only have access to the vertex data stored in the mesh, processed in isolation. the normal vector stored there is of no use to us, unless it happens to represent the triangle’s normal. and in the fragment program, we only have access to the interpolated vertex normals.

to determine the surface normal, we need to know the orientation of the triangle in world space. this can be determined via the positions of the triangle’s vertices. assuming that the triangle is not degenerate, its normal vector is equal to the normalized cross product of two of the triangle’s edges. if it is degenerate, then it will not be rendered anyway. so, given a triangle’s vertices a, b, and c in counter-clockwise order, its normal vector is n = (c − a) × (b − a). normalizing that gives us the final unit normal vector n̂ = n / |n|.
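as a quick sketch of the formula above (TriangleNormal is just an illustrative helper name, not part of the tutorial’s shaders), this could be expressed in Cg as:

	// sketch of n̂ = normalize((c − a) × (b − a)) for vertices in counter-clockwise order
	float3 TriangleNormal (float3 a, float3 b, float3 c) {
		return normalize(cross(c - a, b - a));
	}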
we do not actually need to use the triangle’s vertices. any three points that lie in the triangle’s plane will do, as long as those points form a triangle too. specifically, we only need two vectors that lie in the triangle’s plane, as long as they are not parallel and have a length greater than zero.

one possibility is to use points corresponding to the world positions of rendered fragments. for example, the world position of the fragment we are currently rendering, the position of the fragment to the right of it, and the position of the fragment above it, in screen space.
what does this passage mean??

if we could access the world positions of adjacent fragments, then this could work. there is no way to directly access the data of adjacent fragments, but we can access the screen-space derivatives of this data. this is done via special instructions, which tell us the rate of change between fragments, for any piece of data, in either the screen-space X or Y dimension.

for example, our current fragment’s world position is p0. the position of the next fragment in the screen-space X dimension is px. the rate of change of the world position in the X dimension between these two fragments is thus ∂p/∂x = px − p0.

this is the partial derivative of the world position, in the screen-space X dimension. we can retrieve this data in the fragment program via the ddx function, by supplying it with the world position. let us do this at the start of the InitializeFragmentNormal function in My Lighting.cginc.

void InitializeFragmentNormal (inout Interpolators i) {
	float3 dpdx = ddx(i.worldPos);
	…
}

we can do the same for the screen-space Y dimension, finding ∂p/∂y = py − p0, by invoking the ddy function with the world position.

float3 dpdx = ddx(i.worldPos);
float3 dpdy = ddy(i.worldPos);

because these values represent the differences between the fragment world positions, they define two edges of a triangle. we do not actually know the exact shape of that triangle, but it is guaranteed to lie in the original triangle’s plane, and that is all that matters. so the final normal vector is the normalized cross product of those vectors. override the original normal with this vector.

float3 dpdx = ddx(i.worldPos);
float3 dpdy = ddy(i.worldPos);
i.normal = normalize(cross(dpdy, dpdx));

create a new material that uses our Flat Wireframe shader. any mesh that uses this material should be rendered using flat shading. the triangles will appear faceted, though this might be hard to see when you are also using normal maps. i use a standard capsule mesh in the screenshots for this tutorial, with a gray material.

from a distance, it might look like the capsule’s made out of quads, but those quads are made of two triangles each.

while this works, we have actually changed the behavior of all shaders that rely on the My Lighting include file. so remove the code that we just added.
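to be explicit, that means My Lighting.cginc goes back to its previous state; a sketch of what that looks like, with the removed lines shown as comments:

	void InitializeFragmentNormal (inout Interpolators i) {
		// the derivative-based lines are gone again:
		//	float3 dpdx = ddx(i.worldPos);
		//	float3 dpdy = ddy(i.worldPos);
		//	i.normal = normalize(cross(dpdy, dpdx));

		// …the rest of the function stays as it was
	}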

removed it again??? so this approach is not viable??? it seems so

1.2 geometry shaders

there is another way that we can determine the triangle’s normal. instead of using derivative instructions, we could use the actual triangle vertices to compute the normal vector. this requires us to do work per triangle, not per individual vertex or fragment. that is where geometry shaders come in.

the geometry shader stage sits in between the vertex and the fragment stage. it is fed the output of the vertex program, grouped per primitive. a geometry program can modify this data, before it gets interpolated and used to render fragments.
the added value of the geometry shader is that the vertices are fed to it per primitive, so three for each triangle in our case. whether mesh triangles share vertices does not matter, because the geometry program outputs new vertex data. this allows us to derive the triangle’s normal vector and use it as the normal for all three vertices.

let us put the code for our geometry shader in its own include file, MyFlatWireframe.cginc. have this file include My Lighting.cginc and define a MyGeometryProgram function. start with an empty void function.

#if !defined(FLAT_WIREFRAME_INCLUDED)
#define FLAT_WIREFRAME_INCLUDED

#include "My Lighting.cginc"

void MyGeometryProgram () {}

#endif

geometry shaders are only supported when targeting shader model 4.0 or higher. unity will automatically increase the target to this level if it was defined lower, but let us be explicit about it. to actually use a geometry shader, we have to add the #pragma geometry directive, just like for the vertex and fragment functions. finally, MyFlatWireframe has to be included instead of My Lighting. apply these changes to the base, additive, and deferred passes of our Flat Wireframe shader.

			#pragma target 4.0

			#pragma vertex MyVertexProgram
			#pragma fragment MyFragmentProgram
			#pragma geometry MyGeometryProgram

			//#include "My Lighting.cginc"
			#include "MyFlatWireframe.cginc"

this will result in shader compile errors, because we have not defined our geometry function correctly yet. we have to declare how many vertices it will output. this number can vary, so we must provide a maximum. because we are working with triangles, we will always output three vertices per invocation. this is specified by adding the maxvertexcount attribute to our function, with 3 as an argument.

[maxvertexcount(3)]
void MyGeometryProgram () {}

the next step is to define the input. as we are working with the output of the vertex program before interpolation, the data type is InterpolatorsVertex. so the type name is not technically correct in this case, but we did not take the geometry shader into consideration when we named it.

funny enough, the type name here is actually wrong

[maxvertexcount(3)]
void MyGeometryProgram (InterpolatorsVertex i) {}

we also have to declare which type of primitive we are working on, which is triangle in our case. this has to be specified before the input type. also, as triangles have three vertices each, we are working on an array of three structures. we have to define this explicitly.

[maxvertexcount(3)]
void MyGeometryProgram (triangle InterpolatorsVertex i[3]) {}

because the number of vertices that a geometry shader can output varies, we do not have a singular return type. instead, the geometry shader writes to a stream of primitives. in our case, it is a TriangleStream, which has to be specified as an inout parameter.

[maxvertexcount(3)]
void MyGeometryProgram (
	triangle InterpolatorsVertex i[3],
	inout TriangleStream stream
) {}

TriangleStream works like a generic type in C#. it needs to know the type of the vertex data that we are going to give it, which is still InterpolatorsVertex.

[maxvertexcount(3)]
void MyGeometryProgram (
	triangle InterpolatorsVertex i[3],
	inout TriangleStream<InterpolatorsVertex> stream
) {}

now that the function signature is correct, we have to put the vertex data into the stream. this is done by invoking the stream’s Append function once per vertex, in the order that we received them.

[maxvertexcount(3)]
void MyGeometryProgram (
	triangle InterpolatorsVertex i[3],
	inout TriangleStream<InterpolatorsVertex> stream
) {
	stream.Append(i[0]);
	stream.Append(i[1]);
	stream.Append(i[2]);
}

at this point our shader works again. we have added a custom geometry stage, which simply passes through the output from the vertex program, unmodified.

why does the geometry program look so different??
unity’s shader syntax is a mix of cg and hlsl code. mostly it looks like cg, but in this case it resembles hlsl.

1.3 modifying vertex normals per triangle

to find the triangle’s normal vector, begin by extracting the world positions of its three vertices.

	float3 p0 = i[0].worldPos.xyz;
	float3 p1 = i[1].worldPos.xyz;
	float3 p2 = i[2].worldPos.xyz;

	stream.Append(i[0]);
	stream.Append(i[1]);
	stream.Append(i[2]);

now we can perform the normalized cross product, once per triangle.

	float3 p0 = i[0].worldPos.xyz;
	float3 p1 = i[1].worldPos.xyz;
	float3 p2 = i[2].worldPos.xyz;

	float3 triangleNormal = normalize(cross(p1 - p0, p2 - p0));

replace the vertex normals with this triangle normal.

	float3 triangleNormal = normalize(cross(p1 - p0, p2 - p0));
	i[0].normal = triangleNormal;
	i[1].normal = triangleNormal;
	i[2].normal = triangleNormal;

we end up with the same results as before, but now using a geometry shader stage instead of relying on screen-space derivative instructions.
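for reference, assembling the snippets above, the complete geometry program now looks roughly like this:

	[maxvertexcount(3)]
	void MyGeometryProgram (
		triangle InterpolatorsVertex i[3],
		inout TriangleStream<InterpolatorsVertex> stream
	) {
		// derive the triangle normal from the three world positions
		float3 p0 = i[0].worldPos.xyz;
		float3 p1 = i[1].worldPos.xyz;
		float3 p2 = i[2].worldPos.xyz;
		float3 triangleNormal = normalize(cross(p1 - p0, p2 - p0));

		// override the vertex normals so the whole triangle uses one normal
		i[0].normal = triangleNormal;
		i[1].normal = triangleNormal;
		i[2].normal = triangleNormal;

		// emit the modified vertices in the order we received them
		stream.Append(i[0]);
		stream.Append(i[1]);
		stream.Append(i[2]);
	}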

which approach is best??
if flat shading is all you need, screen-space derivatives are the cheapest way to achieve that effect. then you can also strip normals from the mesh data (which unity can do automatically) and remove the normal interpolator data as well. in general, if you can get away with not using a custom geometry stage, do so. we will keep using the geometry approach though, because we will need it for wireframe rendering as well.

2 rendering the wireframe

after taking care of the flat shading, we move on to rendering the mesh’s wireframe. we are not going to create new geometry, nor will we use an extra pass to draw lines. we will create the wireframe visuals by adding a line effect on the inside of triangles, along their edges. this can create a convincing wireframe, although the lines defining a shape’s silhouette will appear half as thick as the lines on the inside. this usually is not very noticeable, so we will accept this inconsistency.

2.1 barycentric coordinates

to add line effects to the triangle edges, we need to know a fragment’s distance to the nearest edge. this means that topological information about the triangle needs to be available in the fragment program. this can be done by adding the barycentric coordinates of the triangle to the interpolated data.
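as a sketch of that idea (the InterpolatorsGeometry struct, its field names, and the TEXCOORD9 semantic are illustrative assumptions, not the tutorial’s final code), the geometry program could wrap the vertex data and assign one fixed barycentric coordinate per vertex, which the rasterizer then interpolates for every fragment:

	// hypothetical wrapper that adds barycentric coordinates to the interpolated data
	struct InterpolatorsGeometry {
		InterpolatorsVertex data;
		float3 barycentric : TEXCOORD9; // semantic index is an assumption
	};

	[maxvertexcount(3)]
	void MyGeometryProgram (
		triangle InterpolatorsVertex i[3],
		inout TriangleStream<InterpolatorsGeometry> stream
	) {
		InterpolatorsGeometry g0, g1, g2;
		g0.data = i[0];
		g1.data = i[1];
		g2.data = i[2];

		// each vertex gets one corner of barycentric space; after interpolation,
		// a fragment's smallest component tells how close it is to the nearest edge
		g0.barycentric = float3(1, 0, 0);
		g1.barycentric = float3(0, 1, 0);
		g2.barycentric = float3(0, 0, 1);

		stream.Append(g0);
		stream.Append(g1);
		stream.Append(g2);
	}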
