OGL (Tutorial 39): Silhouette Detection

http://ogldev.atspace.co.uk/www/tutorial39/tutorial39.html

Today we are going to discuss one way in which the silhouette of an object can be detected. To make things clearer, I am referring to the silhouette of a 3D object which is created when light falls upon it from an arbitrary direction. Moving the light source will likely change the silhouette accordingly. This is entirely different from silhouette detection in image space, which deals with finding the boundaries of an object in a 2D picture (and is usually not dependent on the location of the light source). While the subject of silhouette detection may be interesting by itself, for me its main purpose is as the first step in the implementation of a stencil shadow volume. This is a technique for rendering shadows which is particularly useful when dealing with point lights.

We will study that technique in the next tutorial (so you may think of this tutorial as "Stencil Shadow Volume - Part 1").

This is the groundwork for the stencil shadow volume technique that we will later use to handle shadows from point lights.

the following image demonstrates the silhouette that we want to detect:

[Image: an object lit from one side; the light rays graze it along its silhouette]
In the image above the silhouette is the ellipse which is touched by the light rays (I am still not entirely sure what this means).

Let us now move to a more traditional 3D language. A model is basically composed of triangles, so the silhouette must be created by triangle edges. How do we decide whether an edge is part of the silhouette or not? The trick is based on the diffuse lighting model. According to that model, the light strength is based on the dot product between the triangle normal and the light vector. If the triangle faces away from the light source, the result of this dot product operation will be less than or equal to zero.

In that case the light does not affect the triangle at all. In order to decide whether a triangle edge is part of the silhouette or not, we need to find the adjacent triangle that shares the same edge and calculate the dot product between the light direction and the normals of both the original triangle and its neighbor. An edge is considered a silhouette edge if one triangle faces the light but its neighbor does not.

In other words: take the dot product of the light direction with the normals of the two triangles that share an edge; if one triangle faces the light and the other faces away from it, that edge is part of the silhouette.
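To make that test concrete, here is a minimal host-side sketch of it in C++. It is not the tutorial's code (the tutorial performs the test in the geometry shader, as we will see below); the function name is hypothetical and GLM is used here only for the vector math.

#include <glm/glm.hpp>

// Hypothetical helper: returns true if the edge shared by two triangles is a
// silhouette edge, i.e. exactly one of the two triangles faces the light.
bool IsSilhouetteEdge(const glm::vec3& Normal1,      // normal of the first triangle
                      const glm::vec3& Normal2,      // normal of the adjacent triangle
                      const glm::vec3& PointOnEdge,  // any point on the shared edge (world space)
                      const glm::vec3& LightPos)     // position of the point light (world space)
{
    glm::vec3 LightDir = LightPos - PointOnEdge;     // vector from the edge towards the light
    bool FacesLight1 = glm::dot(Normal1, LightDir) > 0.0f;
    bool FacesLight2 = glm::dot(Normal2, LightDir) > 0.0f;
    return FacesLight1 != FacesLight2;               // silhouette: one faces the light, the other does not
}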

The following picture shows a 2D object for simplicity.

[Image: a 2D object hit by a light ray (red arrow); the normals of its edges are numbered 1-6 and the two silhouette points are marked with blue circles]

The red arrow represents the light ray that hits the three edges (in 3D these would be triangles) whose normals are 1, 2 and 3 (the dot product between these normals and the reverse of the light vector is obviously greater than zero; note that the light vector is reversed before taking the dot product). The edges whose normals are 4, 5 and 6 are facing away from the light (here the same dot product would be less than or equal to zero). The two blue circles mark the silhouette of the object, and the reason is that edge 1 faces the light but its neighbor edge 6 does not. The point between them is therefore part of the silhouette. The same goes for the other silhouette point.

Edges (or points, in this example) that face the light and whose neighbors also face the light are not part of the silhouette (the points between 1 and 2 and between 2 and 3).

As you can see, the algorithm for finding the silhouette is very simple. However, it does require us to have knowledge of the three neighbors of each triangle. This is known as the adjacencies of the triangles. Unfortunately, Assimp does not calculate the adjacencies for us automatically, so we need to implement such an algorithm ourselves. In the coding section we will review a simple algorithm that will satisfy our needs.

What is the best place in the pipeline for the silhouette algorithm itself? Remember that we need to do a dot product between the light vector and the triangle normal as well as the normals of the three adjacent triangles. This requires us to have access to the entire primitive information, so the VS is not enough. The GS looks more appropriate since it allows access to all the vertices of a primitive. But what about the adjacencies? Can we compute this in the vertex shader? No, because we need whole triangles. Can we compute it in the geometry shader? For a single triangle yes, but the GS knows nothing about the neighboring triangles, so on its own that is not enough either. Luckily for us, the designers of OpenGL have already given this much thought and created a topology type known as 'triangle list with adjacency'. If you provide a vertex buffer with adjacency information, OpenGL will load it correctly and provide the GS with six vertices per triangle instead of three. The additional three vertices belong to the adjacent triangles and are not shared with the current triangle. The following image should make this much clearer.

[Image: a triangle with adjacency: the triangle's three vertices in red, the three adjacent vertices in blue, and the edges labeled e1-e6]

The red vertices in the above picture belong to the original triangle and the blue ones are the adjacent vertices (ignore the edges e1-e6 for now; they are referenced later in the code section). When we supply a vertex buffer in the above format, the VS is executed for every vertex (adjacent and non-adjacent) and the GS (if it exists) is executed on a group of six vertices that includes the triangle and its adjacent vertices. When the GS is present it is up to the developer to supply an output topology, but if there is no GS the rasterizer knows how to deal with such a scheme and rasterizes only the actual triangles, ignoring the adjacent vertices.
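To make the layout concrete, this is how one six-vertex group is ordered in the index buffer for GL_TRIANGLES_ADJACENCY (a sketch; the index values are hypothetical, only the positions within the group matter):

// Hypothetical indices, just for illustration.
const unsigned int v0 = 10, v1 = 11, v2 = 12;     // the triangle's own vertices
const unsigned int a01 = 20, a12 = 21, a20 = 22;  // neighbors opposite edges v0-v1, v1-v2, v2-v0

// One group of six indices as GL_TRIANGLES_ADJACENCY expects it:
// even slots hold the triangle itself, odd slots hold the adjacent vertices.
// In the GS this is exactly the order of gl_in[0..5] / WorldPos0[0..5].
const unsigned int OneTriangleWithAdjacency[6] = { v0, a01, v1, a12, v2, a20 };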

One of the readers informed me that this setup produced an error on his MacBook with an Intel HD 3000, so if you run into a similar problem simply use a pass-through GS, or change the topology type.

Note that the adjacent vertices in the vertex buffer have the same format and attributes as regular vertices. What makes them adjacent is simply their relative location within each group of six vertices. In the case of a model whose triangles are continuous, the same vertices will sometimes be regular and sometimes adjacent, depending on the current triangle.

This makes indexed draws even more attractive due to the saving of space in the vertex buffer.

How do we find the adjacencies?

void Mesh::FindAdjacencies(const aiMesh* paiMesh, vector<unsigned int>& Indices)
{
	
}

Two parameters are passed in: the mesh, and the index array that will be filled with the adjacency-aware indices.
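The function relies on a few helper structures (Edge, Neighbors, Face) and on the maps m_posMap and m_indexMap. Their actual definitions are part of the tutorial's mesh code; the sketch below is only an assumption about what they roughly look like, written to match the way they are used in the code that follows (it also assumes a closed mesh, where every edge has exactly two neighboring faces).

#include <cassert>

typedef unsigned int uint;

struct Edge
{
    // Store the endpoint indices sorted (a < b) so that the same edge produces
    // the same key no matter which of the two triangles it is built from.
    Edge(uint _a, uint _b)
    {
        assert(_a != _b);
        if (_a < _b) { a = _a; b = _b; }
        else         { a = _b; b = _a; }
    }

    // Ordering so that Edge can serve as a std::map key (for m_indexMap).
    bool operator<(const Edge& r) const { return (a < r.a) || (a == r.a && b < r.b); }

    uint a;
    uint b;
};

struct Neighbors
{
    uint n1, n2;                              // the two faces sharing the edge

    Neighbors() : n1((uint)-1), n2((uint)-1) {}

    void AddNeighbor(uint n)                  // register a face as using this edge
    {
        if (n1 == (uint)-1)      n1 = n;
        else if (n2 == (uint)-1) n2 = n;
        else                     assert(0);   // an edge is shared by at most two faces
    }

    uint GetOther(uint me) const              // given one of the two faces, return the other
    {
        return (n1 == me) ? n2 : n1;
    }
};

struct Face
{
    uint Indices[3];

    // Return the vertex of this face that is not one of the edge's endpoints.
    uint GetOppositeIndex(const Edge& e) const
    {
        for (uint i = 0; i < 3; i++) {
            if (Indices[i] != e.a && Indices[i] != e.b) {
                return Indices[i];
            }
        }
        assert(0);
        return 0;
    }
};

// Members of Mesh (assumed):
//   std::map<aiVector3D, uint, CompareVectors> m_posMap;      // position -> first index
//   std::map<Edge, Neighbors>                  m_indexMap;    // edge -> the faces sharing it
//   std::vector<Face>                          m_uniqueFaces; // faces with de-duplicated indices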

for(uint i = 0; i < paiMesh -> mNumFaces; i++)
{}

mNumFaces:
the number of primitives (triangles, polygons, lines) in this mesh.

const aiFace& face = paiMesh->mFaces[i];   // get one face
Face Unique;                               // temporary face with de-duplicated indices
for (uint j = 0; j < 3; j++)
{
	uint Index = face.mIndices[j];             // index of one vertex of the face
	aiVector3D& v = paiMesh->mVertices[Index]; // the position that index refers to

	if (m_posMap.find(v) == m_posMap.end()) {  // not seen before: this is a new position
		m_posMap[v] = Index;
	}
	else
	{
		Index = m_posMap[v];                   // seen before: reuse the index of the first occurrence
	}
	Unique.Indices[j] = Index;
}

The purpose of the if is: if a position vector is duplicated in the VB, we fetch the index of its first occurrence.

m_uniqueFaces.push_back(Unique); // store the de-duplicated face in the m_uniqueFaces collection

Next, handle the three edges of the face:

Edge e1(Unique.Indices[0], Unique.Indices[1]); // edge 1
Edge e2(Unique.Indices[1], Unique.Indices[2]); // edge 2
Edge e3(Unique.Indices[2], Unique.Indices[0]); // edge 3

All three edges are used by face i, and an edge is used by at most two faces:
m_indexMap[e1].AddNeighbor(i);
m_indexMap[e2].AddNeighbor(i);
m_indexMap[e3].AddNeighbor(i);

The code above implements step 1: find the two triangles that share each edge.

Most of the adjacency logic is contained in the above function and a few helper structures. The algorithm finds the triangles that share each edge and is composed of two stages.

In the first stage we create a map between each edge and the two triangles that share it. This happens in the above for loop.

In the first half of this loop we generate a map between each vertex position and the first index that refers to it. The reason why different indices may point to vertices that share the same position is that other attributes sometimes force Assimp to split a single vertex into two; the first half of the loop undoes this duplication by always using one index per position.

For example, the same vertex may have different texture attributes for the two neighboring triangles that share it; this is one way a single vertex ends up split in two.

This creates a problem for our adjacency algorithm, and we prefer to have each vertex appear only once. Therefore, we create this mapping between a position and the first index that refers to it (key = position, value = index) and use only that index from now on, which achieves the de-duplication.
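One detail worth noting: to use aiVector3D as the key of m_posMap, the map needs a strict ordering on positions, so the mesh class has to supply a comparator for it. The sketch below is an assumption of what such a comparator might look like, not the exact code from the tutorial.

#include <map>
#include <assimp/vector3.h>   // aiVector3D

struct CompareVectors
{
    bool operator()(const aiVector3D& a, const aiVector3D& b) const
    {
        // Lexicographic ordering on (x, y, z); identical positions compare equal,
        // which is what lets m_posMap collapse duplicated positions to one index.
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

// std::map<aiVector3D, unsigned int, CompareVectors> m_posMap;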

The second stage:

for (uint i = 0; i < paiMesh->mNumFaces; i++)
{
	const Face& face = m_uniqueFaces[i];
	for (uint j = 0; j < 3; j++)
	{}
}

We iterate over all the faces and process each one separately.

Edge e(face.Indices[j], face.Indices[(j + 1) % 3]); // one edge of the current face
Neighbors n = m_indexMap[e];                        // the two triangles that share this edge
uint OtherTri = n.GetOther(i);                      // index of the neighboring triangle
const Face& OtherFace = m_uniqueFaces[OtherTri];    // the neighboring triangle itself
uint OppositeIndex = OtherFace.GetOppositeIndex(e); // the vertex of the neighbor opposite this edge
Indices.push_back(face.Indices[j]);                 // push the current triangle's vertex
Indices.push_back(OppositeIndex);                   // push the adjacent (opposite) vertex

The explanation is as follows:
[Image: the six vertices 0-5 of a triangle with adjacency; 0, 2 and 4 form the current triangle and 1, 3 and 5 belong to its neighbors]
For example, when processing edge e1 we find the adjacent (blue) triangle; the vertex opposite the edge formed by vertices 0 and 2 is vertex 1. In the end all six vertices 0/1/2/3/4/5 are stored, consecutively, in the array.

In the second stage we populate the index vector with sets of six vertices each that match the topology of the triangle list with adjacency that we saw earlier. The map that we created in the first stage helps us here, because for each edge in the triangle it is very easy to find the neighboring triangle that shares it, and then the vertex in that triangle which is opposite to this edge.

The last two lines in the loop alternate the content of the index buffer between vertices from the current triangle and vertices from the adjacent triangles that are opposite to the edges of the current triangle.

There are a few additional minor changes to the mesh class. I suggest you compare it to the version from the previous tutorial to make sure you capture all the differences. One of the notable changes is that we use GL_TRIANGLES_ADJACENCY instead of GL_TRIANGLES as the topology when calling glDrawElementsBaseVertex(). If you forget that, the GL will feed incorrectly sized primitives into the GS.
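For reference, the draw call ends up looking roughly like the sketch below. The per-entry fields (NumIndices, BaseIndex, BaseVertex) are assumptions based on the mesh class from the earlier tutorials; note that NumIndices now counts six indices per triangle.

#include <GL/glew.h>

// A sketch of the draw call inside Mesh::Render() after the change.
static void DrawEntryWithAdjacency(GLsizei NumIndices, GLuint BaseIndex, GLint BaseVertex)
{
    glDrawElementsBaseVertex(GL_TRIANGLES_ADJACENCY,      // was GL_TRIANGLES before
                             NumIndices,                  // 6 indices per triangle now
                             GL_UNSIGNED_INT,
                             (void*)(sizeof(unsigned int) * BaseIndex),
                             BaseVertex);
}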

(silhouette.vs)

#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

out vec3 WorldPos0;

uniform mat4 gWVP;
uniform mat4 gWorld;

void main()
{
	vec4 PosL = vec4(Position, 1.0);
	gl_Position = gWVP * PosL;
	WorldPos0 = (gWorld * PosL).xyz;
}

In today's demo we are going to detect the silhouette of an object and mark it with a thick red line. The object itself will be drawn using our standard forward rendering lighting shader, and the silhouette will be drawn using a dedicated shader. The code above belongs to the VS of that shader. There is nothing special about it. We just need to transform the position into clip space using the WVP matrix and provide the GS with the vertices in world space (since the silhouette algorithm takes place in world space).

(silhouette.gs)

#version 330

layout (triangles_adjacency) in;
layout (line_strip, max_vertices = 6) out;

in vec3 WorldPos0[];

void EmitLine(int StartIndex, int EndIndex)
{
	gl_Position = gl_in[StartIndex].gl_Position;
	EmitVertex();

	gl_Position = gl_in[EndIndex].gl_Position;
	EmitVertex();

	EndPrimitive();
}

uniform vec3 gLightPos;

void main()
{
    vec3 e1 = WorldPos0[2] - WorldPos0[0];
    vec3 e2 = WorldPos0[4] - WorldPos0[0];
    vec3 e3 = WorldPos0[1] - WorldPos0[0];
    vec3 e4 = WorldPos0[3] - WorldPos0[2];
    vec3 e5 = WorldPos0[4] - WorldPos0[2];
    vec3 e6 = WorldPos0[5] - WorldPos0[0];

    vec3 Normal = cross(e1, e2);
    vec3 LightDir = gLightPos - WorldPos0[0];

    if (dot(Normal, LightDir) > 0.00001) {

        Normal = cross(e3, e1);

        if (dot(Normal, LightDir) <= 0) {
            EmitLine(0, 2);
        }

        Normal = cross(e4, e5);
        LightDir = gLightPos - WorldPos0[2];

        if (dot(Normal, LightDir) <= 0) {
            EmitLine(2, 4);
        }

        Normal = cross(e2, e6);
        LightDir = gLightPos - WorldPos0[4];

        if (dot(Normal, LightDir) <= 0) {
            EmitLine(4, 0);
        }
    }
}

All the silhouette logic is contained within the GS. When using the triangle list with adjacency topology, the GS receives an array of six vertices. We start by calculating a few selected edges that will help us calculate the normal of the current triangle as well as those of the three adjacent triangles. Use the picture above to understand how to map e1-e6 to actual edges.

Then we check whether the triangle faces the light by calculating the dot product between its normal and the light direction (with the light vector going towards the light). If the result of the dot product is positive, the answer is yes (we use a small epsilon because of floating point inaccuracies). If the triangle does not face the light, then this is the end of the road for it; but if it is light-facing, we do the same dot product operation between the light vector and the normal of each of the three adjacent triangles.

If we hit an adjacent triangle that does not face the light, we call the EmitLine() function, which (unsurprisingly) emits the shared edge between the triangle (which faces the light) and its neighbor (which does not). The FS simply draws that edge in red.

(silhouette.fs)

#version 330

out vec4 FragColor;

void main()
{      
    FragColor = vec4(1.0, 0.0, 0.0, 0.0);
}

void RenderScene()
{
    // Render the object as-is
    m_LightingTech.Enable();

    Pipeline p;
    p.SetPerspectiveProj(m_persProjInfo);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp()); 
    p.WorldPos(m_boxPos);
    m_LightingTech.SetWorldMatrix(p.GetWorldTrans()); 
    m_LightingTech.SetWVP(p.GetWVPTrans()); 
    m_mesh.Render();

    // Render the object's silhouette
    m_silhouetteTech.Enable();

    m_silhouetteTech.SetWorldMatrix(p.GetWorldTrans()); 
    m_silhouetteTech.SetWVP(p.GetWVPTrans()); 
    m_silhouetteTech.SetLightPos(Vector3f(0.0f, 10.0f, 0.0f));

    glLineWidth(5.0f);

    m_mesh.Render(); 
}

This is how we use the silhouette technique. The same object is rendered twice: first with the standard lighting shader, then with the silhouette shader.

Note how the function glLineWidth() is used to make the silhouette thicker and thus more noticeable.

If you use the code above as-is to create the demo, you might notice a minor corruption around the silhouette lines. The reason is that the second render pass generates a line with roughly the same depth as the original mesh edge. This causes a phenomenon known as Z-fighting, as pixels from the silhouette and the original mesh cover each other in an inconsistent way (again, due to floating point inaccuracies).

To fix this we call glDepthFunc(GL_LEQUAL), which relaxes the depth test a bit. It means that if a second pixel is rendered on top of a previous pixel with the same depth, the new pixel always takes precedence. Using a less-than-or-equal comparison avoids the Z-fighting: when a silhouette pixel arrives with the same depth as the mesh pixel already in the buffer, it passes the depth test and is drawn on top.
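A minimal sketch of that change (exactly where the tutorial sets it, e.g. during GL state initialization, is an assumption):

// Relax the depth test so fragments with a depth equal to the stored value pass.
// The default comparison is GL_LESS; GL_LEQUAL lets the silhouette line, which has
// roughly the same depth as the mesh edge underneath it, win consistently.
glDepthFunc(GL_LEQUAL);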

[Image: the demo's output, the object rendered with its silhouette marked by a thick red line]
