OpenGL 4 Shading Language Cookbook, Chapter 6: Using Geometry and Tessellation Shaders

Tessellation and geometry shaders are relatively new additions to the OpenGL pipeline, and provide programmers with additional ways to modify geometry as it progresses through the shader pipeline. Geometry shaders can be used to add, modify, or delete geometry, and
tessellation shaders can be configured to automatically generate geometry at various levels of detail and to facilitate interpolation based on arbitrary input (patches). In this chapter, we’ll look at several examples of geometry and tessellation shaders in various
contexts. However, before we get into the recipes, let’s investigate how all of this fits together.

The shader pipeline extended

The following diagram shows a simplified view of the shader pipeline when the shader program includes geometry and tessellation shaders:

[Figure: the extended shader pipeline, with the tessellation and geometry stages between the vertex and fragment shaders]
The tessellation portion of the shader pipeline includes two stages: the tessellation control shader (TCS), and the tessellation evaluation shader (TES). The geometry shader follows the tessellation stages and precedes the fragment shader. The tessellation shader and geometry shader are optional; however, when a shader program includes a tessellation or geometry shader, a vertex shader must be included.

Other than the preceding requirement, all shaders are optional. However, when a shader program does not include a vertex or
fragment shader, the results are undefined. When using a geometry shader, there is no requirement that you also include a tessellation
shader and vice versa. It is rare to have a shader program that does not include at least a fragment shader and a vertex shader.

The geometry shader

The geometry shader (GS) is designed to execute once for each primitive. It has access to all of the vertices of the primitive, as well as the values of any input variables associated with each vertex. In other words, if a previous stage (such as the vertex shader) provides an output variable, the geometry shader has access to the value of that variable for all vertices in the primitive. As a result, the input variables within the geometry shader are always arrays.

The geometry shader can output zero, one, or more primitives. Those primitives need not be of the same kind that were received by the geometry shader; however, the GS can only output a single primitive type, and it cannot mix several output types within one shader. For example, a GS could receive a triangle and output several line segments as a line strip. Or a GS could receive a triangle and output zero or many triangles as a triangle strip.

This enables the GS to act in many different ways. A GS could be responsible for culling (removing) geometry based on some criteria, such as visibility based on occlusions. It could generate additional geometry to augment the shape of the object being rendered. The GS
could simply compute additional information about the primitive and pass the primitive along unchanged. Or the GS could produce primitives that are entirely different from the input geometry.

The functionality of the GS is centered around the two built-in functions, EmitVertex and EndPrimitive. These two functions allow the GS to send multiple vertices and primitives down the pipeline. The GS defines the output variables for a particular vertex, and then calls EmitVertex. After that, the GS can proceed to re-define the output variables for the next vertex, call EmitVertex again, and so on. After emitting all of the vertices for the primitive, the GS can call EndPrimitive to let the OpenGL system know that all the vertices of the
primitive have been emitted. The EndPrimitive function is implicitly called when the GS finishes execution. If a GS does not call EmitVertex at all, then the input primitive is effectively dropped (it is not rendered).

In the following recipes, we’ll examine a few examples of the geometry shader. In the Point sprites with the geometry shader recipe, we’ll see an example where the input primitive type is entirely different than the output type. In the Drawing a wireframe on top of a shaded
mesh recipe, we’ll pass the geometry along unchanged, but also produce some additional information about the primitive to help in drawing wireframe lines. In the Drawing silhouette lines using the geometry shader recipe, we’ll see an example where the GS passes along the input primitive, but generates additional primitives as well.

The tessellation shaders

When the tessellation shaders are active, we can only render one kind of primitive: the patch (GL_PATCHES). Rendering any other kind of primitive (such as triangles, or lines) while a tessellation shader is active is an error. The patch primitive is an arbitrary “chunk” of geometry (or any information) that is completely defined by the programmer. It has no geometrical interpretation beyond how it is interpreted within the TCS and TES. The number of vertices within the patch primitive is also configurable. The maximum number of vertices per patch is implementation dependent, and can be queried via the following command:

glGetIntegerv(GL_MAX_PATCH_VERTICES, &maxVerts);

We can define the number of vertices per patch with the following function:

glPatchParameteri( GL_PATCH_VERTICES, numPatchVerts );

A very common application of this is when the patch primitive consists of a set of control points that define an interpolated surface or curve (such as a Bézier curve or surface).
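As a minimal sketch of the application side (the VAO name patchVAO and the choice of 16 control points per patch are hypothetical placeholders for whatever your own setup provides):

GLint maxVerts;
glGetIntegerv(GL_MAX_PATCH_VERTICES, &maxVerts);   // implementation-dependent upper limit

glPatchParameteri(GL_PATCH_VERTICES, 16);          // each patch consumes 16 control points
glBindVertexArray(patchVAO);                       // VAO holding the control-point positions
glDrawArrays(GL_PATCHES, 0, 16);                   // draw one patch; tessellation shaders must be active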

Point sprites with the geometry shader

Point sprites are simple quads (usually texture mapped) that are aligned such that they are always facing the camera. They are very useful for particle systems in 3D (refer to Chapter 9, Particle Systems and Animation) or 2D games. The point sprites are specified by the OpenGL application as single point primitives, via the GL_POINTS rendering mode. This simplifies the process, because the quad itself and the texture coordinates for the quad are determined automatically. The OpenGL side of the application can effectively treat them as point primitives, avoiding the need to compute the positions of the quad vertices.

The following screenshot shows a group of point sprites. Each sprite is rendered as a point primitive. The quad and texture coordinates are generated automatically (within the geometry shader) and aligned to face the camera.
[Figure: a group of point sprites, each rendered from a single point primitive]

OpenGL already has built-in support for point sprites in the GL_POINTS rendering mode. When rendering point primitives using this mode, the points are rendered as screen-space squares that have a diameter (side length) as defined by the glPointSize function. In
addition, OpenGL will automatically generate texture coordinates for the fragments of the square. These coordinates run from zero to one in each direction (left-to-right for s, bottom-to-top for t), and are accessible in the fragment shader via the gl_PointCoord built-in variable.

There are various ways to fine-tune the rendering of point sprites within OpenGL. One can define the origin of the automatically generated texture coordinates using the glPointParameter functions. The same set of functions can also be used to tweak the way that OpenGL defines the alpha value for points when multisampling is enabled.
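For example, a short sketch of this built-in path (not the geometry shader technique of this recipe) might look like this:

glPointSize(32.0f);   // screen-space point size, in pixels
// Make the auto-generated gl_PointCoord t coordinate run bottom-to-top (the default origin is GL_UPPER_LEFT):
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);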
The built-in support for point sprites does not allow the programmer to rotate the screen-space squares, or define them as different shapes such as rectangles or triangles. However, one can achieve similar effects with creative use of textures and transformations of the texture
coordinates. For example, we could transform the texture coordinates using a rotation matrix to create the look of a rotating object even though the geometry itself is not actually rotating. In addition, the size of the point sprite is a screen-space size. In other words, the point size must be adjusted with the depth of the point sprite if we want to get a perspective effect (sprites get smaller with distance).

If these (and possibly other) issues make the default support for point sprites too limiting, we can use the geometry shader to generate our point sprites. In fact, this technique is a good example of using the geometry shader to generate different kinds of primitives than it receives. The basic idea here is that the geometry shader will receive point primitives (in camera coordinates) and will output a quad centered at the point and aligned so that it is facing the camera. The geometry shader will also automatically generate texture
coordinates for the quad.

If desired, we could generate other shapes such as hexagons, or we could rotate the quads
before they are output from the geometry shader. The possibilities are endless. Implementing
the primitive generation within the geometry shader gives us a great deal of flexibility, but
possibly at the cost of some efficiency. The default OpenGL support for point sprites is highly
optimized and is likely to be faster in general.

Before jumping directly into the code, let’s take a look at some of the mathematics. In the geometry shader, we’ll need to generate the vertices of a quad that is centered at a point and aligned with the camera’s coordinate system (eye coordinates). Given the point location
P in camera coordinates, we can generate the vertices of the corners of the quad by simply translating P in a plane parallel to the x-y plane of the camera’s coordinate system as shown in the following figure:

[Figure: the quad corners are generated by offsetting the point P within the camera's x-y plane]

The geometry shader will receive the point location in camera coordinates, and output the quad as a triangle strip with texture coordinates. The fragment shader will then just apply the texture to the quad.
For this example, we’ll need to render a number of point primitives. The positions can be sent via attribute location 0. There’s no need to provide normal vectors or texture coordinates for this one.
The following uniform variables are defined within the shaders, and need to be set within the OpenGL program:

Size2: This should be half the width of the sprite’s square
SpriteTex: This is the texture unit containing the point sprite texture

As usual, uniforms for the standard transformation matrices are also defined within the shaders, and need to be set within the OpenGL program.
To create a shader program that can be used to render point primitives as quads, use the
following steps:

  1. Use the following code for the vertex shader:
layout (location = 0) in vec3 VertexPosition;
uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
void main()
{
	gl_Position = ModelViewMatrix * vec4(VertexPosition,1.0);
}
  2. Use the following code for the geometry shader:
layout( points ) in;
layout( triangle_strip, max_vertices = 4 ) out;
uniform float Size2; // Half the width of the quad

uniform mat4 ProjectionMatrix;
out vec2 TexCoord;
void main()
{
	mat4 m = ProjectionMatrix; // Reassign for brevity
	gl_Position = m * (vec4(-Size2,-Size2,0.0,0.0) + gl_in[0].gl_Position);
	TexCoord = vec2(0.0,0.0);
	EmitVertex();
	gl_Position = m * (vec4(Size2,-Size2,0.0,0.0) +	gl_in[0].gl_Position);
	TexCoord = vec2(1.0,0.0);
	EmitVertex();
	gl_Position = m * (vec4(-Size2,Size2,0.0,0.0) + gl_in[0].gl_Position);
	TexCoord = vec2(0.0,1.0);
	EmitVertex();
	gl_Position = m * (vec4(Size2,Size2,0.0,0.0) + gl_in[0].gl_Position);
	TexCoord = vec2(1.0,1.0);
	EmitVertex();
	EndPrimitive();
}
  3. Use the following code for the fragment shader:
	in vec2 TexCoord; // From the geometry shader
	uniform sampler2D SpriteTex;
	layout( location = 0 ) out vec4 FragColor;
	void main()
	{
		FragColor = texture(SpriteTex, TexCoord);
	}
  4. Within the OpenGL render function, render a set of point primitives, as sketched below.
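A minimal sketch of that last step might look like the following (the names spriteProg, spriteVAO, spriteTexture, and numSprites are hypothetical placeholders for whatever your scene setup provides; the transformation matrix uniforms are assumed to be set elsewhere):

glUseProgram(spriteProg);                                       // program built from the three shaders above

glActiveTexture(GL_TEXTURE0);                                   // bind the sprite texture to unit 0
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glUniform1i(glGetUniformLocation(spriteProg, "SpriteTex"), 0);  // sampler reads from texture unit 0
glUniform1f(glGetUniformLocation(spriteProg, "Size2"), 0.15f);  // half-width of each quad, in eye-space units

glBindVertexArray(spriteVAO);                                   // point positions bound to attribute 0
glDrawArrays(GL_POINTS, 0, numSprites);                         // the geometry shader runs once per point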

The vertex shader is almost as simple as it can get. It converts the point’s position to camera coordinates by multiplying by the model-view matrix, and assigns the result to the built-in output variable gl_Position. In the geometry shader, we start by defining the kind of primitive that this geometry shader expects to receive. The first layout statement indicates that this geometry shader will receive point primitives.

layout( points ) in;

The next layout statement indicates the kind of primitives produced by this geometry shader, and the maximum number of vertices that will be output.

layout( triangle_strip, max_vertices = 4 ) out;

In this case, we want to produce a single quad for each point received, so we indicate that the output will be a triangle strip with a maximum of four vertices.

The input primitive is available to the geometry shader via the built-in input variable gl_in. Note that it is an array of structures. You might be wondering why this is an array since a point primitive is only defined by a single position. Well, in general the geometry shader can receive triangles, lines, or points (and possibly adjacency information). So, the number of values available may be more than one. If the input were triangles, the geometry shader would have access to three input values (associated with each vertex). In fact, it could have access to as many as six values when triangles_adjacency is used (more on that in a later recipe).

The gl_in variable is an array of structs. Each struct contains the following fields: gl_Position, gl_PointSize, and
gl_ClipDistance[]. In this example, we are only interested in gl_Position. However, the others can be set in the vertex
shader to provide additional information to the geometry shader.

Within the main function of the geometry shader, we produce the quad (as a triangle strip) in the following way. For each vertex of the triangle strip we execute the following steps:

  1. Compute the attributes for the vertex (in this case the position and texture coordinate), and assign their values to the appropriate output variables (gl_Position and TexCoord). Note that the position is also transformed by the projection matrix. We do this because the variable gl_Position must be provided in clip coordinates to later stages of the pipeline.
  2. Emit the vertex (send it down the pipeline) by calling the built-in function EmitVertex(). Once we have emitted all vertices for the output primitive, we call EndPrimitive() to finalize the primitive and send it along.

The fragment shader is also very simple. It just applies the texture to the fragment using the (interpolated) texture coordinate provided by the geometry shader.

This example is fairly straightforward and is intended as a gentle introduction to geometry shaders. We could expand on this by allowing the quad to rotate or to be oriented in different directions. We could also use the texture to discard fragments (in the fragment shader) in
order to create point sprites of arbitrary shapes. The power of the geometry shader opens up plenty of possibilities!

Drawing a wireframe on top of a shaded mesh

The preceding recipe demonstrated the use of a geometry shader to produce a different variety of primitive than it received. Geometry shaders can also be used to provide additional information to later stages. They are quite well suited to do so because they have access to all of the vertices of the primitive at once, and can do computations based on the entire primitive
rather than a single vertex. This example involves a geometry shader that does not modify the triangle at all. It essentially passes the primitive along unchanged. However, it computes additional information about the triangle that will be used by the fragment shader to highlight the edges of the polygon. The basic idea here is to draw the edges of each polygon directly on top of the shaded mesh.

The following figure shows an example of this technique. The mesh edges are drawn on top of the shaded surface by using information computed within the geometry shader.

[Figure: mesh edges drawn on top of the shaded surface]

To render the wireframe on top of the shaded mesh, we’ll compute the distance from each fragment to the nearest triangle edge. When the fragment is within a certain distance from the edge, it will be shaded and mixed with the edge color. Otherwise, the fragment will be
shaded normally. To compute the distance from a fragment to the edge, we use the following technique. In the geometry shader, we compute the minimum distance from each vertex to the opposite edge (also called the triangle altitude). In the following figure, the desired distances are ha, hb, and hc.

To draw the mesh lines, we need to know, for each fragment, its distance to each of the three edges and take the smallest of them. For example, in the figure below, D is the point where the three perpendiculars meet; its distances to AB, AC, and BC are DG, DE, and DF respectively, and DG is clearly the smallest:
[Figure: distances DG, DE, and DF from the point D to the three edges]
Every point whose distance to the thick black edge falls within the band between the two thin lines is painted with the line color, and in this way that edge gets drawn. The core of the algorithm is to compute the perpendicular height (altitude) at each vertex; after rasterization and interpolation, every fragment then knows its distance to each of the three edges. We take the minimum of those distances, and if that minimum is within the line width, the fragment is painted with the line color; otherwise it is shaded normally. This is how the wireframe appears.
[Figure: fragments within the line width of an edge are painted with the line color]

We can compute these altitudes using the interior angles of the triangle, which can be determined using the law of cosines. For example, to find ha, we use the interior angle β, the angle opposite side b.

[Figure: computing the altitude ha via the law of cosines]

The other altitudes can be computed in a similar way. (Note that β could be greater than 90 degrees, in which case, we would want the sine of 180-β. However, the sine of 180-β is the same as the sine of β.)
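Written out explicitly (using the side naming from the geometry shader code below, where side a lies opposite vertex A, and so on), the altitude from vertex A is:

$$\beta = \arccos\!\left(\frac{a^{2} + c^{2} - b^{2}}{2ac}\right), \qquad h_a = \left|\, c \sin\beta \,\right|$$

with hb = |c sin α| and hc = |b sin α| following in the same way from the interior angle α at vertex A.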

Once we have computed these triangle altitudes, we can create an output vector (an “edge-distance” vector) within the geometry shader for interpolation across the triangle. The components of this vector represent the distances from the fragment to each edge of the triangle. The x component represents the distance from edge a, the y component is the distance from edge b, and the z component is the distance from edge c. If we assign the correct values to these components at the vertices, the hardware will automatically interpolate them for us to provide the appropriate distances at each fragment. At vertex A, the value of this vector should be (ha, 0, 0), because vertex A is at a distance of ha from edge a and lies directly on edges b and c. Similarly, the value for vertex B is (0, hb, 0) and for vertex C is (0, 0, hc). When these three values are interpolated across the triangle, we get the distance from the fragment to each of the three edges.

We will calculate all of this in screen space. That is, we will transform the vertices to screen space within the geometry shader before computing the altitudes. Since we are working in screen space, there is no need (and it would be incorrect) to interpolate the values in a perspective-correct manner, so we need to tell the hardware to interpolate them linearly.

Within the fragment shader, all we need to do is find the minimum of the three distances, and if that distance is less than the line width, we mix the fragment color with the line color. However, we would also like to apply a bit of anti-aliasing while we are at it. To do so, we will fade the edge of the line using the GLSL smoothstep function.

We will scale the intensity of the line in a two-pixel range around the edge of the line. Pixels that are one pixel or more inside the true edge of the line get 100 percent of the line color, and pixels that are one pixel or more outside the edge of the line get zero percent of the line color.

In between, we will use the smoothstep function to create a smooth transition. Of course, the edge of the line itself is at a configurable distance (we will call it Line.Width) from the edge of the polygon.
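For reference, GLSL's smoothstep performs a clamped Hermite interpolation between its two edge values:

$$\mathrm{smoothstep}(e_0, e_1, x) = t^{2}(3 - 2t), \qquad t = \operatorname{clamp}\!\left(\frac{x - e_0}{e_1 - e_0},\ 0,\ 1\right)$$

With the edges placed at Line.Width - 1 and Line.Width + 1, this gives exactly the two-pixel fade described above.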

The typical setup is needed for this example. The vertex position and normal should be provided in attributes zero and one respectively, and you need to provide the appropriate parameters for your shading model. As usual, the standard matrices are defined as uniform variables and should be set within the OpenGL application. However, note that this time we also need the viewport matrix (uniform variable ViewportMatrix) in order to transform into screen space.
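One way to build the viewport matrix on the application side is sketched below (assuming GLM for the matrix type, a viewport whose origin is at (0, 0), and a hypothetical programHandle for the shader program; width and height should match your glViewport call):

// Maps NDC x and y in [-1, 1] to window coordinates; z is left unchanged,
// since only the screen-space x and y are used for the edge distances.
float w2 = width  / 2.0f;
float h2 = height / 2.0f;
glm::mat4 viewport(
    glm::vec4(w2,   0.0f, 0.0f, 0.0f),
    glm::vec4(0.0f, h2,   0.0f, 0.0f),
    glm::vec4(0.0f, 0.0f, 1.0f, 0.0f),
    glm::vec4(w2,   h2,   0.0f, 1.0f));

glUniformMatrix4fv(glGetUniformLocation(programHandle, "ViewportMatrix"),
                   1, GL_FALSE, &viewport[0][0]);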

There are a few uniforms related to the mesh lines that need to be set:

Line.Width: This should be half the width of the mesh lines
Line.Color: This is the color of the mesh lines

To create a shader program that utilizes the geometry shader to produce a wireframe on top of a shaded surface, use the following steps:

  1. Use the following code for the vertex shader:
layout (location = 0 ) in vec3 VertexPosition;
layout (location = 1 ) in vec3 VertexNormal;
out vec3 VNormal;
out vec3 VPosition;
uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 MVP;
void main()
{
	VNormal = normalize( NormalMatrix * VertexNormal);
	VPosition = vec3(ModelViewMatrix * vec4(VertexPosition,1.0));
	gl_Position = MVP * vec4(VertexPosition,1.0);
}
  2. Use the following code for the geometry shader:
layout( triangles ) in;
layout( triangle_strip, max_vertices = 3 ) out;
out vec3 GNormal;
out vec3 GPosition;
noperspective out vec3 GEdgeDistance;
in vec3 VNormal[];
in vec3 VPosition[];
uniform mat4 ViewportMatrix; // Viewport matrix
void main()
{
	// Transform each vertex into viewport space
	vec3 p0 = vec3(ViewportMatrix * (gl_in[0].gl_Position / gl_in[0].gl_Position.w));
	vec3 p1 = vec3(ViewportMatrix * (gl_in[1].gl_Position /	gl_in[1].gl_Position.w));
	vec3 p2 = vec3(ViewportMatrix * (gl_in[2].gl_Position /	gl_in[2].gl_Position.w));
	
	// Find the altitudes (ha, hb and hc)
	float a = length(p1 - p2);
	float b = length(p2 - p0);
	float c = length(p1 - p0);
	float alpha = acos( (b*b + c*c - a*a) / (2.0*b*c) );
	float beta = acos( (a*a + c*c - b*b) / (2.0*a*c) );
	float ha = abs( c * sin( beta ) );
	float hb = abs( c * sin( alpha ) );
	float hc = abs( b * sin( alpha ) );
	// Send the triangle along with the edge distances
	GEdgeDistance = vec3( ha, 0, 0 );
	GNormal = VNormal[0];
	GPosition = VPosition[0];
	gl_Position = gl_in[0].gl_Position;
	EmitVertex();
	
	GEdgeDistance = vec3( 0, hb, 0 );
	GNormal = VNormal[1];
	GPosition = VPosition[1];
	gl_Position = gl_in[1].gl_Position;
	EmitVertex();
	
	GEdgeDistance = vec3( 0, 0, hc );
	GNormal = VNormal[2];
	GPosition = VPosition[2];
	gl_Position = gl_in[2].gl_Position;
	EmitVertex();
	EndPrimitive();
}
  3. Use the following code for the fragment shader:

// *** Insert appropriate uniforms for the Phong model ***
// The mesh line settings
uniform struct LineInfo 
{
	float Width;
	vec4 Color;
} Line;
in vec3 GPosition;
in vec3 GNormal;
noperspective in vec3 GEdgeDistance;
layout( location = 0 ) out vec4 FragColor;
vec3 phongModel( vec3 pos, vec3 norm )
{
// *** Phong model evaluation code goes here ***
}
void main() 
{
	// The shaded surface color.
	vec4 color=vec4(phongModel(GPosition, GNormal), 1.0);
	// Find the smallest distance
	float d = min( GEdgeDistance.x, GEdgeDistance.y );
	d = min( d, GEdgeDistance.z );
	// Determine the mix factor with the line color
	float mixVal = smoothstep( Line.Width - 1, Line.Width + 1, d );
	// Mix the surface color with the line color
	FragColor = mix( Line.Color, color, mixVal );
}

The vertex shader is pretty simple. It passes the normal and position along to the geometry shader after converting them into camera coordinates. The built-in variable gl_Position gets the position in clip coordinates. We will use this value in the geometry shader to determine the screen-space coordinates.

In the geometry shader, we begin by defining the input and output primitive types for this shader.

layout( triangles ) in;
layout( triangle_strip, max_vertices = 3 ) out;

We do not actually change anything about the geometry of the triangle, so the input and output types are essentially the same. We will output exactly the same triangle that was received as input.

The output variables for the geometry shader are GNormal, GPosition, and GEdgeDistance. The first two are simply the values of the normal and position in camera coordinates, passed through unchanged. The third is the vector that will store the distance to each edge of the triangle (described previously). Note that it is defined with the noperspective qualifier.

noperspective out vec3 GEdgeDistance;

The noperspective qualifier indicates that the values are to be interpolated linearly, instead of the default perspective-correct interpolation. As mentioned previously, these distances are in screen space, so it would be incorrect to interpolate them in a non-linear fashion.

Within the main function, we start by transforming the position of each of the three vertices of the triangle from clip coordinates to screen-space coordinates by multiplying with the viewport matrix. (Note that it is also necessary to divide by the w coordinate, as the clip coordinates are homogeneous and need to be converted back to true Cartesian coordinates.)

Next, we compute the three altitudes ha, hb, and hc using the law of cosines, as described earlier.

Once we have the three altitudes, we set GEdgeDistance appropriately for the first vertex; pass along GNormal, GPosition, and gl_Position unchanged; and emit the first vertex by calling EmitVertex(). This finishes the vertex and emits the vertex position and all of the per-vertex output variables. We then proceed similarly for the other two vertices of the triangle, finishing the polygon by calling EndPrimitive().

In the fragment shader, we start by evaluating the basic shading model and storing the resulting color in the color variable. At this stage in the pipeline, the three components of the GEdgeDistance variable should contain the distance from this fragment to each of the three edges of the triangle. We are interested in the minimum distance, so we find the minimum of the three components and store it in the d variable. The smoothstep function is then used to determine how much to mix the line color with the shaded color (mixVal).

float mixVal = smoothstep( Line.Width - 1, Line.Width + 1, d );

If the distance is less than Line.Width - 1, then smoothstep will return a value of 0, and if it is greater than Line.Width + 1, it will return 1. For values of d between the two, we get a smooth transition. This gives us a value of 0 inside the line, a value of 1 outside the line, and a smooth variation between 0 and 1 in a two-pixel area around the line's edge. Therefore, we can use the result directly to mix the shaded color with the line color.

Finally, the fragment color is determined by mixing the shaded color with the line color using mixVal as the interpolation parameter.

This technique produces very nice-looking results and has relatively few drawbacks. It is a good example of how geometry shaders can be useful for tasks other than modification of the actual geometry. In this case, we used the geometry shader simply to compute additional information about the primitive as it was being sent down the pipeline.

This shader can be dropped in and applied to any mesh without any modification to the OpenGL side of the application. It can be useful when debugging mesh issues or when implementing a mesh modeling program.

Other common techniques for accomplishing this effect typically involve rendering the shaded object and the wireframe in two passes, with a polygon offset (via the glPolygonOffset function) applied to avoid the “z-fighting” that takes place between the wireframe and the shaded surface beneath it. This technique is not always effective, because the modified depth values might not always be correct, or as desired, and it can be difficult to find the “sweet spot” for the polygon offset value. For a good survey of techniques, refer to Section 11.4.2 of Real-Time Rendering, Third Edition, by T. Akenine-Möller, E. Haines, and N. Hoffman, A K Peters, 2008.
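For comparison, that traditional two-pass approach looks roughly like the following sketch (drawMesh() is a hypothetical stand-in for whatever draws your geometry, once with the shading program and once with a flat line-color program):

// Pass 1: shaded surface, pushed slightly back in depth to make room for the lines.
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);        // the right values are scene-dependent
drawMesh();
glDisable(GL_POLYGON_OFFSET_FILL);

// Pass 2: wireframe drawn on top of the shaded result.
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
drawMesh();
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);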
