Graphics Book: Real-Time Rendering 2.3 Geometry Processing Stage (Part 2) (revised from Google Translate)

This article covers the optional stages of the GPU graphics pipeline (tessellation, geometry shading, and stream output) and the clipping and screen mapping steps that follow. Tessellation adapts the number of triangles used for a curved surface, geometry shading is often used for particle generation, and stream output lets the GPU act as a geometry engine by writing data out. The clipping stage ensures that only primitives inside the view volume are processed further, and screen mapping converts coordinates into a form suitable for the screen. The article also contrasts how OpenGL and DirectX handle pixel coordinates.

2.3.2 Optional Vertex Processing

Every pipeline has the vertex processing just described. Once this processing is done, there are a few optional stages that can take place on the GPU, in this order: tessellation, geometry shading, and stream output. Their use depends both on the capabilities of the hardware—not all GPUs have them—and the desires of the programmer. They are independent of each other, and in general they are not commonly used. More will be said about each in Chapter 3.

The first optional stage is tessellation. Imagine you have a bouncing ball object. If you represent it with a single set of triangles, you can run into problems with quality or performance. Your ball may look good from 5 meters away, but up close the individual triangles, especially along the silhouette, become visible. If you make the ball with more triangles to improve quality, you may waste considerable processing time and memory when the ball is far away and covers only a few pixels on the screen. With tessellation, a curved surface can be generated with an appropriate number of triangles.

We have talked a bit about triangles, but up to this point in the pipeline we have just processed vertices. These could be used to represent points, lines, triangles, or other objects. Vertices can be used to describe a curved surface, such as a ball. Such surfaces can be specified by a set of patches, and each patch is made of a set of vertices. The tessellation stage consists of a series of stages itself—hull shader, tessellator, and domain shader—that converts these sets of patch vertices into (normally) larger sets of vertices that are then used to make new sets of triangles. The camera for the scene can be used to determine how many triangles are generated: many when the patch is close, few when it is far away.

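To make "many triangles when the patch is close, few when it is far away" concrete, here is a minimal sketch of how a tessellation factor could be chosen from camera distance. The function name, thresholds, and factor range are illustrative assumptions, not code from the book; a real implementation would compute something like this per patch in the hull shader.

```cpp
#include <algorithm>

// Hypothetical helper: choose a tessellation factor from the distance
// between the camera and a patch. All constants here are illustrative.
float tessellationFactor(float distanceToCamera)
{
    const float nearDist  = 5.0f;   // at or inside this: full detail
    const float farDist   = 100.0f; // at or beyond this: minimum detail
    const float maxFactor = 64.0f;  // a common hardware upper limit
    const float minFactor = 1.0f;

    // 0 when the patch is near, 1 when it is far.
    float t = std::clamp((distanceToCamera - nearDist) / (farDist - nearDist),
                         0.0f, 1.0f);

    // Many triangles when close, few when far away.
    return minFactor + (1.0f - t) * (maxFactor - minFactor);
}
```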

The next optional stage is the geometry shader. This shader predates the tessellation shader and so is more commonly found on GPUs. It is like the tessellation shader in that it takes in primitives of various sorts and can produce new vertices. It is a much simpler stage in that this creation is limited in scope and the types of output primitives are much more limited. Geometry shaders have several uses, with one of the most popular being particle generation. Imagine simulating a fireworks explosion. Each fireball could be represented by a point, a single vertex. The geometry shader can take each point and turn it into a square (made of two triangles) that faces the viewer and covers several pixels, so providing a more convincing primitive for us to shade.

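As a sketch of what the geometry shader does in the fireworks example, the following CPU-side code expands one point into the four corners of a viewer-facing square (two triangles, listed in triangle-strip order). The vector type and helper names are made up for illustration; on the GPU this would be a handful of emit-vertex calls in the shader instead.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Expand one particle position into a square that faces the viewer,
// built from the camera's right and up vectors. Corners are returned
// in triangle-strip order: two triangles covering the quad.
std::array<Vec3, 4> makeBillboard(Vec3 center, Vec3 camRight, Vec3 camUp,
                                  float halfSize)
{
    Vec3 r = scale(camRight, halfSize);
    Vec3 u = scale(camUp, halfSize);
    return {{
        sub(sub(center, r), u), // lower left
        sub(add(center, r), u), // lower right
        add(sub(center, r), u), // upper left
        add(add(center, r), u), // upper right
    }};
}
```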

The last optional stage is called stream output. This stage lets us use the GPU as a geometry engine. Instead of sending our processed vertices down the rest of the pipeline to be rendered to the screen, at this point we can optionally output these to an array for further processing. These data can be used by the CPU, or the GPU itself, in a later pass. This stage is typically used for particle simulations, such as our fireworks example.

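The typical usage pattern is a ping-pong between two buffers: one pass streams updated particles into a second buffer, which becomes the input of the next frame. The sketch below shows the idea on the CPU with plain vectors; in a real renderer the update would run on the GPU and the buffers would be vertex buffers. All names are illustrative.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Particle { float pos[3]; float vel[3]; };

// One simulation step: read particles from one buffer, write updated
// ones to the other, then swap so the output feeds the next frame.
void simulateFrame(std::vector<Particle>& read,
                   std::vector<Particle>& write, float dt)
{
    write.resize(read.size());
    for (std::size_t i = 0; i < read.size(); ++i) {
        Particle p = read[i];
        for (int k = 0; k < 3; ++k)
            p.pos[k] += p.vel[k] * dt; // integrate motion
        write[i] = p;                  // "stream out" the updated data
    }
    std::swap(read, write); // ping-pong the two buffers
}
```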

These three stages are performed in this order—tessellation, geometry shading, and stream output—and each is optional. Regardless of which (if any) options are used, if we continue down the pipeline we have a set of vertices with homogeneous coordinates that will be checked for whether the camera views them.

2.3.3 Clipping

Only the primitives wholly or partially inside the view volume need to be passed on to the rasterization stage (and the subsequent pixel processing stage), which then draws them on the screen. A primitive that lies fully inside the view volume will be passed on to the next stage as is. Primitives entirely outside the view volume are not passed on further, since they are not rendered. It is the primitives that are partially inside the view volume that require clipping. For example, a line that has one vertex outside and one inside the view volume should be clipped against the view volume, so that the vertex that is outside is replaced by a new vertex that is located at the intersection between the line and the view volume. The use of a projection matrix means that the transformed primitives are clipped against the unit cube. The advantage of performing the view transformation and projection before clipping is that it makes the clipping problem consistent; primitives are always clipped against the unit cube.

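For the line example above, the replaced vertex is found parametrically. Here is a minimal sketch for one face of the unit cube, the plane x = 1, assuming endpoint a is inside and b is outside (the names are made up; a full clipper repeats this for all six planes and, as noted below, actually operates on homogeneous coordinates):

```cpp
struct Point3 { float x, y, z; };

// Clip the segment a-b against the plane x = 1: solve
// p(t) = a + t * (b - a) for p(t).x == 1 and return that point,
// which replaces the outside endpoint b.
Point3 clipAgainstXEquals1(Point3 a, Point3 b)
{
    float t = (1.0f - a.x) / (b.x - a.x); // in (0, 1] since a is inside, b outside
    return { 1.0f,
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}
```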

The clipping process is depicted in Figure 2.6. In addition to the six clipping planes of the view volume, the user can define additional clipping planes to visibly chop objects. An image showing this type of visualization, called sectioning, is shown in Figure 19.1 on page 818.

Figure 2.6. After the projection transform, only the primitives inside the unit cube (which correspond to primitives inside the view frustum) are needed for continued processing. Therefore, the primitives outside the unit cube are discarded, and primitives fully inside are kept. Primitives intersecting with the unit cube are clipped against the unit cube, and thus new vertices are generated and old ones are discarded.

The clipping step uses the 4-value homogeneous coordinates produced by projection to perform clipping. Values do not normally interpolate linearly across a triangle in perspective space. The fourth coordinate is needed so that data are properly interpolated and clipped when a perspective projection is used. Finally, perspective division is performed, which places the resulting triangles' positions into three-dimensional normalized device coordinates. As mentioned earlier, this view volume ranges from (-1,-1,-1) to (1,1,1). The last step in the geometry stage is to convert from this space to window coordinates.

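A sketch of that final perspective division, with a minimal hypothetical vector type: each clip-space position is divided by its fourth component, landing in normalized device coordinates.

```cpp
struct Vec4 { float x, y, z, w; };
struct NDC  { float x, y, z; };

// Perspective division: homogeneous clip-space position -> normalized
// device coordinates. For visible points each component ends up in
// [-1, +1] (with the OpenGL depth convention).
NDC toNDC(Vec4 clip)
{
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}
```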

2.3.4 Screen Mapping

Only the (clipped) primitives inside the view volume are passed on to the screen mapping stage, and the coordinates are still three-dimensional when entering this stage. The x- and y-coordinates of each primitive are transformed to form screen coordinates. Screen coordinates together with the z-coordinates are also called window coordinates. Assume that the scene should be rendered into a window with the minimum corner at (x1, y1) and the maximum corner at (x2, y2), where x1 < x2 and y1 < y2. Then the screen mapping is a translation followed by a scaling operation. The new x- and y-coordinates are said to be screen coordinates. The z-coordinate ([-1, +1] for OpenGL and [0, 1] for DirectX) is also mapped to [z1, z2], with z1 = 0 and z2 = 1 as the default values. These can be changed with the API, however. The window coordinates along with this remapped z-value are passed on to the rasterizer stage. The screen mapping process is depicted in Figure 2.7.

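As a sketch, the translation-and-scale described above looks like this, assuming OpenGL-style NDC in [-1, +1] on all three axes and the default depth range [z1, z2] = [0, 1]; the struct names are illustrative:

```cpp
struct Viewport    { float x1, y1, x2, y2; }; // window corners, x1 < x2, y1 < y2
struct ScreenCoord { float x, y, z; };

// Map a normalized-device-coordinate position into window coordinates:
// a translation by +1 followed by a scale onto the window rectangle.
ScreenCoord screenMap(float nx, float ny, float nz, Viewport v)
{
    ScreenCoord s;
    s.x = (nx + 1.0f) * 0.5f * (v.x2 - v.x1) + v.x1;
    s.y = (ny + 1.0f) * 0.5f * (v.y2 - v.y1) + v.y1;
    s.z = (nz + 1.0f) * 0.5f; // [-1, +1] -> default [0, 1]
    return s;
}
```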

Figure 2.7. The primitives lie in the unit cube after the projection transform, and the screen mapping procedure takes care of finding the coordinates on the screen.

Next, we describe how integer and floating point values relate to pixels (and texture coordinates). Given a horizontal array of pixels and using Cartesian coordinates, the left edge of the leftmost pixel is 0.0 in floating point coordinates. OpenGL has always used this scheme, and DirectX 10 and its successors use it. The center of this pixel is at 0.5. So, a range of pixels [0, 9] covers the span [0.0, 10.0). The conversions are simply d = floor(c) and c = d + 0.5, where d is the discrete (integer) index of the pixel and c is the continuous (floating point) value within the pixel.

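In code, the two conversions are one-liners (a small sketch; the function names are made up):

```cpp
#include <cmath>

// Continuous coordinate -> index of the pixel containing it.
int pixelIndex(float c) { return static_cast<int>(std::floor(c)); }

// Pixel index -> continuous coordinate of that pixel's center.
float pixelCenter(int d) { return d + 0.5f; }

// Example: pixelIndex(9.25f) == 9 and pixelCenter(0) == 0.5f;
// pixels [0, 9] cover the continuous span [0.0, 10.0).
```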

While all APIs have pixel location values that increase going from left to right, the location of zero for the top and bottom edges is inconsistent in some cases between OpenGL and DirectX. OpenGL favors the Cartesian system throughout, treating the lower left corner as the lowest-valued element, while DirectX sometimes defines the upper left corner as this element, depending on the context. There is a logic to each, and no right answer exists where they differ. As an example, (0,0) is located at the lower left corner of an image in OpenGL, while it is upper left for DirectX. This difference is important to take into account when moving from one API to the other.

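A common porting fix is to flip the vertical axis. For a render target of height H (a hypothetical parameter), a continuous y measured up from the bottom (OpenGL) corresponds to H - y measured down from the top (DirectX), as in this sketch:

```cpp
// Flip a continuous vertical coordinate between the OpenGL convention
// (origin at the lower left) and the DirectX convention (origin at the
// upper left). For integer pixel rows the equivalent is (H - 1) - row.
float flipY(float y, float height) { return height - y; }
```

Applying this flip (to positions, and correspondingly to texture coordinates) is usually enough to keep an image right side up when moving between the two APIs.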
