OpenGL Fixed-Function Pipeline: Order of Operations

             <<Appendix A: Order of Operations>>

         http://fly.cc.fer.hr/~unreal/theredbook/appendixa.html

This guide describes all the operations performed between the time vertices
are specified and when fragments are finally written into the framebuffer. The
chapters of this guide are arranged in an order that facilitates learning
rather than in the exact order in which these operations are actually
performed. Sometimes the exact order of operations doesn't matter - for
example, surfaces can be converted to polygons and then transformed, or
transformed first and then converted to polygons, with identical results - and
different implementations of OpenGL might do things differently. This appendix
describes a possible order; any implementation is required to give equivalent
results. If you want more details than are presented here, see the OpenGL
Reference Manual.

This appendix has the following major sections:

      "Overview"
      "Geometric Operations"
      "Pixel Operations"
      "Fragment Operations"
      "Odds and Ends"

1. Overview

This section gives an overview of the order of operations, as shown in Figure
A-1. Geometric data (vertices, lines, and polygons) follows the path through
the row of boxes that includes evaluators and per-vertex operations, while
pixel data (pixels, images, and bitmaps) is treated differently for part of
the process. Both types of data undergo the rasterization and per-fragment
operations before the final pixel data is written into the framebuffer.

[Figure A-1: Order of Operations]

All data, whether it describes geometry or pixels, can be saved in a display
list or processed immediately. When a display list is executed, the data is
sent from the display list just as if it were sent by the application.

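As a concrete illustration, a minimal display-list sketch might look like
this (the triangle data is arbitrary, and a current OpenGL 1.x context is
assumed):

    #include <GL/gl.h>

    /* Record a primitive once, then replay it; the replayed data goes
       through the same pipeline stages as immediate-mode calls.        */
    void draw_with_display_list(void)
    {
        GLuint list = glGenLists(1);       /* reserve one list name     */

        glNewList(list, GL_COMPILE);       /* compile, don't execute    */
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
        glEndList();

        glCallList(list);                  /* sent as if issued directly */
        glDeleteLists(list, 1);
    }
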
All geometric primitives are eventually described by vertices. If evaluators
are used, that data is converted to vertices and treated as vertices from then
on. Per-vertex calculations are performed on each vertex, followed by
rasterization to fragments. For pixel data, pixel operations are performed,
and the results are either stored in the texture memory, used for polygon
stippling, or rasterized to fragments.

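For instance, a one-dimensional evaluator turns control points into vertices
that then follow the normal per-vertex path; a small sketch (the control
points are arbitrary):

    #include <GL/gl.h>

    /* Four control points of a cubic Bezier curve. */
    static const GLfloat ctrl[4][3] = {
        { -1.0f, -1.0f, 0.0f }, { -0.5f,  1.0f, 0.0f },
        {  0.5f, -1.0f, 0.0f }, {  1.0f,  1.0f, 0.0f }
    };

    void draw_evaluated_curve(void)
    {
        glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrl[0][0]);
        glEnable(GL_MAP1_VERTEX_3);

        /* Evaluate 30 segments; each generated vertex is processed as if
           it had been specified with glVertex*().                        */
        glMapGrid1f(30, 0.0f, 1.0f);
        glEvalMesh1(GL_LINE, 0, 30);
    }
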
Finally, the fragments are subjected to a series of per-fragment operations,
after which the final pixel values are drawn into the framebuffer.

2. Geometric Operations

Geometric data, whether it comes from a display list, an evaluator, the
vertices of a rectangle, or as raw data, consists of a set of vertices and the
type of primitive it describes (a point, a line, or a polygon). Vertex data
includes not only the (x, y, z, w) coordinates, but also a normal vector,
texture coordinates, a color or index, and edge-flag data. All these elements
except the vertex's coordinates can be specified in any order, and default
values exist as well. As soon as the vertex command glVertex*() is issued, the
components are padded, if necessary, to four dimensions (using z = 0 and w =
1), and the current values of all the elements are associated with the vertex.
The complete set of vertex data is then processed.

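The sketch below illustrates how the current normal, texture coordinate, and
color are captured at each glVertex*() call (the values themselves are
arbitrary):

    #include <GL/gl.h>

    void specify_triangle(void)
    {
        glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);   /* becomes the current normal      */
        glTexCoord2f(0.0f, 0.0f);       /* becomes the current tex coord   */
        glColor3f(1.0f, 0.0f, 0.0f);    /* becomes the current color       */
        glVertex2f(0.0f, 0.0f);         /* padded to (0, 0, 0, 1); the current
                                           normal/texcoord/color attach here */
        glColor3f(0.0f, 1.0f, 0.0f);
        glVertex2f(1.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f);
        glVertex2f(0.0f, 1.0f);
        glEnd();
    }
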
3. Per-Vertex Operations

In the per-vertex operations stage of processing, each vertex's spatial
coordinates are transformed by the modelview matrix, while the normal vector
is transformed by that matrix's inverse and renormalized if specified. If
automatic texture generation is enabled, new texture coordinates are generated
from the transformed vertex coordinates, and they replace the vertex's old
texture coordinates. The texture coordinates are then transformed by the
current texture matrix and passed on to the primitive assembly step.

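A small sketch of enabling automatic texture-coordinate generation and
applying a texture matrix (the plane equation and scale factors are
arbitrary):

    #include <GL/gl.h>

    void setup_texgen(void)
    {
        /* Generate s linearly from the vertex's object-space position. */
        static const GLfloat s_plane[4] = { 1.0f, 0.0f, 0.0f, 0.0f };

        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
        glTexGenfv(GL_S, GL_OBJECT_PLANE, s_plane);
        glEnable(GL_TEXTURE_GEN_S);

        /* Generated (or supplied) coordinates are then transformed by
           the texture matrix.                                           */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glScalef(2.0f, 2.0f, 1.0f);
        glMatrixMode(GL_MODELVIEW);
    }
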
Meanwhile, the lighting calculations, if enabled, are performed using the
transformed vertex and normal vector coordinates, and the current material,
lights, and lighting model. These calculations generate new colors or indices
that are clamped or masked to the appropriate range and passed on to the
primitive assembly step.

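A minimal lighting setup along these lines, with one directional light and
arbitrary values, might be:

    #include <GL/gl.h>

    void setup_lighting(void)
    {
        static const GLfloat light_pos[4] = { 1.0f, 1.0f, 1.0f, 0.0f };
        static const GLfloat diffuse[4]   = { 0.8f, 0.8f, 0.8f, 1.0f };

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glEnable(GL_NORMALIZE);          /* renormalize transformed normals */
        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
        glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
        glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
    }
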
4. Primitive Assembly

Primitive assembly differs, depending on whether the primitive is a point, a
line, or a polygon. If flat shading is enabled, the colors or indices of all
the vertices in a line or polygon are set to the same value. If special
clipping planes are defined and enabled, they're used to clip primitives of
all three types. (The clipping-plane equations are transformed by the inverse
of the modelview matrix when they're specified.) Point clipping simply passes
or rejects vertices; line or polygon clipping can add additional vertices
depending on how the line or polygon is clipped. After this clipping, the
spatial coordinates of each vertex are transformed by the projection matrix,
and the results are clipped against the standard viewing planes x = ±w,
y = ±w, and z = ±w.

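For example, flat shading and a user clipping plane are enabled as in the
following sketch (the plane equation is arbitrary):

    #include <GL/gl.h>

    void setup_assembly_state(void)
    {
        /* Keep the half-space x >= 0; the equation is transformed by the
           inverse of the modelview matrix at the time of this call.      */
        static const GLdouble plane[4] = { 1.0, 0.0, 0.0, 0.0 };

        glShadeModel(GL_FLAT);           /* one color per line or polygon */
        glClipPlane(GL_CLIP_PLANE0, plane);
        glEnable(GL_CLIP_PLANE0);
    }
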
If selection is enabled, any primitive not eliminated by clipping generates a
selection-hit report, and no further processing is performed. Without
selection, perspective division by w occurs and the viewport and depth-range
operations are applied. Also, if the primitive is a polygon, it's then
subjected to a culling test (if culling is enabled). A polygon might convert
to points or lines, depending on the polygon mode.

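Back-face culling and the polygon mode mentioned above are controlled as in
this sketch (the outline mode GL_LINE is chosen arbitrarily):

    #include <GL/gl.h>

    void setup_polygon_processing(void)
    {
        glFrontFace(GL_CCW);             /* counterclockwise winding is "front" */
        glCullFace(GL_BACK);             /* cull back-facing polygons           */
        glEnable(GL_CULL_FACE);

        /* Rasterize polygons as outlines instead of filled areas. */
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    }
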
Finally, points, lines, and polygons are rasterized to fragments, taking into
account polygon or line stipples, line width, and point size. Rasterization
involves determining which squares of an integer grid in window coordinates
are occupied by the primitive. Color and depth values are also assigned to
each such square.

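Line stipple, line width, and point size, which all influence rasterization,
are set as in this sketch (values arbitrary):

    #include <GL/gl.h>

    void setup_rasterization_state(void)
    {
        glLineStipple(1, 0x0F0F);        /* repeat factor 1, dashed pattern */
        glEnable(GL_LINE_STIPPLE);
        glLineWidth(2.0f);
        glPointSize(4.0f);
    }
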
5. Pixel Operations

Pixels from host memory are first unpacked into the proper number of
components. The OpenGL unpacking facility handles a number of different
formats. Next, the data is scaled, biased, and processed using a pixel map.
The results are clamped to an appropriate range depending on the data type,
and then either written in the texture memory for use in texture mapping or
rasterized to fragments.

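A sketch of this host-to-framebuffer path - unpack, scale/bias, pixel map,
then rasterize with glDrawPixels() - with placeholder image data:

    #include <GL/gl.h>

    void draw_host_pixels(const GLubyte *image, GLsizei width, GLsizei height)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed   */
        glPixelTransferf(GL_RED_SCALE, 1.5f);    /* scale and bias the red
                                                    component (arbitrary)     */
        glPixelTransferf(GL_RED_BIAS, 0.1f);

        /* The clamped results are rasterized to fragments at the current
           raster position; glTexImage2D() would instead place them in
           texture memory.                                                    */
        glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, image);
    }
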
If pixel data is read from the framebuffer, pixel-transfer operations (scale,
bias, mapping, and clamping) are performed. The results are packed into an
appropriate format and then returned to processor memory.

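Reading back a region of the framebuffer uses the packing side of the same
machinery; a minimal sketch (the caller provides a large enough buffer):

    #include <GL/gl.h>

    void read_back_rgba(GLint x, GLint y, GLsizei w, GLsizei h, GLubyte *out)
    {
        glPixelStorei(GL_PACK_ALIGNMENT, 1);     /* pack rows tightly */
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, out);
    }
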
The pixel copy operation is similar to a combination of the unpacking and
transfer operations, except that packing and unpacking is unnecessary, and
only a single pass is made through the transfer operations before the data is
written back into the framebuffer.

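A framebuffer-to-framebuffer copy along these lines:

    #include <GL/gl.h>

    void copy_color_region(GLint src_x, GLint src_y, GLsizei w, GLsizei h)
    {
        /* The destination corner is the current raster position. */
        glCopyPixels(src_x, src_y, w, h, GL_COLOR);
    }
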
6. Fragment Operations

If texturing is enabled, a texel is generated from texture memory for each
fragment and applied to the fragment. Then fog calculations are performed, if
they're enabled, followed by coverage (antialiasing) calculations if
antialiasing is enabled.

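Enabling texturing, fog, and line antialiasing for this stage might look like
the following sketch (the texture object and fog parameters are arbitrary):

    #include <GL/gl.h>

    void setup_fragment_inputs(GLuint texture)
    {
        static const GLfloat fog_color[4] = { 0.5f, 0.5f, 0.5f, 1.0f };

        glEnable(GL_TEXTURE_2D);            /* a texel is fetched per fragment */
        glBindTexture(GL_TEXTURE_2D, texture);

        glEnable(GL_FOG);                   /* fog is applied after texturing  */
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogf(GL_FOG_START, 1.0f);
        glFogf(GL_FOG_END, 10.0f);
        glFogfv(GL_FOG_COLOR, fog_color);

        glEnable(GL_LINE_SMOOTH);           /* coverage-based antialiasing     */
    }
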
Next comes scissoring, followed by the alpha test (in RGBA mode only), the
stencil test, the depth-buffer test, and dithering. All of these operations
can be disabled. Next, if in index mode, a logical operation is applied if one
has been specified. If in RGBA mode, blending is performed.

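Each of these per-fragment operations is enabled and configured independently;
a sketch with arbitrary parameters:

    #include <GL/gl.h>

    void setup_fragment_tests(void)
    {
        glScissor(0, 0, 256, 256);           /* keep fragments inside this rect */
        glEnable(GL_SCISSOR_TEST);

        glAlphaFunc(GL_GREATER, 0.5f);       /* RGBA mode only                  */
        glEnable(GL_ALPHA_TEST);

        glStencilFunc(GL_EQUAL, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glEnable(GL_STENCIL_TEST);

        glDepthFunc(GL_LESS);
        glEnable(GL_DEPTH_TEST);

        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   /* RGBA blending   */
        glEnable(GL_BLEND);
    }
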
The fragment is then masked by a color mask or an index mask, depending on the
mode, and drawn into the appropriate buffer. If fragments are being written
into the stencil or depth buffer, masking occurs after the stencil and depth
tests, and the results are drawn into the framebuffer without performing the
blending, dithering, or logical operation.

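The write masks for the color, depth, and stencil buffers are set as in this
sketch (mask values arbitrary):

    #include <GL/gl.h>

    void setup_write_masks(void)
    {
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  /* don't write alpha      */
        glDepthMask(GL_FALSE);                             /* depth buffer read-only */
        glStencilMask(0x0F);                               /* only low stencil bits  */
    }
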
7. Odds and Ends

Matrix operations deal with the current matrix stack, which can be the
modelview, the projection, or the texture matrix stack. The commands
glMultMatrix*(), glLoadMatrix*(), and glLoadIdentity() are applied to the top
matrix on the stack, while glTranslate*(), glRotate*(), glScale*(), glOrtho(),
and glFrustum() are used to create a matrix that's multiplied by the top
matrix. When the modelview matrix is modified, its inverse is also generated
for normal vector transformation.

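A typical use of these matrix commands (the viewing values are arbitrary):

    #include <GL/gl.h>

    void setup_matrices(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();                             /* replace the top matrix    */
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);  /* multiply by a new matrix  */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);              /* each call multiplies the
                                                         top of the stack          */
        glRotatef(30.0f, 0.0f, 1.0f, 0.0f);
    }
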
The commands that set the current raster position are treated exactly like a
vertex command up until the point where rasterization would occur. At that
point, the value is saved and is used in the rasterization of pixel data.

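Setting the raster position before drawing pixel data, as described above
(the coordinates and image data are placeholders):

    #include <GL/gl.h>

    void place_pixels(const GLubyte *image, GLsizei w, GLsizei h)
    {
        /* The raster position is transformed like a vertex, then saved and
           used as the anchor for subsequent pixel rasterization.           */
        glRasterPos2i(10, 10);
        glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, image);
    }
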
The various glClear() commands bypass all operations except scissoring,
dithering, and writemasking.

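For completeness, a typical clear call (clear values arbitrary):

    #include <GL/gl.h>

    void clear_buffers(void)
    {
        glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
        glClearDepth(1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }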