Learning Modern 3D Graphics Programming
A brief overview of the rasterization pipeline:
1）Clip space transformation: transform the vertices of each triangle into clip space, then into normalized device coordinates.
2）Window transformation: from normalized device coordinates to window coordinates.
1）Clip Space Transformation.
The first phase of rasterization is to transform the vertices of each triangle into a certain region of space. The volume that the triangle is transformed into is called, in OpenGL parlance, clip space. The positions of the triangle's vertices in clip space are called clip coordinates.
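Clip space can be made concrete with a small sketch. In OpenGL's convention, a clip-space position is inside the visible volume when each of its X, Y, and Z components lies within [-W, +W], so the extent depends on that vertex's W. The vertex values below are made-up examples, not from the tutorial:

```python
# A clip-space position is visible when X, Y, Z each lie in [-W, +W].
# The extent of clip space therefore differs per vertex (it depends on W).

def in_clip_volume(x, y, z, w):
    """Return True if the clip-space position is inside the clip volume."""
    return all(-w <= c <= w for c in (x, y, z))

print(in_clip_volume(0.5, -0.2, 0.9, 1.0))  # True: all components within [-1, 1]
print(in_clip_volume(3.0, 0.0, 0.0, 2.0))   # False: X exceeds W = 2
```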
Normalized Coordinates. Clip space is interesting, but inconvenient. The extent of this space is different for each vertex, which makes
visualizing a triangle rather difficult. Therefore, clip space is transformed into a more reasonable coordinate space: normalized device coordinates.
This process is very simple. The X, Y, and Z of each vertex’s position is divided by W to get normalized device coordinates. That is all.
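The divide described above can be sketched in a few lines; the vertex values are made-up examples:

```python
# Perspective divide: clip coordinates -> normalized device coordinates.
# X, Y, and Z are each divided by W. That is all.

def clip_to_ndc(clip):
    """Convert a clip-space (x, y, z, w) position to NDC."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

# A clip-space vertex with W = 2 (its clip-space extent was [-2, 2]):
print(clip_to_ndc((1.0, -0.5, 1.5, 2.0)))  # (0.5, -0.25, 0.75)
```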
2）Window Transformation. The next phase of rasterization is to transform the vertices of each triangle again. This time, they are converted from normalized device coordinates to window coordinates. As the name suggests, window coordinates are relative to the window that OpenGL is running within.
Even though they refer to the window, they are still three dimensional coordinates. The only difference is that the bounds for these coordinates depends on the viewable window. It should also be noted that while these are in window coordinates, none of the precision is lost. These are not integer coordinates; they are still floating-point values, and thus they have precision beyond that of a single pixel.
The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest. Vertex positions outside of this range are not visible.
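A sketch of the window transformation, following OpenGL's viewport mapping: X and Y map from [-1, 1] into the viewport rectangle, and Z maps from [-1, 1] into the depth range [0, 1]. The viewport size below is a made-up example:

```python
# NDC -> window coordinates. The results stay floating-point, so no
# precision is lost: positions can fall between pixel centers.

def ndc_to_window(ndc, vx, vy, width, height):
    """Map an NDC position into a viewport at (vx, vy) of the given size."""
    x, y, z = ndc
    wx = vx + (x + 1.0) * 0.5 * width
    wy = vy + (y + 1.0) * 0.5 * height
    wz = (z + 1.0) * 0.5  # depth in [0, 1]: 0 closest, 1 farthest
    return (wx, wy, wz)

print(ndc_to_window((0.0, 0.0, 0.0), 0, 0, 800, 600))  # (400.0, 300.0, 0.5)
```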
3）Scan Conversion. After converting the coordinates of a triangle to window coordinates, the triangle undergoes a process called scan conversion.
This process takes the triangle and breaks it up based on the arrangement of window pixels over the output image that the triangle covers.
The center image shows the digital grid of output pixels; the circles represent the center of each pixel. The center of each pixel represents a sample: a discrete location within the area of a pixel.
During scan conversion, a triangle will produce a fragment for every pixel sample that is within the 2D area of the triangle
The image on the right shows the fragments generated by the scan conversion of the triangle. This creates a rough approximation of the triangle’s general shape.
Scan conversion is an inherently 2D operation. This process only uses the X and Y position of the triangle in window coordinates to determine which fragments to generate. The Z value is not forgotten, but it is not directly part of the actual process of scan converting the triangle.
The result of scan converting a triangle is a sequence of fragments that cover the shape of the triangle. Each fragment has certain data associated with it. This data contains the 2D location of the fragment in window coordinates, as well as the Z position of the fragment. This Z value is known as the depth of the fragment. There may be other information that is part of a fragment, and we will expand on that in later tutorials.
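The process above can be sketched with edge functions (signed areas), one common way to test whether a pixel-center sample lies inside a triangle. This is a minimal illustration, not the tutorial's implementation; the triangle and grid size are made up, and only the 2D X/Y test is shown, matching the text's point that scan conversion is inherently 2D:

```python
# Scan conversion sketch: emit a fragment for every pixel whose center
# sample falls inside the triangle's 2D window-space footprint.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: positive if (px, py) is left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def scan_convert(tri, width, height):
    """tri: three (x, y) window-space vertices, counter-clockwise."""
    fragments = []
    for py in range(height):
        for px in range(width):
            cx, cy = px + 0.5, py + 0.5  # sample at the pixel center
            w0 = edge(*tri[1], *tri[2], cx, cy)
            w1 = edge(*tri[2], *tri[0], cx, cy)
            w2 = edge(*tri[0], *tri[1], cx, cy)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                fragments.append((px, py))
    return fragments

tri = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)]
print(len(scan_convert(tri, 8, 8)))  # 36 fragments approximate the triangle
```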
4）Fragment Processing. Each fragment is turned into one or more color values and a single depth value.
This phase takes a fragment from a scan converted triangle and transforms it into one or more color values and a single depth value. The order that fragments from a single triangle are processed in is irrelevant; since a single triangle lies in a single plane, fragments generated from it cannot possibly overlap. However, the fragments from another triangle can possibly overlap. Since order is important in a rasterizer, the fragments from one triangle must all be processed before the fragments from another triangle.
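A toy example of this stage: a "fragment shader" that maps a fragment's data to a color and passes its depth through. The shading rule here (brightness from depth) is entirely made up for illustration:

```python
# Fragment processing sketch: fragment in, (color, depth) out.

def process_fragment(frag):
    """frag: (x, y, depth). Returns one color value and one depth value."""
    x, y, depth = frag
    shade = int((1.0 - depth) * 255)  # nearer fragments appear brighter
    return (shade, shade, shade), depth

print(process_fragment((10, 20, 0.25)))  # ((191, 191, 191), 0.25)
```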
5）Fragment Writing. The fragment is written to the destination image.
After generating one or more colors and a depth value, the fragment is written to the destination image. This step involves more than simply writing to the destination image. Combining the color and depth with the colors that are currently in the image can involve a number of computations.
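One example of such a computation is a depth test: the incoming fragment's depth is compared with the depth already stored in the image, and the write only happens if the fragment is closer. This sketch uses made-up buffer sizes and values; real implementations offer many more combining operations (such as blending):

```python
# Fragment writing sketch: depth-tested write into color and depth buffers.

def write_fragment(color_buf, depth_buf, x, y, color, depth):
    """Write the fragment only if it is closer than what is stored."""
    if depth < depth_buf[y][x]:  # 0 is closest, 1 is farthest
        depth_buf[y][x] = depth
        color_buf[y][x] = color

W, H = 4, 4
colors = [[(0, 0, 0)] * W for _ in range(H)]
depths = [[1.0] * W for _ in range(H)]  # cleared to the farthest depth

write_fragment(colors, depths, 1, 2, (255, 0, 0), 0.6)  # accepted
write_fragment(colors, depths, 1, 2, (0, 255, 0), 0.9)  # rejected: farther
print(colors[2][1], depths[2][1])  # (255, 0, 0) 0.6
```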