Real-Time Rendering, 3rd Edition: Notes on Chapters 1-2

 

Chapter 1 Introduction

 

1.2 Notation and Definitions

 


 

Throughout, log n denotes the natural logarithm, log_e n, rather than log_10 n.

 


 

We use a right-hand coordinate system (see Appendix A.2) since this is the standard system for three-dimensional geometry in the field of computer graphics.

 

Chapter 2 The Graphics Rendering Pipeline

 


 

the graphics rendering pipeline, also known simply as the pipeline.

 

The purpose of the pipeline

 

The main function of the pipeline is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects, light sources, shading equations, textures, and more.

 

The pipeline model fixes functionality, not implementation (e.g., which line-drawing algorithm is used)

 

…and whether, say, depth cueing is available, not whether lines are implemented via Bresenham's line-drawing algorithm [142] or via a symmetric double-step algorithm [1391].

 

The speed of the pipeline is determined by its slowest stage

 

The pipeline stages execute in parallel, but they are stalled until the slowest stage has finished its task.

 

 

The three conceptual stages of real-time rendering

 

A coarse division of the real-time rendering pipeline into three conceptual stages—application, geometry, and rasterizer

 

Each stage may itself be a single operation, a pipeline (several operations processed in sequence), or parallelized (several operations processed in parallel)

 

Each of these stages may be a pipeline in itself …but this stage could also be pipelined or parallelized.

 

Each of these stages is usually a pipeline in itself, which means that it consists of several substages.

 

Computing rendering speed

 

EXAMPLE: RENDERING SPEED. Assume that our output device's maximum update frequency is 60 Hz, and that the bottleneck of the rendering pipeline has been found. Timings show that this stage takes 62.5 ms to execute. The rendering speed is then computed as follows. First, ignoring the output device, we get a maximum rendering speed of 1/0.0625 = 16 fps. Second, adjust this value to the frequency of the output device: 60 Hz implies that rendering speed can be 60 Hz, 60/2 = 30 Hz, 60/3 = 20 Hz, 60/4 = 15 Hz, 60/5 = 12 Hz, and so forth. This means that we can expect the rendering speed to be 15 Hz, since this is the maximum constant speed the output device can manage that is less than 16 fps.
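The arithmetic in the example above can be sketched as a small helper. This is a minimal illustration, not code from the book; the function name is made up.

```python
def effective_frame_rate(refresh_hz, bottleneck_seconds):
    """Largest refresh_hz / n (integer n) not exceeding the raw pipeline rate."""
    raw_fps = 1.0 / bottleneck_seconds        # rate ignoring the display
    n = 1
    while refresh_hz / n > raw_fps:           # step down: 60, 30, 20, 15, ...
        n += 1
    return refresh_hz / n

print(effective_frame_rate(60.0, 0.0625))     # 15.0
```

With a 62.5 ms bottleneck the raw rate is 16 fps, but the display can only refresh at 60/n Hz, so the pipeline effectively runs at 15 Hz.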

 

application stage

 

traditionally performed on the CPU

 


 

collision detection, global acceleration algorithms, animation, physics simulation, and many others

 

geometry stage

 

typically performed on a graphics processing unit (GPU)

 


 

which deals with transforms, projections, etc.

 

rasterizer stage

 


 

draws (renders) an image with use of the data that the previous stage generated, as well as any per-pixel computation desired. The rasterizer stage is processed completely on the GPU.

 


 

2.2 The Application Stage

 

The developer has full control over what happens in the application stage, since it executes on the CPU.

 

2.3 The Geometry Stage

 

This stage is further divided into the following functional stages: model and view transform, vertex shading, projection, clipping, and screen mapping (Figure 2.3).

 

 

 

 

2.3.1 Model and View Transform

 


 

Each object initially lies in its own coordinate system, its model coordinates; after the model transform it lies in world coordinates. All objects in world coordinates exist in the same space.

 

The coordinates of an object are called model coordinates, and after the model transform has been applied to these coordinates, the model is said to be located in world coordinates or in world space. The world space is unique, and after the models have been transformed with their respective model transforms, all models exist in this same space.

 

The purpose of the view transform is to move the camera to the origin, looking down the negative z-axis (note: D3D conventionally looks down the positive z-axis), with the positive y-axis pointing up and the positive x-axis pointing to the right.

 

The purpose of the view transform is to place the camera at the origin and aim it, to make it look in the direction of the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.
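As a sketch of this idea, the following builds a right-handed view matrix from an assumed camera position and orthonormal basis vectors. The names and parameterization are illustrative, not the book's derivation.

```python
def view_transform(eye, right, up, forward):
    """Return a 4x4 row-major view matrix. `forward` is the viewing direction,
    so the camera's z-axis is -forward (right-handed, looking down -z)."""
    r, u, f = right, up, forward
    z = [-f[0], -f[1], -f[2]]                  # camera z opposes the view direction
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    return [
        [r[0], r[1], r[2], -dot(r, eye)],
        [u[0], u[1], u[2], -dot(u, eye)],
        [z[0], z[1], z[2], -dot(z, eye)],
        [0.0,  0.0,  0.0,  1.0],
    ]

def transform(m, p):
    """Apply a 4x4 matrix to the point (x, y, z, 1)."""
    q = p + [1.0]
    return [sum(m[i][j] * q[j] for j in range(4)) for i in range(3)]

# The camera itself lands at the origin:
M = view_transform([1.0, 2.0, 3.0], [1, 0, 0], [0, 1, 0], [0, 0, -1])
print(transform(M, [1.0, 2.0, 3.0]))   # [0.0, 0.0, 0.0]
```

A point one unit in front of this camera maps to z = -1, consistent with the camera looking down the negative z-axis.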

 

 

 

 

 

2.3.2 Vertex Shading

 


 


 

This operation of determining the effect of a light on a material is known as shading.

 

2.3.3 Projection

 


 


 

which transforms the view volume into a unit cube with its extreme points at (-1,-1,-1) and (1,1,1). The unit cube is called the canonical view volume.

 


 

There are two commonly used projection methods, namely orthographic (also called parallel) and perspective projection.

 


 

The main characteristic of orthographic projection is that parallel lines remain parallel after the transform. This transformation is a combination of a translation and a scaling.
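A minimal sketch of such a matrix, mapping an axis-aligned view box [l,r] x [b,t] x [n,f] onto the canonical view volume [-1,1]^3. This particular parameterization is an assumption for illustration, not quoted from the book.

```python
def orthographic(l, r, b, t, n, f):
    """Row-major 4x4: translate the box center to the origin, then scale
    each extent to length 2 (a combined translation and scaling)."""
    return [
        [2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)],
        [0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)],
        [0.0, 0.0, 2.0 / (f - n), -(f + n) / (f - n)],
        [0.0, 0.0, 0.0, 1.0],
    ]

# The near-bottom-left corner of the box lands at (-1, -1, -1):
M = orthographic(-10.0, 10.0, -5.0, 5.0, 1.0, 100.0)
corner = (M[0][0] * -10.0 + M[0][3],
          M[1][1] * -5.0 + M[1][3],
          M[2][2] * 1.0 + M[2][3])
print(tuple(round(c, 9) for c in corner))   # (-1.0, -1.0, -1.0)
```

Because every matrix entry is either a scale or a translation term, parallel lines clearly stay parallel under this transform.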

 


 

The perspective projection is a bit more complex. In this type of projection, the farther away an object lies from the camera, the smaller it appears after projection. In addition, parallel lines may converge at the horizon.

 


 

Both orthographic and perspective transforms can be constructed with 4x4 matrices (see Chapter 4), and after either transform, the models are said to be in normalized device coordinates.
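The shrinking-with-distance behavior of perspective projection comes from dividing by depth. A single-axis, similar-triangles sketch (illustrative only; the full 4x4 perspective matrix is the subject of Chapter 4):

```python
def project_x(x, z, d=1.0):
    """Similar-triangles projection of coordinate x at depth z onto the plane z = d."""
    return x * d / z

# The same point, twice as far from the camera, projects half as far from center:
print(project_x(1.0, 2.0))   # 0.5
print(project_x(1.0, 4.0))   # 0.25
```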

 

 

 

 


 

they are called projections because after display, the z-coordinate is not stored in the image generated. In this way, the models are projected from three to two dimensions.

 

2.3.4 Clipping

 


 

The view transform and projection come before clipping? (In D3D, does this happen after clipping?)

 

The advantage of performing the view transformation and projection before clipping is that it makes the clipping problem consistent;

 


 

Only the primitives wholly or partially inside the view volume need to be passed on to the rasterizer stage, which then draws them on the screen.

 


 

Primitives entirely outside the view volume are not passed on further, since they are not rendered.

 


 

It is the primitives that are partially inside the view volume that require clipping.

 

 

 

 

Primitives entirely outside the unit cube (apparently the cube produced by the projection step) are discarded, while primitives entirely inside are kept.

 

the primitives outside the unit cube are discarded and primitives totally inside are kept.

 

 


 

Primitives intersecting with the unit cube are clipped against the unit cube, and thus new vertices are generated and old ones are discarded.
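The generation of new vertices can be sketched by clipping a polygon against a single plane of the canonical view volume (here x <= 1), in the style of one pass of the classic Sutherland-Hodgman algorithm. The book does not prescribe a particular clipper; a full one would repeat this for all six cube faces.

```python
def clip_x_max(polygon, x_max=1.0):
    """Return the 2D polygon clipped to x <= x_max. Edges crossing the plane
    get a new interpolated vertex; vertices outside the plane are dropped."""
    out = []
    for i, p in enumerate(polygon):
        q = polygon[(i + 1) % len(polygon)]
        p_in, q_in = p[0] <= x_max, q[0] <= x_max
        if p_in:
            out.append(p)
        if p_in != q_in:                      # edge crosses the clip plane
            t = (x_max - p[0]) / (q[0] - p[0])
            out.append((x_max, p[1] + t * (q[1] - p[1])))
    return out

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]    # pokes out past x = 1
print(clip_x_max(tri))   # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 2.0)]
```

The clipped triangle becomes a quadrilateral: the vertex at (2, 0) is discarded and two new vertices appear on the x = 1 plane.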

 

2.3.5 Screen Mapping

 


 

"Introduction to 3D Game Programming with DirectX 9.0" calls this the viewport transform: its task is to map vertex coordinates from the projection window to a rectangular region of the screen called the viewport.

 

When entering this stage, the coordinates are still three-dimensional. (Didn't projection already reduce them to two dimensions? Conceptually yes, but the z-coordinate is kept for use in depth buffering.)

 

… and the coordinates are still three dimensional when entering this stage.

 


 

The x- and y-coordinates of each primitive are transformed to form screen coordinates.

 


 

Screen coordinates together with the z-coordinates are also called window coordinates.

 


 

As an example, (0,0) is located at the lower left corner of an image in OpenGL, while it is upper left for DirectX.
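A sketch of the mapping itself: a translation and scaling of x and y from [-1,1] to a window of w x h pixels, with z passed through. The bottom-left origin here is the OpenGL convention; DirectX would flip y.

```python
def screen_map(ndc, w, h):
    """Map normalized device coordinates ([-1,1] in x and y) to screen
    coordinates for a w x h window; z is passed through unchanged."""
    x, y, z = ndc
    return ((x + 1.0) * 0.5 * w, (y + 1.0) * 0.5 * h, z)

print(screen_map((-1.0, -1.0, 0.5), 1280, 720))   # (0.0, 0.0, 0.5)
print(screen_map((1.0, 1.0, 0.5), 1280, 720))     # (1280.0, 720.0, 0.5)
```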

 

2.4 The Rasterizer Stage

 


 


 

Given the transformed and projected vertices with their associated shading data (all from the geometry stage), the goal of the rasterizer stage is to compute and set colors for the pixels covered by the object.

 

 

 

 

Similar to the geometry stage, this stage is divided into several functional stages: triangle setup, triangle traversal, pixel shading, and merging (Figure 2.8).

 

 

2.4.1 Triangle Setup

 

In this stage the differentials and other data for the triangle's surface are computed. This data is used for scan conversion, as well as for interpolation of the various shading data produced by the geometry stage. This process is performed by fixed-operation hardware dedicated to this task.

 

2.4.2 Triangle Traversal

 

Here is where each pixel that has its center (or a sample) covered by the triangle is checked and a fragment generated for the part of the pixel that overlaps the triangle. Finding which samples or pixels are inside a triangle is often called triangle traversal or scan conversion. Each triangle fragment's properties are generated using data interpolated among the three triangle vertices (see Chapter 5). These properties include the fragment's depth, as well as any shading data from the geometry stage. Akeley and Jermoluk [7] and Rogers [1077] offer more information on triangle traversal.
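One common way to express the inside test is with edge functions: a pixel center is inside a triangle if it lies on the same side of all three edges. This is an illustrative sketch; real hardware uses incremental variants of the same test.

```python
def edge(a, b, p):
    """Twice the signed area of (a, b, p); >= 0 means p is left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_pixels(v0, v1, v2, width, height):
    """Pixel centers (x+0.5, y+0.5) covered by a counterclockwise triangle."""
    hits = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            if edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0 and edge(v2, v0, p) >= 0:
                hits.append((x, y))
    return hits

print(len(covered_pixels((0, 0), (4, 0), (0, 4), 4, 4)))   # 10
```

The same edge-function values, normalized, give barycentric weights, which is how the per-vertex shading data gets interpolated across the fragment.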

 

 

 

 

2.4.3 Pixel Shading

 


 

Any per-pixel shading computations are performed here, using the interpolated shading data as input. The end result is one or more colors to be passed on to the next stage.
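As a toy example of such a per-pixel computation, here is a Lambertian (diffuse) term evaluated from interpolated inputs. The normal, light direction, and surface color are illustrative values, not from the book.

```python
def lambert(normal, light_dir, albedo):
    """Diffuse shading: surface color scaled by max(0, n . l).
    Both vectors are assumed to be unit length."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    k = max(0.0, ndotl)
    return tuple(k * c for c in albedo)

# Light head-on gives full color; light from behind gives black:
print(lambert((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.25)))   # (1.0, 0.5, 0.25)
print(lambert((0, 0, 1), (0, 0, -1), (1.0, 0.5, 0.25)))  # (0.0, 0.0, 0.0)
```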

 

2.4.4 Merging

 


 

The information for each pixel is stored in the color buffer, which is a rectangular array of colors (a red, a green, and a blue component for each color).

 


 

It is the responsibility of the merging stage to combine the fragment color produced by the shading stage with the color currently stored in the buffer.

 

This stage is also responsible for resolving visibility (using the depth buffer: the Z-buffer stores a depth value for each pixel).

 

This stage is also responsible for resolving visibility.
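The Z-buffer test can be sketched in a few lines: each pixel keeps the depth of the nearest fragment written so far, and a new fragment is merged only if it is closer (smaller z here, by convention). A minimal illustration with made-up buffers:

```python
def merge(color_buf, z_buf, x, y, frag_color, frag_z):
    """Write the fragment's color only if it passes the depth test."""
    if frag_z < z_buf[y][x]:
        z_buf[y][x] = frag_z
        color_buf[y][x] = frag_color

W = H = 2
zbuf = [[float("inf")] * W for _ in range(H)]   # cleared to "infinitely far"
cbuf = [[(0, 0, 0)] * W for _ in range(H)]

merge(cbuf, zbuf, 0, 0, (255, 0, 0), 0.8)   # red fragment at depth 0.8
merge(cbuf, zbuf, 0, 0, (0, 255, 0), 0.3)   # green is closer: overwrites red
merge(cbuf, zbuf, 0, 0, (0, 0, 255), 0.9)   # blue is farther: rejected
print(cbuf[0][0])   # (0, 255, 0)
```

Note this gives the correct result regardless of the order in which the fragments arrive, which is why the Z-buffer works for unsorted primitives.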

 


 

The stencil buffer is an offscreen buffer used to record the locations of the rendered primitive. It typically contains eight bits per pixel.

 


 

Primitives can be rendered into the stencil buffer using various functions, and the buffer's contents can then be used to control rendering into the color buffer and Z-buffer.

 


 

The stencil buffer is a powerful tool for generating special effects. All of these functions at the end of the pipeline are called raster operations (ROP) or blend operations.

 

The frame buffer

 

The frame buffer generally consists of all the buffers on a system, but it is sometimes used to mean just the color buffer and Z-buffer as a set. In 1990, Haeberli and Akeley [474] presented another complement to the frame buffer, called the accumulation buffer. In this buffer, images can be accumulated using a set of operators. Other effects that can be generated include depth of field, antialiasing, soft shadows, etc.

 

Double buffering

 


 

To avoid allowing the human viewer to see the primitives as they are being rasterized and sent to the screen, double buffering is used. This means that the rendering of a scene takes place off screen, in a back buffer. Once the scene has been rendered in the back buffer, the contents of the back buffer are swapped with the contents of the front buffer that was previously displayed on the screen. The swapping occurs during vertical retrace, a time when it is safe to do so.

 

 

 
