
OpenGL Projection Matrix

Related Topics: OpenGL Transformation

Overview

A computer monitor is a 2D surface, so a 3D scene must be transformed into a 2D image in order to display it. The GL_PROJECTION matrix is used for this projection transformation: it converts eye coordinates to clip coordinates. The clip coordinates are then transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.

Therefore, keep in mind that both the clipping and NDC transformations are integrated into the GL_PROJECTION matrix. The following sections describe how to build the projection matrix from 6 parameters: the left, right, bottom, top, near and far boundary values.

Perspective Projection

Figure: Perspective Frustum and Normalized Device Coordinates (NDC)

In perspective projection, a 3D point in a truncated pyramid frustum (eye coordinates) is mapped to a cube (NDC); the x-coordinate from [l, r] to [-1, 1], the y-coordinate from [b, t] to [-1, 1] and the z-coordinate from [n, f] to [-1, 1].

Note that the eye coordinates are defined in a right-handed coordinate system, but NDC uses a left-handed coordinate system. That is, the camera at the origin looks along the -Z axis in eye space, but along the +Z axis in NDC. Since glFrustum() accepts only positive values for the near and far distances, we need to negate them during the construction of the GL_PROJECTION matrix.

In OpenGL, a 3D point in eye space is projected onto the near plane (projection plane). The following diagrams show how a point (xe, ye, ze) in eye space is projected to (xp, yp, zp) on the near plane.

Figure: Top View of Projection
Figure: Side View of Projection

From the top view of the projection, the x-coordinate of eye space, xe, is mapped to xp, which is calculated using the ratio of similar triangles:

$$\frac{x_p}{x_e} = \frac{-n}{z_e} \quad\Rightarrow\quad x_p = \frac{-n \cdot x_e}{z_e} = \frac{n \cdot x_e}{-z_e}$$

From the side view of the projection, yp is calculated in a similar way:

$$y_p = \frac{n \cdot y_e}{-z_e}$$

Note that both xp and yp depend on ze; they are inversely proportional to -ze. This is an important fact for constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by multiplying by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates. They finally become normalized device coordinates (NDC) after dividing by the w-component of the clip coordinates. (See more details on OpenGL Transformation.)

$$\begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = M_{projection} \cdot \begin{pmatrix} x_e \\ y_e \\ z_e \\ w_e \end{pmatrix}, \qquad \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix} = \begin{pmatrix} x_c/w_c \\ y_c/w_c \\ z_c/w_c \end{pmatrix}$$

Therefore, we can set the w-component of the clip coordinates to -ze, and the 4th row of the GL_PROJECTION matrix becomes (0, 0, -1, 0).

Next, we map xp and yp to xn and yn of NDC with a linear relationship; [l, r] ⇒ [-1, 1] and [b, t] ⇒ [-1, 1].

Mapping from xp to xn:

$$x_n = \frac{1-(-1)}{r-l} \cdot x_p + \beta, \qquad 1 = \frac{2r}{r-l} + \beta \;\Rightarrow\; \beta = -\frac{r+l}{r-l}$$

$$x_n = \frac{2 x_p}{r-l} - \frac{r+l}{r-l}$$

Mapping from yp to yn:

$$y_n = \frac{2 y_p}{t-b} - \frac{t+b}{t-b}$$
Then, we substitute xp and yp into the above equations:

$$x_n = \frac{2 \cdot \frac{n \cdot x_e}{-z_e}}{r-l} - \frac{r+l}{r-l} = \left( \frac{2n}{r-l} \cdot x_e + \frac{r+l}{r-l} \cdot z_e \right) / (-z_e)$$

$$y_n = \left( \frac{2n}{t-b} \cdot y_e + \frac{t+b}{t-b} \cdot z_e \right) / (-z_e)$$

Note that we make both terms of each equation divisible by -ze for the perspective division (xc/wc, yc/wc). Since we set wc to -ze earlier, the terms inside the parentheses become xc and yc of the clip coordinates.

From these equations, we can find the 1st and 2nd rows of the GL_PROJECTION matrix:

$$M_{projection} = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

Now, we only have the 3rd row of the GL_PROJECTION matrix left to solve. Finding zn is a little different from the others because ze in eye space is always projected to -n on the near plane, but we need a unique z value for clipping and depth testing. Plus, we should be able to unproject (inverse-transform) it. Since we know z does not depend on the x or y values, we borrow the w-component to find the relationship between zn and ze. Therefore, we can specify the 3rd row of the GL_PROJECTION matrix as (0, 0, A, B):

$$z_n = \frac{z_c}{w_c} = \frac{A \cdot z_e + B \cdot w_e}{-z_e}$$

In eye space, w_e equals 1. Therefore, the equation becomes:

$$z_n = \frac{A \cdot z_e + B}{-z_e}$$

To find the coefficients A and B, we use the (ze, zn) relations (-n, -1) and (-f, 1), and substitute them into the above equation:

$$\frac{-An + B}{n} = -1 \;\Rightarrow\; -An + B = -n \qquad \text{(1)}$$

$$\frac{-Af + B}{f} = 1 \;\Rightarrow\; -Af + B = f \qquad \text{(2)}$$

To solve the equations for A and B, rewrite eq.(1) for B:

$$B = An - n \qquad \text{(1')}$$

Substitute eq.(1') for B in eq.(2), then solve for A:

$$-Af + An - n = f \;\Rightarrow\; A(n - f) = f + n \;\Rightarrow\; A = -\frac{f+n}{f-n}$$

Put A into eq.(1') to find B:

$$B = An - n = -\frac{n(f+n)}{f-n} - n = \frac{-n(f+n) - n(f-n)}{f-n} = -\frac{2fn}{f-n}$$

We found A and B. Therefore, the relation between ze and zn becomes:

$$z_n = \frac{-\frac{f+n}{f-n} \cdot z_e - \frac{2fn}{f-n}}{-z_e} \qquad \text{(3)}$$

Finally, we have found all the entries of the GL_PROJECTION matrix. The complete projection matrix is:

$$M_{projection} = \begin{pmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

OpenGL Perspective Projection Matrix

This projection matrix is for a general frustum. If the viewing volume is symmetric, that is, r = -l and t = -b, then it can be simplified as:

$$\begin{pmatrix} \frac{n}{r} & 0 & 0 & 0 \\ 0 & \frac{n}{t} & 0 & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

Before we move on, please take another look at the relation between ze and zn, eq.(3). Notice that it is a rational function, so the relationship between ze and zn is non-linear. This means there is very high precision near the near plane, but very little precision near the far plane. If the range [-n, -f] gets larger, it causes a depth precision problem (z-fighting): a small change of ze around the far plane barely affects the zn value. The distance between n and f should be kept as short as possible to minimize depth buffer precision problems.

Figure: Comparison of Depth Buffer Precisions

Orthographic Projection

Figure: Orthographic Volume and Normalized Device Coordinates (NDC)

Constructing the GL_PROJECTION matrix for orthographic projection is much simpler than for perspective mode.

All xe, ye and ze components in eye space are linearly mapped to NDC. We just need to scale a rectangular volume into a cube, then move it to the origin. Let's find the elements of GL_PROJECTION using these linear relationships.


Mapping from xe to xn:

$$x_n = \frac{2 x_e}{r-l} - \frac{r+l}{r-l}$$

Mapping from ye to yn:

$$y_n = \frac{2 y_e}{t-b} - \frac{t+b}{t-b}$$

Mapping from ze to zn:

$$z_n = \frac{-2 z_e}{f-n} - \frac{f+n}{f-n}$$
Since the w-component is not necessary for orthographic projection, the 4th row of the GL_PROJECTION matrix remains (0, 0, 0, 1). Therefore, the complete GL_PROJECTION matrix for orthographic projection is:
$$\begin{pmatrix} \frac{2}{r-l} & 0 & 0 & -\frac{r+l}{r-l} \\ 0 & \frac{2}{t-b} & 0 & -\frac{t+b}{t-b} \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

OpenGL Orthographic Projection Matrix

It can be further simplified if the viewing volume is symmetric, that is, r = -l and t = -b:

$$\begin{pmatrix} \frac{1}{r} & 0 & 0 & 0 \\ 0 & \frac{1}{t} & 0 & 0 \\ 0 & 0 & \frac{-2}{f-n} & -\frac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

OpenGL Symmetric Orthographic Projection Matrix

### Keeping a Model in Front of the Camera in World Coordinates (OpenGL)

To keep a model positioned in front of the camera in world coordinates, a series of matrix transformations can control the model's position and orientation. The general approach:

#### 1. Use the view matrix to position the model

The view matrix defines the camera's position and orientation. By inverting it, a fixed offset in front of the camera (expressed in camera space) can be converted into a world-space position.

```cpp
glm::mat4 viewMatrix = glm::lookAt(
    cameraPosition,  // camera position
    targetPosition,  // look-at target
    upVector         // up direction
);

// Offset of the model relative to the camera (camera space),
// transformed back to world space with the inverse view matrix.
glm::vec3 modelPositionInCameraSpace = glm::vec3(0.0f, 0.0f, -distanceFromCamera);
glm::vec4 worldPos = glm::inverse(viewMatrix) * glm::vec4(modelPositionInCameraSpace, 1.0f);
```

This snippet derives the model's world position by inverting the view matrix.

#### 2. Use the model matrix to place the model

The model matrix describes an object's position, rotation and scale in world space. To keep the model in front of the camera, update its position each frame along the camera's view direction.

```cpp
float distanceFromCamera = 5.0f; // distance between model and camera
glm::vec3 directionToModel = glm::normalize(cameraTarget - cameraPosition); // view direction
glm::vec3 modelWorldPosition = cameraPosition + directionToModel * distanceFromCamera;

glm::mat4 modelMatrix = glm::translate(glm::mat4(1.0f), modelWorldPosition);
modelMatrix = glm::rotate(modelMatrix, rotationAngle, rotationAxis); // optional rotation
```

This moves the model a fixed distance along the camera's line of sight.

#### 3. Combine everything into an MVP matrix for the shader

Finally, combine the model matrix with the view and projection matrices into a full MVP (Model-View-Projection) matrix and pass it to the vertex shader.

```cpp
glm::mat4 projectionMatrix = glm::perspective(fieldOfView, aspectRatio, nearPlane, farPlane);
glm::mat4 mvpMatrix = projectionMatrix * viewMatrix * modelMatrix;
shaderProgram.setUniform("mvp", mvpMatrix);
```

#### 4. Per-frame updates

If the model should stay visually "straight ahead" even when the camera moves or rotates, recompute `modelWorldPosition` and the corresponding `modelMatrix` every frame, typically in response to input events or animation state changes.

### Summary

By combining the view matrix and the model matrix appropriately, a 3D object can be kept within the viewer's field of view, as long as the relevant parameters are refreshed whenever the scene changes.