OGL Tutorial 49: Cascaded Shadow Mapping

Website: http://ogldev.atspace.co.uk/www/tutorial49/tutorial49.html

Background

Let's take a closer look at the shadow from tutorial 47:

(image: close-up of the blocky shadow from tutorial 47)

As you can see, the quality of the shadow is not high: it is too blocky. We touched on the reason for that blockiness at the end of tutorial 47 and referred to it as perspective aliasing, which means a large number of pixels in view space being mapped to the same pixel in the shadow map. All of these pixels will either be in shadow or in light together, which contributes to the sense of blockiness.

In other words, since the resolution of the shadow map is not high, it cannot cover the view space adequately.

One obvious way to deal with this is to increase the resolution of the shadow map, but that would increase the memory footprint of our app, so it may not be the best course of action.

Another way to deal with this problem is to notice that shadows closer to the camera are far more important in terms of quality than shadows of objects that are far away.

Distant objects are smaller anyway, and the eye usually focuses on what happens close by, leaving the rest as "background". If we can find a way to use a dedicated shadow map for closer objects and a different shadow map for distant objects, then the first shadow map only needs to cover a smaller region, thus decreasing the pixel ratio discussed above. This, in a nutshell, is what cascaded shadow mapping (CSM) is all about.

At the time of writing this tutorial, CSM is considered one of the best ways to deal with perspective aliasing. Let's see how we can implement it.

From a high-level view, we are going to split the view frustum into several cascades. For the purposes of this tutorial we will use three cascades: near, middle and far. The algorithm itself is pretty generic, so you can use more cascades if you feel like it.

Every cascade will be rendered into its own private shadow map. The shadow algorithm itself remains the same, but when sampling the depth from the shadow map we need to select the appropriate map based on the distance from the viewer.

Let's take a look at a generic view frustum.

(image: a generic view frustum)

As usual, we have a small near plane and a larger far plane. Now let's take a look at the same frustum from above:

(image: the view frustum seen from above)

The next step is to split the range from the near plane to the far plane into three parts. We will call these near, middle and far.

In addition, let's add the light direction (the arrow on the right hand side):

(image: the frustum split into three cascades, with the light direction arrow on the right)

So how are we going to render each cascade into its own private shadow map?

Let's think about the shadow phase in the shadow mapping algorithm: we set things up to render the scene from the light's point of view.

This means creating a WVP matrix from the world transform of the object, a view transform based on the light, and a projection matrix. Since this tutorial is based on tutorial 47, which dealt with shadows of directional lights, the projection matrix will be orthographic.

In general, CSM makes more sense in outdoor scenes, where the main light source is usually the sun, so using a directional light here is natural. If you look at the WVP matrix above, you will notice that the first two parts (world and view) are the same for all cascades.

After all, the position of the object in the world and the orientation of the camera based on the light source are not related to the splitting of the frustum into cascades. What matters here is only the projection matrix, because it defines the extent of the region that will eventually be rendered.

And since orthographic projections are defined using a box, we need to define three different boxes, which will be translated into three different orthographic projection matrices. These projection matrices will be used to create the three WVP matrices that render each cascade into its own shadow map.
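Each such box is just six planar bounds. As a minimal sketch (the field names match the OrthoProjInfo values the code later in this post reads and writes), the per-cascade data could look like this:

struct OrthoProjInfo
{
    float r;  // right  (max X)
    float l;  // left   (min X)
    float b;  // bottom (min Y)
    float t;  // top    (max Y)
    float n;  // near   (min Z)
    float f;  // far    (max Z)
};

OrthoProjInfo m_shadowOrthoProjInfo[NUM_CASCADES]; // filled by CalcOrthoProjs()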

The most logical thing to do is to make these boxes as small as possible in order to keep the ratio of view pixels to shadow map pixels as low as possible. This means creating a bounding box for each cascade, oriented along the light direction vector. (This can be a bit tricky to picture: we want the smallest box that encloses each cascade, and every box must be aligned with the light direction.) Let's create such a bounding box for the first cascade:

(image: bounding box around the first cascade, oriented along the light)

Now let's create a bounding box for the second cascade:

(image: bounding box around the second cascade)

And finally a bounding box for the last cascade:

(image: bounding box around the third cascade)

As you can see, there is some overlap of the bounding boxes due to the orientation of the light, which means some pixels will be rendered into more than one shadow map.

There is no problem with that, as long as all the pixels of a single cascade are entirely inside a single shadow map. The selection of the shadow map to use in the shader for shadow calculations is based on the distance of the pixel from the actual viewer.

Calculating the bounding boxes that serve as the basis for the orthographic projections in the shadow phase is the most complicated part of the algorithm. These boxes must be described in light space, because the projection comes after the world and view transforms, at which point the light "originates" from the origin and points along the positive Z axis.

Since the boxes are calculated as min/max values on all three axes, they will be aligned with the light direction, which is what we need for the projection.

To calculate the bounding boxes we need to know what each cascade looks like in light space. To do that we follow these steps:

  1. Calculate the eight corners of each cascade in view space. This is easy and requires only simple trigonometry:

(image: an arbitrary cascade seen from above, looking down at the XZ plane)

The above image represents an arbitrary cascade; since each cascade on its own is basically a frustum, it shares the same field-of-view angle with the other cascades. Note that we are looking from the top down onto the XZ plane. We need to calculate X1 and X2.

With the horizontal field-of-view angle HFOV and the cascade's near and far distances Zn and Zf:

tan(HFOV/2) = X1 / Zn  =>  X1 = Zn * tan(HFOV/2)
tan(HFOV/2) = X2 / Zf  =>  X2 = Zf * tan(HFOV/2)

This gives us the X and Z components of the eight coordinates of the cascade in view space. Using similar math with the vertical field-of-view angle we can get the Y components and finalize the coordinates.

  2. Now we need to transform the cascade coordinates from view space back to world space. Let's say the viewer is oriented such that in world space the frustum looks like this (the red arrow is the light direction, but ignore it for now):

(image: the frustum in world space, with a red arrow for the light direction)

In order to transform from world space to view space, we multiply the world position vector by the view matrix (which is based on the camera location and rotation). This means that if we already have the coordinates of the cascade in view space, we must multiply them by the inverse of the view matrix in order to transform them to world space:

ViewPos = View * WorldPos  =>  WorldPos = View⁻¹ * ViewPos

  3. With the frustum coordinates in world space, we can now transform them to light space like any other object. Remember that light space is exactly like view space, but instead of the camera we use the light source. Since we are dealing with a directional light that has no origin, we just need to rotate the world so that the light direction becomes aligned with the positive Z axis. The origin of the light can simply be the origin of the light space coordinate system (which means we don't need any translation). If we do that using the previous diagram (with the red arrow being the light direction), the cascade frustum in light space should look like this:

(image: the cascade frustum rotated into light space)

  4. With the cascade coordinates finally in light space, we just need to generate a bounding box by taking the min/max values of the X/Y/Z components of the eight coordinates. This bounding box provides the values for the orthographic projection used to render the cascade into its shadow map. By generating an orthographic projection for each cascade separately, we can render each cascade into a different shadow map. During the lighting phase we will calculate the shadow factor by selecting a shadow map based on the distance from the viewer.

The CascadedShadowMapFBO class is a modification of the ShadowMapFBO class that we previously used for shadow mapping. The main change is that the m_shadowMap array has space for three shadow map objects, which is the number of cascades we are going to use for this example. The class has three main functions: one to initialize it, one to bind it for writing in the shadow map phase, and one to bind it for reading in the lighting phase.
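Since the class itself is not reproduced here, this is a minimal sketch of what it can look like, assuming the usual ogldev FBO pattern; the exact texture formats, texture units and error handling in the real source may differ:

class CascadedShadowMapFBO
{
public:
    bool Init(unsigned int Width, unsigned int Height);
    void BindForWriting(unsigned int CascadeIndex); // shadow map phase
    void BindForReading();                          // lighting phase
private:
    GLuint m_fbo = 0;
    GLuint m_shadowMap[NUM_CASCADES] = { 0 };       // one depth texture per cascade
};

bool CascadedShadowMapFBO::Init(unsigned int Width, unsigned int Height)
{
    glGenFramebuffers(1, &m_fbo);

    glGenTextures(NUM_CASCADES, m_shadowMap);
    for (unsigned int i = 0; i < NUM_CASCADES; i++) {
        glBindTexture(GL_TEXTURE_2D, m_shadowMap[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, Width, Height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }

    // We only render depth, so no color buffer is attached.
    glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return true;
}

void CascadedShadowMapFBO::BindForWriting(unsigned int CascadeIndex)
{
    // Attach cascade i's depth texture as the depth target.
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, m_shadowMap[CascadeIndex], 0);
}

void CascadedShadowMapFBO::BindForReading()
{
    // Bind the three maps to consecutive texture units for the lighting shader.
    for (unsigned int i = 0; i < NUM_CASCADES; i++) {
        glActiveTexture(GL_TEXTURE1 + i); // the unit assignment here is an assumption
        glBindTexture(GL_TEXTURE_2D, m_shadowMap[i]);
    }
}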

The main render function in the CSM algorithm is the same as in the standard shadow mapping algorithm: first render into the shadow maps, then use them for the actual lighting.

There are a few changes in the shadow mapping phase worth noting. The first is the call to CalcOrthoProjs() at the start of the phase. This function is responsible for calculating the bounding boxes used for the orthographic projections. The next change is the loop over the cascades. Each cascade must be bound for writing, cleared and rendered to separately. Each cascade has its own projection set up in the m_shadowOrthoProjInfo array (done by CalcOrthoProjs). Since we do not know which mesh goes into which cascade (and it can be more than one), we have to render the entire scene into every cascade.

void ShadowMapPass()
{
    CalcOrthoProjs();

    m_ShadowMapEffect.Enable();

    Pipeline p;

    // The camera is set as the light source - doesn't change in this phase
    p.SetCamera(Vector3f(0.0f, 0.0f, 0.0f), m_dirLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));

    for (uint i = 0 ; i < NUM_CASCADES ; i++) {
        // Bind and clear the current cascade
        m_csmFBO.BindForWriting(i);
        glClear(GL_DEPTH_BUFFER_BIT);

        p.SetOrthographicProj(m_shadowOrthoProjInfo[i]);

        for (int j = 0 ; j < NUM_MESHES ; j++) { // draw every mesh instance; j avoids shadowing the cascade index i
            p.Orient(m_meshOrientation[j]);
            m_ShadowMapEffect.SetWVP(p.GetWVOrthoPTrans());
            m_mesh.Render();
        }
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

The only change in the lighting phase is that instead of a single light WVP matrix we have three. They are identical except for the projection part. We set them up accordingly in the loop in the middle of the phase.

for (uint i = 0 ; i < NUM_CASCADES ; i++) {
    p.SetOrthographicProj(m_shadowOrthoProjInfo[i]);
    m_LightingTech.SetLightWVP(i, p.GetWVOrthoPTrans());
}

Before we study how to calculate the orthographic projections, we need to take a look at the m_cascadeEnd array (which is set up as part of the constructor). This array defines the cascades by placing the near Z and far Z in the first and last slots, respectively, and the ends of the cascades in between: the first cascade ends at the value in slot one, the second at slot two, and the last cascade ends with the far Z in the last slot. We need the near Z in the first slot to simplify the calculations later.
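As a minimal sketch (the two middle split distances below are illustrative tuning values, not dictated by the algorithm), the array can be set up like this in the constructor:

// NUM_CASCADES + 1 entries: near Z, the split distances, far Z
m_cascadeEnd[0] = m_persProjInfo.zNear; // 1.0f
m_cascadeEnd[1] = 25.0f;                // end of the near cascade (illustrative)
m_cascadeEnd[2] = 90.0f;                // end of the middle cascade (illustrative)
m_cascadeEnd[3] = m_persProjInfo.zFar;  // 200.0f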

The code below matches step #1 of the description in the background section on how to calculate the orthographic projections for the cascades. The frustumCorners array is populated with the eight corners of each cascade in view space. Note that since the field of view is provided only for the horizontal axis, we have to extrapolate it for the vertical axis (e.g., if the horizontal field of view is 90 degrees and the window has a width of 1000 and a height of 500, the vertical field of view will be only 45 degrees).

void CalcOrthoProjs()
{
    Pipeline p;

    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());

The pipeline object p calls SetCamera:

void SetCamera(const Vector3f& Pos, const Vector3f& Target, const Vector3f& Up)
{
    m_camera.Pos = Pos;       // camera position
    m_camera.Target = Target; // the look-at target (used as the view direction)
    m_camera.Up = Up;         // the camera's up vector
}
Back in CalcOrthoProjs(), the view matrix and its inverse are computed:

Matrix4f Cam = p.GetViewTrans();
Matrix4f CamInv = Cam.Inverse();

GetViewTrans() is implemented as follows:
const Matrix4f& Pipeline::GetViewTrans()
{
    Matrix4f CameraTranslationTrans, CameraRotateTrans; // translation and rotation matrices

    CameraTranslationTrans.InitTranslationTransform(-m_camera.Pos.x, -m_camera.Pos.y, -m_camera.Pos.z); // translation matrix
    CameraRotateTrans.InitCameraTransform(m_camera.Target, m_camera.Up); // rotation matrix

    m_Vtransformation = CameraRotateTrans * CameraTranslationTrans; // rotation on the left, translation on the right; the composite is the view matrix

    return m_Vtransformation;
}

Why does InitTranslationTransform translate by (-m_camera.Pos.x, -m_camera.Pos.y, -m_camera.Pos.z)? Because the view transform does not move the camera; it moves the world in the opposite direction, so that the camera ends up at the origin.

void Matrix4f::InitTranslationTransform(float x, float y, float z)
{
    m[0][0] = 1.0f; m[0][1] = 0.0f; m[0][2] = 0.0f; m[0][3] = x;
    m[1][0] = 0.0f; m[1][1] = 1.0f; m[1][2] = 0.0f; m[1][3] = y;
    m[2][0] = 0.0f; m[2][1] = 0.0f; m[2][2] = 1.0f; m[2][3] = z;
    m[3][0] = 0.0f; m[3][1] = 0.0f; m[3][2] = 0.0f; m[3][3] = 1.0f;
}

CameraRotateTrans.InitCameraTransform(m_camera.Target, m_camera.Up) builds the rotation part of the camera's view matrix:

void Matrix4f::InitCameraTransform(const Vector3f& Target, const Vector3f& Up)
{
    Vector3f N = Target;      // N: the forward (look) axis
    N.Normalize();
    Vector3f U = Up;
    U = U.Cross(N);           // U: the right axis
    U.Normalize();
    Vector3f V = N.Cross(U);  // V: the corrected up axis

    // The rows are the camera's UVN basis: multiplying by this matrix
    // expresses a vector in the camera's coordinate system.
    m[0][0] = U.x;   m[0][1] = U.y;   m[0][2] = U.z;   m[0][3] = 0.0f;
    m[1][0] = V.x;   m[1][1] = V.y;   m[1][2] = V.z;   m[1][3] = 0.0f;
    m[2][0] = N.x;   m[2][1] = N.y;   m[2][2] = N.z;   m[2][3] = 0.0f;
    m[3][0] = 0.0f;  m[3][1] = 0.0f;  m[3][2] = 0.0f;  m[3][3] = 1.0f;
}

(image: illustration of the camera/UVN transform)

To recap, CalcOrthoProjs() so far:

void CalcOrthoProjs()
{
    Pipeline p;

    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    Matrix4f Cam = p.GetViewTrans();
    Matrix4f CamInv = Cam.Inverse();

This computes the camera's view matrix and its inverse.

p.SetCamera(Vector3f(0.0f, 0.0f, 0.0f), m_dirLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
Matrix4f LightM = p.GetViewTrans();

Next the camera is placed at (0,0,0), its target is set to the directional light's direction, and its up vector to (0,1,0). The view matrix computed from this is stored in LightM. The position is actually irrelevant here: a directional light is rendered with an orthographic projection, so only the direction matters. LightM is the light-space view matrix.

float ar = m_persProjInfo.Height / m_persProjInfo.Width;
float tanHalfHFOV = tanf(ToRadian(m_persProjInfo.FOV / 2.0f));
float tanHalfVFOV = tanf(ToRadian((m_persProjInfo.FOV * ar) / 2.0f));
// arguably this should be: float tanHalfVFOV = ar * tanf(ToRadian(m_persProjInfo.FOV / 2.0f));

m_persProjInfo.FOV    = 90.0f;
m_persProjInfo.Height = WINDOW_HEIGHT;
m_persProjInfo.Width  = WINDOW_WIDTH;
m_persProjInfo.zNear  = 1.0f;
m_persProjInfo.zFar   = 200.0f;

Both window macros are 1024 here, so ar ends up being 1.


The corner extents are then computed:

for (uint i = 0 ; i < NUM_CASCADES ; i++)
{
    float xn = m_cascadeEnd[i]     * tanHalfHFOV;
    float xf = m_cascadeEnd[i + 1] * tanHalfHFOV;
    float yn = m_cascadeEnd[i]     * tanHalfVFOV;
    float yf = m_cascadeEnd[i + 1] * tanHalfVFOV;


The eight corners:

(image: the code that fills the frustumCorners array with the eight cascade corners)
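A minimal sketch of that corner construction, using the xn/xf/yn/yf values and m_cascadeEnd indices from above (the ordering of the corners does not matter for the min/max pass that follows):

Vector4f frustumCorners[NUM_FRUSTUM_CORNERS] = {
    // near face of the cascade
    Vector4f( xn,  yn, m_cascadeEnd[i],     1.0f),
    Vector4f(-xn,  yn, m_cascadeEnd[i],     1.0f),
    Vector4f( xn, -yn, m_cascadeEnd[i],     1.0f),
    Vector4f(-xn, -yn, m_cascadeEnd[i],     1.0f),
    // far face of the cascade
    Vector4f( xf,  yf, m_cascadeEnd[i + 1], 1.0f),
    Vector4f(-xf,  yf, m_cascadeEnd[i + 1], 1.0f),
    Vector4f( xf, -yf, m_cascadeEnd[i + 1], 1.0f),
    Vector4f(-xf, -yf, m_cascadeEnd[i + 1], 1.0f)
};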

The corners are then brought into light space:

for (uint j = 0 ; j < NUM_FRUSTUM_CORNERS ; j++) {
    printf("Frustum: ");
    Vector4f vW = CamInv * frustumCorners[j]; // multiply each corner by the inverse view matrix to get its world-space coordinates
    vW.Print();
    printf("Light space: ");
    frustumCornersL[j] = LightM * vW; // transform the world-space point into light space with the light's view matrix
    frustumCornersL[j].Print();
    printf("\n");

    minX = min(minX, frustumCornersL[j].x);
    maxX = max(maxX, frustumCornersL[j].x);
    minY = min(minY, frustumCornersL[j].y);
    maxY = max(maxY, frustumCornersL[j].y);
    minZ = min(minZ, frustumCornersL[j].z);
    maxZ = max(maxZ, frustumCornersL[j].z);
}

What is this doing? It records the minimum and maximum X across the corners, and likewise for Y and Z.

m_shadowOrthoProjInfo[i].r = maxX;
m_shadowOrthoProjInfo[i].l = minX;
m_shadowOrthoProjInfo[i].b = minY;
m_shadowOrthoProjInfo[i].t = maxY;
m_shadowOrthoProjInfo[i].f = maxZ;
m_shadowOrthoProjInfo[i].n = minZ;

This stores each cascade's min/max X/Y/Z as its orthographic projection bounds.

The above code covers steps #2 through #4. Each frustum corner is multiplied by the inverse view transform in order to bring it into world space, and then by the light transform in order to move it into light space. A series of min/max operations then finds the bounding box of the cascade in light space.

The current entry in the m_shadowOrthoProjInfo array is populated using the values of the bounding box.
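Putting the fragments together, the whole function looks roughly like this; it is a sketch assembled from the pieces shown above, with the debug printfs omitted, so minor details may differ from the actual source:

void CalcOrthoProjs()
{
    Pipeline p;

    // View matrix of the actual viewer, and its inverse (view space -> world space)
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    Matrix4f Cam = p.GetViewTrans();
    Matrix4f CamInv = Cam.Inverse();

    // Light-space view matrix (world space -> light space)
    p.SetCamera(Vector3f(0.0f, 0.0f, 0.0f), m_dirLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    Matrix4f LightM = p.GetViewTrans();

    float ar = m_persProjInfo.Height / m_persProjInfo.Width;
    float tanHalfHFOV = tanf(ToRadian(m_persProjInfo.FOV / 2.0f));
    float tanHalfVFOV = tanf(ToRadian((m_persProjInfo.FOV * ar) / 2.0f));

    for (uint i = 0 ; i < NUM_CASCADES ; i++) {
        // Step #1: the eight corners of cascade i in view space
        float xn = m_cascadeEnd[i]     * tanHalfHFOV;
        float xf = m_cascadeEnd[i + 1] * tanHalfHFOV;
        float yn = m_cascadeEnd[i]     * tanHalfVFOV;
        float yf = m_cascadeEnd[i + 1] * tanHalfVFOV;

        Vector4f frustumCorners[NUM_FRUSTUM_CORNERS] = {
            Vector4f( xn,  yn, m_cascadeEnd[i],     1.0f),
            Vector4f(-xn,  yn, m_cascadeEnd[i],     1.0f),
            Vector4f( xn, -yn, m_cascadeEnd[i],     1.0f),
            Vector4f(-xn, -yn, m_cascadeEnd[i],     1.0f),
            Vector4f( xf,  yf, m_cascadeEnd[i + 1], 1.0f),
            Vector4f(-xf,  yf, m_cascadeEnd[i + 1], 1.0f),
            Vector4f( xf, -yf, m_cascadeEnd[i + 1], 1.0f),
            Vector4f(-xf, -yf, m_cascadeEnd[i + 1], 1.0f)
        };

        float minX = FLT_MAX, maxX = -FLT_MAX; // FLT_MAX from <cfloat>
        float minY = FLT_MAX, maxY = -FLT_MAX;
        float minZ = FLT_MAX, maxZ = -FLT_MAX;

        Vector4f frustumCornersL[NUM_FRUSTUM_CORNERS];

        for (uint j = 0 ; j < NUM_FRUSTUM_CORNERS ; j++) {
            // Steps #2 and #3: view space -> world space -> light space
            Vector4f vW = CamInv * frustumCorners[j];
            frustumCornersL[j] = LightM * vW;

            // Step #4: grow the light-space bounding box
            minX = min(minX, frustumCornersL[j].x);
            maxX = max(maxX, frustumCornersL[j].x);
            minY = min(minY, frustumCornersL[j].y);
            maxY = max(maxY, frustumCornersL[j].y);
            minZ = min(minZ, frustumCornersL[j].z);
            maxZ = max(maxZ, frustumCornersL[j].z);
        }

        m_shadowOrthoProjInfo[i].r = maxX;
        m_shadowOrthoProjInfo[i].l = minX;
        m_shadowOrthoProjInfo[i].b = minY;
        m_shadowOrthoProjInfo[i].t = maxY;
        m_shadowOrthoProjInfo[i].f = maxZ;
        m_shadowOrthoProjInfo[i].n = minZ;
    }
}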

The CSM shadow-phase shaders:

VS:

#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

uniform mat4 gWVP;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
}

FS:

#version 330

void main()
{
}

There is nothing new in the vertex and fragment shaders of the shadow map phase; we just need to render the depth.

How is the gWVP matrix passed to the CSM vertex shader?

void ShadowMapPass()
{
    CalcOrthoProjs(); // compute the per-cascade orthographic projections
    m_ShadowMapEffect.Enable(); // enable the shadow-map shader

    Pipeline p;
    // Place the camera at the light: the position is irrelevant for a directional light, only the direction matters.
    p.SetCamera(Vector3f(0.0f, 0.0f, 0.0f), m_dirLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));

    for (uint i = 0 ; i < NUM_CASCADES ; i++)
    {
        m_csmFBO.BindForWriting(i); // bind cascade i's shadow map as the render target
        glClear(GL_DEPTH_BUFFER_BIT); // clear the depth buffer; we only render depth

        p.SetOrthographicProj(m_shadowOrthoProjInfo[i]); // this cascade's bounding box

        for (int j = 0; j < NUM_MESHES ; j++) {
            p.Orient(m_meshOrientation[j]); // each model's scale, rotation and translation
            m_ShadowMapEffect.SetWVP(p.GetWVOrthoPTrans());
            m_mesh.Render();
        }
    }

The model scale and translation set at initialization time:

for (int i = 0; i < NUM_MESHES ; i++)
{
    m_meshOrientation[i].m_scale = Vector3f(1.0f, 1.0f, 1.0f);
    m_meshOrientation[i].m_pos   = Vector3f(0.0f, 0.0f, 3.0f + i * 30.0f);
}

The key call here is m_ShadowMapEffect.SetWVP(p.GetWVOrthoPTrans()):

const Matrix4f& Pipeline::GetWVOrthoPTrans()
{
    GetWorldTrans(); // the model's world matrix (the M of MVP; called W here, for "world")
    GetViewTrans();  // the camera's view matrix (the V of MVP)

    Matrix4f P;
    P.InitOrthoProjTransform(m_orthoProjInfo); // build the orthographic projection from this cascade's bounding box

    m_WVPtransformation = P * m_Vtransformation * m_Wtransformation;
    return m_WVPtransformation;
}

void Matrix4f::InitOrthoProjTransform(const OrthoProjInfo& p)
{
    float l = p.l;
    float r = p.r;
    float b = p.b;
    float t = p.t;
    float n = p.n;
    float f = p.f;
    
    m[0][0] = 2.0f/(r - l); m[0][1] = 0.0f;         m[0][2] = 0.0f;         m[0][3] = -(r + l)/(r - l);
    m[1][0] = 0.0f;         m[1][1] = 2.0f/(t - b); m[1][2] = 0.0f;         m[1][3] = -(t + b)/(t - b);
    m[2][0] = 0.0f;         m[2][1] = 0.0f;         m[2][2] = -2.0f/(f - n); m[2][3] = -(f + n)/(f - n);
    m[3][0] = 0.0f;         m[3][1] = 0.0f;         m[3][2] = 0.0f;         m[3][3] = 1.0;        
}

This is the orthographic projection matrix for a right-handed coordinate system:

| 2/(r-l)    0          0           -(r+l)/(r-l) |
| 0          2/(t-b)    0           -(t+b)/(t-b) |
| 0          0          -2/(f-n)    -(f+n)/(f-n) |
| 0          0          0           1            |

which matches the code above.

Finally, m_WVPtransformation = P * m_Vtransformation * m_Wtransformation gives us the complete WVP matrix.

for (int i = 0; i < NUM_MESHES ; i++) {
    p.Orient(m_meshOrientation[i]); // each model's scale, translation and rotation
    m_ShadowMapEffect.SetWVP(p.GetWVOrthoPTrans());
    m_mesh.Render();
}

This draws the NUM_MESHES models.

glBindFramebuffer(GL_FRAMEBUFFER, 0);

Binding framebuffer 0 restores the default framebuffer, so subsequent rendering goes to the screen.

Next comes the lighting phase:

void RenderPass()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the color and depth buffers
    m_LightingTech.Enable(); // enable the lighting shader
    m_LightingTech.SetEyeWorldPos(m_pGameCamera->GetPos()); // the camera's world position
    m_csmFBO.BindForReading(); // bind the three shadow maps just rendered, for reading

    Pipeline p;
    p.Orient(m_quad.GetOrientation()); // the shadow-receiving ground quad
    p.SetCamera(Vector3f(0.0f, 0.0f, 0.0f), m_dirLight.Direction, Vector3f(0.0f, 1.0f, 0.0f)); // camera at the light's position/direction

    for (uint i = 0 ; i < NUM_CASCADES ; i++) { // one light WVP per cascade
        p.SetOrthographicProj(m_shadowOrthoProjInfo[i]); // each cascade's orthographic projection
        m_LightingTech.SetLightWVP(i, p.GetWVOrthoPTrans()); // upload the light WVP matrix for cascade i
    }

Note that at this point the pipeline camera is still at the light source.

p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
p.SetPerspectiveProj(m_persProjInfo);                
m_LightingTech.SetWVP(p.GetWVPTrans());
m_LightingTech.SetWorldMatrix(p.GetWorldTrans());    

The camera is then put back at its original position, the perspective projection is set, and the WVP and world matrices for this camera are uploaded.

m_pGroundTex->Bind(COLOR_TEXTURE_UNIT);
m_quad.Render();

Bind the ground texture and render the ground quad.

for (int i = 0; i < NUM_MESHES ; i++) {
    p.Orient(m_meshOrientation[i]);
    m_LightingTech.SetWVP(p.GetWVPTrans());
    m_LightingTech.SetWorldMatrix(p.GetWorldTrans());
    m_mesh.Render();
}

Then the NUM_MESHES models are drawn: for each one, set its world transform, upload the WVP and world matrices (with the camera at its original position), and render the mesh.

The lighting-phase shaders:

VS:

void main()
{
    vec4 Pos = vec4(Position, 1.0); // Position is in local (model) space
    gl_Position = gWVP * Pos; // clip-space position from the viewer's camera

    for (int i = 0 ; i < NUM_CASCADES ; i++) {
        // Multiplying by each cascade's light WVP gives the position in that cascade's light clip space.
        LightSpacePos[i] = gLightWVP[i] * Pos;
    }

    ClipSpacePosZ = gl_Position.z; // clip-space Z depth from the viewer's camera
    TexCoord0     = TexCoord;      // texture coordinates
    Normal0       = (gWorld * vec4(Normal, 0.0)).xyz;   // world-space normal
    WorldPos0     = (gWorld * vec4(Position, 1.0)).xyz; // world-space position
}

FS:

void main()
{
    vec3 Normal = normalize(Normal0); // world-space normal
    float ShadowFactor = 0.0;
    vec4 CascadeIndicator = vec4(0.0, 0.0, 0.0, 0.0);

    for (int i = 0 ; i < NUM_CASCADES ; i++) { // find the cascade this fragment falls into
        // If the fragment's clip-space Z is within cascade i's end, use cascade i's shadow map.
        if (ClipSpacePosZ <= gCascadeEndClipSpace[i]) {
            ShadowFactor = CalcShadowFactor(i, LightSpacePos[i]);

            if (i == 0)
                CascadeIndicator = vec4(0.1, 0.0, 0.0, 0.0); // tint cascade 0 slightly red
            else if (i == 1)
                CascadeIndicator = vec4(0.0, 0.1, 0.0, 0.0); // tint cascade 1 slightly green
            else if (i == 2)
                CascadeIndicator = vec4(0.0, 0.0, 0.1, 0.0); // tint cascade 2 slightly blue
            break;
        }
    }

    vec4 TotalLight = CalcDirectionalLight(Normal, ShadowFactor);

    vec4 SampledColor = texture2D(gSampler, TexCoord0.xy);
    FragColor = SampledColor * TotalLight + CascadeIndicator;
}

gCascadeEndClipSpace is set up as follows:

for (uint i = 0 ; i < NUM_CASCADES ; i++)
{
    Matrix4f Proj;
    Proj.InitPersProjTransform(m_persProjInfo); // the viewer's projection matrix
    Vector4f vView(0.0f, 0.0f, m_cascadeEnd[i + 1], 1.0f); // a point on the view-space Z axis at the cascade's end distance
    Vector4f vClip = Proj * vView; // transform it into clip space
    vClip.Print();
    m_LightingTech.SetCascadeEndClipSpace(i, vClip.z); // upload the clip-space Z
}

Note that both ClipSpacePosZ and gCascadeEndClipSpace hold clip-space Z values before the perspective divide, so the comparison in the fragment shader is consistent. Now let's look at CalcShadowFactor:

// CascadeIndex selects which shadow map to sample;
// LightSpacePos is the fragment's position in the light's clip space.
float CalcShadowFactor(int CascadeIndex, vec4 LightSpacePos)
{
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w; // perspective divide to NDC coordinates
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5; // NDC is in [-1,1], UVs are in [0,1]: halve and offset by 0.5
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;
    float z = 0.5 * ProjCoords.z + 0.5; // map Z into the [0,1] depth range as well
    float Depth = texture(gShadowMap[CascadeIndex], UVCoords).x; // the stored depth at that location
    if (Depth < z + 0.00001) // if the stored depth is smaller (0.00001 is a small bias), the point is in shadow
        return 0.5;
    else
        return 1.0;
}

Next, the directional light is computed; CalcDirectionalLight ends up calling CalcLightInternal:

vec4 TotalLight = CalcDirectionalLight(Normal, ShadowFactor);

vec4 CalcLightInternal(BaseLight Light, vec3 LightDirection, vec3 Normal,            
                       float ShadowFactor)                                                  
{                                                                                           
    vec4 AmbientColor = vec4(Light.Color * Light.AmbientIntensity, 1.0f);
    float DiffuseFactor = dot(Normal, -LightDirection);                                     
                                                                                            
    vec4 DiffuseColor  = vec4(0, 0, 0, 0);                                                  
    vec4 SpecularColor = vec4(0, 0, 0, 0);                                                  
                                                                                            
    if (DiffuseFactor > 0) {                                                                
        DiffuseColor = vec4(Light.Color * Light.DiffuseIntensity * DiffuseFactor, 1.0f);    
                                                                                            
        vec3 VertexToEye = normalize(gEyeWorldPos - WorldPos0);                             
        vec3 LightReflect = normalize(reflect(LightDirection, Normal));                     
        float SpecularFactor = dot(VertexToEye, LightReflect);                                      
        if (SpecularFactor > 0) {                                                           
            SpecularFactor = pow(SpecularFactor, gSpecularPower);                               
            SpecularColor = vec4(Light.Color, 1.0f) * gMatSpecularIntensity * SpecularFactor;                         
        }                                                                                   
    }                                                                                       
                                                                                            
    return (AmbientColor + ShadowFactor * (DiffuseColor + SpecularColor));                  
}       

Finally, the diffuse and specular colors are scaled by the shadow factor and the ambient color is added on top: TotalLight = Ambient + ShadowFactor * (Diffuse + Specular).

Summary:

1. Place the camera at the light source (set its position and direction), compute the per-cascade WVP matrices, pass them to the CSM shader, and render the depth information into the three shadow maps.
2. Place the camera at the eye position, compute the WVP matrix, pass it to the lighting shader, and determine which cascade each point belongs to. Then sample the matching shadow map: transform the point p into light space, map its NDC x/y through the *0.5 + 0.5 step to get UVs, and sample a depth z there; this z is the closest distance visible from the light. Let z' be p's own NDC z after the same *0.5 + 0.5 mapping, and compare the two: if z' is greater than z, then p is in shadow; otherwise it is lit.
3. The three shadow maps rendered with the camera at the light use orthographic projection matrices, because the light is directional.
4. The camera at the eye uses a perspective projection matrix.

(image: the split view frustum with the light direction arrow)

The arrow is the light direction: the camera is placed facing along it, and the three depth maps are rendered with orthographic projection matrices.
