OGL (Tutorial 42): Percentage Closer Filtering

http://ogldev.atspace.co.uk/www/tutorial42/tutorial42.html

Background

In Tutorial 24 we saw how to implement shadows using a technique called shadow mapping. The shadows that result from shadow mapping are not that great and there is quite a lot of aliasing, as you can see in the following picture:
(screenshot: aliased shadow edges produced by basic shadow mapping)

This tutorial describes a method (one of many) to reduce that problem. It is called Percentage Closer Filtering, or PCF. The idea is to sample the shadow map around the current pixel and compare its depth to all the samples. By averaging out the results we get a smoother transition between light and shadow. For example, take a look at the following shadow map:
(diagram: shadow map depth values around the border between light and shadow)

Each cell contains the depth value of a pixel (as seen from the light source). To keep things simple, let us say that the depth of all the pixels above is 0.5 (when viewed from the camera point of view). According to the method from Tutorial 24, all the pixels whose shadow map value is smaller than 0.5 will be in shadow, while the ones whose shadow map value is greater than or equal to 0.5 will be in light. This creates a hard, aliased line between light and shadow.

Now consider the following: the pixels nearest the border between light and shadow are surrounded both by pixels whose shadow map value is smaller than 0.5 and by pixels whose shadow map value is greater than or equal to 0.5. If we sample these neighboring pixels and average out the results, we get a factor that can help us smooth out the border between light and shadow. Of course, we do not know in advance which pixels are closest to that border, so we simply do this sampling work for every pixel. That is basically the entire technique, as the sketch below illustrates. In this tutorial we will sample 9 pixels in a 3 by 3 kernel around each pixel and average out the result. This becomes our shadow factor, instead of the fixed 0.5 or 1.0 we used as a factor in Tutorial 24.
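
To make the averaging idea concrete, here is a minimal sketch in plain C++ (illustrative only, not the actual shader, which appears below): given a hypothetical 3x3 block of shadow map depths around the pixel and the pixel's own depth, we count how many samples pass the comparison and average the result into a single factor.

// Conceptual sketch of PCF. The real implementation runs in the fragment shader
// and uses hardware depth comparison (see CalcShadowFactor below).
float PcfFactor(const float ShadowMapSamples[3][3], float PixelDepth)
{
    int Lit = 0;

    for (int y = 0; y < 3; y++) {
        for (int x = 0; x < 3; x++) {
            // A sample that is not closer to the light than the pixel means "in light"
            if (ShadowMapSamples[y][x] >= PixelDepth) {
                Lit++;
            }
        }
    }

    return (float)Lit / 9.0f;    // 0.0 = fully in shadow, 1.0 = fully lit
}

For the shadow map above with a pixel depth of 0.5, a pixel deep inside the shadow gets 0.0, a fully lit pixel gets 1.0, and pixels near the border get intermediate values, which is exactly the smoothing we are after.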

Let us now review the source code that implements PCF. We will do this by going over the changes made to the implementation of Tutorial 24. You may want to briefly review that tutorial to make things clearer here.

uniform sampler2DShadow gShadowMap;
uniform vec2 gMapSize;    // shadow map dimensions in texels

#define EPSILON 0.00001

float CalcShadowFactor(vec4 LightSpacePos)
{
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;
    float z = 0.5 * ProjCoords.z + 0.5;
  
    float xOffset = 1.0/gMapSize.x;
    float yOffset = 1.0/gMapSize.y;

    float Factor = 0.0;

    for (int y = -1 ; y <= 1 ; y++) {
        for (int x = -1 ; x <= 1 ; x++) {
            vec2 Offsets = vec2(x * xOffset, y * yOffset);
            vec3 UVC = vec3(UVCoords + Offsets, z + EPSILON);
            Factor += texture(gShadowMap, UVC);
        }
    }
    
    return (0.5 + (Factor / 18.0));
}

This is the updated shadow factor calculation function. It starts out the same: we manually perform a perspective divide on the clip space coordinates from the light source's point of view, followed by a transformation from the (-1,+1) range to (0,1). We now have coordinates that we can use to sample the shadow map and a Z value to compare against the sample result.
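
As a quick numeric check (the numbers are illustrative, not taken from the demo), a clip space position of (2.0, -1.0, 3.0, 4.0) works out as follows:

// LightSpacePos = (2.0, -1.0, 3.0, 4.0)
// ProjCoords    = (2.0/4.0, -1.0/4.0, 3.0/4.0) = (0.5, -0.25, 0.75)
// UVCoords      = (0.5*0.5 + 0.5, 0.5*(-0.25) + 0.5) = (0.75, 0.375)
// z             = 0.5*0.75 + 0.5 = 0.875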

From here on things roll a bit differently. We are going to sample a 3 by 3 kernel, so we need 9 texture coordinates altogether. The coordinates must sample texels that are one texel apart on the X and/or Y axis. Since UV texture coordinates run from 0 to 1 and map into the texel ranges (0,Width-1) and (0,Height-1), respectively, we divide 1 by the width and height of the texture. These values are stored in the gMapSize uniform vector (see the sources for more details). This gives us the offset in texture coordinate space between two neighboring texels.
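
The application has to feed gMapSize with the shadow map dimensions. The tutorial's sources wrap this in their own technique classes; a minimal raw-GL sketch (with an assumed program handle 'ShaderProg' and the same WindowWidth/WindowHeight passed to ShadowMapFBO::Init()) could look like this:

// Assumed names: 'ShaderProg' is the linked lighting program,
// WindowWidth/WindowHeight are the shadow map dimensions.
GLint MapSizeLocation = glGetUniformLocation(ShaderProg, "gMapSize");

glUseProgram(ShaderProg);
glUniform2f(MapSizeLocation, (float)WindowWidth, (float)WindowHeight);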

Next we run a nested loop and calculate the offset vector for each of the 9 texels we are going to sample. The last couple of lines inside the loop may seem a bit odd. We sample the shadow map using a vector with 3 components (UVC) instead of just 2. The last component contains the value that we used in Tutorial 24 to manually compare against the value from the shadow map (the light source Z plus a small epsilon to avoid Z-fighting). The change here is that 'gShadowMap' is now a sampler2DShadow instead of a sampler2D. When sampling from a shadow-typed sampler (sampler1DShadow, sampler2DShadow, etc.) the GPU performs a comparison between the texel value and a value that we supply as the last component of the texture coordinate vector (the second component for 1D, the third component for 2D, etc.). We get a zero result if the comparison fails and one if it succeeds. The type of comparison is configured through the GL API and not through GLSL; we will see this change later on.

For now, just assume that we get zero for shadow and one for light. We accumulate the 9 results and divide them by 18, which gives us a value between 0 and 0.5. We add that to a base of 0.5, and this is our shadow factor.
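
The division by 18 rather than 9 is what biases the factor into the upper half of the range. Working out the extremes (and one border case) makes this clear:

// all 9 samples lit         -> 0.5 + 9/18 = 1.0   (fully lit)
// no samples lit            -> 0.5 + 0/18 = 0.5   (darkest shadow)
// e.g. 5 of 9 samples lit   -> 0.5 + 5/18 ~ 0.78  (border pixel, partial shadow)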

bool ShadowMapFBO::Init(unsigned int WindowWidth, unsigned int WindowHeight)
{
    // Create the FBO
    glGenFramebuffers(1, &m_fbo);

    // Create the depth buffer
    glGenTextures(1, &m_shadowMap);
    glBindTexture(GL_TEXTURE_2D, m_shadowMap);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

    // Disable writes to the color buffer
    glDrawBuffer(GL_NONE);
       
    // Disable reads from the color buffer
    glReadBuffer(GL_NONE);

    GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

    if (Status != GL_FRAMEBUFFER_COMPLETE) {
        printf("FB error, status: 0x%x\n", Status);
        return false;
    }

    return true;
}

This is how we configure the shadow map texture to work with the shadow sampler in the shader instead of the regular sampler. There are two new lines here, repeated below:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

First we set the texture compare mode to 'compare ref to texture'. The only other possible value for the third parameter here is GL_NONE, which is the default and makes the sampler behave in the regular, non-shadow, form.

The second call to glTexParameteri sets the comparison function to 'less than or equal'. This means that the result of the sample operation will be 1.0 if the reference value is less than or equal to the value in the texture, and zero otherwise (in practice, a result of zero means the pixel is in shadow). You can also use GL_GEQUAL, GL_LESS, GL_GREATER, GL_EQUAL and GL_NOTEQUAL for similar types of comparisons. There are also GL_ALWAYS, which always returns 1.0, and GL_NEVER, which always returns 0.0.
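
Note that GL_TEXTURE_COMPARE_MODE and GL_TEXTURE_COMPARE_FUNC are texture object state, so once they are set in Init() they apply whenever the shadow map is bound for sampling. Binding the map for the render pass is the usual texture unit setup; the tutorial's ShadowMapFBO wraps this in a small helper, but a raw-GL sketch (texture unit 1 and 'ShaderProg' are assumptions) looks like this:

// Assumed: 'ShaderProg' is currently bound with glUseProgram, and the
// lighting shader expects the shadow map on texture unit 1.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);   // compare mode/func set in Init() still apply

glUniform1i(glGetUniformLocation(ShaderProg, "gShadowMap"), 1);   // unit index, not the enum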

void ShadowMapPass()
{
    glCullFace(GL_FRONT);
    ...
}

void RenderPass()
{
    glCullFace(GL_BACK);
    ...
}

The last point that I want to discuss is a minor change intended to avoid self shadowing. Self shadowing is a big problem with almost any shadowing technique, and the reason is that the precision of the depth buffer is quite limited (even at 32 bits). The problem is specific to polygons that are facing the light and are not in shadow.

In the shadow map pass we render their depth into the shadow map, and in the render pass we compare their depth against the value stored in the shadow map.

Due to the limited depth precision we often get Z-fighting, which leads to some pixels being in shadow while others are in light. To reduce this problem we reverse the culling, so that we cull front facing polygons in the shadow map pass (and render only the back facing polygons into the shadow map). In the render pass we go back to the usual culling.

Since real world occluders are generally closed volumes, it is fine to use the back facing polygons for the depth comparison instead of the front facing ones. You should try disabling the code above and see the result for yourself.
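
Putting the two passes together, a frame looks roughly like this (the method names are simplified from the tutorial's sources and are assumptions; the important part is where the culling state flips):

virtual void RenderSceneCB()
{
    // Pass 1: render depth from the light's point of view into the shadow map FBO,
    // culling front faces so only back faces end up in the depth buffer.
    glCullFace(GL_FRONT);
    m_shadowMapFBO.BindForWriting();
    glClear(GL_DEPTH_BUFFER_BIT);
    RenderSceneIntoShadowMap();

    // Pass 2: render from the camera with normal culling, with the shadow map
    // bound for the PCF lookups in the fragment shader.
    glCullFace(GL_BACK);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    RenderSceneFromCamera();
}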

After applying all the changes discussed above, the shadow looks like this:
(screenshot: shadows with PCF applied)

With a different model:
(screenshot: PCF shadows on a different model)
