OGL (Tutorial 46): SSAO With Depth Reconstruction

http://ogldev.atspace.co.uk/www/tutorial46/tutorial46.html
http://www.manew.com/thread-112784-1-1.html

In the previous tutorial we studied the Screen Space Ambient Occlusion algorithm.
We used a geometry buffer containing the view space position of all the pixels as a first step in our calculations.
In this tutorial we are going to challenge ourselves by calculating the view space position directly from the depth buffer.
The advantage of this approach is that much less memory is required, because we only need one floating point value per pixel instead of three.
This tutorial relies heavily on the previous one, so make sure you fully understand it before going on.
The code here is presented only as the required changes over the original algorithm.

In the SSAO algorithm we scan the entire window pixel by pixel, generate random points around each pixel in view space, project them onto the near clipping plane and compare their Z value with that of the actual pixel at that location.
The view space position is generated in a geometry pass at the start of the render loop.
In order to populate the geometry buffer correctly with the view space position we also need a depth buffer (otherwise pixels would be updated based on draw order rather than depth).
We can use that depth buffer alone to reconstruct the entire view space position vector, thus reducing the space required for it (at the cost of a bit more per-pixel math).

Let us do a short recap of the stages required to populate the depth buffer (if you need a more in-depth review please see tutorial 12).
We begin with the object space position of a vertex and multiply it by the WVP matrix, which is the combined transformation of local-to-world, world-to-view and projection from view space onto the near clipping plane. The result is a 4D vector with the view space Z value in its fourth component.
We say that this vector is in clip space at this point.
The clip space vector goes into the gl_Position output of the vertex shader, and the GPU clips its first three components to the range between -W and W (W is the fourth component, holding the view space Z value).
Next the GPU performs the perspective divide, which means that the vector is divided by W.
Now the first three components are between -1 and 1 and the last component is simply 1.
We say that at this point the vector is in NDC space (Normalized Device Coordinates).

Usually the vertex is just one out of the three vertices comprising a triangle, so the GPU interpolates between the three NDC vectors across the triangle face and executes the fragment shader on each pixel.

On the way out of the fragment shader the GPU updates the depth buffer with the Z component of the NDC vector (subject to several state knobs that must be configured correctly, such as depth testing, depth writes, etc).

An important point to remember is that before writing the Z value to the depth buffer the GPU transforms it from the (-1,1) range to (0,1).
We must handle this correctly or else we will get visual anomalies.
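
To make the recap concrete, here is a tiny standalone C++ sketch of that sequence for a single vertex. The clip space values are made up for illustration and are not part of the tutorial's code; the point is simply the order of operations: perspective divide, then the remap of Z from (-1,1) to (0,1).

#include <cstdio>

struct Vec4 { float x, y, z, w; };

int main()
{
    // A hypothetical clip space vector; its W component holds the view space Z
    // (this is what the projection matrix shown below produces).
    Vec4 clip = { 0.5f, -0.25f, 9.8f, 10.0f };

    // Perspective divide: after this the components are in the (-1,1) range (NDC).
    float ndcZ = clip.z / clip.w;

    // The GPU remaps Z from (-1,1) to (0,1) before writing it to the depth buffer
    // (assuming the default glDepthRange of 0..1).
    float depth = (ndcZ + 1.0f) * 0.5f;

    printf("NDC Z = %f, depth buffer value = %f\n", ndcZ, depth);
    return 0;
}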

So this is basically all the math relevant to depth buffer handling.
Now let us say that we have a Z value that we sampled for a pixel and we want to reconstruct the entire view space vector from it.
Everything we need in order to retrace our steps is in the above description, but before we dive any further let us see that math again, only this time with numbers and matrices rather than words.

Since we are only interested in the view space position we can look at the projection matrix rather than the combined WVP matrix (because the projection works on the view space position):
$$
\begin{pmatrix}
\frac{1}{ar \cdot \tan(\frac{FOV}{2})} & 0 & 0 & 0 \\
0 & \frac{1}{\tan(\frac{FOV}{2})} & 0 & 0 \\
0 & 0 & \frac{-n-f}{n-f} & \frac{2fn}{n-f} \\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix}
\frac{x}{ar \cdot \tan(\frac{FOV}{2})} \\
\frac{y}{\tan(\frac{FOV}{2})} \\
\frac{-n-f}{n-f}\,z + \frac{2fn}{n-f} \\
z
\end{pmatrix}
$$

What we see above is the projection of the view space vector into clip space (the result on the right). A few notations:

1/ ar = Aspect Ratio (width/height)
2/ FOV = Field of View
3/ n = near clipping plane
4/ f = far clipping plane

In order to simplify the next steps let us call the value at location (3,3) of the projection matrix 'S' and the value at location (3,4) 'T'.
This means that the value of Z in NDC is (remember the perspective divide):
$$Z_{ndc} = \frac{S \cdot z + T}{z}$$
And since we need to transform the NDC value from the (-1,1) range to (0,1), the actual value written to the depth buffer is:
$$depth = \frac{Z_{ndc} + 1}{2} = \frac{\frac{S \cdot z + T}{z} + 1}{2}$$
It is now easy to see that we can extract the view space Z from the above formula.
I have not specified all the intermediate steps because you should be able to do them yourself (they are also spelled out just below). The final result is:

$$z = \frac{T}{2 \cdot depth - 1 - S}$$
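
For reference, the intermediate algebra goes like this, starting from the depth formula above:

$$2 \cdot depth - 1 = \frac{S \cdot z + T}{z} = S + \frac{T}{z} \;\;\Rightarrow\;\; 2 \cdot depth - 1 - S = \frac{T}{z} \;\;\Rightarrow\;\; z = \frac{T}{2 \cdot depth - 1 - S}$$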

So we have the view space Z. Let us see how we can recover X and Y.
Remember that after transforming X and Y to clip space we clip them to the (-W, W) range and divide by W
(which is actually Z in view space).

X and Y are now in the (-1,1) range, and so are the X and Y values of all the to-be-interpolated pixels on the triangle.
In fact, -1 and 1 are mapped to the left, right, top and bottom of the screen. This means that for every pixel on the screen the following relation holds (shown for X only; the same applies to Y, just without 'ar'):
$$-1 \leq \frac{x}{ar \cdot \tan(\frac{FOV}{2}) \cdot z} \leq 1$$

We can write the same as:
$$-ar \cdot \tan\left(\frac{FOV}{2}\right) \leq \frac{x}{z} \leq ar \cdot \tan\left(\frac{FOV}{2}\right)$$

Note that the left and right hand sides of the inequality are basically constants and can be calculated by the application before the draw call.
This means that we can draw a full screen quad and prepare a 2D vector with those values for X and Y and have the GPU interpolate them all over the screen.
When we get to the pixel we can use the interpolated value along with Z in order to calculate both X and Y.
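
To tie the background together, here is a small standalone C++ sketch (illustrative only; in the actual tutorial this work is done in the shaders reviewed below) that reconstructs a full view space position from a depth buffer value and a pixel's NDC coordinates:

#include <cmath>
#include <cstdio>

int main()
{
    // Hypothetical camera parameters.
    const float ar = 1920.0f / 1080.0f;                              // aspect ratio
    const float tanHalfFOV = tanf(60.0f * 0.5f * 3.1415926f / 180.0f);
    const float n = 1.0f, f = 100.0f;                                // near/far planes

    // 'S' and 'T' from locations (3,3) and (3,4) of the projection matrix.
    const float S = (-n - f) / (n - f);
    const float T = 2.0f * f * n / (n - f);

    // Hypothetical inputs: a depth buffer sample and the pixel's NDC position.
    const float depth = 0.95f;
    const float ndcX = 0.25f, ndcY = -0.5f;

    // View space Z from the formula derived above.
    const float viewZ = T / (2.0f * depth - 1.0f - S);

    // The interpolated constants (the "view ray") times Z give view space X and Y.
    const float viewX = ndcX * ar * tanHalfFOV * viewZ;
    const float viewY = ndcY * tanHalfFOV * viewZ;

    printf("reconstructed view space position: (%f, %f, %f)\n", viewX, viewY, viewZ);
    return 0;
}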

Source walkthru
(tutorial46.cpp:101)

float AspectRatio = m_persProjInfo.Width / m_persProjInfo.Height;
m_SSAOTech.SetAspectRatio(AspectRatio);
float TanHalfFOV = tanf(ToRadian(m_persProjInfo.FOV / 2.0f));
m_SSAOTech.SetTanHalfFOV(TanHalfFOV); 

As I said earlier, we are only going to review the specific code changes on top of the previous tutorial needed in order to implement depth reconstruction.
The first change is to provide the aspect ratio and the tangent of half the field of view angle to the SSAO technique. We see above how to construct them.
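
The two setters themselves are not listed in the tutorial; a minimal sketch of what they might look like, assuming the technique caches the locations of the gAspectRatio and gTanHalfFOV uniforms (names taken from ssao.vs below) in the usual ogldev fashion:

// Hypothetical sketch - forwards the two constants to the ssao.vs uniforms.
void SSAOTechnique::SetAspectRatio(float AspectRatio)
{
    glUniform1f(m_aspectRatioLocation, AspectRatio);
}

void SSAOTechnique::SetTanHalfFOV(float TanHalfFOV)
{
    glUniform1f(m_tanHalfFOVLocation, TanHalfFOV);
}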

(tutorial46.cpp:134)

if (!m_depthBuffer.Init(WINDOW_WIDTH, WINDOW_HEIGHT, true, GL_NONE)) {
    return false;
}

Next we need to initialize the geometry buffer (whose class attribute was renamed from m_gBuffer to m_depthBuffer) with GL_NONE as the internal format type. This causes only the depth buffer to be created.
Review io_buffer.cpp in the Common project for further details on the internal workings of the IOBuffer class.
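
The tutorial does not reproduce io_buffer.cpp, but the effect of passing GL_NONE is roughly the following (a simplified sketch of a depth-only FBO, not the actual IOBuffer code):

// Simplified sketch: create an FBO with a depth texture and no color attachment.
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);

// No color buffer is attached, so disable color reads/writes for this FBO.
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);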

(tutorial46.cpp:181)

void GeometryPass()
{
    m_geomPassTech.Enable(); 

    m_depthBuffer.BindForWriting();

    glClear(GL_DEPTH_BUFFER_BIT);

    m_pipeline.Orient(m_mesh.GetOrientation());
    m_geomPassTech.SetWVP(m_pipeline.GetWVPTrans());
    m_mesh.Render(); 
}


void SSAOPass()
{
    m_SSAOTech.Enable(); 
    m_SSAOTech.BindDepthBuffer(m_depthBuffer); 

    m_aoBuffer.BindForWriting();

    glClear(GL_COLOR_BUFFER_BIT); 

    m_quad.Render(); 
}

We can see the change from m_gBuffer to m_depthBuffer in the geometry and SSAO passes.
Also, in the geometry pass we no longer need to call glClear with the color buffer bit because m_depthBuffer does not contain a color buffer.
This completes the changes in the main application code, and you can see that they are fairly minimal. Most of the juice is in the shaders, so let us review them.
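
One helper worth a quick note is BindDepthBuffer(), used in SSAOPass() above. It is not listed here, but conceptually it only has to make the IOBuffer's depth texture readable by the gDepthMap sampler in ssao.fs; a rough sketch with assumed member names, not the actual code:

// Rough sketch (assumed names): expose the depth texture to ssao.fs.
void SSAOTechnique::BindDepthBuffer(IOBuffer& DepthBuffer)
{
    DepthBuffer.BindForReading(GL_TEXTURE0);     // bind the depth texture to unit 0
    glUniform1i(m_depthMapLocation, 0);          // gDepthMap samples from unit 0
}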

(geometry_pass.vs/fs)

#version 330

layout (location = 0) in vec3 Position; 

uniform mat4 gWVP;
// uniform mat4 gWV;

// out vec3 ViewPos; 

void main()
{ 
    gl_Position = gWVP * vec4(Position, 1.0);
    // ViewPos = (gWV * vec4(Position, 1.0)).xyz;
}


#version 330

// in vec3 ViewPos;

// layout (location = 0) out vec3 PosOut; 

void main()
{
    // PosOut = ViewPos;
}

Above we see the revised geometry pass vertex and fragment shaders, with the parts that we no longer need commented out.

Since we are only writing out the depth, everything related to the view space position was thrown out. In fact, the fragment shader is now empty.

(ssao.vs)

#version 330

layout (location = 0) in vec3 Position; 

uniform float gAspectRatio;
uniform float gTanHalfFOV;

out vec2 TexCoord;
out vec2 ViewRay;

void main()
{ 
    gl_Position = vec4(Position, 1.0);
    TexCoord = (Position.xy + vec2(1.0)) / 2.0;
    ViewRay.x = Position.x * gAspectRatio * gTanHalfFOV;
    ViewRay.y = Position.y * gTanHalfFOV;
}

Based on the math reviewed above (see the very end of the background section), we need to generate what we call a view ray in the vertex shader of the SSAO technique.
Combined with the view space Z calculated in the fragment shader, it lets us extract the view space X and Y. Note how we use the fact that the incoming geometry is a full screen quad that goes from -1 to 1 on the X and Y axes in order to generate the endpoints '-1/+1 * ar * tan(FOV/2)' for X and '-1/+1 * tan(FOV/2)' for Y.

(ssao.fs)

#version 330

in vec2 TexCoord;
in vec2 ViewRay;

out vec4 FragColor;

uniform sampler2D gDepthMap;
uniform float gSampleRad;
uniform mat4 gProj;

const int MAX_KERNEL_SIZE = 64;
uniform vec3 gKernel[MAX_KERNEL_SIZE];


float CalcViewZ(vec2 Coords)
{
    float Depth = texture(gDepthMap, Coords).x;
    float ViewZ = gProj[3][2] / (2 * Depth -1 - gProj[2][2]);
    return ViewZ;
}


void main()
{
    float ViewZ = CalcViewZ(TexCoord);

    float ViewX = ViewRay.x * ViewZ;
    float ViewY = ViewRay.y * ViewZ;

    vec3 Pos = vec3(ViewX, ViewY, ViewZ);

    float AO = 0.0;

    for (int i = 0 ; i < MAX_KERNEL_SIZE ; i++) {
        vec3 samplePos = Pos + gKernel[i];
        vec4 offset = vec4(samplePos, 1.0);
        offset = gProj * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + vec2(0.5);

        float sampleDepth = CalcViewZ(offset.xy);

        if (abs(Pos.z - sampleDepth) < gSampleRad) {
            AO += step(sampleDepth,samplePos.z);
        }
    }

    AO = 1.0 - AO/64.0;

    FragColor = vec4(pow(AO, 2.0));
}

The first thing we do in the fragment shader is calculate the view space Z. We do this with the exact same formula we saw in the background section.

The projection matrix was already here in the previous tutorial; we just need to be careful when accessing the 'S' and 'T' items at locations (3,3) and (3,4).

Remember that the indices go from 0 to 3 (vs. 1 to 4 in standard matrix notation) and that from the shader's point of view the matrix is transposed (GLSL indexes column first), so for 'T' we need to reverse the row and column: 'S' ends up in gProj[2][2] and 'T' in gProj[3][2].

Once the Z is ready we multiply it by the view ray in order to retrieve X and Y.
We continue as usual by generating the random points and projecting them onto the screen.
We use the same trick to calculate the depth of each projected point.
If you have done everything correctly you should end up with pretty much the same results as in the previous tutorial.
