Reconstructing Position from Linear Depth

 

Reconstructing position from depth data is important for many real-time rendering algorithms: Screen Space Ambient Occlusion (SSAO) and deferred shading, for example, both require it. Storing only depth lets us keep one channel of data instead of the three needed to store the full position.
There are many ways to reconstruct position. It can be done from screen space, but I personally prefer to do it in view space, since it is simpler, and because view-space depth is linear it does not suffer from precision problems.

Storing depth in view space is easy. We pass the view-space z coordinate to the pixel shader and divide it by the far clip plane distance there. The following is the HLSL code for rendering the depth map.

void DepthVS(float4 inPos           : POSITION,
             out float vDepthView   : TEXCOORD0,
             out float4 outPos      : SV_POSITION)
{
    vDepthView = mul(inPos, gmWorldView).z;
    outPos = mul(inPos, gmWVP);
}

float DepthPS(float  iDepth : TEXCOORD0) : SV_TARGET0
{
    float fDepth = iDepth / gfFarClip;
    return fDepth;
}

The reconstruction of position is simple. First we need the screen-space UV to fetch depth from the depth map. This is easy once we know the relationship between projection space and screen space, which the following figures illustrate.
[Figure: Projection Space]
[Figure: Screen Space]
According to the figures above, it is easy to write functions that convert points from projection space to screen space and vice versa.
float2 ProjToScreen(float4 iCoord)
{
    float2 oCoord = iCoord.xy / iCoord.w;
    return 0.5f * (float2(oCoord.x, -oCoord.y) + 1);
}
float4 ScreenToProj(float2 iCoord)
{
    return float4(2.0f * float2(iCoord.x, 1.0f - iCoord.y) - 1, 0.0f, 1.0f);
}
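As a quick sanity check of this mapping (not part of the original shaders), the same conversion can be replayed on the CPU. Below is a small Python sketch of the two functions above, assuming D3D conventions (NDC x and y in [-1, 1], screen-space y pointing down):

```python
def proj_to_screen(x, y, w):
    """Projection-space xy -> [0,1] screen UV (perspective divide, y flipped)."""
    nx, ny = x / w, y / w
    return 0.5 * (nx + 1.0), 0.5 * (-ny + 1.0)

def screen_to_proj(u, v):
    """[0,1] screen UV -> projection-space point with z = 0, w = 1."""
    return 2.0 * u - 1.0, 2.0 * (1.0 - v) - 1.0, 0.0, 1.0

# Round trip: the top-left NDC corner (-1, 1) maps to UV (0, 0) and back.
u, v = proj_to_screen(-1.0, 1.0, 1.0)
x, y, z, w = screen_to_proj(u, v)
```

Note that `screen_to_proj` deliberately sets w = 1, so the result can be fed straight to an inverse-projection multiply without another divide.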


After we get the screen-space coordinate, we can use it to fetch depth from the depth map we rendered previously. In addition, we need a ray from the camera position (the origin) to the far clip plane of the frustum. This can be built by converting the projection-space coordinate to view space and then multiplying its x and y components by (FarClip / z). Finally, we multiply the view ray by the linear depth fetched from the depth map. Since the linear depth lies in [0, 1], it scales the ray down to the view-space position. The following figure illustrates this.
[Figure: Scaling the view ray by the linear depth]
I wrote my own utility functions to handle the transition from projection space to screen space, and the reconstruction of position from depth. The HLSL code is given below.

float3 DepthToPosition(float iDepth, float4 iPosProj, matrix mProjInv, float fFarClip)
{
    // Unproject the screen-space point back into view space.
    float3 vPosView = mul(iPosProj, mProjInv).xyz;
    // Build a ray from the origin through this point to the far clip plane.
    float3 vViewRay = float3(vPosView.xy * (fFarClip / vPosView.z), fFarClip);
    // Scale the ray by the linear depth to recover the view-space position.
    float3 vPosition = vViewRay * iDepth;
    return vPosition;
}
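To convince ourselves the reconstruction is exact, the whole round trip can be checked numerically. The Python sketch below (not from the original article) mirrors `DepthToPosition` on the CPU, assuming a D3D-style left-handed perspective matrix in the row-vector `mul(v, M)` convention; the field of view, aspect ratio, clip planes, and test point are arbitrary toy values:

```python
import math

def perspective_lh(fov_y, aspect, zn, zf):
    """Left-handed perspective matrix, row-vector convention (v' = v * M)."""
    ys = 1.0 / math.tan(fov_y / 2.0)
    xs = ys / aspect
    a = zf / (zf - zn)
    return [[xs, 0, 0, 0],
            [0, ys, 0, 0],
            [0, 0, a, 1],
            [0, 0, -zn * a, 0]]

def perspective_inv_lh(fov_y, aspect, zn, zf):
    """Closed-form inverse of perspective_lh."""
    ys = 1.0 / math.tan(fov_y / 2.0)
    xs = ys / aspect
    a = zf / (zf - zn)
    b = -zn * a
    return [[1.0 / xs, 0, 0, 0],
            [0, 1.0 / ys, 0, 0],
            [0, 0, 0, 1.0 / b],
            [0, 0, 1, -a / b]]

def mul_vec_mat(v, m):
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def depth_to_position(depth, pos_proj, m_proj_inv, far_clip):
    """Python port of the article's DepthToPosition."""
    pv = mul_vec_mat(pos_proj, m_proj_inv)[:3]
    ray = [pv[0] * far_clip / pv[2], pv[1] * far_clip / pv[2], far_clip]
    return [c * depth for c in ray]

# Toy setup: project a known view-space point, store its linear depth,
# then reconstruct the point from (NDC xy, linear depth) alone.
zn, zf = 1.0, 100.0
m = perspective_lh(math.pi / 3, 16 / 9, zn, zf)
m_inv = perspective_inv_lh(math.pi / 3, 16 / 9, zn, zf)

p_view = [3.0, -2.0, 40.0]                 # original view-space position
clip = mul_vec_mat(p_view + [1.0], m)
ndc = [clip[0] / clip[3], clip[1] / clip[3]]
depth = p_view[2] / zf                     # what DepthPS writes (z / FarClip)
p_rec = depth_to_position(depth, [ndc[0], ndc[1], 0.0, 1.0], m_inv, zf)
```

Note that the unprojected point always lands on the z = 1 plane in view space, which is why the (FarClip / z) scale in the shader pushes the ray exactly to the far plane.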

