Image Based Lighting

https://chetanjags.wordpress.com/2015/08/26/image-based-lighting/

Image based lighting is used to implement the ambient lighting that dynamic objects receive from the static objects in a game level. In most cases it is used for specular lighting or reflections. In this process, lighting information at potential points of interest is stored in a special type of cube map called a light probe. These light probes are created from environment maps captured at the same locations, which are then blurred in a particular way depending on the BRDF that will consume them at runtime. Each mipmap level of a light probe contains a version of the environment map blurred by a different amount, depending on the roughness represented by that level. For example, if we use the mip levels to represent roughness from 0 to 1 in steps of 0.1, then mip-0 will be blurred by a value representing roughness 0, mip-1 will represent roughness 0.1, and so on, with the last mip level representing roughness 1.0. This process is also called cubemap convolution. All of this is done as a pre-process, and the resulting light probes are fetched at runtime to enable reflections or image based lighting.
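As a minimal sketch of this roughness-to-mip mapping at fetch time (the texture, sampler and mip-count names here are illustrative assumptions, not from the engine):

// Minimal sketch: fetch from a pre-convolved light probe by mapping roughness to a mip.
// PROBE_MIP_COUNT, probeMap and samProbe are hypothetical names used for illustration.
static const float PROBE_MIP_COUNT = 11.0f; // e.g. roughness 0.0-1.0 in steps of 0.1

float3 SampleLightProbe(TextureCube probeMap, sampler samProbe, float3 R, float roughness)
{
    // Roughness 0 uses the sharpest mip, roughness 1 the most blurred one.
    float mipLevel = roughness * (PROBE_MIP_COUNT - 1.0f);
    return probeMap.SampleLevel(samProbe, R, mipLevel).rgb;
}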

There are 2 types of light probes –

Global Light probes – These represent the lighting information coming from objects at infinite distances. These are created using global environment maps or skyboxes. Usually we will have one of these for any given game level/scene. This can be computed as a pre-process since skyboxes are mostly fixed.
Local Light probes – These are captured at different points of interest in any given scene. They capture the lighting information from nearby objects, and they have a location and a size or area of influence. There will be many of these in any game level/scene.
So the whole process can be summed up like this –
1. Capture the environment at some location in the scene (or use the skybox in case of a global light probe).
2. Create the light probe by blurring the captured environment map – cubemap convolution.
3. Fetch the reflection values from the light probes based on roughness and reflection vector.
4. In case of multiple local light probes, find the light probes affecting any particular shading point.
5. Then either blend between multiple light probes or select one of them to shade the pixel.
6. Add the calculated value as part of the ambient specular term (also diffuse in case of skybox/global light probes).

Cubemap Convolution
The goal of rendering or shading any particular pixel (or point in 3D space) is to compute all the light from the environment received at that point and reflected towards the camera. In other words, solving this integral over the hemisphere –

Lo(v) = ∫Ω f(l, v) Li(l) (n · l) dl

In image based lighting, the incoming light is represented by an environment map where each texel represents an incoming light direction. Cubemap convolution is the process of solving this equation, as a pre-process, for all the directions represented by the texels of the output light probe cube map. Each texel in the output light probe map represents a viewing direction, and we calculate the light incoming from all the possible directions of the input environment map. Then at runtime we use the reflection vector (or the normal vector in the case of diffuse) to index into the generated light probe cube map. We could also do this in real time by doing the calculation for the actual viewing direction, but we generally do not use the real-time version: we would have to do it multiple times for each light probe, and typical game scenes have many light probes, which makes the real-time version infeasible in actual projects.

Now, if we solve this integral even for one pixel or output direction, we have to do it over the whole hemisphere, which involves evaluating our BRDF equations against thousands of texels fetched from the input environment map, making the process very slow even as a pre-process. To solve this problem we use a technique called importance sampling, which gives us good results even with few samples. In importance sampling, we generate a fixed number of random samples biased towards the directions that will have the most influence on the current shading point and view direction. Those interested in the details of the whole process, including the mathematics involved, can check [4][5].
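The listings below draw their quasi-random sample points from a function called hammersley_seq, which is not shown in the post. A minimal sketch of such a Hammersley sequence generator (my assumption of what it looks like, not the engine's actual code):

// Van der Corput radical inverse in base 2 via bit reversal.
float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10f; // 1 / 2^32
}

// i-th point of an N-point Hammersley low-discrepancy sequence in [0,1)^2.
float2 hammersley_seq(uint i, uint numSamples)
{
    return float2(float(i) / float(numSamples), RadicalInverse_VdC(i));
}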

This process depends on the actual BRDF [1][3] being used in the engine, which is a Smith-based GGX specular BRDF [2] in the case of my engine. So, in general, the lighting/shading equation looks something like this –

∫Ω Li(l) f(l, v) (n · l) dl ≈ (1/N) Σ(k=1..N) Li(lk) f(lk, v) (n · lk) / p(lk, v)
For this BRDF we divide the equation into 2 parts which are pre-computed separately; this process is called the split-sum approximation. Check out Epic's notes [10] or the Call of Duty presentation [9] for more details of the process and why we split the BRDF into 2 parts.

The first part is computed for different roughness values and stored in the mipmaps of the light probes. This is the cubemap convolution part of the whole process. Since we are using a microfacet based BRDF, the distribution of specular highlights depends on the viewing angle, but for this approximation we assume the viewing angle is zero, i.e. N = V = R, and run the following code/process for every direction/texel of the output cube map. I am using a compute shader, and the whole cube map including all mip levels is processed in a single compute shader pass.
PrefilteredColor ≈ Σ(k=1..N) Li(lk) (n · lk) / Σ(k=1..N) (n · lk)
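Since the whole cube map is processed in one compute pass, each thread first has to turn its output texel coordinate (face, x, y) into a direction before running the convolution below. A minimal sketch of that mapping, assuming the usual D3D face order +X, -X, +Y, -Y, +Z, -Z (the function name and signature are mine, not from the engine):

// Sketch: convert an output cubemap texel (face, x, y) into a sampling direction.
// Assumes D3D cubemap face order +X,-X,+Y,-Y,+Z,-Z; CubeTexelToDirection is a hypothetical name.
float3 CubeTexelToDirection(uint face, uint2 xy, uint faceSize)
{
    // Map the texel centre to [-1, 1] on the face.
    float u = 2.0f * (xy.x + 0.5f) / faceSize - 1.0f;
    float v = 2.0f * (xy.y + 0.5f) / faceSize - 1.0f;

    float3 dir;
    switch (face)
    {
        case 0:  dir = float3( 1.0f, -v, -u); break; // +X
        case 1:  dir = float3(-1.0f, -v,  u); break; // -X
        case 2:  dir = float3( u,  1.0f,  v); break; // +Y
        case 3:  dir = float3( u, -1.0f, -v); break; // -Y
        case 4:  dir = float3( u, -v,  1.0f); break; // +Z
        default: dir = float3(-u, -v, -1.0f); break; // -Z
    }
    return normalize(dir);
}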
Here is the code that I am using –

// Importance-sample the GGX distribution: returns a half-vector around N,
// biased towards the specular lobe for the given roughness (alpha = roughness^2).
float3 ImportanceSampleGGX(float2 xi, float roughness, float3 N)
{
    float alpha2 = roughness * roughness * roughness * roughness;
    float phi = 2.0f * CH_PI * xi.x;
    float cosTheta = sqrt( (1.0f - xi.y) / (1.0f + (alpha2 - 1.0f) * xi.y ));
    float sinTheta = sqrt( 1.0f - cosTheta*cosTheta );
     
    float3 h;
    h.x = sinTheta * cos( phi );
    h.y = sinTheta * sin( phi );
    h.z = cosTheta;
     
    float3 up = abs(N.z) < 0.999 ? float3(0,0,1) : float3(1,0,0);
    float3 tangentX = normalize( cross( up, N ) );
    float3 tangentY = cross( N, tangentX );
     
    return (tangentX * h.x + tangentY * h.y + N * h.z);
} 
 
//This is called for each output direction / texel in output cubemap 
float3 PreFilterEnvMap(TextureCube envMap, sampler samEnv , float roughness, float3 R)
{
    float3 res = (float3)0.0f;  
    float totalWeight = 0.0f;   
     
    float3 normal = normalize(R);
    float3 toEye = normal;
     
    //roughness = max(0.02f,roughness);
     
    static const uint NUM_SAMPLES = 512;
    for(uint i=0;i<NUM_SAMPLES;++i)
    {
        float2 xi = hammersley_seq(i, NUM_SAMPLES); 
        float3 halfway = ImportanceSampleGGX(xi,roughness,normal);
        float3 lightVec = 2.0f * dot( toEye,halfway ) * halfway - toEye;
         
        float NdotL = saturate ( dot( normal, lightVec ) ) ;
        //float NdotV = saturate ( dot( normal, toEye ) ) ;
        float NdotH =  saturate ( dot( normal, halfway ) ) ;
        float HdotV = saturate ( dot( halfway, toEye ) ) ;
         
        if( NdotL > 0 )
        {
            float D = DFactor(roughness,NdotH);
            float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f  ;
             
            // Filtered importance sampling [8]: pick a source mip based on the ratio of
            // the sample's solid angle to the solid angle of a single source texel.
            float saTexel = 4.0f * CH_PI / (6.0f * CONV_SPEC_TEX_WIDTH * CONV_SPEC_TEX_WIDTH);
            float saSample = 1.0f / (NUM_SAMPLES * pdf + 0.00001f);
             
            float mipLevel = roughness == 0.0f ? 0.0f :  0.5f * log2( saSample / saTexel )  ;
                                 
            res += envMap.SampleLevel( samEnv, lightVec, mipLevel ).rgb *NdotL;     
            totalWeight += NdotL;
        }
    }
     
    return res / max(totalWeight,0.001f);
} 
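The listing above calls DFactor, which is not shown in the post. Assuming it is the GGX/Trowbridge-Reitz normal distribution term implied by the rest of the article, it might look something like this:

// Sketch of the GGX normal distribution term, assumed to be what DFactor computes
// (alpha = roughness^2, matching the remapping used in ImportanceSampleGGX above).
float DFactor(float roughness, float NdotH)
{
    float alpha  = roughness * roughness;
    float alpha2 = alpha * alpha;
    float denom  = NdotH * NdotH * (alpha2 - 1.0f) + 1.0f;
    return alpha2 / (CH_PI * denom * denom);
}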

The second part contains the rest of the equation and can be thought of as integrating the specular BRDF against a white environment. There are 2 ways of doing this part of the process: either calculate it using some analytical approximation [9], or create a lookup texture as part of the pre-processing. I have used the second method in my engine, in which we basically have to solve the following integral for all the values of roughness and cos θv. All the input and output values vary between [0, 1]. For more details check [10].
∫Ω f(l, v) (n · l) dl = F0 ∫Ω (f(l, v) / F(v, h)) (1 - (1 - v · h)^5) (n · l) dl + ∫Ω (f(l, v) / F(v, h)) (1 - v · h)^5 (n · l) dl
Code I am using –

// Pre-integrate the specular BRDF against a white environment for a given roughness
// and NdotV; returns a scale (x) and a bias (y) to apply to the specular color.
float2 IntegrateEnvBRDF(float roughness, float NdotV)
{
    float2 res = (float2)0.0f;
     
    //roughness = max(0.02f,roughness);
     
    float3 toEye = float3( sqrt(1.0f - NdotV*NdotV), 0.0f, NdotV );
    float3 normal = float3(0.0f, 0.0f, 1.0f);
     
    static const uint NUM_SAMPLES = 1024;
    for(uint i=0;i<NUM_SAMPLES;++i)
    {
        float2 xi = hammersley_seq(i, NUM_SAMPLES);     
         
        float3 halfway = ImportanceSampleGGX(xi,roughness,normal);
        float3 lightVec = 2.0f * dot( toEye,halfway ) * halfway - toEye;
         
        float NdotL = saturate ( lightVec.z ) ;
        float NdotH =  saturate ( halfway.z ) ;
        float HdotV = saturate ( dot( halfway, toEye ) ) ;
        //NdotV = saturate ( dot( normal,toEye ) );
 
        if( NdotL > 0 )
        {           
            float D = DFactor(roughness,NdotH);
            float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f  ;  
             
            float V =  V_SmithJoint(roughness,NdotV,NdotL) ;
            float Vis = V * NdotL * 4.0f * HdotV / NdotH ;
            float fc = pow(1.0f - HdotV,5.0f);
             
            res.x += (1.0f - fc)* Vis;
            res.y += fc * Vis;
        }
    }
     
    return res /(float)NUM_SAMPLES;
}
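V_SmithJoint above is also not shown in the post. Assuming it is the approximate height-correlated Smith joint visibility term commonly paired with GGX (i.e. G divided by 4·NdotL·NdotV), a sketch could be:

// Sketch of an approximate height-correlated Smith joint visibility term for GGX,
// assumed to be what V_SmithJoint computes (already divided by 4 * NdotL * NdotV).
float V_SmithJoint(float roughness, float NdotV, float NdotL)
{
    float alpha  = roughness * roughness;
    float smithV = NdotL * (NdotV * (1.0f - alpha) + alpha);
    float smithL = NdotV * (NdotL * (1.0f - alpha) + alpha);
    return 0.5f / max(smithV + smithL, 1e-5f);
}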

At runtime, we can do something like this to fetch the values from the above textures and calculate the ambient specular light for a given point and view direction.

// Reference/real-time version: importance-sample the environment map directly at runtime
// instead of using the pre-filtered light probe and the pre-integrated BRDF texture.
float3 SpecularIBLRealtime(TextureCube envMap, sampler samEnv , float3 normal, float3 toEye, 
            float roughness, float3 specColor)
{
    float3 res = (float3)0.0f;
     
    normal = normalize(normal);
     
    static const uint NUM_SAMPLES = 256;
    for(uint i=0;i<NUM_SAMPLES;++i)
    {
        float2 xi = hammersley_seq(i, NUM_SAMPLES);     
         
        float3 halfway = ImportanceSampleGGX(xi,roughness,normal);
        float3 lightVec = 2.0f * dot( toEye,halfway ) * halfway - toEye;
         
        float NdotL = saturate ( dot( normal, lightVec ) ) ;
        float NdotV = saturate ( dot( normal, toEye ) ) ;
        float NdotH =  saturate ( dot( normal, halfway ) ) ;
        float HdotV = saturate ( dot( halfway, toEye ) ) ;
         
        if( NdotL > 0 )
        {   
            float V =  V_SmithJoint(roughness,NdotV,NdotL);
            float fc = pow(1.0f - HdotV,5.0f);
            float3 F = (1.0f - fc) * specColor + fc;
             
            // Incident light = SampleColor * NoL
            // Microfacet specular = D*G*F / (4*NoL*NoV)
            // pdf = D * NoH / (4 * VoH)
            float D = DFactor(roughness,NdotH);
            float pdf = (D * NdotH / (4 * HdotV)) + 0.0001f  ;  
             
            float saTexel = 4.0f * CH_PI / (6.0f * CONV_SPEC_TEX_WIDTH * CONV_SPEC_TEX_WIDTH);
            float saSample = 1.0f / (NUM_SAMPLES * pdf)  ;          
            float mipLevel = roughness == 0.0f ? 0.0f :  0.5f * log2( saSample / saTexel )  ;
             
            float3 col = envMap.SampleLevel( samEnv, lightVec, mipLevel).rgb;
             
            res += col * F * V * NdotL * HdotV  * 4.0f / ( NdotH );
        }
    }
     
    return res / NUM_SAMPLES;
} 
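The pre-processed path described earlier boils down to one fetch from the pre-filtered probe plus one lookup into the IntegrateEnvBRDF texture. A minimal sketch of that, assuming one mip per roughness step and a lookup texture indexed by (NdotV, roughness) – the texture names and mip count are my assumptions, not the engine's actual interface:

// Minimal sketch of the pre-processed split-sum lookup.
// probeMap (pre-filtered light probe), envBrdfLut, the samplers and PROBE_MIP_COUNT
// are hypothetical names used for illustration.
float3 SpecularIBLPrefiltered(TextureCube probeMap, Texture2D envBrdfLut,
            sampler samProbe, sampler samLut,
            float3 normal, float3 toEye, float roughness, float3 specColor)
{
    float NdotV = saturate(dot(normal, toEye));
    float3 R = reflect(-toEye, normal);

    // First split-sum term: pre-filtered environment color for this roughness.
    float mipLevel = roughness * (PROBE_MIP_COUNT - 1.0f);
    float3 prefiltered = probeMap.SampleLevel(samProbe, R, mipLevel).rgb;

    // Second split-sum term: scale and bias from the IntegrateEnvBRDF lookup texture.
    float2 envBrdf = envBrdfLut.SampleLevel(samLut, float2(NdotV, roughness), 0.0f).rg;

    return prefiltered * (specColor * envBrdf.x + envBrdf.y);
}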

For this process of convolution, we have made 2 assumptions –

  1. The BRDF is isotropic, so this process cannot be used for anisotropic materials.
  2. The viewing angle is zero, N = V = R. But in actual runtime rendering the viewing angle can be different, which introduces some errors/artifacts (in Epic's notes it is suggested to weight the samples by cos θ to reduce the error), and we cannot get the long streaks that we should otherwise get from a microfacet based BRDF.

One of the solutions is to avoid pre-processing and do importance sampling at runtime, but it may not be feasible to do so because of performance. We need multiple samples (16-32) from a single cube map, and in the case of local light probes multiple probes would affect a single shading point, making this not feasible in real time for real projects.

For comparison, here are screenshots of a sphere rendered with the pre-processed method (below) and the real-time method (above).

[Screenshots: real-time method (above), pre-processed method (below)]

Light Probes Placement and Interpolation
For local light probes, we have to tackle the problem of placing the light probes and deciding which pixel will use which probes. One way would be to place the light probes in a game scene automatically, in some grid format or similar, and then at runtime grab the closest probe or blend between multiple probes.
Another method is to manually place the light probes in the game level wherever needed. This method allows artists to tweak things themselves, place more probes where necessary, etc. There are a few different approaches available for determining at runtime which light probes are affecting the current pixel –

  1. Grab the closest light probe. Something similar was used in the Source 1 engine for ambient specular lighting.
  2. K nearest – just grab the k nearest light probes and interpolate between them by blending the result. [11][12]
  3. Tetrahedral based method – check [12] for more details.
  4. Influence volumes – we define influence volumes for the light probes and blend between all the light probes affecting a pixel. It's a priority blend, and probes with the smallest influence volumes are given the highest priority (see the sketch after this list). Unreal and CryEngine use the same technique. [11]
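A minimal sketch of blending between local probes with spherical influence volumes – a simplified, distance-weighted take on the priority blend from item 4; the Probe structure, array and weighting scheme are illustrative assumptions, not any particular engine's implementation:

// Sketch: blend between light probes with spherical influence volumes.
// Probe, probes[] and MAX_PROBES are hypothetical names used for illustration.
#define MAX_PROBES 8

struct Probe
{
    float3 position;
    float  radius;      // radius of the influence volume
    float3 prefiltered; // color already fetched from this probe's cube map
};

float3 BlendProbes(Probe probes[MAX_PROBES], uint probeCount, float3 shadePos)
{
    float3 result = (float3)0.0f;
    float totalWeight = 0.0f;

    for (uint i = 0; i < probeCount; ++i)
    {
        float dist = length(shadePos - probes[i].position);
        if (dist < probes[i].radius)
        {
            // Weight falls off towards the edge of the influence volume, so nearby
            // local probes dominate over larger, more distant ones.
            float weight = 1.0f - saturate(dist / probes[i].radius);
            result += probes[i].prefiltered * weight;
            totalWeight += weight;
        }
    }

    // Fall back to the global probe when no local volume contains the point (not shown).
    return totalWeight > 0.0f ? result / totalWeight : result;
}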

IBL Tools
Here is a short list of IBL tools available for free on the internet that can be used for pre-processing the cube maps –

  1. AMD’s CubemapGen – https://code.google.com/p/cubemapgen/
  2. Modified CubemapGen (or upgraded) – https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/
  3. IBL Baker – http://www.derkreature.com/iblbaker/
  4. Cmft – https://github.com/dariomanesku/cmft

References –
For readers looking for more in-depth information, [4] and [5] are must-reads for the cubemap convolution process, and [12] contains a lot of details and references on how to handle local light probes.

  1. BRDF – https://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function
  2. Specular BRDF Reference – http://graphicrants.blogspot.in/2013/08/specular-brdf-reference.html
  3. Background: Physics and Math of Shading – ( pdf )
  4. Importance Sampling – http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html
  5. Irradiance Environment Maps – http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html
  6. Image-Based Lighting – http://http.developer.nvidia.com/GPUGems/gpugems_ch19.html
  7. Plausible Environment Lighting in Two Lines of Code – http://casual-effects.blogspot.in/2011/08/plausible-environment-lighting-in-two.html
  8. Cubemap Texel Solid angle – http://www.rorydriscoll.com/2012/01/15/cubemap-texel-solid-angle/
  9. Physically Based Lighting in Call of Duty: Black Ops – http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_slides_v2.pptx
  10. Real Shading in Unreal Engine 4 – http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_slides.pptx
  11. Light Probes – http://blogs.unity3d.com/2011/03/09/light-probes/
  12. Image-based Lighting approaches and parallax-corrected cube map – https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/
  13. Secrets of CryENGINE 3 Graphics Technology – ( ppt )
  14. Box Projected Cubemap Environment Mapping – http://www.gamedev.net/topic/568829-box-projected-cubemap-environment-mapping/