Article - Physically Based Rendering


http://www.codinglabs.net/article_physically_based_rendering.aspx

Radiance — represented with L in the formula
Irradiance — incoming light, represented with I


The pursuit of realism is pushing rendering technology towards a detailed simulation of how light works and interacts with objects. Physically based rendering is a catch-all term for any technique that tries to achieve photorealism via a physical simulation of light.

Currently the best model we have to simulate light is captured by an equation known as the rendering equation. The rendering equation tries to describe how a "unit" of light is obtained given all the incoming light that interacts with a specific point of a given scene. We will see the details and introduce the correct terminology in a moment. It is important to notice that we will not try to solve the full rendering equation; instead we will use the following simplified version:
Lo(p, ωo) = ∫Ω fr(p, ωi, ωo) Li(p, ωi) (n⋅ωi) dωi

To understand this equation we first need to understand how light works, and then we will need to agree on some common terms. To give you a rough idea of what the formula means, in simple terms we could say that it describes the colour of a pixel given all the incoming "coloured light" and a function that tells us how to mix them.

Physics terms
If we want to properly understand the rendering equation we need to capture the meaning of some physical quantities; the most important of these is called radiance (represented with L in the formula).

Radiance is a tricky thing to understand, as it is a combination of other physical quantities; therefore, before formally defining it, we will introduce a few other quantities.

Radiant flux: the radiant flux is the measure of the total amount of energy emitted by a light source, expressed in Watts. We will represent the flux with the Greek letter Φ.

Any light source emits energy, and the amount of emitted energy is a function of the wavelength.
Figure 1: Daylight spectral distribution

In Figure 1 we can see the spectral distribution for daylight; the radiant flux is the area under the curve (to be exact, that area is the luminous flux, as the graph limits the wavelength to the human visible spectrum). For our purposes we will simplify the radiant flux to an RGB colour, even if this means losing a lot of information.
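To make the "area under the curve" idea concrete, here is a small numerical sketch (mine, not from the article); the flat spectral distribution used below is made up for illustration:

```python
# Sketch (not from the article): approximating radiant flux as the area
# under a sampled spectral power distribution with a Riemann sum.
def radiant_flux(wavelengths_nm, power_w_per_nm):
    flux = 0.0
    for i in range(len(wavelengths_nm) - 1):
        step = wavelengths_nm[i + 1] - wavelengths_nm[i]
        flux += power_w_per_nm[i] * step
    return flux
```

For example, a flat distribution of 1 W/nm over the 400-700 nm visible range carries 300 W of flux.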

Solid angle: it's a way to measure how large an object appears to an observer looking from a point. To do this we project the silhouette of the object onto the surface of a unit sphere centred at the point we are observing from. The area of the shape we obtain is the solid angle. In Figure 2 you can see the solid angle ω as the projection of the light blue polygon onto the unit sphere.
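For a concrete feel of the unit (steradians), here is a tiny sketch of mine, not from the article, using the standard closed form for a spherical cap:

```python
import math

# Solid angle of a spherical cap seen under cone half-angle `theta`:
# the cap's area on the unit sphere is 2*pi*(1 - cos(theta)) steradians.
def cone_solid_angle(theta):
    return 2.0 * math.pi * (1.0 - math.cos(theta))
```

The full sphere (theta = π) gives 4π sr, and a hemisphere (theta = π/2) gives 2π sr, which is the integration domain used later in the rendering equation.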

Figure 2: Solid angle

Radiant intensity: the amount of flux per solid angle. If you have a light source that emits in all directions, how much of that light (flux) is actually going towards a specific direction? Intensity is the way to answer that: it's the amount of flux going in one direction, passing through a defined solid angle. The formula that describes it is I = dΦ/dω, where Φ is the radiant flux and ω is the solid angle.
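As a quick sanity check on I = dΦ/dω (a sketch of mine, not from the article): for an isotropic point light the flux spreads evenly over the full sphere of 4π steradians, so the intensity in any direction is simply Φ/4π:

```python
import math

# Intensity of an isotropic point source: the flux is spread evenly
# over the full sphere (4*pi steradians), so I = Phi / (4*pi).
def isotropic_intensity(flux_watts):
    return flux_watts / (4.0 * math.pi)
```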

Figure 3: Light intensity

Radiance: finally, we get to radiance. The radiance formula is:

L = d²Φ / (dA cosθ dω)

where Φ is the radiant flux, A is the area affected by the light, ω is the solid angle the light is passing through, and cosθ is a scaling factor that "fades" the light with the angle.

Figure 4: Radiance components

We like this formula because it contains all the physical components we are interested in, and we can use it to describe a single "ray" of light. In fact we can use radiance to describe the amount of flux passing through an infinitely small solid angle and hitting an infinitely small area, and that describes the behaviour of a light ray. So when we talk about radiance we talk about some amount of light going in some direction towards some area.

When we shade a point we are interested in all the incoming light at that point, which is the sum of all the radiance that hits a hemisphere centred on the point itself; the name for this quantity is irradiance. Irradiance and radiance are our main physical quantities, and we will work with both of them to achieve our physically based rendering.

The rendering equation
We can now go back to the rendering equation and try to fully understand it.
Lo(p, ωo) = ∫Ω fr(p, ωi, ωo) Li(p, ωi) (n⋅ωi) dωi
We now understand that L is radiance, and that it is a function of some point in the world and some direction plus the solid angle (we will always use infinitely small solid angles from now on, so think of it simply as a direction vector). The equation describes the outgoing radiance from a point, Lo(p, ωo), which is all we need to colour a pixel on screen.

To calculate it we need the normal of the surface our pixel lies on (n), and the irradiance of the scene, which is given by Li(p, ωi) ∀ωi. To obtain the irradiance we sum all the incoming radiance, hence the integral sign in the equation. Note that the domain of the integral, Ω, is a hemisphere centred at the point we are calculating, oriented so that the top of the hemisphere is found by moving away from the point along the normal direction.

The dot product n⋅ωi is there to take into account the angle of incidence of the light ray: if the ray is perpendicular to the surface its energy will be concentrated on the lit area, while if the angle is shallow it will be spread across a bigger area, eventually spreading so much that it is not actually visible.

Now we can see that the equation simply represents the outgoing radiance given the incoming radiance, weighted by the cosine of the angle between every incoming ray and the surface normal. The bit we still need to introduce is fr(p, ωi, ωo), that is, the BRDF.
This function takes as input the position and the incoming and outgoing rays, and outputs a weight of how much the incoming ray contributes to the final outgoing radiance. For a perfectly specular reflection, like a mirror, the BRDF function is 0 for every incoming ray except the one that has the same angle as the outgoing ray, in which case the function returns 1 (the angle is measured between the rays and the surface normal). It is important to notice that a physically based BRDF has to respect the law of conservation of energy, that is ∀ωi, ∫Ω fr(p, ωi, ωo) (n⋅ωo) dωo ≤ 1, which means that the amount of reflected light must not exceed the amount of incoming light.
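The conservation constraint is easy to check numerically for the constant Lambert BRDF used later in the article; this is a sketch of mine, not the article's code:

```python
import math

# Sketch: check energy conservation for Lambert's BRDF f_r = c/pi by
# integrating f_r * cos(theta) over the hemisphere with the midpoint
# rule, using d_omega = sin(theta) dtheta dphi. The result is exactly
# the albedo c, so the BRDF reflects at most what arrives when c <= 1.
def lambert_reflected_fraction(c, n=1000):
    d_theta = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * d_theta  # midpoint of each theta step
        total += (c / math.pi) * math.cos(theta) * math.sin(theta) * d_theta
    return total * 2.0 * math.pi  # the phi integral contributes 2*pi
```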

Translate to code
So, now that we have all this useful knowledge, how do we apply it to actually write something that renders to the screen? We have two main problems here.

  1. first of all, how can we represent all these radiance functions in the scene?
  2. and secondly, how do we solve the integral fast enough to be able to use this in a real-time engine?

The answer to the first question is simple: environment maps. For our purposes we will use environment maps (cubemaps, although spherical maps would be better suited) to encode the incoming radiance from a specific direction towards a given point.

If we imagine that every pixel of the cubemap is a small emitter whose flux is its RGB colour, we can approximate L(p, ω), with p being the exact centre of the cubemap, with a texture read from the cubemap itself, so L(p, ω) ≈ texCUBE(cubemap, ω).
Obviously it would consume too much memory to have a cubemap for every point in the scene (!), therefore we trade off some quality by creating a certain number of cubemaps in the scene and having every point pick the closest one. To reduce the error we can correct the sampling vector with the world position of the cubemap to be more accurate. This gives us a way to evaluate radiance, which is:
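The probe-picking step can be sketched as follows; the names and the (position, cubemap) representation are made up for illustration:

```python
# Sketch (names made up): each shaded point picks the nearest pre-baked
# environment probe, trading accuracy for memory versus one cubemap per
# point. `probes` is a list of (position, cubemap) pairs and positions
# are (x, y, z) tuples.
def closest_probe(point, probes):
    def dist_sq(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    return min(probes, key=lambda probe: dist_sq(probe[0], point))
```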
L(p, ω) ≈ texCUBE(cubemap, ωp)

where ωp is the sampling vector corrected by the point position and the cubemap position in the world.

The answer to our second problem, how to solve the integral, is a bit more tricky, because in some cases we will not be able to solve it quickly enough. But if the BRDF happens to depend only on the incoming radiance, or even better, on nothing (if it is constant), then we can do some nice optimization. So let us see what happens if we plug in Lambert's BRDF, which is a constant factor (all the incoming radiance contributes to the outgoing ray after being scaled by a constant).

Lambert
Lambert's BRDF sets fr(p, ωi, ωo) = c/π, where c is the surface colour. If we plug this into the rendering equation we get:
Lo(p, ωo) = (c/π) ∫Ω Li(p, ωi) (n⋅ωi) dωi

Now the integral depends on ωi and nothing else, which means we can precalculate it (solving it with a Monte Carlo integration, for example) and store the result in another cubemap. The value will be stored in the ωo direction, which means that, knowing the output direction, we can sample the cubemap and obtain the reflected light in that very direction. This reduces the whole rendering equation to a single sample from a pre-calculated cubemap, specifically:
Lo(p, ωo) ≈ texCUBE(convolvedCubemap, ωop)

where ωop is the outgoing direction corrected by the point position and the cubemap position in the world.
So, now we have all the elements, and we can finally write a shader. I'll show that in a moment, but for now, let's see the results.
(result screenshot)
Quite good for a single texture read shader, huh? Please note how the whole lighting changes with the environment (the cubemap rendered here is not the convolved one, which looks way blurrier, as shown below).

Figure 5: Left the radiance map, right the irradiance map (Lambert rendering equation)

Now let's present the shader code. Please note that for simplicity I'm not using Monte Carlo integration; I've simply discretized the integral. Given infinite samples it wouldn't make any difference, but in a real case it will introduce more banding than Monte Carlo. In my tests it was good enough, given that I've dropped the resolution of the cubemap to 32x32 per face, but it's worth bearing this in mind if you want to experiment with it.

The first shader we need is the one that generates the blurry envmap (often referred to as the convolved envmap, since it is the result of the convolution of the radiance envmap with the kernel function (n⋅ωi)).

Since in the shader we will integrate in spherical coordinates we will change the formula to reflect that.

Lo(p, ωo) = (c/π) ∫₀^2π ∫₀^π/2 Li(p, φi, θi) cos(θi) sin(θi) dθi dφi

You may have noticed that there is an extra sin(θi) in the formula; that is due to the fact that the integration is made of small uniform steps. When we use solid angles this is fine, as the solid angles are evenly distributed over the integration area, but when we change to spherical coordinates we get more samples where θ is near zero and fewer as it approaches π/2. If you create a sphere in your favourite modelling tool and check its wireframe you'll see what I mean. The sin(θi) factor is there to compensate for the distribution, as dωi = sin(θ) dθ dφ.
The double integral is solved by applying a Monte Carlo estimator on each one; this leads to the following discrete equation that we can finally transform into shader code:
Lo(p, ωo) = πc (1/(n1·n2)) Σφ Σθ Li(p, φi, θi) cos(θi) sin(θi)
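Before moving to the HLSL listing, the same discretized sum can be sketched on the CPU (my sketch, not the article's code), with a radiance callback li(phi, theta) standing in for the cubemap fetch:

```python
import math

# Discretized hemisphere convolution: uniform steps in phi and theta,
# each sample weighted by cos(theta) * sin(theta), and the sum
# normalized by pi / sample_count. For constant radiance li == 1 the
# result tends to 1, i.e. energy is preserved by the normalization.
def convolve_hemisphere(li, n_phi=64, n_theta=64):
    total, count = 0.0, 0
    for i in range(n_phi):
        phi = i * (2.0 * math.pi / n_phi)
        for j in range(n_theta):
            theta = j * ((math.pi / 2.0) / n_theta)
            total += li(phi, theta) * math.cos(theta) * math.sin(theta)
            count += 1
    return math.pi * total / count
```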

...
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR
{
    // The default case handles cube face 4 (+Z); each branch below remaps
    // the interpolated quad position to the outward normal of another face.
    float3 normal = normalize( float3(input.InterpolatedPosition.xy, 1) );
    if(cubeFace==2)
        normal = normalize( float3(input.InterpolatedPosition.x,  1, -input.InterpolatedPosition.y) );
    else if(cubeFace==3)
        normal = normalize( float3(input.InterpolatedPosition.x, -1,  input.InterpolatedPosition.y) );
    else if(cubeFace==0)
        normal = normalize( float3(  1, input.InterpolatedPosition.y,-input.InterpolatedPosition.x) );
    else if(cubeFace==1)
        normal = normalize( float3( -1, input.InterpolatedPosition.y, input.InterpolatedPosition.x) );
    else if(cubeFace==5)
        normal = normalize( float3(-input.InterpolatedPosition.x, input.InterpolatedPosition.y, -1) );

    float3 up = float3(0,1,0);
    float3 right = normalize(cross(up,normal));
    up = cross(normal,right);

    float3 sampledColour = float3(0,0,0);
    float index = 0;
    for(float phi = 0; phi < 6.283; phi += 0.025)
    {
        for(float theta = 0; theta < 1.57; theta += 0.1)
        {
            float3 temp = cos(phi) * right + sin(phi) * up;
            float3 sampleVector = cos(theta) * normal + sin(theta) * temp;
            sampledColour += texCUBE( diffuseCubemap_Sampler, sampleVector ).rgb * 
                                      cos(theta) * sin(theta);
            index ++;
        }
    }

    return float4( PI * sampledColour / index, 1 );
}
...

I've omitted the vertex shader and the variable definitions; the source shader I've used is in HLSL. Running this for every face of the convolved cubemap, using the normal cubemap as input, gives us the irradiance map. We can now use the irradiance map as an input for the next shader, the model shader.

...
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR
{
    float3 irradiance = texCUBE(irradianceCubemap_Sampler, input.SampleDir).rgb;
    float3 diffuse = materialColour * irradiance;
    return float4( diffuse , 1); 
}
...

Very short and super fast to evaluate.
This concludes the first part of the article on physically based rendering. I’m planning to write a second part on how to implement a more interesting BRDF like Cook-Torrance’s BRDF.
