GPU Gems 1: Chapter 16. Real-Time Approximations to Subsurface Scattering

https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch16.html

Most shading models used in real-time graphics today consider the interaction of light only at the surface of an object. In the real world, however, many objects are slightly translucent: light enters their surface, is scattered around inside the material, and then exits the surface, potentially at a different point from where it entered.

Much research has been devoted to producing efficient and accurate models of subsurface light transport. Although completely physically accurate simulations of subsurface scattering are out of the reach of current graphics hardware, it is possible to approximate much of the visual appearance of this effect in real time. This chapter describes several methods of approximating the look of translucent materials, such as skin and marble, using programmable graphics hardware.

16.1 The Visual Effects of Subsurface Scattering
When trying to reproduce any visual effect, it is often useful to examine images of the effect and try to break down the visual appearance into its constituent parts.

Looking at photographs and rendered images of translucent objects, we notice several things. First, subsurface scattering tends to soften the overall effect of lighting: light from one area bleeds into neighboring areas on the surface, and small surface details become less visible. The farther the light penetrates into the object, the more it is attenuated and diffused. With skin, scattering also tends to cause a slight color shift toward red where the surface transitions from being lit to being in shadow. This is caused by light entering the surface on the lit side, being scattered and absorbed by the blood and tissue beneath the skin, and then exiting on the shadowed side. The effect of scattering is most obvious where the skin is thin, such as around the nostrils and ears.

16.2 Simple Scattering Approximations

One simple trick that approximates scattering is wrap lighting. Normally, diffuse (Lambertian) lighting contributes zero light when the surface normal is perpendicular to the light direction. Wrap lighting modifies the diffuse function so that the lighting wraps around the object beyond the point where it would normally become dark. This reduces the contrast of the diffuse lighting, which decreases the amount of ambient and fill lighting that is required. Wrap lighting is a crude approximation to the Oren-Nayar lighting model (Nayar and Oren 1995; see also https://my.oschina.net/zsjasper/blog/368301?p=1), which attempts to more accurately simulate rough matte surfaces.

The code shown here and the graph in Figure 16-1 illustrate how to change the diffuse lighting function to include the wrap effect. The value wrap is a floating-point number between 0 and 1 that controls how far the lighting wraps around the object; with wrap = 0.5, for example, the diffuse term does not reach zero until dot(L, N) = -0.5.

Figure 16-1 Graph of the Wrap Lighting Function

float diffuse = max(0, dot(L, N));                             // standard diffuse term
float wrap_diffuse = max(0, (dot(L, N) + wrap) / (1 + wrap));  // wrapped diffuse term

To compute this efficiently in a fragment program, the function can be encoded in a texture, indexed by the dot product between the light vector and the normal. This texture can also be created to include a slight color shift toward red as the lighting approaches zero, which is a cheap way to simulate scattering for skin shaders. The same texture can also include the power function for specular lighting in the alpha channel. The FX code in Listing 16-1 demonstrates how to use this technique; see Figure 16-2 for examples.

Figure 16-2 Applying Wrap Lighting to Spheres

Example 16-1. Generating and Using a 2D Lookup Table for Skin Shading

// Generate 2D lookup table for skin shading

float4 GenerateSkinLUT(float2 P : POSITION) : COLOR
{
	float wrap = 0.2;
	float scatterWidth = 0.3;
	float4 scatterColor = float4(0.15, 0.0, 0.0, 1.0);
	float shininess = 40.0;
	
	float NdotL = P.x * 2 - 1;  // remap from [0, 1] to [-1, 1]
	float NdotH = P.y * 2 - 1;
	float NdotL_wrap = (NdotL + wrap) / (1 + wrap); // wrap lighting
	float diffuse = max(NdotL_wrap, 0.0);
	// add color tint at transition from light to dark
	float scatter = smoothstep(0.0, scatterWidth, NdotL_wrap) *
	                smoothstep(scatterWidth * 2.0, scatterWidth,
	                           NdotL_wrap);
	
	float specular = pow(NdotH, shininess);
	if (NdotL_wrap <= 0) specular = 0;
	float4 C;
	C.rgb = diffuse + scatter * scatterColor.rgb;
	C.a = specular;
	return C;
}

// Shade skin using lookup table

half3 ShadeSkin(sampler2D skinLUT,
                half3 N,
                half3 L,
                half3 H,
                half3 diffuseColor,
                half3 specularColor) : COLOR
{
	half2 s;
	s.x = dot(N, L);
	s.y = dot(N, H);
	half4 light = tex2D(skinLUT, s * 0.5 + 0.5);
	return diffuseColor * light.rgb + specularColor * light.a;
}
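
For context, a minimal fragment program that builds the inputs to ShadeSkin might look like the sketch below; the interpolant names (worldPos, normal) and the lightPos and eyePos uniforms are assumptions for illustration, not part of the original listing.

// A sketch of a caller for ShadeSkin (names and constants assumed):
// computes N, L, and H from an interpolated position and normal.
half4 mainSkin(float3 worldPos : TEXCOORD0,
               half3  normal   : TEXCOORD1,
               uniform float3    lightPos,
               uniform float3    eyePos,
               uniform sampler2D skinLUT) : COLOR
{
  half3 N = normalize(normal);
  half3 L = normalize(half3(lightPos - worldPos)); // surface-to-light
  half3 V = normalize(half3(eyePos - worldPos));   // surface-to-eye
  half3 H = normalize(L + V);                      // half-angle vector
  half3 diffuseColor  = half3(1.0, 0.8, 0.7);      // example skin tone
  half3 specularColor = half3(0.4, 0.4, 0.4);
  return half4(ShadeSkin(skinLUT, N, L, H, diffuseColor, specularColor),
               1.0);
}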

16.3 Simulating Absorption Using Depth Maps
One of the most important factors in simulating very translucent materials is absorption: the farther through the material light travels, the more it is scattered and absorbed. To simulate this effect, we need a measure of the distance light has traveled through the material.

One method of estimating this distance is to use depth maps (Hery 2002). This technique is very similar to shadow mapping, and it is practical for real-time rendering. In the first pass, we render the scene from the point of view of the light, storing the distance from the light in a texture. This image is then projected back onto the scene using standard projective texture mapping. In the rendering pass, given a point to be shaded, we look up in this texture to obtain the distance from the light at the point where the ray entered the surface (d_i). By subtracting this value from the distance from the light to the point at which the ray exited the surface (d_o), we obtain an estimate of the distance the light has traveled through the object (s). See Figure 16-3.

Figure 16-3 Calculating the Distance Light Has Traveled Through an Object Using a Depth Map

The obvious problem with this technique is that it works only with convex objects: holes within the object are not accounted for correctly. In practice, this is not a big issue, but it may be possible to get around the problem using depth peeling, which removes layers of the object one by one (Everitt 2003).

You might think that for static objects, it would be possible to paint or precalculate a map representing the approximate thickness of the surface at each point. The advantage of using depth maps is that they take into account the direction of the incoming light, and they also work for animating models (assuming that you regenerate the depth map each frame).

The programs in Listings 16-2 and 16-3 demonstrate how to render distance from the light to a texture. They assume the modelView and modelViewProj matrices have been set up by the application for the light view.

Example 16-2. The Vertex Program for the Depth Pass

struct a2v {
  float4 pos    : POSITION;
  float3 normal : NORMAL;
};
struct v2f {
  float4 hpos : POSITION;
  float  dist : TEXCOORD0; // distance from light
};

v2f main(a2v IN,
         uniform float4x4 modelViewProj,
         uniform float4x4 modelView,
         uniform float    grow)
{
  v2f OUT;
  float4 P = IN.pos;
  P.xyz += IN.normal * grow;  // scale vertex along normal
  OUT.hpos = mul(modelViewProj, P);
  OUT.dist = length(mul(modelView, IN.pos).xyz);
  return OUT;
}

Example 16-3. The Fragment Program for the Depth Pass

float4 main(float dist : TEXCOORD0) : COLOR
{
  return dist;  // write the distance from the light to all channels
}

The fragment program extract in Listing 16-4 shows how to look up in the light-distance texture to calculate depth. For flexibility, this code does the projection in the fragment program, but if you are taking only a few samples, it is more efficient to calculate these transformations in the vertex program.

Example 16-4. The Fragment Program Function for Calculating Penetration Depth Using a Depth Map

// Given a point in object space, look up into the depth texture;
// returns the depth the light has traveled through the object
float trace(float3 P,
            uniform float4x4  lightTexMatrix, // to light texture space
            uniform float4x4  lightMatrix,    // to light space
            uniform sampler2D lightDepthTex)
{
	// transform point into light texture space
	float4 texCoord = mul(lightTexMatrix, float4(P, 1.0));
	// get distance from light at entry point
	float d_i = tex2Dproj(lightDepthTex, texCoord.xyw).r;
	// transform position to light space
	float4 Plight = mul(lightMatrix, float4(P, 1.0));
	// distance of this pixel from light (exit)
	float d_o = length(Plight.xyz);
	// calculate depth
	float s = d_o - d_i;
	return s;
}
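
When only a single sample is taken, the two matrix transforms above can be moved into the vertex program, as suggested earlier. A possible sketch, with assumed struct and entry-point names:

// Sketch: per-vertex version of the transforms in trace()
// (the struct and entry-point names are assumed).
struct v2fScatter {
  float4 hpos          : POSITION;
  float4 lightTexCoord : TEXCOORD0; // projective coordinate into depth map
  float3 lightPos      : TEXCOORD1; // position in light space
};

v2fScatter scatterVP(float4 pos : POSITION,
                     uniform float4x4 modelViewProj,
                     uniform float4x4 lightTexMatrix,
                     uniform float4x4 lightMatrix)
{
  v2fScatter OUT;
  OUT.hpos          = mul(modelViewProj, pos);
  OUT.lightTexCoord = mul(lightTexMatrix, pos);
  OUT.lightPos      = mul(lightMatrix, pos).xyz;
  return OUT;
}

The fragment program then reduces to one projective lookup and a subtraction: s = length(IN.lightPos) - tex2Dproj(lightDepthTex, IN.lightTexCoord.xyw).r.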

Once we have a measure of the distance the light has traveled through the material, there are several ways we can use it. One simple way is to index directly into an artist-created 1D texture that maps distance to color; the color should fall off exponentially with distance. By changing this color map and combining the effect with other, more traditional lighting models, we can produce images of different materials, such as marble or jade.

float si = trace(IN.objCoord, lightTexMatrix, lightMatrix,
                 lightDepthTex);
return tex1D(scatterTex, si);
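
The 1D scatter texture itself can be painted by an artist or baked procedurally, in the same spirit as GenerateSkinLUT above. The following sketch bakes an exponential falloff; the maxDist and tint constants are illustrative assumptions:

// Sketch: baking an exponential falloff into the 1D scatter texture
// (maxDist and tint are illustrative assumptions).
float4 GenerateScatterLUT(float2 P : POSITION) : COLOR
{
  float  maxDist = 2.0;                   // distance covered by the table
  float3 tint    = float3(1.0, 0.4, 0.3); // marble/skin-like tint
  float  s = P.x * maxDist;               // distance encoded along x
  return float4(exp(-s) * tint, 1.0);
}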

Alternatively, we can evaluate the exponential function directly:

return exp(-si * sigma_t) * lightColor;
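
Because absorption is wavelength dependent, sigma_t can also be a per-channel constant, which reproduces the reddish shift described earlier; the coefficient values here are illustrative only:

// Per-channel extinction (values illustrative): red is absorbed
// least, so thin regions shift toward red.
float3 sigma_t = float3(0.5, 1.2, 1.8);
return float4(exp(-si * sigma_t) * lightColor.rgb, 1.0);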

The problem with this technique is that it does not simulate the way light is diffused as it passes through the object. When the light is behind the object, you will often clearly see features from the back side of the object showing through on the front. The solution to this is to take multiple samples at different points on the surface or to use a different diffusion approximation, as discussed in the next section.
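
A minimal sketch of the multiple-sample idea follows: it averages several depth estimates from jittered lookups in the light's texture space. The four-tap offset table and the blurWidth parameter are assumptions, not part of the original text.

// Sketch: averaging jittered depth estimates to diffuse the
// transmitted light (offsets and blurWidth are assumed).
float traceBlurred(float3 P,
                   uniform float4x4  lightTexMatrix,
                   uniform float4x4  lightMatrix,
                   uniform sampler2D lightDepthTex,
                   uniform float     blurWidth)
{
  const float2 offsets[4] = { float2(-1, -1), float2(1, -1),
                              float2(-1, 1),  float2(1, 1) };
  float4 texCoord = mul(lightTexMatrix, float4(P, 1.0));
  float  d_o = length(mul(lightMatrix, float4(P, 1.0)).xyz);
  float  s = 0;
  for (int i = 0; i < 4; i++) {
    float4 tc = texCoord;
    tc.xy += offsets[i] * blurWidth * tc.w; // jitter in texture space
    s += d_o - tex2Dproj(lightDepthTex, tc.xyw).r;
  }
  return s * 0.25; // average of the four estimates
}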

16.3.1 Implementation Details
On GeForce FX hardware, when reading from a depth texture, only the eight most significant bits of the depth value are available, which is not sufficient precision. Instead, we can either use floating-point textures or use the pack and unpack instructions from the NVIDIA fragment program extension to store a 32-bit float value in a regular eight-bit RGBA texture. Floating-point textures do not currently support filtering, so block artifacts will sometimes be visible where the projected texture is magnified. If necessary, bilinear filtering can be performed in the shader, at some performance cost, as shown below.
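
A bilinear fetch can be emulated by sampling the four nearest texels and blending them manually. A sketch, assuming a texSize uniform that holds the depth-map resolution:

// Sketch: manual bilinear filtering for an unfiltered float texture
// (texSize is an assumed uniform holding the texture resolution).
float tex2DBilinear(sampler2D tex, float2 uv, float2 texSize)
{
  float2 st   = uv * texSize - 0.5;  // texel-space coordinate
  float2 base = floor(st);
  float2 f    = st - base;           // fractional position between texels
  float2 uv0   = (base + 0.5) / texSize;
  float2 texel = 1.0 / texSize;
  float t00 = tex2D(tex, uv0).r;
  float t10 = tex2D(tex, uv0 + float2(texel.x, 0)).r;
  float t01 = tex2D(tex, uv0 + float2(0, texel.y)).r;
  float t11 = tex2D(tex, uv0 + texel).r;
  return lerp(lerp(t00, t10, f.x), lerp(t01, t11, f.x), f.y);
}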

Another problem with projected depth maps is that artifacts often appear around the edges of the projection. These are similar to the self-shadowing artifacts seen with shadow mapping, and they result mainly from the limited resolution of the texture map, which causes pixels from the background to be projected onto the edges of the object. The sample code avoids this problem by slightly scaling the object along the vertex normal during the depth-map pass.

For more accurate simulations, we may also need to know the normal, and potentially the surface color, at the point where the light entered the object. We can achieve this by rendering additional passes that write the extra information to textures, which we can then look up in the same way as the depth texture. On systems that support multiple render targets, it may be possible to collapse the depth, normal, and other passes into a single pass that outputs multiple values. See Figure 16-4.

Figure 16-4 Using a Depth Map to Approximate Scattering
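
A sketch of such a combined pass, writing distance and a packed normal to two render targets at once (the output semantics and input names are assumptions, not from the chapter):

// Sketch: one depth pass outputting distance and normal via MRT
// (the structure and names are assumed).
struct mrtOut {
  float4 dist   : COLOR0; // distance from the light
  float4 normal : COLOR1; // surface normal at the entry point
};

mrtOut mainMRT(float  dist : TEXCOORD0,
               float3 N    : TEXCOORD1)
{
  mrtOut OUT;
  OUT.dist   = dist;
  OUT.normal = float4(normalize(N) * 0.5 + 0.5, 1.0); // pack to [0, 1]
  return OUT;
}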
