Real-Time Realistic Skin Translucency

http://www.iryoku.com/translucency/downloads/Real-Time-Realistic-Skin-Translucency.pdf

Many materials possess a degree of translucency. Light scatters within translucent objects (such as tree leaves, paper, or candles) before leaving the object at a certain distance from the incident point. This process is called subsurface scattering (SSS). Simulating SSS in computer graphics is challenging: the rendering process must correctly simulate the light transport beneath an object's surface to accurately capture its appearance (see Figure 1).
Figure 1. A comparison between (a) ignoring subsurface scattering and (b) accounting for it. The skin's reflectance component is softened by scattering within the skin. In addition, the figure compares (c) raw screen-space diffusion and (d) screen-space diffusion with transmittance simulation, calculated using the algorithm proposed in this article. Light travels through thin parts of the skin, which the transmittance component accounts for.

Figure 2 shows how SSS affects real-world objects.

Figure 2. Several objects showing varying degrees of translucency. As the middle and right images show, light transmitted through an object can greatly affect its final appearance.

Human skin is particularly interesting because it consists of multiple translucent layers that scatter light according to their specific composition. This gives skin the characteristic reddish look to which our visual system seems to be particularly well tuned; slight simulation errors are more noticeable in skin than in, say, a wax candle. Correctly depicting human skin is important in fields such as cinematography and games. Whereas the former can count on the luxury of offline rendering, the latter imposes real-time constraints that make the problem much harder. The main challenge is to compute a real-time, perceptually plausible approximation of the complex SSS effects. The approximation should also be easy to implement, so that it integrates well with existing pipelines.

Several real-time algorithms for simulating skin exist (for more information, see the "Related Work in Subsurface-Scattering Simulation" sidebar). Their common key insight is that SSS mainly amounts to a blurring of high-frequency details, which these algorithms perform in texture space. Although the results can be realistic, the algorithms do not scale well: more objects mean more textures to process, so performance quickly decays. This is especially problematic in computer games, in which many characters can appear on screen simultaneously and real-time performance is needed. We believe this is a main issue keeping game programmers from rendering truly realistic human skin; the commonly adopted alternative is to simply ignore SSS, thus decreasing the skin's realism. Additionally, real-time rendering in a computer-game context can become much more difficult, with issues such as background geometry, depth-of-field simulation, or motion blur imposing additional time penalties. In this field, great efforts are spent on obtaining further performance boosts (either in processing or memory usage), which lets the saved resources be spent on other effects, such as higher-resolution textures and increased geometry complexity.

To develop a practical skin-rendering model, and thus solve the scalability issues that arise in multicharacter scenarios, we proposed an algorithm that translates the simulation of scattering effects from texture space to screen space (see Figure 3). This algorithm reduces the problem of simulating translucency to a postprocess, with the added advantage of easy adaptability to any graphics engine. The main consequence is that we have less information to work with in screen space, as opposed to algorithms that work in 3D or texture space. Because only the visible pixels are rendered, we lose the irradiance at all points of the surface not seen from the camera, so we can no longer directly calculate the transmittance of light through thin parts of an object.

Eugene d’Eon and his colleagues proposed an algorithm4 based on translucent shadow maps7 with good results. However, their solution takes up to 30 percent of the total computation time (inferred from the performance analysis in their paper) and requires irradiance maps, which aren’t available when simulating diffusion in screen space. We aim to simulate forward scattering through thin geometry with much lower computational costs, similar to how we’ve made reflectance practical.1 From our observations of the transmittance phenomenon, we derived several assumptions on which we built a heuristic that let us approximately reconstruct the irradiance on the back of an object. This in turn let us approximately calculate transmittance based on the multipole theory.8 The results show that we can produce images whose quality is on a par with photon mapping and other diffusion-based techniques (for a high-level overview of the diffusion approximation on which we based our algorithm, see the “Diffusion Profiles and Convolutions” sidebar). Our technique also requires minimal to no additional processing or memory resources.

Real-Time Transmittance Approaches
Our algorithm builds on two approaches. The first is Simon Green’s approach, which relies on depth maps to estimate the distance a light ray travels inside an object.9 The scene is rendered from the light’s point of view to create a depth map that stores the distance from the objects nearest to the light (see Figure 4).

在这里插入图片描述
Figure 4. A comparison of Simon Green's approach9 (red lines) to that of Eugene d'Eon and his colleagues4,10 (blue lines). The former stores only depth information (z), whereas the latter stores z and the (u, v) coordinates of the points of the surface nearest the light. zout represents the depth of the points where shading is being calculated, while zin is the corresponding depth of the nearest point to the light source. s is the distance between zin and zout. For example, while rendering zout1, this technique accesses the depth map to obtain the depth of zin1, the point nearest to the light. It uses an operation similar to the one used in shadow mapping. However, instead of evaluating a comparison to determine whether a pixel is shadowed, it simply subtracts zin1 from zout1 of the pixel being shaded, obtaining s1, the actual distance the light traveled inside the object.
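The caption's depth subtraction can be sketched as follows (a minimal Python sketch with a hypothetical uniform depth map; `traveled_distance` is our own name, not part of Green's implementation):

```python
# Hypothetical 4x4 light-space depth map: each texel stores the depth
# z_in of the surface nearest the light, as a regular shadow map does.
depth_map = [[0.20] * 4 for _ in range(4)]

def traveled_distance(depth_map, u, v, z_out):
    """Estimate the distance s the light travels inside the object:
    look up z_in at the pixel's shadow-map coordinates (u, v) in [0, 1]
    and subtract it from the depth z_out of the point being shaded."""
    rows, cols = len(depth_map), len(depth_map[0])
    z_in = depth_map[int(v * (rows - 1))][int(u * (cols - 1))]  # nearest texel
    return abs(z_out - z_in)

# s = |z_out - z_in| = |0.35 - 0.20|
s = traveled_distance(depth_map, 0.5, 0.5, 0.35)
```

A real shader would of course sample a filtered depth texture rather than index an array, but the arithmetic is exactly this subtraction.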
After calculating this distance, Green’s approach offers two ways to calculate the attenuation as a function of s:

1. using an artist-created texture that maps distance to attenuation, or
2. attenuating light according to Beer's law, T(s) = e^(−σ_t s), where σ_t is the extinction coefficient of the material being rendered and T(s) is the transmission coefficient that relates the incoming and outgoing lighting.
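Option 2 is the Beer-Lambert law; a minimal sketch, assuming a homogeneous medium with a scalar extinction coefficient:

```python
import math

def transmission(s, sigma_t):
    """Beer-Lambert attenuation: the transmission coefficient
    T(s) = exp(-sigma_t * s) for light traveling a distance s through
    a medium with extinction coefficient sigma_t."""
    return math.exp(-sigma_t * s)

# Zero travel distance means no attenuation, and attenuation compounds
# multiplicatively with distance: T(2s) = T(s)^2.
t_half = transmission(0.5, 1.0)
t_full = transmission(1.0, 1.0)
```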

An inherent problem with this transmittance approach (which also hinders most approaches based on shadow mapping) is that in theory it works only for convex objects. In practice, however, it approximates the solution well enough with arbitrary geometries.

The second approach is d'Eon and his colleagues' texture-space approach,4,10 which extends the idea behind translucent shadow maps to leverage the fact that irradiance is calculated at each point of the surface being rendered. Texture-space diffusion, per se, does not account for scattering in areas that are close in 3D space but far apart in texture space, so simulating this effect requires special measures. Translucent shadow maps store the depth z, irradiance, and normal of each point on the surface nearest the light, whereas the proposed modified translucent shadow maps store z and these points' (u, v) coordinates (see Figure 4). While rendering, for example, zout2, you can access the shadow map to obtain the (uin2, vin2) coordinates, which you can then use to obtain the irradiance at the back of the object. Using zin2 and zout2, you can calculate the distance traveled through the object from the depth information in the shadow map, as in Green's approach.

As Figure 5 shows, the approach approximates the radiant exitance at point C by the radiant exitance M(x, y) at point B, where it's faster to calculate, using the irradiance information E(x, y) around point A at the back of the object:
Figure 5. In d'Eon and his colleagues' approach, the radiant exitance at point C is approximated by the radiant exitance at point B, where it is faster to calculate, using the irradiance information E around point A.4,10 L represents the light vector, N is the surface normal, d is the distance between A and B, s is the distance between A and C, and r is the distance from A to sampled points around it.
M(x, y) = ∫∫ E(x', y') R(√(d² + (x − x')² + (y − y')²)) dx' dy'

As we saw, d'Eon and his colleagues calculate the diffusion profile R(r) using the Gaussian-sum approximation (see Equation D in the "Diffusion Profiles and Convolutions" sidebar):4

R(r) ≈ ∑_{i=1}^{k} w_i G(v_i, r)

This lets them reuse the irradiance maps convolved with each G(v_i, r), already computed for the reflectance calculation, for the transmittance computation.

With shadow-map-based transmittance, high-frequency features in the depth of the shadow map might turn into high-frequency shading features. This is generally a problem when rendering translucent objects, where a softer appearance is expected. Green recommends sampling multiple points from the shadow map to soften these high-frequency depth changes. In d'Eon's texture-space approach, the distance traveled by the light inside the object is stored in the irradiance map's alpha channel and blurred together with the irradiance information. The downside is that there's no obvious way to extend this to multiple lights, because the alpha channel can store the distance of only one light.

Green's approach is physically based if we use Beer's law instead of artist-controlled attenuation textures, but it doesn't account for the attenuation of light in multilayer materials. On the other hand, d'Eon and his colleagues' approach requires texture-space diffusion, because in screen space there are no irradiance maps or irradiance information at the back of the object. Furthermore, the approach requires storing three floats in each shadow map (depth and texture coordinates), whereas regular shadow mapping requires storing only depth. This implies 3× memory usage and 3× bandwidth consumption for each shadow map.

Our Algorithm
Building on these ideas, we present a simple yet physically based transmittance shader. For this, we need a physically based function T(s) that relates the attenuation of the light to the distance s traveled inside the object. First, we make four observations:

  1. For a great range of thin objects, we can approximate the normal at the back of the object by the reversed normal at the current pixel. This approximation is exact when the front and back surfaces are parallel.

  2. When looking at a backlit object from the front (which we consider the most interesting case for transmittance), the viewer does not have accurate information about the irradiance at the back.

  3. For materials with a short mean free path, or for geometry with moderately thick surfaces such as skin, transmittance is a very-low-frequency phenomenon. This is because light is diffused as it travels inside the object, hiding most of its high-frequency features.

  4. In human skin, the albedo (that is, the surface reflectivity) does not vary dramatically over the surface, maintaining a similar skin tone.
Finally, because of observations 2 and 4, we can safely use the albedo at the front to approximate the irradiance at the back of the object. Also, because of observation 3, even if we use high-frequency normals to calculate the irradiance around A, we still get low-frequency transmitted lighting. (Low frequency here means intensity that varies slowly across a region; high frequency means rapid changes between neighboring pixels; see https://www.jianshu.com/p/fbe8c24af108.) We then assume that we can use low-frequency normals and obtain similar results. If we calculate the irradiance at the back using vertex normals instead of normals from the normal map, we ensure this irradiance is free of high frequencies. (For real-time usage, the high-frequency details are in the normal map, not the vertex normals.) In this case, the irradiance at the back, around A, changes slowly, so taking a single irradiance value produces a result similar to performing the full convolution.

We can assume, then, that irradiance in the back is approximately locally constant; all the points around A in Figure 5 will have the same value as point A:

E(x, y) ≈ E
Given a diffusion profile R(r), the transmitted radiant exitance M(x, y) through a planar slab is the convolution of the incoming irradiance with the diffusion profile (see Figure 5):

M(x, y) = ∫∫ E(x', y') R(√(d² + (x − x')² + (y − y')²)) dx' dy'

By our first assumption, E(x, y) = E, so we have

M(x, y) = E ∫₀^∞ 2π r R(√(d² + r²)) dr
Placing the Gaussian-sum approximation for R(r) into this equation, and considering that we define our Gaussian functions to have a unit total diffuse response, we obtain

M(x, y) = E ∑_{i=1}^{k} w_i e^(−d² / (2 v_i))
which depends only on E and d. Approximating d by s and rewriting this equation, we obtain the function T(s) we wanted:

T(s) = M(x, y) / E = ∑_{i=1}^{k} w_i e^(−s² / (2 v_i))

We can now precalculate T(s) and store it in a look-up texture (see Figure 6), to use as the attenuation texture of Green's approach. Using this T(s) texture, we produce results similar to the physically based approach while leveraging a simpler technique.
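A minimal offline sketch of baking this look-up texture (Python; `bake_lut` and the sampling resolution are our own choices, while the per-channel weights and Gaussian widths are the skin-profile constants that appear in the shader of Figure 7):

```python
import math

# Per-RGB-channel weights and Gaussian widths of the six-Gaussian skin
# diffusion profile, as listed in the shader of Figure 7:
# T(s) = sum_i w_i * exp(-s^2 / d_i), evaluated per channel.
PROFILE = [
    ((0.233, 0.455, 0.649), 0.0064),
    ((0.100, 0.336, 0.344), 0.0484),
    ((0.118, 0.198, 0.000), 0.187),
    ((0.113, 0.007, 0.007), 0.567),
    ((0.358, 0.004, 0.000), 1.99),
    ((0.078, 0.000, 0.000), 7.41),
]

def T(s):
    """Transmittance through a slab of thickness s, per RGB channel."""
    rgb = [0.0, 0.0, 0.0]
    for weights, width in PROFILE:
        falloff = math.exp(-s * s / width)
        for c in range(3):
            rgb[c] += weights[c] * falloff
    return tuple(rgb)

def bake_lut(size=64, s_max=2.0):
    """Bake T(s) into a 1D look-up table over thicknesses [0, s_max]."""
    return [T(i * s_max / (size - 1)) for i in range(size)]

lut = bake_lut()
```

At s = 0 each channel sums to 1 (the unit total diffuse response used in the derivation), and the red channel falls off most slowly with thickness, which is what gives backlit skin its reddish tint.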

For rendering, we simply need to add the contributions from the reflectance (obtained as usual) and the transmittance. We can safely sum reflected and transmitted lighting instead of blending them, as other approaches do, because we are using the reversed normal for the transmittance calculations. This implies that reflected and transmitted lighting cannot occur simultaneously, thus avoiding double contribution. Also, although we perform our reflectance SSS calculations in screen space, we obtain the transmittance term in the conventional rendering pass.

As we explained in the section "Real-Time Transmittance Approaches," blurring high-frequency features in the depth of the shadow map is recommended to simulate how light diffuses as it travels through an object. Instead of blurring the distance traveled inside the object, as previous approaches have done, we simply store the transmittance and reflectance together. We then use the screen-space Gaussian convolutions to blur them, yielding good results.

Implementation Details
Although using the reversed normal for transmittance calculations avoids double contribution, it causes nonsmooth transitions between areas illuminated by reflectance and areas of transmittance-only illumination. In these transitions, the dot product between the normal N and the light vector L is zero for both N and −N. To avoid these abrupt illumination changes, we increase the range of the object covered by the transmittance component, using this formula:
E = max(0.3 + (−N) · L, 0)
This means that the transmittance dot product will begin approximately 17 degrees before where it would begin if we used the usual N ⋅ L product. The minus is because we’re using the reversed normal for transmittance calculations.
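The 17-degree figure follows directly from the 0.3 bias; a quick arithmetic check (Python):

```python
import math

# Without the bias, the transmittance term starts where dot(-N, L) = 0,
# that is, at 90 degrees between -N and L. With the 0.3 bias it starts
# where dot(-N, L) = -0.3, that is, at acos(-0.3):
start_angle = math.degrees(math.acos(-0.3))
extra_degrees = start_angle - 90.0  # how much earlier the term kicks in
```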

A problem of using shadow maps for depth approximations is that artifacts can appear around the projection's edges, because pixels from the background are projected onto the object's edges. To solve this problem, Green recommends growing the vertices in the direction of the normals while rendering the shadow maps.9 This ensures that all points fall onto the object while querying the depth from the shadow map. We opted instead to shrink the object in the normal direction while querying the depth map. This yields the same result but has the advantage of using standard, unmodified shadow maps.
Figure 7 shows the 25 lines of code that execute our skin shader’s transmittance calculations, which highlight its simplicity.

float distance(float3 pos, float3 N, int i)
{
	// Shrink the position along the normal to avoid artifacts at the
	// edges of the shadow-map projection.
	float4 shrinkedpos = float4(pos - 0.005 * N, 1.0);
	float4 shwpos = mul(shrinkedpos, lights[i].viewproj);
	float d1 = shwmaps[i].Sample(sampler, shwpos.xy / shwpos.w).r; // depth of the point nearest the light
	float d2 = shwpos.z; // depth of the point being shaded
	return abs(d1 - d2); // distance traveled inside the object
}

// This function can be precomputed for efficiency
float3 T(float s)
{
	return float3(0.233, 0.455, 0.649) * exp(-s * s / 0.0064) +
	       float3(0.1,   0.336, 0.344) * exp(-s * s / 0.0484) +
	       float3(0.118, 0.198, 0.0)   * exp(-s * s / 0.187) +
	       float3(0.113, 0.007, 0.007) * exp(-s * s / 0.567) +
	       float3(0.358, 0.004, 0.0)   * exp(-s * s / 1.99) +
	       float3(0.078, 0.0,   0.0)   * exp(-s * s / 7.41);
}

float s = scale * distance(pos, Nvertex, i);
float E = max(0.3 + dot(-Nvertex, L), 0.0);
float3 transmittance = T(s) * lights[i].color * attenuation * spot * albedo.rgb * E;
// We add the contribution of this light
M += transmittance + reflectance;