In Chapter 7 we discussed many aspects of lighting and shading. However, only the effects of point and directional light sources were presented, thus limiting surfaces to receiving light from a handful of discrete directions. This description of lighting is incomplete; in reality, surfaces receive light from all incoming directions. Outdoor scenes are not lit only by the sun.
If that were true, all surfaces in shadow or facing away from the sun would be black. The sky is an important source of light, caused by sunlight scattering from the atmosphere. The importance of sky light can be seen by looking at a picture of the moon, which lacks sky light because it has no atmosphere (see Figure 8.1).
On overcast days, at dusk, or at dawn, outdoor lighting is all sky light. Diffuse, indirect lighting is even more important in indoor scenes. Since directly visible light sources can cause an unpleasant glare, indoor lighting is often engineered to be mostly or completely indirect. The reader is unlikely to be interested in rendering only moonscapes. For realistic rendering, the effects of indirect and area lights must be taken into account. This is the topic of the current chapter. Until now, a simplified form of the radiometric equations has sufficed, since the restriction of illumination to point and directional lights enabled the conversion of integrals into summations. The topics discussed in this chapter require the full radiometric equations, so we will begin with them. A discussion of ambient and area lights will follow. The chapter will close with techniques for utilizing the most general lighting environments, with arbitrary radiance values incoming from all directions.
Figure 8.1. Scene on the moon, which has no sky light due to the lack of an atmosphere to scatter sunlight. This shows what a scene looks like when it is only lit by a direct light source. Note the pitch-black shadows and lack of any detail on surfaces facing away from the sun. This photograph shows Astronaut James B. Irwin next to the Lunar Roving Vehicle during the Apollo 15 mission. The shadow in the foreground is from the Lunar Module. Photograph taken by Astronaut David R. Scott, Commander. (Image from NASA’s collection.)
8.1 Radiometry for Arbitrary Lighting
In Section 7.1 the various radiometric quantities were discussed. However, some relationships between them were omitted, since they were not important for the discussion of lighting with point and directional light sources.
We will first discuss the relationship between radiance and irradiance. Let us look at a surface point, and how it is illuminated by a tiny patch of incident directions with solid angle $d\omega_i$ (see Figure 8.2).
Figure 8.2. A surface point illuminated from all directions on the hemisphere. An example light direction l and an infinitesimal solid angle dωi around it are shown.
Since $d\omega_i$ is very small, it can be accurately represented by a single incoming direction l, and we can assume that the incoming radiance from all directions in the patch is a constant $L_i(\mathbf{l})$.
As discussed in Section 7.1, radiance is a measure of light in a single ray, more precisely defined as the density of light flux (power) with respect to both area (measured on a plane perpendicular to the ray) and solid angle. Irradiance is a measure of light incoming to a surface point from all directions, defined as the density of light flux with respect to area (measured on a plane perpendicular to the surface normal n). It follows from these definitions that
$$L_i(\mathbf{l}) = \frac{dE}{d\omega_i\,\overline{\cos}\,\theta_i}, \qquad (8.1)$$
where $\overline{\cos}$ is our notation for a cosine function clamped to non-negative values, $dE$ is the differential amount of irradiance contributed to the surface by the incoming light from $d\omega_i$, and $\theta_i$ is the angle between the incoming light vector l and the surface normal. Isolating $dE$ results in
$$dE = L_i(\mathbf{l})\,\overline{\cos}\,\theta_i\,d\omega_i. \qquad (8.2)$$
Now that we know how much irradiance is contributed to the surface from the patch of directions $d\omega_i$ around l, we wish to compute the total irradiance at the surface, resulting from light in all directions above the surface. If the hemisphere of directions above the surface (which we will call Ω) is divided up into many tiny solid angles, we can use Equation 8.2 to compute $dE$ from each and sum the results. This is an integration with respect to l, over Ω:
$$E = \int_{\Omega} L_i(\mathbf{l}) \cos\theta_i\, d\omega_i. \qquad (8.3)$$
The cosine in Equation 8.3 is not clamped, since the integration is only performed over the region where the cosine is positive. Note that in this integration, l is swept over the entire hemisphere of incoming directions; it is not a specific "light source direction." The idea is that any incoming direction can (and usually will) have some radiance associated with it.
Equation 8.3 describes an important relationship between radiance and irradiance: Irradiance is the cosine-weighted integral of radiance over the hemisphere.
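To make this relationship concrete, here is a minimal numerical sketch (my illustration, not from the book) that estimates Equation 8.3 by Monte Carlo sampling of the hemisphere. The radiance function passed in is a hypothetical stand-in for whatever lighting environment is being integrated.

```python
import numpy as np

def irradiance_mc(radiance, n_samples=100_000):
    """Monte Carlo estimate of E = integral over the hemisphere of
    L(l) cos(theta) dw.

    `radiance` maps a unit direction (3-vector, with z along the surface
    normal) to a scalar radiance value. Directions are sampled uniformly
    over the hemisphere, whose total solid angle is 2*pi."""
    # Uniform hemisphere sampling: z = cos(theta) uniform in [0, 1).
    z = np.random.rand(n_samples)
    phi = 2.0 * np.pi * np.random.rand(n_samples)
    s = np.sqrt(1.0 - z * z)
    dirs = np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)
    L = np.array([radiance(d) for d in dirs])
    # pdf = 1 / (2*pi), so the estimator is mean(L * cos) * 2*pi.
    return 2.0 * np.pi * np.mean(L * z)

# Constant radiance L = 1 from all directions should give E = pi.
print(irradiance_mc(lambda d: 1.0))  # ~3.14159
```

The constant-radiance check (E = πL) is a useful sanity test, and the same fact reappears below in the discussion of ambient light.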
When rendering, we are interested in computing the outgoing radiance $L_o$ in the view direction v, since this quantity determines the shaded pixel color. To see how $L_o$ relates to incoming radiance $L_i$, recall the definition of the BRDF:
$$f(\mathbf{l}, \mathbf{v}) = \frac{dL_o(\mathbf{v})}{dE(\mathbf{l})}. \qquad (8.4)$$
Combining this with Equation 8.2 and integrating over the hemisphere yields the reflectance equation
$$L_o(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \otimes L_i(\mathbf{l}) \cos\theta_i\, d\omega_i, \qquad (8.5)$$
where the ⊗ symbol (piecewise vector multiply) is used, since both the BRDF $f(\mathbf{l}, \mathbf{v})$ and the incoming radiance $L_i(\mathbf{l})$ vary over the visible spectrum, which in practice for real-time rendering purposes means that they are both RGB vectors. This is the full version of the simplified equation we used in Chapter 7 for point and directional light sources. Equation 8.5 shows that to compute the radiance outgoing in a given direction v, the incoming radiance times the BRDF times the cosine of the incoming angle $\theta_i$ needs to be integrated over the hemisphere above the surface. It is interesting to compare Equation 8.5 to the simplified version used in Chapter 7:
$$L_o(\mathbf{v}) = \sum_{k=1}^{n} f(\mathbf{l}_k, \mathbf{v}) \otimes E_{L_k}\,\overline{\cos}\,\theta_{i_k}. \qquad (8.6)$$
The most commonly used parameterization of the hemisphere uses polar coordinates (φ and θ). For this parameterization, the differential solid angle $d\omega$ is equal to $\sin\theta\, d\theta\, d\phi$. Using this, a double-integral form of Equation 8.5 can be derived, which uses polar coordinates:
$$L_o(\theta_o, \phi_o) = \int_{\phi_i=0}^{2\pi} \int_{\theta_i=0}^{\pi/2} f(\theta_i, \phi_i, \theta_o, \phi_o)\, L(\theta_i, \phi_i) \cos\theta_i \sin\theta_i\, d\theta_i\, d\phi_i. \qquad (8.7)$$
The angles θi, φi, θo, and φo are shown in Figure 7.15 on page 224.
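The polar-coordinate form maps directly onto a double loop. The following sketch (an illustration, not the book's code) evaluates Equation 8.7 with simple midpoint quadrature; the Lambertian check at the end uses the fact, stated in Section 8.2 below, that a Lambertian BRDF is $\mathbf{c}_{\text{diff}}/\pi$.

```python
import numpy as np

def outgoing_radiance(brdf, radiance, n_theta=64, n_phi=128):
    """Evaluate Lo = double integral of
    f * Li * cos(theta) * sin(theta) dtheta dphi
    over the hemisphere, with midpoint quadrature."""
    d_theta = (np.pi / 2) / n_theta
    d_phi = (2 * np.pi) / n_phi
    theta = (np.arange(n_theta) + 0.5) * d_theta
    phi = (np.arange(n_phi) + 0.5) * d_phi
    t, p = np.meshgrid(theta, phi, indexing="ij")
    integrand = brdf(t, p) * radiance(t, p) * np.cos(t) * np.sin(t)
    return integrand.sum() * d_theta * d_phi

# Lambertian BRDF f = c_diff / pi under constant radiance Li = 1:
# the integral of cos*sin gives pi, so Lo = c_diff.
c_diff = 0.8
lo = outgoing_radiance(lambda t, p: c_diff / np.pi, lambda t, p: 1.0)
print(lo)  # ~0.8
```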
Figure 7.15. The BRDF. Azimuth angles φi and φo are given with respect to a given tangent vector t. The relative azimuth angle φ (used for isotropic BRDFs instead of φi and φo) does not require a reference tangent vector.
Figure 8.3. A surface illuminated by a light source. On the left, the light source is modeled as a point or directional light. On the right, it is modeled as an area light source.
8.2 Area Light Sources
Previous chapters have described point or directional light sources. These light sources illuminate a surface point from one direction only. However, real lights illuminate a surface point from a range of directions; they subtend (cover) a nonzero solid angle. Figure 8.3 shows a surface that is illuminated by a light source. It is modeled both as a point or directional source and as an area light source with a nonzero size. The point or directional light source (on the left) illuminates the surface from a single direction $\mathbf{l}_L$, which forms an angle $\theta_{i_L}$ with the normal n. Its brightness is represented by its irradiance $E_L$, measured in a plane perpendicular to $\mathbf{l}_L$. The point or directional light's contribution to the outgoing radiance $L_o(\mathbf{v})$ in direction v is $f(\mathbf{l}_L, \mathbf{v}) \otimes E_L\,\overline{\cos}\,\theta_{i_L}$.
On the other hand, the brightness of the area light source (on the right) is represented by its radiance $L_L$. The area light subtends a solid angle of $\omega_L$ from the surface location. Its contribution to the outgoing radiance in direction v is the integral of $f(\mathbf{l}, \mathbf{v}) \otimes L_L \cos\theta_i$ over $\omega_L$. The fundamental approximation behind point and directional light sources is expressed in the following equation:
$$L_o(\mathbf{v}) = \int_{\omega_L} f(\mathbf{l}, \mathbf{v}) \otimes L_L \cos\theta_i\, d\omega_i \approx f(\mathbf{l}_L, \mathbf{v}) \otimes E_L\,\overline{\cos}\,\theta_{i_L}. \qquad (8.8)$$
The amount that an area light source contributes to the illumination of a surface location is a function of both its radiance ($L_L$) and its size as seen from that location ($\omega_L$). Point and directional light sources are approximations of area light sources; they cannot be realized in practice, since their zero solid angle implies an infinite radiance.
The approximation in Equation 8.8 is much less costly than computing the integral, so it is worth using when possible. Understanding the visual errors that are introduced by the approximation will help to know when to use it, and what approach to take when it cannot be used. These errors will depend on two factors: how large the light source is (measured by the solid angle it covers from the shaded point), and how glossy the surface is. For light sources that subtend a very small solid angle, the error is small. For very rough surfaces, the error is small as well. We will take a closer look at the important special case of Lambertian surfaces.
It turns out that for Lambertian surfaces, the approximation is exact under some circumstances. Recall that for Lambertian surfaces, the outgoing radiance is proportional to the irradiance:
$$L_o(\mathbf{v}) = \frac{\mathbf{c}_{\text{diff}}}{\pi} \otimes E, \qquad (8.9)$$
where $\mathbf{c}_{\text{diff}}$ is the diffuse color of the surface.
This lets us use equations for irradiance, which are simpler than the corresponding equations for outgoing radiance. The equivalent of Equation 8.8 for computing irradiance is
$$E = \int_{\omega_L} L_L \cos\theta_i\, d\omega_i \approx E_L\,\overline{\cos}\,\theta_{i_L}. \qquad (8.10)$$
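As a small numerical illustration of when Equation 8.10 is and is not accurate, the following sketch (my construction, not the book's) considers the special case of a spherical-cap light source centered on the surface normal, where the exact irradiance has the closed form $E = \pi L_L \sin^2\alpha$.

```python
import numpy as np

L_L = 1.0  # radiance of the area light

for alpha_deg in (1, 5, 15, 45, 90):
    alpha = np.radians(alpha_deg)
    # Exact irradiance from a spherical cap of half-angle alpha centered
    # on the normal: E = pi * L * sin^2(alpha).
    exact = np.pi * L_L * np.sin(alpha) ** 2
    # Point-light approximation: E_L = L * omega_L, with cos(theta_i) = 1.
    omega_L = 2.0 * np.pi * (1.0 - np.cos(alpha))
    approx = L_L * omega_L
    print(f"{alpha_deg:3d} deg  exact={exact:.4f}  approx={approx:.4f}")
```

For small half-angles both expressions converge to $\pi L \alpha^2$, and the error grows with the subtended solid angle, matching the discussion above.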
To understand how irradiance behaves in the presence of area light sources, the concept of vector irradiance is useful. Vector irradiance was introduced by Gershun (who called it the light vector) and further described by Arvo. Using vector irradiance, an area light source of arbitrary size and shape can be accurately converted into a point or directional light source. There are some caveats, which we will discuss later.
Imagine a distribution of radiance coming into a point p in space (see Figure 8.4). For now, we assume that the radiance does not vary with wavelength, so that the incoming radiance from each direction can be represented as a scalar, rather than as a spectral distribution or RGB triple. For every infinitesimal solid angle $d\omega$ centered on an incoming direction l, a vector is constructed that is aligned with l and has a length equal to the (scalar) radiance incoming from that direction times $d\omega$. Finally, all these vectors are summed to produce a total vector e (see Figure 8.5). This is the vector irradiance. More formally, the vector irradiance is computed thus:
$$\mathbf{e}(\mathbf{p}) = \int_{\Theta} L_i(\mathbf{p}, \mathbf{l})\,\mathbf{l}\, d\omega_i, \qquad (8.11)$$
where Θ indicates that the integral is performed over the entire sphere of directions.
Figure 8.4. Computation of vector irradiance. Point p is surrounded by light sources of various shapes, sizes, and radiance distributions (the brightness of the yellow color indicates the amount of radiance emitted). The orange arrows are vectors pointing in all directions from which there is any incoming radiance, and each length is equal to the amount of radiance coming from that direction times the infinitesimal solid angle covered by the arrow (in principle there should be an infinite number of arrows). The vector irradiance is the sum of all these vectors.
Figure 8.5. Vector irradiance. The large orange arrow is the result of summing the small arrows in Figure 8.4. The vector irradiance can be used to compute the net irradiance of any plane at point p.
Figure 8.6. Vector irradiance of a single area light source. On the left, the arrows represent the vectors used to compute the vector irradiance. On the right, the large orange arrow is the vector irradiance e. The short black and red vectors represent the range of surface normals for which e can be used to compute the irradiance from the area light source. The red dashed lines represent the extent of the light source, and the red vectors (which are perpendicular to the red dashed lines) define the limits of the set of surface normals. Normals outside this set will have an angle greater than 90° with some part of the area light source. Such normals cannot use e to compute their irradiance.
The vector irradiance e is interesting because, once computed, it can be used to find the net irradiance at p through a plane of any orientation by performing a dot product (see Figure 8.5):
$$E(\mathbf{p}, \mathbf{n}) - E(\mathbf{p}, -\mathbf{n}) = \mathbf{n} \cdot \mathbf{e}(\mathbf{p}), \qquad (8.12)$$
where n is the normal to the plane. The net irradiance through a plane is the difference between the irradiance flowing through the "positive side" of the plane (defined by the plane normal n) and the irradiance flowing through the "negative side." By itself, the net irradiance is not useful for shading.
But if no radiance is emitted through the "negative side" (in other words, the light distribution being analyzed has no parts for which the angle between l and n exceeds 90°), then $E(\mathbf{p}, -\mathbf{n}) = 0$ and
$$E(\mathbf{p}, \mathbf{n}) = \mathbf{n} \cdot \mathbf{e}(\mathbf{p}). \qquad (8.13)$$
The vector irradiance of a single area light source can be used with Equation 8.13 to light Lambertian surfaces with any normal n, as long as n does not face more than 90° away from any part of the area light source (see Figure 8.6).
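A sketch of how this could look in practice, assuming a hypothetical scalar radiance function: the vector irradiance e is estimated by uniform sampling of the sphere, and the dot product of Equation 8.13 then yields the irradiance for any normal facing the whole light.

```python
import numpy as np

def vector_irradiance(radiance, n_samples=200_000):
    """e = integral over the full sphere of L(l) * l dw, estimated by
    uniform sphere sampling (pdf = 1 / (4*pi))."""
    v = np.random.normal(size=(n_samples, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
    L = np.array([radiance(d) for d in dirs])
    return 4.0 * np.pi * np.mean(L[:, None] * dirs, axis=0)

# A small light: radiance 10 inside a 10-degree cone around +z, else 0.
cos_cut = np.cos(np.radians(10.0))
e = vector_irradiance(lambda d: 10.0 if d[2] > cos_cut else 0.0)

# The irradiance for a normal n facing the whole light is dot(n, e)
# (Equation 8.13); both normals below satisfy that condition.
for n in (np.array([0.0, 0.0, 1.0]),
          np.array([0.0, np.sin(0.5), np.cos(0.5)])):
    print(np.dot(n, e))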
The vector irradiance is for a single wavelength. Given lights with different colors, the summed vector could be in a different direction for each wavelength. However, if all the light sources have the same spectral distribution, a single vector irradiance can be used for all wavelengths.
Figure 8.7. Highlights on smooth objects are sharp reflections of the light source shape. On the left, this appearance has been approximated by thresholding the highlight value of a Blinn-Phong shader. On the right, the same object is rendered with an unmodified Blinn-Phong shader for comparison.
In principle, the reflectance equation does not distinguish between light arriving directly from a light source and indirect light that has been scattered from the sky or objects in the scene. All incoming directions have radiance, and the reflectance equation integrates over them all. However, in practice, direct light is usually distinguished by relatively small solid angles with high radiance values, and indirect light tends to diffusely cover the rest of the hemisphere with moderate to low radiance values. This provides good practical reasons to handle the two separately. This is true even for offline rendering systems; the performance advantages of using separate techniques tailored to direct and indirect light are too great to ignore.
8.3 Ambient Light
Ambient light is the simplest model of indirect light, where the indirect radiance does not vary with direction and has a constant value $L_A$. Even such a simple model of indirect light improves visual quality significantly. A scene with no indirect light appears highly unrealistic. Objects in shadow or facing away from the light in such a scene would be completely black, which is unlike any scene found in reality. The moonscape in Figure 8.1 comes close, but even in such scenes some indirect light is bouncing from nearby objects.
The exact effects of ambient light will depend on the BRDF. For Lambertian surfaces, the constant radiance $L_A$ results in a constant contribution to outgoing radiance, regardless of surface normal n or view direction v:
$$L_o(\mathbf{v}) = \frac{\mathbf{c}_{\text{diff}}}{\pi} \otimes \int_{\Omega} L_A \cos\theta_i\, d\omega_i = \mathbf{c}_{\text{diff}} \otimes L_A.$$
When shading, this constant outgoing radiance contribution is added to the contributions from direct light sources:
$$L_o(\mathbf{v}) = \mathbf{c}_{\text{diff}} \otimes \left( L_A + \sum_{k=1}^{n} \frac{E_{L_k}}{\pi}\,\overline{\cos}\,\theta_{i_k} \right).$$
For arbitrary BRDFs, the equivalent equation is
$$L_o(\mathbf{v}) = L_A \otimes \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \cos\theta_i\, d\omega_i + \sum_{k=1}^{n} f(\mathbf{l}_k, \mathbf{v}) \otimes E_{L_k}\,\overline{\cos}\,\theta_{i_k},$$
where the constant $L_A$ has been moved outside the integral.
We define the ambient reflectance $R_A(\mathbf{v})$ thus:
$$R_A(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \cos\theta_i\, d\omega_i.$$
Like any reflectance quantity, $R_A(\mathbf{v})$ has values between 0 and 1 and may vary over the visible spectrum, so for rendering purposes it is an RGB color. In real-time rendering applications, it is usually assumed to have a view-independent, constant value, referred to as the ambient color $\mathbf{c}_{\text{amb}}$.
For Lambertian surfaces, $\mathbf{c}_{\text{amb}}$ is equal to the diffuse color $\mathbf{c}_{\text{diff}}$. For other surface types, $\mathbf{c}_{\text{amb}}$ is usually assumed to be a weighted sum of the diffuse and specular colors [192, 193]. This tends to work quite well in practice, although the Fresnel effect implies that some proportion of white should ideally be mixed in, as well. Using a constant ambient color simplifies Equation 8.20, yielding the ambient term in its most commonly used form:
$$L_o(\mathbf{v}) = \mathbf{c}_{\text{amb}} \otimes L_A + \sum_{k=1}^{n} f(\mathbf{l}_k, \mathbf{v}) \otimes E_{L_k}\,\overline{\cos}\,\theta_{i_k}.$$
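A minimal sketch of this shading model, with hypothetical values for the ambient radiance and light list; the per-light term uses the Lambertian BRDF for concreteness.

```python
import numpy as np

def shade_lambert(c_diff, L_A, lights, n):
    """Ambient term in its most common form, for a Lambertian surface:
    Lo = c_amb * L_A + sum over lights of (c_diff/pi) * E_L * clamped cos,
    with c_amb = c_diff as described above."""
    lo = c_diff * L_A  # constant ambient contribution
    for l_dir, E_L in lights:
        lo += (c_diff / np.pi) * E_L * max(0.0, np.dot(n, l_dir))
    return lo

c_diff = np.array([0.5, 0.4, 0.3])            # diffuse RGB color
L_A = np.array([0.10, 0.10, 0.15])            # constant ambient radiance
lights = [(np.array([0.0, 0.0, 1.0]),         # light direction
           np.array([2.0, 2.0, 2.0]))]        # irradiance E_L
print(shade_lambert(c_diff, L_A, lights, np.array([0.0, 0.0, 1.0])))
```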
The reflectance equation ignores occlusion—the fact that many surface points will be blocked from “seeing” some of the incoming directions by other objects, or other parts of the same object. This reduces realism in general, but it is particularly noticeable for ambient lighting, which appears extremely flat when occlusion is ignored. Methods for addressing this will be discussed in Sections 9.2 and 9.10.1.
8.4.2 Sphere Mapping
Initially mentioned by Williams, and independently developed by Miller and Hoffman, this was the first environment mapping technique supported in general commercial graphics hardware. The texture image is derived from the appearance of the environment as viewed orthographically in a perfectly reflective sphere, so this texture is called a sphere map. One way to make a sphere map of a real environment is to take a photograph of a shiny sphere, such as a Christmas tree ornament. See Figure 8.11 for an example.
The resulting circular image is also sometimes called a light probe, as it captures the lighting situation at the sphere's location. Sphere map textures for synthetic scenes can be generated using ray tracing or by warping the images generated for a cubic environment map. See Figure 8.12 for an example of environment mapping done with sphere maps.
The sphere map has a basis (see Appendix A) that is the frame of reference in which the texture was generated. That is, the image is viewed along some axis f in world space, with u as the up vector for the image and h going horizontally to the right (all of them normalized). This gives a basis matrix:
$$\begin{pmatrix} h_x & h_y & h_z & 0 \\ u_x & u_y & u_z & 0 \\ f_x & f_y & f_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
To access the sphere map, first transform the surface normal n and the view vector v using this matrix. This yields n′ and v′ in the sphere map's space. The reflected view vector is then computed to access the sphere map texture:
$$\mathbf{r} = 2(\mathbf{n}' \cdot \mathbf{v}')\mathbf{n}' - \mathbf{v}', \qquad (8.27)$$
with r being the resulting reflected view vector, in the sphere map's space.
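The following sketch shows one way the lookup could be implemented. It assumes the OpenGL-style convention in which the map space has +z pointing from the sphere back toward the camera; the mapping from r to texture coordinates is the classic sphere map formulation, with its singularity at r = (0, 0, −1). Exact signs depend on how the basis above is set up.

```python
import numpy as np

def reflect(v, n):
    # r = 2 (n . v) n - v, with v pointing from the surface toward the eye
    return 2.0 * np.dot(n, v) * n - v

def sphere_map_uv(r):
    """Map a reflected view vector r (already in the sphere map's space)
    to [0,1]^2 texture coordinates; this is the classic formulation also
    used by fixed-function OpenGL."""
    m = 2.0 * np.sqrt(r[0] ** 2 + r[1] ** 2 + (r[2] + 1.0) ** 2)
    return np.array([r[0] / m + 0.5, r[1] / m + 0.5])

# A mirror reflection straight back toward the viewer lands at the center.
r = reflect(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
print(sphere_map_uv(r))  # (0.5, 0.5)
```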
Figure 8.11. A light probe image used for sphere mapping, formed by photographing a reflective sphere. This image is of the interior of Grace Cathedral, San Francisco. (Image courtesy of Paul Debevec, debevec.org.)
Figure 8.12. Specular highlighting. Per-vertex specular is shown on the left, environment mapping with one light in the middle, and environment mapping of the entire surrounding scene on the right. (Images courtesy of J.L. Mitchell, M. Tatro, and I. Bullard.)
Figure 8.13. Given the constant view direction and reflected view vector r in the sphere map’s space, the sphere map’s normal n is halfway between these two. For a unit sphere at the origin, the intersection point h has the same coordinates as the unit normal n.
Also shown is how hy (measured from the origin) and the sphere map texture coordinate v (not to be confused with the view vector v) are related.
8.4.3 Cubic Environment Mapping
In 1986, Greene [449] introduced another EM technique. This method is far and away the most popular EM method implemented in modern graphics hardware, due to its speed and flexibility. The cubic environment map (a.k.a. EM cube map) is obtained by placing the camera in the center of the environment and then projecting the environment onto the sides of a cube positioned with its center at the camera's location. The images of the cube are then used as the environment map. In practice, the scene is rendered six times (once for each cube face) with the camera at the center of the cube, looking at each cube face with a 90° view angle. This type of environment map is shown in Figure 8.14. In Figure 8.15, a typical cubic environment map is shown.
A great strength of Greene's method is that environment maps can be generated relatively easily by any renderer (versus Blinn and Newell's method, which uses a spherical projection), and can be generated in real time. See Figure 8.16 for an example. Cubic environment mapping is view independent, unlike sphere mapping. It also has much more uniform sampling characteristics than Blinn and Newell's method, which oversamples the poles compared to the equator. Uniformity of sampling is important for environment maps, as it helps maintain an equal level of quality of reflection across a surface. Wan et al. [1317, 1318] present a mapping called the isocube that has a lower sampling rate discrepancy than cube mapping and accesses the cube map hardware for rapid display.
For most applications, cube mapping provides acceptably high quality at extremely high speeds. Accessing the cube map is extremely simple: The reflected view vector r is used directly as a three-component texture coordinate (it does not even need to be normalized). The only caveat is that r must be in the same coordinate system that the environment map is defined in (usually world space).
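Although the hardware performs this lookup directly, the face-selection logic it implies can be emulated in software. The sketch below follows the OpenGL face orientation convention; other APIs differ in per-face handedness, so treat the exact signs as an assumption.

```python
import numpy as np

def cube_map_lookup(r):
    """Convert an (unnormalized) reflected view vector into a cube map
    face index and [0,1]^2 coordinates on that face, mimicking what the
    hardware does. Face order here: +x, -x, +y, -y, +z, -z."""
    ax, ay, az = abs(r[0]), abs(r[1]), abs(r[2])
    if ax >= ay and ax >= az:          # x-major
        face = 0 if r[0] > 0 else 1
        u, v, ma = -r[2] * np.sign(r[0]), -r[1], ax
    elif ay >= az:                     # y-major
        face = 2 if r[1] > 0 else 3
        u, v, ma = r[0], r[2] * np.sign(r[1]), ay
    else:                              # z-major
        face = 4 if r[2] > 0 else 5
        u, v, ma = r[0] * np.sign(r[2]), -r[1], az
    # Divide by the major-axis magnitude; normalization of r is not needed.
    return face, 0.5 * (u / ma + 1.0), 0.5 * (v / ma + 1.0)

print(cube_map_lookup(np.array([0.3, -0.2, 0.9])))  # lands on the +z face
```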
With the advent of Shader Model 4.0, cube maps can be generated in a single pass using the geometry shader. Instead of having to form six different views and run the geometry through the pipeline six times, the geometry shader replicates incoming data into six separate objects. Each object is transformed using a different view, and the results are sent to different faces stored in a texture array [261].
Note that since environment maps usually contain high dynamic range values, care should be taken when generating them dynamically, so that the right range of values is written into the environment map.
The primary limitation of environment maps is that they represent only distant objects. This can be overcome by using the position of the surface to make the cube map behave more like a local object, instead of an object that is infinitely far away. In this way, as an object moves within a scene, its shading will be more influenced by the part of the environment nearer to it. Bjorke [93] uses a cube with a physical location and extent. He casts a ray from the shaded surface location in the direction of the reflection vector. The vector from the cube center to the ray-cube intersection point is used to look up the cube map. Szirmay-Kalos et al. [1234] store a distance for each texel, allowing more elaborate environments to be reflected realistically.
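A sketch of Bjorke-style local correction, under the assumption that the cube map was captured inside an axis-aligned box; the ray-box intersection uses the standard slab method, and all names are mine.

```python
import numpy as np

def local_cubemap_dir(pos, r, box_min, box_max, cube_center):
    """Intersect the reflection ray with the box the cube map represents,
    then return the vector from the cube center to the intersection point.
    That vector, rather than r itself, indexes the cube map."""
    r = r / np.linalg.norm(r)
    # Slab method; pos is assumed to be inside the box, so we only need
    # the nearest positive exit distance along the ray.
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (box_min - pos) / r
        t2 = (box_max - pos) / r
    t_exit = np.min(np.maximum(t1, t2))
    hit = pos + t_exit * r
    return hit - cube_center

d = local_cubemap_dir(pos=np.array([2.0, 1.0, 0.0]),
                      r=np.array([1.0, 0.0, 0.0]),
                      box_min=np.array([-5.0, -5.0, -5.0]),
                      box_max=np.array([5.0, 5.0, 5.0]),
                      cube_center=np.zeros(3))
print(d)  # points at the actual box wall hit point, not just along +x
```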
8.4.4 Parabolic Mapping
Heidrich and Seidel [532, 535] propose using two environment textures to perform parabolic environment mapping.
The idea is similar to that of sphere mapping, but instead of generating the texture by recording the reflection of the environment off a sphere, two paraboloids are used. Each paraboloid creates a circular texture similar to a sphere map, with each covering an environment hemisphere. See Figure 8.17.
Figure 8.17. Two paraboloid mirrors that capture an environment using diametrically opposite views. The reflected view vectors all extend to meet at the center of the object, the foci of both paraboloids.
As with sphere mapping, the reflected view ray is computed with Equation 8.27 in the map's basis (i.e., in its frame of reference). The sign of the z-component of the reflected view vector is used to decide which of the two textures to access. Then the access function is simply
$$u = \frac{r_x}{2(1 + r_z)} + 0.5, \qquad v = \frac{r_y}{2(1 + r_z)} + 0.5$$
for the front image, and the same, with sign reversals for $r_z$, for the back image.
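A sketch of the two-texture selection and the access function above; the variable names are mine, and the vector r is assumed to already be in the map's basis.

```python
import numpy as np

def parabolic_map_uv(r):
    """Dual-paraboloid lookup: pick the front (r_z >= 0) or back texture,
    then map to [0,1]^2 using the access function above."""
    r = r / np.linalg.norm(r)
    if r[2] >= 0.0:   # front paraboloid
        denom = 2.0 * (1.0 + r[2])
        return "front", np.array([r[0] / denom + 0.5, r[1] / denom + 0.5])
    else:             # back paraboloid: reverse the signs involving r_z
        denom = 2.0 * (1.0 - r[2])
        return "back", np.array([r[0] / denom + 0.5, r[1] / denom + 0.5])

print(parabolic_map_uv(np.array([0.0, 0.0, 1.0])))  # front center (0.5, 0.5)
print(parabolic_map_uv(np.array([1.0, 0.0, 0.0])))  # front edge   (1.0, 0.5)
```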
A texture transformation matrix can be set up to apply this access function automatically.
The authors present an implementation of this method using the OpenGL fixed-function pipeline. The problem of interpolating across the seam between the two textures is handled by accessing both paraboloid textures. If a sample is not on one texture, it is black, and each sample will be on one and only one of the textures. Summing the results (one of which is always zero) gives the environment's contribution.
There is no singularity with the parabolic map, so interpolation can be done between any two reflected view directions. The parabolic map has more uniform texel sampling of the environment compared to the sphere map, and even to the cube map. Parabolic mapping, like cubic mapping, is view-independent. As with sphere maps, parabolic mapping can be done on any graphics hardware that supports texturing. The main drawback of parabolic mapping is in making the maps themselves. Cubic maps are straightforward to make for synthetic scenes and can be regenerated on the fly, and sphere maps of real environments are relatively simple to photograph, but parabolic maps have neither advantage. Straight edges in world space become curved in parabolic space. Parabolic maps have to be created by tessellating objects, by warping images, or by using ray tracing.
One common use of parabolic mapping is for storing data for a hemisphere of view directions. For this situation only a single parabolic map is needed. Section 7.7.2 gives an example, using parabolic maps to capture angle-dependent surface reflectivity data for factorization.
8.6 Irradiance Environment Mapping
The previous section discussed using filtered environment maps for glossy specular reflections. Filtered environment maps can be used for diffuse reflections as well [449, 866]. Environment maps for specular reflections have some common properties, whether they are unfiltered and used for mirror reflections, or filtered and used for glossy reflections. In both cases, specular environment maps are indexed with the reflected view vector, and they contain radiance values.(4)
(4) Unfiltered environment maps contain incoming radiance values. Filtered environment maps (more properly called reflection maps) contain outgoing radiance values.
In contrast, environment maps for diffuse reflections are indexed with the surface normal n, and they contain irradiance values. For this reason they are called irradiance environment maps [1045]. Figure 8.20 shows
that glossy reflections with environment maps have errors under some conditions
due to their inherent ambiguity—the same reflected view vector
may correspond to different reflection situations. This is not the case with
irradiance environment maps. The surface normal contains all of the relevant
information for diffuse reflection. Since irradiance environment maps
are extremely blurred compared to the original illumination, they can be
stored at significantly lower resolution.
Irradiance environment maps are created by applying a very wide filter
(covering an entire hemisphere) to the original environment map. The filter
includes the cosine factor (see Figure 8.21). The sphere map in Figure 8.11, filtered into an irradiance map in this way, can be seen in Figure 8.22.
Figure 8.21. Computing an irradiance environment map. The cosine weighted hemisphere
around the surface normal is sampled from the environment texture (a cube map
in this case) and summed to obtain the irradiance, which is view-independent. The
green square represents a cross section of the cube map, and the red tick marks denote
the boundaries between texels. Although a cube map representation is shown, any
environment representation can be used.
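The filtering loop implied by Figure 8.21 can be sketched as follows (my illustration); the environment is a hypothetical function of direction here, but a cube map sampler could be substituted. Note the cosine weighting from Equation 8.3.

```python
import numpy as np

def irradiance_map(env_radiance, normals, n_samples=50_000):
    """Filter an environment into irradiance values: for each normal n,
    E(n) = integral over the hemisphere around n of L(l) cos(theta) dw.
    Estimated with uniform sphere sampling (pdf = 1 / (4*pi)); any
    environment representation can supply env_radiance."""
    v = np.random.normal(size=(n_samples, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
    L = np.array([env_radiance(d) for d in dirs])
    out = []
    for n in normals:
        cos = dirs @ n
        # Only directions on the hemisphere around n contribute.
        out.append(4.0 * np.pi * np.mean(np.where(cos > 0.0, L * cos, 0.0)))
    return out

# A hypothetical "sky": radiance 1 in the upper hemisphere, 0 below.
env = lambda d: 1.0 if d[2] > 0 else 0.0
print(irradiance_map(env, [np.array([0.0, 0.0, 1.0]),
                           np.array([1.0, 0.0, 0.0])]))  # ~[pi, pi/2]
```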
8.6.1 Spherical Harmonics Irradiance
Although we have only discussed representing irradiance environment maps with textures such as cube maps, other representations are possible. Spherical harmonics (SH) have become popular in recent years as an irradiance environment map representation. Spherical harmonics are a set of mathematical functions that can be used to represent functions on the unit sphere, such as irradiance.
Spherical harmonics(5) are basis functions: a set of functions that can be weighted and summed to approximate some general space of functions. In the case of spherical harmonics, the space is "scalar functions on the unit sphere." A simple example of this concept is shown in Figure 8.25. Each of the functions is scaled by a weight or coefficient such that the sum of the weighted basis functions forms an approximation to the original target function.
Almost any set of functions can form a basis, but some are more convenient to use than others. An orthogonal set of basis functions is a set such that the inner product of any two different functions from the set is zero. The inner product is a more general, but similar, concept to the dot product. The inner product of two vectors is their dot product, and the inner product of two functions is defined as the integral of the two functions multiplied together:
$$\langle f_i(x), f_j(x) \rangle \equiv \int f_i(x) f_j(x)\, dx,$$
where the integration is performed over the relevant domain. For the functions shown in Figure 8.25, the relevant domain is between 0 and 5 on the x-axis (note that this particular set of functions is not orthogonal). For spherical functions the form is slightly different, but the basic concept is the same:
$$\langle f_i(\mathbf{l}), f_j(\mathbf{l}) \rangle \equiv \int_{\Theta} f_i(\mathbf{l}) f_j(\mathbf{l})\, d\omega,$$
Figure 8.25. A simple example of basis functions. In this case, the space is “functions
that have values between 0 and 1 for inputs between 0 and 5.” The left side shows
an example of such a function. The middle shows a set of basis functions (each color
is a different function). The right side shows an approximation to the target function,
formed by multiplying each of the basis functions by a weight (the basis functions are
shown scaled by their respective weights) and summing them (the black line shows the
result of this sum, which is an approximation to the original function, shown in gray for
comparison).
where Θ indicates that the integral is performed over the unit sphere.
An orthonormal set is an orthogonal set with the additional condition that the inner product of any function in the set with itself is equal to 1. More formally, the condition for a set of functions $\{f_j()\}$ to be orthonormal is
$$\langle f_i(), f_j() \rangle = \begin{cases} 0, & \text{where } i \neq j, \\ 1, & \text{where } i = j. \end{cases} \qquad (8.38)$$
Figure 8.26 shows a similar example to Figure 8.25, where the basis functions
are orthonormal.
The advantage of an orthonormal basis is that the process to find the
closest approximation to the target function is straightforward. This process
is called basis projection. The coefficient for each basis function is
Figure 8.26. Orthonormal basis functions. This example uses the same space and target
function as Figure 8.25, but the basis functions have been modified to be orthonormal.
The left side shows the target function, the middle shows the orthonormal set of basis
functions, and the right side shows the scaled basis functions. The resulting approximation
to the target function is shown as a dotted black line, and the original function is
shown in gray for comparison.
simply the inner product of the target function $f_{\text{target}}()$ with the appropriate basis function:
$$k_j = \langle f_{\text{target}}(), f_j() \rangle.$$
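A small numerical sketch of basis projection, using an orthonormal set of box functions on [0, 5] (my choice for illustration, not the set shown in Figure 8.26):

```python
import numpy as np

# Domain [0, 5], discretized for numerical inner products.
x = np.linspace(0.0, 5.0, 5000)
dx = x[1] - x[0]

def inner(f, g):
    # <f, g> = integral of f(x) g(x) dx over the domain
    return np.sum(f * g) * dx

# An orthonormal set: indicator "box" functions on [j, j+1). Each box has
# width 1, so <f_j, f_j> = 1, and disjoint boxes give <f_i, f_j> = 0.
basis = [np.where((x >= j) & (x < j + 1), 1.0, 0.0) for j in range(5)]

target = np.sin(x / 5.0 * np.pi)  # some target function on [0, 5]

# Basis projection: each coefficient is the inner product of the target
# with the corresponding basis function, as in the formula above.
coeffs = [inner(target, b) for b in basis]
approx = sum(k * b for k, b in zip(coeffs, basis))
print(coeffs)
print(np.max(np.abs(approx - target)))  # error of the piecewise approximation
```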
This is very similar in concept to the "standard basis" introduced in Section 4.2.4. Instead of a function, the target of the standard basis is a point's location. Instead of a set of functions, the standard basis is composed of three vectors. The standard basis is orthonormal by the same definition used in Equation 8.38. The dot product (which is the inner
product for vectors) of every vector in the set with another vector is zero
(since they are perpendicular to each other), and the dot product of every
vector with itself is one (since they are all unit length). The method of projecting
a point onto the standard basis is also the same: The coefficients
are the result of dot products between the position vector and the basis
vectors. One important difference is that the standard basis exactly reproduces
every point, and a finite set of basis functions only approximates its
target functions. This is because the standard basis uses three basis vectors
to represent a three-dimensional space. A function space has an infinite
number of dimensions, so a finite number of basis functions can only approximate
it. A roughly analogous situation would be to try to represent
three-dimensional points with just the x and y vectors. Projecting a three-dimensional
point onto such a basis (essentially setting its z-coordinate to
0) is an approximation roughly similar to the projection of a function onto
a finite basis.
Ramamoorthi and Hanrahan [1045] also show that the SH coefficients
for the incoming radiance function L(l) can be converted into coefficients
for the irradiance function E(n) by multiplying each coefficient with a constant.
This yields a very fast way to filter environment maps into irradiance
environment maps: Project them into the SH basis and then multiply each
coefficient by a constant. For example, this is how King’s fast irradiance
filtering implementation [660] works. The basic concept is that the computation
of irradiance from radiance is equivalent to performing an operation
called spherical convolution between the incoming radiance function L(l)
and the clamped cosine function $\overline{\cos}(\theta_i)$. Since the clamped cosine function
is rotationally symmetrical about the z-axis, it has only one nonzero coefficient
in each frequency band. The nonzero coefficients correspond to the
basis functions in the center column of Figure 8.27, which are also known
as the zonal harmonics.
The result of performing a spherical convolution between a general
spherical function and a rotationally symmetrical one (such as the clamped
cosine function) is another function over the sphere. This convolution can
be performed directly on the function's SH coefficients. The SH coefficients of the convolution result are equal to the product (multiplication) of the coefficients of the two functions, scaled by $\sqrt{4\pi/(2l+1)}$ (where $l$ is
the frequency band index). This means that the SH coefficients of the
irradiance function E(n) are equal to the coefficients of the radiance function
L(l) times those of the clamped cosine function $\overline{\cos}(\theta_i)$, scaled by
the band constants. The coefficients of $\overline{\cos}(\theta_i)$ beyond the first nine have
small values, which explains why nine coefficients suffice for representing
the irradiance function E(n). SH irradiance environment maps can be
quickly evaluated in this manner. Sloan [1192] describes an efficient GPU
implementation.
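A sketch of the whole pipeline (my construction): the radiance environment is projected onto the standard real SH basis for the first three bands, and each band is then scaled by the constants π, 2π/3, and π/4 that result from the convolution scaling described above (these particular values follow Ramamoorthi and Hanrahan [1045]). The environment function is hypothetical.

```python
import numpy as np

# Real spherical harmonics for bands l = 0, 1, 2 (nine basis functions).
def sh9(d):
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

# Per-band constants converting radiance coefficients into irradiance
# coefficients for the clamped cosine kernel: pi, 2*pi/3, pi/4.
A = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def sh_irradiance_coeffs(radiance, n_samples=200_000):
    """Project incoming radiance L(l) onto the first nine SH basis
    functions (Monte Carlo over the sphere, pdf = 1/(4*pi)), then scale
    each band to obtain the irradiance function's coefficients."""
    v = np.random.normal(size=(n_samples, 3))
    dirs = v / np.linalg.norm(v, axis=1, keepdims=True)
    L = np.array([radiance(d) for d in dirs])
    Y = np.array([sh9(d) for d in dirs])
    L_coeffs = 4.0 * np.pi * np.mean(L[:, None] * Y, axis=0)
    return A * L_coeffs

# At render time, E(n) is a dot product of the coefficients with the
# SH basis evaluated at the surface normal n.
env = lambda d: max(0.0, d[2])  # hypothetical radiance, brightest at +z
c = sh_irradiance_coeffs(env)
n = np.array([0.0, 0.0, 1.0])
print(np.dot(c, sh9(n)))  # ~2.09; the exact value here is 2*pi/3
```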
There is an inherent approximation here, since although the higher-order coefficients of E(n) are small, they are not zero. The accuracy of the approximation can be seen in Figure 8.28.
The approximation is remarkably close, although the "wiggling" of the curve between π/2 and π, when it should be zero, is notable. This "wiggling" is called ringing in signal processing and typically occurs when a function with a sharp change (like the clamp to zero at π/2) is approximated with a small number of basis functions. The ringing is not noticeable in most cases, but it can be seen under extreme lighting conditions as color shifts or bright "blobs" on the
shadowed sides of objects. If the irradiance environment map is only used to store indirect lighting (as often happens), then ringing is unlikely to be a problem.
Figure 8.22 on page 315 shows how an irradiance map derived directly compares to one synthesized by the nine-term function. This function can be evaluated during rendering with the current surface normal n [1045], or