Real-Time Cloud Rendering for Games
Mark J. Harris
Department of Computer Science, University of North Carolina at Chapel Hill
harrism@cs.unc.edu, http://www.cs.unc.edu/~harrism/clouds
Abstract
This paper presents a method for realistic real-time rendering of clouds for flight simulators and
games. It describes a cloud illumination algorithm that approximates multiple forward
scattering in a preprocess, and first order anisotropic scattering at runtime. Impostors are
used to accelerate cloud rendering by exploiting frame-to-frame coherence in an interactive
flight simulation. Impostors are particularly well suited to clouds, even in circumstances under
which they cannot be applied to the rendering of polygonal geometry. The method allows
hundreds of clouds with hundreds of thousands of particles to be rendered at high frame rates,
and improves interaction with clouds by reducing artifacts introduced by direct particle
rendering.
1 Introduction
Clouds are an integral feature of the sky;
without them synthetic outdoor scenes
seem unrealistic. Game developers know
this; outdoor games nearly always have
clouds present. If the player’s viewpoint
stays near the ground, then the game can
rely on techniques similar to those used by
Renaissance painters in ceiling frescos:
distant and high-flying clouds are
represented by paintings on an always-distant
sky dome. Flight simulators and
other flying games don’t have it so easy –
their players’ viewpoints are free to roam
the sky.
Many techniques have been used for
clouds in games and flight simulators. They
have been hinted at with planar textures –
both static and animated – or with semi-transparent textured objects and fogging effects.
These techniques leave a lot to be desired. In a flying game, we would like to fly in
and around realistic, volumetric clouds, and to see other flying objects pass within and behind
them. This paper describes a system for real-time volumetric cloud shading and rendering that
is appropriate for games and flight simulators.
This paper focuses on high-speed, high-quality rendering of constant-shape clouds for games.
Games are complex systems that are very computationally and graphically loaded, so cloud
rendering must be very fast. For this reason, we render realistically shaded static clouds, and
do not address issues of dynamic cloud simulation. This choice enables us to generate clouds
ahead of time, and to assume that cloud particles are static relative to each other. This
assumption speeds cloud rendering because we need only shade them once per scene in a
preprocess.

Figure 1: Realistic clouds in the game “Savage Skies”.
The rest of this section presents previous
work. Section 2 gives a derivation and
description of our shading algorithm. Section
3 discusses dynamically generated impostors
and shows how we use them to accelerate
cloud rendering. We also discuss how we
have dealt with issues in interacting with
clouds. Section 4 discusses our results and presents performance measurements. We
conclude and discuss ideas for future research in section 5.
This paper is based on [Harris2001]. More information, images, lecture notes, and tutorial
material can be found at http://www.cs.unc.edu/~harrism/clouds.
1.1 Previous Work
Two areas of previous work are important to this paper: cloud modeling and cloud rendering.
Cloud modeling deals with the data used to represent clouds in the computer, and how the
data are generated and organized. We build our clouds with particle systems. Reeves
introduced particle systems as an approach to modeling clouds and other such “fuzzy”
phenomena in [Reeves1983]. Voxels are another common representation for clouds. Voxel
models provide a uniform sampling of the volume, and can be rendered with both forward and
backward methods. Procedural solid noise techniques are also important to cloud modeling as
a way to generate random but continuous density data to fill cloud volumes [Lewis1989,
Perlin1985, Ebert1998].
Rendering clouds is difficult because realistic shading requires the integration of the effects of
optical properties along paths through the cloud volume, while incorporating the complex light
scattering within the medium. Previous work has attempted to approximate the physical
characteristics of clouds at various levels of accuracy and complexity, and then to use these
approximate models to render images of clouds. Blinn introduced the use of density models
for image synthesis in [Blinn1982], where he presented a low albedo, single scattering
approximation for illumination in a uniform medium. Kajiya and Von Herzen extended this
work with methods to ray trace volume data exhibiting both single and multiple scattering
[Kajiya1984]. Max provided an excellent survey in which he summarized the spectrum of
optical models used in volume rendering and derived their integral equations from physical
models [Max1995]. David Ebert has done much work in modeling “solid spaces”, including
offline computation of realistic images of smoke, steam, and clouds [Ebert1990, Ebert1997].
Nishita et al. introduced approximations and rendering techniques for global illumination of
clouds accounting for multiple anisotropic scattering and skylight [Nishita1996].
Figure 2: A view from an interactive flight through clouds.
Our rendering approach draws most directly from the rendering technique presented by
Dobashi et al [Dobashi2000]. The shading method presented by Dobashi et al. implements an
isotropic single scattering approximation. We extend this method with an approximation to
multiple forward scattering and anisotropic first order scattering. The animated cloud scenes
of Dobashi et al. required 20-30 seconds rendering time per frame. Our system renders static
cloudy scenes at tens to hundreds of frames per second, depending on scene complexity.
2 Shading and Rendering
Particle systems are a simple and efficient method for representing and rendering clouds. Our
cloud model assumes that a particle represents a roughly spherical volume in which a
Gaussian distribution governs the density falloff from the center of the particle. Each particle is
made up of a center, radius, density, and color. We get good approximations of real clouds by
filling space with particles of varying size and density. Clouds in our system can be built by
filling a volume with particles, or by using an editing application that allows a user to place
particles and build clouds interactively. The randomized method is a good way to get a quick
field of clouds, but games have levels designed and built by artists who require fine control
over all details of the scene. Providing an artist with an editor allows the artist to produce
beautiful clouds tailored to the needs of the game.
We render particles using splatting [Westover1990], by drawing screen-oriented polygons
texture-mapped with a Gaussian density function. Although we choose a particle system
representation for our clouds, it is important to note that both our shading algorithm and our
fast rendering system are independent of the cloud representation, and can be used with any
model composed of discrete density samples in space.
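
For illustration, the following C++ sketch shows one plausible layout for such a particle record and the cloud that owns it. The type and field names are our own assumptions, not data structures from the paper.

// Hypothetical particle record for a splat-based cloud model.
// Each particle is a roughly spherical Gaussian density blob.
#include <vector>

struct Vec3 { float x, y, z; };

struct CloudParticle {
    Vec3  center;   // position of the particle in world space
    float radius;   // extent of the Gaussian density blob
    float density;  // density at the particle center
    Vec3  color;    // shaded color, filled in by the illumination preprocess
    float alpha;    // opacity used when splatting, 1 - exp(-tau)
};

struct Cloud {
    std::vector<CloudParticle> particles;  // placed procedurally or by an artist in an editor
    Vec3 boundsMin, boundsMax;             // bounding volume, used later for impostors
};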
2.1 Essential Definitions
To improve clarity in the next few sections, we will define some terms.
Absorption is the phenomenon by which light energy is converted into another form upon
interacting with particles in a medium. For example, your skin grows warm in sunlight because
some of the light is absorbed and transformed into heat energy.
Scattering is the phenomenon of absorption and reradiation of light by a medium.
Extinction describes the attenuation of light energy by absorption and scattering:
Extinction = Scattering + Absorption.
Any light that interacts with a medium undergoes either scattering or absorption. If it does not
interact, then it is transmitted. Extinction (and therefore scattering and absorption) is
proportional to density.
Single Scattering is scattering of light by a single particle.
In optically thin media (media that are either physically very thin, or very transparent),
scattering of light can be approximated using single scattering models. Clear air can usually
be approximated this way, but clouds cannot.
Multiple Scattering is scattering of light from multiple particles in succession.
Models that account for only single scattering cannot accurately represent optically thick media
such as clouds. Multiple scattering is the reason that clouds appear much brighter (and whiter)
than the sky. Most of the light that emerges from a cloud has been scattered many times.
Optical Depth: a measure of how opaque a medium is to light passing through it. It has units
of one over length (such as cm⁻¹), and can be thought of as one over the distance light must
travel into a medium before all of its intensity has been either absorbed or scattered.
Albedo (also called the single-scattering albedo): the fraction of attenuation by extinction
that is due to scattering by a medium rather than absorption.
Albedo = Scattered Power / Incident Power = Scattering / Extinction.
Phase Function: a function that determines, for any angle between incident and outgoing
directions, how much of the incident light intensity will be scattered in the outgoing direction.
For example, scattering by very small particles, such as those found in clear air, can be
approximated using Rayleigh scattering. The phase function for Rayleigh scattering is
p(θ) = (3/4)(1 + cos²θ),
where θ is the angle between incident and scattered directions. Scattering by larger particles
is more complicated. It is described by Mie scattering theory, which is outside the scope of this
paper. Cloud particles are more in the regime of Mie scattering than Rayleigh scattering.
However, we obtain good visual results by using the simpler Rayleigh scattering phase
function as an approximation.
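
For reference, a small C++ helper that evaluates this Rayleigh phase function might look as follows; the function name is ours, and cosTheta is the dot product of the unit incident and scattered directions.

// Rayleigh phase function p(theta) = (3/4)(1 + cos^2(theta)).
// cosTheta = cosine of the angle between incident and scattered directions.
float rayleighPhase(float cosTheta) {
    return 0.75f * (1.0f + cosTheta * cosTheta);
}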
2.2 Light Scattering Illumination
Scattering illumination models simulate the emission and absorption of light by a medium as
well as scattering through the medium. Single scattering models simulate scattering through
the medium in a single direction. This direction is usually the direction leading to the point of
view. Multiple scattering models are more physically accurate, but must account for scattering
in all directions (or a sampling of all directions), and therefore are much more complicated and
expensive to evaluate. The rendering algorithm presented by Dobashi et al. computes an
approximation of illumination of clouds with single scattering. This approximation has been
used previously to render clouds and other participating media [Blinn1982, Kajiya1984].
In a multiple scattering simulation that samples N directions on the sphere, each additional
order of scattering that is simulated multiplies the number of simulated paths by N.
Fortunately, as demonstrated by [Nishita1996], the contribution of most of these paths is
insignificant to cloud rendering. Nishita et al. found that scattering illumination is dominated by
the first and second orders, and therefore they only simulated up to the 4th order. They reduce
the directions sampled in their evaluation of scattering to sub-spaces of high contribution,
which are composed mostly of directions near the direction of forward scattering and those
directed at the viewer. We simplify further, and approximate multiple scattering only in the light
direction – or multiple forward scattering – and anisotropic single scattering in the eye
direction.
Our cloud rendering method is a two-pass algorithm similar to the one presented in
[Dobashi2000]: we precompute cloud shading in the first pass, and use this shading to render
the clouds in the second pass. The algorithm of Dobashi et al., however, uses only an
isotropic first order scattering approximation. If realistic values are used for the optical depth
and albedo of clouds shaded with only a first order scattering approximation, the clouds appear
very dark [Max1995]. This is because much of the illumination in a cloud is a result of light
scattered forward along the light direction. Figures 9 and 10 show the difference in
appearance between clouds shaded with and without our multiple forward scattering
approximation.
2.2.1. Multiple Forward Scattering
The first pass of our shading algorithm computes the amount of light incident on each particle
P in the light direction, l. This light, I(P, l), is composed of all direct light from direction l that is
not absorbed by intervening particles, plus light scattered to P from other particles. The
multiple scattering model is written
I(P,\omega) = I_0(P,\omega)\, e^{-\int_0^{D_P} \tau(t)\,dt} + \int_0^{D_P} g(s,\omega)\, e^{-\int_s^{D_P} \tau(t)\,dt}\, ds,    (1)
where DP is the depth of particle P in the cloud along the light direction, and
g(x,\omega) = \frac{1}{4\pi} \int_{4\pi} r(x,\omega,\omega')\, I(x,\omega')\, d\omega'    (2)
represents the light from all directions ω ′ scattered into direction ω at the point x. Here
r(x,ω,ω’) is the bi-directional scattering distribution function (BSDF). It determines the
percentage of light incident on x from direction ω ′ that is scattered in direction ω. It expands to
r(x,ω,ω ′) = a(x)⋅τ(x)⋅p(ω,ω ′), where τ(x) is the optical depth, a(x) is the albedo, and p(ω,ω ′) is the
phase function.
A full multiple scattering algorithm must compute this quantity for a sampling of all light flow
directions. We simplify our approximation to compute only multiple forward scattering in the
light direction, so ω = l, and ω ′ = -l. Thus, (2) reduces to g(x,l) = r(x,l,-l) ⋅I(x,-l) / 4π.
We split the light path from 0 to DP into discrete segments sj, for j from 1 to N, where N is the
number of cloud particles along the light direction from 0 to DP. By approximating the integrals
with Riemann Sums, we have
I_P = I_0 \prod_{j=1}^{N} e^{-\tau_j} + \sum_{j=1}^{N} g_j \prod_{k=j+1}^{N} e^{-\tau_k}.    (3)
I0 is the intensity of light incident on the edge of the cloud. In discrete form g(x,l) becomes
gk = ak⋅τk⋅p(l,-l)⋅Ik / 4π. We assume that albedo and optical depth are represented at discrete
samples (particles) along the path of light. In order to easily transform (3) into an algorithm
that can be implemented in graphics hardware, we rewrite it as an equivalent recurrence
relation:
I_k = \begin{cases} I_0, & k = 1 \\ g_{k-1} + T_{k-1}\, I_{k-1}, & 2 \le k \le N \end{cases}    (4)
If we let Tk = e^(−τk) be the transparency of particle pk, then (4) expands to (3). This representation
can be more intuitively understood. It simply says that starting outside the cloud, as we trace
along the light direction the light incident on any particle pk is equal to the intensity of light
scattered to pk from pk-1 plus the intensity transmitted through pk-1 (as determined by its
transparency, Tk-1). Notice that if gk is expanded in (4) then Ik-1 is a factor in both terms.
Section 2.3 explains how we combine frame buffer read back with hardware blending to
efficiently evaluate this recurrence.
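
Setting the graphics hardware aside for a moment, the recurrence itself is easy to evaluate directly. The C++ sketch below walks particles that are already sorted along the light direction and applies (4); the names and the single-channel (scalar intensity) simplification are ours, and the actual system evaluates the recurrence with blending and read-back as described in Section 2.3.

#include <cmath>
#include <vector>

// Optical properties of one particle, sampled along the light direction.
struct LitParticle {
    float tau;     // optical depth of the particle
    float albedo;  // scattering albedo
    float I;       // output: light incident on the particle, per recurrence (4)
};

// Evaluate recurrence (4) for particles sorted front-to-back along the light
// direction. phaseForward is p(l,-l); I0 is the intensity entering the cloud.
void multipleForwardScattering(std::vector<LitParticle>& particles,
                               float I0, float phaseForward) {
    const float PI = 3.14159265f;
    float Iprev = I0;    // I_{k-1}, starts as I_0 outside the cloud
    float gprev = 0.0f;  // g_{k-1}, no in-scattering before the first particle
    float Tprev = 1.0f;  // T_{k-1}, full transparency outside the cloud
    for (LitParticle& p : particles) {
        float Ik = gprev + Tprev * Iprev;  // recurrence (4)
        p.I   = Ik;
        gprev = p.albedo * p.tau * phaseForward * Ik / (4.0f * PI);  // g_k
        Tprev = std::exp(-p.tau);                                    // T_k
        Iprev = Ik;
    }
}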
2.2.2. Eye Scattering
In addition to our multiple forward scattering approximation, which we precompute, we also
implement single scattering toward the viewer as in [Dobashi2000]. The recurrence for this is
subtly different:
E_k = S_k + T_k\, E_{k-1}, \quad 1 \le k \le N.    (5)
This says that the light, Ek, exiting any particle pk is equal to the light incident on it that it does
not absorb, Tk · Ek-1, plus the light that it scatters, Sk. In the first pass described in Section 2.2.1,
we computed the light Ik incident on each particle from the light source. In the second pass we
are interested in the portion of this light that is scattered toward the viewer. When Sk is
replaced by ak⋅τk⋅p(ω,-l)⋅Ik / 4π, where ω is the view direction and Tk is as above, this recurrence
approximates single scattering toward the viewer. It is important to mention that (5) computes
light emitted from particles using results (Ik) computed in (4). Since illumination is multiplied by
the phase function in both recurrences, one might think that the phase function is multiplied
twice for the same light. This is not the case, since in (4), Ik-1 is multiplied by the phase
function to determine the amount of light Pk-1 scatters to Pk in the light direction, and in (5) Ik is
multiplied by the phase function to determine the amount of light that Pk scatters in the view
direction. Even if the viewpoint is directly opposite the light source, since the light incident on
Pk is stored and used in the scattering computation, the phase function is never taken into
account twice at the same particle.
2.2.3. Phase Function
The phase function p(ω,ω’), mentioned above, is very important to cloud shading. Clouds
exhibit anisotropic scattering of light (including the strong forward scattering that we assume in
our multiple forward scattering approximation). The phase function determines the distribution
of scattering for a given incident light direction. Phase functions are discussed in detail in
[Nishita1996], [Max1995], and [Blinn1982], among others. The images shown in this paper
were generated using a simple Rayleigh scattering phase function given in section 2.1.
Rayleigh scattering favors scattering in the forward and backward directions. Figures 11 and
12 demonstrate the differences between clouds shaded with and without anisotropic
scattering. Anisotropic scattering gives the clouds their characteristic “silver lining” when
viewed looking into the sun.
2.3 Rendering Algorithm
Armed with recurrences (4) and (5) and a
standard graphics API such as OpenGL or
Direct3D, computation of cloud illumination
is straightforward. Our algorithm is similar
to the one presented by [Dobashi2000]
and has two phases: a shading phase that
runs once per scene and a rendering
phase that runs in real time. The key to
the implementation is the use of hardware
blending and pixel read back.
Blending operates by computing a
weighted average of the frame buffer
contents (the destination) and an incoming
fragment (the source), and storing the result back in the frame buffer. This weighted average
can be written
c_{result} = f_{src} \cdot c_{src} + f_{dest} \cdot c_{dest}    (6)
If we let c_result = Ik, f_src = 1, c_src = gk-1, f_dest = Tk-1, and c_dest = Ik-1, then we see that (4) and (6) are
equivalent if the contents of the frame buffer before blending represent I0. This is not quite
enough, though, since as we saw before, Ik-1 is a factor of both terms in (4). To solve the
recurrence for a particle pk, we must know how much light is incident on particle pk-1
beforehand. To do this, we employ pixel read back.
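
As a concrete example of the state involved, a minimal OpenGL 1.x sketch of the blending and read-back used in the shading pass could look like the following. The helper names and the assumption of an 8-bit RGBA frame buffer are ours.

#include <GL/gl.h>

// Blending state for the shading pass: source factor one, destination factor
// (1 - source alpha), so each splat computes
// c_result = c_src + (1 - alpha_src) * c_dest, matching equation (6).
void beginShadingPass() {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);         // frame buffer starts white (I0)
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glDisable(GL_DEPTH_TEST);                     // particles are pre-sorted; no z-test
}

// Read back the fraction of light reaching a particle: the color of the pixel
// its center projects to, sampled just before the particle is splatted.
// (px, py) are window coordinates of the projected particle center.
float readIncidentLightFraction(int px, int py) {
    unsigned char rgba[4];
    glReadPixels(px, py, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    return rgba[0] / 255.0f;   // assume monochromatic light for simplicity
}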
To compute (4) and (5), we use the procedure described by the pseudocode in figure 4. The
pseudocode shows that we use a nearly identical algorithm for preprocess and runtime. The
differences are as follows. In the illumination pass, the frame buffer is cleared to white and
particles are sorted with respect to the light. As a particle is blended into the frame buffer,
blending attenuates the intensity of each fragment by the opacity of the particle, and increases
the intensity by the amount the particle scatters in the forward direction. The percentage of
light that reaches pk is found by reading back the color of the pixel in the frame buffer to which
the center of the particle projects immediately before rendering pk. Ik is computed by
multiplying this percentage by the light intensity. Ik is used to compute multiple forward
scattering in (4) and eye scattering in (5).
The runtime phase uses the same algorithm, with particles sorted with respect to the
viewpoint, and without reading pixels. The precomputed illumination of each particle Ik is used
in this phase to compute scattering toward the eye.
In both passes, we render particles in sorted order as polygons textured with a Gaussian
“splat” texture. The polygon color is set to the scattering factor ak⋅τk⋅p(ω,l)⋅Ik / 4π and the texture
is modulated by this color. In the first pass, ω is the light direction, and in the second pass it is
the direction of the viewer. The source and destination blending factors are set to one and one
minus source alpha, respectively. All cloud images in this paper were computed with a
constant τ of 8.0, and an albedo of 0.9.
Figure 3: Clouds hang low over a valley.
2.3.1. Skylight
The most awe-inspiring images of clouds are created by the multi-colored spectacle of a
beautiful sunrise or sunset. These clouds are often not illuminated directly by the sun at all,
but by skylight – sunlight that is scattered by the atmosphere. The fact that light accumulates
in an additive manner provides us with a simple extension to our shading method that allows
the creation of such beautiful clouds. We simply shade clouds from multiple light sources and
store the resulting particle colors (ik in the pseudocode of figure 4) from all shading iterations. At
render time, we evaluate the phase function at each particle once per light. By doing so, we
can approximate global illumination of the clouds.
While this technique is not completely physically-based, it is better than an ambient light
approximation, since it is directional and results in shadowing in the clouds as well as
anisotropic scattering from multiple light directions and intensities. We obtained best results by
using the images that make up the sky dome we place in the distance over our environments
to guide the placement and color of lights. Figure 13 shows a scene at sunset in which we use
two light sources, one orange and one pink, to create sunset lighting.
source_blend_factor = 1;
destination_blend_factor = 1 - source_alpha;
texture_mode = modulate;
l = direction from light;
if (preprocess) then
    ω = -l;
    view cloud from light source;
    clear frame buffer to white;
    particles.Sort(<, distance to light);
else
    view cloud from eye position;
    particles.Sort(>, distance from eye);
endif
[Sort(<, dist. from x) = sort in ascending order by distance from x; > = descending]
foreach particle pk            [pk has extinction τk, albedo ak, radius rk, color, and alpha]
    if (preprocess) then
        x = pixel at projected center of pk;
        ik = color(x) * light_color;
        pk.color = ak * τk * ik / 4π;
        pk.alpha = 1 - exp(-τk);
    else
        ω = pk.position - view_position;
    endif
    c = pk.color * phase(ω, l);
    render pk with color c, side length 2*rk;
end

Figure 4: Pseudocode for cloud shading and rendering.
In addition to illumination from multiple light sources, we provide an ambient term to partially
compensate for scattered light lost due to our scattering approximation.
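
A hedged C++ sketch of how the stored per-light colors might be combined at render time follows. The per-light color array, the light list, and all names are illustrative assumptions rather than the paper's data structures.

#include <vector>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

// Rayleigh phase function from Section 2.1; a and b are unit direction vectors.
static float rayleighPhase(const Vec3& a, const Vec3& b) {
    float c = a.x * b.x + a.y * b.y + a.z * b.z;
    return 0.75f * (1.0f + c * c);
}

// Each particle keeps one precomputed color per light source: the quantity
// pk.color from figure 4 (ak * τk * ik / 4π), evaluated once per light in the
// shading preprocess.
struct SkyLitParticle {
    std::vector<Color> storedColor;
};

// Render-time color: evaluate the phase function once per light for the current
// view direction, weight each stored color by it, and add an ambient term to
// compensate for scattered light lost by the approximation.
Color shadeForView(const SkyLitParticle& p,
                   const std::vector<Vec3>& lightDirs,  // unit direction from each light
                   const Vec3& viewDir,                 // unit view direction, as in figure 4
                   const Color& ambient) {
    Color out = ambient;
    for (size_t i = 0; i < lightDirs.size(); ++i) {
        float ph = rayleighPhase(viewDir, lightDirs[i]);
        out.r += ph * p.storedColor[i].r;
        out.g += ph * p.storedColor[i].g;
        out.b += ph * p.storedColor[i].b;
    }
    return out;
}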
3 Dynamically Generated Impostors
While the cloud rendering method
described above provides beautiful
results and is fast for relatively simple
scenes, it suffers under the weight of
many complex clouds. The games for
which we developed this system dictate
that we must render complicated cloud
scenes at fast interactive rates. Clouds
are only one component of a complex
game environment, and therefore can
only use a small percentage of a frame
time. With direct particle rendering, even
a scene with ten or twenty thousand
particles is prohibitively slow on current
hardware.
The integration (section 2.2) required to
accurately render volumetric media results in high rates of pixel overdraw. Clouds have
inherently high depth complexity, and require blending, making rendering them a difficult job
even for current hardware with the highest fill rates. In addition, as the viewpoint approaches a
cloud, the projected area of that cloud’s particles increases, becoming greatest when the
viewpoint is within the cloud. Thus, pixel overdraw is increased and rendering slows as the
viewpoint nears and enters clouds.
In order to render many clouds made up of many particles at high frame rates, we need a way
to bypass fill rate limitations, either by reducing the amount of pixel overdraw performed, or by
amortizing the rendering of cloud particles over multiple frames. Dynamically generated
impostors allow us to do both.
[Maciel1995], [Schaufler1995], and [Shade1996] all discuss impostors. An impostor replaces
an object in the scene with a semi-transparent polygon texture-mapped with an image of the
object it replaces (figure 5). The image is a rendering of the object from a viewpoint V that is
valid (within some error tolerance) for viewpoints near V. Impostors used for appropriate
points of view give a very close approximation to rendering the object itself. An impostor is
valid (with no error) for the viewpoint from which its image was generated, regardless of
changes in the viewing direction. Impostors may be precomputed for an object from multiple
viewpoints, requiring much storage, or they may be generated only when needed. We choose
the latter technique, called dynamically generated impostors by [Schaufler1995].
We generate impostors using the following procedure. A view frustum is positioned so that its
viewpoint is at the position from which the impostor will be viewed, and it is tightly fit to the
bounding volume of the object (figure 6). We then render the object into an image used to
texture the impostor polygon.
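
A hedged OpenGL 1.x sketch of this generation step is shown below, fitting the frustum to a bounding sphere and copying the result into a texture. The bounding-sphere simplification and all helper names are ours; it assumes the viewpoint is outside the sphere (Section 3.1 covers the in-cloud case).

#include <GL/gl.h>
#include <GL/glu.h>
#include <cmath>

// Render a cloud, bounded by a sphere at 'center' with radius 'radius', into
// an impostor texture as seen from 'eye'. Assumes a texSize x texSize texture
// object is already created and bound, and that drawCloudParticles() splats
// the sorted, pre-shaded particles.
void generateImpostor(const float eye[3], const float center[3],
                      float radius, int texSize,
                      void (*drawCloudParticles)()) {
    float dx = center[0] - eye[0], dy = center[1] - eye[1], dz = center[2] - eye[2];
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    // Frustum tightly fit to the bounding sphere: half-angle = asin(radius / dist).
    float fovy = 2.0f * std::asin(radius / dist) * 180.0f / 3.14159265f;

    glViewport(0, 0, texSize, texSize);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, 1.0, dist - radius, dist + radius);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eye[0], eye[1], eye[2], center[0], center[1], center[2], 0.0, 1.0, 0.0);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);   // transparent background
    glClear(GL_COLOR_BUFFER_BIT);
    drawCloudParticles();

    // Copy the rendered image into the bound impostor texture.
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texSize, texSize);
}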
Figure 5: Impostors, outlined in this image, are textured polygons oriented toward the viewer.
As mentioned above, we can use impostors to
amortize the cost of rendering clouds over
multiple frames. We do this by exploiting the
frame-to-frame coherence inherent in three-dimensional
scenes: the relative motion of
objects in a scene decreases with distance from
the viewpoint, and objects close to the viewpoint
present a similar image for some time. This lack
of sudden changes in the image of an object
allows us to re-use impostor images over
multiple frames. We can compute an estimate
of the error in an impostor representation that
we use to determine when the impostor needs
to be updated. Certain types of motion
introduce error in impostors more quickly than
others. [Schaufler1995] presents two worst-case error metrics for this purpose. The first,
which we will call the translation error, computes error caused by translation away from the
viewpoint at which the current impostor was generated. The second computes error
introduced by moving straight toward the object, which we call the zoom error.
We use the same translation error metric, and replace zoom error by a texture resolution error
metric. For the translation error metric, we simply compute the angle αtrans, shown in figure 6,
and compare it to a specified tolerance. The zoom error metric compares the current impostor
texture resolution to the required resolution for the texture, computed using the following
equation [Schaufler1995]
resolution_texture = resolution_screen ⋅ (object size / object dist).
If either the translation error is greater than an error tolerance angle or the current resolution of
the impostor is less than the required resolution, we regenerate the impostor from the current
viewpoint. We find that a tolerance of about 0.15 degree reduces impostor “popping” to an
imperceptible level while maintaining good performance. For added performance, tolerances
up to one degree can be used with more noticeable (but not excessive) popping.
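
A small C++ sketch of this update decision, under our own naming and with the translation angle measured at the cloud center between the capture viewpoint and the current viewpoint (our reading of figure 6), might look like this:

#include <cmath>

struct Vec3f { float x, y, z; };

static float lengthOf(const Vec3f& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Decide whether an impostor must be regenerated. 'capturedFrom' is the
// viewpoint its image was rendered from, 'eye' is the current viewpoint,
// 'cloudCenter'/'cloudRadius' describe the cloud's bounding sphere, and
// 'currentTexRes' is the resolution of the existing impostor texture.
bool impostorNeedsUpdate(const Vec3f& capturedFrom, const Vec3f& eye,
                         const Vec3f& cloudCenter, float cloudRadius,
                         int currentTexRes, int screenRes,
                         float toleranceDegrees /* e.g. 0.15f */) {
    // Translation error: angle at the cloud center between old and new viewpoints.
    Vec3f a = { capturedFrom.x - cloudCenter.x, capturedFrom.y - cloudCenter.y,
                capturedFrom.z - cloudCenter.z };
    Vec3f b = { eye.x - cloudCenter.x, eye.y - cloudCenter.y, eye.z - cloudCenter.z };
    float cosA = (a.x * b.x + a.y * b.y + a.z * b.z) / (lengthOf(a) * lengthOf(b));
    if (cosA > 1.0f)  cosA = 1.0f;
    if (cosA < -1.0f) cosA = -1.0f;
    float alphaTrans = std::acos(cosA) * 180.0f / 3.14159265f;

    // Texture resolution error: required resolution grows as the viewer approaches.
    // Object size is taken as the bounding-sphere diameter (an assumption).
    float objectDist  = lengthOf(b);
    float requiredRes = screenRes * (2.0f * cloudRadius) / objectDist;

    return alphaTrans > toleranceDegrees || (float)currentTexRes < requiredRes;
}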
In the past, impostors were used mostly to replace geometric models. Since these models
have high frequencies in the form of sharp edges, impostors have usually been used only for
distant objects. Nearby objects must have impostor textures of a resolution at or near that of
the screen, and their impostors require frequent updates. We use impostors for clouds no
matter where they are in relation to the viewer. Clouds do not have high frequency edges like
those of geometric models, so artifacts caused by low texture resolution are less noticeable.
Clouds have very high fill rate requirements, so cloud impostors are beneficial even when they
must be updated every few frames.
3.1 Head in the Clouds
Impostors can provide a large reduction in overdraw even for viewpoints inside the cloud,
where the impostor must be updated every frame. The “foggy” nature of clouds makes it
difficult for the viewer to discern detail when inside them. In addition, in games and flight
simulators, the viewpoint is often moving. These factors allow us to reduce the resolution at
which we render impostor textures for clouds containing the viewpoint by about a factor of 4 in
each dimension.

Figure 6: Impostor generation and translation error metric.
However, impostors cannot be generated in the same manner for these clouds as for distant
clouds, since the view frustum cannot be tightly fit to the bounding volume as described above.
Instead, we use the same frustum used to display the whole scene to generate the texture for
the impostor, but create the texture at a lower resolution, as described above. We display
these impostors as screen-space rectangles sized to fill the screen.
3.1.1. Objects in the Clouds
In order to create effective interactive cloudy
scenes, we must allow objects to pass in and
through the clouds, and we must render this
realistically. Impostors pose a problem because
they are two-dimensional. Objects that pass
through impostors appear as if they are passing
through images floating in space, rather than
through fluffy, volume-filling clouds.
One way to solve this problem would be to detect
clouds that contain objects and render their
particles directly to the frame buffer. By doing so,
however, we lose the benefits that impostors
provide us. Instead, we detect when objects pass
within the bounding volume of a cloud, and split the impostor representing that cloud into
multiple layers. If only one object resides in a certain cloud, then that cloud is rendered as two
layers: one for the portion of cloud particles that lies approximately behind the object with
respect to the viewpoint, and one for the portion that lies approximately in front of the object. If
two objects lie within a cloud, then we need three layers, and so on. Since cloud particles
must be sorted for rendering anyway, splitting the cloud into layers adds little expense. This
“impostor splitting” results in a set of alternating impostor layers and objects. This set is
rendered from back to front, with depth testing enabled for objects, and disabled for impostors.
The result is an image of a cloud that realistically contains objects, as shown on the right side
of figure 7.
Impostor splitting provides an additional advantage over direct particle rendering for clouds
that contain objects. When rendering cloud particles directly, the billboards used to render
particles may intersect the geometry of nearby objects. These intersections cause artifacts
that break the illusion of particles representing elements of volume. Impostor splitting avoids
these artifacts (figure 7).
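
The following C++ sketch shows one way the sorted particle list could be partitioned into layers around the objects inside a cloud; distances are measured along the view direction, and the structure names are our own.

#include <algorithm>
#include <functional>
#include <vector>

struct Splat { float viewDist; };   // distance of a particle from the eye along the view direction
struct Layer { std::vector<Splat> particles; };

// Split a cloud's particles, already sorted back to front (largest viewDist
// first), into objectDists.size() + 1 layers. objectDists holds the
// view-direction distance of each object inside the cloud. The caller then
// renders layer 0, object 0, layer 1, object 1, ... from back to front, with
// depth testing enabled for objects and disabled for the impostor layers.
std::vector<Layer> splitIntoLayers(const std::vector<Splat>& sortedParticles,
                                   std::vector<float> objectDists) {
    std::sort(objectDists.begin(), objectDists.end(), std::greater<float>());
    std::vector<Layer> layers(objectDists.size() + 1);

    size_t layerIndex = 0;
    for (const Splat& s : sortedParticles) {
        // Once particles are nearer than the next object, move to the next layer.
        while (layerIndex < objectDists.size() && s.viewDist < objectDists[layerIndex])
            ++layerIndex;
        layers[layerIndex].particles.push_back(s);
    }
    return layers;
}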
4 Results and Discussion
We have implemented our cloud rendering system using both the OpenGL and DirectX 8 APIs.
On a PC with an NVIDIA GeForce graphics processor, we can achieve very high frame rates
by using impostors and view-frustum culling to accelerate rendering. We can render scenes
containing up to hundreds of thousands of particles at high frame rates (greater than 50 frames
per second). If the viewpoint moves slowly enough to keep impostor update rates low, we can
render a scene of more than 1.2 million particles at about 10 to 12 frames per second. Slow
movement is a reasonable assumption for flight simulators and games because the user’s
aircraft is typically much smaller than the clouds through which it is flying, so the frequency of
impostor updates remains low.

Figure 7: An airplane in the clouds. On the left, particles are directly rendered into the scene.
Artifacts of their intersection with the plane are visible. On the right, the airplane is rendered
between impostor layers, and no artifacts are visible.
As mentioned before, our shading phase is a preprocess. For scenes with only a few
thousand particles shading takes less than a second, and scenes of a few hundred thousand
particles can be shaded in about five to ten seconds per light source.
We have performed several tests of our cloud system. Our first test machine was a PC with
256 MB of RAM and an Intel Pentium III processor running at 800 MHz. It used an NVIDIA
GeForce 256 graphics card with 32MB of video RAM. This test rendered scenes of
increasing cloud complexity (from 100 to 12800 clouds of 200 particles each) with and without
using impostors. We also tested the performance for different types of movement. The first
test moved the camera around a circular path, and the second moved through the clouds in
the view direction. Figure 8 shows results of the tests. The chart shows that cloud rendering
speed is higher (by an average of about three to five times) with impostors over the entire
range of scene complexity, and that even for scenes with several hundred thousand particles
we achieve interactive frame rates.
We more recently ran a test on a 2 GHz Pentium 4 with a GeForce 3 processor. This test
displays a fly-through of a scene with 34 large clouds made up of almost 90,000 particles over
a terrain of more than 60,000 polygons (figure 3) at more than 60 frames per second in a
640x480 window, and about 30 frames per second in
a 1600x1200 window. Without impostors, the frame
rate drops as low as eight and four frames per
second at 640x480 and 1600x1200, respectively.
Our cloud rendering algorithms were originally
developed for the game “Savage Skies”, by iROCK
Interactive. In this game, players ride fantastical
flying creatures through beautiful environments with
realistically shaded volumetric clouds (Figure 1). The
clouds are interesting in an interactive sense, as
players may momentarily hide in them as they pass
through. The steps we have taken to ensure high
frame rates make our system work well in an already
graphics- and computation-laden game engine.
Impostors provide an essential means of scalability
for games intended to run on a wide range of
hardware. We can balance performance and quality
by adjusting impostor error tolerances, texture
resolution, and the number and size of particles that
make up each cloud.

Figure 8: Results of performance measurements for cloudy scenes of varying complexity
rendered with and without impostors. (Chart: frame rate in frames/sec, 0.1 to 1000 on a log
scale, versus number of particles, 20,000 to 2,560,000, for linear and circular camera paths,
each with and without impostors.)

5 Conclusion and Future Work
This paper presented methods for shading and
rendering realistic clouds at high frame rates. Our shading and rendering algorithm simulates
multiple scattering in the light direction, and anisotropic single scattering in the view direction.
Clouds are illuminated by multiple directional light sources, with anisotropic scattering from
each.
Our method uses impostors to accelerate cloud rendering by exploiting frame-to-frame
coherence and greatly reducing pixel overdraw. Impostors are an advantageous
representation for clouds even in situations where they would not be successfully used to
represent other objects, such as when the viewpoint is in or near a cloud. Impostor splitting is
an effective way to render clouds that contain other objects, reducing artifacts caused by direct
particle rendering.
Since our shading algorithm computes multiple forward scattering during the illumination
phase, it should be straightforward to extend it to compute an approximation of global multiple
scattering. This would require running many passes to evenly sample all directions, and
accumulating the results at the particles. We are also researching methods for speeding cloud
shading by avoiding pixel read back, so that we can shade and render dynamic clouds in real
time. This will allow the visualization of cloud formation in an interactive simulation.
Beyond clouds, we think that other phenomena might benefit from our shading algorithm. For
example, we would like to be able to render realistic interactive flight through stellar nebulae.
We have ideas for representing nebulae as particle clouds with emissive properties, and
rendering them with a modified version of our algorithm.
Acknowledgements
This work would not have been possible without the support, encouragement, and ideas of the
developers at iROCK Interactive, especially Wesley Hunt, Paul Rowan, Brian Stone, and
Robert Stevenson. Anselmo Lastra, Mary Whitton and Frederick Brooks at UNC provide
continuous advice and support. Rui Bastos gave ideas for the future, Sharif Razzaque helped
with modeling, and Andrew Zaferakis provided a simple and fast terrain renderer. This work
was supported by iROCK Interactive, NVIDIA Corporation, NIH National Center for Research
Resources, Grant No. P41 RR 02170, and Department of Energy ASCI program, National
Science Foundation grant ACR-9876914.
References
For more information, updates, lecture notes, and demos, see http://www.cs.unc.edu/~harrism/clouds. Another
good reference site for clouds is http://www.vterrain.org/Atmosphere/Clouds/index.html.
[Blinn1982] J. Blinn, “Light Reflection Functions for Simulation of Clouds and Dusty Surfaces”. SIGGRAPH 1982, pp. 21-29
[Dobashi2000] Y. Dobashi, K. Kaneda, H. Yamashita, T. Okita, and T. Nishita. “A Simple, Efficient Method for Realistic Animation of Clouds”.
SIGGRAPH 2000, pp. 19-28
[Ebert1990] D. S. Ebert, R. E. Parent, “Rendering and Animation of Gaseous Phenomena by Combining Fast Volume Scanline A-Buffer
Techniques,” SIGGRAPH 1990, pp. 357-366.
[Ebert1997] D. S. Ebert, “Volumetric Modeling with Implicit Functions: A Cloud is Born,” Visual Proceedings of SIGGRAPH 1997, pp.147.
[Ebert1998] D. S. Ebert, F.K. Musgrave, D. Peachey, K. Perlin, S. Worley, Texturing & Modeling: a Procedural Approach. 1998, AP
Professional.
[Harris2001] Harris, M. J. and A. Lastra, Real-Time Cloud Rendering. Computer Graphics Forum (Eurographics 2001 Proceedings), 20(3):76-
84, September 2001.
[Kajiya1984] J. Kajiya and B. Von Herzen. “Ray Tracing Volume Densities”. SIGGRAPH 1984, pp. 165-174.
[Lewis1989] J. Lewis. “Algorithms for Solid Noise Synthesis”. SIGGRAPH 1989, pp. 263-270.
[Maciel1995] P. Maciel, P. Shirley. “Visual Navigation of Large Environments Using Textured Clusters”. Proceedings of the 1995 symposium
on Interactive 3D graphics, 1995, Page 95
[Max1995] N. Max. “Optical Models for Direct Volume Rendering”, IEEE Transactions on Visualization and Computer Graphics, vol. 1 no. 2,
June 1995.
[Nishita1996] T. Nishita, Y. Dobashi, E. Nakamae. “Display of Clouds Taking into Account Multiple Anisotropic Scattering and Sky Light.”
SIGGRAPH 1996, pp. 379-386.
[Perlin1985] K. Perlin. An Image Synthesizer. SIGGRAPH 1985, pp. 287-296.
[Reeves1983] W. Reeves. “Particle Systems – A Technique for Modeling a Class of Fuzzy Objects”. ACM Transactions on Graphics, Vol. 2,
No. 2. April 1983. ACM.
[Schaufler1995] G. Schaufler, “Dynamically Generated Impostors”, GI Workshop Modeling - Virtual Worlds - Distributed Graphics, 1995, pp
129-136.
[Westover1990] L. Westover. “Footprint Evaluation for Volume Rendering”. SIGGRAPH 1990, pp. 367-376.
Figure 9: Shading with multiple forward scattering.
Figure 10: Shading with only single scattering.
Figure 11: Clouds with anisotropic scattering.
Figure 12: Clouds with isotropic scattering.
Figure 13: An example of shading from two light sources to simulate skylight. This scene was rendered with
two lights, one orange and one pink. Anisotropic scattering simulation accentuates the light coming from
different directions. See section 2.