You could think of this as a lightweight version of SpeedTree: distant trees are drawn as impostors, which makes it possible to render large-scale forests.
Introduction
This blog series discusses some ideas and issues around rendering foliage. Why? I’m on Intel’s Game Technology Development team. We make samples around gaming on Intel platforms. Hardware and APIs constantly evolve. We ship small example programs (with source) showing how game developers might use the new features. This often takes the form of: “The new features allow game developers to improve existing technology”. Foliage often comes up in these discussions. It’s a common feature with interesting and unique technical challenges.
This first version is a baseline; it doesn’t focus on specific HW or API features. The intent is for future samples to show how it can be improved with new HW and/or API features. Note also that, while it doesn’t exactly use “programmer art”, it does use “prototype art”. We use only two tree models, and two grass models. A real game would use many more of each. A production-ready system would also have a more-complete feature set. We’ll discuss some of them later in the series.
The project is available here: http://github.com/DougMcNabbIntel/Foliage
Figure 1 Foliage screenshot
What’s So Special About Rendering Foliage?
The brute-force approach is to issue a separate draw call for each object. That’s simple, but not efficient. The cost of independently rendering every object would quickly add up, limiting our scene to a few thousand visible objects per frame. Filling the world with unique objects would also limit the total number of objects we could have; memory is limited. With some thought and work, though, we can fill our world with far more objects than brute force allows.
There are many possible foliage scenarios. We focus on the challenge of covering a relatively-large world; e.g., an “open world” game where the player isn’t “on rails”. Other approaches could easily be more appropriate if the camera is sufficiently constrained.
Our first specialization is to split the work in two:
- 3D objects near the camera
- 2D billboard proxies for distant objects, and for small objects near the camera
Both have interesting characteristics, and present different challenges. Combining them to form a seamless experience adds more challenges – increasing the “degree of difficulty”. Both of them need to support the following:
- Lighting
- Shadows
- Shading
- Transparency Sorting
3D Objects
We render objects that are close to the camera as more-or-less "regular" 3D objects. A brute-force approach would have the artist fill the world with unique objects. This would, of course, be too expensive; it would require more memory and time than we have.
Instancing increases efficiency. We use simple instancing: we store each object only once, but draw it multiple times. We don’t yet use the API’s instancing support to draw multiple instances with a single call; we leave that for a future sample, to illustrate the relative value of that approach.
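As a rough sketch of what that simple instancing looks like (the Mesh, Instance, and DrawMesh names here are illustrative, not the sample’s actual API):

#include <vector>

// Illustrative types only; the sample's real classes are richer than this.
struct Float3   { float x, y, z; };
struct Mesh     { /* vertex/index buffers, material, ... */ };
struct Instance { Float3 position; float rotation; };

// Stand-in for setting per-instance constants and issuing one draw call.
void DrawMesh(const Mesh& mesh, const Instance& instance) { /* ... */ }

// The mesh is stored once, but drawn once per instance (no API instancing yet).
void DrawInstances(const Mesh& mesh, const std::vector<Instance>& instances)
{
    for (const Instance& instance : instances)
        DrawMesh(mesh, instance);
}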
We’ll explore the details in upcoming blog posts.
2D Billboard Proxies
We render distant objects as two-triangle ("quad"), camera-facing billboards. We collect them into "patches". Each patch contains thousands of objects, covering a rectangular area on the ground. As expected, this is much faster than separately rendering the individual objects, with thousands of tiny triangles each. It isn’t all roses though. These billboard proxies are different from their corresponding 3D objects. We do what we can to have them both produce the same visual result. But, the results are different. Minimizing artifacts when transitioning between 3D objects and 2D billboards is a significant challenge.
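To make “camera-facing billboard” concrete, here’s a minimal sketch of building one quad’s corners on the CPU from the camera’s right and up vectors. The names and the CPU-side approach are illustrative; this isn’t necessarily how the sample builds its billboards.

#include <array>

struct Float3 { float x, y, z; };

static Float3 operator+(Float3 a, Float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Float3 operator-(Float3 a, Float3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Float3 operator*(Float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }

// Four corners of a camera-facing quad, given its center and half-extents.
// Two triangles, (0,1,2) and (0,2,3), cover the quad.
std::array<Float3, 4> BuildBillboardCorners(Float3 center, float halfWidth, float halfHeight,
                                            Float3 cameraRight, Float3 cameraUp)
{
    Float3 right = cameraRight * halfWidth;
    Float3 up    = cameraUp    * halfHeight;
    return { center - right - up,     // bottom-left
             center + right - up,     // bottom-right
             center + right + up,     // top-right
             center - right + up };   // top-left
}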
Part 2
Introduction
This blog post discusses how we organize our foliage data with patches. The previous blog post mentioned that we separate foliage rendering into full-3D objects near the camera, and 2D billboard proxies farther away (and for small objects near the camera). The patch data described here drives our rendering of both the 2D and 3D objects.
Aside: We render plants. I intentionally use the generic term “object” instead of “plant” because we could render other kinds of things: like rocks, twigs, mushrooms, trash, etc. Though, “plant” will probably sneak in here and there.
Foliage Patch Regular Grid
A foliage patch is a regular grid, with one object placed randomly in each cell. Figure 1 shows a simplified view of a foliage patch; we illustrate here with an 8x8 grid (64 objects), but the sample uses 128x128 (16K objects).
Figure 1 – Foliage patch with and without grid lines
We place each object in a grid cell. This has some appealing properties: It gives an even distribution, reducing the variance for the distance between objects; a fully-random distribution can clump objects together, or even place them right on top of each other. It also simplifies sorting our objects. We want to sort our objects for correct transparency compositing; correct alpha-blending requires we render distant objects before close objects.
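Here’s a minimal sketch of that placement: one object per cell, with a random offset inside the cell. Random01 is a stand-in for the sample’s actual random-number source.

#include <cstdlib>
#include <vector>

struct Float2 { float x, y; };

// Stand-in random helper: uniform value in [0, 1).
static float Random01() { return std::rand() / (RAND_MAX + 1.0f); }

// One object per grid cell, jittered within that cell. Objects stay evenly
// distributed, but never stack on top of each other.
std::vector<Float2> PlaceObjects(int gridWidth, int gridHeight, float cellSize)
{
    std::vector<Float2> positions;
    positions.reserve(static_cast<size_t>(gridWidth) * gridHeight);
    for (int row = 0; row < gridHeight; ++row)
        for (int col = 0; col < gridWidth; ++col)
            positions.push_back({ (col + Random01()) * cellSize,
                                  (row + Random01()) * cellSize });
    return positions;
}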
The regular grid enables us to analytically sort. We’ll cover this approach in greater detail in a later post. Here’s a quick summary for now: we use a function to directly compute sort-order, instead of spending time (and power and bandwidth) performing an iterative general-purpose sort. We do something like:
sortedIndex = ComputeSortedIndex(camera, renderOrderIndex)
Given the camera’s position and look vector, along with the render-order index, we directly compute the grid index. Example:
ComputeSortedIndex(camera, 0)
Tells us which object we should draw first. This is a relatively-big topic, so it will get its own blog entry.
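Still, here’s the flavor of the idea for a camera looking roughly along one of the grid axes. This simplified sketch (and its names) is mine, not the sample’s implementation:

struct CameraLook { float x, y; };   // ground-plane components of the look vector

// Simplified illustration: walk the grid back-to-front by flipping the row and
// column traversal directions to match the camera's look direction. The sample's
// real analytic sort also handles diagonal view directions correctly.
int ComputeSortedIndexSketch(CameraLook look, int renderOrderIndex, int gridWidth, int gridHeight)
{
    int row = renderOrderIndex / gridWidth;
    int col = renderOrderIndex % gridWidth;

    // Looking toward +Y means high-numbered rows are farthest, so draw them first.
    if (look.y > 0.0f) row = (gridHeight - 1) - row;
    if (look.x > 0.0f) col = (gridWidth  - 1) - col;

    return row * gridWidth + col;
}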
We render our 2D billboards a patch at-a-time. We use the same data for rendering our 3D objects, but render them one object at a time. This provides some desired load-balancing: Individually drawing thousands of two-triangle billboards would be prohibitively expensive. Drawing all of the patch’s objects as full-3D models, with thousands of triangles each, would also be a big waste.
Arrays of Patches
We cover our world with a 2D array of patches. This is similar to texture tiling, but in practice it doesn’t generate repeating-pattern visual artifacts; each patch contains thousands of pseudo-randomly placed objects, so the repetition isn’t obvious. Figure 2 shows an example 4x4 array of repeated patches.
Figure 2 – 2D Array of patches
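A minimal sketch of this tiling, mapping a world-space position to a repeated patch and to patch-local coordinates (assuming square patches of patchSize units; the names are illustrative):

#include <cmath>

struct PatchAddress { int patchX, patchY; float localX, localY; };

// Which repeated patch covers this world position, and where inside that patch?
PatchAddress WorldToPatch(float worldX, float worldY, float patchSize)
{
    PatchAddress result;
    result.patchX = static_cast<int>(std::floor(worldX / patchSize));
    result.patchY = static_cast<int>(std::floor(worldY / patchSize));
    result.localX = worldX - result.patchX * patchSize;
    result.localY = worldY - result.patchY * patchSize;
    return result;
}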
Multiple Object Types
We support multiple object types by overlaying grids with different cell sizes (Figure 3). Here we see the larger, blue objects in larger grids. These grids have the same number of objects as their smaller counterparts, but the objects are larger, and the spacing between them is larger too.
Figure 3 – Patches of larger, blue objects overlaying the smaller green plants
We have some options for what this means in practice. We could separately render the differently-sized patches – e.g., render the smallest objects, followed by the next-larger objects, etc. Or, we could fill the patches with the differently-sized objects and render them together. We chose the latter for this sample.
However, rendering the patches separately has some advantages; we should be aware of the tradeoffs and what we’re giving up. We fade objects with distance; smaller objects fade out when they’re relatively close to the camera, and larger objects fade when they’re relatively far away (roughly the same screen-space size). Separately rendering by size simplifies this process, as we can easily cull entire patches when they’re beyond the far-fade distance. But, then differently-sized objects would render at different times. Sorting objects with respect to each other requires rendering them together. We want to sort them as correctly as we can, so we choose to combine them.
Figure 4 – Storing objects at smallest grid spacing
Figure 4 shows how larger objects cover cells that are already occupied by smaller objects. The final patch contains only one object per cell. We also see that the larger objects exceed the bounds of the small grid cells. This can introduce sorting errors, but they are minimal in practice. We’ll cover that more in a later blog post.
The sample populates all patches at create time. A desired future feature is to populate them on-demand, as the camera moves about the scene and patches come in and out of view. Our patch-populating code is deterministic for this reason, but we haven’t yet implemented on-demand populating.
The sample uses a simple, naïve mechanism for populating the patch. It places all of the small objects, followed by the next-larger objects, followed by the next larger, etc. This isn’t ideal. We would like to specify the overall probability that an object is placed, without it being affected by subsequently-placed objects. This is an area for future development.
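A naive sketch of that population order, using made-up object types and spacing ratios: larger types sit on a coarser grid and overwrite whatever smaller object already occupies the cell.

#include <vector>

enum ObjectType { kGrass = 0, kBush = 1, kTree = 2 };   // illustrative types

// Smallest objects first, then larger ones at a coarser stride overwrite them.
// The final patch has exactly one object per (smallest-grid) cell.
std::vector<ObjectType> PopulatePatch(int gridWidth, int gridHeight)
{
    std::vector<ObjectType> cells(static_cast<size_t>(gridWidth) * gridHeight, kGrass);

    const int bushStride = 4;    // assumed spacing ratios, for illustration only
    const int treeStride = 16;

    for (int row = 0; row < gridHeight; row += bushStride)
        for (int col = 0; col < gridWidth; col += bushStride)
            cells[row * gridWidth + col] = kBush;

    for (int row = 0; row < gridHeight; row += treeStride)
        for (int col = 0; col < gridWidth; col += treeStride)
            cells[row * gridWidth + col] = kTree;

    return cells;
}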
Random Properties and Probability Maps
We randomize some object properties. We currently support randomizing position, scale (size and height:width ratio), and probability. Note: we randomize scale only for objects that are always 2D. We need to add support for scaling the 3D objects to match their 2D billboards, and then we can randomize their scales and aspect-ratios too. This isn’t difficult, but we haven’t spent the time to do it yet.
We leave randomizing other object properties (e.g., color) for future features.
Figure 5 – Probability Map Example. Before, after with map, and after without map.
What about places where we don’t want objects (e.g., roads, rivers, rocky areas, etc.)? What if we want a meadow with lots of grass, but no trees? We specify these areas with probability maps. Figure 5 shows a probability map in action. We draw no objects where the probability map is black. We draw ~ 50% of the objects where the probability map is 50% grey. Each object stores a random value between 0.0 and 1.0. If the probability map’s value at the object’s location is less than the object’s stored value, then we cull the object. Otherwise we draw it. We support different probability maps for each object type.
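The cull test amounts to a comparison like the sketch below. SampleProbabilityMap is a hypothetical stand-in for sampling the object type’s map (e.g., grass_PM.dds) at the object’s location.

// Stub for illustration: a real implementation samples the probability-map texture.
static float SampleProbabilityMap(float /*x*/, float /*y*/) { return 1.0f; }

// Each object stores a random value in [0, 1]. We draw the object only where the
// map's value is at least that stored value: black (0.0) culls everything,
// white (1.0) keeps everything, 50% grey keeps roughly half.
bool ShouldDrawObject(float objectX, float objectY, float objectRandomValue)
{
    return SampleProbabilityMap(objectX, objectY) >= objectRandomValue;
}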
Data Driven
We specify these data here: http://github.com/DougMcNabbIntel/Foliage/blob/master/Foliage/Media/Foliage/TreesAndGrass.fol
Here’s an example from that file (with some unrelated bits removed):
[plant 1]
probabilityTexture = grass_PM.dds
plantSpacingWidth = 3.0
plantSpacingHeight = 3.0
sizeMin = 2.0
sizeMax = 3.0
widthRatioMin = 1.0
widthRatioMax = 2.5
We use grass_PM.dds for this object’s probability (probabilityTexture). We place these objects 3.0 feet apart on average (plantSpacingWidth and plantSpacingHeight). We randomly vary the height between 2.0 and 3.0 feet (sizeMin and sizeMax). We randomly vary the aspect ratio between a square, and a rectangle that’s 2.5X wider than it is tall (widthRatioMin and widthRatioMax).
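For illustration, here’s how those values might drive per-object randomization. This is my reading of the parameters, and the helper names are hypothetical.

#include <cstdlib>

// Stand-in random helper: uniform value in [lo, hi).
static float RandomRange(float lo, float hi)
{
    return lo + (hi - lo) * (std::rand() / (RAND_MAX + 1.0f));
}

struct PlantSize { float height, width; };

// Height between sizeMin and sizeMax; width = height * ratio, with the ratio
// between widthRatioMin (square) and widthRatioMax (2.5x wider than tall).
PlantSize RandomizePlantSize()
{
    const float sizeMin = 2.0f, sizeMax = 3.0f;              // from the [plant 1] example
    const float widthRatioMin = 1.0f, widthRatioMax = 2.5f;

    PlantSize size;
    size.height = RandomRange(sizeMin, sizeMax);
    size.width  = size.height * RandomRange(widthRatioMin, widthRatioMax);
    return size;
}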
Wrapping Up
We have a lot more to talk about. But, this is a reasonable place to wrap up this blog post. Stay tuned for how we place objects on the ground, how we render billboards, how we light and shadow, how we manage the transition between 2D billboard proxies and full-3D objects, and how we perform our analytic sort.
Post-Script
I used 3dsMax and MaxScript to make the pictures. Here’s the script:
spheres = #()
ww = 8
hh = 8
radius = 2
spacingX = 16
spacingY = 16
randScaleX = (spacingX/2 - radius)
randScaleY = (spacingY/2 - radius)
startX = spacingX * -(ww+1)/2
startY = spacingY * -(hh+1)/2
start = [startX, startY, 0]
index = 1
for row in 1 to hh do
(
    for col in 1 to ww do
    (
        -- Random offset within the cell, keeping the sphere inside the cell bounds
        randX = random -randScaleX randScaleX
        randY = random -randScaleY randScaleY
        -- One sphere per grid cell, jittered by the random offset
        location = start + [col*spacingX + randX, row*spacingY + randY, 0]
        spheres[index] = sphere radius:radius pos:location
        index += 1
    )
)