One of the great myths concerning computers is that one day we will have enough processing power. Even in a relatively simple application like word processing, we find that additional power can be applied to all sorts of features, such as on-the-fly spell and grammar checking, antialiased text display, voice recognition and dictation, etc.
In real-time rendering, we have at least four performance goals: more frames per second, higher resolution and sampling rates, more realistic materials and lighting, and increased complexity. A speed of 60-85 frames per second is generally considered a fast enough frame rate; see the introduction for more on this topic. Even with motion blurring (Section 10.14), which can lower the frame rate needed for image quality, a fast rate is still needed to minimize latency when interacting with a scene.
IBM’s T221 LCD display offers 3840x2400 resolution, with 200 dots per inch (dpi). A resolution of 1200 dpi, 36 times the T221’s density, is offered by many printer companies today. A problem with the T221 is that it is difficult to update this many pixels at interactive rates. Perhaps 1600x1200 is enough resolution for a while. Even with a limit on screen resolution, antialiasing and motion blur increase the number of samples needed for generating high-quality images. As discussed in Section 18.1.1, the number of bits per color channel can also be increased, which drives the need for higher precision (and therefore more costly) computations.
As previous chapters have shown, describing and evaluating an object’s material can be computationally complex. Modeling the interplay of light and surface can absorb an arbitrarily high amount of computing power. This is true because an image should ultimately be formed by the contributions of light traveling down a limitless number of paths from an illumination source to the eye.
Frame rate, resolution, and shading can always be made more complex, but there is some sense of diminishing returns to increasing any of these. There is no real upper limit on scene complexity, however. The rendering of a Boeing 777 includes 132,500 unique parts and over 3,000,000 fasteners, which would yield a polygonal model with over 500,000,000 polygons. See Figure 14.1.
Even if most of these objects are not seen due to size or position, some work must be done to determine that this is the case. Neither Z-buffering nor ray tracing can handle such models without the use of techniques to reduce the sheer number of computations needed.
Our conclusion: acceleration algorithms will always be needed.
In this chapter, a smorgasbord of algorithms for accelerating computer graphics rendering, in particular the rendering of large amounts of geometry, will be presented and explained. The core of many such algorithms is based on spatial data structures, which are described in the next section. Based on that knowledge, we then continue with culling techniques. These are algorithms that try to find out which objects are at all visible and need to be treated further. Level of detail techniques reduce the complexity of rendering the remaining objects. Finally, systems for rendering very large models, and point rendering algorithms, are briefly discussed.
14.1 Spatial Data Structures
A spatial data structure is one that organizes geometry in some n-dimensional space. Only two- and three-dimensional structures are used in this book, but the concepts can often easily be extended to higher dimensions. These data structures can be used to accelerate queries about whether geometric entities overlap. Such queries are used in a wide variety of operations, such as culling algorithms, during intersection testing and ray tracing, and for collision detection.
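To make the overlap-query idea concrete, the following is a minimal sketch (not taken from any particular system) of an axis-aligned bounding box in C++, the kind of bounding volume that grids, bounding volume hierarchies, and octrees typically store per node so that large groups of geometry can be rejected cheaply. The struct and member names are illustrative choices, not a standard API.

#include <algorithm>

// Minimal axis-aligned bounding box (AABB) in three dimensions.
// Spatial data structures usually keep one of these per node or per
// object; an overlap query against it can reject whole subtrees of
// geometry before any per-triangle work is done.
struct AABB {
    float min[3];
    float max[3];

    // True if the two boxes overlap on all three axes.
    bool overlaps(const AABB& other) const {
        for (int i = 0; i < 3; ++i) {
            if (max[i] < other.min[i] || other.max[i] < min[i])
                return false;
        }
        return true;
    }

    // Grow this box so that it also encloses 'other'; useful when
    // building a hierarchy of boxes bottom-up.
    void expand(const AABB& other) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], other.min[i]);
            max[i] = std::max(max[i], other.max[i]);
        }
    }
};

The overlap test runs in constant time regardless of how much geometry the box encloses, which is precisely why the hierarchical structures discussed in the following subsections can answer visibility, intersection, and collision queries so much faster than testing every primitive directly.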