Best Practices for Working with Texture Data
Texture data is often the largest portion of the data your application uses to render a frame; textures provide the detail required to present great images to the user. To get the best possible performance out of your application, you must manage your application’s textures carefully. To summarize the guidelines:
- Reduce the amount of memory your textures use.
- Create your textures when your application is initialized, and never change them in the rendering loop.
- Combine smaller textures into a larger texture atlas.
- Use mipmaps to reduce the bandwidth required to fetch texture data.
- Use multitexturing to perform texturing operations in a single pass.
Reduce Texture Memory Usage
Reducing the amount of memory your iOS application uses is always an important part of tuning your application. However, an OpenGL ES application is also constrained in the total amount of memory it can use to load textures. iOS devices that use the PowerVR MBX graphics hardware have a limit on the total amount of memory they can use for textures and renderbuffers (described in “PowerVR MBX”). Where possible, your application should always try to reduce the amount of memory it uses to hold texture data. Reducing the memory used by a texture is almost always at the cost of image quality, so your application must balance any changes it makes to its textures with the quality level of the final rendered frame. For best results, try the different techniques described below, and choose the technique that provides the best memory savings at an acceptable quality level.
Compress Textures
Texture compression usually provides the best balance of memory savings and quality. OpenGL ES for iOS supports the PowerVR Texture Compression (PVRTC) format by implementing the GL_IMG_texture_compression_pvrtc extension. There are two levels of PVRTC compression, 4 bits per pixel and 2 bits per pixel, which offer 8:1 and 16:1 compression ratios over the uncompressed 32-bit texture format, respectively. A compressed PVRTC texture still provides a decent level of quality, particularly at the 4-bit level.
Important: Future Apple hardware may not support the PVRTC texture format. You must test for the existence of the GL_IMG_texture_compression_pvrtc extension before loading a PVRTC compressed texture. For more information on how to check for extensions, see “Check for Extensions Before Using Them.” For maximum compatibility, your application may want to include uncompressed textures to use when this extension is not available.
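The extension string returned by glGetString(GL_EXTENSIONS) is a space-separated list, so a plain substring search can give false positives when one extension name is a prefix of another. A small helper along these lines can perform the check; the function name and structure here are illustrative, not part of the OpenGL ES API:

```c
#include <string.h>

/* Returns 1 if `name` appears as a complete, space-delimited token in the
 * extension string, 0 otherwise. A bare strstr() is not sufficient because
 * one extension name can be a prefix of another. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions) || (p[-1] == ' ');
        int ends = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}
```

In a running application, you would pass the result of glGetString(GL_EXTENSIONS) as the first argument, and load the PVRTC data with glCompressedTexImage2D only when the check succeeds.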
For more information on compressing textures into PVRTC format, see “Using texturetool to Compress Textures.”
Use Lower-Precision Color Formats
If your application cannot use compressed textures, consider using a lower precision pixel format. A texture in RGB565, RGBA5551, or RGBA4444 format uses half the memory of a texture in RGBA8888 format. Use RGBA8888 only when your application needs that level of quality.
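The savings are straightforward arithmetic: a 16-bit format stores each texel in half the bytes of a 32-bit format, and (as discussed below) halving both texture dimensions quarters the footprint. A rough budgeting sketch, with a hypothetical helper name:

```c
#include <stddef.h>

/* Bytes needed for one (unmipmapped) texture level, given bits per texel:
 * 32 for RGBA8888; 16 for RGB565, RGBA5551, or RGBA4444. This is a
 * budgeting sketch only; the actual format is chosen by the `format` and
 * `type` arguments to glTexImage2D. */
static size_t texture_bytes(size_t width, size_t height, size_t bits_per_texel)
{
    return width * height * (bits_per_texel / 8);
}
```

For example, a 512 × 512 RGBA8888 texture occupies 1 MB, while the same texture in RGB565 occupies 512 KB.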
Use Properly Sized Textures
The images that an iOS-based device displays are very small. Your application does not need to provide large textures to present acceptable images to the screen. Halving both dimensions of a texture reduces the amount of memory needed for that texture to one-quarter that of the original texture.
Before shrinking your textures, attempt to compress the texture or use a lower-precision color format first. A texture compressed with the PVRTC format usually provides higher image quality than shrinking the texture—and it uses less memory too!
Load Textures During Initialization
Creating and loading textures is an expensive operation. For best results, avoid creating new textures while your application is running. Instead, create and load your texture data during initialization. Be sure to dispose of your original images once you’ve finished creating the texture.
Once your application creates a texture, avoid changing it except at the beginning or end of a frame. Currently, all iOS devices use a tile-based deferred renderer that makes calls to the glTexSubImage and glCopyTexSubImage functions particularly expensive. See “Tile-Based Deferred Rendering” for more information.
Combine Textures into Texture Atlases
Binding to a texture takes time for OpenGL ES to process. Applications that reduce the number of changes they make to OpenGL ES state perform better. For textures, one way to avoid binding to new textures is to combine multiple smaller textures into a single large texture, known as a texture atlas. A texture atlas allows your application to bind a single texture and then make multiple drawing calls that use that texture. The texture coordinates provided in your vertex data are modified to select the smaller portion of the texture from within the atlas.
Texture atlases have a few restrictions:
- You cannot use a texture atlas if you are using the GL_REPEAT texture wrap parameter.
- Filtering may sometimes fetch texels outside the expected range. To use those textures in a texture atlas, you must place padding between the textures that make up the texture atlas.
- Because the texture atlas is still a texture, it is subject to the limitations of the OpenGL ES implementation, in particular the maximum texture size allowed by the implementation.
Use Mipmapping to Reduce Memory Bandwidth
Your application should provide mipmaps for all textures except those being used to draw 2D unscaled images. Although mipmaps use additional memory, they prevent texturing artifacts and improve image quality. More importantly, when the smaller mipmaps are sampled, fewer texels are fetched from texture memory which reduces the memory bandwidth needed by the graphics hardware, improving performance.
The GL_LINEAR_MIPMAP_LINEAR filter mode provides the best quality when texturing but requires additional texels to be fetched from memory. Your application can trade some image quality for better performance by specifying the GL_LINEAR_MIPMAP_NEAREST filter mode instead.
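The additional memory that mipmaps require is bounded: because each level halves both dimensions, a full chain adds roughly one third to the base level's footprint. A small sketch (hypothetical helper name) makes the cost concrete:

```c
#include <stddef.h>

/* Total texel count of a complete mipmap chain for a square power-of-two
 * texture. Each level halves both dimensions down to 1x1, so the whole
 * chain costs about 4/3 of the base level alone. */
static size_t mipmap_chain_texels(size_t base_size)
{
    size_t total = 0;
    for (size_t s = base_size; s >= 1; s /= 2) {
        total += s * s;
        if (s == 1)
            break;
    }
    return total;
}
```

For a 256 × 256 base level (65,536 texels), the full chain is 87,381 texels, an overhead of about 33 percent.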
When combining mipmaps with texture atlases, use the APPLE_texture_max_level extension to control how your textures are filtered.
Use Multitexturing Instead of Multiple Passes
Many applications perform multiple passes to draw a model, altering the configuration of the graphics pipeline for each pass. This not only requires additional time to reconfigure the graphics pipeline, but it also requires vertex information to be reprocessed for every pass, and pixel data to be read back from the framebuffer on later passes.
All OpenGL ES implementations on iOS support at least two texture units, and most devices support up to eight. Your application should use these texture units to perform as many steps as possible in your algorithm in each pass. You can retrieve the number of texture units available to your application by calling the glGetIntegerv function, passing in GL_MAX_TEXTURE_UNITS as the parameter.
If your application requires multiple passes to render a single object:
- Ensure that the position data remains unchanged for every pass.
- On the second and later passes, test for pixels that are on the surface of your model by calling the glDepthFunc function with GL_EQUAL as the parameter.
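These two steps work together: GL_EQUAL passes a fragment only when its depth exactly matches the value the first pass wrote, which is why the position data must be bit-identical between passes. The conceptual simulation below is plain C, not actual GL calls:

```c
/* Conceptual simulation of the two-pass depth strategy; not real GL code.
 * Pass 1 renders normally (LESS) and writes depth; pass 2 uses EQUAL, so
 * it shades only fragments whose depth matches pass 1 exactly. */
typedef enum { DEPTH_LESS, DEPTH_EQUAL } DepthFunc;

/* Returns 1 if the incoming fragment passes the depth test. */
static int depth_test(DepthFunc func, float stored, float incoming)
{
    return (func == DEPTH_LESS) ? (incoming < stored) : (incoming == stored);
}
```

Even a tiny numerical drift in the transformed positions produces a depth that is no longer exactly equal, so the later pass would silently drop those fragments.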
Configure Texture Parameters Before Loading Texture Image Data
Always set any texture parameters before loading texture data, as shown in Listing 10-1. By setting the parameters first, OpenGL ES can optimize the texture data it provides to the graphics hardware to match your settings.
Listing 10-1  Loading a new texture

glGenTextures(1, &spriteTexture);
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);