OpenGL FAQ



What is OpenGL?

OpenGL stands for Open Graphics Library. It is a specification of an API for rendering graphics, usually in 3D. OpenGL implementations are libraries that implement the API defined by the specification.

Graphics cards usually come with an OpenGL implementation. Because the OpenGL specification is not platform-specific, it is possible to write an application that will work across many different types of graphics cards. It also increases the chance that the application will continue to work when new hardware becomes available.

What is NOT OpenGL?

The OpenGL API only deals with rendering graphics. OpenGL does not provide functions for animation, timing, file I/O, image file format processing, GUI, and so forth; it is concerned only with rendering.

GLUT is not OpenGL. It is not a part of OpenGL; it is simply a 3rd party library that some users use to create an OpenGL window.

Who maintains the OpenGL specification?

The specification is maintained by the OpenGL Architecture Review Board (ARB).

Is OpenGL Open Source?

No, OpenGL doesn't have any source code. GL is a specification, which can be found on this website. It describes the interface the programmer uses and the expected behavior. OpenGL is an open specification: anyone can download it for free, as opposed to ISO standards and specifications, which cost money to access.

There is an Open Source implementation of GL called Mesa3D. It announces itself as implementing OpenGL 3.0 and GLSL 1.30.

Where can I download OpenGL?

Just like the "Open Source?" section explains, OpenGL is not a software product. it is a specification.

On Mac OS X, Apple's OpenGL implementation is included.

On Windows, companies like nVidia and AMD/ATI use the spec to write their own implementation, so OpenGL is included in the drivers that they supply. For laptop owners, however, you'll need to visit the manufacturer of your laptop and download the drivers from them.

Where can I download OpenGL? (continued)

Updating your graphics drivers is usually enough to get the latest OpenGL implementation for your graphics hardware. This is sufficient for those who want to use applications that require OpenGL.

For programmers, installing drivers is generally insufficient. You will need to load the OpenGL function pointers, either manually or automatically with a library. More information on this can be found in the Getting started page.

Is there an OpenGL SDK?

There is no actual OpenGL SDK. There is a collection of websites, some (outdated) documentation, and links to tutorials, all found here. But it is not an SDK of the kind you are thinking about.

NVIDIA and ATI have their own SDKs, both of which have various example code for OpenGL.

What platforms have GL?

  • Windows: 95 and above
  • Mac OS X: all versions
  • Linux: OpenGL is provided by open source drivers and the Mesa library, or by proprietary drivers.
  • FreeBSD: OpenGL is provided by open source drivers and the Mesa library, or by proprietary Nvidia drivers.

OpenGL ES is often supported on embedded systems, but OpenGL ES is a different API from regular OpenGL.

What is an OpenGL context and why do you need a window to do GL rendering?


The GL context comprises resources: driver resources in RAM, assigned texture IDs, assigned VBO IDs, enabled states (GL_BLEND, GL_DEPTH_TEST) and many other things. Think of the GL context as some memory allocated by the driver to store information about the state of your GL program.

You must create a GL context in order for your GL function calls to make sense. You can't just write a minimal program such as this:

#include <GL/gl.h>

int main(int argc, char **argv)
{
    //These calls return nothing useful: no GL context has been created!
    const char *GL_version  = (const char *)glGetString(GL_VERSION);
    const char *GL_vendor   = (const char *)glGetString(GL_VENDOR);
    const char *GL_renderer = (const char *)glGetString(GL_RENDERER);
    return 0;
}

In the above, the programmer simply wants to get information about the system (without rendering anything), but it won't work, because no communication has been established with the GL driver. The GL driver also needs to allocate resources with respect to the window, such as a backbuffer. Based on the pixelformat you have chosen, there can be a color buffer with some format such as GL_BGRA8. There may or may not be a depth buffer, and the depth might contain 24 bits. There might be an 8-bit stencil. There might be an accumulation buffer. Perhaps the pixelformat you have chosen can do multisampling. Up until now, no one has introduced a windowless context.

You must create a window. You must select a pixelformat. You must create a GL context. You must make the GL context current (wglMakeCurrent for Windows and glXMakeCurrent for *nix).
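For reference, here is a minimal sketch of those steps on Windows (WGL), assuming a valid window handle hwnd already exists; error handling is omitted:

HDC hdc = GetDC(hwnd);

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;
pfd.cStencilBits = 8;

int format = ChoosePixelFormat(hdc, &pfd);  //select a matching pixelformat
SetPixelFormat(hdc, format, &pfd);

HGLRC ctx = wglCreateContext(hdc);          //create the GL context
wglMakeCurrent(hdc, ctx);                   //make it current on this thread

//glGetString and all other GL calls are now meaningful.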

How do I do offscreen rendering?

Some people want to do offscreen rendering without showing a window to the user. The only solution is to create a window and make it invisible: select a pixelformat, create a GL context, and make the context current. Now you can make GL function calls. You should create an FBO and render to that; if you instead choose to render to the backbuffer of the invisible window, there is a risk that it won't work.
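A minimal FBO sketch (core in GL 3.0, or via ARB_framebuffer_object), rendering into a texture instead of the window; the 512x512 size is just an example and error handling is omitted:

GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    //Render here; results go to the texture, not the window.
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  //back to the default framebuffer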

How Does It Work On Windows?

All Windows versions support OpenGL.

When you compile an application, you link with opengl32.lib, the import library for opengl32.dll (even on Win64).

When you run your program, opengl32.dll gets loaded and it checks the Windows registry for a true GL driver. If there is one, it loads it. For example, ATI's GL driver name starts with atioglxx.dll and NVIDIA's GL driver is nvoglv32.dll. The actual names can change between driver releases.

The Microsoft Windows DLL opengl32.dll only directly exposes OpenGL 1.1 functions. To gain access to functions from higher GL versions, you must load these function pointers manually with wglGetProcAddress. The details of this process are explained elsewhere on this wiki.
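For illustration, a hedged sketch of loading one post-1.1 function manually; the PFNGLCREATESHADERPROC typedef comes from glext.h, and a current GL context must already exist when wglGetProcAddress is called:

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   //from the OpenGL Registry; provides the typedef

PFNGLCREATESHADERPROC myglCreateShader =
    (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
if (myglCreateShader)
{
    GLuint vs = myglCreateShader(GL_VERTEX_SHADER);
}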

There are several helper libraries that take care of this, commonly called extension loading libraries.

The important thing to know is that opengl32.dll belongs to Microsoft. No one can modify it. You must not replace it; you must not ship your application with this file. You must not ship nvoglv32.dll or any other system file either.

It is the responsibility of the user to install the drivers made available by Dell, HP, nVidia, ATI/AMD, Intel, SiS, and so on. Though feel free to remind them to do so.

How do I tell what version of OpenGL I'm using?

Use the function glGetString, with GL_VERSION passed as argument. This will return a null-terminated string. Be careful when copying this string into a fixed-length buffer, as it can be fairly long.

Alternatively, you can use glGetIntegerv(GL_MAJOR_VERSION, *) and glGetIntegerv(GL_MINOR_VERSION, *). These require GL 3.0 or greater.
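For example (a minimal sketch, assuming a current GL context):

const char *version = (const char *)glGetString(GL_VERSION);
printf("GL_VERSION: %s\n", version);

//On GL 3.0 and above, the numeric version can be queried directly:
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("GL %d.%d\n", major, minor);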

In order to get the latest version that your GPU supports, make sure that you update your video drivers; GL support is included in your video card's drivers. You might still notice that your GL version is, for example, 2.1, and wonder how to get something newer. It depends on your GPU: it is possible that your GPU doesn't support anything higher, in which case the manufacturer of your video card doesn't provide a newer implementation. You can then either buy a new video card or try Mesa3D (which includes a software renderer): http://www.mesa3d.org

Why is my GL version only 1.4 or lower?

There are three reasons you may get an unexpectedly low OpenGL version.

On Windows, you might get a low GL version if, during context creation, you use an unaccelerated pixel format. This means you get Windows' default software implementation of OpenGL, which is version 1.1.

The solution to this is to be more careful in your pixel format selection. More information can be found at Platform_specifics:_Windows and other parts of the Wiki.

The second reason is that the makers of your video card (and therefore the makers of your video drivers) do not provide an up-to-date OpenGL implementation. There are a number of defunct graphics card vendors out there; among the surviving ones, this is most likely to happen with Intel's integrated GPUs.

Intel does not provide a proper, up-to-date OpenGL implementation for their integrated GPUs. There is nothing that can be done about this. NVIDIA and ATI provide good support for their integrated GPUs.

The third reason is that you haven't installed your video card's drivers after installing your OS.

Be sure to query OpenGL with glGetString and make sure the returned values make sense.

Are glTranslate/glRotate/glScale hardware accelerated?

No; there are no known GPUs that execute these functions in hardware, and they are deprecated as of GL 3.0. You should have your own math library, build your own matrices, and upload them to your shaders. There are libraries you can use for this.
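For illustration, a minimal sketch of uploading a hand-built matrix; program and the uniform name "uModel" are hypothetical placeholders:

GLfloat model[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    2.0f, 3.0f, 0.0f, 1.0f   //translation lives in the last column (column-major)
};
GLint loc = glGetUniformLocation(program, "uModel");
glUniformMatrix4fv(loc, 1, GL_FALSE, model);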

Do modern GPUs still support the fixed-function pipeline?

Modern GPUs no longer provide specialized hardware for the purpose of doing specific calculations in the OpenGL pipeline. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader which emulates the fixed functionality.

Among many others, a simple example is rendering a primitive using one function call to submit each vertex attribute separately, e.g. glVertex3f(1.f, 0.f, 0.f), inside a glBegin() and glEnd() pair. Using shaders, you have to first define all vertex attributes in a local memory buffer, create a buffer object, and then transfer the vertex attributes using glBufferData/glBufferSubData or by mapping the buffer using glMapBuffer or glMapBufferRange. Shaders will then be able to use the data in the buffer object's data store for rendering.
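For illustration, a minimal sketch (GL 1.5+) of uploading three vertex positions to a buffer object in one call, instead of submitting them vertex by vertex:

GLfloat positions[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);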

How to render in pixel space

Set up the projection matrix as follows:

  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  glOrtho(0.0, WindowWidth, 0.0, WindowHeight, -1.0, 1.0);
  //Setup modelview to identity if you don't need GL to move around objects for you
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();

Notice that the y axis goes from bottom to top because of the glOrtho call; you can swap the bottom and top parameters if you want y to go from top to bottom. Make sure you render your polygons in the right winding order so that GL doesn't cull them, or just call glDisable(GL_CULL_FACE).

Fullscreen quad

Users often ask how to render a fullscreen quad. What should the projection matrix look like?

The projection matrix should be an identity matrix. In old GL, you can call glMatrixMode(GL_PROJECTION) and glLoadIdentity() and glMatrixMode(GL_MODELVIEW) and glLoadIdentity().

In shader-based GL, the GLSL shader doesn't even need a matrix. You can just do this:

 #version 110
 void main()
 {
   gl_Position = gl_Vertex;   //Just output the incoming vertex
 }

The vertices for your quad (or 2 triangles) need to be {-1.0, -1.0, 0.0}, {1.0, -1.0, 0.0}, {1.0, 1.0, 0.0}, {-1.0, 1.0, 0.0}.

Multi indexed rendering

What this means is that each vertex attribute (position, normal, etc.) has its own index array. Neither OpenGL nor Direct3D supports this.

It is up to you, the user, to adjust your data format so that there is only one index array, which samples from multiple attribute arrays. To do this, you will need to duplicate some attribute data so that all of the attribute lists are the same size.

Quite often, this question is asked by those wanting to use the OBJ file format:

 v 1.52284 39.3701 1.01523
 v 36.7365 17.6068 1.01523
 v 12.4045 17.6068 -32.475
 and so on ...
 vn 0.137265 0.985501 -0.0997287
 vn 0.894427 0.447214 -8.16501e-08
 vn 0.276393 0.447214 -0.850651
 and so on ...
 vt 0.6 1
 vt 0.5 0.647584
 vt 0.7 0.647584
 and so on ...
 f 102/102/102 84/84/84 158/158/158
 f 158/158/158 84/84/84 83/83/83
 f 158/158/158 83/83/83 159/159/159
 and so on ...

The lines that start with an f are the faces. As you can see, each face vertex has 3 indices: one for the position, one for the texcoord, and one for the normal. In the example above, the indices in each position/texcoord/normal triplet happen to be identical, but you will also encounter cases where they are not, and you would have to expand such cases. Example:

 f 1/1/1 2/2/2 3/2/2
 f 5/5/5 6/6/6 3/4/5

So the triplets 3/2/2 and 3/4/5 are considered entirely different vertices, even though they both reference position 3.

You will need to do post-processing on OBJ files before you can use them. This means that the vertex count, the normal count, the texcoord count, and the count of whatever other attributes you have must all be the same. For example: if you have 10 vertices, then you must have 10 normals to go along with them; you can't have 10 vertices and 7 normals.

You have 3 options: allocate a separate array for each of your attributes; create a single array and interleave the vertices, normals, and texcoords; or create a single array without interleaving (for example, all the positions at the start of the array, then all the normals, then all the texcoords). Other pages on this Wiki explain this in more detail. A sketch of the expansion follows.
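Here is a hedged sketch of the expansion described above; the structures and input arrays are hypothetical placeholders, and a real loader would also deduplicate repeated triplets (e.g. with a hash map):

#include <string.h>

typedef struct { int v, vt, vn; } Triplet;          /* one corner of an "f" line */
typedef struct { float p[3], t[2], n[3]; } Vertex;  /* expanded, single-index vertex */

/* Expand multi-indexed OBJ data into one vertex array plus one index array. */
void expand_obj(const float *positions, const float *texcoords,
                const float *normals, const Triplet *corners, size_t count,
                Vertex *outVerts, unsigned *outIndices)
{
    for (size_t i = 0; i < count; ++i) {
        memcpy(outVerts[i].p, &positions[3 * corners[i].v], 3 * sizeof(float));
        memcpy(outVerts[i].t, &texcoords[2 * corners[i].vt], 2 * sizeof(float));
        memcpy(outVerts[i].n, &normals[3 * corners[i].vn], 3 * sizeof(float));
        outIndices[i] = (unsigned)i;   /* one index list shared by all attributes */
    }
}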


Drawing A Cube

This question comes up often, since a lot of users want to render cubes. As the above section explains, OpenGL supports only a single index per vertex; you cannot index each vertex attribute (position, normal, texcoord, etc.) separately.

Since a cube's faces are flat, a user would want flat shading on each face, which means having 1 normal per face. However, GL wants 1 normal per vertex. The best solution, as the above section explains, is to duplicate the vertex: once one of the attributes is different, it is considered a completely different vertex. Another alternative is to do the following:

//The following code is for GL 2.0
//BUFFER_OFFSET is the usual helper macro for byte offsets into a bound VBO
#define BUFFER_OFFSET(i) ((char *)NULL + (i))

//Render face 1
glNormal3fv(mynormal1);  //Set your face normal
glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(MyVertex), BUFFER_OFFSET(0));   //The starting point of the VBO, for the vertices
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), BUFFER_OFFSET(12));   //The starting point of texcoords, 12 bytes away
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);  //Bind the IBO
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
 
//Render face 2
glNormal3fv(mynormal2);  //Set your face normal
//We can just call glDrawElements since the rest of the setup is already done above
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, BUFFER_OFFSET(16));
 
//.... and render face 3 and 4 and 5 and 6

In the above code, the disadvantage becomes clear: we can only render 1 quad at a time, so it takes 6 calls to glDrawElements just to render a single cube. Another problem is that glNormal3fv doesn't fetch its data from a VBO.

So what are the savings? For each face, you only needed memory for 1 normal instead of 4, so your savings are 3 normals * 3 floats * 4 bytes per float * 6 faces = 216 bytes per cube.

glClear and glScissor

glScissor is one of the few functions that affects how glClear operates. If you want to clear only a region of the back buffer, call glScissor and also glEnable(GL_SCISSOR_TEST).

Alternatively, if you have used the scissor test and forgot to glDisable(GL_SCISSOR_TEST), then you might wonder why glClear isn't working the way you want it to.
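For example, a minimal sketch of clearing only a 100x100 region at the lower-left corner:

glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 100, 100);          //x, y, width, height in window coordinates
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);         //don't forget, or later clears stay clipped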

Masking

Pay attention to glColorMask, glStencilMask and glDepthMask. For example, if you disable depth writes by calling glDepthMask(GL_FALSE), then all calls to glClear will not clear the depth buffer.
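A minimal example:

glDepthMask(GL_FALSE);              //depth writes disabled
glClear(GL_DEPTH_BUFFER_BIT);       //silently does nothing to the depth buffer
glDepthMask(GL_TRUE);               //re-enable writes before clearing again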

glGetError (or "How do I check for GL errors?")

OpenGL keeps a set of error flags, and each call to glGetError() tests and clears one of those flags. When there are no more error flags set, glGetError() returns GL_NO_ERROR.

This helper function can be used to query all previously fired OpenGL errors:

int CheckGLErrors()
{
  int errCount = 0;
  for(GLenum currError = glGetError(); currError != GL_NO_ERROR; currError = glGetError())
  {
    //Do something with `currError`.
    ++errCount;
  }
 
  return errCount;
}

This requires active polling for errors. Doing so can incur a performance penalty, so it is best to limit this to debug builds where possible.

The extension ARB_debug_output provides an alternative mechanism that can offer error handling without explicit polling. The overhead for this can be significant, and it is only available if the OpenGL context was created with the CONTEXT_DEBUG_BIT_ARB flag.

This became a core feature in OpenGL 4.3, with KHR_debug. The core feature is always available, but non-debug contexts are not required to actually log messages.
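For illustration, a hedged sketch of the GL 4.3 / KHR_debug callback mechanism; for guaranteed logging, request a debug context at creation time:

void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
                            GLenum severity, GLsizei length,
                            const GLchar *message, const void *userParam)
{
    fprintf(stderr, "GL debug: %s\n", message);
}

//During initialization:
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);  //report errors on the offending call
glDebugMessageCallback(DebugCallback, NULL);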

What 3D file format should I use?

Newcomers often wonder which 3D file format to use for their project's mesh data.

OpenGL does not load files; therefore, you can use any mesh format you wish. This also means that you must provide the appropriate loading code yourself; OpenGL won't help you.

There are several file format alternatives, with different capabilities. All of these formats (and more) can be loaded by the Open Asset Import library.

Wavefront .obj
This is a simple text format for mesh data. Each .obj file holds a single mesh. Obj files can reference material files, stored in the less-frequently-used .mtl format. Meshes in this format can only contain positions, normals, and a single optional set of texture coordinates.
Autodesk .3ds
This is a binary mesh format. This format contains materials and can store multiple named meshes in a single file.
Quake 2 .md2 and Quake 3 .md3
These are binary mesh formats. The formats do not contain material information, and they only technically store a single mesh. They do have support for keyframe animation, so a single mesh file would contain all of the animation keyframes, as well as animation data.
COLLADA
This is an XML-based mesh file format. It can store pretty much anything; it is primarily used for document exchange between different 3D modelling packages.

Memory Usage

It seems to be common to think that there is a memory leak in the OpenGL driver. Some users write simple programs such as this:

  glClear(...);
  SwapBuffers(...);

and they observe that their memory usage goes up each time their display function is called. That is normal. The driver might allocate some memory space, do optimization work in a secondary thread, or prepare some buffering area; since the driver is basically a black box, we don't know exactly what it is doing, but it is not a memory leak.

Some users call glDeleteTextures or glDeleteLists or one of the other delete functions and notice that memory usage doesn't go down. You can't do anything about that: the driver does its own memory management and may choose not to deallocate for the time being. This is not a memory leak either.

Who manages memory? How does OpenGL manage memory?

Graphics cards have limited memory; if you exceed it by allocating many buffer objects, textures, and other GL resources, the driver can store some of them in system RAM. As you use those resources, the driver can swap them in and out of VRAM as needed. Of course, this slows down rendering. The driver's RAM storage is also limited, and glGetError() might return GL_OUT_OF_MEMORY when it runs out. It might even return GL_OUT_OF_MEMORY if you have plenty of VRAM and RAM available but try to allocate a really large buffer object that the driver doesn't like.

The purpose of this section is to answer those who want to know what happens when they allocate resources and the video card runs out of VRAM. This behavior is not documented in the GL specification because it doesn't concern itself with system resources and system design. System design can differ and GL tries to remain system neutral. Some systems don't have a video card. Some systems have an integrated CPU/GPU with shared RAM.

Should I use display lists, vertex arrays or vertex buffer objects?

Display lists and vertex arrays have been with GL since the beginning. Vertex buffer objects were introduced with GL 1.5. Although they can all be used to render primitives, they are not the same and have distinct properties:

  • A display list is a series of well-defined GL commands which may be optimized in terms of execution and data transfer to video memory when the list is created. Commands and data in the list are stored in server or video memory and are retrieved and executed when the list is called. For instance, specifying vertex attributes using a vertex array will likely cause the GL to store the data in video memory for fast access. Display lists are static, so mapping display lists to dynamically changing data is not possible. The advantage is that execution is generally very fast in comparison to some alternatives.
  • Vertex arrays allow for vertex specification in one shot using an array of attributes. This avoids immediate mode constructs, which pass attributes vertex by vertex and thus lead to a high number of API calls and to data being transferred attribute by attribute. Vertex arrays allow for dynamic vertex specification. However, since the data is stored in client memory, i.e. system memory managed by the application, it has to be transferred to the GL every time it is used.
  • Vertex buffer objects are, if the GL can do so, stored in video memory, just like display lists. Thus you avoid having to re-transfer the data every frame. However, unlike display lists, you can dynamically update vertex buffer objects. You also get the advantage of the very low number of API calls needed to actually render.

Both display lists and vertex arrays are legacy OpenGL features. Vertex specification is now mostly done with vertex buffer objects and vertex array objects in modern OpenGL. As general advice, you should refrain from using legacy constructs in new OpenGL applications.

If and only if there is a requirement to maintain existing legacy code and there is no way to rewrite the rendering system, using display lists can be quite advantageous in comparison to vertex arrays for static data. If dynamic updates are required, there is no way around vertex arrays. If immediate mode constructs like glBegin() are found, they can be replaced by either display lists or vertex arrays to improve performance in many real cases, where the number of vertices is usually hundreds or thousands per object.
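For comparison, here is a hedged sketch of modern (GL 3.0+) vertex specification, where a VAO records the buffer layout once and rendering becomes a bind plus a single draw call; verts stands in for your actual vertex data:

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);   //attribute 0: position, 3 floats per vertex
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void *)0);

//Later, each frame:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);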

What does Unresolved External Symbol mean?

Some newcomers try to compile their GL program and get linker errors such as:

  error LNK2001: unresolved external symbol _glBegin

and similar linker errors related to other GL functions and perhaps GLU functions and other functions from other libraries.

The example given above is specific to Microsoft Visual C++ but you can get linker errors from other linkers as well. In order for the linker to do its job, it needs to know which library file it should search.

For VC++ 2010, you could click on Project in the menu and select Properties. In the properties dialog box, on the left side, expand Configuration Properties, then Linker, and click on Input. On the right side there is Additional Dependencies. Type the name of the library file, followed by a semicolon. For OpenGL, it would be opengl32.lib. For GLU, it would be glu32.lib.

Alternatively, you can add these lines:

   #pragma comment(lib, "opengl32.lib")
   #pragma comment(lib, "glu32.lib")

to your .cpp files, which will force the libraries to be included. These only work on compilers that support this use of #pragma.

Obviously, we can't list what you need to do for each IDE. You need to search the internet or your manuals. You need to know how to use your IDE and your particular programming language.

In the case of gcc, a user types this command on the CLI:

  gcc myprogram.c -o myprogram -lGL -lglut

and the CLI shows

 /tmp/ccCQkTKm.o:myprogram.c:function display: error: undefined reference to 'gluLookAt'

That's because you are using GLU but did not link against the GLU library. (Note that the libraries are listed after the source file; some linkers resolve symbols left to right and will fail otherwise.) The command you should type is:

  gcc myprogram.c -o myprogram -lGL -lGLU -lglut

What does Not Declared In This Scope mean?

Some newcomers try to compile their GL program and get compile errors such as:

  GL_TEXTURE_3D was not declared in this scope

This happens because the compiler has no idea what GL_TEXTURE_3D is since it was not declared anywhere.

You probably have not included a proper OpenGL header. gl.h may not include all of the stuff you need. You should use an OpenGL Loading Library; otherwise, you will have to use glext.h, found in the OpenGL Registry. This also means you need to manually load OpenGL functions.
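For example, on Windows the stock gl.h only covers GL 1.1, so post-1.1 enums come from glext.h:

#include <GL/gl.h>
#include <GL/glext.h>   //defines GL_TEXTURE_3D and other post-1.1 enums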

Why limit to 8 lights?

The OpenGL fixed-function pipeline has the concept of a maximum number of lights to render with. Shaders do not have any built-in lights; with shaders you may render any number of lights, but you must build that framework yourself with the tools shaders give you.

Within fixed-function OpenGL, it is often asked why GL supports a maximum of 8 lights. That is not true: GL doesn't impose a maximum of 8 lights; it imposes a minimum of 8. Your driver/GPU combination is allowed to support more than 8, but most limit themselves to 8. The reason is that additional lights stop being noticeable once 3 or 4 lights shine on the same surface, so 8 is in fact a generous number.
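Rather than assuming 8, you can query the actual limit:

GLint maxLights = 0;
glGetIntegerv(GL_MAX_LIGHTS, &maxLights);   //at least 8 on any conformant implementation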

For example, if you have a city with street lights, you probably need more than 8 street lights. The solution is to subdivide your city streets into sections where only 3 or 4 lights affect each surface. It is up to you, the artist, to make it look good.

For example, if you are doing particle effects where each particle is a light source and you have perhaps 1000 particles, that is an insane number of lights for a real-time renderer on old hardware (for example, hardware that supports GL 1.5). You can often get away with a single light for the entire group of particles.

If you really do want more than 8 lights, you can do it with a multipass approach: render your object with the first 8 lights on, then enable additive blending (glBlendFunc(GL_ONE, GL_ONE)) and render your object again with the next set of lights. You might want to set your depth test to GL_LEQUAL.
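For illustration, a hedged sketch of that multipass approach; DrawScene() and SetLights() are hypothetical helpers that issue the geometry and configure up to 8 fixed-function lights:

SetLights(0, 8);                //lights 0..7
DrawScene();

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);    //additive blending
glDepthFunc(GL_LEQUAL);         //let fragments at equal depth pass
SetLights(8, 8);                //lights 8..15
DrawScene();
glDisable(GL_BLEND);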

Now let's look at it from a different perspective. Did old games use lights? Actually, they did not. Many old games used light maps for static surfaces; for moving objects, they computed the lighting themselves or precomputed it (e.g. light volumes).

Can I precompile my shaders?

GL 4.1 adds the ability to compile a shader and then download the resulting binary from the GL driver. However, these binaries are GPU- and driver-specific: there is no guarantee they will work on other GPUs, or survive a driver upgrade or downgrade. The purpose of this feature is to compile once and store the result on disk for future runs, avoiding the compile and link time on subsequent launches of your application. A driver update may invalidate these binaries, forcing you to recompile them from source.

If you ship your application with precompiled shaders, you have to include the shaders in source form as well, so that you can compile them the usual way if loading the precompiled files fails.

Note that the extension to do this, ARB_get_program_binary, is widely available on pre-4.1 hardware. So you do not need 4.x-class hardware to use it.
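For illustration, a hedged sketch of the save/restore cycle, assuming program is an already-linked program object; reading and writing the blob to disk is omitted:

GLint len = 0;
glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);

GLenum format = 0;
void *binary = malloc(len);
glGetProgramBinary(program, len, NULL, &format, binary);
//...write binary, len and format to disk...

//On a later run, try the cached binary first:
glProgramBinary(program, format, binary, len);
GLint ok = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (!ok)
{
    //Binary was rejected (e.g. after a driver update): recompile from source.
}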

Many Small 2D Textures

This technique is also called a texture atlas.

Some users ask about packing many smaller 2D textures into one big 2D texture (1024 x 1024 or larger). This way, you can avoid calls to glBindTexture and perhaps gain some performance. Yes, you can do that, but you have to adjust the texture coordinates on your models accordingly. You also have to watch out for texture filtering, because a linear filter can cause texel bleed (a neighboring sub-texture gets sampled).
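For illustration, a minimal sketch of remapping a sub-texture's local (u,v) into atlas coordinates, where (x, y, w, h) is the sub-image's rectangle in pixels and atlasW x atlasH is the atlas size (all names are placeholders):

float atlasU = (x + localU * w) / (float)atlasW;
float atlasV = (y + localV * h) / (float)atlasH;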

Another solution is to put those small textures in one 3D texture (GL_TEXTURE_3D, part of GL 1.2 and above), as long as all the 2D textures are the same size. The texture filtering problem is still present in this case if you use linear filtering: there is no problem in the S and T directions, but bleeding occurs between the texture layers.

Another solution is to use a 2D texture array (GL_TEXTURE_2D_ARRAY, part of GL 3.0 and above). This solves the inter-layer filtering problem of the 3D texture approach, because array layers are filtered independently; however, all the layers must still be the same size.

Font Rendering and Text Rendering

GL doesn't render text, because GL is a low-level library. It handles the basics: rendering points, lines, triangles, and whatever other primitives might be introduced in the future. For rendering text, you either need a 3rd party library or you must do it yourself.

One of the simplest methods is to create a texture with all the characters on it. Then, render many quads on your screen and texture map the characters.

You could also have a texture with full sentences. Then, render a quad and texture it.

You could also use the features of your OS to get access to its fonts, render them into a texture, and then texture map some quads.

Windows offers certain functions for text rendering but these are old and should probably not be used these days. See wglUseFontBitmaps and wglUseFontOutlines.

The old FAQ explains the above well. It also has links to 3rd party libraries. http://www.opengl.org/archives/resources/faq/technical/#indx0170

What is GLU?

GLU stands for OpenGL Utility Library and provides convenience functions layered on legacy OpenGL features. Popular examples of GLU functionality are gluLookAt() and gluPerspective(), which ease the process of calculating a view and projection matrix to be used during rendering. However, both functions use, or are meant to be used with, a legacy feature called the matrix stack and its corresponding matrix manipulation functions, which makes them unsuitable for modern OpenGL applications that don't intend to use deprecated or removed functionality.

If you're writing an application that uses modern OpenGL, i.e. core OpenGL 3.0 or higher, it is recommended to either write your own code or use a 3rd party library that isn't layered on OpenGL at all, or that uses only OpenGL 3.0+ core features. A list of suitable 3rd party libraries can be found here.
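For illustration, a hedged gluPerspective-style replacement that builds a column-major projection matrix suitable for glUniformMatrix4fv:

#include <math.h>

void Perspective(float fovyDeg, float aspect, float zNear, float zFar, float out[16])
{
    float f = 1.0f / tanf(fovyDeg * 3.14159265f / 360.0f);  //cot(fovy/2)
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zFar + zNear) / (zNear - zFar);
    out[11] = -1.0f;
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}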

If you still plan on using GLU, you should know that Windows provides glu32.dll, which implements GLU version 1.2; your compiler should come with glu32.lib or glu32.a. The latest version of GLU is 1.3. You can download the package that Mesa3D provides (http://www.mesa3d.org); inside, you will find the source code for GLU 1.3, which you must compile yourself. Remember that you must not replace Microsoft's glu32.dll: it is considered a system file, and you must never overwrite system files.

