Offscreen rendering

What is offscreen rendering?
When creating a window in OpenGL, the programmer renders the graphics to the window's client area, so the result is visible immediately after swapping the buffers. This approach is called onscreen rendering.

Offscreen rendering is where you don't draw directly to the screen, but to memory (or, ultimately, files) instead. The most straightforward way to implement offscreen rendering is with Frame Buffer Objects, known to friends and family simply as FBOs.

The advantages of this approach are:

  • it is possible to render images at a bigger resolution than the display supports
  • the rendered image can be attached directly to a texture


The disadvantages are:

  • it is available only on OpenGL implementations that support the GL_EXT_framebuffer_object extension (or OpenGL 3.0+, where FBOs are core)
  • it can eat a lot of graphics memory, depending on the resolution

To generate output in a file, you basically set up OpenGL to render to a texture (or renderbuffer), then read the resulting pixels back into main memory and save them to a file. At least on some systems (e.g., Windows), I'm pretty sure you'll still have to create a window and associate the rendering context with it, though it should be fine if the window stays hidden.


Four options:

  • render to the backbuffer (the default render target)
  • render to a texture
  • render to a Framebuffer object (FBO)
  • render to a Pixelbuffer object (PBO)

then read the pixels with glReadPixels and write them to a file.

Framebuffer and Pixelbuffer objects are better suited than the backbuffer and textures, since they are designed for data to be read back to the CPU, while the backbuffer and textures are meant to stay on the GPU and be shown on screen.

As discussed at: What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?, a PBO is for asynchronous transfers; for most simulations this will not be necessary, so we use an FBO.

As other answers pointed out, it is not possible to avoid opening a window, so open a 1x1 window and hide it with glutHideWindow.

Next use glReadPixels as shown at: glReadPixels() "data" argument usage?

Finally find some library that transforms the pixels you read into your desired file format.
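For quick debugging you don't strictly need a library: the binary PPM format can be written in a few lines of C++. A minimal sketch follows (writePpm is my own hypothetical helper, assuming tightly packed GL_RGB / GL_UNSIGNED_BYTE pixels from glReadPixels); the resulting .ppm converts easily to PNG with tools like ImageMagick:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Hypothetical helper: dump an RGB pixel buffer to a binary PPM (P6)
// file. Assumes tightly packed GL_RGB / GL_UNSIGNED_BYTE data as
// returned by glReadPixels.
void writePpm(const char* path, const std::vector<std::uint8_t>& rgb,
              int width, int height) {
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    // glReadPixels returns the bottom row first; PPM wants the top
    // row first, so write the rows in reverse order.
    for (int y = height - 1; y >= 0; --y)
        out.write(reinterpret_cast<const char*>(&rgb[y * width * 3]),
                  width * 3);
}
```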

The following pseudo-code shows the general organization, arranged so that output does not stall your physics calculations any longer than necessary:

void init(int argc, char** argv)  {
    // MUST initialize window BEFORE framebuffer!
    // `GLUT_SINGLE` since user does not see output
    glutInitDisplayMode( GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH );
    glutInitWindowSize(1, 1);
    glutCreateWindow(argv[0]);
    glutHideWindow();

    GLuint fbo, rboColor, rboDepth;

    // Color renderbuffer.
    glGenRenderbuffers(1, &rboColor);
    glBindRenderbuffer(GL_RENDERBUFFER, rboColor);
    // Set storage for the currently bound renderbuffer.
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

    // Depth renderbuffer
    glGenRenderbuffers(1, &rboDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, rboDepth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    // Framebuffer
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER,fbo);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rboColor);
    // Set renderbuffers for currently bound framebuffer
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboDepth);

    // Set to write to the framebuffer.
    glBindFramebuffer(GL_FRAMEBUFFER,fbo);

    // Tell glReadPixels where to read from.
    glReadBuffer(GL_COLOR_ATTACHMENT0);

    // init the rest of OpenGL, lights, etc.

    // init physical model
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glutSolidSphere(1.0, 20, 20);

    // Continue drawing. 

    // Makes sure scene is rendered
    // and put pixels on the backbuffer
    // which is not shown on the window.
    // window shows the frontbuffer
    //
    // glutSwapBuffers() would wait for the screen refresh,
    // and put backbuffer on the frontbuffer where it
    // would be visible on screen        
    glFlush();

    // data will now contain the pixels.
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);

    // Not in OpenGL: use some library to convert to the format.
    writePixelsToFile(data);
}

void idle() {
    bool outputNow = false;
    while(!outputNow) {
        // Update the physical system.
        updatePhysicalCalculations();
        outputNow = doOutputNow();
    }
    // Do the next display.
    glutPostRedisplay();
}

int main(int argc, char** argv) {
    init(argc, argv);
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
}

I've got a full working example in my GitHub C++ cheat repo. Clone the repo, go into the directory, and run make run (assuming GLUT/OpenGL is already installed). Now change the value of offscreen in the source file to true/false: if true, it outputs pixels to stdout faster than 60 FPS; if false, it outputs to the screen and stays at 60 FPS.


The Frame Buffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which, in their turn, are the actual buffers. You can think of the Frame Buffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Frame Buffer object has a very low memory footprint.
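The analogy can be made concrete with a purely illustrative C struct (not a real OpenGL type; the member names are made up):

```c
/* Purely illustrative: NOT a real OpenGL type. Each member is just a
 * handle naming an attachment; the actual buffers (renderbuffers or
 * textures) live elsewhere, on the GPU. An "empty" FBO is only this
 * small record of handles, hence the very low memory footprint. */
struct ConceptualFramebuffer {
    unsigned int color_attachment0; /* renderbuffer or texture handle */
    unsigned int depth_attachment;
    unsigned int stencil_attachment;
};
```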

Now each buffer attached to a Frame Buffer can be a Render Buffer or a texture.

The Render Buffer is an actual buffer (an array of bytes, integers, or pixels). It stores pixel values in native format, so it's optimized for offscreen rendering; in other words, drawing to a Render Buffer can be much faster than drawing to a texture. The drawback is that the pixels use a native, implementation-dependent format, so reading from a Render Buffer is much harder than reading from a texture. Nevertheless, once a Render Buffer has been painted, one can copy its content directly to the screen (or to another Render Buffer, I guess) very quickly using pixel transfer operations. This means that a Render Buffer can be used to efficiently implement the double-buffer pattern that you mentioned.

Render Buffers are a relatively new concept. Before them, a Frame Buffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw a scene on a surface of another scene!
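For completeness, rendering to a texture replaces the Render Buffer attachment with glFramebufferTexture2D. A sketch in the same pseudo-code style as the other snippets on this page (width, height and error checking omitted):

GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Create an empty texture to draw into.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Attach the texture where a Render Buffer would otherwise go.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// Draw the first pass here, then bind tex as a normal texture
// for the second pass.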

The OpenGL wiki has this page that shows more details and links.


It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.

So a very basic offscreen rendering method would look something like the following. I use C++-style pseudo-code, so it will likely contain errors, but it should make the general flow clear:

//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);

This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.

There are a few drawbacks with this. First of all, we aren't really doing offscreen rendering, are we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping in the back buffer, but it doesn't feel right. Besides that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.

Essentially, an FBO lets you create a framebuffer other than the default one (the FRONT and BACK buffers), allowing you to draw to a memory buffer instead of the screen buffers. In practice, you can draw either to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render and read back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped or forgot some statements.

//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);

//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);

//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You might also want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.

Finally, you can use Pixel Buffer Objects to make the pixel read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:

//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);

//Deinit:
glDeleteBuffers(1,&pbo);

//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);

The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
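One common way to get that overlap in practice is to ping-pong between two PBOs: each frame you start an asynchronous read into one buffer while mapping the other, which still holds the previous frame's pixels. A sketch in the same pseudo-code style (processPixels is a hypothetical consumer; both PBOs are created with glBufferData as above):

GLuint pbo[2]; // both created with glBufferData as above
int index = 0;

// Each frame:
index = (index + 1) % 2;
int next = (index + 1) % 2;

// Start the asynchronous transfer of the current frame...
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

// ...and map the other PBO, which holds the previous frame's pixels;
// by now that transfer has probably finished, so the map should not stall.
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
void* prev = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (prev) {
    processPixels(prev); // hypothetical consumer
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}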

Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
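If you'd rather not hard-code GL_BGRA, you can ask the implementation for its preferred read-back format. A sketch (this query is available in OpenGL ES and in desktop OpenGL 4.1+):

GLint fmt, type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
// Pass fmt/type to glReadPixels instead of GL_BGRA/GL_UNSIGNED_BYTE.
glReadPixels(0, 0, width, height, fmt, type, 0);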

When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; in that case, just use GL_FRAMEBUFFER.


Reference links:

http://stackoverflow.com/questions/12157646/how-to-render-offscreen-on-opengl

http://stackoverflow.com/questions/3191978/how-to-use-glut-opengl-to-render-to-a-file

http://www.opengl.org/wiki/FBO

http://www.opengl.org/wiki/Framebuffer_Object_Examples

http://www.swiftless.com/tutorials/opengl/framebuffer.html

http://www.thinbasic.com/community/content.php?31-Beyond-TBGL-Offscreen-rendering-why-and-how

http://www.lighthouse3d.com/tutorials/opengl-short-tutorials/opengl_framebuffer_objects/
