Deferred rendering (or deferred shading) is an interesting and ingenious technique that, unlike forward rendering, postpones the lighting computation to the end of the rendering pipeline and performs it in image space, much like a post-processing effect. The underlying idea is simple: if a pixel never reaches the screen, why waste time computing its shading? By the time fragments reach image space they have already passed every test, so we shade only the pixels that actually end up on screen and can ignore lighting entirely while rendering the geometry. On the other hand, the technique is troublesome when it comes to transparency, and it is very slow (or does not work at all) on older hardware.

Since I wanted to implement this technique in Psyche, I decided to keep track of my progress and turn it into this simple tutorial. I won't discuss the data structures used to integrate deferred rendering transparently into Psyche; we will focus only on how to implement the technique in a simple, direct and clear way. This tutorial explains how to implement deferred rendering on top of the Psyche engine. I hope the whole thing is clear, simple and straight to the point. Deferred rendering is a fairly advanced topic, so it requires a good knowledge of both C++ and OpenGL; I take it for granted that you know both well enough, which is why I won't explain every single line of code. The online version of the engine already provides all the structures used for texture rendering and filtering, so there is no need to rewrite them. If you would rather rebuild the example from (almost) scratch, create a main class that respects the structure of the existing main classes and then follow the steps. The finished version of this example is also available in the folder that contains all the main classes (the Psyche engine is required).
1. Let's start by creating a small scene to work on for this deferred rendering tutorial. The objects are created in the initialization code:

    try{

remembering to declare them in the main class:

    RenderObject *athenaHead;

Inside the method doAnimation() we tell the camera to swing a little:

    camera.setPosition(465, 128, FastCos(Time::time()*0.01f)*300 );

We then ask the program to render the objects by invoking the related methods inside doRender():

    athenaHead->render();

We notify the system that these objects must be updated at the end of the synchronization procedure, so we put the corresponding calls inside applyChanges(), right above the camera's update call:

    athenaHead->applyChanges();

Lastly, we release everything inside the dedicated function releaseGLEntities():

    DataObjectManager::getInstance()->deleteObject(athenaHead);

If we compile and run the code now, we see a simple scene showing a statue, a semi-cylindrical wall and a floor. What we are looking at is classical OpenGL forward rendering.
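To make the structure of this step clearer, here is a minimal sketch of how those hooks might fit together in the main class. The loading call, the Camera type and the initGLEntities() name are assumptions, not necessarily Psyche's real API:

    // Hypothetical sketch of the main-class hooks described in step 1.
    // DataObjectManager::loadObject, the Camera type and initGLEntities()
    // are assumed names, not necessarily Psyche's real API.
    class DeferredRenderingTest {
        RenderObject *athenaHead;      // scene objects declared as fields
        Camera camera;

        void initGLEntities(){
            try{
                athenaHead = DataObjectManager::getInstance()->loadObject("athena.obj");
                // ...the wall and floor objects would be loaded the same way
            }catch(...){ /* report loading errors */ }
        }

        void doAnimation(){
            // make the camera swing a little along the z axis
            camera.setPosition(465, 128, FastCos(Time::time()*0.01f)*300 );
        }

        void doRender(){
            athenaHead->render();          // draw the scene objects
        }

        void applyChanges(){
            athenaHead->applyChanges();    // update the objects...
            camera.applyChanges();         // ...right above the camera update (name assumed)
        }

        void releaseGLEntities(){
            DataObjectManager::getInstance()->deleteObject(athenaHead);
        }
    };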
2. Now that the scene is in place we can start writing the code for the algorithm itself. Deferred rendering requires us to capture at least the positions, normals and diffuse components of the scene and to store them in textures, which usually have the same size as the screen. These textures are then used to compute the lighting of every pixel, and the result is sent to the screen. This is not always straightforward; on most older video cards, for example, it is impossible to write data into textures with different channel depths in a single render pass. To work around this problem, the first deferred shading implementations packed the information so as to exploit every single bit available; in the "Deferred Shading" paper, for instance, the authors used the depth information to recompute the vertex positions (a sketch of this trick is shown below) and stored some material information in the normal texture's alpha channel. Let's start. First of all we have to create the right files and folders.
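Here is that sketch: reconstructing the view-space position from the depth buffer instead of storing it in a dedicated render target. It is purely illustrative, the uniform names and the inverse-projection unproject are my own assumptions, and we will not use this trick in the tutorial since we store positions explicitly:

    // Illustrative only: recover the view-space position from the depth buffer.
    uniform sampler2D tDepth;        // depth texture (assumed name)
    uniform mat4 invProjection;      // inverse of the projection matrix (assumed name)

    vec3 positionFromDepth(vec2 uv)
    {
        float depth = texture2D(tDepth, uv).r;          // depth stored in [0,1]
        vec4 ndc = vec4(uv, depth, 1.0) * 2.0 - 1.0;    // back to NDC, w stays 1
        vec4 viewPos = invProjection * ndc;             // unproject
        return viewPos.xyz / viewPos.w;                 // perspective divide
    }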
We are going to use multiple render targets (MRT) to avoid transforming the geometry more than once. To this end we can use OpenGL's built-in gl_FragData array to let the fragment shader write into each of the different render targets. We will also need to set up the textures that hold this data, but we will take care of that after writing our shaders. There is not much to explain here: the vertex shader transforms the geometry and passes normals and positions to the fragment shader. One important detail is that we have two different fragment shaders, one that handles normal mapping and one "plain".

Vertex Shader:

    varying vec4 normals;

Fragment Shader (with normal mapping):

    varying vec4 position;

Fragment Shader (without normal mapping):

    varying vec4 position;

We have made a lot of headway. However, if we compile and run the program at this point we won't see any change to the scene, because we have not yet linked the shaders to the scene's objects. And even if they were linked, we would still see the scene without any lighting, since by default the front buffer shows the content of gl_FragData[0]. We need to create a filter that processes the MRT textures before we can see our deferred shading at work.
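To give a concrete idea of this geometry pass, here is a minimal sketch of the "plain" vertex/fragment pair writing into gl_FragData. The varying names follow the fragments above, while the diffuse sampler name and the eye-space convention are assumptions and do not necessarily match the shaders shipped with Psyche:

    // --- vertex shader (sketch) ---
    varying vec4 position;   // eye-space position handed to the fragment shader
    varying vec4 normals;    // eye-space normal

    void main( void )
    {
        position = gl_ModelViewMatrix * gl_Vertex;
        normals  = vec4(gl_NormalMatrix * gl_Normal, 0.0);
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // --- fragment shader, "plain" version (sketch) ---
    varying vec4 position;
    varying vec4 normals;
    uniform sampler2D tDiffuse;      // object's diffuse texture (assumed name)

    void main( void )
    {
        gl_FragData[0] = texture2D(tDiffuse, gl_TexCoord[0].st);  // diffuse -> target 0
        gl_FragData[1] = position;                                // position -> target 1
        gl_FragData[2] = vec4(normalize(normals.xyz), 1.0);       // normal -> target 2
    }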
3. Psyche provides an abstract class, called RenderTexture, used for every procedure that requires rendering to a texture. What we want is to extend this class with a derived class that handles multiple render target textures (MRT). Such a class could be written from scratch, but Psyche already contains an implementation: the FBODeferredShadingBuffer class was written specifically to handle three textures at the same time (via MRT), and it is easily extendable with additional textures. Let's analyze the main part of this class, the constructor:

    FBODeferredShadingBuffer::FBODeferredShadingBuffer(int _dWidth, int _dHeight){

This class uses an FBO (Framebuffer Object) for rendering to texture, and the following lines generate and bind it:

    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, diffuseBuffer);

In the four code blocks we defined and attached four render targets (render buffers). In detail, we use an RGB buffer with 8 bits per channel for the diffuse component of the deferred pass, an HDR texture with 32 bits per channel for the position component, while 16 bits per channel are enough for the normals. The depth component is needed by the z-buffer. Now we have to create the actual textures and bind them to the render buffers:

    glGenTextures(1, &dwTexture);

In each of the three blocks we create and bind the texture in the first lines, then we specify the texture's format and parameters, and finally we define the connection to the render target. The texture is then stored in a vector for any future need. Another detail worth mentioning concerns the method that starts the render-to-texture process. To specify the mapping between the elements of gl_FragData and the render buffers we added a few lines, as shown (the grey parts are less important):

    void FBODeferredShadingBuffer::start(){

With the arrangement shown above we attached the diffuse component at slot 0, the position component at slot 1 and the normal component at slot 2. Now let's create the texture in the main class and check that we are actually capturing all the information in the buffers we set up. It is important to remember to link the shaders to the objects, too. Here's the code:

    try{

Notice that the athenaHead object has no bump mapping, so it is linked to the fragment shader that does not perform any bump computation. Let's test our work by rendering the content of the texture to the screen, to see that everything is behaving as expected:

    void DeferredRenderingTest::doRender(){

Ok, we are ready. Let's compile and execute the code.
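As a rough guide, an MRT framebuffer with the formats described above is typically built along these lines. This is a generic EXT-framebuffer sketch, not the actual source of FBODeferredShadingBuffer; the handle names and the createGBuffer/startGBuffer functions are my own:

    // Generic sketch of an MRT FBO: diffuse (RGB8), position (RGBA32F),
    // normal (RGBA16F) plus a depth renderbuffer. Not Psyche's actual code.
    #include <GL/glew.h>

    GLuint fbo, depthBuffer, dwTexture, posTexture, normTexture;

    void createGBuffer(int width, int height)
    {
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

        // depth renderbuffer for the z-buffer
        glGenRenderbuffersEXT(1, &depthBuffer);
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, depthBuffer);

        // create each texture and attach it to its color attachment
        struct { GLuint *tex; GLint format; GLenum attachment; } targets[] = {
            { &dwTexture,   GL_RGB8,        GL_COLOR_ATTACHMENT0_EXT },  // diffuse
            { &posTexture,  GL_RGBA32F_ARB, GL_COLOR_ATTACHMENT1_EXT },  // position (HDR)
            { &normTexture, GL_RGBA16F_ARB, GL_COLOR_ATTACHMENT2_EXT },  // normal
        };
        for (auto &t : targets) {
            glGenTextures(1, t.tex);
            glBindTexture(GL_TEXTURE_2D, *t.tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, t.format, width, height, 0,
                         GL_RGBA, GL_FLOAT, NULL);
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, t.attachment,
                                      GL_TEXTURE_2D, *t.tex, 0);
        }
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }

    // start(): bind the FBO and map gl_FragData[0..2] to the three attachments
    void startGBuffer()
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT,
                             GL_COLOR_ATTACHMENT1_EXT,
                             GL_COLOR_ATTACHMENT2_EXT };
        glDrawBuffers(3, buffers);
    }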
4. We have finally reached the last stage: the lighting computation. We now have three textures with all the information we need. In this tutorial we ignore material properties and light every surface with Blinn's algorithm, hard-coding the equation's parameters directly in the shader. A simple and elegant way to build what we need is the ImageFilter class from Psyche. Every object instantiated from this class takes as input one or more textures that are then processed by a chain of filters. In our case we provide the FBODeferredShadingBuffer and add a filter for the lighting computation. It would be possible to add further post-processing steps, but in this tutorial we restrict ourselves to the deferred rendering filter. First we create the filter's shaders inside the folder "Psyche / shaders / DeferredRendering", with the names "deferredRendering.vert" and "deferredRendering.frag", and write the following code:

Vertex Shader:

    void main( void )

Fragment Shader:

    uniform sampler2D tImage0;

The vertex shader speaks for itself! The fragment shader is a bit larger (a sketch of such a pair is given at the end of this step). The idea is that it receives from the program the three textures as well as the camera position; then it reads the values from the textures and applies Blinn's algorithm for local lighting. The light's position is hard-coded in the shader; at this stage we don't handle dynamic lights. In the second part of this tutorial we will see how to read a set of lights from a texture containing their colors and intensities.

We are almost done. We just have to run these shaders and render the result to the screen. To do this we extend the class FilterOperator with a new class, which we call DeferredShadingOperator, overriding the method void DeferredShadingOperator::operation(). Let's see the class's code:

    class DeferredShadingOperator : public FilterOperator{

Very easy, isn't it? It's important to keep it as simple as possible; whatever you are designing, just remember to keep it simple. An important but invisible detail is that we inherit from FilterOperator a vector of RenderTexture elements; it is the texture source. In our case the vector holds a single RenderTexture that contains several render buffers, but this structure allows us to read from several RenderTextures if needed. Let's see the constructor:

    DeferredShadingOperator::DeferredShadingOperator() : FilterOperator(new BasicRenderTexture(shWIDTH,shHEIGHT)){

The constructor loads the shader from the files we created and sets the uniform variables. Nothing new. The code of operation() is also very simple:

    void DeferredShadingOperator::operation(){

The operation() method captures the result of the operation and stores it in a texture. This texture will then be handed to the next filter for its own computations, so it is important to start the render-to-texture procedure and save the result into it. Moreover, if we didn't render to the texture we would render to the front buffer, and we don't want that to happen.

    //Set matrices

OpenGL calls are used to set the projection and modelview matrices. We disable the lighting system because we don't want any fixed-function lighting on the screen-aligned quad, then we enable the shader and send the current camera position to it.

    glActiveTextureARB(GL_TEXTURE0_ARB);

With the previous code we send all the textures to the shader, as usual.
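The texture-binding block just mentioned typically looks like the following. This is a generic sketch: tImage1, tImage2, cameraPosition and the handle names are assumptions (only tImage0 appears in the shader above), and it assumes the shader has already been enabled:

    // Sketch: feed the three G-buffer textures and the camera position
    // to the lighting shader. Uniform and handle names are assumptions.
    #include <GL/glew.h>

    void bindGBufferTextures(GLhandleARB shaderProgram,
                             GLuint diffuseTex, GLuint positionTex, GLuint normalTex,
                             const float camPos[3])
    {
        glActiveTextureARB(GL_TEXTURE0_ARB);
        glBindTexture(GL_TEXTURE_2D, diffuseTex);    // written through gl_FragData[0]
        glUniform1iARB(glGetUniformLocationARB(shaderProgram, "tImage0"), 0);

        glActiveTextureARB(GL_TEXTURE1_ARB);
        glBindTexture(GL_TEXTURE_2D, positionTex);   // written through gl_FragData[1]
        glUniform1iARB(glGetUniformLocationARB(shaderProgram, "tImage1"), 1);

        glActiveTextureARB(GL_TEXTURE2_ARB);
        glBindTexture(GL_TEXTURE_2D, normalTex);     // written through gl_FragData[2]
        glUniform1iARB(glGetUniformLocationARB(shaderProgram, "tImage2"), 2);

        // camera position, needed by the specular term of Blinn's model
        glUniform3fARB(glGetUniformLocationARB(shaderProgram, "cameraPosition"),
                       camPos[0], camPos[1], camPos[2]);
    }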
We retrieve the RenderTexture (which we know to be of type FBODeferredShadingBuffer) from the previous filtering step and read the textures it contains from the render buffers. Then we hand the textures to the shader, paying attention to send them in the correct order.

    glColor4f(1,1,1,1);

The remaining code renders the quad and, when we are done, turns off the shader and all the associated parameters. The state is also restored to what it was before the function began, and the system is told to stop rendering to texture. In Psyche, filters are built by a dedicated builder class, so we must add a method to it for our new filter. Its code is almost identical to that of the other methods of the builder:

    PRESULT ImageFilterBuilder::addDeferredShadingFilter(){

We are ready to wire everything up in the main class. Here is the code of the initialization and rendering functions:

    try{

The object screenFilter is of type ImageFilter* and is a field of the main class. It contains a source texture (our FBODeferredShadingBuffer), a filter that actually computes the deferred shading, and an operator that sends the result of the filter chain to the screen. Rendering is done as follows:

    void DeferredRenderingTest::doRender(){

Lastly, we add the missing objects to the release method:

    void DeferredRenderingTest::releaseGLEntities(){

Let's compile and execute the code. Here it is!
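For reference, here is the kind of shader pair that this step describes for deferredRendering.vert and deferredRendering.frag. It is only a sketch of a Blinn lighting pass: the tImage0 sampler comes from the fragment shown earlier, while tImage1, tImage2, cameraPosition, the hard-coded light position and the specular exponent are values chosen for illustration:

    // deferredRendering.vert (sketch): pass-through for the screen-aligned quad
    void main( void )
    {
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // deferredRendering.frag (sketch): Blinn lighting with a hard-coded light
    uniform sampler2D tImage0;      // diffuse
    uniform sampler2D tImage1;      // position
    uniform sampler2D tImage2;      // normal
    uniform vec3 cameraPosition;    // sent from DeferredShadingOperator

    void main( void )
    {
        vec4 image    = texture2D(tImage0, gl_TexCoord[0].st);
        vec4 position = texture2D(tImage1, gl_TexCoord[0].st);
        vec4 normal   = texture2D(tImage2, gl_TexCoord[0].st);

        vec3 lightPos = vec3(150.0, 300.0, -200.0);   // static light, arbitrary values

        vec3 N = normalize(normal.xyz);
        vec3 L = normalize(lightPos - position.xyz);
        vec3 V = normalize(cameraPosition - position.xyz);
        vec3 H = normalize(L + V);                     // Blinn's half vector

        float diffuse  = max(dot(N, L), 0.0);
        float specular = pow(max(dot(N, H), 0.0), 32.0);

        gl_FragColor = image * (0.2 + diffuse) + vec4(specular);
    }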