Tutorial - Deferred Rendering (Part 1)

Deferred rendering (or deferred shading) is an interesting and ingenious technique that, unlike forward rendering, postpones the lighting computation to the end of the rendering pipeline, in image space, much like a post-processing effect. The underlying idea is: if a pixel never reaches the screen, why waste time computing its shading? Pixels that arrive in image space have already passed every test along the pipeline, so we end up shading only the pixels that actually make it to the screen, and we can skip the illumination step entirely while rendering the geometry. On the other hand, this technique causes trouble when dealing with transparency, and it is very slow (or does not work at all) on older hardware.
Deferred shading has been used in several games, for example
S.T.A.L.K.E.R. and Dead Space.
If you would like a deeper explanation of the technique, I suggest reading
Hargreaves, Shawn, and Mark Harris (2004), "Deferred Shading".

Since I wanted to implement this technique in Psyche, I decided to keep track of my progress and turn it into this simple tutorial. I won't discuss any of the data structures used to implement deferred rendering transparently inside Psyche; we will just focus on how to implement the technique in a simple, direct and clear way.

This tutorial explains how to implement deferred rendering on top of the Psyche engine. I hope the whole thing is clear, simple, and that it goes straight to the point. Deferred rendering is a fairly advanced topic, so it requires a good knowledge of both C++ and OpenGL; I take that knowledge for granted, which is why I won't explain every single line of code.

Since the online version of the engine already provides all the structures used for texture rendering and filtering, there is no need to rewrite them. If you would like to rebuild the example from (almost) scratch, create a main class that follows the usual main-class structure and then simply follow the steps; a sketch of such a skeleton is shown below.

It is also possible to work directly on the finished version of this example, which can be found in the folder that contains all the main classes (the Psyche engine is required).
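For those rebuilding from scratch, here is a minimal sketch of what the main test class could look like, assembled from the methods used throughout this tutorial. The base class and exact signatures depend on your version of Psyche, so treat this as a hedged outline rather than the actual header:

// Hedged sketch of the tutorial's main class; the base class and the exact
// return types of doAnimation()/applyChanges() are assumptions, only the
// method and member names come from the tutorial itself.
class DeferredRenderingTest /* : public <your Psyche test base class> */ {
public:
   PRESULT initializeGLEntities();   // create objects, shaders and buffers
   void    doAnimation();            // per-frame logic (camera swing)
   void    doRender();               // render the scene / run the filters
   void    applyChanges();           // synchronize objects and camera
   void    releaseGLEntities();      // free everything

private:
   RenderObject *athenaHead;
   RenderObject *athenaBack;
   RenderObject *athenaFloor;
};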

1

Let's start by creating a little scene to work on for this deferred rendering tutorial.
First of all, let's instantiate some objects inside the initialization function PRESULT DeferredRenderingTest::initializeGLEntities():

try{
   CheckAssignment(athenaHead = DataObjectManager::getInstance()->
         createRenderObjectFromFile("./models/athenaHead.exo"));
   CheckAssignment(athenaBack = DataObjectManager::getInstance()->
         createRenderObjectFromFile("./models/athenaBack.exo"));
   CheckAssignment(athenaFloor = DataObjectManager::getInstance()->
         createRenderObjectFromFile("./models/athenaFloor.exo"));
}
catch(PException* e){
   LogMex("Error", e->sException);
   return P_FAIL;
}

Remember to declare them as fields of the main class:

RenderObject   *athenaHead;
RenderObject   *athenaBack;
RenderObject   *athenaFloor;

Inside the method doAnimation() we tell the camera to swing a little:

camera.setPosition(465, 128, FastCos(Time::time()*0.01f)*300 );

We then ask the program to render the objects by invoking the related methods inside the function doRender():

athenaHead->render();
athenaBack->render();
athenaFloor->render();

We notify the system that these objects must be updated at the end of the synchronization procedure, so we put the corresponding calls inside the function applyChanges(), right above the camera's update call:

athenaHead->applyChanges();
athenaBack->applyChanges();
athenaFloor->applyChanges();
camera.applyChanges();

Lastly, we release everything inside the dedicated function releaseGLEntities():

DataObjectManager::getInstance()->deleteObject(athenaHead);
DataObjectManager::getInstance()->deleteObject(athenaBack);
DataObjectManager::getInstance()->deleteObject(athenaFloor);

Now, if we compile and run the code, we will see a simple scene showing a statue, a semi-cylindrical wall and a floor. What we are watching right now is classic OpenGL forward rendering.
Setup is now complete!


2

Now that the scene is in place we can start writing the code for the algorithm itself. Deferred rendering requires acquiring at least the positions, the normals and the diffuse component of the scene and saving them into textures, which usually have the same size as the screen. The resulting textures are then used to compute lighting at every pixel, and the result is sent to the screen.

This is not always straightforward; for example, on most older video cards it is impossible to write data into textures with different channel depths in a single render pass. To work around this, the first deferred shading implementations packed information into every available bit in order to carry all the data to the end of the pipeline; in the "Deferred Shading" paper, for instance, the authors used the depth information to reconstruct the vertices' positions, and stored some material information in the alpha channel of the normal texture.
For simplicity, and given that modern hardware allows it, we are going to save our information as it comes, straight into our textures, even using a different precision per texture. We probably lose some efficiency, but for this tutorial we won't care.
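As an aside, here is a hedged GLSL sketch of the position-from-depth trick mentioned above; we won't use it in this tutorial, and the uniform names (tDepth, InvViewProjMatrix) are assumptions for illustration, not part of Psyche:

// Hypothetical sketch: reconstruct world-space position from the depth
// buffer instead of storing positions in a dedicated G-buffer texture.
uniform sampler2D tDepth;            // depth buffer sampled as a texture
uniform mat4      InvViewProjMatrix; // inverse of projection * view

vec3 reconstructPosition(vec2 uv)
{
   float depth = texture2D(tDepth, uv).r;
   // back to normalized device coordinates in [-1, 1]
   vec4 ndc   = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
   vec4 world = InvViewProjMatrix * ndc;
   return world.xyz / world.w;
}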

Let’s start. First of all we have to create the right files and folders.

  • Create a folder “./Psyche/shaders/DeferredRendering/Test”
  • Add three files, “deferredShading.vert”, “deferredShading.frag” and “deferredShading_bump.frag”, which will be used to extract the information we need from the scene
  • Open these three files and let's write our shaders.

We are going to use multiple render targets (MRT) to avoid transforming the geometry more than once. To this end we can use OpenGL's gl_FragData variable to tell the fragment shader to render into each of the different render targets. We will also need to arrange some textures to hold these data, but we will worry about that after writing our shaders. There isn't much to explain here: the vertex shader transforms the geometry and passes normals and positions on to the fragment shader. One important detail is that we have two different fragment shaders, one that handles normal mapping and one "plain" version.

Vertex Shader:

varying vec4 normals;
varying vec4 position;
varying mat4 TBN;
attribute vec3 vNormal, vTangent, vBiNormal;
uniform mat4 ModelMatrix;
uniform mat4 TransformMatrixInverse;
uniform mat4 RotationMatrixInverse;

void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   gl_TexCoord[0] = gl_MultiTexCoord0;
   normals = RotationMatrixInverse * vec4(gl_Normal,1);
   position = ModelMatrix * gl_Vertex;


   TBN = mat4( vTangent.x, vBiNormal.x, vNormal.x, 0,
               vTangent.y, vBiNormal.y, vNormal.y, 0,
               vTangent.z, vBiNormal.z, vNormal.z, 0,
               0, 0, 0, 1 );

   gl_FrontColor = vec4(1.0, 1.0, 1.0, 1.0);
}

Fragment Shader (with normal mapping):

varying vec4         position;
varying mat4         TBN;
uniform sampler2D    tDiffuse, tBumpMap;

void main( void )
{
   gl_FragData[0] = texture2D(tDiffuse, gl_TexCoord[0].st);
   gl_FragData[0].a = 1.0;
   gl_FragData[1] = vec4(position.xyz, 1.0);
   gl_FragData[2] = (texture2D(tBumpMap, gl_TexCoord[0].st) * 2.0 -
                     vec4(1.0, 1.0, 1.0, 0.0)) * TBN;
   gl_FragData[2].a = 1.0;
}

Fragment Shader (without normal mapping):

varying vec4         normals;
varying vec4         position;
uniform sampler2D    tDiffuse;

void main( void )
{
   gl_FragData[0] = texture2D(tDiffuse, gl_TexCoord[0].st);
   gl_FragData[0].a = 1.0;
   gl_FragData[1] = vec4(position.xyz, 1.0);
   gl_FragData[2] = vec4(normals.xyz, 1.0);
}

We have made a lot of headway. However, if we compile and execute the program at this point, we won't see any change in the scene, because we have not yet linked the shaders to the scene's objects. Moreover, even if they were linked, we would just see the scene without any lighting since, by default, the front buffer shows the content of gl_FragData[0]. We first need to create a filter that processes the MRT textures before we can see our deferred shading at work.


3

Psyche provides an abstract class, called RenderTexture, used for all the procedures that require rendering to a texture. What we want is to extend it with a derived class that handles multiple render target textures (MRT). Such a class can be written from scratch, but Psyche already contains an implementation: the FBODeferredShadingBuffer class was written specifically to handle three textures at the same time (via MRT), and it is easy to extend with additional textures.
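Before diving into the constructor, here is a hedged sketch of roughly what the class declaration looks like, reconstructed from the members and methods used in this tutorial (fbo, depthBuffer, the render buffers, the texture vector, start/stop/showTexture); the real Psyche header may differ, and the meaning of showTexture's extra parameters is an assumption:

// Hedged sketch of the class declaration, reconstructed from its usage in
// this tutorial; not copied from the actual Psyche sources.
class FBODeferredShadingBuffer : public RenderTexture {
public:
   FBODeferredShadingBuffer(int _dWidth, int _dHeight);

   void   start();                        // begin rendering into the MRT
   void   stop();                         // end rendering into the MRT
   void   showTexture(int index,          // debug: show one target on screen
                      int x = 0, int y = 0, int size = 400);
   GLuint getTexture(unsigned int index); // one of the three MRT textures
   unsigned int getTextureCount();

private:
   GLuint fbo;                            // the framebuffer object
   GLuint depthBuffer;                    // depth render buffer
   GLuint diffuseBuffer;                  // color attachment 0 (RGB8)
   GLuint positionsBuffer;                // color attachment 1 (RGB32F)
   GLuint normalsBuffer;                  // color attachment 2 (RGB16F)
   std::vector<GLuint> texture;           // textures bound to the attachments
   int dWidth, dHeight;
};

Now let's analyze the main part of this class, the constructor: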

FBODeferredShadingBuffer::FBODeferredShadingBuffer(int _dWidth, int _dHeight){

   [...]

   glGenFramebuffersEXT(1, &fbo);
   glGenRenderbuffersEXT(1, &depthBuffer);
   glGenRenderbuffersEXT(1, &diffuseBuffer);
   glGenRenderbuffersEXT(1, &positionsBuffer);
   glGenRenderbuffersEXT(1, &normalsBuffer);

   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

This class uses an FBO (Framebuffer Object) for rendering to texture; the lines above generate the FBO and the render buffer names and bind the FBO, while the following blocks attach each render buffer to it.

   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, diffuseBuffer);
   glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, dWidth,
                            dHeight);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                            GL_COLOR_ATTACHMENT0_EXT,
                            GL_RENDERBUFFER_EXT, diffuseBuffer);

   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, positionsBuffer);
   glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB32F_ARB, dWidth,
                            dHeight);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                            GL_COLOR_ATTACHMENT1_EXT,
                            GL_RENDERBUFFER_EXT, positionsBuffer);

   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, normalsBuffer);
   glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB16F_ARB, dWidth,
                            dHeight);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                            GL_COLOR_ATTACHMENT2_EXT,
                            GL_RENDERBUFFER_EXT, normalsBuffer);

   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
   glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                            dWidth, dHeight);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
                                GL_DEPTH_ATTACHMENT_EXT,
                                GL_RENDERBUFFER_EXT, depthBuffer);

In these four code blocks we defined and attached four render targets (render buffers). In detail, we use an RGB buffer with 8 bits per channel for the diffuse component of the deferred pass, an HDR buffer with 32 bits per channel for the position component, while for normals 16 bits per channel are enough. The depth component is added for the z-buffer's needs. Now we have to create the actual textures and bind them to the render buffers.

   glGenTextures(1, &dwTexture);
   glBindTexture(GL_TEXTURE_2D, dwTexture);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, dWidth, dHeight, 0,
                GL_RGB, GL_UNSIGNED_BYTE, NULL);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                             GL_COLOR_ATTACHMENT0_EXT,
                             GL_TEXTURE_2D, dwTexture, 0);
   texture.push_back(dwTexture);

   glGenTextures(1, &dwTexture);
   glBindTexture(GL_TEXTURE_2D, dwTexture);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, dWidth, dHeight, 0,
                GL_RGB, GL_FLOAT, NULL);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                             GL_COLOR_ATTACHMENT1_EXT,
                             GL_TEXTURE_2D, dwTexture, 0);
   texture.push_back(dwTexture);

   glGenTextures(1, &dwTexture);
   glBindTexture(GL_TEXTURE_2D, dwTexture);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, dWidth, dHeight, 0,
                GL_RGB, GL_FLOAT, NULL);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                             GL_COLOR_ATTACHMENT2_EXT,
                             GL_TEXTURE_2D, dwTexture, 0);
   texture.push_back(dwTexture);

   [...]
}
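Before the constructor returns, it is good practice to verify that the FBO is complete; the omitted [...] part of the real class may already do something along these lines. A minimal hedged sketch:

   // Hedged sketch: check that the framebuffer with its three color
   // attachments and the depth attachment is complete before using it.
   // LogMex is used as elsewhere in the tutorial; adapt to your logging.
   GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
   if(status != GL_FRAMEBUFFER_COMPLETE_EXT){
      LogMex("Error", "FBODeferredShadingBuffer: framebuffer incomplete");
   }

   // Unbind so that normal rendering goes back to the default framebuffer.
   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);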

In each of the three texture blocks above, the first lines create and bind the texture, then we specify the texture's format and parameters, and finally we attach it to the corresponding render target. Each texture is also pushed into a vector for any future need. Another detail worth mentioning is the method that starts the render-to-texture process: to specify the correspondence between the gl_FragData entries and the render buffers, a few lines were added, as shown below:

void FBODeferredShadingBuffer::start(){
   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
   glPushAttrib(GL_VIEWPORT_BIT);
   glViewport(0,0,dWidth, dHeight);

   // Enable the three color attachments before clearing, so that all of
   // them (and not just attachment 0) are cleared every frame.
   GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT,
                        GL_COLOR_ATTACHMENT1_EXT,
                        GL_COLOR_ATTACHMENT2_EXT };
   glDrawBuffers(3, buffers);

   glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
   glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
}

With the arrangement shown above, the diffuse component is connected to slot 0, the position component to slot 1 and the normal component to slot 2. Now let's create the buffer in the main class and check that we are acquiring all the information properly inside the prepared render targets. Remember that we also have to link the shaders to the objects. Here's the code:

try{
   CheckAssignment(athenaHead = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaHead.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading.frag") );

   CheckAssignment(athenaBack = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaBack.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading_bump.frag") );

   CheckAssignment(athenaFloor = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaFloor.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading_bump.frag") );

   defBuffer = new FBODeferredShadingBuffer(shWIDTH, shHEIGHT);
}
catch(PException* e){
   LogMex("Error", e->sException);
   return P_FAIL;
}

Notice that the athenaHead object has no bump map, so it is linked to the fragment shader that performs no bump computation. Let's now test our work by rendering the content of the textures to the screen, to check that everything works as expected.

void DeferredRenderingTest::doRender(){
   Camera::setCurrentCamera(&camera);
   Light::update();

   glDisable(GL_LIGHTING);

   defBuffer->start();
      athenaHead->render();
      athenaBack->render();
      athenaFloor->render();
   defBuffer->stop();

   defBuffer->showTexture(0);
   defBuffer->showTexture(1,400,400);
   defBuffer->showTexture(2,400,0,400);
};

OK, we are ready: let's compile and run the code.
Not too bad, eh? We are rendering to all three textures in a single rendering pass!
We are almost done; we just have to put everything together to get the final result.


4

We have finally reached the last stage: the lighting computation. We now have the three textures with all the information we need. In this tutorial we will ignore material properties and light each surface with the Blinn-Phong model, hard-coding the equation's parameters directly in the shader.
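For reference, the equation that the lighting fragment shader below implements is the usual Blinn-Phong model, with hard-coded specular exponent and intensity (the values 9 and 10 come straight from the shader):

$$C = \max(N \cdot L,\, 0)\, C_{\text{diffuse}} \;+\; \max(N \cdot H,\, 0)^{9} \cdot 10, \qquad H = \frac{L + V}{\lVert L + V \rVert}$$

where N is the normal read from the G-buffer, L the direction towards the light, V the direction towards the camera, and C_diffuse the color read from the diffuse texture.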

A simple and elegant way to build what we need is to use Psyche's ImageFilter class. Every object of this class takes as input one or more textures, which are then processed by a chain of filters. In our case we will feed it the FBODeferredShadingBuffer and add a single filter for the lighting computation; further post-processing steps could be appended, but in this tutorial we will limit ourselves to the deferred rendering filter. First we create the filter's shaders inside the folder "./Psyche/shaders/DeferredRendering", with the names "deferredRendering.vert" and "deferredRendering.frag", and then we write the following code:

Vertex Shader:

void main( void )
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   gl_TexCoord[0] = gl_MultiTexCoord0;

   gl_FrontColor = vec4(1.0, 1.0, 1.0, 1.0);
}

Fragment Shader:

uniform sampler2D tImage0;
uniform sampler2D tImage1;
uniform sampler2D tImage2;
uniform vec3 cameraPosition;

void main( void )
{
   vec4 image0   = texture2D( tImage0, gl_TexCoord[0].xy );
   vec4 position = texture2D( tImage1, gl_TexCoord[0].xy );
   vec3 normal   = texture2D( tImage2, gl_TexCoord[0].xy ).xyz;

   vec3 light = vec3(150.0, 250.0, 0.0);
   vec3 lightDir = light - position.xyz;

   normal = normalize(normal);
   lightDir = normalize(lightDir);

   vec3 eyeDir = normalize(cameraPosition - position.xyz);
   vec3 vHalfVector = normalize(lightDir + eyeDir);

   gl_FragColor = max(dot(normal, lightDir), 0.0) * image0 +
                  pow(max(dot(normal, vHalfVector), 0.0), 9.0) * 10.0;
}

The vertex shader speaks for itself. The fragment shader is a bit more involved: it receives the three textures and the camera position from the program, reads the per-pixel values from the textures and applies the Blinn-Phong model for local lighting.

The light's position is set directly in the shader's code; at this stage we don't handle dynamic lights. In the second part of this tutorial we will see how to read a set of lights, with their colors and intensities, from a texture.
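Just to give an idea of where this is heading, here is a hedged sketch of how several lights could be accumulated with a uniform array; the names lightPosition, lightColor and LIGHT_COUNT are assumptions for illustration, not what part 2 will actually use:

// Hypothetical sketch: accumulate a fixed number of point lights with the
// same Blinn-Phong terms used above.
const int LIGHT_COUNT = 4;
uniform vec3 lightPosition[LIGHT_COUNT];
uniform vec3 lightColor[LIGHT_COUNT];

vec4 shade(vec3 position, vec3 normal, vec4 diffuse, vec3 eyeDir)
{
   vec4 result = vec4(0.0);
   for(int i = 0; i < LIGHT_COUNT; i++)
   {
      vec3 lightDir = normalize(lightPosition[i] - position);
      vec3 halfVec  = normalize(lightDir + eyeDir);
      result += max(dot(normal, lightDir), 0.0) * diffuse
              * vec4(lightColor[i], 1.0)
              + pow(max(dot(normal, halfVec), 0.0), 9.0) * 10.0;
   }
   return result;
}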

OK, we are almost done; we just have to run these shaders and render the result to the screen.
But since we are careful, tidy programmers, we are not going to throw everything into a single heap: we will write a small, well-ordered class that fits into the filter system mentioned earlier.

To do this, we extend the FilterOperator class with a new class, which we call DeferredShadingOperator, overriding the method void DeferredShadingOperator::operation(). Let's see the class's code.

class DeferredShadingOperator : public FilterOperator{
private:
   // Variables
   //------------------------------------------------------------
   GLSLShaderData    *shader;
   vector<GLuint>     glImage;
   GLuint             glCameraPosition;

public:
   // Functions
   //------------------------------------------------------------
   DeferredShadingOperator();
   ~DeferredShadingOperator();

   void   operation();
};

Very easy, isn't it? It's important to keep it as simple as possible; whatever you are designing, just remember to keep it simple. An important but invisible detail is that we inherit from FilterOperator a vector of RenderTexture elements: it is the texture source. In our case the vector holds a single RenderTexture that contains the various render buffers, but this structure allows us to read from several RenderTextures if needed.
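Since that base class never appears explicitly in this tutorial, here is a hedged sketch of the parts of FilterOperator that DeferredShadingOperator relies on, reconstructed from how it is used below (renderTexture, sourceOperator, getRenderTexture, the virtual operation()); the real Psyche header will differ:

// Hedged sketch of the FilterOperator interface as used in this tutorial;
// reconstructed from usage, not copied from the Psyche sources.
class FilterOperator{
public:
   FilterOperator(RenderTexture *output){ renderTexture.push_back(output); }
   virtual ~FilterOperator(){}

   virtual void operation() = 0;                  // overridden by each filter

   RenderTexture* getRenderTexture(unsigned int i){ return renderTexture[i]; }

protected:
   vector<RenderTexture*> renderTexture;          // where this filter renders
   FilterOperator        *sourceOperator;         // previous step in the chain
};

With that in mind, let's see the constructor of DeferredShadingOperator: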

DeferredShadingOperator::DeferredShadingOperator()
      : FilterOperator(new BasicRenderTexture(shWIDTH, shHEIGHT)){
   shader = new GLSLShaderData(
            "./shaders/DeferredRendering/deferredRendering.vert",
            "./shaders/DeferredRendering/deferredRendering.frag");

   for(DWORD i=0; i<4; i++){
      stringstream uniformName;
      uniformName << "tImage" << i << ends;
      GLuint image = glGetUniformLocationARB(shader->glProgramHandler,
                     uniformName.str().c_str());
      glImage.push_back(image);
   }

   glCameraPosition = glGetUniformLocationARB(shader->glProgramHandler,
                      "cameraPosition");
}

The constructor loads the shaders from the files we created and looks up the uniform locations. Nothing new. Even the code of operation() is simple:

void DeferredShadingOperator::operation(){
   renderTexture[0]->start();

The operation() method stores its result inside a texture, which will then be handed to the next filter in the chain for its own computations, so it is important to begin the render-to-texture procedure right away. Moreover, if we didn't render to the texture we would render to the front buffer, and we don't want that to happen.

   //Set matrices
   glMatrixMode(GL_PROJECTION);
   glPushMatrix();
   glLoadIdentity();
   glOrtho(0,1,0,1,0.1f,2);

   glMatrixMode(GL_MODELVIEW);
   glPushMatrix();
   glLoadIdentity();
   glTranslatef(0,0,-1.0);

   glDisable(GL_LIGHTING);

   //Render the quad
   glUseProgramObjectARB(shader->glProgramHandler);

   Camera *camera = Camera::getCurrentCamera();
   glUniform3fARB(glCameraPosition, camera->getPosition().comp.x,
                                    camera->getPosition().comp.y,
                                    camera->getPosition().comp.z);

These OpenGL calls set up the projection and modelview matrices. We disable the lighting system, because we don't want any fixed-function lighting on the screen-aligned quad, then we enable the shader and send the current camera's position to it.

   glActiveTextureARB(GL_TEXTURE0_ARB);
   glBindTexture( GL_TEXTURE_2D, 0 );

   for(unsigned int i=0;
       i < sourceOperator->getRenderTexture(0)->getTextureCount();
       i++)
   {
      glActiveTextureARB(GL_TEXTURE0_ARB + i );
      glBindTexture(GL_TEXTURE_2D,
                    sourceOperator->getRenderTexture(0)->getTexture(i));
      glEnable(GL_TEXTURE_2D);
      glUniform1iARB (glImage[i], i);
   }

With the previous code we send all the textures to the shader. We retrieve the RenderTexture (which we know is of type FBODeferredShadingBuffer) from the previous filtering step, read the textures it holds for the render buffers, and pass them to the shader, paying attention to send them in the correct order.

   glColor4f(1,1,1,1);
   glBegin(GL_QUADS);
      glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0, 0);
      glVertex3f( 0, 0, 0);
      glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1, 0);
      glVertex3f( 1, 0, 0);
      glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1, 1);
      glVertex3f( 1, 1, 0);
      glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0, 1);
      glVertex3f( 0, 1, 0);
   glEnd();

   glUseProgramObjectARB(0);

   for(unsigned int i=0;
       i < sourceOperator->getRenderTexture(0)->getTextureCount();
       i++)
   {
      glActiveTextureARB(GL_TEXTURE0_ARB + i );
      glBindTexture( GL_TEXTURE_2D, 0 );
      glDisable(GL_TEXTURE_2D);
   }

   glEnable(GL_LIGHTING);

   glMatrixMode(GL_PROJECTION);
   glPopMatrix();

   glMatrixMode(GL_MODELVIEW);
   glPopMatrix();

   renderTexture[0]->stop();
}

The remaining code performs the rendering and then turns off the shader and all the associated state when we are done. The matrices are restored to what they were before the function began, and the system is told to stop rendering to texture. In Psyche, filters are built by a dedicated builder class, so we must add a method to it for our new filter; its code is almost identical to the other methods of the builder:

PRESULT ImageFilterBuilder::addDeferredShadingFilter(){
   instance();
   if(imageFilter->pSource == NULL) return P_FAIL;
   imageFilter->vFilters.push_back( new DeferredShadingOperator() );
   imageFilter->recomputeGraph();

   return P_OK;
}

We are now ready to connect everything in the main class. Here is the code of the initialization and rendering functions:

try{
   CheckAssignment(athenaHead = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaHead.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading.frag") );

   CheckAssignment(athenaBack = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaBack.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading_bump.frag") );

   CheckAssignment(athenaFloor = DataObjectManager::getInstance()->
      createGLSLRenderObjectFromFile("./models/athenaFloor.exo",
      "shaders/DeferredRendering/deferredShading.vert",
      "shaders/DeferredRendering/deferredShading_bump.frag") );

   athenaHead->setListRender();
   athenaBack->setListRender();
   athenaFloor->setListRender();

   defBuffer = new FBODeferredShadingBuffer(shWIDTH, shHEIGHT);

   ImageFilterBuilder builder;
   builder.addAcquireFromTextureOperator(defBuffer);
   builder.addDeferredShadingFilter();
   builder.addRenderToScreenOperator();
   screenFilter = builder.getImageFilter();
}
catch(PException* e){
   LogMex("Error", e->sException);
   return P_FAIL;
}

The object screenFilter is of type ImageFilter* and, like defBuffer, it is a field of the main class; remember to declare both, as shown below. The filter holds a source texture (our FBODeferredShadingBuffer), the filter that actually computes the deferred shading, and an operator that sends the result of the filter chain to the screen.
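The two fields can be declared just like the render objects earlier; the types come straight from the code above:

FBODeferredShadingBuffer   *defBuffer;
ImageFilter                *screenFilter;

Rendering is done as follows: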

void DeferredRenderingTest::doRender(){
   Camera::setCurrentCamera(&camera);
   Light::update();

   glDisable(GL_LIGHTING);

   defBuffer->start();
      athenaHead->render();
      athenaBack->render();
      athenaFloor->render();
   defBuffer->stop();

   screenFilter->executeFiltering();
};

Lastly, we release the new objects in the release method:

void DeferredRenderingTest::releaseGLEntities(){
   DataObjectManager::getInstance()->deleteObject(athenaHead);
   DataObjectManager::getInstance()->deleteObject(athenaBack);
   DataObjectManager::getInstance()->deleteObject(athenaFloor);
   delete screenFilter;
   delete defBuffer;
};

Let's compile and run the code. Here it is: deferred shading at work! Definitely a good result, isn't it? Sure, there is only one light, only one material, and the speculars are too strong, making the scene look a bit weird... but it works!

Since I want to keep exploring deferred rendering and integrate it into the engine in a transparent way, I plan to write a second tutorial about it: engine integration, light support, material management, volumetric lights and, later, screen space ambient occlusion.
See you soon!

For anything you may need (comments, error reports, requests, etc.), do not hesitate to contact me at
