
Caustics Mapping: Explanation and Implementation in DirectX 9

 

Figure 1: Images rendered using caustics mapping (from left to right): (a) under water caustics, (b) colored caustics, and (c) reflective caustics.

 

 

1. Introduction

Real-time computer graphics has been greatly influenced by image-space algorithms, i.e., algorithms that operate on images of objects rather than on the object geometry. For example, shadow mapping is a widely used image-space technique for rendering shadows by performing a simple depth test on rasterized pixels rather than doing expensive shadow-ray intersection tests against the scene geometry. Another popular example of an image-space algorithm is displacement mapping. Essentially, image-space algorithms allow for a fast, approximative approach to rendering various optical effects that are prohibitively slow to produce using conventional geometric techniques, especially for real-time applications such as games. The reason image-space techniques are faster is that they decouple the geometric complexity of a 3D scene from the render time. In the case of shadow mapping, it does not matter whether there are ten or one hundred objects in the scene; the algorithm will take more or less the same amount of time to render shadows. If the shadows were instead rendered using ray casting, the time cost would increase significantly as more objects are introduced into the scene. The second advantage of image-space algorithms is that they are very well suited to current programmable graphics hardware, since GPUs are great at brute-force per-pixel operations.

As hardware becomes more and more powerful, games start to pack in more and more eye-popping graphics effects, thanks to a very active graphics research community. Recent research developments have introduced a new image-space technique for rendering caustics in real time entitled "caustics mapping". This article is a tutorial-style explanation of the technique and serves as a stepwise guide to implementing caustics mapping.

 

 

Figure 2: Caustics formation: light rays from the source get focused on a single point on the receiver surface due to the refractive object.

 

 

2. What are caustics?

Before we dive into the details of rendering caustics, let's first define what they actually are. Simply put, caustics are complex patterns of shimmering light that can be seen on surfaces in the presence of reflective or refractive objects, for example those formed on the floor of a swimming pool in sunlight. More formally, whenever multiple rays of light converge on the same point on a surface, they cause that region to become relatively brighter than its surrounding regions. These non-uniform distributions of bright and dark regions are known as caustics. The convergence of the light rays is generally caused by reflective or refractive objects, which bend the rays, changing their direction and focusing them on a single point (see Figure 2). The caustics mapping algorithm simulates this light transport using an approximative image-space approach to render visually convincing caustics in real time.

 

Figure 3: The caustics mapping algorithm: (top) the creation of the caustics map texture using photon splatting, and (bottom) rendering caustics on receiver geometry by looking up the caustics map.

 

 

3. The Caustics Mapping Algorithm

The caustics mapping algorithm runs entirely on the GPU, freeing up the CPU to perform the numerous other tasks required in games, such as AI. That also implies the reader should have some knowledge of vertex and pixel shader programming in order to fully understand this algorithm. The graphics API and shader language used in this implementation are Microsoft DirectX 9.0 and HLSL; however, it is equally convenient to implement the algorithm using OpenGL with GLSL or Cg. Lastly, a graphics card with support for shader model 3.0 is required, since the algorithm performs texture lookups in the vertex shader.
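Before diving in, it may be worth verifying shader model 3.0 support at startup. The snippet below is a minimal sketch, assuming an already created pd3dDevice:

// Verify shader model 3.0 support (needed for the vertex-shader texture lookups)
D3DCAPS9 caps;
pd3dDevice->GetDeviceCaps(&caps);

if (caps.VertexShaderVersion < D3DVS_VERSION(3, 0) ||
    caps.PixelShaderVersion  < D3DPS_VERSION(3, 0))
{
    // Report an error or fall back to a technique that does not
    // require texture fetches in the vertex shader.
}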

 

We are now ready to look at the caustics mapping algorithm. Without loss of generality, caustics from a refractive object will be considered throughout this tutorial. The use of reflective objects does not require any modifications to the overall algorithm, and therefore they can be substituted in place of the refractive objects if desired.

 

The algorithm is conceptually quite simple and intuitive. It starts with tracing light through the refractive object. The footprints of the refracted light are then collected onto an image plane facing the light source to create a caustics map. Caustics are then rendered onto surfaces by looking up the caustics map. Figure 3 shows a schematic of the algorithm.

 

Straight away we run into a few major problems that need to be addressed. First, we need to compute the intersection points of the refracted light rays with the scene geometry to determine where the caustics will form. This has to be done fast enough that real-time performance is not compromised. For now, let's assume that we are given some magic function that takes a refracted light ray and returns the intersection point at virtually no cost. We can therefore shoot a light ray from the light source to each point on the surface of the refractive object, perform the refraction, and then use the intersection function to compute where the refracted light ray intersects the rest of the 3D world. For each ray, a certain amount of light is deposited in the caustics map at the projected location of the intersection point. So when several refracted light rays intersect at the same point, multiple light deposits accumulate in that region, causing it to get brighter and thus forming caustics. However, this accumulation process presents another challenge. Since we will be implementing the algorithm on the GPU, we cannot write to pixels in arbitrary order; they can only be written in the order that they are rasterized. This means that if we want to write to certain pixels more than once, they need to be rasterized more than once. The caustics mapping algorithm solves this problem using "vertex splatting" (also known as point splatting), which simply displaces the vertices of the refractive object along the refracted light direction and renders them as point primitives. As a result of the displacement, multiple vertices can end up in the same position, providing the repeated rasterization of the same pixels that is required for the light accumulation. Additive blending is used to sum up the contribution from each individual fragment.

 

We are still left with one small problem. Since the algorithm works using vertex splatting, the quality of the caustics produced depends heavily on the number of vertices of the refractive object. For example, if we want to render caustics from a glass cube, which would generally be modeled using only eight vertices, we would end up with random, disjoint patches of light. A simple workaround is to tessellate the mesh so that it contains a large number of vertices. However, using highly tessellated meshes in games and other real-time applications is undesirable, especially for simple objects such as a cube. A more elegant solution is to create an auxiliary mesh, known as the refractive vertex grid, containing a set of vertices evenly distributed over the surface of the refractive object visible from the light source. This mesh can then be used for the splatting process instead of the original refractive object mesh.

 

 

3.1. Implementation

Let us now look at caustics mapping from an implementation-oriented perspective. Following are the main steps of the algorithm:

 

Step 1: Setup refractive vertex grid.

Step 2: Create caustics map.

Step 3: Render caustics on receiver geometry using the caustics map.

 

Users familiar with shadow mapping might notice a striking structural similarity, with the exception of the first step. Let's consider each of the steps in some detail. First, a "refractive vertex grid" has to be created. As mentioned earlier, this is simply a set of vertices that will be "splatted" wherever rays of light hit the receiver geometry. Splatting means exactly what it sounds like. Think of it as a paint ball being thrown at a wall, thus creating a splat on it. In our case, the paint ball is a vertex which represents a photon (or a number of photons), and the wall is the receiver geometry, i.e., surfaces on which the caustics can be formed. The idea is that if multiple vertices, traveling along the light rays, end up getting splatted at the same point, that region will become brighter. Recall that this is exactly what we wanted to simulate for rendering caustics. The results of the splatting are stored in a texture called the caustics map. Then, in the final rendering stage, each point on the receiver geometry is projected into the caustics map texture to determine the amount of caustics it receives, if any.

 

It is time to get down and dirty with the implementation details. We will reiterate through the steps of the caustics mapping algorithm and consider what needs to be done at each step. Before we begin, let's define the scene that we will be rendering. First, we need a refractive (or reflective) object that will cause the light rays to converge, thus forming caustics. Second, we need some receiver geometry on which the caustics will be formed. The choice of these objects is left solely to the user; the algorithm places no limitation on the type of 3D geometry used. A good starter selection would be to use a sphere for the refractive object and a flat plane underneath it as the caustics receiver geometry. Lighting is provided by a point light source (the algorithm currently does not support environment or area lighting). Once that is decided, we can start with the implementation of the first step of the algorithm: setting up the refractive vertex grid.
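If you want something concrete to start from, the suggested starter scene can be put together with the D3DX helpers. The sketch below is purely illustrative; the mesh pointer name and the plane dimensions are assumptions, not part of the algorithm:

// Refractive object: a unit sphere created with the D3DX helper
LPD3DXMESH pRefractMesh = NULL;
D3DXCreateSphere(pd3dDevice, 1.0f, 64, 64, &pRefractMesh, NULL);

// Receiver geometry: a flat quad underneath the sphere
// (positions only; add normals and texture coordinates as your shaders require)
D3DXVECTOR3 planeVerts[4] =
{
    D3DXVECTOR3(-5.0f, -2.0f, -5.0f),
    D3DXVECTOR3(-5.0f, -2.0f,  5.0f),
    D3DXVECTOR3( 5.0f, -2.0f, -5.0f),
    D3DXVECTOR3( 5.0f, -2.0f,  5.0f),
};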

Figure 4: Final splat locations for the vertices in the refractive vertex grid.

 

 

The goal of the refractive vertex grid is to give us a uniform distribution of points on the surface of the refractive object visible from the light source. This can be achieved by rendering the refractive object from the light's point of view onto a texture of a certain resolution. Instead of outputting color, the 3D world position is output at each pixel. Then, if we create a rectangular vertex grid of the same resolution, we end up with a one-to-one correspondence between the pixels of the texture and the vertices of the grid. And since the texture contains the 3D positions of points on the surface of the refractive object instead of color, we essentially end up with a 3D position for each grid vertex. Mission accomplished; we have a set of vertices evenly distributed over the surface of the refractive object and can now proceed with the rest of the algorithm. However, we can take this a step further. Recall that the ultimate objective of the refractive vertex grid is to splat the vertices at the points of intersection of the refracted light rays and the receiver geometry. So instead of points on the surface of the refractive object, why not render the final intersection points onto the texture and apply those to the grid (see Figure 4)? And that is exactly what the algorithm does.

 

The vertex grid is created and initialized as follows:

 

#define PP_RES 100

// Refractive vertex grid
IDirect3DVertexBuffer9* pRefractVertBuffer;

struct myVertex
{
    D3DXVECTOR3 pos;
    D3DXVECTOR2 tc;
};
// One 2D texture coordinate set (set 0)
#define D3DFVF_MYVERT (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_TEXCOORDSIZE2(0))

// Fill in the grid vertices
myVertex* vertices = new myVertex[PP_RES * PP_RES];

float delta  = 2.0f / float(PP_RES - 1);   // spacing in grid space [-1,1]
float deltat = 1.0f / float(PP_RES - 1);   // spacing in texture space [0,1]

for(int i = 0; i < PP_RES; i++)
{
    for(int j = 0; j < PP_RES; j++)
    {
        vertices[i*PP_RES + j].pos = D3DXVECTOR3(-1.0f + delta*j, 1.0f - delta*i, 1.0f);
        vertices[i*PP_RES + j].tc  = D3DXVECTOR2(deltat*j, deltat*i);
    }
}

// Create the vertex buffer for the refractive vertex grid
pd3dDevice->CreateVertexBuffer(sizeof(myVertex)*PP_RES*PP_RES, 0,
                               D3DFVF_MYVERT, D3DPOOL_MANAGED,
                               &pRefractVertBuffer, NULL);

// Fill the vertex buffer with the refractive vertex grid
void* pData;
pRefractVertBuffer->Lock(0, 0, &pData, 0);
memcpy(pData, vertices, sizeof(myVertex)*PP_RES*PP_RES);
pRefractVertBuffer->Unlock();

delete [] vertices;

 

The code above creates a DirectX vertex buffer for the refractive vertex grid. The grid is planar, with a constant z coordinate equal to 1.0, and the x and y coordinates ranging from -1.0 to 1.0. Each vertex in the grid also has associated texture coordinates u and v, which range from 0 to 1.0. PP_RES defines the resolution of the refractive vertex grid. The sharpness and quality of the rendered caustics depend on this resolution. The value can be adjusted by the user to achieve the desired balance between speed and image quality.

 

Now that the refractive vertex grid has been set up, we need to determine the positions where the vertices will be splatted to create the caustics map. These positions are the intersection points of the refracted light rays with the receiver geometry. However, since we cannot afford to perform ray-geometry intersection tests, we will use an image-space approximation technique. Remember that magic intersection function we assumed to possess? Well, it's time to define it now. Following are the steps that need to be taken:

 

Step 1: From the light's point of view, render the receiver geometry (do not render the refractive object) onto a texture. Instead of outputting color at every pixel, the 3D coordinates in world space are output. We will refer to this as the "positions texture".

Step 2: From the light's point of view, render the refractive object onto a texture. At every pixel, compute the refracted light ray and estimate its intersection point with the receiver geometry using the positions texture. Output the 3D world-space coordinates of the intersection point.

 

Step 1 is pretty straightforward; it consists of simple shader programs that you have probably written a thousand times. The only difference is that instead of outputting color at each pixel, you output the interpolated 3D world positions; a minimal sketch is given below. Step 2 is a little more involved. The vertex and pixel shaders corresponding to Step 2 follow after the sketch:
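As an illustrative sketch of the Step 1 position-output pass (the shader names vsWorldPos and psWorldPos are assumptions; the matrices are the same g_mWorld, g_mLightView, and g_mLightProj used below):

// Step 1: render the receiver geometry, writing world-space positions
// (the render target should be a floating-point texture)
float4x4 g_mWorld;
float4x4 g_mLightView;
float4x4 g_mLightProj;

struct VS_POS_OUT
{
    float4 pos  : POSITION;   // clip-space position
    float4 posi : TEXCOORD0;  // world-space position
};

VS_POS_OUT vsWorldPos(float4 pos : POSITION)
{
    VS_POS_OUT output;
    output.posi = mul(pos, g_mWorld);
    output.pos  = mul(output.posi, mul(g_mLightView, g_mLightProj));
    return output;
}

float4 psWorldPos(VS_POS_OUT input) : COLOR
{
    // output the interpolated world-space position instead of color
    return float4(input.posi.xyz / input.posi.w, 1);
}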

 

Vertex shader:

float4x4 g_mWorld;       // world transformation matrix
float4x4 g_mLightView;   // view matrix for the light
float4x4 g_mLightProj;   // projection matrix for the light

// vertex input and output structures
struct VS_IN
{
    float4 pos    : POSITION;  // 3D position
    float3 normal : NORMAL0;   // vertex normal
};

struct VS_OUT
{
    float4 pos    : POSITION;  // position in clip space
    float4 posi   : TEXCOORD0; // interpolated position in world space
    float3 normal : TEXCOORD1; // interpolated normal in world space
};

// The vertex shader
VS_OUT vsLightRender(VS_IN input)
{
    float4x4 mLightViewProjection = mul(g_mLightView, g_mLightProj);
    VS_OUT output;

    output.posi   = mul(input.pos, g_mWorld);
    // transform the normal with the rotation part of the world matrix
    // (assumes no non-uniform scaling)
    output.normal = mul(input.normal, (float3x3)g_mWorld);
    output.pos    = mul(output.posi, mLightViewProjection);

    return output;
}

 

 

Pixel shader:

float3  g_lightPos;    // light position in world space
float   g_refIndex;    // ratio of refractive indices passed to refract()
sampler posTexSampler; // positions texture from Step 1

// Compute texture coordinates by projecting the given 3D position in
// world space
float2 getTC(float4x4 mVP, float3 wpos)
{
    float4 texPt = mul(float4(wpos,1), mVP);
    float2 tc = 0.5*(texPt.xy/texPt.w) + float2(0.5, 0.5);
    tc.y = 1.0f - tc.y;

    return tc;
}

// Estimate the distance to the ray/geometry intersection point
float rayGeo(float3 pos, float3 dir, float4x4 mVP, sampler posTex, int num_iter)
{
    float2 tc = float2(0.0, 0.0);
    // initial guess
    float dist = 1.0;

    for(int i = 0; i < num_iter; i++)
    {
        // step along the ray, project into the positions texture,
        // and use the stored receiver position to refine the distance
        float3 pos_p = pos + dir*dist;
        tc = getTC(mVP, pos_p);
        float4 newPos = tex2D(posTex, tc);
        dist = distance(newPos.xyz/newPos.w, pos);
    }

    return dist;
}

// The pixel shader
float4 psIntersectPts(VS_OUT input) : COLOR
{
    float3 posi = input.posi.xyz/input.posi.w;
    float3 normal = normalize(input.normal);
    float4x4 mVP = mul(g_mLightView, g_mLightProj);

    // Compute refracted ray direction
    float3 lightDir = normalize(posi - g_lightPos);
    float3 refrDir = normalize(refract(lightDir, normal, g_refIndex));

    // Get distance to the estimated ray/geometry intersection point
    float dist = rayGeo(posi, refrDir, mVP, posTexSampler, 1);

    // Compute the intersection point
    float3 intPt = posi + refrDir*dist;

    return float4(intPt, 1);
}

 

 

The pixel shader utilizes the magic intersection estimation function, rayGeo. This function implements an iterative root-finding method similar to the Newton-Raphson method. The basic idea is that the problem of finding the intersection point is posed as the problem of finding the root of a mathematical function. The function in our case is defined by the positions texture. Therefore, the root of our function is where a given light ray intersects the positions texture, which is essentially the intersection point that we are trying to find. There are a number of root-finding methods out there, such as the secant method, the bisection method, etc., and they differ in accuracy, robustness, and the number of iterations it takes to converge to the solution. Any one of these methods can be used for the intersection estimation. The authors of caustics mapping decided to go with a variant of the Newton-Raphson method, which converges faster than the others. For further details on the convergence and accuracy of this function, please consult the caustics mapping paper [SKP2006].
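In symbols, the loop in rayGeo can be read as the fixed-point iteration

d_{i+1} = \lVert P(\pi(x_0 + d_i\, r)) - x_0 \rVert, \qquad d_0 = 1,

where x_0 is the ray origin, r is the refracted direction, \pi projects a world-space point into the light's image plane, and P is the receiver position stored at that texel of the positions texture. The estimated intersection point is then x_0 + d\, r, which is exactly what psIntersectPts writes out.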

 

The rayGeo function returns an estimate of the distance to the intersection point of the ray along the given direction, dir, originating from the point, pos. It uses the positions texture (posTexSampler) created in Step 1. The parameter num_iter specifies the number of iterations to be performed. Generally, each successive iteration increases the accuracy of the approximation. However, for the purpose of rendering caustics, just a single iteration is sufficient.

 

After this step, we end up with a texture containing the intersection points of the refracted light rays and the receiver geometry. The hard part is over; everything after this is just simple point rendering with texture mapping. In order to create the caustics map, all we need to do now is splat the vertices from the refractive vertex grid at these intersection points. We do this by rendering the grid as point primitives using additive alpha blending onto a texture, i.e., the caustics map. In DirectX, point primitives are rendered as screen-aligned quads. Therefore, if we use them for splatting, the splats will look like squares and create some unwanted visual artifacts. However, we can apply an alpha mask with a Gaussian falloff to the point primitives so that they have a nice circular shape with a smooth gradient at the edges. In DirectX, the PointSpriteEnable state must be set in order to use point primitive texturing. Figure 5 shows an example texture that can be used as the alpha mask. The white color indicates total transparency whereas the black color indicates total opaqueness.

 

 

Figure 5: Alpha mask for point splat.

 

 

The vertex and pixel shaders for this step are as follows:

 

Vertex shader:

sampler ppPosSampler;  // texture of estimated intersection points from the previous pass
float   g_ppRes;       // resolution of the refractive vertex grid (PP_RES)
// g_mLightView and g_mLightProj are declared as before

// Vertex input and output structures
struct VS_IN1
{
    float4 pos : POSITION;
    float2 tc  : TEXCOORD0;
};

struct VS_OUT1
{
    float4 pos       : POSITION;
    float3 light_int : COLOR0;
    float2 tc        : TEXCOORD0;
};

// The vertex shader
VS_OUT1 vsCaustics(VS_IN1 input)
{
    VS_OUT1 output;

    // look up the intersection position
    float4 intPt = tex2Dlod(ppPosSampler, float4(input.tc, 0, 0));

    // project the splat position into the light's view
    float4x4 mLightViewProj = mul(g_mLightView, g_mLightProj);
    output.pos = mul(float4(intPt.xyz, 1), mLightViewProj);

    // point sprite rasterization will overwrite these coordinates,
    // but the output still has to be written
    output.tc = input.tc;

    // compute the light intensity contributed by this splat
    output.light_int = float3(1000, 1000, 1000)*(4.0/g_ppRes);

    return output;
}

 

 

Pixel shader:

sampler pixelSampler;  // alpha mask texture with Gaussian falloff (Figure 5)

// Caustics shader
float4 psCaustics(VS_OUT1 input) : COLOR
{
    // look up alpha mask using the point sprite texture coordinates
    float4 tcol = tex2D(pixelSampler, input.tc);

    return float4(input.light_int, 1 - tcol.r);
}

 

 

In the vertex shader, the intersection point is looked up from the texture we created in the previous pass using the image-space intersection approximation. This point is where the point primitive will be rendered. However, since the point stored in the texture is in world space, we must first project it into the light's view space so that it can be stored in the caustics map. The vertex shader output also contains the light contribution from the current vertex. This is computed by dividing the total light intensity by the resolution of the refractive vertex grid. In the pixel shader, the interpolated light contribution is output, and it gets accumulated wherever multiple point primitives overlap the same pixels. Remember, you must have additive alpha blending enabled for this accumulation to take place!
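For reference, one way the render states for this splatting pass might be set up in DirectX 9 is sketched below; where exactly this goes in your render loop, and the point size, are up to you:

// Additive alpha blending so overlapping splats accumulate light
pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pd3dDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
pd3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

// Point sprites so the alpha mask is applied across each splat
pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
float pointSize = 4.0f;  // splat size in pixels; tune to taste
pd3dDevice->SetRenderState(D3DRS_POINTSIZE, *(DWORD*)&pointSize);

// Draw the refractive vertex grid as a point list into the caustics map
// (the caustics render target and the shaders above are assumed to be set)
pd3dDevice->SetFVF(D3DFVF_MYVERT);
pd3dDevice->SetStreamSource(0, pRefractVertBuffer, 0, sizeof(myVertex));
pd3dDevice->DrawPrimitive(D3DPT_POINTLIST, 0, PP_RES*PP_RES);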

 

Figure 6: An example caustics map (left) and the actual 3D scene rendered using the map (right).

 

 

At the end of this render pass, we will have our caustics map texture. Figure 6 shows an example of a caustics map texture. Now all that's left to do is perform the final rendering of our 3D scene (with caustics, of course!) and display it on the screen. For this step, you render the scene as you normally would. The only exception is that when computing lighting on the receiver surfaces, in addition to the point light you have an extra light source: the caustics map. Therefore, the total incident light at a point is given by the light source and the caustics falling on that point, if any. So, for example, a diffuse surface with caustics will be shaded as follows:

 

sampler causticsSampler;  // the caustics map created in the previous pass
float4  g_diff;           // diffuse material color of the receiver
// g_lightPos, g_mLightView, and g_mLightProj are declared as before

// Caustics getter
float3 get_caustics(float3 pos)
{
    float4x4 mLightViewProj = mul(g_mLightView, g_mLightProj);
    float4 texPt = mul(float4(pos,1), mLightViewProj);
    float2 tc = 0.5*(texPt.xy/texPt.w) + float2(0.5, 0.5);
    tc.y = 1.0f - tc.y;
    float4 caustic = tex2D(causticsSampler, tc);

    // mask out points facing away from the light
    float dec = saturate(dot(normalize(-g_lightPos), normalize(pos)));
    if(dec == 0) caustic = caustic*0;

    return pow(float3(caustic.rgb), 1.3);
}

// Diffuse shader with caustics
float4 psDiffuse(VS_OUT input) : COLOR
{
    // compute Lambertian diffuse term
    float3 diffuse = float3(1,1,1)*
                     saturate(dot(normalize(input.normal),
                              normalize(g_lightPos -
                                        (input.posi.xyz/input.posi.w))));

    // compute final color: direct light plus caustics contribution
    float3 col = (diffuse + diffuse*get_caustics(input.posi.xyz))*g_diff.xyz;

    return float4(col,1);
}

 

The get_caustics function takes the current 3D point in world space, projects it into the light's view space, and looks up the caustics map texture. It returns the amount of light falling on that point due to caustics. In the pixel shader, diffuse shading is performed using the light from the point light source as well as the caustics. The g_diff variable is the diffuse material color of the object being rendered.

 

You should now be looking at a pretty picture with nice caustics, assuming everything went well in all the steps that we covered. If things don't look quite the way you want, try adjusting some parameters here and there (such as the refractive index of the refractive object, or the light intensity in the caustics map creation step). In my demo application, I generally code in some sliders that I can use to tweak the various parameters until the desired results are achieved. Also, keep in mind that the code presented above was written to make things easy to explain and is therefore not the most optimized implementation. It is left to the user to come up with hacks and tricks to achieve killer frame rates.

 

4. Conclusion

In this article, we went through the steps of implementing a new technique for rendering caustics in real time called caustics mapping. The algorithm utilizes a caustics map, which is created using vertex splatting at the intersection points of the refracted/reflected light rays and the receiver geometry. The intersection points are computed using a fast, approximative technique that is based on the Newton-Raphson root-finding method and operates in image space. This enables the algorithm to achieve high frame rates suitable for games and other interactive graphics applications. Furthermore, the algorithm runs entirely on the GPU, so the CPU is free to perform other operations such as AI in games. Lastly, caustics mapping can easily be integrated into graphics and game engines, very much like shadow mapping, since it does not hinder or affect any other rendering process.

 

5. References

 

1. Chris Wyman. An approximate image-space approach for interactive refraction. In SIGGRAPH 2005. ACM Press, 2005.

2. László Szirmay-Kalos, Barnabás Aszódi, István Lazányi, and Mátyás Premecz. Approximate ray-tracing on the GPU with distance impostors. In Proceedings of Eurographics 2005, 2005.

3. Musawir A. Shah, Jaakko Konttinen, and Sumanta Pattanaik. Caustics mapping: An image-space technique for real-time caustics. To appear in IEEE Transactions on Visualization and Computer Graphics. http://graphics.cs.ucf.edu/caustics

4. M. Wand and W. Straßer. Real-time caustics. Computer Graphics Forum, 22(3), 2003.

5. Manfred Ernst, Tomas Akenine-Möller, and Henrik Wann Jensen. Interactive rendering of caustics using interpolated warped volumes. In Graphics Interface, May 2005.

 
