Real-Time Shadows

Sources: the book Real-Time Shadows and the LearnOpenGL tutorial:

Abstract:

An old saying tells us that there is no light without shadow. Although originally a metaphor, it is literally true: without light everything is dark, and definitely not very exciting, but as soon as there is a light source, there are also cast shadows.

On the one hand, shadows are important for the understanding of a scene. We better comprehend spatial relationships between objects and are more successful in localizing them in space. Further, we can deduce shape information not only of the shadow-casting elements but also of the receiver, by interpreting shadow deformations.

Shadows are also an artistic means. Many movies exploit shadows to indicate the presence of a person or object without revealing its actual appearance (just think of the hundreds of Dracula movies out there).

The figure shows an example where shadows are used in this manner: while we cannot directly see the camels, their shadows complete our understanding of the scene.

Definition:

What is a shadow? A first attempt: "shade within clear boundaries" or "an unilluminated area."

From the figure, one realizes rapidly that this definition is not accurate enough. More precise definitions read:

The dark figure cast upon a surface by a body intercepting the rays from a source of light.

An area that is not, or is only partially, irradiated or illuminated because of the interception of radiation by an opaque object between the area and the source of radiation.

Basic Shadow Techniques

Planar Projected Shadows

Planar projected shadows are based on perhaps the simplest idea for creating hard shadows on a planar ground. We have seen that a point is in shadow if one of its segments towards the light source intersects the geometry. Because the light is a point, all segments meet at the same location. In other words, there is a one-to-one mapping between the segments and the points on the ground. Hence, if we were to push the geometry down along those segments onto the ground, it would end up covering exactly those points on the ground that lie in shadow. This pushing down of the shadow-caster polygons is actually nothing else but a projection from the light source onto the planar receiver. As for standard rasterization, such projections can be described in the form of a matrix expression. Using such a custom projection matrix, one can then simply draw the triangles as dark objects on the planar surface. To project the caster geometry with this matrix, apply it to each caster vertex in the vertex shader before applying the camera matrix.
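As a concrete sketch, the classic construction of such a matrix is $M = (P \cdot L)\,I - L\,P^T$, where $P = (\mathbf{n}, d)$ describes the receiver plane $\mathbf{n} \cdot \mathbf{x} + d = 0$ and $L$ is the homogeneous light position. Here it is built with GLM (the helper name and the example plane are assumptions for illustration):

```cpp
#include <glm/glm.hpp>

// Build a matrix that projects geometry onto the plane dot(n, x) + d = 0,
// as seen from a homogeneous light position L (L.w = 1 for a point light,
// L.w = 0 for a directional light). Classic construction:
//   M = dot(plane, L) * I - outerProduct(L, plane)
glm::mat4 makePlanarShadowMatrix(const glm::vec4& plane, const glm::vec4& L)
{
    float dp = glm::dot(plane, L);
    glm::mat4 M(0.0f);
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            M[c][r] = (r == c ? dp : 0.0f) - L[r] * plane[c]; // GLM is column-major
    return M;
}

// Usage sketch: ground plane y = 0 and a point light at (2, 5, 1):
// glm::mat4 shadow = makePlanarShadowMatrix(glm::vec4(0, 1, 0, 0),
//                                           glm::vec4(2, 5, 1, 1));
```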

One problem of planar projected shadows appears when the light source is located between the ground plane and any point of the shadow caster (see figure). In such situations, the caster would not be able to cast a shadow on the ground, but the projection matrix still leads to a point on the receiver plane. It is easy to test for a single vertex whether such a situation occurs (its w value is negative). To ensure correct behavior for a triangle with at least one positive and one negative w value among its vertices, w can be interpolated and tested per fragment.

Because the projection delivers a point on the ground plane, another observation is important: the ground plane and the projected shadow will coincide at the same depth location in space. This leads to trouble because z-buffering usually only keeps the pixel that is nearest to the observer, and imprecision will basically lead to a random choice between a ground-plane pixel and a shadow pixel. The corresponding artifact is called z-fighting.

In the case of projected shadows, z-fighting can be solved easily by first drawing the ground plane, then disabling culling and the depth test when projecting the shadows (thereby enforcing that the shadow pixels are drawn), and, finally, rendering the rest of the scene with standard settings and activated depth buffering. The situation becomes more complex when shadows are not supposed to be completely black.
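A minimal sketch of this draw order (the draw helpers are hypothetical placeholders):

```cpp
// 1. Draw the ground plane normally.
drawGroundPlane();

// 2. Draw the projected shadow geometry on top, with culling and depth
//    testing disabled so the shadow pixels are guaranteed to be written
//    (this sidesteps z-fighting with the ground plane).
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
drawCastersWithShadowMatrix(); // caster vertices multiplied by the shadow matrix, drawn dark
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

// 3. Draw the rest of the scene with standard settings.
drawScene();
```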

Shadow Texture

Let's assume a very simple scenario where we have a scene that consists of a shadow receiver and a point light l, as well as a distinct set of occluders, or blockers, that are placed in between the two. Let us now place a camera at the light source, oriented towards the receiver. From there, we render a binary image of the scene that is initially cleared to white and in which we draw all shadow casters in black. This image will allow us to query whether a point lies in shadow.

A point p is in shadow if the open segment between p and l intersects the scene. To test whether this segment intersects the scene, we use a simple observation: just like the segments projected to points with the projection matrix seen previously for planar projected shadows, each segment projects to a single point for the camera at the light source that was used to produce the binary image.

Consequently, a single texture lookup allows us to test for shadows: if the pixel containing the segment was filled while drawing the occluders, the point has to lie in shadow; otherwise, it is lit. Such a lookup can be performed directly using projective texture mapping while rendering the receiver, resulting in the so-called shadow-texture method.
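In shader terms, the lookup could look like this minimal fragment-shader sketch (texture and variable names are assumptions for the example):

```glsl
#version 330 core
uniform sampler2D shadowTex;  // binary image from the light: white = empty, black = caster
in vec4 posLightClip;         // receiver position transformed by the light's camera matrix
out vec4 FragColor;

void main()
{
    // projective texture mapping: divide by w, then map [-1, 1] to [0, 1]
    vec2 uv = (posLightClip.xy / posLightClip.w) * 0.5 + 0.5;
    float lit = texture(shadowTex, uv).r;  // 0.0 where a caster was drawn
    FragColor = vec4(vec3(lit), 1.0);      // black in shadow, white where lit
}
```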

Here is an overview of the shadow-texture algorithm:

1. Render blockers into the shadow texture.

2. Render shadow receivers with the shadow texture enabled.

Camera

We will now outline the details of the camera used for projecting the shadow casters onto the shadow texture. A camera matrix can be defined as $M_c = M_p M_v$, where $M_p$ is a parallel or perspective projection matrix and $M_v$ is a matrix transforming a vertex from world space into camera space. This transform is a simple change of frame and consists of a translation, to place the origin at the camera position, and a rotation. The space in which the origin is located at the light center and the view direction is aligned with the normal of the shadow projection plane is called light view space, or just light space for short.

Note that the view direction is along the negative z-axis of the view space for a right-handed coordinate system.

We also need to construct a proper light projection matrix $M_{lp}$. This matrix projects the geometry, either using a parallel (orthographic) projection or a perspective projection, and thereby provides the transformation into light clip space. A parallel projection can be used for directional light (i.e., when light rays are parallel and come from a light source infinitely far away). This is a common approximation for sunlight. Such a matrix simply scales the x, y-coordinates with a uniform factor to end up in [−1, 1] and sets all z-values to a desired projection depth or depth interval.
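For instance, a parallel projection that scales x and y by a common factor s and flattens every depth to a constant value $z_d$ could be written as (a sketch; s and $z_d$ depend on the scene extent):

$$M_{lp} = \begin{pmatrix} s & 0 & 0 & 0 \\ 0 & s & 0 & 0 \\ 0 & 0 & 0 & z_d \\ 0 & 0 & 0 & 1 \end{pmatrix}$$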

For a point light, we instead need a perspective projection. Projecting along rays through the light onto a plane at distance d in front of it (the plane z = −d), the projection matrix simplifies to:

$$M_{lp} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1/d & 0 \end{pmatrix}$$

However, in practice, we also want to limit the coordinate range of our projected x, y, z-values to [−1, 1], since the graphics hardware clips geometry outside the unit cube. Thus, choosing d = 1 is a reasonable option, which projects to the plane z = −1. To limit the x, y-coordinate range, we add scaling of the x, y-components to the matrix. If a horizontal field of view of $fov_x$ degrees is desired, the scaling factor of x becomes $s_x = \cot(fov_x/2)$. Analogously, the scaling factor of y is $s_y = \cot(fov_y/2)$, where $fov_y$ is the vertical field of view. It is, however, common to express $fov_x$ in terms of the aspect ratio $\alpha = w/h$ instead, where w and h are the image width and height. Thus, $s_x = s_y/\alpha$. Furthermore, since we chose the projection distance d = 1, this means that $\cot(fov_x/2) = w$ and $\cot(fov_y/2) = h$, which simplifies $(s_x, s_y)$ to $s_x = w$ and $s_y = h$. This gives us the expression for the light projection matrix as:

$$M_{lp} = \begin{pmatrix} w & 0 & 0 & 0 \\ 0 & h & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

General camera.

The planar projection above collapses all z-values to z = −1. It is common to want to keep relative z-information in order to resolve visibility when using a z-buffer. This is the case for a standard camera when rendering from the eye, and also when rendering images from the light in the shadow-map-based techniques described below. We can also use it for shadow textures, since z-values do not matter for this technique. To keep relative z-information, it is necessary to set $M_{lp}(2, 3)$ (i.e., the element of the third row, fourth column) to a nonzero value. The rationale is that after applying the projection transform to a point v, we get a new point $v' = (v'_x, v'_y, v'_z, v'_w)$. The homogenization will divide all components by $v'_w$, which currently is set to $-v_z$ by the matrix. One can think of this division as a way to obtain the foreshortening effect in x and y with increased depth, to give the illusion of three-dimensional space when the image is visualized in two dimensions. To avoid the z-component being the same constant value $v'_z = -1$ for all pixels, we set $M_{lp}(2, 3) = c$, so that $v'_z = v_z + c$, which keeps relative z-information after the division by $v'_w = -v_z$.

Typically, the user wants to set light-space near and far planes of the view frustum by specifying their distances n and f from the origin. Note that these values are positive (0 < n < f), while the light view direction is the negative z-axis. All geometry will be clipped against the six frustum planes (near, far, left, right, top, bottom) by the hardware unit-cube clipping before being sent to rasterization. In order to distribute the z-values in the range z ∈ [n, f] over the unit-cube range (x, y, z) ∈ [−1, 1], we should set $M_{lp}(2, 2) = -(f+n)/(f-n)$ and $M_{lp}(2, 3) = -2fn/(f-n)$. This maps z = −n to −1 and z = −f to +1. The projection matrix becomes:

$$M_{lp} = \begin{pmatrix} w & 0 & 0 & 0 \\ 0 & h & 0 & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

To summarize, our desired camera matrix is now given by

$$M_{lc} = M_{lp} M_{lv},$$

the light projection matrix applied after the light view transform $M_{lv}$.

Shadow Mapping

The shadow-texture method described in the previous section is a simplified version of today's most famous solution to computing shadows in real-time applications, namely shadow mapping [Williams78]. This method no longer needs to separate occluders from receivers and is thereby also capable of managing self-shadowing, as we will see. The principle is to render an image of the scene from the position of the light source. Every point that appears in such an image is necessarily lit, while regions not visible are in shadow. To determine whether a certain three-dimensional position is in shadow then becomes a matter of checking whether it is visible in the image from the light source or not.

Although theoretically very simple, the fact that the scene is sampled, in terms of a discrete image resolution, leads to the requirement of a tolerance threshold when doing the position comparisons, which causes concerns. It is interesting to note that in the 1970s, before the domination of the z-buffer algorithm for hidden-surface removal, similar shadow techniques existed without any such drawback [Weiler77, Atherton78]. The caveat with those techniques is that the hidden-surface removal is done by geometric polygon clipping instead, which can be very slow. Here follows the shadow-mapping algorithm in detail.

Basic Algorithm:

Shadow mapping builds upon the observation that the light sees all lit surfaces of the scene. Every hidden (unseen) element lies in shadow. To determine the visible surfaces as seen from the light, shadow mapping starts by creating an image from the light's position. In this image, the so-called shadow depth map or simply shadow map, each pixel holds the depth (i.e., the distance from the light) of the first visible surface. Graphics hardware supports the creation of such depth maps at very little cost because the same mechanism is used to resolve visibility during standard rendering.

The second step of the algorithm performs a rendering of the scene from the actual viewpoint. For each rasterized fragment (which we will call a view sample), its position p is transformed into light clip space, yielding $p^{lc} = (p^{lc}_x, p^{lc}_y, p^{lc}_z)$. Note that $(p^{lc}_x, p^{lc}_y)$ is the position in the depth map to which the fragment would project when seen from the light, and $p^{lc}_z$ is the distance of the fragment to the light source. Hence, to determine whether the fragment is visible from the light, it is sufficient to compare its depth value $p^{lc}_z$ to the value stored in the shadow map at position $(p^{lc}_x, p^{lc}_y)$. If $p^{lc}_z$ is larger than the stored value, the fragment is necessarily hidden by some other surface nearer to the light source and consequently lies in shadow. Otherwise, it is lit.
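Written out, with $z_{sm}$ denoting the depth stored in the shadow map:

$$p \text{ is in shadow} \iff p^{lc}_z > z_{sm}(p^{lc}_x, p^{lc}_y).$$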

There are two details to note. First, in practice, the depth map is accessed not directly with the light-clip-space position $p^{lc}$, but with the corresponding shadow-map texture coordinates $p^s$, obtained by remapping the x, y-range from [−1, 1] to [0, 1]. Second, the depth $p^{lc}_z$ is measured along the light view direction, which is the negative z-axis of the light space in our definition (which assumes a right-handed coordinate system). More generally, we will refer to both this depth value and the negated light-view-space depth $-p^{lv}_z$ as light-space depth in the following, denoting it as $p^l_z$.

The technique is particularly interesting as it is usable with almost any arbitrary input, as long as depth values can be produced. Further, the fact that both steps involve standard rasterization gives it a huge potential for acceleration on graphics cards. In fact, OpenGL provides extensions to perform the algorithm without shader intervention (today, most people would just use shaders, which is more convenient). Currently, shadow mapping and its variants are the most popular techniques for creating shadows in games. Nevertheless, several problems are inherent to this method.

Depth Bias

Testing whether a point is farther away than the reference in the shadow map requires some tolerance threshold for the comparison. Otherwise, the discrete sampling due to the limited shadow-map resolution, and also numerical issues, can lead to incorrect self-shadowing, which is referred to as z-fighting or surface acne. This results in visible shadow sparkles on lit surfaces and can be explained as follows. If the shadow map had infinite resolution, then the shadow test would be a matter of checking whether the point is represented in the shadow map (i.e., visible from the light and therefore not in shadow). However, with a discrete shadow-map resolution, sample points from the eye are compared to an image consisting of pixels. Each pixel's value is defined solely by the world sample corresponding to the pixel's center. Hence, when querying a view sample, it will rarely project to exactly the location that was actually sampled in the shadow map. Consequently, one can only compare values of nearby points. This leads to problems when the view sample is farther from the source (has a higher depth from the light) than the corresponding value in the shadow map, because unwanted shadows then occur. For example, imagine a tilted receiver plane. Within each shadow-map pixel, the depth values of the world samples vary, and some lie below and others above the corresponding pixel-center depth. The ones below will be declared shadowed despite actually being visible. To address this issue, a threshold can be introduced in the form of a depth bias that offsets the light samples slightly farther from the light source.
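LearnOpenGL, for instance, applies such a bias directly in the shadow test, scaling it with the angle between the surface normal and the light direction:

```glsl
// inside ShadowCalculation(); normal and lightDir are unit vectors
float bias = max(0.05 * (1.0 - dot(normal, lightDir)), 0.005);
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
```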

Defining the depth bias is more problematic than it might seem at first. The greater the surface's slope as seen from the light source (i.e., the more edge-on it is), the more the depth values change between adjacent shadow-map samples, which means that a higher bias is needed to avoid the surface incorrectly shadowing itself. On the other hand, a too-high bias will lead to light leaking at contact shadows, making the shadows disconnected from the shadow caster. This is called Peter Panning or the Peter Pan problem, referring to the famous character by James Matthew Barrie who got detached from his shadow.

The standard approach supported by graphics hardware is to rely on biasing with two parameters instead of just one: a constant offset and an offset that depends on the slope of the triangle as seen from the light.
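In OpenGL, this maps to polygon offset, enabled while rendering into the shadow map; a minimal sketch (the factor/unit values and the helper name are assumptions, to be tuned per scene):

```cpp
// offset = factor * max depth slope of the triangle + units * smallest resolvable depth step
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);     // slope-scaled factor, constant units (example values)
renderSceneIntoShadowMap();      // hypothetical helper for the depth-map pass
glDisable(GL_POLYGON_OFFSET_FILL);
```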

------------------------------------------------------------

Code

Using OpenGL.

The depth map

The first pass requires us to generate a depth map. The depth map is the depth of the scene as rendered from the light's perspective, which we'll use for testing for shadows. Because we need to store the rendered result of the scene into a texture, we're going to need framebuffers again.

First we'll create a framebuffer object for rendering the depth map:
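A sketch following the tutorial:

```cpp
unsigned int depthMapFBO;
glGenFramebuffers(1, &depthMapFBO);
```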

Next we create a 2D texture that we'll use as the framebuffer's depth buffer:
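In the tutorial's style (the 1024×1024 resolution is its example value):

```cpp
const unsigned int SHADOW_WIDTH = 1024, SHADOW_HEIGHT = 1024;

unsigned int depthMap;
glGenTextures(1, &depthMap);
glBindTexture(GL_TEXTURE_2D, depthMap);
// depth-only texture: no color components
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT,
             SHADOW_WIDTH, SHADOW_HEIGHT, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
```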

Generating the depth map shouldn't look too complicated. Because we only care about depth values, we specify the texture's format as GL_DEPTH_COMPONENT. We also give the texture a width and height: this is the resolution of the depth map.

With the generated depth texture, we can attach it as the framebuffer's depth buffer:
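Roughly:

```cpp
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
glDrawBuffer(GL_NONE);  // depth-only framebuffer: no color is written...
glReadBuffer(GL_NONE);  // ...and none is read
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```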

We only need the depth information when rendering the scene from the light's perspective, so there is no need for a color buffer. A framebuffer object, however, is not complete without a color buffer, so we need to explicitly tell OpenGL we're not going to render any color data. We do this by setting both the read and draw buffer to GL_NONE with glDrawBuffer and glReadBuffer.

With a properly configured framebuffer that renders depth values to a texture we can start the first pass: generate the depth map. When combined with the second pass, the complete rendering stage will look a bit like this:
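A sketch following the tutorial (RenderScene, ConfigureShaderAndMatrices, and the SCR_WIDTH/SCR_HEIGHT window dimensions are placeholders):

```cpp
// 1. first pass: render the scene's depth to the shadow-map texture
glViewport(0, 0, SHADOW_WIDTH, SHADOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
    glClear(GL_DEPTH_BUFFER_BIT);
    ConfigureShaderAndMatrices();   // light's projection and view matrices
    RenderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// 2. second pass: render the scene as normal, sampling the depth map
glViewport(0, 0, SCR_WIDTH, SCR_HEIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
ConfigureShaderAndMatrices();       // camera's projection and view matrices
glBindTexture(GL_TEXTURE_2D, depthMap);
RenderScene();
```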

This code left out some details, but it'll give you the general idea of shadow mapping. What is important to note here are the calls to glViewport. Because shadow maps often have a different resolution compared to what we originally render the scene in (usually the window resolution), we need to change the viewport parameters to accommodate the size of the shadow map. If we forget to update the viewport parameters, the resulting depth map will be either incomplete or too small.

Light space transform

An unknown in the previous snippet of code is the ConfigureShaderAndMatrices function. In the second pass this is business as usual: make sure proper projection and view matrices are set, and set the relevant model matrices per object. However, in the first pass we need to use a different projection and view matrix to render the scene from the light's point of view.

Because we're modelling a directional light source, all its light rays are parallel. For this reason, we're going to use an orthographic projection matrix for the light source where there is no perspective deform:
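For example, with the tutorial's frustum values:

```cpp
float near_plane = 1.0f, far_plane = 7.5f;
glm::mat4 lightProjection = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f,
                                       near_plane, far_plane);
```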

This is an example orthographic projection matrix as used in this chapter's demo scene. Because a projection matrix indirectly determines the range of what is visible (e.g. what is not clipped) you want to make sure the size of the projection frustum correctly contains the objects you want to be in the depth map. When objects or fragments are not in the depth map they will not produce shadows.

To create a view matrix to transform each object so they're visible from the light's point of view, we're going to use the infamous glm::lookAt function; this time with the light source's position looking at the scene's center.
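With the tutorial's example light position:

```cpp
glm::mat4 lightView = glm::lookAt(glm::vec3(-2.0f, 4.0f, -1.0f),  // light position
                                  glm::vec3( 0.0f, 0.0f,  0.0f),  // target: the scene's center
                                  glm::vec3( 0.0f, 1.0f,  0.0f)); // up vector
```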

Combining these two gives us a light space transformation matrix that transforms each world-space vector into the space as visible from the light source; exactly what we need to render the depth map.
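In code:

```cpp
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
```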

This lightSpaceMatrix is the light-space transformation matrix we derived earlier. With it, we can render the scene as usual as long as we give each shader the light-space equivalents of the projection and view matrices.

Rendering shadows

With a properly generated depth map we can start rendering the actual shadows. The code to check if a fragment is in shadow is (quite obviously) executed in the fragment shader, but we do the light-space transformation in the vertex shader:
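A sketch of the vertex shader, following the tutorial:

```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;

out VS_OUT {
    vec3 FragPos;
    vec3 Normal;
    vec2 TexCoords;
    vec4 FragPosLightSpace;
} vs_out;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform mat4 lightSpaceMatrix;

void main()
{
    vs_out.FragPos = vec3(model * vec4(aPos, 1.0));
    vs_out.Normal = transpose(inverse(mat3(model))) * aNormal;
    vs_out.TexCoords = aTexCoords;
    // transform the world-space position into light space for the shadow test
    vs_out.FragPosLightSpace = lightSpaceMatrix * vec4(vs_out.FragPos, 1.0);
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```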

What is new here is the extra output vector FragPosLightSpace. We take the same lightSpaceMatrix (used to transform vertices to light space in the depth map stage) and transform the world-space vertex position to light space for use in the fragment shader.

The main fragment shader we'll use to render the scene uses the Blinn-Phong lighting model. Within the fragment shader we then calculate a shadow value that is either 1.0 when the fragment is in shadow or 0.0 when not in shadow. The resulting diffuse and specular components are then multiplied by the inverse of this shadow component, i.e., by (1.0 − shadow). Because shadows are rarely completely dark (due to light scattering), we leave the ambient component out of the shadow multiplication.
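A sketch of the shadow test in the fragment shader, following the tutorial (the depth bias is omitted here; see the depth-bias discussion above):

```glsl
uniform sampler2D shadowMap;  // depth map from the first pass

float ShadowCalculation(vec4 fragPosLightSpace)
{
    // perspective divide (a no-op for an orthographic projection, where w = 1)
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // map NDC [-1, 1] to the [0, 1] texture/depth range
    projCoords = projCoords * 0.5 + 0.5;
    // closest depth seen from the light at this location
    float closestDepth = texture(shadowMap, projCoords.xy).r;
    // this fragment's depth from the light's point of view
    float currentDepth = projCoords.z;
    // in shadow if the fragment lies behind the closest lit surface
    return currentDepth > closestDepth ? 1.0 : 0.0;
}

// in main(), e.g.:
// float shadow = ShadowCalculation(fs_in.FragPosLightSpace);
// vec3 lighting = (ambient + (1.0 - shadow) * (diffuse + specular)) * color;
```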

Run screenshot:

References:

https://learnopengl-cn.readthedocs.io/zh/latest/05%20Advanced%20Lighting/03%20Shadows/01%20Shadow%20Mapping/

Eisemann, E., Schwarz, M., Assarsson, U., Wimmer, M. Real-Time Shadows. A K Peters/CRC Press, 2011.
