Volumetric lighting implementations in games


Abstract
This paper researches different volumetric lighting implementations in modern games. The physical theory behind atmospheric scattering and five known algorithms for its implementation are introduced. The paper shows the development of a basic raymarching algorithm and a bilateral upsampling optimization using the Unity Engine, an approach documented by Vos (Vos, 2014) in a chapter about the volumetric fog in Killzone: Shadow Fall. The paper also includes experiments with different scattering functions and performance comparisons between different algorithms in Unity. The research also gives hints on how the setup could be developed further with the known resources and algorithms.

Keywords: atmospheric scattering, raymarching, HLSL, C#, Unity Engine, bilateral upsampling, Mie scattering, Rayleigh scattering, volumetric lighting, volumetric fog.

1. Introduction

We can observe volumetric lighting effects as a natural phenomenon (Figure 1.1(b)). They can be seen indoors as well as outdoors, caused by the scattering of light in environments such as moist or dusty rooms, or smoky plains. Because modern rendering tries to achieve natural-looking environments, we can add a lot to the atmosphere of a game by creating such effects (Figure 1.1(a)).
Figure 1.1 (a) Volumetric light effect in a game (b) Volumetric light as a natural phenomenon

These effects are needed to reproduce reality correctly, but they also help us perceive distance correctly and hide LOD transitions (level of detail: decreasing the complexity of a model the farther away it gets).

This leads us to our research goal: we will analyse different approaches for implementing volumetric lighting in games. After that we will select one method and implement it in the Unity Engine using the HLSL shading language and the C# programming language.

Many developers have introduced different methods within the last decades, constantly evolving them and using the capabilities of modern graphics processors. Research and solutions came from individual developers as well as big companies trying to find the optimal balance between realistic looks and minimal performance costs.

In the paper below we describe five different methods of implementing volumetric lighting in games. Section 2 contains the physical theory behind natural volumetric lighting (atmospheric scattering) and a short description with an evaluation of each implementation method. Section 3 shows which questions the research process tried to answer. The actual method implementation, optimization and experimentation are shown in Section 4. Sections 5 and 6 contain conclusions and discussion, respectively.

2. Theory

Volumetric light effects are essentially examples of the effects one can get by studying atmospheric scattering. Atmospheric scattering is a natural phenomenon; it describes how light particles are scattered in the atmosphere. By simulating it, realistic visual results such as the sky colour, fog, clouds, "god rays", light shafts or volumetric shadows are obtained.

This paper will study and try to achieve only dynamic light shafts with shadows. To gain an understanding of the subject, the theory behind physical atmospheric scattering will be studied. Furthermore, this section will analyse the algorithms used to simulate the discussed effects and how modern games have dealt with this question.

2.1. Atmospheric Scattering

In simple light rendering scenarios it is assumed that light moves in a vacuum: direct lighting, where no radiance is lost or gained on the light path. But if the light path is analysed from a physical point of view, it is plain that there are a few more factors which have a direct influence on the incoming light: in-scattering, out-scattering and absorption (Naty Hoffman, 2012).

Absorption is the easiest factor to specify: it tells how much light is absorbed by the particles in the atmosphere. Absorbed light is transformed into thermal energy and is not visible to the eye (Figure 2.1(a)).
Out-scattering is the factor which indicates how much light is bounced off the particles into directions other than your eye path (the direction from your eye to what you look at). Figure 2.1(b) illustrates the process.

In-scattering is the factor which specifies how much light is scattered into your eye path from other particles in the atmosphere; basically it tells how much light is out-scattered from other particles into your eye path (Figure 2.1(c)).
Figure 2.1 Arrows indicate the light path. (a) Absorption of light particles (b) Out-scattering of light particles (c) In-scattering of light particles (Naty Hoffman, 2012)

Depending on the participating media (the type of particles light is scattered through), the scattering has different physical models which describe the process. The two main scattering models are Rayleigh scattering and Mie scattering.

The Rayleigh model is mostly used to measure scattering through very small particles like the air of the atmosphere. It is largely isotropic (same value when measured in different directions) (Figure 2.2) and has almost no absorption. Thanks to Rayleigh scattering we are able to see the blue colour of the sky.

The Mie model, on the other hand, is used to measure the scattering of bigger participating media like dust or fog. Compared to the Rayleigh model it has a much higher forward-scattering factor (Figure 2.2) as well as a higher absorption factor.
Figure 2.2 Scattering types, where arrows visualize the scattering direction after a particle hit.

Physically, light can out-scatter and in-scatter multiple times before entering your eye path, but this is a very complicated process to compute in real time, so this research uses a single-scattering model in its calculations. A common way to approximate the light in-scattering model is to use a phase function.

The phase function which will be used to approximate Mie scattering is the Henyey-Greenstein phase function (Greenstein, 1941). This function is fairly simple and does not put much load on the graphical computation. It is described by the following formula:

P(θ) = (1 − G²) / (4π · (1 + G² − 2G·cos θ)^(3/2))
G – the value describing how much light is scattered forward (ranges from −0.99 to 0.99), where −0.99 gives a reverted lighting effect, 0 gives isotropic results (Rayleigh-like scattering) and 0.99 makes light stay in one spherical spot (no scattering forward). P – the returned light per sample (ranges between 0 and 1). θ – the angle between the light direction and the view direction.
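As a concrete reference, the formula above can be sketched in Python (the shader version in HLSL is analogous; the function and parameter names are ours):

```python
import math

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function.

    cos_theta: cosine of the angle between light and view direction.
    g: forward-scattering factor in (-1, 1); 0 is isotropic.
    Returns the fraction of light scattered toward the viewer,
    normalized so it integrates to 1 over the sphere.
    """
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)
```

With g = 0 the function collapses to the isotropic constant 1/(4π) for every angle; pushing g toward 0.99 concentrates the returned energy around cos θ = 1, i.e. strongly forward scattering.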

Furthermore, to calculate the out-scattering we will use the Beer-Lambert law. It is an exponential function of the distance travelled by light through a predetermined participating medium. This function uses the extinction coefficient, in other words the optical thickness (how thick the participating medium is). Figure 2.3 visualises the transmittance over distance for different extinction coefficients.

Figure 2.3 Graph of transmittance over distance using the Beer-Lambert law and different optical thicknesses (0.5, 1.0 and 2.0) of the medium the light travels through. (Vertical axis – transmittance, horizontal axis – distance)
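A minimal Python sketch of the Beer-Lambert transmittance used for the out-scattering (the function name is ours):

```python
import math

def transmittance(distance, extinction):
    """Beer-Lambert law: fraction of light that survives after
    travelling `distance` through a medium whose optical thickness
    is given by the `extinction` coefficient."""
    return math.exp(-extinction * distance)
```

Note that doubling the extinction coefficient squares the transmittance, which matches the steeper falloff curves for thicker media in Figure 2.3.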

2.2. Volumetric Lighting Methods in Games

In this section different algorithms and the theory behind volumetric lighting implementations in games are described, summarizing different research papers and discussions. Because limited time is available for implementation, a good understanding of each technique is needed.

2.2.1. Combination of Sky Colour Scattering and Linear Fog.

The method implemented in CryEngine2 by Wenzel (Wenzel, 2006) is described in his paper about real-time atmospheric effects in games. This approach is heavily simplified compared to the methods used ten years later because of the lack of compute power at the time. The method is a combination of real-time sky light and global volumetric fog. For the sky light a mixed CPU/GPU approach is chosen, where a loop executes for each sampled point on the sky hemisphere to calculate Mie and Rayleigh scattering. Calculations for the 128x64 sample points are done on the CPU and passed to the GPU as a floating-point texture which colours the sky. A full update of that texture takes 15-20 seconds and is distributed over several frames. To calculate the height/distance-based fog, an exponential function similar to the Beer-Lambert law is used, which results in a linear fog with an exponential falloff. Finally the fog colour is matched to the colour of the sky, which results in an appropriate sun halo on the horizon. Figure 2.4 illustrates sunset shots with different settings for Mie/Rayleigh/global density. Unfortunately, this approach provides neither shadowing nor varying medium density. In addition, it has problems with transparent objects.

Figure 2.4 Sunset shots with different settings for Mie/Rayleigh/global density

2.2.2. Post-Process Effects.

The next method is simpler. By using post-process effects, light rays (sun shafts) and a simulation of volumetric lighting can be achieved with a bit of cheating. Radial blur and bloom effects with some additional calculation result in a "god ray" effect, which arises when a very bright light source is partly obscured. Unity has such an effect in the default Effects package under the name "SunShafts" (Figure 2.5(b)). The method was also used in UE3 and CryEngine. This approach, however, has a lot of disadvantages: it is not physically based, it does not produce shadows and it disappears if the light source is not visible on the screen. Nevertheless, in "GPU Gems 3" (Mitchell, 2007) a different post-process approach is shown. This method applies physical scattering and shadowing. Using 2D projections in screen space, the method gathers light samples along the projections with physical scattering applied. Projections start at the light's centre position and go through all screen pixels. Furthermore, the method checks if a 2D ray is occluded by geometry and as a result produces shadow effects (Figure 2.4(b)).

2.2.3. Artist Assets.

Another simple way to add volumetric light effects to your game is through assets created by artists: a simple camera-facing billboard, or particles with a fadeout at the camera intersection. Obvious disadvantages of such an approach are the dependency on artists and not being dynamic enough (no change with other lighting or shadows, no following of the sun's position). Figure 2.6(a) shows an example of such an effect.

2.2.4. Polygonal Volumetric Lighting.

This method was introduced by Hoobler in a 2016 GDC presentation (Hoobler, 2016) and was developed for integration into Fallout 4; the libraries and source code are published on GitHub. The method offers a fast, flexible and physically based integration of volumetric lighting. It can be easily implemented into existing engines and has low compute costs. Its implementation is easily understood if you imagine a geometric volume which fills the lit volume of the scene as a mesh (Figure 2.6(b) visualizes the geometric volume with one light source and an eye position).

To calculate the volume in real-time rendering, this approach uses depth information encoded in the light's shadow map combined with tessellation (vertex processing where patches of vertex data are subdivided into smaller primitives). The calculation uses sums and differences of intervals taken from lit and shadowed parts of the scene. Put another way: the method subtracts light on the outside faces of the volumetric geometry and adds light on its inside faces. The algorithm provides solutions for directional, omnidirectional and spot lights. The discussed issues of this technique are: getting intense effects without washing out the scene, shadow-map inconsistencies becoming much more noticeable, flickering caused by anti-aliasing, and poor performance in specific views (e.g. a sunset through woods – complex volumetric geometry). Figure 2.7 visualizes the effects of volumetric lighting in Fallout 4.


2.2.5. Raymarching Solutions.

Last but not least, the most popular approach is based on raymarching. There are a lot of different algorithms which use a raymarching approach, but this section will describe two advanced algorithms introduced in the "GPU Pro" book series. The first one was used in Killzone: Shadow Fall and was described by Vos (Vos, 2014). The second approach was developed by Wronski and implemented in Assassin's Creed 4: Black Flag (Wronski, 2016).

Starting with the simpler method: how Killzone: Shadow Fall tackled the problem. To explain the core of the algorithm, raymarching and its implementation should be introduced. The discussed method implements the algorithm using deferred rendering. To render the volumetric light effect, a shape representing the light (a sphere for a point light, a screen quad for a directional light) is rendered. Each shape is rendered using an additional shader which does the calculations. The volumetric lighting calculations are done separately from the scene rendering, using only shadow maps and depth textures. To calculate the scattering for each pixel of the rendered volumetric shape, a ray is projected from the camera position to the world position. After that the raymarching loop is entered, where samples are collected along the ray in a predefined number of steps (Figure 2.8(a)). After adding all samples, the final colour for the volumetric lighting is delivered. That colour is later added to the normally rendered scene colour. Of course, rendering at full resolution is very heavy on performance, so the method renders at a lower resolution (half of the actual), which increases performance a lot. Bilateral upsampling (see Section 4.2) is used to scale the results to full resolution when combining them with the scene. Furthermore, the method uses interleaved sampling (see Section 6) to reduce the number of required ray steps. Combined with a custom blur effect, the difference from the original full-resolution projection is minimal, but the rendering resolution is halved and the sample iterations are reduced from 256 to 32.

Figure 2.8 (a) Samples collected along the rays (b), (c) Final results of the volumetric lighting
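The per-pixel loop described above can be sketched roughly as follows in Python; `in_shadow` stands in for the shadow-map lookup, the light is simplified to a fixed direction, and all names are our own illustration, not Vos's actual code:

```python
import math

def hg_phase(cos_theta, g):
    # Henyey-Greenstein phase function (see Section 2.1).
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom ** 1.5)

def raymarch(cam_pos, world_pos, light_dir, g, extinction, steps, in_shadow):
    """March from the camera to the reconstructed world position,
    accumulating in-scattered light at each sample point."""
    ray = [w - c for c, w in zip(cam_pos, world_pos)]
    length = math.sqrt(sum(x * x for x in ray))
    direction = [x / length for x in ray]
    step_len = length / steps
    cos_theta = sum(d * l for d, l in zip(direction, light_dir))
    scattered = 0.0
    for i in range(steps):
        t = (i + 0.5) * step_len                       # midpoint of the step
        pos = [c + d * t for c, d in zip(cam_pos, direction)]
        if not in_shadow(pos):                         # shadow-map test stand-in
            # phase-weighted light, attenuated by Beer-Lambert along the ray
            scattered += (hg_phase(cos_theta, g)
                          * math.exp(-extinction * t) * step_len)
    return scattered
```

A production shader would additionally attenuate the light along its own path to the sample, apply the light colour, and use the interleaved-sampling offsets mentioned above; this sketch only shows the accumulation structure of the loop.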

To control when, where and how much light should be scattered, Killzone uses a 3D texture which holds the scattering amount over depth for each pixel on the screen. The 3D texture is eight times smaller than the original render resolution and has 16 depth slices. The distance between slices is smaller near the camera and grows the farther it gets from the camera (up to 128 meters). Raymarching algorithms have an issue with rendering transparent objects because the depth map does not represent them correctly. To solve this issue, the algorithm uses this additional 3D texture to calculate the light intensity. Figures 2.8(b) and (c) show the final results of this method of volumetric lighting.
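The paper does not give the exact slice distribution, but an exponential spacing is a common way to pack slices near the camera and spread them out toward the far distance; a hypothetical sketch under that assumption:

```python
def slice_depth(i, num_slices=16, near=0.5, far=128.0):
    """Depth of slice i with exponential spacing: slices sit closer
    together near the camera and farther apart toward `far`.
    `near` is an assumed value; the source only states 16 slices
    reaching up to 128 meters."""
    return near * (far / near) ** (i / (num_slices - 1))
```

Because the ratio between consecutive depths is constant, the gap between neighbouring slices grows monotonically with distance, which gives more depth precision where scattering detail is most visible.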

The next method, explained by Wronski and implemented in Assassin's Creed 4: Black Flag, tries to solve the precision and performance problems of standard raymarching solutions. The core ideas behind it are the use of compute shaders and 3D textures to store the volumetric scattering data. Wronski explains that standard solutions operate in loops, which is counter-productive on modern GPUs, which can launch thousands of extra thread waves when driven properly. By using compute shaders and UAVs (Unordered Access Views), which provide the ability to read/write memory arbitrarily at any point in the rendering pipeline instead of being forced to follow its strict order (a pixel shader can normally write only to one location per render target, its own pixel; with a UAV it can write to arbitrary locations in whatever way the UAV is bound), this method is able to run different raymarching steps in parallel. To store the volumetric data, the method uses the earlier-mentioned 3D textures of size 190x90x128 or 190x190x64, which are aligned to the camera frustum.

Figure 2.10 (a) Screenshot of the volumetric fog in Assassin's Creed 4: Black Flag (b) Volumetric fog in Unreal Engine 4

The algorithm is done in four steps: estimating the density of the participating media, calculating the in-scattering, raymarching, and applying the effect to shaded objects.

First, for the density estimation the method uses an exponential height fog distribution (assuming that fog is thicker near ground level) and, purely for art direction, adds 3D Perlin noise to the calculations, resulting in a 3D texture.
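A minimal Python sketch of such a height-based density estimate (the constants and the `noise` stand-in are our assumptions, not Wronski's actual values):

```python
import math

def fog_density(height, base_density=0.02, height_falloff=0.1, noise=0.0):
    """Exponential height fog: densest at ground level (height 0),
    thinning out exponentially with height. `noise` stands in for a
    3D Perlin noise sample added purely for art direction."""
    return base_density * math.exp(-height_falloff * height) * (1.0 + noise)
```

In the real algorithm this value is evaluated per texel of the frustum-aligned 3D texture, so the later passes can read a spatially varying medium instead of a constant one.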

Second, the in-scattering is calculated by launching a thread per texel of the 3D texture; the results, combined with regular ambient lighting, are stored in a second 3D texture.

Third, having the results in 3D textures, a 2D raymarching pass is started to combine the results and add some final calculations per texel. Contrary to the second step, the slices are calculated serially, launching a group of threads per texel of the slice size.
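The serial slice pass essentially performs a front-to-back integration: each slice's in-scattered light is attenuated by the transmittance accumulated in front of it. A Python sketch under that assumption (the names and the uniform step are ours):

```python
import math

def integrate_slices(slice_scatter, slice_extinction, step):
    """Front-to-back integration over depth slices: accumulate each
    slice's in-scattered light weighted by the transmittance of the
    medium in front of it, then update the transmittance with the
    slice's own extinction (Beer-Lambert)."""
    transmittance = 1.0
    accumulated = 0.0
    for scatter, extinction in zip(slice_scatter, slice_extinction):
        accumulated += scatter * transmittance * step
        transmittance *= math.exp(-extinction * step)
    return accumulated, transmittance
```

Marching front to back like this is what forces the slices to be processed serially, while every texel within a slice can still be handled by an independent thread.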

Fourth, the results are applied to the pixel colour. After that, some additional under-sampling and aliasing problems are solved. Figure 2.10(a) illustrates a screenshot of the volumetric fog from Assassin's Creed 4: Black Flag.

A similar approach can be found in Unreal Engine 4, where a volumetric fog can be added from the default assets of the engine (Figure 2.10(b)).

2.3. Evaluation

In this section the physical theory behind atmospheric scattering was discussed and the most used formulas were introduced. Following that, five known methods of volumetric lighting in modern games were described. Based on performance and visual looks, the raymarching and polygonal solutions were far better algorithms than the other methods. The research showed that the Wronski raymarching and Hoobler polygonal solutions were too complicated to implement in the given time frame of a few weeks. That left the Vos raymarching solution, which, when separated into smaller steps, can be implemented easily and extended by adding optimizations and visual extensions to the basic algorithm.

3. Research Approach and Questions

The problem statement discussed in this paper has a broad spectrum. To gain an understanding and the ability to practically implement the topic, we decided to keep it simple. The main research question asked is: how to implement volumetric lighting in the Unity Engine? This question has clear variables which need to be tackled; the Unity Engine is one of them. From here comes the first sub-question: how to set up Unity for the implementation of the discussed algorithm? After this, the basic result should be a working setup. Because the research operates within the rendering pipeline, it needs good optimization. The second sub-question is exactly about that: how to optimize the implementation of a basic raymarching algorithm? To be able to freely operate with the setup after that, the research needs a deeper understanding of what can be done with the algorithm. The third question which helps to answer the main one is: what possibilities do different scattering functions provide? Finally, to get the scope and compute costs of different volumetric lighting algorithms, the paper compares the performance of those implemented in the Unity project. Following that, the final sub-question is: how do different volumetric lighting algorithms perform in the Unity Engine?

4. Results

As a result of the research, the method described by Vos in his article about volumetric lighting in Killzone: Shadow Fall (Vos, 2014) was chosen for implementation. A big help with the structure and some pieces of code was provided by Skalsky in his Volumetric Lighting project for Unity 5 (Skalsky, 2015). To fit into the time frame, the implementation is provided only for point lights and uses deferred rendering. This section starts with an explanation of how to set up Unity for the chosen approach. After that, the paper shows the implemented optimization strategies using lower-resolution rendering, followed by a section with different implementations of atmospheric scattering formulas.
