http://www.jordanstevenstechart.com/physically-based-rendering
Physically-Based Rendering (PBR) has become extremely popular over the past few years. Unity 5, Unreal Engine 4, Frostbite, even ThreeJS, and many other engines use it. Many 3D modeling studios are switching to the "PBR Pipeline," following the lead of popular tools such as Marmoset Toolbag and Allegorithmic's Substance suite. Today it is hard to find artists who are not familiar with the pipeline, but it can still be hard to find engineers and technical artists who are familiar with how the pipeline actually works behind the scenes. I wanted to create this tutorial to break down PBR shading and make it as easy to understand as possible for beginners to PBR. Let's get started.
Let's Talk About Physically Based Rendering
What Makes PBR Physical?
Over the past three or four decades, our understanding of the world around us and how it functions scientifically and mathematically has grown by leaps and bounds. Part of this understanding has led to tremendous breakthroughs in rendering technology. Standing on the shoulders of giants, researchers have been able to reach some serious conclusions about light, the view direction, the surface normal, and how all three interact with each other. Most of these breakthroughs revolve around the idea of the BRDF (Bidirectional Reflectance Distribution Function) and, inherent to it, Energy Conservation.
In order to understand how light and your viewpoint interact with surfaces, you first have to understand surfaces themselves. When light shines on a perfectly smooth surface, it reflects off that surface almost perfectly, like a mirror. When light hits what we call a rough surface, it is scattered instead of reflected cleanly. This can be explained by the existence of microfacets.
When we look at an object, we must assume its surface is not perfectly smooth but is composed of many very tiny facets, each of which is a potentially perfect specular reflector. These microfacets have normals that are distributed around the normal of the overall smooth surface. The degree to which microfacet normals differ from the smooth surface normal is determined by the roughness of the surface. The rougher the surface, the more the specular highlight is disrupted. Because of this, rougher surfaces have larger and dimmer-looking specular highlights, while smooth surfaces let the specular highlight tighten as the light is reflected more perfectly.
Now back to the BRDF. A Bidirectional Reflectance Distribution Function (BRDF) is a function that describes the reflectance of a surface. There are several different BRDF models/algorithms, many of which are not physically based. For a BRDF to be considered physically based, it must be energy conserving. Energy Conservation states that the total amount of light a surface reflects can never exceed the total amount of light it receives. The light reflected off the surface should never be more intense than it was before it interacted with all those microfacets we discussed earlier.
BRDF algorithms feature a more complicated shading model than other algorithms. This shading model is technically 3 pieces of a single whole: the Normal Distribution Function, the Geometric Shadowing Function, and the Fresnel Function. Together they make this algorithm, which we will break down later:
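In its most common microfacet form, the three terms combine like this, where D is the Normal Distribution Function, G is the Geometric Shadowing Function, F is the Fresnel Function, n is the surface normal, l is the light direction, v is the view direction, and h is the half vector between l and v:

    f_specular(l, v) = ( D(h) * G(l, v, h) * F(v, h) ) / ( 4 * (n · l) * (n · v) )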
To understand BRDF, it is very important to understand the 3 functions that make up a BRDF. Let's hit each of these in turn to make a shading model that will work for us.
Writing a PBR Shader: Nuts, Bolts, and Smooth Surfaces
What are the Properties of a PBR Shader?
Most PBR shading models are influenced by a few of the same properties in some form. The two most important properties in modern PBR approaches are Smoothness (or Roughness) and Metallic. Both of these values work best when they are in the 0 to 1 range. There are many different approaches to building PBR shaders, and some of them, such as Disney's PBR pipeline, extend the BRDF model with additional effects, each driven by its own property.
Let's move on to building out our properties. If you haven't checked out my page on Writing Shaders in Unity, now would be a great time to head over and read it.
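As a rough sketch, the Properties block might look something like this (the property names here are my own choices; whatever you pick, keep them consistent through the rest of the shader):

    Properties
    {
        _Color ("Main Color", Color) = (1,1,1,1)
        _SpecularColor ("Specular Color", Color) = (1,1,1,1)
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metalness", Range(0,1)) = 0
    }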
So here we have defined our public variables in our Unity Shader. We will add more later, but this is a good start. Below the properties is the initialization structure of our shader. We will refer to the #pragma directives later on as we continue to add more functionality.
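A minimal sketch of that initialization structure, assuming a single forward-rendered pass:

    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            Tags { "LightMode"="ForwardBase" }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            // ...variables, helper functions, algorithms, vertex and fragment programs go here...
            ENDCG
        }
    }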
Our Vertex Program is extremely similar to the one produced in the tutorial for Writing Shaders in Unity. The key elements we need are the normal, tangent, and bitangent information about the vertex, so we make sure to include those in our Vertex Program:
Vertex Program
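Here is a sketch of the vertex program and its structs in the style of that earlier tutorial (struct and field names are assumptions; the important part is passing world-space normal, tangent, and bitangent through to the fragment program):

    struct VertexInput
    {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
        float4 tangent : TANGENT;
        float2 texcoord0 : TEXCOORD0;
    };

    struct VertexOutput
    {
        float4 pos : SV_POSITION;
        float2 uv0 : TEXCOORD0;
        float4 posWorld : TEXCOORD1;
        float3 normalDir : TEXCOORD2;
        float3 tangentDir : TEXCOORD3;
        float3 bitangentDir : TEXCOORD4;
    };

    VertexOutput vert (VertexInput v)
    {
        VertexOutput o;
        o.uv0 = v.texcoord0;
        o.normalDir = UnityObjectToWorldNormal(v.normal);
        o.tangentDir = normalize(mul(unity_ObjectToWorld, float4(v.tangent.xyz, 0.0)).xyz);
        o.bitangentDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
        o.posWorld = mul(unity_ObjectToWorld, v.vertex);
        o.pos = UnityObjectToClipPos(v.vertex);
        return o;
    }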
Fragment Program
In our fragment program, we will want to define a set of variables that we can use in our algorithms later:
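Something along these lines (the exact set of dot products you need depends on which algorithms you pick later; variable names are assumptions):

    float4 frag (VertexOutput i) : COLOR
    {
        // Surface and lighting vectors
        float3 normalDirection = normalize(i.normalDir);
        float3 lightDirection = normalize(_WorldSpaceLightPos0.xyz);
        float3 viewDirection = normalize(_WorldSpaceCameraPos.xyz - i.posWorld.xyz);
        float3 halfDirection = normalize(viewDirection + lightDirection);

        // Dot products reused throughout the BRDF
        float NdotL = max(0.0, dot(normalDirection, lightDirection));
        float NdotH = max(0.0, dot(normalDirection, halfDirection));
        float NdotV = max(0.0, dot(normalDirection, viewDirection));
        float VdotH = max(0.0, dot(viewDirection, halfDirection));
        float LdotH = max(0.0, dot(lightDirection, halfDirection));
        float RdotV = max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection));

        float3 lightColor = _LightColor0.rgb;

        // ...the rest of the fragment program is built up over the following sections...
        return float4(1, 1, 1, 1);
    }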
Above are the variables we will build with the data that Unity gives us, per the description in the Unity Shader Tutorial. These variables will be used repeatedly throughout our shader as we move to build the BRDF.
Roughness
In my approach, I remap roughness. The reason I do this is more of a personal preference, as I find that remapping roughness as shown below produces much more physically plausible results.
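A remap along these lines (squaring the perceptual value) is one common choice; treat the exact curve as a matter of taste:

    // In the fragment program: derive roughness from the smoothness slider
    float roughness = 1 - (_Glossiness * _Glossiness);
    roughness = roughness * roughness;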
Some more housekeeping. There are a lot of concerns when using Metallic in a PBR shader. You will find that none of the algorithms below account for it directly, so we handle it in a different way entirely.
Metalness is a control value that determines whether a material is a dielectric (a non-metal, i.e. metalness = 0) or a metal (metalness = 1). Therefore, to have our Metallic value affect our shader in the right way, we are going to plug it into our diffuse color and have it drive our specular color. Since a metal shows no diffuse reflection, it will have a completely black diffuse albedo, while its specular color will take on the color of the surface itself. See below:
Metallic
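A minimal sketch of that idea, assuming the _Color, _SpecularColor, and _Metallic properties from earlier:

    // Metals kill the diffuse term and tint the specular term with the surface color
    float3 diffuseColor = _Color.rgb * (1 - _Metallic);
    float3 specColor = lerp(_SpecularColor.rgb, _Color.rgb, _Metallic);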
The Guts of Our Shader
Below is the basic shader format that we will be building on. Please note the comments as they will help to organize and inform as to where we will be inserting our code.
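A compact sketch of that skeleton, carrying over the assumed names from the snippets above:

    Shader "Custom/PBRTutorial"
    {
        Properties
        {
            // --- Properties Section ---
            _Color ("Main Color", Color) = (1,1,1,1)
            _SpecularColor ("Specular Color", Color) = (1,1,1,1)
            _Glossiness ("Smoothness", Range(0,1)) = 0.5
            _Metallic ("Metalness", Range(0,1)) = 0
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            Pass
            {
                Tags { "LightMode"="ForwardBase" }
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0
                #include "UnityCG.cginc"
                #include "Lighting.cginc"

                // --- Variables Section ---
                float4 _Color;
                float4 _SpecularColor;
                float _Glossiness;
                float _Metallic;

                // --- Helper Function Section ---

                // --- Algorithm Section ---

                struct VertexInput { float4 vertex : POSITION; float3 normal : NORMAL; float4 tangent : TANGENT; float2 texcoord0 : TEXCOORD0; };
                struct VertexOutput { float4 pos : SV_POSITION; float2 uv0 : TEXCOORD0; float4 posWorld : TEXCOORD1; float3 normalDir : TEXCOORD2; float3 tangentDir : TEXCOORD3; float3 bitangentDir : TEXCOORD4; };

                VertexOutput vert (VertexInput v)
                {
                    VertexOutput o;
                    o.uv0 = v.texcoord0;
                    o.normalDir = UnityObjectToWorldNormal(v.normal);
                    o.tangentDir = normalize(mul(unity_ObjectToWorld, float4(v.tangent.xyz, 0.0)).xyz);
                    o.bitangentDir = normalize(cross(o.normalDir, o.tangentDir) * v.tangent.w);
                    o.posWorld = mul(unity_ObjectToWorld, v.vertex);
                    o.pos = UnityObjectToClipPos(v.vertex);
                    return o;
                }

                float4 frag (VertexOutput i) : COLOR
                {
                    // --- Fragment Section ---
                    return float4(1, 1, 1, 1);
                }
                ENDCG
            }
        }
        FallBack "Legacy Shaders/Diffuse"
    }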
The above code should create a white object when attached to a material in Unity. We will extend this shader by placing properties in the Properties Section, Variables in the Variables Section, Helper Functions in the Helper Function Section, Algorithms in the Algorithm Section and implementing the shader code in the Fragment Section. Let's begin.
Piecing Together Your PBR Shader Pt.1: The Normal Distribution Function (Specular Function)
The Normal Distribution Function is one of the three key elements that make up our BRDF shader. The NDF statistically describes the distribution of microfacet normals on a surface. For our uses, the NDF serves as a weighted function to scale the brightness of reflections (specularity). It is important to think of the NDF as a fundamental geometric property of the surface. Let's start to add some algorithms to our shader so that we can visualize the effect that the NDF produces.
The first thing we want to do is build an algorithm. In order to visualize our algorithm, we will override our return float.
What is the Normal Distribution Function?
The format for the following sections will be as follows. After you write the algorithm in the Algorithm Section, you will implement it in the location described above. When you implement a new algorithm, simply comment out the previously active algorithm, and the resulting effect will be based only on the currently uncommented one. Don't worry, we will clean this up later and provide you with a way to easily switch between the algorithms from within Unity, no further comment juggling required.
Let's start with the simple Blinn-Phong approach:
Blinn-Phong NDF
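Here is a sketch of that function for the Algorithm Section (the normalization term and the extra brightness scale are one common choice, not the only one):

    float BlinnPhongNormalDistribution(float NdotH, float specularpower, float speculargloss)
    {
        float Distribution = pow(NdotH, speculargloss) * specularpower;
        Distribution *= (2 + specularpower) / (2 * 3.1415926535);
        return Distribution;
    }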
The Blinn approximation of Phong specularity was created as an optimization of the Phong specular model. Blinn decided that it was faster to take the dot product of the normal and the half vector than to calculate the light's reflection vector every frame. The two algorithms do produce noticeably different results, with Blinn being softer than Phong.
Blinn-Phong is not considered a physically correct algorithm, but it will still produce reliable specular highlights that can be used for specific artistic intentions. Place the above algorithm in the Algorithms Section, and the below code in the Fragment Section.
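For example, something like this (assuming the dot products and specColor defined earlier):

    // Fragment Section: visualize the NDF on its own
    float3 SpecularDistribution = specColor;
    SpecularDistribution *= BlinnPhongNormalDistribution(NdotH, _Glossiness, max(1, _Glossiness * 40));
    return float4(SpecularDistribution.rgb, 1);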
If you assign a smoothness value to your shader, you should see that the object has a white highlight depicting the Normal Distribution (specularity) while the rest of the object is black. This is how we will keep working, so that we can easily test our shader. The 40 in the above implementation is just there to give the function a high range; it most certainly isn't the optimal value for everyone.
Phong NDF
The Phong algorithm is another non-physical algorithm, but it produces a much tighter, sharper highlight than the Blinn approximation above. Below is an example implementation:
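A sketch along the same lines as Blinn-Phong, using the reflection/view dot product instead of the half vector:

    float PhongNormalDistribution(float RdotV, float specularpower, float speculargloss)
    {
        float Distribution = pow(RdotV, speculargloss) * specularpower;
        Distribution *= (2 + specularpower) / (2 * 3.1415926535);
        return Distribution;
    }

    // Fragment Section
    SpecularDistribution *= PhongNormalDistribution(RdotV, _Glossiness, max(1, _Glossiness * 40));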
As with the Blinn-Phong approach, don't be sold on the *40.
Beckmann NDF
The Beckmann Normal Distribution Function is a much more advanced function that takes our roughness value into account. By accounting for the roughness, as well as the dot product between our normal and half direction, we can accurately approximate the distribution of normals across the surface.
The implementation of this algorithm is fairly simple.
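A sketch of the standard Beckmann distribution in Cg:

    float BeckmannNormalDistribution(float roughness, float NdotH)
    {
        float roughnessSqr = roughness * roughness;
        float NdotHSqr = NdotH * NdotH;
        return max(0.000001, (1.0 / (3.1415926535 * roughnessSqr * NdotHSqr * NdotHSqr))
            * exp((NdotHSqr - 1) / (roughnessSqr * NdotHSqr)));
    }

    // Fragment Section
    SpecularDistribution *= BeckmannNormalDistribution(roughness, NdotH);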
An important thing to note is how the Beckmann model treats the surface of the object. As you can tell in the figure above, the Beckmann model is slow to change with the smoothness value until a certain point, at which it tightens the highlight dramatically. As the smoothness of the surface increases, the specular highlight pulls together, producing a very pleasing rough-to-smooth transition from an artistic perspective. This behavior is very favorable for rough metals at the lower end of the range, and also quite good for plastics as the smoothness value increases.
Gaussian NDF
The Gaussian Normal Distribution model is not as popular as some of the other models, as it tends to produce much softer specular highlights than can be desired at higher smoothness values. From an artistic perspective this can be desirable, but there are arguments as to its true physical nature.
The implementation of this algorithm is similar to that of other Normal Distribution functions, relying on the roughness of the surface and the dot product of the Normal and Half Vectors.
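One common way to write a Gaussian distribution over the half-vector angle:

    float GaussianNormalDistribution(float roughness, float NdotH)
    {
        float roughnessSqr = roughness * roughness;
        float thetaH = acos(NdotH);
        return exp(-thetaH * thetaH / roughnessSqr);
    }

    // Fragment Section
    SpecularDistribution *= GaussianNormalDistribution(roughness, NdotH);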
GGX NDF
GGX is one of the more popular algorithms in use today, if not the most popular. The majority of modern applications rely on it for several of their BRDF functions. GGX was introduced in a paper by Bruce Walter, Kenneth Torrance, and their co-authors, and many of the algorithms from that paper remain among the most popular in use.
Its implementation follows suit with the other methods.
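A sketch using the tangent-based form of the GGX distribution (the small sqr helper goes in the Helper Function Section):

    // Helper Function Section
    float sqr(float x) { return x * x; }

    // Algorithm Section
    float GGXNormalDistribution(float roughness, float NdotH)
    {
        float roughnessSqr = roughness * roughness;
        float NdotHSqr = NdotH * NdotH;
        float TanNdotHSqr = (1 - NdotHSqr) / max(0.0000001, NdotHSqr);
        return (1.0 / 3.1415926535) * sqr(roughness / (NdotHSqr * (roughnessSqr + TanNdotHSqr)));
    }

    // Fragment Section
    SpecularDistribution *= GGXNormalDistribution(roughness, NdotH);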
The specular highlight of the GGX Algorithm is very tight and hot, while still maintaining a smooth distribution across the surface of our ball. This is a prime example of why the GGX algorithm is preferred to replicate the distortion of specularity across metallic surfaces.
Trowbridge-Reitz NDF
The Trowbridge-Reitz approach was presented in the same paper as GGX and produces remarkably similar results. The main noticeable difference is that the extreme edge of the object features a smoother highlight than GGX, which falls off more harshly at grazing angles.
As usual, the Trowbridge-Reitz formula relies on roughness and the dot product of the normal and half vectors.
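A sketch of the Trowbridge-Reitz distribution in its common closed form:

    float TrowbridgeReitzNormalDistribution(float roughness, float NdotH)
    {
        float roughnessSqr = roughness * roughness;
        float Distribution = NdotH * NdotH * (roughnessSqr - 1.0) + 1.0;
        return roughnessSqr / (3.1415926535 * Distribution * Distribution);
    }

    // Fragment Section
    SpecularDistribution *= TrowbridgeReitzNormalDistribution(roughness, NdotH);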
Trowbridge-Reitz Anisotropic NDF
Anisotropic NDFs distribute the specular highlight directionally rather than evenly across the surface. This allows us to create surface effects that mimic brushed metals and other finely faceted/anisotropic surfaces. For this function we will need to add a variable to our Properties and Public Variables Sections.
Our Property:
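For example (the name and range here are my assumption):

    _Anisotropic ("Anisotropic", Range(0,1)) = 0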
Our Variable:
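And the matching declaration in the Variables Section:

    float _Anisotropic;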
One of the differences between Anisotropic and Isotropic approaches is the necessity of tangent and binormal data to process the direction of the normal distribution. This image was produced with an Anisotropic value of 1.
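With those in place, here is a sketch of an anisotropic Trowbridge-Reitz-style NDF, using a Disney-style aspect remap to split the roughness between the tangent and bitangent directions (the HdotT/HdotB names and the 0.9 factor are assumptions):

    // Additional fragment variables for anisotropic models
    float3 tangentDirection = normalize(i.tangentDir);
    float3 bitangentDirection = normalize(i.bitangentDir);
    float HdotT = dot(halfDirection, tangentDirection);
    float HdotB = dot(halfDirection, bitangentDirection);

    // Algorithm Section
    float TrowbridgeReitzAnisotropicNormalDistribution(float anisotropic, float roughness,
        float NdotH, float HdotT, float HdotB)
    {
        float aspect = sqrt(1.0 - anisotropic * 0.9);
        float alphaT = max(0.001, roughness / aspect);   // stretch along the tangent
        float alphaB = max(0.001, roughness * aspect);   // squash along the bitangent
        float denom = sqr(HdotT / alphaT) + sqr(HdotB / alphaB) + NdotH * NdotH;
        return 1.0 / (3.1415926535 * alphaT * alphaB * denom * denom);
    }

    // Fragment Section
    SpecularDistribution *= TrowbridgeReitzAnisotropicNormalDistribution(_Anisotropic, roughness,
        NdotH, HdotT, HdotB);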
Ward Anisotropic NDF
The Ward approach to an anisotropic BRDF produces drastically different results than the Trowbridge-Reitz method. The specular highlight is much softer, and dissipates much faster as the surface's smoothness increases.
As with the Trowbridge-Reitz method, the Ward Algorithm requires tangent and bitangent data, but also relies on the dot product of the normal and light, as well as the dot product of the normal and our viewpoint.
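A sketch following the Ward anisotropic model, reusing the same aspect remap as above:

    float WardAnisotropicNormalDistribution(float anisotropic, float roughness,
        float NdotL, float NdotV, float NdotH, float HdotT, float HdotB)
    {
        float aspect = sqrt(1.0 - anisotropic * 0.9);
        float alphaT = max(0.001, roughness / aspect);
        float alphaB = max(0.001, roughness * aspect);
        float exponent = -(sqr(HdotT / alphaT) + sqr(HdotB / alphaB)) / sqr(NdotH);
        float Distribution = 1.0 / (4.0 * 3.1415926535 * alphaT * alphaB * sqrt(max(0.0001, NdotL * NdotV)));
        Distribution *= exp(exponent);
        return Distribution;
    }

    // Fragment Section
    SpecularDistribution *= WardAnisotropicNormalDistribution(_Anisotropic, roughness,
        NdotL, NdotV, NdotH, HdotT, HdotB);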
Piecing Together Your PBR Shader Pt.2: The Geometric Shadowing Function
What are Geometric Shadowing Algorithms?
The Geometric Shadowing Function is used to describe the attenuation of light due to the self-shadowing behavior of microfacets. This approximation models the probability that, at a given point, microfacets are occluded by each other or that light bounces across multiple microfacets. In either case, light loses energy before reaching the viewpoint. In order to accurately generate a GSF, one must sample roughness to determine the microfacet distribution. There are several functions that do not include roughness, and while they still produce reliable results, they do not fit as many use cases as the functions that do sample roughness.
The Geometric Shadowing Function is essential for BRDF energy conservation. Without the GSF, the BRDF can reflect more light energy than it receives. A key part of the microfacet BRDF equation relates to the ratio between the active surface area (the area covered by surface regions that reflect light energy from L to V) and the total surface area of the microfaceted surface. If shadowing and masking are not accounted for, then the active area may exceed the total area, which can lead to the BRDF not conserving energy, in some cases by drastic amounts.
In order to preview our GSF functions, let's place this code above our Normal Distribution Functions. The format will work in a very similar way to how we implemented the NDF functions.
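As a sketch, mirroring what we did for the NDF (place it so it returns before the NDF visualization while you are testing):

    // Fragment Section: visualize the GSF on its own
    float GeometricShadow = 1;
    // ...multiply GeometricShadow by whichever GSF is currently active...
    return float4(float3(1, 1, 1) * GeometricShadow, 1);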
Implicit GSF
The Implicit GSF is the basis of logic behind Geometric Shadowing.
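In code it is about as simple as a GSF can get:

    float ImplicitGeometricShadowingFunction(float NdotL, float NdotV)
    {
        float Gs = NdotL * NdotV;
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= ImplicitGeometricShadowingFunction(NdotL, NdotV);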
By multiplying the dot product of the normal and light by the dot product of the normal and our viewpoint we get an accurate portrayal of how light affects the surface of an object based on our view.
Ashikhmin-Shirley GSF
Designed for use with Anisotropic Normal Distribution Functions, the Ashikhmin-Shirley GSF provides a good foundation for Anisotropic effects.
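One reading of the Ashikhmin-Shirley term as a standalone GSF (treat the exact normalization as an assumption):

    float AshikhminShirleyGeometricShadowingFunction(float NdotL, float NdotV, float LdotH)
    {
        float Gs = (NdotL * NdotV) / (LdotH * max(NdotL, NdotV));
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= AshikhminShirleyGeometricShadowingFunction(NdotL, NdotV, LdotH);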
The shadowing of microfacets produced by this model is very subtle, as can be seen on the right.
Ashikhmin-Premoze GSF
The Ashikhmin-Premoze GSF is designed for use with Isotropic NDF, unlike the Ashikhmin-Shirley approach. As with the Ashikhmin-Shirley, this is a very subtle GSF.
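A sketch of the Ashikhmin-Premoze term:

    float AshikhminPremozeGeometricShadowingFunction(float NdotL, float NdotV)
    {
        float Gs = (NdotL * NdotV) / (NdotL + NdotV - NdotL * NdotV);
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= AshikhminPremozeGeometricShadowingFunction(NdotL, NdotV);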
Duer GSF
Duer proposed the GSF below to fix an issue with specularity found in the Ward GSF, which we will cover later. The Duer GSF produces results similar to the Ashikhmin-Shirley GSF above, but is better suited to Isotropic, or only very slightly Anisotropic, BRDFs.
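One possible reading of Duer's correction, built from the un-normalized half vector (take this as a sketch rather than a definitive form):

    float DuerGeometricShadowingFunction(float3 lightDirection, float3 viewDirection, float3 normalDirection)
    {
        float3 LplusV = lightDirection + viewDirection;
        return dot(LplusV, LplusV) * pow(abs(dot(LplusV, normalDirection)), -4.0);
    }

    // Fragment Section
    GeometricShadow *= DuerGeometricShadowingFunction(lightDirection, viewDirection, normalDirection);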
Neumann GSF
The Neumann-Neumann GSF is another example of a GSF suited to Anisotropic Normal Distributions. It produces more pronounced geometric shadowing, driven by the greater of the view and light terms.
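A sketch of the Neumann term:

    float NeumannGeometricShadowingFunction(float NdotL, float NdotV)
    {
        float Gs = (NdotL * NdotV) / max(NdotL, NdotV);
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= NeumannGeometricShadowingFunction(NdotL, NdotV);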
Kelemen GSF
The Kelemen GSF presents an appropriately energy-conserving GSF. Unlike most of the previous models, the proportion of the geometric shadow is not constant but varies with the viewing angle. It is an extreme approximation of the Cook-Torrance Geometric Shadowing Function.
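A sketch of the Kelemen approximation:

    float KelemenGeometricShadowingFunction(float NdotL, float NdotV, float VdotH)
    {
        float Gs = (NdotL * NdotV) / (VdotH * VdotH);
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= KelemenGeometricShadowingFunction(NdotL, NdotV, VdotH);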
Modified-Kelemen GSF
This is a modified form of the Kelemen approximation of Cook-Torrance. It has been modified to produce the Kelemen GSF distributed by roughness.
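One possible sketch of such a roughness-driven modification (the constant and the exact remap are assumptions):

    float ModifiedKelemenGeometricShadowingFunction(float NdotV, float NdotL, float roughness)
    {
        float c = 0.797884560802865;            // sqrt(2 / Pi)
        float k = roughness * roughness * c;    // roughness-driven remap
        float gH = NdotV * k + (1 - k);
        return gH * gH * NdotL;
    }

    // Fragment Section
    GeometricShadow *= ModifiedKelemenGeometricShadowingFunction(NdotV, NdotL, roughness);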
Cook-Torrance GSF
The Cook-Torrance GSF was created to solve three cases of geometric attenuation. The first case states that the light is reflected without interference, the second that some of the reflected light is blocked after reflection, and the third that some of the light is blocked before reaching the next microfacet. To compute these cases we use the Cook-Torrance GSF below.
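A sketch of the classic Cook-Torrance term, taking the minimum of the three cases:

    float CookTorranceGeometricShadowingFunction(float NdotL, float NdotV, float VdotH, float NdotH)
    {
        float Gs = min(1.0, min((2.0 * NdotH * NdotV) / VdotH, (2.0 * NdotH * NdotL) / VdotH));
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= CookTorranceGeometricShadowingFunction(NdotL, NdotV, VdotH, NdotH);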
Ward GSF
The Ward GSF is a strengthened Implicit GSF. Ward uses this approach to strengthen the Normal Distribution Function. It works particularly well to highlight Anisotropic bands on surfaces from varying angles of view.
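One reading of that strengthening is simply the square root of the Implicit term (a sketch, not a definitive form):

    float WardGeometricShadowingFunction(float NdotL, float NdotV)
    {
        float Gs = pow(NdotL * NdotV, 0.5);
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= WardGeometricShadowingFunction(NdotL, NdotV);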
Kurt GSF
The Kurt GSF is another example of an Anisotropic GSF. It is designed to help control the distribution over anisotropic surfaces based on the surface roughness, and it seeks to conserve energy particularly at grazing angles.
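One possible sketch following the Kurt et al. model, with roughness acting as the exponent (treat the exact arrangement as an assumption):

    float KurtGeometricShadowingFunction(float NdotL, float NdotV, float VdotH, float roughness)
    {
        float Gs = (NdotL * NdotV) / (VdotH * pow(NdotL * NdotV, roughness));
        return Gs;
    }

    // Fragment Section
    GeometricShadow *= KurtGeometricShadowingFunction(NdotL, NdotV, VdotH, roughness);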
Smith Based Geometric Shadowing Functions
The Smith-based GSFs are widely considered to be more accurate than the other GSFs, and they take into account the roughness and shape of the normal distribution. These functions require two pieces, one for the light direction and one for the view direction, which are combined to compute the GSF.
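As one example of the pattern, here is a Schlick-GGX style pairing, where a single-direction term is evaluated once for the light and once for the view and the two are multiplied together (the k = roughness / 2 remap is one common convention, not the only one):

    float SchlickGGXGeometricShadowingFunction(float NdotL, float NdotV, float roughness)
    {
        float k = roughness / 2.0;
        float SmithL = NdotL / (NdotL * (1.0 - k) + k);   // piece one: the light direction
        float SmithV = NdotV / (NdotV * (1.0 - k) + k);   // piece two: the view direction
        return SmithL * SmithV;                           // Smith GSF = G1(L) * G1(V)
    }

    // Fragment Section
    GeometricShadow *= SchlickGGXGeometricShadowingFunction(NdotL, NdotV, roughness);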