LearnOpenGL: Parallax Mapping

https://learnopengl.com/Advanced-Lighting/Parallax-Mapping

Parallax Mapping
Parallax mapping is a technique similar to normal mapping, but based on different principles. Like normal mapping, it significantly boosts a textured surface's detail and gives it a sense of depth. While also an illusion, parallax mapping is a lot better at conveying a sense of depth and, together with normal mapping, gives incredibly realistic results. While parallax mapping is not necessarily a technique directly related to (advanced) lighting, I will still discuss it here as the technique is a logical follow-up of normal mapping. Note that an understanding of normal mapping, specifically tangent space, is strongly advised before learning parallax mapping.

Parallax mapping belongs to the family of displacement mapping techniques that displace or offset vertices based on geometrical information stored inside a texture. One way to do this is to take a plane with roughly 1000 vertices and displace each of these vertices based on a value in a texture that tells us the height of the plane at a specific area. Such a texture that contains height values per texel is called a height map. An example height map derived from the geometric properties of a simple brick surface looks a bit like this:
[Image: height map of a brick surface]
When spanned over a plane each vertex is displaced based on the sampled height value in the height map, transforming a flat plane to a rough bumpy surface based on a material’s geometric properties. For instance, taking a flat plane displaced with the above heightmap results in the following image:

[Image: flat plane displaced using the heightmap]
A problem with displacing vertices is that a plane needs to consist of a large number of triangles to get a realistic displacement, otherwise the displacement looks too blocky. As each flat surface could then require over 1000 vertices this quickly becomes computationally infeasible. What if we could somehow achieve similar realism without the need for extra vertices? In fact, what if I were to tell you that the above displaced surface is actually rendered with only 6 vertices (or 2 triangles)? This brick surface shown is rendered with parallax mapping, a displacement mapping technique that doesn't require extra vertex data to convey depth, but similar to normal mapping uses a clever technique to trick the user.
The idea behind parallax mapping is to alter the texture coordinates in such a way that it looks like a fragment’s surface is higher or lower than it actually is, all based on the view direction and a heightmap. To understand how it works, take a look at the following image of our brick surface:
[Image: side view of the brick surface with view direction V̄ hitting the plane at point A while the displaced surface would be seen at point B]
Here the rough red line represents the values in the heightmap as the geometric surface representation of the brick surface, and the vector V̄ represents the surface-to-view direction (viewDir). If the plane had actual displacement the viewer would see the surface at point B. However, as our plane has no actual displacement the view direction hits the flat plane at point A as we would expect. Parallax mapping aims to offset the texture coordinates at fragment position A in such a way that we get texture coordinates at point B. We then use the texture coordinates at point B for all subsequent texture samples, making it look like the viewer is actually looking at point B.

The trick is to figure out how to get the texture coordinates at point B from point A. Parallax mapping tries to solve this by scaling the fragment-to-view direction vector V̄ by the height at fragment A. So we are scaling the length of V̄ to be equal to a sampled value from the heightmap H(A) at fragment position A. The image below shows this scaled vector P̄:

[Image: the vector V̄ scaled by H(A), giving the vector P̄]
We then take this vector P̄ and take its vector coordinates that align with the plane as the texture coordinate offset. This works because vector P̄ is calculated using a height value from the heightmap, so the higher a fragment's height, the more it effectively gets displaced.

This little trick gives good results most of the time, but it is a really crude approximation to get to point B. When heights change rapidly over a surface the results tend to look unrealistic, as the vector P̄ will not end up close to B, as you can see below:
[Image: on steep height changes the vector P̄ ends up far from point B]

Another issue with parallax mapping is that it is difficult to figure out which coordinates to retrieve from P̄ when the surface is arbitrarily rotated in some way. What we would rather do is perform parallax mapping in a different coordinate space where the x and y components of vector P̄ always align with the texture's surface. If you have followed along with the normal mapping tutorial you can probably guess how we can accomplish this, and yes, we would like to do parallax mapping in tangent space.

By transforming the fragment-to-view direction vector V̄ to tangent space the transformed P̄ vector will have its x and y components aligned to the surface's tangent and bitangent vectors. As the tangent and bitangent vectors are pointing in the same direction as the surface's texture coordinates we can take the x and y components of P̄ as the texture coordinate offset, regardless of the surface's direction.

But enough about the theory, let’s get our feet wet and start implementing actual parallax mapping.

Parallax Mapping
For parallax mapping we are going to use a simple 2D plane for which we calculate its tangent and bitangent vectors before sending it to the GPU, similar to what we did in the normal mapping tutorial. Onto the plane we're going to attach a diffuse texture, a normal map and a displacement map that you can download yourself by clicking the respective links. For this example we're going to use parallax mapping in conjunction with normal mapping. Because parallax mapping gives the illusion that it displaces a surface, the illusion breaks when the lighting doesn't match. As normal maps are often generated from heightmaps, using a normal map together with the heightmap makes sure the lighting is in place with the displacement.

You might’ve already noted that the displacement map linked above is the inverse of the heightmap shown at the start of this tutorial. With parallax mapping it makes more sense to use the inverse of the heightmap (also known as a depthmap) as it’s easier to fake depth than height on flat surfaces. This slightly changes how we perceive parallax mapping as shown below:
[Image: parallax mapping with a depth map instead of a height map]

We again have points A and B, but this time we obtain vector P̄ by subtracting vector V̄ from the texture coordinates at point A. We can obtain depth values instead of height values by subtracting the sampled heightmap values from 1.0 in the shaders, or by simply inverting its texture values in image-editing software, as we did with the depthmap linked above.

Parallax mapping is implemented in the fragment shader as the displacement effect differs all over a triangle's surface. In the fragment shader we're then going to need to calculate the fragment-to-view direction vector V̄, so we need the view position and the fragment position in tangent space. In the normal mapping tutorial we already had a vertex shader that sends these vectors in tangent space, so we can take an exact copy of that tutorial's vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;
layout (location = 3) in vec3 aTangent;
layout (location = 4) in vec3 aBitangent;

out VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    vec3 TangentLightPos;
    vec3 TangentViewPos;
    vec3 TangentFragPos;
} vs_out;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

uniform vec3 lightPos;
uniform vec3 viewPos;

void main()
{
    gl_Position      = projection * view * model * vec4(aPos, 1.0);
    vs_out.FragPos   = vec3(model * vec4(aPos, 1.0));   
    vs_out.TexCoords = aTexCoords;    
    
    vec3 T   = normalize(mat3(model) * aTangent);
    vec3 B   = normalize(mat3(model) * aBitangent);
    vec3 N   = normalize(mat3(model) * aNormal);
    mat3 TBN = transpose(mat3(T, B, N));

    vs_out.TangentLightPos = TBN * lightPos;
    vs_out.TangentViewPos  = TBN * viewPos;
    vs_out.TangentFragPos  = TBN * vs_out.FragPos;
}   

What's important to note here is that for parallax mapping we specifically need to send the fragment's position and the viewer's position viewPos in tangent space to the fragment shader.

Within the fragment shader we then implement the parallax mapping logic. The fragment shader looks a bit like this:

#version 330 core
out vec4 FragColor;

in VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    vec3 TangentLightPos;
    vec3 TangentViewPos;
    vec3 TangentFragPos;
} fs_in;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;
  
uniform float height_scale;
  
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir);
  
void main()
{           
    // offset texture coordinates with Parallax Mapping
    vec3 viewDir   = normalize(fs_in.TangentViewPos - fs_in.TangentFragPos);
    vec2 texCoords = ParallaxMapping(fs_in.TexCoords,  viewDir);

    // then sample textures with new texture coords
    vec3 diffuse = texture(diffuseMap, texCoords).rgb;
    vec3 normal  = texture(normalMap, texCoords).rgb;
    normal = normalize(normal * 2.0 - 1.0);
    // proceed with lighting code
    [...]    
}

We defined a function called ParallaxMapping that takes as input the fragment's texture coordinates and the fragment-to-view direction V̄ in tangent space. The function returns the displaced texture coordinates. We then use these displaced texture coordinates as the texture coordinates for sampling the diffuse and normal map. As a result the fragment's diffuse color and normal vector correctly correspond to the surface's displaced geometry.

Let’s take a look inside the ParallaxMapping function:

vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{ 
    float height =  texture(depthMap, texCoords).r;    
    vec2 p = viewDir.xy / viewDir.z * (height * height_scale);
    return texCoords - p;    
} 