LearnOpenGL Summary Notes <15>: Parallax Mapping

Idea:

(Visually we look at point A on the plane, but what we really want is the heightmap value at point B. One approach: first transform cameraPos and point A into tangent space, sample the heightmap at A's own texture coordinates to get H(A), then use H(A) to scale V̄ and obtain P̄; offsetting A's texture coordinates by P̄.x and P̄.y gives an approximation of B's texture coordinates in the heightmap.)

The idea behind parallax mapping is to alter the texture coordinates in such a way that it looks like a fragment’s surface is higher or lower than it actually is, all based on the view direction and a heightmap. To understand how it works, take a look at the following image of our brick surface:

Here the rough red line represents the values in the heightmap as the geometric surface representation of the brick surface and the vector V̄ represents the surface to view direction (viewDir). If the plane would have actual displacement the viewer would see the surface at point B. However, as our plane has no actual displacement the view direction hits the flat plane at point A as we’d expect. Parallax mapping aims to offset the texture coordinates at fragment position A in such a way that we get texture coordinates at point B. We then use the texture coordinates at point B for all subsequent texture samples, making it look like the viewer is actually looking at point B.

The trick is to figure out how to get the texture coordinates at point B from point A. Parallax mapping tries to solve this by scaling the fragment-to-view direction vector V̄ by the height at fragment A. So we’re scaling the length of V̄ to be equal to a sampled value from the heightmap H(A) at fragment position A. The image below shows this scaled vector P̄:

We then take this vector P̄ and take its vector coordinates that align with the plane as the texture coordinate offset. This works because vector P̄ is calculated using a height value from the heightmap so the higher a fragment’s height, the more it effectively gets displaced.
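A minimal sketch of this offset (the function name ApproximateB is mine, not the tutorial's; like the fragment shader further below it assumes the values are stored in a depth map, i.e. an inverted heightmap, and that viewDir is the normalized tangent-space surface-to-view vector):

// Sketch only, not the tutorial's exact code. Names are illustrative.
// viewDir     : normalized surface-to-view vector in tangent space (assumption)
// depthMap    : stores inverted height, as in the fragment shader below (assumption)
// heightScale : user-tuned scale factor for the stored values (assumption)
vec2 ApproximateB(vec2 texCoordsA, vec3 viewDir, sampler2D depthMap, float heightScale)
{
    float height = texture(depthMap, texCoordsA).r;      // H(A)
    vec2  P      = viewDir.xy * (height * heightScale);  // P̄ projected onto the surface plane
    return texCoordsA - P;                                // offset toward B (minus because depth, not height)
}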

This little trick gives good results most of the time, but is however a really crude approximation to get to point B. When heights change rapidly over a surface the results tend to look unrealistic as the vector P̄ will not end up close to B as you can see below:

Another issue with parallax mapping is that it’s difficult to figure out which coordinates to retrieve from P̄ when the surface is arbitrarily rotated in some way. What we’d rather do is parallax mapping in a different coordinate space where the x and y component of vector P̄ always aligns with the texture’s surface. If you’ve followed along in the normal mapping tutorial you probably guessed how we can accomplish this and yes, we would like to do parallax mapping in tangent space.

By transforming the fragment-to-view direction vector V̄ to tangent space the transformed P̄ vector will have its x and y component aligned to the surface’s tangent and bitangent vectors. As the tangent and bitangent vectors are pointing in the same direction as the surface’s texture coordinates we can take the x and y components of P̄ as the texture coordinate offset, regardless of the surface’s direction.

 

Code

vs:

(First transform the fragment position, lightPos, and viewPos (cameraPos) from world space into tangent space.)

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoords;
layout (location = 3) in vec3 aTangent;
layout (location = 4) in vec3 aBitangent;

out VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    vec3 TangentLightPos;
    vec3 TangentViewPos;
    vec3 TangentFragPos;
} vs_out;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

uniform vec3 lightPos;
uniform vec3 viewPos;

void main()
{
    vs_out.FragPos = vec3(model * vec4(aPos, 1.0));   
    vs_out.TexCoords = aTexCoords;   
    
    vec3 T = normalize(mat3(model) * aTangent);
    vec3 B = normalize(mat3(model) * aBitangent);
    vec3 N = normalize(mat3(model) * aNormal);
    mat3 TBN = transpose(mat3(T, B, N)); // transpose of the orthogonal TBN equals its inverse: world space -> tangent space

    vs_out.TangentLightPos = TBN * lightPos;
    vs_out.TangentViewPos  = TBN * viewPos;
    vs_out.TangentFragPos  = TBN * vs_out.FragPos;
    
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}

 

fs:

(The key part is the ParallaxMapping function. Looking at its parameters, texCoords is point A's texture coordinates: we first sample A's height value from the depthMap, then compute viewDir.xy * (height * heightScale), which is the xy projection of the P̄ vector in tangent space, and finally offset A's texture coordinates by it.)

#version 330 core
out vec4 FragColor;

in VS_OUT {
    vec3 FragPos;
    vec2 TexCoords;
    vec3 TangentLightPos;
    vec3 TangentViewPos;
    vec3 TangentFragPos;
} fs_in;

uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;

uniform float heightScale;

vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{ 
    float height =  texture(depthMap, texCoords).r;     
    return texCoords - viewDir.xy * (height * heightScale);        
}

void main()
{           
    // offset texture coordinates with Parallax Mapping
    vec3 viewDir = normalize(fs_in.TangentViewPos - fs_in.TangentFragPos);
    vec2 texCoords = fs_in.TexCoords;
    
    texCoords = ParallaxMapping(fs_in.TexCoords,  viewDir);       
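    // the parallax offset can push texCoords outside the [0, 1] range;
    // discard those fragments (fine here because the plane's texture does not repeat)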
    if(texCoords.x > 1.0 || texCoords.y > 1.0 || texCoords.x < 0.0 || texCoords.y < 0.0)
        discard;

    // obtain normal from normal map
    vec3 normal = texture(normalMap, texCoords).rgb;
    normal = normalize(normal * 2.0 - 1.0);   
   
    // get diffuse color
    vec3 color = texture(diffuseMap, texCoords).rgb;
    // ambient
    vec3 ambient = 0.1 * color;
    // diffuse
    vec3 lightDir = normalize(fs_in.TangentLightPos - fs_in.TangentFragPos);
    float diff = max(dot(lightDir, normal), 0.0);
    vec3 diffuse = diff * color;
    // specular    
    vec3 reflectDir = reflect(-lightDir, normal);
    vec3 halfwayDir = normalize(lightDir + viewDir);  
    float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);

    vec3 specular = vec3(0.2) * spec;
    FragColor = vec4(ambient + diffuse + specular, 1.0);
}

 

LearnOpenGL's version of ParallaxMapping additionally divides the offset by viewDir.z:

vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
    float height = texture(depthMap, texCoords).r;
    vec2 p = viewDir.xy / viewDir.z * (height * heightScale);
    return texCoords - p;
}

What is interesting to note here is the division of viewDir.xy by viewDir.z. As the viewDir vector is normalized viewDir.z will be somewhere in the range between 0.0 and 1.0. When viewDir is largely parallel to the surface its z component is close to 0.0 and the division returns a much larger vector P̄ compared to when viewDir is largely perpendicular to the surface. So basically we’re increasing the size of P̄ in such a way that it offsets the texture coordinates at a larger scale when looking at a surface from an angle compared to when looking at it from the top; this gives more realistic results at angles.

Some people prefer to leave the division by viewDir.z out of the equation as normal Parallax Mapping could produce undesirable results at angles; the technique is then called Parallax Mapping with Offset Limiting. Choosing which technique to pick is usually a matter of personal preference, but I often tend to side with normal Parallax Mapping.
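A quick numeric sketch with made-up values shows how much the division changes the offset at a grazing angle:

// Hypothetical values, for illustration only:
// height = 0.5, heightScale = 0.1  =>  height * heightScale = 0.05
// grazing tangent-space view direction, roughly viewDir = (0.95, 0.0, 0.31)
vec2 offsetLimited = vec2(0.95, 0.0)        * 0.05;   // ≈ (0.0475, 0.0)  -- Offset Limiting
vec2 offsetNormal  = vec2(0.95, 0.0) / 0.31 * 0.05;   // ≈ (0.1532, 0.0)  -- with the viewDir.z division, ~3x larger
// Offset Limiting can never shift the coordinates by more than height * heightScale,
// while the division lets the offset grow as the view direction grazes the surface.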

 

 

 
