GAMES101 Assignment 3: Partial Solutions
Problem Statement
In this programming assignment we move further toward modern graphics techniques. The code now includes an Object Loader (for loading 3D models), a Vertex Shader, and a Fragment Shader, and supports texture mapping.
Your tasks in this assignment are:

- Modify the function rasterize_triangle(const Triangle& t) in rasterizer.cpp: implement an interpolation scheme similar to Assignment 2's, interpolating the normal, color, and texture color.
- Modify the function get_projection_matrix() in main.cpp: fill in the projection matrix you implemented in the previous assignments; you can then run ./Rasterizer output.png normal to inspect the normal-visualization result.
- Modify the function phong_fragment_shader() in main.cpp: implement the Blinn-Phong model to compute the fragment color.
- Modify the function texture_fragment_shader() in main.cpp: building on Blinn-Phong, use the texture color as kd in the formula to implement the texture shading fragment shader.
- Modify the function bump_fragment_shader() in main.cpp: building on Blinn-Phong, read the comments in that function carefully and implement bump mapping.
- Modify the function displacement_fragment_shader() in main.cpp: building on bump mapping, implement displacement mapping.
Solutions and Analysis
This assignment builds directly on the previous two, so here are the links to those two write-ups for reference:
The rasterize_triangle(const Triangle& t) function

First, rasterize_triangle(const Triangle& t) in rasterizer.cpp:
The function computeBarycentric2D(i + 0.5, j + 0.5, t.v) uses the barycentric coordinate formula to compute the weights of the triangle's three vertices with respect to the pixel center; the value at the pixel is then the correspondingly weighted combination of the three vertex values. The same formula interpolates the normals, colors, texture coordinates, and shading (i.e. fragment) coordinates.
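As a sketch of what this means, here is the perspective-correct version of the interpolation in plain, self-contained C++ (no Eigen). `barycentric2D` and `interpolatePerspective` are hypothetical standalone helpers that mirror what `computeBarycentric2D` and the rasterization loop below do, not the framework's own functions:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

static float cross2(float ax, float ay, float bx, float by) { return ax * by - ay * bx; }

// Barycentric weights of point (px, py) with respect to a screen-space
// triangle, as ratios of signed sub-triangle areas.
std::array<float, 3> barycentric2D(float px, float py, const std::array<Vec2, 3>& v)
{
    float area = cross2(v[1].x - v[0].x, v[1].y - v[0].y, v[2].x - v[0].x, v[2].y - v[0].y);
    float alpha = cross2(v[2].x - v[1].x, v[2].y - v[1].y, px - v[1].x, py - v[1].y) / area;
    float beta  = cross2(v[0].x - v[2].x, v[0].y - v[2].y, px - v[2].x, py - v[2].y) / area;
    return {alpha, beta, 1.0f - alpha - beta};
}

// Perspective-correct interpolation of one scalar attribute; w[i] is each
// vertex's view-space depth (the role v[i].w() plays in the rasterizer).
float interpolatePerspective(const std::array<float, 3>& bary,
                             const std::array<float, 3>& attr,
                             const std::array<float, 3>& w)
{
    float Z = 1.0f / (bary[0] / w[0] + bary[1] / w[1] + bary[2] / w[2]);
    float a = bary[0] * attr[0] / w[0] + bary[1] * attr[1] / w[1] + bary[2] * attr[2] / w[2];
    return a * Z;
}
```

With all three depths equal, the formula reduces to ordinary linear barycentric interpolation; with unequal depths it weights each vertex by 1/w so attributes stay linear in view space rather than in screen space.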
```cpp
// Screen space rasterization
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos)
{
    auto v = t.toVector4();

    // Bounding box of the triangle in screen space
    float min_x = std::min(v[0][0], std::min(v[1][0], v[2][0]));
    float max_x = std::max(v[0][0], std::max(v[1][0], v[2][0]));
    float min_y = std::min(v[0][1], std::min(v[1][1], v[2][1]));
    float max_y = std::max(v[0][1], std::max(v[1][1], v[2][1]));
    int x_min = std::floor(min_x);
    int x_max = std::ceil(max_x);
    int y_min = std::floor(min_y);
    int y_max = std::ceil(max_y);

    for (int i = x_min; i < x_max; i++) {
        for (int j = y_min; j < y_max; j++) {
            if (insideTriangle(i + 0.5, j + 0.5, t.v)) {
                // Perspective-correct depth interpolation; v[i].w() holds the
                // vertex's view-space depth
                auto [alpha, beta, gamma] = computeBarycentric2D(i + 0.5, j + 0.5, t.v);
                float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w()); // view-space depth from the barycentric coordinates
                float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                zp *= Z;

                if (zp < depth_buf[get_index(i, j)]) {
                    // Interpolate the color, normal, texture coordinates, and
                    // shading coordinates respectively
                    auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1);
                    auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1).normalized();
                    auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1);
                    // view_pos[] holds the triangle's vertices in view space;
                    // interpolating recovers the fragment's position in camera space
                    auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1);

                    // fragment_shader_payload carries the interpolated results to
                    // the shader, which uses them to determine the normal, the UV
                    // coordinates, and so on; instead of writing the triangle's
                    // color straight to the frame buffer, we let the shader
                    // compute the final color
                    fragment_shader_payload payload(interpolated_color, interpolated_normal, interpolated_texcoords, texture ? &*texture : nullptr);
                    payload.view_pos = interpolated_shadingcoords;
                    auto pixel_color = fragment_shader(payload);

                    // Update the depth buffer
                    depth_buf[get_index(i, j)] = zp;
                    // Write the color
                    set_pixel(Eigen::Vector2i(i, j), pixel_color);
                }
            }
        }
    }
}
```
The get_projection_matrix() function

Next is the projection function, get_projection_matrix() in main.cpp; see the previous two assignments for the detailed derivation:
```cpp
Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
    Eigen::Matrix4f projection = Eigen::Matrix4f::Identity();

    // Perspective-to-orthographic "squish" matrix
    Eigen::Matrix4f pto = Eigen::Matrix4f::Identity();
    pto << zNear, 0, 0, 0,
           0, zNear, 0, 0,
           0, 0, zNear + zFar, -zFar * zNear,
           0, 0, 1, 0;

    // Frustum bounds at the near plane. The minus sign on top accounts for
    // the framework passing zNear as a positive distance while the camera
    // looks down -z; it also keeps the rendered image right side up.
    float fovRad = eye_fov * MY_PI / 180.0f;
    float top = -zNear * std::tan(fovRad / 2);
    float right = top * aspect_ratio;
    float left = -right;
    float bottom = -top;

    // Orthographic projection: translate the box to the origin, then scale
    // it into the canonical cube [-1, 1]^3
    Eigen::Matrix4f m_s = Eigen::Matrix4f::Identity();
    m_s << 2 / (right - left), 0, 0, 0,
           0, 2 / (top - bottom), 0, 0,
           0, 0, 2 / (zNear - zFar), 0,
           0, 0, 0, 1;
    Eigen::Matrix4f m_t = Eigen::Matrix4f::Identity();
    m_t << 1, 0, 0, -(right + left) / 2,
           0, 1, 0, -(top + bottom) / 2,
           0, 0, 1, -(zFar + zNear) / 2,
           0, 0, 0, 1;

    projection = m_s * m_t * pto * projection;
    return projection;
}
```
At this point you can run ./Rasterizer output.png normal to check the normal-visualization result, shown below:
The phong_fragment_shader() function

Next, phong_fragment_shader() in main.cpp:

This part requires reviewing the course material, specifically the Shading 1 and Shading 2 lectures of GAMES101; the following are my personal lecture notes for reference:
You specifically need the three components: ambient lighting, diffuse reflection, and specular reflection. The key formulas:

Ambient:

$$L_a = k_a \cdot I_a$$

Diffuse:

$$L_d = k_d \cdot \frac{I}{r^2} \cdot \max(0,\ \mathbf{n} \cdot \mathbf{l})$$

Why the $I/r^2$ falloff: assume a point light radiates total energy $E$, and that no energy is lost in transit, so the sphere of any radius $r$ centered on the light carries the same total energy $E$. If the intensity at radius 1 is $I$, then $E = 4\pi \cdot I$; the sphere of radius $r$ has area $4\pi r^2$, so its intensity $I(r)$ satisfies $E = 4\pi r^2 \cdot I(r)$. Energy conservation then gives $I(r) = I / r^2$.

Specular:

$$L_s = k_s \cdot \frac{I}{r^2} \cdot \max(0,\ \mathbf{n} \cdot \mathbf{h})^p$$

Adding the three terms together gives the final Blinn-Phong lighting model:

$$L = L_a + L_d + L_s$$
Putting the formulas into code:
```cpp
Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    // Light positions and intensities
    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        // For each light source, compute the ambient, diffuse, and specular
        // components, then accumulate them into result_color.

        // Direction from the shading point to the light
        Eigen::Vector3f light_dir = light.position - point;
        // Direction from the shading point to the viewer
        Eigen::Vector3f view_dir = eye_pos - point;
        // Squared distance to the light, used for the 1/r^2 falloff
        float r2 = light_dir.dot(light_dir);

        // Ambient: La = ka * Ia
        Eigen::Vector3f La = ka.cwiseProduct(amb_light_intensity);

        // Diffuse: Ld = kd * (I / r^2) * max(0, n . l)
        Eigen::Vector3f Ld = kd.cwiseProduct(light.intensity / r2);
        Ld *= std::max(0.0f, normal.normalized().dot(light_dir.normalized()));

        // Specular: Ls = ks * (I / r^2) * max(0, n . h)^p, where the half
        // vector h bisects the normalized light and view directions
        Eigen::Vector3f h = (light_dir.normalized() + view_dir.normalized()).normalized();
        Eigen::Vector3f Ls = ks.cwiseProduct(light.intensity / r2);
        Ls *= std::pow(std::max(0.0f, normal.normalized().dot(h)), p);

        // Sum the three terms
        result_color += (La + Ld + Ls);
    }

    return result_color * 255.f;
}
```
The result:
The texture_fragment_shader() function

Next, texture_fragment_shader() in main.cpp:

The difference from phong_fragment_shader() is that Phong shading takes kd from the RGB color, while texture shading fetches kd from the texture using the UV coordinates.

To implement texture mapping, all you need to do is replace the diffuse coefficient with the corresponding texel value. The fragment's UV coordinates were already interpolated during rasterization and stored in the payload, so looking up the texture at those coordinates gives the color.
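One practical caveat, stated as an assumption about the framework rather than a fact: the skeleton's `Texture::getColor` indexes the image directly, and some models' `tex_coords` can fall slightly outside [0, 1], which then reads out of bounds. Clamping u and v first avoids that. The `uvToPixel` helper below is a hypothetical sketch of just that index math, not the framework's actual implementation:

```cpp
#include <algorithm>
#include <cassert>

struct Pixel { int x, y; };

// Map UV coordinates to integer pixel indices, clamping out-of-range
// inputs so the lookup can never leave the image.
Pixel uvToPixel(float u, float v, int width, int height)
{
    u = std::min(1.0f, std::max(0.0f, u));
    v = std::min(1.0f, std::max(0.0f, v));
    // Image rows run top to bottom, so v is flipped, as in the framework
    int px = static_cast<int>(u * (width - 1));
    int py = static_cast<int>((1.0f - v) * (height - 1));
    return {px, py};
}
```

The same clamp can simply be added at the top of the framework's getColor; the rest of the shader is unchanged.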
```cpp
Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f return_color = {0, 0, 0};
    if (payload.texture)
    {
        // Fetch the texture value at the fragment's texture coordinates. The
        // UV coordinates were interpolated during rasterization and stored in
        // the payload; this kd lookup is the main difference from
        // phong_fragment_shader().
        return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
    }
    Eigen::Vector3f texture_color;
    texture_color << return_color.x(), return_color.y(), return_color.z();

    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = texture_color / 255.f; // the texture color acts as kd
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = texture_color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        // Same Blinn-Phong accumulation as in phong_fragment_shader()

        // Direction from the shading point to the light
        Eigen::Vector3f light_dir = light.position - point;
        // Direction from the shading point to the viewer
        Eigen::Vector3f view_dir = eye_pos - point;
        // Squared distance to the light, used for the 1/r^2 falloff
        float r2 = light_dir.dot(light_dir);

        // Ambient
        Eigen::Vector3f La = ka.cwiseProduct(amb_light_intensity);

        // Diffuse: kd is now the texel value fetched above
        Eigen::Vector3f Ld = kd.cwiseProduct(light.intensity / r2);
        Ld *= std::max(0.0f, normal.normalized().dot(light_dir.normalized()));

        // Specular, with the half vector of the normalized directions
        Eigen::Vector3f h = (light_dir.normalized() + view_dir.normalized()).normalized();
        Eigen::Vector3f Ls = ks.cwiseProduct(light.intensity / r2);
        Ls *= std::pow(std::max(0.0f, normal.normalized().dot(h)), p);

        result_color += (La + Ld + Ls);
    }

    return result_color * 255.f;
}
```
The result:
The bump_fragment_shader() function

This function implements bump mapping. The main idea is to perturb the normal according to the texture, using finite differences of the height values.
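Written out (matching the comments inside the function), with $h(u, v)$ the height taken as the norm of the texel color, and $w$ and $h$ in the fractions being the texture's width and height:

```latex
dU = k_h\, k_n\, \bigl( h(u + \tfrac{1}{w},\, v) - h(u, v) \bigr), \qquad
dV = k_h\, k_n\, \bigl( h(u,\, v + \tfrac{1}{h}) - h(u, v) \bigr)

\mathbf{n}' = \operatorname{normalize}\bigl( TBN \cdot (-dU,\, -dV,\, 1)^{\mathsf{T}} \bigr)
```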
Code:
```cpp
Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;

    // Bump mapping:
    // Let n = normal = (x, y, z)
    // t = (x*y/sqrt(x*x+z*z), sqrt(x*x+z*z), z*y/sqrt(x*x+z*z))
    // b = n cross t
    // TBN = [t b n]
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();
    Eigen::Vector3f t(x * y / std::sqrt(x * x + z * z), std::sqrt(x * x + z * z), z * y / std::sqrt(x * x + z * z));
    Eigen::Vector3f b = normal.cross(t);
    Eigen::Matrix3f TBN;
    TBN << t.x(), b.x(), normal.x(),
           t.y(), b.y(), normal.y(),
           t.z(), b.z(), normal.z();

    // dU = kh * kn * (h(u+1/w, v) - h(u, v))
    // dV = kh * kn * (h(u, v+1/h) - h(u, v))
    // n = normalize(TBN * (-dU, -dV, 1))
    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float h = payload.texture->height;
    float w = payload.texture->width;
    float dU = kh * kn * (payload.texture->getColor(u + 1.0f / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / h).norm() - payload.texture->getColor(u, v).norm());
    Eigen::Vector3f ln(-dU, -dV, 1.0f);
    // Bump mapping only perturbs the normal; unlike displacement mapping,
    // the shading point itself is not moved.
    normal = TBN * ln;

    // Visualize the perturbed normal as the fragment color
    Eigen::Vector3f result_color = normal.normalized();

    return result_color * 255.f;
}
```
The result:
The displacement_fragment_shader() function

As the GAMES101 lectures explain, displacement mapping actually changes the triangle's vertices; it is not merely running the Blinn-Phong model on top of a bump-mapped normal.
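Per fragment, the approximation used below offsets the shading point along the original normal by the height value before lighting, in addition to the bump-style normal perturbation:

```latex
\mathbf{p}' = \mathbf{p} + k_n\, \mathbf{n}\, h(u, v), \qquad
\mathbf{n}' = \operatorname{normalize}\bigl( TBN \cdot (-dU,\, -dV,\, 1)^{\mathsf{T}} \bigr)
```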
```cpp
Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;

    // Displacement mapping: build the TBN basis exactly as in bump mapping
    // Let n = normal = (x, y, z)
    // t = (x*y/sqrt(x*x+z*z), sqrt(x*x+z*z), z*y/sqrt(x*x+z*z))
    // b = n cross t
    // TBN = [t b n]
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();
    Eigen::Vector3f t(x * y / std::sqrt(x * x + z * z), std::sqrt(x * x + z * z), z * y / std::sqrt(x * x + z * z));
    Eigen::Vector3f b = normal.cross(t);
    Eigen::Matrix3f TBN;
    TBN << t.x(), b.x(), normal.x(),
           t.y(), b.y(), normal.y(),
           t.z(), b.z(), normal.z();

    // dU = kh * kn * (h(u+1/w, v) - h(u, v))
    // dV = kh * kn * (h(u, v+1/h) - h(u, v))
    // p = p + kn * n * h(u, v)
    // n = normalize(TBN * (-dU, -dV, 1))
    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float h = payload.texture->height;
    float w = payload.texture->width;
    float dU = kh * kn * (payload.texture->getColor(u + 1.0f / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / h).norm() - payload.texture->getColor(u, v).norm());
    Eigen::Vector3f ln(-dU, -dV, 1.0f);
    // Unlike bump mapping, move the shading point along the normal by the height
    point += kn * normal * payload.texture->getColor(u, v).norm();
    normal = (TBN * ln).normalized();

    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        // Blinn-Phong with the displaced point and perturbed normal
        Eigen::Vector3f light_intensity = light.intensity / (light.position - point).dot(light.position - point); // L = I / r^2
        Eigen::Vector3f light_dir = (light.position - point).normalized(); // direction to the light
        Eigen::Vector3f view_dir = (eye_pos - point).normalized();         // direction to the viewer
        Eigen::Vector3f half_vec = (light_dir + view_dir).normalized();    // half vector
        float diff = std::max(0.0f, light_dir.dot(normal));                // max(0, n . l)
        float spec = std::pow(std::max(0.0f, half_vec.dot(normal)), p);    // max(0, n . h)^p
        Eigen::Vector3f la = ka.cwiseProduct(amb_light_intensity);
        Eigen::Vector3f ld = kd.cwiseProduct(light_intensity) * diff;
        Eigen::Vector3f ls = ks.cwiseProduct(light_intensity) * spec;
        result_color += la + ld + ls;
    }

    return result_color * 255.f;
}
```
The result: