Computer Graphics Notes

vertex shader vs fragment shader

The vertex shader is also known as a Gouraud shader, and the fragment shader as a Phong shader (after the shading models traditionally evaluated at each stage).

The vertex shader's job is to transform vertex coordinates and prepare the data the fragment shader needs;
the fragment shader's job is to decide whether the current pixel should be drawn and, if so, with what color. The rendering pipeline in OpenGL:
[figure: the OpenGL rendering pipeline]
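
As a rough illustration of this division of labor, here is a toy CPU-side sketch (hypothetical helper functions, not the real OpenGL API): the vertex stage transforms positions and emits per-vertex data, the rasterizer interpolates that data, and the fragment stage turns it into a color.

import numpy as np

def vertex_shader(position, color, mvp):
    # transform the vertex into clip space and pass its color along as a "varying"
    clip_pos = mvp @ np.append(position, 1.0)
    return clip_pos, {'color': color}

def fragment_shader(varyings):
    # decide the final color of the pixel from the interpolated varyings
    return varyings['color']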

By shading granularity and where in the pipeline shading happens, the common models are flat shading, Gouraud shading (per vertex, in the vertex shader), and Phong shading (per pixel, in the fragment shader):
  • flat shading
    Every point inside a triangle is shaded with the same normal vector, so the result looks coarse and faceted.

    [figure: flat shading]

  • Gouraud shading
    The vertex shader computes a color for each vertex; these colors are passed to the fragment shader, which interpolates them to get the color of each pixel. The advantage is that lighting is evaluated only at the vertices, so the computation is cheap.
    [figure: Gouraud shading]
  • Phong shading
    Phong shading computes each vertex's normal and position in the vertex shader and sends them to the fragment shader, which uses the interpolated values to compute each pixel's color. Phong shading looks smoother than Gouraud, but since lighting is re-evaluated at every pixel it costs more (see the sketch below).
    [figure: Phong shading]
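
The difference between the last two can be condensed into a few lines. Below is a minimal sketch (not from face3d): shade stands for any lighting function, w holds the pixel's barycentric weights, and verts/normals are the triangle's three positions and normals.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def gouraud_pixel(shade, verts, normals, w):
    # Gouraud: light each vertex first, then interpolate the resulting colors
    vertex_colors = [shade(n, p) for n, p in zip(normals, verts)]
    return sum(wi * ci for wi, ci in zip(w, vertex_colors))

def phong_pixel(shade, verts, normals, w):
    # Phong: interpolate normal and position first, then light once per pixel
    n = normalize(sum(wi * ni for wi, ni in zip(w, normals)))
    p = sum(wi * pi for wi, pi in zip(w, verts))
    return shade(n, p)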

Gouraud shading example

The face3d project can render 3D models, and its code is a good way to peek at how a 3D model is rendered into a 2D image. As the code below shows, the colors inside each triangle are interpolated directly from the per-vertex colors, so this is Gouraud shading. The original code is in the face3d repository; a simplified version follows:

import numpy as np
# the face3d helpers isPointInTri (point-in-triangle test) and get_point_weight
# (barycentric weights) are assumed to be in scope

def render_colors(vertices, triangles, colors, h, w, c = 3):
    ''' render mesh with colors
    Args:
        vertices: [nver, 3]
        triangles: [ntri, 3]
        colors: [nver, 3]
        h: height
        w: width
    Returns:
        image: [h, w, c].
    '''
    assert vertices.shape[0] == colors.shape[0]

    # initialize the image and the depth buffer (larger depth = closer to camera)
    image = np.zeros((h, w, c))
    depth_buffer = np.zeros([h, w]) - 999999.

    # iterate over all triangles
    for i in range(triangles.shape[0]):
        tri = triangles[i, :] # 3 vertex indices

        # compute the bounding box of the current triangle, clipped to the image
        umin = max(int(np.ceil(np.min(vertices[tri, 0]))), 0)
        umax = min(int(np.floor(np.max(vertices[tri, 0]))), w-1)

        vmin = max(int(np.ceil(np.min(vertices[tri, 1]))), 0)
        vmax = min(int(np.floor(np.max(vertices[tri, 1]))), h-1)

        if umax<umin or vmax<vmin:
            continue

        for u in range(umin, umax+1):
            for v in range(vmin, vmax+1):
                # visit every pixel in the bbox; skip pixels outside the triangle
                if not isPointInTri([u,v], vertices[tri, :2]):
                    continue
                # barycentric coordinates of the current pixel within the triangle
                w0, w1, w2 = get_point_weight([u, v], vertices[tri, :2])
                point_depth = w0*vertices[tri[0], 2] + w1*vertices[tri[1], 2] + w2*vertices[tri[2], 2]
                # depth test: draw the pixel if it has not been drawn yet, or if the
                # new depth is larger (closer to the camera); then update the buffer
                if point_depth > depth_buffer[v, u]:
                    depth_buffer[v, u] = point_depth
                    image[v, u, :] = w0*colors[tri[0], :] + w1*colors[tri[1], :] + w2*colors[tri[2], :]
    return image
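
A quick smoke test (hypothetical values): render a single triangle whose corners are red, green and blue; the interior pixels come out as barycentric blends of the three vertex colors.

vertices = np.array([[10., 10., 0.], [100., 20., 0.], [50., 90., 0.]])
triangles = np.array([[0, 1, 2]])
colors = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])  # RGB per vertex
image = render_colors(vertices, triangles, colors, h=128, w=128)  # [128, 128, 3]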

The depth buffer above is exactly the z-buffer; CS184 has a figure that walks through the z-buffer algorithm in detail:
[figure: the z-buffer algorithm (CS184)]

Types of lighting

The lighting an object receives can be split into specular highlights, diffuse reflection, and ambient lighting.

[figure: overview of lighting components and models]

  • Ambient shading
    The ambient term is a constant that stands in for indirect light: La = ka*Ia (ka: ambient coefficient, Ia: ambient intensity).

  • Diffuse reflection
    The common diffuse models are the Lambert lighting model (Lambert's law) and the half-Lambert lighting model.
    Lambert's law states that light is scattered equally in all directions, so the diffuse term is independent of the view direction v:
    Ld = kd * (I/r^2) * max(0, n·l)
    Under the Lambert model, anywhere the light cannot reach is completely black with no tonal variation; the half-Lambert model covers this problem (its one-line tweak appears in the sketch after this list).

  • Specular shading
    For highlights, the two common models are the Phong lighting model and the Blinn-Phong lighting model.

    • Phong lighting model
      l points toward the light source, n is the surface normal, and R is the direction of the reflected light.
      [figure: Phong reflection geometry]
      R can then be computed from the known l and n:
      R = 2(n·l)n - l
      and the specular term is Ls = ks * (I/r^2) * max(0, v·R)^p.
    • Blinn-Phong lighting model
      The Phong model is relatively expensive because R must be recomputed at every shading point, so Blinn proposed the Blinn-Phong model, which replaces R with the bisector (half vector) of v and l:
      h = (v + l) / |v + l|,  Ls = ks * (I/r^2) * max(0, n·h)^p

    [figure: Blinn-Phong lighting model]
    Finally, the three lighting terms are summed (see the sketch below):
    L = La + Ld + Ls = ka*Ia + kd*(I/r^2)*max(0, n·l) + ks*(I/r^2)*max(0, n·h)^p
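
Putting the three terms together for a single point light, a minimal self-contained sketch (the material parameters ka, kd, ks, p and all numeric values are made up):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(n, l, v, I, r, ka=0.1, kd=0.7, ks=0.2, p=32, Ia=1.0):
    # n: surface normal, l: direction to light, v: direction to viewer,
    # I: light intensity, r: distance to light
    n, l, v = normalize(n), normalize(l), normalize(v)
    h = normalize(l + v)                              # bisector of v and l
    La = ka * Ia                                      # ambient
    Ld = kd * (I / r**2) * max(0.0, n.dot(l))         # Lambert diffuse
    # half-Lambert would replace max(0, n.dot(l)) with 0.5*n.dot(l) + 0.5,
    # so unlit regions get a smooth falloff instead of going fully black
    Ls = ks * (I / r**2) * max(0.0, n.dot(h))**p      # Blinn-Phong specular
    return La + Ld + Ls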

Lighting code example

Correspondingly, here is face3d's lighting code. The shading is Gouraud-style: it directly computes a color for each vertex. The diffuse model is Lambert's law and the specular model would be Blinn-Phong (note: the code only implements the diffuse term; a sketch of the missing specular part follows the code).

def add_light(vertices, triangles, colors, light_positions = 0, light_intensities = 0):
    ''' Gouraud shading. add point lights.
    In 3d face, usually assume:
    1. The surface of face is Lambertian(reflect only the low frequencies of lighting)
    2. Lighting can be an arbitrary combination of point sources
    3. No specular (unless skin is oil, 23333)

    Ref: https://cs184.eecs.berkeley.edu/lecture/pipeline    
    Args:
        vertices: [nver, 3]
        triangles: [ntri, 3]
        light_positions: [nlight, 3] 
        light_intensities: [nlight, 3]
    Returns:
        lit_colors: [nver, 3]
    '''
    nver = vertices.shape[0]
    normals = get_normal(vertices, triangles) # [nver, 3]

    # ambient
    # La = ka*Ia

    # diffuse
    # Ld = kd*(I/r^2)max(0, nxl)
    # n: surf norm, l:light vec, 
    direction_to_lights = vertices[np.newaxis, :, :] - light_positions[:, np.newaxis, :] # [nlight, nver, 3]
    direction_to_lights_n = np.sqrt(np.sum(direction_to_lights**2, axis = 2)) # [nlight, nver]
    direction_to_lights = direction_to_lights/direction_to_lights_n[:, :, np.newaxis]
    normals_dot_lights = normals[np.newaxis, :, :]*direction_to_lights # [nlight, nver, 3]
    normals_dot_lights = np.sum(normals_dot_lights, axis = 2) # [nlight, nver]
    diffuse_output = colors[np.newaxis, :, :]*normals_dot_lights[:, :, np.newaxis]*light_intensities[:, np.newaxis, :]
    diffuse_output = np.sum(diffuse_output, axis = 0) # [nver, 3]
    
    # specular
    # h = (v + l)/(|v + l|) bisector
    # Ls = ks*(I/r^2)max(0, nxh)^p
    # increasing p narrows the reflection lobe

    lit_colors = diffuse_output # only diffuse part here.
    lit_colors = np.minimum(np.maximum(lit_colors, 0), 1)
    return lit_colors
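
For completeness, the commented-out specular part could be filled in roughly like this (my sketch, not face3d code; it assumes the camera sits at the origin of the current space and reuses the variables from add_light above):

ks, p = 0.2, 32  # hypothetical material parameters; larger p narrows the lobe
v = -vertices / np.sqrt(np.sum(vertices**2, axis=1, keepdims=True))   # [nver, 3] vertex -> camera
l = -direction_to_lights                                              # [nlight, nver, 3] vertex -> light
h = l + v[np.newaxis, :, :]
h = h / np.sqrt(np.sum(h**2, axis=2, keepdims=True))                  # bisector of v and l
n_dot_h = np.maximum(np.sum(normals[np.newaxis, :, :]*h, axis=2), 0)  # [nlight, nver]
specular_output = np.sum((n_dot_h**p)[:, :, np.newaxis]*light_intensities[:, np.newaxis, :], axis=0)
# lit_colors = np.minimum(np.maximum(diffuse_output + ks*specular_output, 0), 1)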

Textures

  • Diffuse texture
    The diffuse lighting computation above can be replaced by sampling a diffuse texture.

  • Specular texture
    Similarly, the specular term does not have to be computed either; it can be replaced by a specular map.

  • Normal texture
    Gives every fragment its own normal vector, which can create the appearance of a bumpy surface (see the sketch after this list).
    [figure: normal mapping]
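
A minimal sketch of the normal-texture idea (the [0, 255] to [-1, 1] remap is the standard encoding; the sampling step is left abstract):

import numpy as np

def decode_normal(rgb):
    # normal maps store unit vectors remapped from [-1, 1] into 8-bit [0, 255]
    n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# per fragment: replace the interpolated surface normal with the decoded one,
# e.g. n = decode_normal(normal_map[v, u]), then shade as usual with max(0, n.dot(l))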

Projection

Common projections are perspective projection and orthographic projection.

  • Orthographic projection
    When the object's distance from the camera is much larger than the object itself, orthographic projection is a good approximation (e.g. a head far away from the camera). face3d's scaled orthographic projection code (the scaling has already been applied elsewhere):
## --------- 3d-2d project. from camera space to image plane
# generally, image plane only keeps x,y channels, here reserve z channel for calculating z-buffer.
def orthographic_project(vertices):
    ''' scaled orthographic projection (just delete z)
        assumes: variations in depth over the object are small relative to
        the mean distance from camera to object
        x -> x*f/z, y -> y*f/z, z -> f.
        for points i,j: zi ~= zj, so just keep x,y as-is
        ** often used for faces
        Homo: P = [[1,0,0,0], [0,1,0,0], [0,0,1,0]]
    Args:
        vertices: [nver, 3]
    Returns:
        projected_vertices: [nver, 3]
    '''
    return vertices.copy()
  • Perspective projection
    In graphics, the perspective projection step does not actually perform the final projection; it is preprocessing for it. After perspective projection, vertices are mapped into NDC space with x, y, z all in [-1, 1], and the points are still 3D. The projection matrix can be built in two ways:
  1. From the near & far clipping planes
     [figure: projection matrix from the clipping planes]
  2. From the field of view (fov)
     [figure: projection matrix from the fov]
     There is an article that describes both derivations in detail. The corresponding code:
def perspective_project(vertices, fovy, aspect_ratio = 1., near = 0.1, far = 1000.):
    ''' perspective projection.
    Args:
        vertices: [nver, 3]
        fovy: vertical angular field of view. degree.
        aspect_ratio : width / height of field of view
        near : depth of near clipping plane
        far : depth of far clipping plane
    Returns:
        projected_vertices: [nver, 3] 
    '''
    fovy = np.deg2rad(fovy)
    top = near*np.tan(fovy)
    bottom = -top 
    right = top*aspect_ratio
    left = -right

    #-- homo
    P = np.array([[near/right, 0, 0, 0],
                 [0, near/top, 0, 0],
                 [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                 [0, 0, -1, 0]])
    vertices_homo = np.hstack((vertices, np.ones((vertices.shape[0], 1)))) # [nver, 4]
    projected_vertices = vertices_homo.dot(P.T)
    projected_vertices = projected_vertices/projected_vertices[:,3:]
    projected_vertices = projected_vertices[:,:3]
    projected_vertices[:,2] = -projected_vertices[:,2]

    #-- non homo. only fovy
    # projected_vertices = vertices.copy()
    # projected_vertices[:,0] = -(near/right)*vertices[:,0]/vertices[:,2]
    # projected_vertices[:,1] = -(near/top)*vertices[:,1]/vertices[:,2]
    return projected_vertices
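
A quick sanity check with hypothetical values: points between the near and far planes and in front of the camera (negative z) should land inside the NDC cube [-1, 1]^3.

pts = np.array([[0., 0., -0.2], [0.05, 0.05, -0.5], [0., 0., -999.]])
ndc = perspective_project(pts, fovy=45.)
# all three points fall inside [-1, 1]^3; after the final sign flip, a larger
# NDC z means closer to the camera, which matches render_colors' depth test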

The rendering pipeline provided by PyTorch3D

[figure: the PyTorch3D rendering pipeline]

Differences between the graphics (OpenGL) camera coordinate system and the OpenCV camera coordinate system

  • OpenCV camera convention
    In OpenCV, the camera's z-axis points directly at the target, which is exactly the opposite of the OpenGL camera convention.
    [figure: the OpenCV camera coordinate system]

  • OpenGL camera convention
    OpenGL sets up the camera frame so that the camera looks down its negative z-axis. As shown in the figure below, when the camera looks at the point look_target, the OpenCV intuition says +z should be look_target - cam_pos, but OpenGL defines it the other way around: cam_pos - look_target. The reason, as Learn OpenGL puts it:

For the view matrix’s coordinate system we want its z-axis to be positive and because by convention (in OpenGL) the camera points towards the negative z-axis we want to negate the direction vector. If we switch the subtraction order around we now get a vector pointing towards the camera’s positive z-axis:

[figure: constructing the OpenGL camera frame from eye, look_target, and up]
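
Since the two conventions share the x-axis and differ only in the signs of y and z, converting a point between the OpenCV camera frame and the OpenGL camera frame is just a sign flip (a minimal sketch):

import numpy as np

# OpenCV camera: x right, y down, z toward the target
# OpenGL camera: x right, y up,   z away from the target (camera looks down -z)
CV_TO_GL = np.diag([1., -1., -1.])

def cv_cam_to_gl_cam(points_cv):
    # flip y and z to move points from the OpenCV camera frame to the OpenGL one
    return points_cv @ CV_TO_GL.T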
The look-at definition in face3d:

def normalize(v):
    # stand-in for face3d's own unit-length helper
    return v / np.linalg.norm(v)

def lookat_camera(vertices, eye, at = None, up = None):
    """ 'look at' transformation: from world space to camera space
    standard camera space: 
        camera located at the origin. 
        looking down negative z-axis. 
        vertical vector is y-axis.
    Xcam = R(X - C)
    Homo: [[R, -RC], [0, 1]]
    Args:
      vertices: [nver, 3] 
      eye: [3,] the XYZ world space position of the camera.
      at: [3,] a position along the center of the camera's gaze.
      up: [3,] up direction 
    Returns:
      transformed_vertices: [nver, 3]
    """
    if at is None:
      at = np.array([0, 0, 0], np.float32)
    if up is None:
      up = np.array([0, 1, 0], np.float32)

    eye = np.array(eye).astype(np.float32)
    at = np.array(at).astype(np.float32)
    z_axis = -normalize(at - eye) # look forward
    x_axis = normalize(np.cross(up, z_axis)) # look right
    y_axis = np.cross(z_axis, x_axis) # look up

    R = np.stack((x_axis, y_axis, z_axis)) # 3 x 3, rows are the camera axes
    transformed_vertices = vertices - eye # translation
    transformed_vertices = transformed_vertices.dot(R.T) # rotation
    return transformed_vertices

The face in camera coordinates:
[figure: a face in the camera coordinate system]
face3d then maps the face directly into the image coordinate system and renders:

def to_image(vertices, h, w, is_perspective = False):
    ''' change vertices to image coord system
    3d system: XYZ, center(0, 0, 0)
    2d image: x(u), y(v). center(w/2, h/2), flip y-axis. 
    image coord and camera coord:
    o--------------  u
    |
    |       x
    |       |
    |       | 
    |       o----------> y
    |      /
    |     /
    |    / z
    v
    we need to map these points from the camera coordinate system to the image (uov) coordinate system

    Args:
        vertices: [nver, 3]
        h: height of the rendering
        w : width of the rendering
    Returns:
        projected_vertices: [nver, 3]  
    '''
    image_vertices = vertices.copy()
    if is_perspective:
        # if perspective, the projected vertices are normalized to [-1, 1]. so change it to image size first.
        image_vertices[:,0] = image_vertices[:,0]*w/2
        image_vertices[:,1] = image_vertices[:,1]*h/2
    # move to center of image(to pixel coord system)
    image_vertices[:,0] = image_vertices[:,0] + w/2
    image_vertices[:,1] = image_vertices[:,1] + h/2
    # flip vertices along y-axis.
    image_vertices[:,1] = h - image_vertices[:,1] - 1
    return image_vertices
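
Chaining the pieces quoted in this post gives the full forward path from world space to a rendered image. A hypothetical end-to-end smoke test (assuming vertices, triangles, and colors for a mesh are already loaded):

# world space -> camera space -> projection -> image coords -> rasterization
cam_vertices = lookat_camera(vertices, eye=[0, 0, 250], at=[0, 0, 0], up=[0, 1, 0])
proj_vertices = orthographic_project(cam_vertices)       # or perspective_project(...)
image_vertices = to_image(proj_vertices, h=256, w=256)   # is_perspective=True if perspective
image = render_colors(image_vertices, triangles, colors, h=256, w=256)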

Finally, a figure comparing the OpenGL and OpenCV camera coordinate systems:
[figure: OpenGL vs OpenCV camera coordinate systems, with the uv pixel frame]

Here uv is the pixel coordinate system. As the figure shows, converting from the OpenCV camera frame to pixel coordinates only needs an offset on x and y, while converting from the OpenGL camera frame additionally needs a y flip; see the to_image function above for the details.
