nvdiffrast Differentiable Rendering Notes

I. Introduction

nvdiffrast is an efficient, low-level rendering library; for details, see the official documentation (or its Chinese translation).

Compared with differentiable-rendering libraries such as PyTorch3D and Kaolin, nvdiffrast's strength is efficiency; the trade-off is its low-level design, so cameras, lighting, materials, and so on must be handled separately. The official docs describe it this way:

nvdiffrast has no built-in camera models, lighting/material models, etc. Instead, the provided operations encapsulate only the most graphics-centric steps in the modern hardware graphics pipeline: rasterization, interpolation, texturing, and antialiasing.

II. Usage Notes

This section uses kaolin + nvdiffrast to render masks, normal maps, and color images.

1. Camera parameters

Camera parameters are set up with kaolin; the code is adapted from FlexiCubes:

import math

import numpy as np
import torch
import kaolin as kal

def get_random_camera_batch(batch_size, fovy=np.deg2rad(45), iter_res=[512, 512],
                            cam_near_far=[0.1, 1000.0], cam_radius=3.0,
                            device="cuda", use_kaolin=True):
    # The non-kaolin branch from FlexiCubes is omitted here.
    if use_kaolin:
        # Sample random spherical coordinates and convert to cartesian eye positions.
        camera_pos = torch.stack(kal.ops.coords.spherical2cartesian(
            *kal.ops.random.sample_spherical_coords(
                (batch_size,), azimuth_low=0., azimuth_high=math.pi * 2,
                elevation_low=-math.pi / 2., elevation_high=math.pi / 2., device=device),
            cam_radius
        ), dim=-1)
        return kal.render.camera.Camera.from_args(
            # Jitter the eye position to vary the camera distance slightly.
            eye=camera_pos + torch.rand((batch_size, 1), device=device) * 0.5 - 0.25,
            at=torch.zeros(batch_size, 3),
            up=torch.tensor([[0., 1., 0.]]),
            fov=fovy,
            near=cam_near_far[0], far=cam_near_far[1],
            height=iter_res[0], width=iter_res[1],
            device=device
        )

This generates random camera viewpoints around the object (the mesh must be normalized first).
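The normalization those cameras assume can be sketched with plain numpy (a minimal version of the centering/rescaling done in load_mesh below; the 1.8 extent maps the longest axis into [-0.9, 0.9], so a cam_radius of 3 comfortably encloses the object):

```python
import numpy as np

def normalize_vertices(vertices, target_extent=1.8):
    """Center a vertex array on the origin and scale its longest axis to target_extent."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    scale = target_extent / np.max(vmax - vmin)
    return (vertices - (vmax + vmin) / 2) * scale

# A lopsided box: x spans [0, 4], y spans [1, 2], z spans [-1, 1]
verts = np.array([[0., 1., -1.], [4., 2., 1.]])
norm = normalize_vertices(verts)
print(norm)  # longest axis (x) now spans [-0.9, 0.9]
```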

2. Loading the mesh

Here trimesh is used to read the OBJ file. I also tested loading OBJ with kaolin and pytorch3d: trimesh returns the texture as a PIL image that needs converting; pytorch3d returns everything already converted to tensors; kaolin's OBJ reader returns a list, i.e. possibly several objects at once.

import torch
import trimesh
from torchvision import transforms

class Mesh:
    def __init__(self, vertices, faces, uv=None, texture=None, color=None):
        self.vertices = vertices
        self.faces = faces
        self.uv = uv
        self.texture = texture
        self.color = color

def load_mesh(path, device):
    mesh_np = trimesh.load(path)
    vertices = torch.tensor(mesh_np.vertices, device=device, dtype=torch.float)
    faces = torch.tensor(mesh_np.faces, device=device, dtype=torch.long)

    # Per-vertex colors, if present (trimesh stores them as uint8 RGBA).
    if hasattr(mesh_np, 'visual') and hasattr(mesh_np.visual, 'vertex_colors'):
        color = torch.tensor(mesh_np.visual.vertex_colors, device=device, dtype=torch.float) / 255
    else:
        color = None

    # Normalize: center on the origin, rescale the longest axis to [-0.9, 0.9].
    vmin, vmax = vertices.min(dim=0)[0], vertices.max(dim=0)[0]
    scale = 1.8 / torch.max(vmax - vmin).item()
    vertices = (vertices - (vmax + vmin) / 2) * scale

    # UVs and texture, if present. trimesh returns the texture as a PIL image,
    # so convert it to an HxWxC float tensor.
    if hasattr(mesh_np, 'visual') and hasattr(mesh_np.visual, 'uv'):
        uv = torch.tensor(mesh_np.visual.uv, device=device, dtype=torch.float)
        uv[:, 1] = 1 - uv[:, 1]  # flip V: OBJ UV origin is bottom-left, image rows start at the top
        img = mesh_np.visual.material.image
        texture = transforms.ToTensor()(img).permute(1, 2, 0).contiguous().to(device)
    else:
        uv = None
        texture = None
    return Mesh(vertices, faces, uv, texture, color)

Pay attention to the UV convention when loading. UVs give, per vertex, a position in the texture image, with coordinates in [0, 1].

If the rendered texture comes out misaligned, the V coordinate needs flipping: the texture tensor stores row 0 at the top of the image, while the OBJ UV origin is at the bottom-left. Meshlab displayed the texture correctly and pytorch3d rendered it correctly, but nvdiffrast did not. The fix:

uv[:,1] = 1-uv[:,1]
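A minimal numpy sketch of why the flip matters: sampling the same V before and after the flip hits different rows of the image tensor (nearest-neighbor sampling here is only for illustration).

```python
import numpy as np

# A 4x4 single-channel "texture": each pixel's value encodes its row index.
tex = np.arange(4, dtype=np.float32).repeat(4).reshape(4, 4)

def sample_nearest(tex, u, v):
    """Nearest-neighbor sample with v measured from the TOP row (tensor convention)."""
    h, w = tex.shape
    row = min(int(v * h), h - 1)
    col = min(int(u * w), w - 1)
    return tex[row, col]

v_obj = 0.0                                 # OBJ convention: v=0 is the BOTTOM of the image
print(sample_nearest(tex, 0.5, v_obj))      # without flip: reads the top row (0.0)
print(sample_nearest(tex, 0.5, 1 - v_obj))  # with flip: reads the bottom row (3.0)
```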

3. Rendering

Rendering pipeline: rasterization ---> interpolation ---> texturing (with mipmaps)

import torch
import kaolin as kal
import nvdiffrast.torch as dr

glctx = dr.RasterizeCudaContext()

def render_mesh(mesh, camera, iter_res, return_types=["mask", "depth"],
                white_bg=False, wireframe_thickness=0.4):
    vertices_camera = camera.extrinsics.transform(mesh.vertices)
    face_vertices_camera = kal.ops.mesh.index_vertices_by_faces(
        vertices_camera, mesh.faces
    )
    # Projection: nvdiffrast takes clip coordinates as input so it can apply
    # perspective-correct barycentric interpolation. Using
    # `camera.intrinsics.transform(vertices_camera)` would instead return
    # normalized device coordinates.
    proj = camera.projection_matrix().unsqueeze(1)
    proj[:, :, 1, 1] = -proj[:, :, 1, 1]  # flip y to match nvdiffrast's convention
    homogeneous_vecs = kal.render.camera.up_to_homogeneous(vertices_camera)
    vertices_clip = (proj @ homogeneous_vecs.unsqueeze(-1)).squeeze(-1)
    faces_int = mesh.faces.int()

    rast, rast_out_db = dr.rasterize(glctx, vertices_clip, faces_int, iter_res)

    out_dict = {}
    for rtype in return_types:
        if rtype == "mask":
            img = dr.antialias((rast[..., -1:] > 0).float(), rast, vertices_clip, faces_int)
        elif rtype == "depth":
            # Interpolated camera-space positions; the z channel is the depth.
            img = dr.interpolate(homogeneous_vecs, rast, faces_int)[0]
        elif rtype == "wireframe":
            img = torch.logical_or(
                torch.logical_or(rast[..., 0] < wireframe_thickness, rast[..., 1] < wireframe_thickness),
                (rast[..., 0] + rast[..., 1]) > (1. - wireframe_thickness)
            ).unsqueeze(-1)
        elif rtype == "normals":
            # Assumes per-face-vertex normals in mesh.nrm (not set by load_mesh above).
            img = dr.interpolate(
                mesh.nrm.unsqueeze(0).contiguous(), rast,
                torch.arange(mesh.faces.shape[0] * 3, device='cuda', dtype=torch.int).reshape(-1, 3)
            )[0]
        elif rtype == "color":
            # UV-textured color with mipmapped sampling.
            uvs = mesh.uv.contiguous()
            texc, texd = dr.interpolate(uvs, rast, faces_int, rast_db=rast_out_db, diff_attrs='all')
            img = dr.texture(mesh.texture[None, ...], texc, uv_da=texd,
                             filter_mode='linear-mipmap-linear', max_mip_level=9)
            # Simpler, non-mipmapped alternative:
            # texc, _ = dr.interpolate(uvs, rast, faces_int)
            # img = dr.texture(mesh.texture[None, ...], texc, filter_mode='linear')
        elif rtype == "vertices_color":
            vtx_color = mesh.color[:, :3].contiguous()
            color, _ = dr.interpolate(vtx_color, rast, faces_int)
            img = dr.antialias(color, rast, vertices_clip, faces_int)
        if white_bg:
            bg = torch.ones_like(img)
            alpha = (rast[..., -1:] > 0).float()
            img = torch.lerp(bg, img, alpha)
        out_dict[rtype] = img
    return out_dict

kal_cam = get_random_camera_batch(8, iter_res=iter_res, device=device, use_kaolin=True)
target = render_mesh(gt_mesh, kal_cam, iter_res, return_types=["color"], white_bg=True)

dr.rasterize produces the rasterization output; dr.interpolate then interpolates per-vertex attributes over it; for textures, dr.texture performs the texture sampling; and dr.antialias can be applied for antialiasing.
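A numpy sketch of what dr.interpolate computes per covered pixel, assuming (as in the nvdiffrast docs) that the first two rasterizer output channels are the barycentric coordinates u, v; the exact assignment of u and v to triangle vertices follows the rasterizer's convention:

```python
import numpy as np

def interpolate_pixel(attrs, tri, u, v):
    """Barycentric interpolation of per-vertex attributes for one pixel.

    attrs: [num_vertices, C] attribute array; tri: the 3 vertex indices of the
    triangle covering the pixel; (u, v): barycentric coords from the rasterizer.
    """
    a0, a1, a2 = attrs[tri[0]], attrs[tri[1]], attrs[tri[2]]
    return u * a0 + v * a1 + (1.0 - u - v) * a2

# Per-vertex RGB colors of one triangle: red, green, blue corners.
colors = np.eye(3, dtype=np.float32)
tri = [0, 1, 2]
print(interpolate_pixel(colors, tri, 1.0, 0.0))  # at vertex 0: pure red
print(interpolate_pixel(colors, tri, 1/3, 1/3))  # centroid: equal mix
```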

When interpolating, a triangle index tensor must be provided:

tri: Triangle tensor with shape [num_triangles, 3] and dtype `torch.int32`.

For UV mapping, the official demo passes a separate uv_idx as the UV index buffer. In typical OBJ files, uv_idx is identical to faces, so most OBJ readers do not record it separately; when the two differ, it should be recoverable from the file (the middle column of the OBJ's `f` entries).
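When the position and UV indices do differ, they can be read from the `f` lines of the OBJ directly. A minimal sketch (a hypothetical helper, handling only `v/vt`-style face entries):

```python
def parse_obj_face_line(line):
    """Parse an OBJ face line like 'f 1/4 2/5 3/6' into 0-based
    (vertex_indices, uv_indices). The middle slot of each v/vt/vn
    triple is the UV index; it need not match the vertex index."""
    vert_idx, uv_idx = [], []
    for token in line.split()[1:]:          # skip the leading 'f'
        parts = token.split('/')
        vert_idx.append(int(parts[0]) - 1)  # OBJ indices are 1-based
        if len(parts) > 1 and parts[1]:
            uv_idx.append(int(parts[1]) - 1)
    return vert_idx, uv_idx

print(parse_obj_face_line("f 1/4 2/5 3/6"))  # ([0, 1, 2], [3, 4, 5])
```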

(Normal rendering was not tested.)

III. Troubleshooting

1. UV mapping misalignment

Fix:

uv[:,1] = 1-uv[:,1]
