KITTI: converting Velodyne point-cloud coordinates to pixel coordinates and computing per-point depth (tutorial with code)

Introduction

KITTI is a widely used dataset, but converting its point-cloud coordinates to pixel coordinates, and computing the camera-frame depth of each projected point, can be surprisingly fiddly; newcomers often get lost in the details. This post walks through both conversions, which are its main focus. Unlike most write-ups, it does not present the theory in isolation: the transformations are performed directly in code, the principles are explained alongside the code, and the complete source is provided at the end.

1. The main driver for KITTI coordinate conversion

1.1 The driver code

Let's start with the driver; each step is explained in the sections that follow:

if __name__ == '__main__':
    path = r'C:\Users\Administrator\Desktop\kitti_data\KITTI\training'
    index = '000020'
    calib_path = os.path.join(path, 'calib', index + '.txt')  # calibration file
    image_2_path = os.path.join(path, 'image_2', index + '.png')  # left color image
    label_2_path = os.path.join(path, 'label_2', index + '.txt')  # label file
    velodyne_path = os.path.join(path, 'velodyne', index + '.bin')  # LiDAR point cloud

    img = get_image(image_2_path)  # read the image
    objects = read_label(label_2_path)  # parse the labels
    calib = Calibration(calib_path)  # parse the calibration file
    pc_velo = load_velo_scan(velodyne_path)  # load the point cloud

    img_lidar,valid_pixel_2d,imgfov_pc_rect = main_velo2pixel_and_velo2depth(pc_velo, img, calib, clip_distance=2.0)
    cv2.imwrite('./velo_depth2img.png', img_lidar)
    show_img(img_lidar)

1.2 Data layout

(Figure: layout of the KITTI training data directory)
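The driver above assumes the standard KITTI object-detection training layout (inferred from the paths it joins; your root path will differ):

KITTI/training/
    calib/      000000.txt, 000001.txt, ...   calibration files
    image_2/    000000.png, 000001.png, ...   left color images
    label_2/    000000.txt, 000001.txt, ...   object labels
    velodyne/   000000.bin, 000001.bin, ...   LiDAR scans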

2. Loading the KITTI data

2.1 Loading the image

Since both the 2D and 3D results will be drawn onto the image, we first load it:

def get_image(img_path):
    img = cv2.imread(img_path)
    return img

2.2 Loading the point cloud

Each frame's point cloud is stored as a flat binary file of float32 values; reshaping it to (-1, 4) gives one row per point, with columns (x, y, z, reflectance):

# Load the Velodyne point cloud
def load_velo_scan(velo_filename, dtype=np.float32, n_vec=4):
    scan = np.fromfile(velo_filename, dtype=dtype)
    scan = scan.reshape((-1, n_vec))
    return scan
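A quick sanity check after loading (a sketch; the exact point count varies per frame):

pc_velo = load_velo_scan(velodyne_path)          # (n, 4): x, y, z, reflectance
print(pc_velo.shape)                             # typically on the order of 100k points
print(pc_velo[:, 0].min(), pc_velo[:, 0].max())  # forward (x) extent in meters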

2.3 Loading the calibration file

The Calibration class parses it:

class Calibration(object):
    """ Calibration matrices and utils
        3d XYZ in <label>.txt are in rect camera coord.
        2d box xy are in image2 coord
        Points in <lidar>.bin are in Velodyne coord.

        y_image2 = P^2_rect * x_rect
        y_image2 = P^2_rect * R0_rect * Tr_velo_to_cam * x_velo
        x_ref = Tr_velo_to_cam * x_velo
        x_rect = R0_rect * x_ref

        P^2_rect = [f^2_u,  0,      c^2_u,  -f^2_u b^2_x;
                    0,      f^2_v,  c^2_v,  -f^2_v b^2_y;
                    0,      0,      1,      0]
                 = K * [1|t]

        image2 coord:
         ----> x-axis (u)
        |
        |
        v y-axis (v)

        velodyne coord:
        front x, left y, up z

        rect/ref camera coord:
        right x, down y, front z

        Ref (KITTI paper): http://www.cvlibs.net/publications/Geiger2013IJRR.pdf

        TODO(rqi): do matrix multiplication only once for each projection.
    """

    def __init__(self, calib_filepath, from_video=False):
        # from_video is unused here; this article only loads per-frame calib files
        calibs = self.read_calib_file(calib_filepath)
        # Projection matrix from rect camera coord to image2 coord
        self.P = calibs["P2"]
        self.P = np.reshape(self.P, [3, 4])
        # Rigid transform from Velodyne coord to reference camera coord
        self.V2C = calibs["Tr_velo_to_cam"]
        self.V2C = np.reshape(self.V2C, [3, 4])
        self.C2V = self.inverse_rigid_trans(self.V2C)
        # Rotation from reference camera coord to rect camera coord
        self.R0 = calibs["R0_rect"]
        self.R0 = np.reshape(self.R0, [3, 3])

        # Camera intrinsics and extrinsics
        self.c_u = self.P[0, 2]
        self.c_v = self.P[1, 2]
        self.f_u = self.P[0, 0]
        self.f_v = self.P[1, 1]
        self.b_x = self.P[0, 3] / (-self.f_u)  # relative
        self.b_y = self.P[1, 3] / (-self.f_v)

For background on the rotation conventions, see:
https://zhuanlan.zhihu.com/p/86223712
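For orientation, a KITTI object-detection calibration file is plain text with one "key: values" line per matrix (the numbers are elided below, and the # annotations are mine, not part of the file; read_calib_file in the complete code parses each line into a flat float array):

P0: <12 floats>              # 3x4 projection matrix, camera 0 (left gray)
P1: <12 floats>              # 3x4 projection matrix, camera 1 (right gray)
P2: <12 floats>              # 3x4 projection matrix, camera 2 (left color, used here)
P3: <12 floats>              # 3x4 projection matrix, camera 3 (right color)
R0_rect: <9 floats>          # 3x3 rectifying rotation
Tr_velo_to_cam: <12 floats>  # 3x4 rigid transform, LiDAR -> reference camera
Tr_imu_to_velo: <12 floats>  # 3x4 rigid transform, IMU -> LiDAR (unused here)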

3. Converting KITTI point-cloud coordinates to pixel coordinates

3.1 The integrated function: main_velo2pixel_and_velo2depth

Here is the integrated conversion function. It converts the KITTI point-cloud coordinates to pixel coordinates and also draws each point's depth onto the image; the comments walk through every step:

def main_velo2pixel_and_velo2depth(pc_velo, img, calib, clip_distance=2.0):
    """ Project LiDAR points to image """
    pc_velo = pc_velo[:, :3]  # keep xyz only, drop reflectance
    # velo -> pixel: P2 * R0_rect * Tr_velo_to_cam * [x, y, z, 1].T
    pts_2d = computer_velo2pixel(pc_velo, calib)  # project points to pixel coordinates

    # projected pixels must lie inside the image; build a validity mask
    img = np.copy(img)
    img_height, img_width, _ = img.shape
    xmin, ymin, xmax, ymax = 0, 0, img_width, img_height
    valid_indx = ((pts_2d[:, 0] < xmax) & (pts_2d[:, 0] >= xmin)
                  & (pts_2d[:, 1] < ymax) & (pts_2d[:, 1] >= ymin))  # inside the image bounds

    # clip_distance drops points too close to (or behind) the LiDAR
    valid_indx = valid_indx & (pc_velo[:, 0] > clip_distance) if clip_distance is not None else valid_indx

    # pixel coordinates of the valid points
    valid_pixel_2d = pts_2d[valid_indx, :]

    # depth of the valid points
    vaild_pc_velo = pc_velo[valid_indx, :]
    # velo -> depth needs no intrinsics: R0_rect * Tr_velo_to_cam * [x, y, z, 1].T
    imgfov_pc_rect, imgfov_depth = computer_velo2depth(vaild_pc_velo, calib)

    lidar_depth2img = draw_lidar_depth2image(img, valid_pixel_2d, imgfov_depth)

    return lidar_depth2img, valid_pixel_2d, imgfov_pc_rect

Of the calls above, only computer_velo2pixel and computer_velo2depth need explanation; the drawing helper is routine and is not covered further.

3.2 Point cloud to pixel coordinates: computer_velo2pixel(pc_velo, calib)

3.2.1 Inputs

The first argument is the n x 3 array of KITTI point-cloud coordinates; calib holds the parsed calibration file. Three matrices are used: Tr_velo_to_cam (LiDAR to reference-camera coordinates), R0_rect (the rectifying rotation), and P2 (rectified camera coordinates to pixel coordinates).

3.2.2 Source code

The code is:

def computer_velo2pixel(pc_velo, calib):
    n = pc_velo.shape[0]
    pts_3d_hom = np.hstack((pc_velo, np.ones((n, 1))))  # n x 4 homogeneous LiDAR points

    # LiDAR -> reference camera coordinates (Tr_velo_to_cam)
    pts_3d_ref = np.dot(calib.V2C, pts_3d_hom.T).T

    # reference -> rectified camera coordinates (R0_rect)
    pts_3d_rect = np.transpose(np.dot(calib.R0, np.transpose(pts_3d_ref)))

    pts_3d_rect = np.hstack((pts_3d_rect, np.ones((n, 1))))  # back to homogeneous

    # rectified camera -> image plane (P2), then perspective division by depth
    pts_2d = np.dot(calib.P, pts_3d_rect.T).T
    pts_2d[:, 0] /= pts_2d[:, 2]
    pts_2d[:, 1] /= pts_2d[:, 2]

    return pts_2d[:, 0:2]

It is short, but it is the core of the whole pipeline.
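The docstring's TODO ("do matrix multiplication only once for each projection") points at an easy optimization: the three matrices can be composed once per frame into a single 3x4 matrix. A minimal sketch of that idea (my own helper, not part of the code above):

def compose_velo2pixel_matrix(calib):
    # pad Tr_velo_to_cam (3x4) to a 4x4 homogeneous transform
    Tr = np.vstack((calib.V2C, np.array([[0.0, 0.0, 0.0, 1.0]])))
    # embed R0_rect (3x3) in a 4x4 homogeneous transform
    R0 = np.eye(4)
    R0[:3, :3] = calib.R0
    # full chain: pixel_h = P2 @ R0_rect @ Tr_velo_to_cam @ [x, y, z, 1].T
    return calib.P @ R0 @ Tr  # 3x4

With it, the projection becomes a single matrix multiplication and reproduces computer_velo2pixel:

M = compose_velo2pixel_matrix(calib)
pts_hom = np.hstack((pc_velo, np.ones((pc_velo.shape[0], 1))))  # n x 4
uvw = pts_hom @ M.T                                             # n x 3
uv = uvw[:, :2] / uvw[:, 2:3]                                   # n x 2 pixel coordinates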

3.3 Selecting valid indices

Section 3.2 gives a pixel coordinate for every point, including points that project outside the image; those cannot be drawn, so we mask them out. clip_distance additionally rejects points too close to the LiDAR:

# projected pixels must lie inside the image; build a validity mask
img = np.copy(img)
img_height, img_width, _ = img.shape
xmin, ymin, xmax, ymax = 0, 0, img_width, img_height
valid_indx = ((pts_2d[:, 0] < xmax) & (pts_2d[:, 0] >= xmin)
              & (pts_2d[:, 1] < ymax) & (pts_2d[:, 1] >= ymin))  # inside the image bounds

# clip_distance drops points too close to (or behind) the LiDAR
valid_indx = valid_indx & (pc_velo[:, 0] > clip_distance) if clip_distance is not None else valid_indx
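
One subtlety worth making explicit: the image-bounds test alone would also keep points behind the camera, since perspective division by a negative depth can still land inside the image. Here clip_distance > 0 on the LiDAR x axis covers that case, because the camera looks along the LiDAR +x direction. An equivalent, more direct guard (a sketch, reusing computer_velo2depth from section 3.5 below) would be:

_, depth = computer_velo2depth(pc_velo, calib)
valid_indx = valid_indx & (depth > 0)  # drop points behind the image plane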


3.4 Valid points and their pixel coordinates

This part is simple:

# pixel coordinates of the valid points
valid_pixel_2d = pts_2d[valid_indx, :]

# the valid points themselves, used next to compute depth
vaild_pc_velo = pc_velo[valid_indx, :]

3.5 Depth of the point-cloud points

The depth is obtained with the following call:

# velo -> depth needs no intrinsics: R0_rect * Tr_velo_to_cam * [x, y, z, 1].T
imgfov_pc_rect, imgfov_depth = computer_velo2depth(vaild_pc_velo, calib)

Unlike computer_velo2pixel, this function never multiplies by the intrinsics P2: it stops at the rectified camera coordinates and takes their z component as the depth. Its code:

def computer_velo2depth(pc_velo, calib):
    n = pc_velo.shape[0]
    pts_3d_hom = np.hstack((pc_velo, np.ones((n, 1))))  # n x 4 homogeneous LiDAR points

    # LiDAR -> reference camera coordinates (Tr_velo_to_cam)
    pts_3d_ref = np.dot(calib.V2C, pts_3d_hom.T).T

    # reference -> rectified camera coordinates (R0_rect)
    pts_3d_rect = np.transpose(np.dot(calib.R0, np.transpose(pts_3d_ref)))

    depth = pts_3d_rect[:, 2]  # depth = z in the rectified camera frame

    return pts_3d_rect, depth

So the depth is expressed in the camera frame: it is the z coordinate of the point after transforming from LiDAR coordinates to the camera-0 (reference) frame and applying the R0 rectification (at least, that is my understanding).
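With valid_pixel_2d and the depths (imgfov_pc_rect[:, 2]) in hand, building a sparse depth map is a short step and a common downstream use. A minimal sketch (my own code, assuming the return values of main_velo2pixel_and_velo2depth and the image shape):

def sparse_depth_map(valid_pixel_2d, imgfov_pc_rect, img_height, img_width):
    depth_map = np.zeros((img_height, img_width), dtype=np.float32)  # 0 means no LiDAR return
    u = np.clip(np.round(valid_pixel_2d[:, 0]).astype(np.int32), 0, img_width - 1)
    v = np.clip(np.round(valid_pixel_2d[:, 1]).astype(np.int32), 0, img_height - 1)
    for ui, vi, di in zip(u, v, imgfov_pc_rect[:, 2]):
        # several points can land on the same pixel; keep the nearest one
        if depth_map[vi, ui] == 0 or di < depth_map[vi, ui]:
            depth_map[vi, ui] = di
    return depth_map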

4. Complete code and results

4.1 Results

(Figure: LiDAR points projected onto the image, colored by depth)

4.2 Complete code

The code below runs as-is. It includes both the projection of the label 3D boxes into the image and the point-cloud-to-pixel conversion described above:

import os
import cv2
import numpy as np
import matplotlib.pyplot as plt

def get_image(img_path):
    img = cv2.imread(img_path)
    return img
def show_img(img):
    plt.imshow(img)
    plt.show()
# Parse KITTI label entries
class Object3d(object):
    """ 3d object label """

    def __init__(self, label_file_line):
        data = label_file_line.split(" ")
        data[1:] = [float(x) for x in data[1:]]

        # extract label, truncation, occlusion
        self.type = data[0]  # 'Car', 'Pedestrian', ...
        self.truncation = data[1]  # truncated pixel ratio [0..1]
        self.occlusion = int(
            data[2]
        )  # 0=visible, 1=partly occluded, 2=fully occluded, 3=unknown
        self.alpha = data[3]  # object observation angle [-pi..pi]

        # extract 2d bounding box in 0-based coordinates
        self.xmin = data[4]  # left
        self.ymin = data[5]  # top
        self.xmax = data[6]  # right
        self.ymax = data[7]  # bottom
        self.box2d = np.array([self.xmin, self.ymin, self.xmax, self.ymax])

        # extract 3d bounding box information
        self.h = data[8]  # box height
        self.w = data[9]  # box width
        self.l = data[10]  # box length (in meters)
        self.t = (data[11], data[12], data[13])  # location (x,y,z) in camera coord.
        self.ry = data[14]  # yaw angle (around Y-axis in camera coordinates) [-pi..pi]

    def estimate_diffculty(self):
        """ Function that estimate difficulty to detect the object as defined in kitti website"""
        # height of the 2d bounding box in pixels
        bb_height = np.abs(self.ymax - self.ymin)

        if bb_height >= 40 and self.occlusion == 0 and self.truncation <= 0.15:
            return "Easy"
        elif bb_height >= 25 and self.occlusion in [0, 1] and self.truncation <= 0.30:
            return "Moderate"
        elif (
            bb_height >= 25 and self.occlusion in [0, 1, 2] and self.truncation <= 0.50
        ):
            return "Hard"
        else:
            return "Unknown"

    def print_object(self):
        print(
            "Type, truncation, occlusion, alpha: %s, %d, %d, %f"
            % (self.type, self.truncation, self.occlusion, self.alpha)
        )
        print(
            "2d bbox (x0,y0,x1,y1): %f, %f, %f, %f"
            % (self.xmin, self.ymin, self.xmax, self.ymax)
        )
        print("3d bbox h,w,l: %f, %f, %f" % (self.h, self.w, self.l))
        print(
            "3d bbox location, ry: (%f, %f, %f), %f"
            % (self.t[0], self.t[1], self.t[2], self.ry)
        )
        print("Difficulty of estimation: {}".format(self.estimate_diffculty()))
def project_to_image(pts_3d, P):
    """ Project 3d points to image plane.

    Usage: pts_2d = projectToImage(pts_3d, P)
      input: pts_3d: nx3 matrix
             P:      3x4 projection matrix
      output: pts_2d: nx2 matrix

      P(3x4) dot pts_3d_extended(4xn) = projected_pts_2d(3xn)
      => normalize projected_pts_2d(2xn)

      <=> pts_3d_extended(nx4) dot P'(4x3) = projected_pts_2d(nx3)
          => normalize projected_pts_2d(nx2)
    """
    n = pts_3d.shape[0]
    pts_3d_extend = np.hstack((pts_3d, np.ones((n, 1))))
    # print(('pts_3d_extend shape: ', pts_3d_extend.shape))
    # pts_2d = np.dot(P, pts_3d_extend.T).T  # equivalent to the line below
    pts_2d = np.dot(pts_3d_extend, np.transpose(P))  # nx3
    pts_2d[:, 0] /= pts_2d[:, 2]
    pts_2d[:, 1] /= pts_2d[:, 2]
    return pts_2d[:, 0:2]
def roty(t):
    """ Rotation about the y-axis. """
    c = np.cos(t)
    s = np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def compute_box_3d(obj, P):
    """ Takes an object and a projection matrix (P) and projects the 3d
        bounding box into the image plane.
        Returns:
            corners_2d: (8,2) array in left image coord.
            corners_3d: (8,3) array in in rect camera coord.
    """
    # compute rotational matrix around yaw axis
    R = roty(obj.ry)

    # 3d bounding box dimensions
    l = obj.l
    w = obj.w
    h = obj.h

    # 3d bounding box corners
    x_corners = [l / 2, l / 2, -l / 2, -l / 2, l / 2, l / 2, -l / 2, -l / 2]
    y_corners = [0, 0, 0, 0, -h, -h, -h, -h]
    z_corners = [w / 2, -w / 2, -w / 2, w / 2, w / 2, -w / 2, -w / 2, w / 2]

    # rotate and translate 3d bounding box
    corners_3d = np.dot(R, np.vstack([x_corners, y_corners, z_corners]))
    # print corners_3d.shape
    corners_3d[0, :] = corners_3d[0, :] + obj.t[0]
    corners_3d[1, :] = corners_3d[1, :] + obj.t[1]
    corners_3d[2, :] = corners_3d[2, :] + obj.t[2]
    # print 'cornsers_3d: ', corners_3d
    # only draw 3d bounding box for objs in front of the camera
    if np.any(corners_3d[2, :] < 0.1):
        corners_2d = None
        return corners_2d, np.transpose(corners_3d)

    # project the 3d bounding box into the image plane (camera coordinates -> pixels)
    corners_2d = project_to_image(np.transpose(corners_3d), P)
    # print 'corners_2d: ', corners_2d
    return corners_2d, np.transpose(corners_3d)
# Read the label file
def read_label(label_filename):
    lines = [line.rstrip() for line in open(label_filename)]
    objects = [Object3d(line) for line in lines]
    return objects
class Calibration(object):
    def __init__(self, calib_filepath, from_video=False):

        calibs = self.read_calib_file(calib_filepath)
        # Projection matrix from rect camera coord to image2 coord
        self.P = calibs["P2"]
        self.P = np.reshape(self.P, [3, 4])
        # Rigid transform from Velodyne coord to reference camera coord
        self.V2C = calibs["Tr_velo_to_cam"]
        self.V2C = np.reshape(self.V2C, [3, 4])  # LiDAR -> reference camera transform
        self.C2V = self.inverse_rigid_trans(self.V2C)
        # Rotation from reference camera coord to rect camera coord
        self.R0 = calibs["R0_rect"]
        self.R0 = np.reshape(self.R0, [3, 3])

        # Camera intrinsics and extrinsics
        self.c_u = self.P[0, 2]
        self.c_v = self.P[1, 2]
        self.f_u = self.P[0, 0]
        self.f_v = self.P[1, 1]
        self.b_x = self.P[0, 3] / (-self.f_u)  # relative
        self.b_y = self.P[1, 3] / (-self.f_v)

    def read_calib_file(self, filepath):
        """ Read in a calibration file and parse into a dictionary.
        Ref: https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py
        """
        data = {}
        with open(filepath, "r") as f:
            for line in f.readlines():
                line = line.rstrip()
                if len(line) == 0:
                    continue
                key, value = line.split(":", 1)
                # The only non-float values in these files are dates, which
                # we don't care about anyway
                try:
                    data[key] = np.array([float(x) for x in value.split()])
                except ValueError:
                    pass

        return data



    def inverse_rigid_trans(self,Tr):
        """ Inverse a rigid body transform matrix (3x4 as [R|t])
            [R'|-R't; 0|1]
        """
        inv_Tr = np.zeros_like(Tr)  # 3x4
        inv_Tr[0:3, 0:3] = np.transpose(Tr[0:3, 0:3])
        inv_Tr[0:3, 3] = np.dot(-np.transpose(Tr[0:3, 0:3]), Tr[0:3, 3])
        return inv_Tr


def draw_projected_box3d(image, qs, color=(0, 255, 0), thickness=2):
    """ Draw 3d bounding box in image
        qs: (8,2) array of projected vertices for the 3d box in following order:
            1 -------- 0
           /|         /|
          2 -------- 3 .
          | |        | |
          . 5 -------- 4
          |/         |/
          6 -------- 7
    """
    qs = qs.astype(np.int32)
    for k in range(0, 4):
        # Ref: http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html
        i, j = k, (k + 1) % 4
        # use LINE_AA for opencv3
        # cv2.line(image, (qs[i,0],qs[i,1]), (qs[j,0],qs[j,1]), color, thickness, cv2.CV_AA)
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)
        i, j = k + 4, (k + 1) % 4 + 4
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)

        i, j = k, k + 4
        cv2.line(image, (qs[i, 0], qs[i, 1]), (qs[j, 0], qs[j, 1]), color, thickness)
    return image

###### Project label 3D boxes to pixel coordinates and draw them ######
def labels_boxes2pixel_in_image(img, objects, calib):
    """
    Show image with 2D bounding boxes
    :param img: 图像内容
    :param objects: label标签内容
    :param calib: 标定文件内容
    :return: img1, img2,box3d_to_pixel2d,第一个值是2d坐标图像,第二个值是3d坐标转到像素画的图像,第三个值是相机坐标与像素坐标对应点
    """
    img1 = np.copy(img)  # for 2d bbox
    img2 = np.copy(img)  # for 3d bbox
    # TODO: change the color of boxes
    box3d_to_pixel2d = {"corners_3d":[],"box3d_pts_2d":[]}
    for obj in objects:
        # draw the 2d box
        if obj.type == "DontCare":
            continue
        else:
            cv2.rectangle(img1,(int(obj.xmin),int(obj.ymin)),(int(obj.xmax), int(obj.ymax)),(0, 255, 0),2,)
            cv2.putText(img1, str(obj.type), (int(obj.xmin), int(obj.ymin)), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        # Key step: project the label's 3D box corners to 2D pixel coordinates
        box3d_pts_2d, corners_3d = compute_box_3d(obj, calib.P)
        box3d_to_pixel2d['corners_3d'].append(corners_3d)
        box3d_to_pixel2d['box3d_pts_2d'].append(box3d_pts_2d)
        if box3d_pts_2d is None:
            print("something wrong in the 3D box.")
            continue
        img2 = draw_projected_box3d(img2, box3d_pts_2d, color=(0, 255, 0))
    return img1, img2,box3d_to_pixel2d

def main_label2pixel(img, objects, calib):
    img_bbox2d, img_bbox3d, box3d_to_pixel2d = labels_boxes2pixel_in_image(img, objects, calib)  # integrated label-to-pixel conversion
    return img_bbox2d, img_bbox3d, box3d_to_pixel2d

###### Project point cloud to pixel coordinates, compute depth, and draw ######

def computer_velo2pixel(pc_velo, calib):
    n = pc_velo.shape[0]
    pts_3d_hom = np.hstack((pc_velo, np.ones((n, 1))))  # n x 4 homogeneous LiDAR points

    # LiDAR -> reference camera coordinates (Tr_velo_to_cam)
    pts_3d_ref = np.dot(calib.V2C, pts_3d_hom.T).T

    # reference -> rectified camera coordinates (R0_rect)
    pts_3d_rect = np.transpose(np.dot(calib.R0, np.transpose(pts_3d_ref)))

    pts_3d_rect = np.hstack((pts_3d_rect, np.ones((n, 1))))  # back to homogeneous

    # rectified camera -> image plane (P2), then perspective division by depth
    pts_2d = np.dot(calib.P, pts_3d_rect.T).T
    pts_2d[:, 0] /= pts_2d[:, 2]
    pts_2d[:, 1] /= pts_2d[:, 2]

    return pts_2d[:, 0:2]

def computer_velo2depth(pc_velo, calib):
    n = pc_velo.shape[0]
    pts_3d_hom = np.hstack((pc_velo, np.ones((n, 1))))  # n x 4 homogeneous LiDAR points

    # LiDAR -> reference camera coordinates (Tr_velo_to_cam)
    pts_3d_ref = np.dot(calib.V2C, pts_3d_hom.T).T

    # reference -> rectified camera coordinates (R0_rect)
    pts_3d_rect = np.transpose(np.dot(calib.R0, np.transpose(pts_3d_ref)))

    depth = pts_3d_rect[:, 2]  # depth = z in the rectified camera frame

    return pts_3d_rect, depth

def draw_lidar_depth2image(img, imgfov_pts_2d, imgfov_depth):
    import matplotlib.pyplot as plt

    cmap = plt.cm.get_cmap("hsv", 256)
    cmap = np.array([cmap(i) for i in range(256)])[:, :3] * 255

    for i in range(imgfov_pts_2d.shape[0]):
        depth = imgfov_depth[i]
        # map depth to a color index; clip so near points (depth < 2.5 m) cannot overflow the 256-entry map
        idx = int(np.clip(640.0 / depth, 0, 255))
        cv2.circle(img, (int(np.round(imgfov_pts_2d[i, 0])), int(np.round(imgfov_pts_2d[i, 1]))),
                   2, color=tuple(cmap[idx, :]), thickness=-1)

    return img

def load_velo_scan(velo_filename, dtype=np.float32, n_vec=4):
    scan = np.fromfile(velo_filename, dtype=dtype)
    scan = scan.reshape((-1, n_vec))
    return scan

def main_velo2pixel_and_velo2depth(pc_velo, img, calib, clip_distance=2.0):
    """ Project LiDAR points to image """
    pc_velo = pc_velo[:, :3]  # keep xyz only, drop reflectance
    # velo -> pixel: P2 * R0_rect * Tr_velo_to_cam * [x, y, z, 1].T
    pts_2d = computer_velo2pixel(pc_velo, calib)  # project points to pixel coordinates

    # projected pixels must lie inside the image; build a validity mask
    img = np.copy(img)
    img_height, img_width, _ = img.shape
    xmin, ymin, xmax, ymax = 0, 0, img_width, img_height
    valid_indx = ((pts_2d[:, 0] < xmax) & (pts_2d[:, 0] >= xmin)
                  & (pts_2d[:, 1] < ymax) & (pts_2d[:, 1] >= ymin))  # inside the image bounds

    # clip_distance drops points too close to (or behind) the LiDAR
    valid_indx = valid_indx & (pc_velo[:, 0] > clip_distance) if clip_distance is not None else valid_indx

    # pixel coordinates of the valid points
    valid_pixel_2d = pts_2d[valid_indx, :]

    # depth of the valid points
    vaild_pc_velo = pc_velo[valid_indx, :]
    # velo -> depth needs no intrinsics: R0_rect * Tr_velo_to_cam * [x, y, z, 1].T
    imgfov_pc_rect, imgfov_depth = computer_velo2depth(vaild_pc_velo, calib)

    lidar_depth2img = draw_lidar_depth2image(img, valid_pixel_2d, imgfov_depth)

    return lidar_depth2img, valid_pixel_2d, imgfov_pc_rect



if __name__ == '__main__':
    path = r'C:\Users\Administrator\Desktop\kitti_data\KITTI\training'
    index = '000020'
    calib_path = os.path.join(path, 'calib', index + '.txt')  # calibration file
    image_2_path = os.path.join(path, 'image_2', index + '.png')  # left color image
    label_2_path = os.path.join(path, 'label_2', index + '.txt')  # label file
    velodyne_path = os.path.join(path, 'velodyne', index + '.bin')  # LiDAR point cloud

    img = get_image(image_2_path)  # read the image
    objects = read_label(label_2_path)  # parse the labels
    calib = Calibration(calib_path)  # parse the calibration file
    pc_velo = load_velo_scan(velodyne_path)  # load the point cloud

    img_bbox2d, img_bbox3d, box3d_to_pixel2d = labels_boxes2pixel_in_image(img, objects, calib)  # integrated label-to-pixel conversion
    cv2.imwrite('./bbox2d.png',img_bbox2d)
    cv2.imwrite('./bbox3d.png',img_bbox3d)
    img_lidar,valid_pixel_2d,imgfov_pc_rect = main_velo2pixel_and_velo2depth(pc_velo, img, calib, clip_distance=2.0)
    cv2.imwrite('./velo_depth2img.png', img_lidar)
    show_img(img_lidar)
