Link: Hands-On Autonomous Driving (4): 3D Object Detection Based on LiDAR Point Cloud Data (CSDN blog by 自动驾驶小学生)
1. The KITTI Dataset
The KITTI object detection dataset (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d) contains 7481 training images and 7518 test images, together with the corresponding point cloud data, label files, and calib files. Below, the training sample with index 000008 is used as an example to explain the information contained in the calib and label files.
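As a quick orientation, the sketch below assembles the file paths belonging to one sample index. The directory names (image_2, velodyne, label_2, calib) and the root path are assumptions based on how the downloaded archives are usually unpacked, so adjust them to your own layout.

import os

def kitti_paths(root, idx, split='training'):
    """Assemble the file paths for one KITTI sample (e.g. idx='000008').
    Folder names follow the common KITTI unpacking layout (an assumption)."""
    base = os.path.join(root, split)
    return {
        'image':    os.path.join(base, 'image_2',  idx + '.png'),
        'velodyne': os.path.join(base, 'velodyne', idx + '.bin'),
        'label':    os.path.join(base, 'label_2',  idx + '.txt'),
        'calib':    os.path.join(base, 'calib',    idx + '.txt'),
    }

paths = kitti_paths('/data/kitti/object', '000008')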
Calib file: both cameras and a LiDAR were used during KITTI data collection, so the recorded measurements live in the camera coordinate frame and the LiDAR coordinate frame respectively. The calib file provides the calibration that relates the camera to the LiDAR, and it is saved in txt format.
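To illustrate what the calib file is used for, the following sketch parses the txt into matrices and projects LiDAR points into the left color camera image. The key names P2, R0_rect and Tr_velo_to_cam follow the usual KITTI object calib files; the two helper function names are my own.

import numpy as np

def read_calib(calib_path):
    """Parse a KITTI calib txt into a dict of name -> float array."""
    calib = {}
    with open(calib_path) as f:
        for line in f:
            if ':' not in line:
                continue
            key, vals = line.split(':', 1)
            calib[key.strip()] = np.array([float(v) for v in vals.split()])
    return calib

def velo_to_image(pts_velo, calib):
    """Project Nx3 LiDAR points into the cam-2 image plane.
    Uses the usual KITTI chain: Tr_velo_to_cam -> R0_rect -> P2."""
    P2 = calib['P2'].reshape(3, 4)
    R0 = calib['R0_rect'].reshape(3, 3)
    Tr = calib['Tr_velo_to_cam'].reshape(3, 4)
    pts_h = np.hstack([pts_velo, np.ones((pts_velo.shape[0], 1))])       # N x 4 homogeneous
    pts_cam = R0 @ (Tr @ pts_h.T)                                        # 3 x N rectified camera coords
    pts_img = P2 @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])  # 3 x N image coords
    return (pts_img[:2] / pts_img[2]).T                                  # N x 2 pixel coordinates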
velodyne: the velodyne folder stores the point cloud data captured by the LiDAR. The data are saved in binary format (.bin), and each file holds an array of shape (N, 4), where N is the number of reflected laser points and the 4 values are (x, y, z, r): the position along the three coordinate axes and the reflectance.
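Because the layout is a flat float32 array, a .bin file can be loaded with a single NumPy call; the path below is just the example sample 000008 under the layout assumed earlier.

import numpy as np

# Every point occupies four consecutive float32 values: x, y, z, reflectance.
points = np.fromfile('velodyne/000008.bin', dtype=np.float32).reshape(-1, 4)
print(points.shape)                          # (N, 4)
xyz, reflectance = points[:, :3], points[:, 3]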
label file: the label files hold the annotation data and are saved in txt format. Each line of 000008.txt describes one annotated object; its fields can be parsed with the Object3d class below, whose comments document the meaning of every field.
import numpy as np

class Object3d(object):
    ''' 3d object label '''
    def __init__(self, label_file_line):
        data = label_file_line.split(' ')
        data[1:] = [float(x) for x in data[1:]]

        # extract label, truncation, occlusion
        self.type = data[0]        # 'Car', 'Pedestrian', ...
        self.truncation = data[1]  # truncated pixel ratio [0..1]
        self.occlusion = int(data[2])  # 0=visible, 1=partly occluded, 2=fully occluded, 3=unknown
        self.alpha = data[3]       # object observation angle [-pi..pi]

        # extract 2d bounding box in 0-based coordinates
        self.xmin = data[4]  # left
        self.ymin = data[5]  # top
        self.xmax = data[6]  # right
        self.ymax = data[7]  # bottom
        self.box2d = np.array([self.xmin, self.ymin, self.xmax, self.ymax])

        # extract 3d bounding box information
        self.h = data[8]   # box height
        self.w = data[9]   # box width
        self.l = data[10]  # box length (in meters)
        self.t = (data[11], data[12], data[13])  # location (x,y,z) in camera coord.
        self.ry = data[14]  # yaw angle (around Y-axis in camera coordinates) [-pi..pi]

    def print_object(self):
        print('Type, truncation, occlusion, alpha: %s, %f, %d, %f' % \
            (self.type, self.truncation, self.occlusion, self.alpha))
        print('2d bbox (x0,y0,x1,y1): %f, %f, %f, %f' % \
            (self.xmin, self.ymin, self.xmax, self.ymax))
        print('3d bbox h,w,l: %f, %f, %f' % \
            (self.h, self.w, self.l))
        print('3d bbox location, ry: (%f, %f, %f), %f' % \
            (self.t[0], self.t[1], self.t[2], self.ry))
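A short usage sketch (the label path below is an assumption following the layout shown earlier): read 000008.txt line by line, build one Object3d per line, and print the parsed fields.

with open('label_2/000008.txt') as f:
    objects = [Object3d(line.rstrip()) for line in f if line.strip()]

for obj in objects:
    obj.print_object()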