SurroundOcc

Motivation: current 3D object detection cannot recognize objects of arbitrary shape or from an unbounded set of classes.

Pipeline: multi-scale features are first extracted from the surround-view images and lifted into 3D space with spatial 2D-3D attention (producing 3D volume features rather than BEV features); the volumes are then upsampled and supervised at multiple levels. Where does the ground truth for this supervision come from? The paper designs a pipeline to generate dense occupancy GT: multi-frame LiDAR scans of dynamic objects and the static scene are fused, Poisson reconstruction fills the holes, and the resulting mesh is voxelized to obtain dense occupancy labels.
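To make the multi-level supervision concrete, here is a minimal PyTorch sketch of that decoder-plus-loss idea. It is not the official SurroundOcc code: the 2D-3D attention lifting is abstracted away, and the channel counts, resolutions and class count are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleOccHead(nn.Module):
    def __init__(self, channels=(64, 128, 256), num_classes=17):
        super().__init__()
        # one occupancy classifier per 3D feature level
        self.heads = nn.ModuleList(nn.Conv3d(c, num_classes, 1) for c in channels)
        # 3D deconvolutions to upsample the coarser volumes before fusing them
        self.ups = nn.ModuleList(
            nn.ConvTranspose3d(channels[i + 1], channels[i], kernel_size=2, stride=2)
            for i in range(len(channels) - 1))

    def forward(self, volumes):
        # volumes: list of 3D feature volumes from fine to coarse, e.g.
        # [(B, 64, 100, 100, 8), (B, 128, 50, 50, 4), (B, 256, 25, 25, 2)]
        logits = []
        feat = volumes[-1]
        for i in range(len(volumes) - 1, -1, -1):
            if i < len(volumes) - 1:
                feat = self.ups[i](feat) + volumes[i]  # upsample and fuse with the finer volume
            logits.append(self.heads[i](feat))         # per-level occupancy logits
        return logits[::-1]                            # fine -> coarse

def multi_level_loss(level_logits, gt_fine):
    # supervise every level by downsampling the dense GT to that level's resolution
    loss = 0.0
    for logits in level_logits:
        gt = F.interpolate(gt_fine[:, None].float(), size=logits.shape[2:],
                           mode='nearest')[:, 0].long()
        loss = loss + F.cross_entropy(logits, gt)
    return loss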

Shortcomings of existing work:

LiDAR-based methods: LiDAR sensors are expensive and their scans are sparse.

Camera-only 3D detection is prone to the long-tail problem.

3D scene reconstruction can help downstream perception tasks, but it does not handle occlusion.

3D occupancy prediction is a promising alternative.

Methodology:

Problem formulation

Network

Difference between spatial 2D-3D attention and BEV cross-attention:
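The post does not elaborate here, so the following is only a hedged sketch of the key difference: BEV cross-attention keeps one query per (x, y) ground cell and collapses the height axis, while spatial 2D-3D attention keeps one query per (x, y, z) voxel, each with a true 3D reference point. Both gather image features by projecting their reference points into the surround cameras; plain projection plus bilinear sampling stands in below for the deformable attention used in practice, and all tensor shapes are assumptions.

import torch
import torch.nn.functional as F

def sample_image_features(ref_points, img_feats, lidar2img, img_hw):
    # ref_points: (Q, 3) 3D reference points in the ego frame
    #   BEV queries:   Q = X * Y     (one fixed/learned height per pillar)
    #   voxel queries: Q = X * Y * Z (a genuine 3D point per voxel)
    # img_feats: (N_cam, C, Hf, Wf) per-camera feature maps
    # lidar2img: (N_cam, 4, 4) projection matrices; img_hw: (H, W) of the input images
    n_cam, channels = img_feats.shape[0], img_feats.shape[1]
    q = ref_points.shape[0]
    pts = torch.cat([ref_points, torch.ones(q, 1)], dim=1)            # homogeneous coordinates
    sampled = torch.zeros(n_cam, q, channels)
    for c in range(n_cam):
        cam = (lidar2img[c] @ pts.T).T                                 # project into camera c
        depth = cam[:, 2].clamp(min=1e-5)
        u, v = cam[:, 0] / depth, cam[:, 1] / depth                    # pixel coordinates
        grid = torch.stack([2 * u / img_hw[1] - 1, 2 * v / img_hw[0] - 1], dim=-1)
        feat = F.grid_sample(img_feats[c:c + 1], grid.view(1, q, 1, 2),
                             align_corners=False)                      # bilinear sampling, (1, C, Q, 1)
        valid = (cam[:, 2] > 0) & (u >= 0) & (u < img_hw[1]) & (v >= 0) & (v < img_hw[0])
        sampled[c] = feat[0, :, :, 0].T * valid[:, None]               # zero out invisible points
    return sampled.mean(dim=0)                                         # (Q, C) per-query feature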

Occupancy prediction and loss design

Dense Occupancy GT

A network supervised only with sparse LiDAR points cannot predict sufficiently dense occupancy.

To generate dense occupancy labels, the paper designs a pipeline that leverages existing 3D detection and 3D semantic segmentation labels to produce dense occupancy GT.

A straightforward approach is to transform the multi-frame LiDAR point sequence into a unified coordinate system and then voxelize the fused point cloud into a voxel grid. This simple method has two problems: 1) it only works for completely static objects and ignores moving ones; 2) the multi-frame point cloud is still not dense enough and leaves many missed regions, which produce wrong occupancy labels.
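For reference, a minimal sketch of this naive baseline (the voxel size and range are assumptions); it clearly has no notion of moving objects:

import numpy as np

def naive_multiframe_occupancy(frames, voxel_size=0.5, pc_range=(-50, -50, -5, 50, 50, 3)):
    # frames: list of (points (N, 3) in ego coords, ego_to_world (4, 4)) pairs, one per frame
    world_pts = []
    for pts, ego_to_world in frames:
        homo = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)   # homogeneous coords
        world_pts.append((ego_to_world @ homo.T).T[:, :3])                 # ego -> world
    pts = np.concatenate(world_pts, axis=0)

    lo, hi = np.array(pc_range[:3]), np.array(pc_range[3:])
    keep = np.all((pts >= lo) & (pts < hi), axis=1)                        # crop to the annotated range
    idx = ((pts[keep] - lo) / voxel_size).astype(np.int64)                 # voxel index of every point
    dims = np.ceil((hi - lo) / voxel_size).astype(np.int64)
    occ = np.zeros(dims, dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True                            # any point -> occupied voxel
    return occ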

To address this, the paper stitches the multi-frame LiDAR points of dynamic objects and of the static scene separately, then uses Poisson reconstruction to fill the holes and voxelizes the resulting mesh to obtain dense voxel occupancy.
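The hole-filling step is not covered by the code later in this post, so here is a hedged sketch of Poisson reconstruction plus mesh voxelization using Open3D; the Poisson depth, the KNN count for normal estimation and the voxel size are all assumptions.

import numpy as np
import open3d as o3d

def densify_with_poisson(points, voxel_size=0.5):
    # points: (N, 3) stitched static-scene (or per-object) points
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # Poisson reconstruction needs oriented normals
    pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
    # voxelize the reconstructed mesh to get dense occupancy
    voxel_grid = o3d.geometry.VoxelGrid.create_from_triangle_mesh(mesh, voxel_size)
    occupied = np.array([v.grid_index for v in voxel_grid.get_voxels()])   # (M, 3) occupied voxel indices
    return occupied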

Multi-frame point cloud stitching

The paper proposes a two-stream pipeline that stitches the static scene and the dynamic objects separately and merges them into a complete scene before voxelization.

For each frame, the dynamic objects are first cut out of the point cloud according to the GT boxes, yielding the 3D points of the static scene and of each dynamic object. Iterating over all frames, the static-scene segments are collected into one set and the dynamic-object segments into another. To combine the multi-frame segments, their coordinates are transformed into the world coordinate system via the known calibration matrices and ego poses (this part deserves close attention).

The following explains how the dense point cloud is obtained:

First, obtain the point cloud data and the RTK data.

From the RTK data, read roll, pitch, yaw, latitude, longitude and height:

"rtk_data": {
 "angle": {
  "azimuth_angle": 79.47509765625, # yaw
  "pitch_angle": -0.010986328125, # pitch
  "roll_angle": 0.450439453125   # roll
 },
 "gps_time_us": 1658891500748000,
 "header": {
  "length": 159,
  "original_stamp": 1658891500758176,
  "seq": 846934,
  "stamp": 1658891500758176,
  "valid": true
 },
 "hstd": 0.04688774049282074,
 "latstd": 0.04415721073746681,
 "lon_lat_height": {
  "height": 32.117000579833984, # height
  "latitude": 40.1215915,  # latitude 
  "longitude": 116.7270676 # longitude
 },
 "lonstd": 0.019841117784380913,
 "rtk_sts": 50,
 "speed": {
  "east_speed": 8.1085205078125,
  "north_speed": 1.50146484375
 },
 "stas_num": 33,
 "status": 15,
 "sys_time_us": 1658891500748000,
 "xyz_rate_acc": {
  "x_acc": -0.973242998123169,
  "x_angular_rate": -0.007669903803616762,
  "y_acc": 0.39145198464393616,
  "y_angular_rate": 0.0030360035598278046,
  "z_acc": -9.883264541625977,
  "z_angular_rate": -0.001597896683961153
 },
 "yawstd": 0.10860918462276459
},
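The parse_rtk_data helper used further below is not shown in this post; a minimal sketch consistent with the JSON layout above would be:

def parse_rtk_data(rtk_data):
    # pull the Euler angles and the geodetic position out of one rtk_data record
    angle = rtk_data['angle']
    llh = rtk_data['lon_lat_height']
    return (angle['roll_angle'], angle['pitch_angle'], angle['azimuth_angle'],
            llh['latitude'], llh['longitude'], llh['height'])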

Then, using the fields above, the ego pose is obtained via lidar_ego_pose0 = get_Rwv(yaw, pitch, roll, latitude, longitude, height). The ego pose is the pose of the vehicle itself relative to the world coordinate system.

def get_Rwv(yaw, pitch, roll, latitude, longitude, height):
    # convert longitude/latitude/height to the local ENU coordinate system
    x, y, z = geodetic2enu(latitude, longitude, height)
    # yaw/pitch/roll are Euler angles; convert them to a rotation matrix
    R = ypr2R(yaw, pitch, roll)
    # combine the rotation and the translation into a single 4x4 matrix that
    # transforms points from the current ego frame to the world frame
    Rwv = np.concatenate((np.concatenate((R, np.array([[x], [y], [z]])), axis=1),
                          np.array([[0, 0, 0, 1]])), axis=0)
    Rwv = np.matrix(Rwv)
    return Rwv
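geodetic2enu and ypr2R are also project helpers that are not shown. One possible implementation, assuming a fixed ENU reference origin and a yaw-pitch-roll (z-y-x) convention with angles in degrees; both the reference origin and the angle convention are assumptions:

import pymap3d
from scipy.spatial.transform import Rotation

LAT0, LON0, H0 = 40.0, 116.0, 0.0   # hypothetical ENU reference origin

def geodetic2enu(latitude, longitude, height):
    # WGS-84 latitude/longitude/height -> local east/north/up coordinates
    return pymap3d.geodetic2enu(latitude, longitude, height, LAT0, LON0, H0)

def ypr2R(yaw, pitch, roll):
    # Euler angles (degrees) -> 3x3 rotation matrix
    return Rotation.from_euler('zyx', [yaw, pitch, roll], degrees=True).as_matrix()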

Next comes the fusion of the 21 frames. (The excerpt below relies on numpy/pandas/torch, scipy's Rotation imported as R, mmdet3d's points_in_boxes_cpu, and the project's own helpers such as load_json, parse_objects_data, parse_rtk_data and ego_to_world_to_ego.)

    for i, scene in enumerate(my_scene):
        ############################# get boxes ##########################
        # lidar_path, boxes, _ = nusc.get_sample_data(lidar_data['token'])
        data = load_json('/'+scene)

        # parse the annotations to get the object boxes = [x, y, z, l, w, h, ...], object_tokens (instance tokens that identify the same object across frames) and object_category (the class names)
        boxes, object_tokens, object_category = parse_objects_data(data['objects'])
        # boxes_token = [box.token for box in boxes]
        # object_tokens = [nusc.get('sample_annotation', box_token)['instance_token'] for box_token in boxes_token]
        # object_category = [nusc.get('sample_annotation', box_token)['category_name'] for box_token in boxes_token]

        ############################# get object categories ##########################
        converted_object_category = []
        for category in object_category:
            # nuscenesyaml['labels'] is a dict mapping each label index to its category name
            for (j, label) in enumerate(nuscenesyaml['labels']):
                if category == nuscenesyaml['labels'][label]:
                    # convert the object category: map the fine-grained class into one of the 9 coarse classes
                    converted_object_category.append(np.vectorize(label_map.__getitem__)(label).item())

        ############################# get bbox attributes ##########################
        gt_bbox_3d = np.array(boxes)
        gt_bbox_3d[:, 2] -= gt_bbox_3d[:, 5] / 2   # move the box centre down to its bottom face
        gt_bbox_3d[:, 2] = gt_bbox_3d[:, 2] - 0.1  # move the bbox slightly down along z
        gt_bbox_3d[:, 3:6] = gt_bbox_3d[:, 3:6] * 1.1  # slightly expand the bbox to wrap all object points
        # locs = np.array([b[:3] for b in boxes]).reshape(-1,3)
        # dims = np.array(b[3:6] for b in boxes).reshape(-1,3)
        # rots = np.array(b[6:] for b in boxes).reshape(-1,1)
        # locs = np.array([b.center for b in boxes]).reshape(-1, 3)
        # dims = np.array([b.wlh for b in boxes]).reshape(-1, 3)
        # rots = np.array([b.orientation.yaw_pitch_roll[0]
        #                  for b in boxes]).reshape(-1, 1)
        # gt_bbox_3d = np.concatenate([locs, dims, rots], axis=1).astype(np.float32)
        # gt_bbox_3d[:, 6] += np.pi / 2.
        # gt_bbox_3d[:, 2] -= dims[:, 2] / 2.
        # gt_bbox_3d[:, 2] = gt_bbox_3d[:, 2] - 0.1  # Move the bbox slightly down in the z direction
        # gt_bbox_3d[:, 3:6] = gt_bbox_3d[:, 3:6] * 1.1 # Slightly expand the bbox to wrap all object points


        pc_file_path = ''
        lidars = data['lidar']
        for l in lidars:
            if l['name'] == 'center_128_lidar_scan_data':
                pc_file_path = '/' + l['oss_path_pcd_txt']
                break

        # get 128 lidar
        # pc_file_path = '/' + data['point_cloud_Url'] # load LiDAR names
        pc_file_path = pc_file_path.replace(r"/oss://", "/data-engine/")
        # pc0 = np.fromfile(os.path.join(data_root, pc_file_name), # 34720 4
        #                   dtype=np.float32,
        #                   count=-1).reshape(-1, 5)[..., :4]
        # load the ASCII point cloud file, skipping the 11 header lines; keep the 4 columns (x, y, z + one attribute)
        pc0 = pd.read_csv(pc_file_path, delimiter=' ', skiprows=11, dtype=np.float32).values.reshape(-1, 4)[..., :4]
        # if lidar_data['is_key_frame']: # only key frame has semantic annotations
        #     lidar_sd_token = lidar_data['token']
        #     lidarseg_labels_filename = os.path.join(nusc.dataroot,
        #                                             nusc.get('lidarseg', lidar_sd_token)['filename'])

        #     points_label = np.fromfile(lidarseg_labels_filename, dtype=np.uint8).reshape([-1, 1])
        #     points_label = np.vectorize(learning_map.__getitem__)(points_label)

        #     pc_with_semantic = np.concatenate([pc0[:, :3], points_label], axis=1) # n 4

        ############################# apply the MIDDLE_LIDAR extrinsics to the raw points ##########################
        hardware_path = '/' + data['hardware_config_path']
        hardware_data = load_json(hardware_path)
        lidar_param = hardware_data['sensor_config']['lidar_param']
        for param in lidar_param:
            if param['name'] == "MIDDLE_LIDAR":
                # sensor extrinsics: rotation as a quaternion, translation as a vector
                w = param['pose']['attitude']['w']
                x = param['pose']['attitude']['x']
                y = param['pose']['attitude']['y']
                z = param['pose']['attitude']['z']
                t_x = param['pose']['translation']['x']
                t_z = param['pose']['translation']['z']
                t_y = 0
                Rm = R.from_quat([x, y, z, w])
                rot = Rm.as_matrix()
                # rotate then translate the points by the sensor extrinsics
                pc1 = (rot @ pc0[..., :3].T).T + np.array([t_x, t_y, t_z])
                # re-attach the original non-xyz column
                pc0 = np.concatenate([pc1, pc0[:, -1:]], -1)
                break

        ############################# cut out movable object points and masks ##########################
        points_in_boxes = points_in_boxes_cpu(torch.from_numpy(pc0[:, :3][np.newaxis, :, :]),
                                              torch.from_numpy(gt_bbox_3d[np.newaxis, :]))  # point-in-box mask, here (1, 24720, 10); the last dim indexes the GT boxes
        # collect the points that fall inside each GT box (one array per object)
        object_points_list = []
        for j in range(points_in_boxes.shape[-1]):
            object_points_mask = points_in_boxes[0][:, j].bool()
            object_points = pc0[object_points_mask]
            object_points_list.append(object_points)

        # a point is dynamic if it falls inside any GT box; keep the complement as static-scene candidates
        moving_mask = torch.ones_like(points_in_boxes)
        points_in_boxes = torch.sum(points_in_boxes * moving_mask, dim=-1).bool()
        points_mask = ~(points_in_boxes[0])

        ############################# get point mask of the vehicle itself ##########################
        # keep points that lie outside the ego-vehicle box given by config['self_range']
        self_range = config['self_range']
        oneself_mask = torch.from_numpy((np.abs(pc0[:, 0]) > self_range[0]) |
                                        (np.abs(pc0[:, 1]) > self_range[1]) |
                                        (np.abs(pc0[:, 2]) > self_range[2]))

        ############################# get static scene segment ##########################
        points_mask = points_mask & oneself_mask
        pc = pc0[points_mask]

        ################## coordinate conversion to the same (first) LiDAR coordinate  ##################
        rtks = data.get('rtk_data', '')
        roll, pitch, yaw, latitude, longitude, height = parse_rtk_data(rtks)
        lidar_ego_pose = get_Rwv(yaw, pitch, roll, latitude, longitude, height)
        # lidar_ego_pose = nusc.get('ego_pose', lidar_data['ego_pose_token'])
        # lidar_calibrated_sensor = nusc.get('calibrated_sensor', lidar_data['calibrated_sensor_token'])
        # lidar_pc = lidar_to_world_to_lidar(pc.copy(), lidar_calibrated_sensor.copy(), lidar_ego_pose.copy(),
        #                                    lidar_calibrated_sensor0,
        #                                    lidar_ego_pose0)
        # current ego frame -> world (lidar_ego_pose) -> first frame's ego frame (lidar_ego_pose0)
        lidar_pc = ego_to_world_to_ego(pc.copy(), lidar_ego_pose.copy(), lidar_ego_pose0)
        ################## record Non-key frame information into a dict  ########################
        frame_dict = {"object_tokens": object_tokens,
                      "object_points_list": object_points_list,
                      "lidar_pc": lidar_pc.points,
                      "lidar_ego_pose": lidar_ego_pose,
                      # "lidar_calibrated_sensor": lidar_calibrated_sensor,
                      # "lidar_token": lidar_data['token'],
                      "lidar_token": data['timestamp'],
                      # "is_key_frame": lidar_data['is_key_frame'],
                      "gt_bbox_3d": gt_bbox_3d,
                      "converted_object_category": converted_object_category,
                      "pc_file_name": pc_file_path.split('/')[-1]}
        ################## record semantic information into the dict if it's a key frame  ########################
        # if lidar_data['is_key_frame']:
        #     pc_with_semantic = pc_with_semantic[points_mask]
        #     lidar_pc_with_semantic = lidar_to_world_to_lidar(pc_with_semantic.copy(),
        #                                                      lidar_calibrated_sensor.copy(),
        #                                                      lidar_ego_pose.copy(),
        #                                                      lidar_calibrated_sensor0,
        #                                                      lidar_ego_pose0)
        #     frame_dict["lidar_pc_with_semantic"] = lidar_pc_with_semantic.points

        dict_list.append(frame_dict)
        ################## go to next frame of the sequence  ########################
        # next_token = lidar_data['next']
        # if next_token != '':
        #     lidar_data = nusc.get('sample_data', next_token)
        # else:
        #     break

    ################## concatenate all static scene segments (including non-key frames)  ########################
    lidar_pc_list = [d['lidar_pc'] for d in dict_list]
    lidar_pc = np.concatenate(lidar_pc_list, axis=1).T
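This covers the static-scene stream. For the dynamic-object stream, the per-frame object points stored in each frame_dict still have to be aggregated per object_token; that step is not shown above, so the sketch below is an assumption: it brings each object's points into its own box-centred frame (subtract the box centre and undo the yaw, taken as the 7th box column) before concatenating across frames, so that a moving object still stitches into one dense shape.

import numpy as np

def stitch_object_points(dict_list):
    object_points = {}                                       # object_token -> list of canonical point sets
    for frame in dict_list:
        for token, pts, box in zip(frame['object_tokens'],
                                   frame['object_points_list'],
                                   frame['gt_bbox_3d']):
            if pts.shape[0] == 0:
                continue
            center, yaw = box[:3], box[6]
            local = pts[:, :3] - center                      # translate into the box frame
            c, s = np.cos(-yaw), np.sin(-yaw)
            rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            object_points.setdefault(token, []).append(local @ rot.T)   # undo the box yaw
    return {tok: np.concatenate(p, axis=0) for tok, p in object_points.items()}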
