R3live + Hesai LiDAR

1. Built-in vs. external IMU

1.1 Coordinate frame conversion

R3live is mainly adapted to Livox lidars and uses the lidar's built-in IMU. When using another lidar that has no built-in IMU, the IMU data therefore has to be transformed into the lidar frame before running R3live.

Method 1: the imu_pipeline package set: https://github.com/ros-perception/imu_pipeline

Modify the launch file ned_to_enu.launch in that package.

An open question here: since the nodes all run at the same time, how do we guarantee that the data R3live receives has already been transformed? (Presumably the transformer node republishes the converted IMU data on its own output topic, so R3live's IMU_topic just needs to point at that topic rather than the raw one.)

Method 2: add the following line directly to R3live's r3live_bag.launch, which also performs the frame conversion:

<node pkg="tf" type="static_transform_publisher" name="imu_to_lidar" args="-0.005 -0.037 0.078 0 0 0 /limu_link /velodyne 100" />

(The arguments are x y z yaw pitch roll, followed by the parent frame, the child frame, and the publishing period in milliseconds.)

References:
https://blog.csdn.net/m0_49929125/article/details/124535341
https://blog.csdn.net/weixin_40599145/article/details/128064323
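
For intuition, below is a minimal standalone sketch (not part of R3live) of what the IMU-to-lidar conversion amounts to: rotate the measured angular velocity and linear acceleration into the lidar frame and republish them under the lidar frame_id. The topic names, frame names and the extrinsic rotation are placeholders; in practice either of the two methods above sets this up for you.

// Minimal standalone sketch of the IMU-to-lidar frame conversion.
// NOT part of R3live: topic names, frame names and the extrinsic rotation
// R_LI (IMU -> lidar) are placeholders for illustration only.
// Lever-arm effects on the acceleration are ignored here.
#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include <Eigen/Dense>

static ros::Publisher pub;
static const Eigen::Matrix3d R_LI = Eigen::Matrix3d::Identity(); // placeholder extrinsic rotation

void imu_cb(const sensor_msgs::Imu::ConstPtr &msg)
{
    sensor_msgs::Imu out = *msg;
    Eigen::Vector3d w(msg->angular_velocity.x, msg->angular_velocity.y, msg->angular_velocity.z);
    Eigen::Vector3d a(msg->linear_acceleration.x, msg->linear_acceleration.y, msg->linear_acceleration.z);
    w = R_LI * w;   // rotate angular velocity into the lidar frame
    a = R_LI * a;   // rotate linear acceleration into the lidar frame
    out.angular_velocity.x = w.x();    out.angular_velocity.y = w.y();    out.angular_velocity.z = w.z();
    out.linear_acceleration.x = a.x(); out.linear_acceleration.y = a.y(); out.linear_acceleration.z = a.z();
    out.header.frame_id = "velodyne"; // placeholder lidar frame name
    pub.publish(out);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "imu_to_lidar_frame");
    ros::NodeHandle nh;
    pub = nh.advertise<sensor_msgs::Imu>("/imu/data_in_lidar_frame", 200); // point R3live's IMU_topic here
    ros::Subscriber sub = nh.subscribe("/imu/data", 200, imu_cb);          // raw IMU topic (placeholder)
    ros::spin();
    return 0;
}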

1.2 Verifying the frame conversion

First check whether the tf2-related tool packages are installed; if not, install them:

sudo apt-get install ros-noetic-turtle-tf2 ros-noetic-tf2-tools
sudo apt-get install ros-noetic-tf2-sensor-msgs

(Replace "noetic" with your ROS distribution name if you are on a different release.)

Method 1: check the frame_id while the bag is playing with rosbag play:

rostopic echo /points_raw | grep frame_id

(Run rosbag info first to get the dataset's point cloud topic name, points_raw here.)

While SLAM is running, check and verify the frame transform with:

rostopic echo /tf  

Method 2: run

find /opt/ros/ -name tf2_echo

to find the full path. The path could not be found, so the command below does not work:

rosrun /opt/ros/.../tf2_ros tf2_echo /livox_frame /camera_init

Trying again with the form rosrun tf2_ros tf2_echo [source_frame] [target_frame], and then with the tf version:

rosrun tf tf_echo /livox_frame /camera_init

This one works.
Method 3:

rosrun tf2_tools view_frames.py  -->  rosrun tf view_frames.py

The first one works and generates a PDF of the tf frame tree.
The second one throws an error; view_frames.py needs a code fix, see https://blog.csdn.net/m0_62388326/article/details/135350277

2. Adapting a Hesai LiDAR

2.1 Code changes

First, add a PCL point type in Lidar_front_end.cpp that matches the Hesai lidar's point cloud format:

namespace hesai_ros {
// Point format published by the Hesai lidar driver
struct EIGEN_ALIGN16 HesaiPointXYZIRT
{
    PCL_ADD_POINT4D;
    float intensity;
    //PCL_ADD_INTENSITY;
    double timestamp;
    uint16_t ring;
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
};
}

POINT_CLOUD_REGISTER_POINT_STRUCT(hesai_ros::HesaiPointXYZIRT,
    (float, x, x)
    (float, y, y)
    (float, z, z)
    (float, intensity, intensity)
    (double, timestamp, timestamp)
    (uint16_t, ring, ring)
    )
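
Note that the field names and types in this struct have to match what your driver actually publishes; some Hesai/Velodyne drivers use a float "time" field instead of a double "timestamp". A quick way to check is the small standalone sketch below (not part of R3live; the topic name "/hesai/pandar" is a placeholder):

// Standalone sketch: print the field layout of one incoming PointCloud2 so the
// point struct above can be matched against the real driver output.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void print_fields(const sensor_msgs::PointCloud2::ConstPtr &msg)
{
    for (const auto &f : msg->fields)
        ROS_INFO("field: %s (datatype %u, offset %u)", f.name.c_str(), (unsigned)f.datatype, (unsigned)f.offset);
    ros::shutdown(); // one message is enough
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "print_pointcloud_fields");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/hesai/pandar", 1, print_fields); // placeholder topic
    ros::spin();
    return 0;
}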

Then add the corresponding handler, hesaiPandar_handler():

void hesaiPandar_handler(const sensor_msgs::PointCloud2::ConstPtr &msg) 
{
    pcl::PointCloud< PointType >  pl_full, pl_corn, pl_surf;
    vector< pcl::PointCloud< PointType > > pl_buff( N_SCANS );
    vector< vector< orgtype > >            typess( N_SCANS );
    pl_surf.clear();
    pl_corn.clear();
    pl_full.clear();

    pcl::PointCloud<hesai_ros::HesaiPointXYZIRT> pl_orig;
    pcl::fromROSMsg(*msg, pl_orig);
    int plsize = pl_orig.points.size();
    pl_surf.reserve(plsize);
    pl_corn.reserve(plsize);
    pl_full.reserve(plsize);
    

    /*** These variables only works when no point timestamps given ***/
    double omega_l = 0.361 * SCAN_RATE;       // scan angular velocity
    std::vector<bool> is_first(N_SCANS,true);
    std::vector<double> yaw_fp(N_SCANS, 0.0);      // yaw of first scan point
    std::vector<float> yaw_last(N_SCANS, 0.0);   // yaw of last scan point
    std::vector<float> time_last(N_SCANS, 0.0);  // last offset time
    /*****************************************************************/

    if(feature_enabled)
    {
      for (int i = 0; i < N_SCANS; i++)
      {
        pl_buff[i].clear();
        pl_buff[i].reserve(plsize);
      }

      // compute per-point time offsets, convert the cloud to PointType, iterate in order
      for (int i = 0; i < plsize; i++)
      {
        PointType added_pt;
        added_pt.normal_x = 0;
        added_pt.normal_y = 0;
        added_pt.normal_z = 0;
        int layer  = pl_orig.points[i].ring;
        if (layer >= N_SCANS) continue;
        added_pt.x = pl_orig.points[i].x;
        added_pt.y = pl_orig.points[i].y;
        added_pt.z = pl_orig.points[i].z;
        added_pt.intensity = pl_orig.points[i].intensity;
        added_pt.curvature = (pl_orig.points[i].timestamp - msg->header.stamp.toSec()) * 1000;  //s to ms
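        // R3live stores each point's time offset (ms, relative to the scan header stamp)
        // in the curvature field and uses it later for motion compensation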
        pl_buff[layer].points.push_back(added_pt);
      }

      for (int j = 0; j < N_SCANS; j++)
      {
        pcl::PointCloud< PointType > &pl = pl_buff[j];
        int linesize = pl.size();
        if (linesize < 2) continue;  // skip empty / single-point rings to avoid out-of-range access below
        vector<orgtype> &types = typess[j]; // records the per-point parameters of the current scan line
        types.clear();
        types.resize(linesize);
        linesize--;
        for (uint i = 0; i < linesize; i++)
        {
          types[i].range = sqrt(pl[i].x * pl[i].x + pl[i].y * pl[i].y);
          double vx = pl[i].x - pl[i + 1].x;
          double vy = pl[i].y - pl[i + 1].y;
          double vz = pl[i].z - pl[i + 1].z;
          types[i].dista = vx * vx + vy * vy + vz * vz;
        }
        types[linesize].range = sqrt(pl[linesize].x * pl[linesize].x + pl[linesize].y * pl[linesize].y);
        //give_feature(pl, types);
        give_feature( pl, types, pl_corn, pl_surf );
      }
    }
    else
    {
      for (int i = 0; i < plsize; i++)
      {
        if (i % point_filter_num != 0) continue;

        double range = pl_orig.points[i].x * pl_orig.points[i].x + pl_orig.points[i].y * pl_orig.points[i].y + pl_orig.points[i].z * pl_orig.points[i].z;
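        // note: "range" here is the squared distance, hence the comparison with blind * blind below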
      
        if (range < (blind * blind)) continue;

        PointType added_pt;
        // cout<<"!!!!!!"<<i<<" "<<plsize<<endl;
        
        added_pt.normal_x = 0;
        added_pt.normal_y = 0;
        added_pt.normal_z = 0;
        added_pt.x = pl_orig.points[i].x;
        added_pt.y = pl_orig.points[i].y;
        added_pt.z = pl_orig.points[i].z;
        added_pt.intensity = pl_orig.points[i].intensity;
        // added_pt.curvature = pl_orig.points[i].time / 1000.0;  // curvature unit: ms
        added_pt.curvature = (pl_orig.points[i].timestamp - msg->header.stamp.toSec()) * 1000;  //s to ms

        pl_full.points.push_back(added_pt);
      }
    }
  }
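
Lidar_front_end.cpp selects its point cloud handler in main() based on the lidar_type parameter, so the new handler still has to be subscribed there in the same way as the existing Velodyne handler. Likewise, as in the existing handlers, the extracted clouds (pl_full / pl_surf / pl_corn) still need to be published at the end of hesaiPandar_handler (e.g. via the pub_func helper used by the other handlers, if present in your version) so that the LIO node receives them. A minimal sketch of the subscription, where the topic name and the numeric lidar_type value are assumptions rather than the actual values in the repository:

// Sketch only: subscribe the new handler in main() of Lidar_front_end.cpp,
// following the pattern of the existing Velodyne subscription.
// "n" is the ros::NodeHandle created in main(); "/hesai/pandar" and the value 3
// are placeholders for your own topic and the lidar_type value you assign to Hesai.
ros::Subscriber sub_points;
if (lidar_type == 3) // hypothetical "hesai" value of the lidar_type parameter
{
    sub_points = n.subscribe("/hesai/pandar", 1000, hesaiPandar_handler,
                             ros::TransportHints().tcpNoDelay());
}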

2.2 Possible errors

After adding this and trying to run, you may get the following error:

Get pointcloud data from ros messages fail!!! Input pointcloud field names: [6]: x, y, z, intensity, ring, time,

The cause is that get_pointcloud_data_from_ros_message() in r3live_lio.cpp needs an additional branch that handles Velodyne/Hesai (or other non-Livox) point cloud data.
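
For reference, below is a minimal sketch of what that extra handling could look like, written as a hypothetical standalone helper rather than the exact branch to paste in (the actual if/else structure of get_pointcloud_data_from_ros_message() and your driver's exact field layout may differ). It converts the raw XYZIRT cloud into the pcl::PointXYZINormal cloud the LIO expects, putting the per-point time offset into curvature just like the front-end handler in 2.1:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

// Hypothetical helper illustrating the missing branch: convert a raw Hesai XYZIRT
// cloud (HesaiPointXYZIRT from section 2.1) into the PointXYZINormal cloud that
// r3live_lio.cpp works with, storing the time offset (ms) in curvature.
static bool hesai_cloud_to_pcl(const sensor_msgs::PointCloud2::ConstPtr &msg,
                               pcl::PointCloud<pcl::PointXYZINormal> &pcl_pc)
{
    pcl::PointCloud<hesai_ros::HesaiPointXYZIRT> raw_pc;
    pcl::fromROSMsg(*msg, raw_pc);
    pcl_pc.resize(raw_pc.size());
    for (size_t i = 0; i < raw_pc.size(); i++)
    {
        pcl_pc[i].x = raw_pc[i].x;
        pcl_pc[i].y = raw_pc[i].y;
        pcl_pc[i].z = raw_pc[i].z;
        pcl_pc[i].intensity = raw_pc[i].intensity;
        pcl_pc[i].curvature =
            (raw_pc[i].timestamp - msg->header.stamp.toSec()) * 1000.0; // s -> ms
    }
    return true;
}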

Also modify r3live_config.yaml and add an r3live_bag_velodyne.launch file.

After these changes the program runs, but the point cloud map cannot be saved, the points are not colored, and no image data comes through.

The cause is the image_topic setting in the .launch file: the dataset contains compressed images on the topic /camera/image_color/compressed,
but r3live.hpp (line 334) automatically appends "/compressed" to the image topic, so removing "/compressed" from the topic in the .launch file makes it run correctly.

This shows that although R3live provides handling paths for both raw and compressed images, it effectively processes everything as compressed images, which keeps resource usage down. The code at r3live.hpp (line 334) can be modified so that it handles raw or compressed images as needed.

The modified code is as follows (if the configured Image_topic ends in "/compressed", the compressed-image callback is subscribed; otherwise the raw-image callback is used):

        get_ros_parameter<std::string>(m_ros_node_handle, "/LiDAR_pointcloud_topic", LiDAR_pointcloud_topic, std::string("/laser_cloud_flat") );
        get_ros_parameter<std::string>(m_ros_node_handle, "/IMU_topic", IMU_topic, std::string("/livox/imu") );
        get_ros_parameter<std::string>(m_ros_node_handle, "/Image_topic", IMAGE_topic, std::string("/camera/image_color") );
        // IMAGE_topic_compressed = std::string(IMAGE_topic).append("/compressed");

        if(IMAGE_topic.find("/compressed") == std::string::npos) // not a compressed-image topic
        {
            cout << "Image topic: " << IMAGE_topic << endl;
            cout << "Image compressed topic: nan" << endl;
            sub_img = m_ros_node_handle.subscribe(IMAGE_topic.c_str(), 1000000, &R3LIVE::image_callback, this, ros::TransportHints().tcpNoDelay());
        }
        else
        {
            IMAGE_topic_compressed = IMAGE_topic;
            cout << "Image topic: nan" << endl;
            cout << "Image compressed topic: " << IMAGE_topic_compressed << endl;
            sub_img_comp = m_ros_node_handle.subscribe(IMAGE_topic_compressed.c_str(), 1000000, &R3LIVE::image_comp_callback, this, ros::TransportHints().tcpNoDelay());
        }

        if(1)
        {
            scope_color(ANSI_COLOR_BLUE_BOLD);
            cout << "======= Summary of subscribed topics =======" << endl;
            cout << "LiDAR pointcloud topic: " << LiDAR_pointcloud_topic << endl;
            cout << "IMU topic: " << IMU_topic << endl;
            // cout << "Image topic: " << IMAGE_topic << endl;
            // cout << "Image compressed topic: " << IMAGE_topic_compressed << endl;
            cout << "=======        -End-                =======" << endl;
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }

        sub_imu = m_ros_node_handle.subscribe(IMU_topic.c_str(), 2000000, &R3LIVE::imu_cbk, this, ros::TransportHints().tcpNoDelay());
        sub_pcl = m_ros_node_handle.subscribe(LiDAR_pointcloud_topic.c_str(), 2000000, &R3LIVE::feat_points_cbk, this, ros::TransportHints().tcpNoDelay());
        // sub_img = m_ros_node_handle.subscribe(IMAGE_topic.c_str(), 1000000, &R3LIVE::image_callback, this, ros::TransportHints().tcpNoDelay());
        // sub_img_comp = m_ros_node_handle.subscribe(IMAGE_topic_compressed.c_str(), 1000000, &R3LIVE::image_comp_callback, this, ros::TransportHints().tcpNoDelay());
