Development Environment Code for the Robot Autonomous Exploration Paper Series

Recently I noticed that CMU has open-sourced a series of robot autonomous exploration code. Since I happen to be working in this area myself, I decided to start by studying their work as an introduction.

They have open-sourced two autonomous exploration systems, and both rely on the same underlying work for state estimation, obstacle avoidance, and so on, so I will analyze the code of that underlying work first.

Code repository: https://github.com/HongbiaoZ/autonomous_exploration_development_environment

The top-level launch file:

<launch>

  <arg name="cameraOffsetZ" default="0"/>
  <arg name="vehicleX" default="0"/>
  <arg name="vehicleY" default="0"/>
  <arg name="checkTerrainConn" default="false"/>

  <include file="$(find ps3joy)/launch/ps3.launch" />

  <include file="$(find local_planner)/launch/local_planner.launch" >
    <arg name="cameraOffsetZ" value="$(arg cameraOffsetZ)"/>
    <arg name="goalX" value="$(arg vehicleX)"/>
    <arg name="goalY" value="$(arg vehicleY)"/>
  </include>

  <include file="$(find terrain_analysis)/launch/terrain_analysis.launch" />

  <include file="$(find terrain_analysis_ext)/launch/terrain_analysis_ext.launch" >
    <arg name="checkTerrainConn" value="$(arg checkTerrainConn)"/>
  </include>

  <include file="$(find sensor_scan_generation)/launch/sensor_scan_generation.launch" />

  <include file="$(find loam_interface)/launch/loam_interface.launch" />

  <node launch-prefix="nice" pkg="rviz" type="rviz" name="rvizGA" args="-d $(find vehicle_simulator)/rviz/vehicle_simulator.rviz" respawn="true"/>

</launch>

1. loam_interface

This node provides an interface for communicating with the SLAM state-estimation node, so the topics of whichever SLAM algorithm is used can be changed conveniently in the launch file.

First, look at loam_interface.launch:

<launch>

  <node pkg="loam_interface" type="loamInterface" name="loamInterface" output="screen" required="true">
    <param name="stateEstimationTopic" type="string" value="/lio_sam/mapping/odometry" />
    <param name="registeredScanTopic" type="string" value="/lio_sam/mapping/cloud_registered" />
    <param name="flipStateEstimation" type="bool" value="false />
    <param name="flipRegisteredScan" type="bool" value="false" />
    <param name="sendTF" type="bool" value="true" />
    <param name="reverseTF" type="bool" value="false" />
  </node>

</launch>

This article uses LIO-SAM as the state-estimation node.

ros::Subscriber subOdometry = nh.subscribe<nav_msgs::Odometry> (stateEstimationTopic, 5, odometryHandler);

ros::Subscriber subLaserCloud = nh.subscribe<sensor_msgs::PointCloud2> (registeredScanTopic, 5, laserCloudHandler);

1.1 odometryHandler

Subscribes to LIO-SAM's odometry topic /lio_sam/mapping/odometry (the robot's pose in the world frame) and executes the callback odometryHandler, which republishes that pose under the topic name /state_estimation and broadcasts the corresponding TF transform. The flipStateEstimation branch is for state estimators that output in a camera-style convention (z forward); it swaps the axes back into the standard ROS convention and is left false for LIO-SAM.

void odometryHandler(const nav_msgs::Odometry::ConstPtr& odom)
{
  double roll, pitch, yaw;
  geometry_msgs::Quaternion geoQuat = odom->pose.pose.orientation;
  odomData = *odom;
  // flipStateEstimation is false in this example
  if (flipStateEstimation) {
    tf::Matrix3x3(tf::Quaternion(geoQuat.z, -geoQuat.x, -geoQuat.y, geoQuat.w)).getRPY(roll, pitch, yaw);

    pitch = -pitch;
    yaw = -yaw;

    geoQuat = tf::createQuaternionMsgFromRollPitchYaw(roll, pitch, yaw);

    odomData.pose.pose.orientation = geoQuat;
    odomData.pose.pose.position.x = odom->pose.pose.position.z;
    odomData.pose.pose.position.y = odom->pose.pose.position.x;
    odomData.pose.pose.position.z = odom->pose.pose.position.y;
  }

  // publish odometry messages
  odomData.header.frame_id = "/map";
  odomData.child_frame_id = "/sensor";
  pubOdometryPointer->publish(odomData);

  // publish tf messages
  odomTrans.stamp_ = odom->header.stamp;
  odomTrans.frame_id_ = "/map";
  odomTrans.child_frame_id_ = "/sensor";
  odomTrans.setRotation(tf::Quaternion(geoQuat.x, geoQuat.y, geoQuat.z, geoQuat.w));
  odomTrans.setOrigin(tf::Vector3(odomData.pose.pose.position.x, odomData.pose.pose.position.y, odomData.pose.pose.position.z));

  if (sendTF) {
    if (!reverseTF) {
      tfBroadcasterPointer->sendTransform(odomTrans);
    } else {
      tfBroadcasterPointer->sendTransform(tf::StampedTransform(odomTrans.inverse(), odom->header.stamp, "/sensor", "/map"));
    }
  }
}

1.2 laserCloudHandler

Subscribes to LIO-SAM's registered point cloud /lio_sam/mapping/cloud_registered (the registered scan in the world frame) and executes the callback laserCloudHandler.

void laserCloudHandler(const sensor_msgs::PointCloud2ConstPtr& laserCloudIn)
{
  laserCloud->clear();
  pcl::fromROSMsg(*laserCloudIn, *laserCloud);

  // flipRegisteredScan is false for LIO-SAM
  if (flipRegisteredScan) {
    int laserCloudSize = laserCloud->points.size();
    for (int i = 0; i < laserCloudSize; i++) {
      float temp = laserCloud->points[i].x;
      laserCloud->points[i].x = laserCloud->points[i].z;
      laserCloud->points[i].z = laserCloud->points[i].y;
      laserCloud->points[i].y = temp;
    }
  }

  // publish registered scan messages
  sensor_msgs::PointCloud2 laserCloud2;
  pcl::toROSMsg(*laserCloud, laserCloud2);
  laserCloud2.header.stamp = laserCloudIn->header.stamp;
  laserCloud2.header.frame_id = "/map";
  pubLaserCloudPointer->publish(laserCloud2);
}

This callback likewise republishes the world-frame registered scan under a new topic name, /registered_scan.

1. loam_interface summary:

It continuously receives from the SLAM module the robot's current pose and the current registered scan, both in the world frame, and republishes them under the topic names /state_estimation and /registered_scan respectively.

2. sensor_scan_generation

This node subscribes to /state_estimation and /registered_scan, executes laserCloudAndOdometryHandler, and publishes /state_estimation_at_scan and /sensor_scan. Since the callback receives the odometry and the scan together, the two topics have to be time-synchronized before it is invoked.
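
That synchronization setup is not shown in the excerpt; below is a minimal sketch of how the two topics can be paired with message_filters (the sync policy and queue sizes here are my assumptions, not quoted from sensorScanGeneration.cpp):

#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <message_filters/synchronizer.h>
#include <nav_msgs/Odometry.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// the callback is the one shown in the excerpt below
void laserCloudAndOdometryHandler(const nav_msgs::Odometry::ConstPtr& odometry,
                                  const sensor_msgs::PointCloud2ConstPtr& laserCloud2);

typedef message_filters::sync_policies::ApproximateTime<nav_msgs::Odometry, sensor_msgs::PointCloud2> SyncPolicy;

int main(int argc, char** argv) {
  ros::init(argc, argv, "sensorScanGeneration");
  ros::NodeHandle nh;

  // pair each odometry message with the registered scan closest to it in time
  message_filters::Subscriber<nav_msgs::Odometry> subOdometry(nh, "/state_estimation", 5);
  message_filters::Subscriber<sensor_msgs::PointCloud2> subLaserCloud(nh, "/registered_scan", 5);
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(100), subOdometry, subLaserCloud);
  sync.registerCallback(boost::bind(&laserCloudAndOdometryHandler, _1, _2));

  ros::spin();
  return 0;
}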

void laserCloudAndOdometryHandler(const nav_msgs::Odometry::ConstPtr& odometry,
                                  const sensor_msgs::PointCloud2ConstPtr& laserCloud2)
{
  laserCloudIn->clear();
  laserCLoudInSensorFrame->clear();

  pcl::fromROSMsg(*laserCloud2, *laserCloudIn);

  odometryIn = *odometry;

  transformToMap.setOrigin(tf::Vector3(odometryIn.pose.pose.position.x, odometryIn.pose.pose.position.y, odometryIn.pose.pose.position.z));
  transformToMap.setRotation(tf::Quaternion(odometryIn.pose.pose.orientation.x, odometryIn.pose.pose.orientation.y,
                                            odometryIn.pose.pose.orientation.z, odometryIn.pose.pose.orientation.w));

  int laserCloudInNum = laserCloudIn->points.size();

  pcl::PointXYZ p1;
  tf::Vector3 vec;

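  // transform each point from the map (world) frame into the sensor frame by
  // applying the inverse of the sensor-to-map transform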
  for (int i = 0; i < laserCloudInNum; i++)
  {
    p1 = laserCloudIn->points[i];
    vec.setX(p1.x);
    vec.setY(p1.y);
    vec.setZ(p1.z);

    vec = transformToMap.inverse() * vec;

    p1.x = vec.x();
    p1.y = vec.y();
    p1.z = vec.z();

    laserCLoudInSensorFrame->points.push_back(p1);
  }
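
  // --- the excerpt is cut off here; the rest of the callback republishes the
  // --- odometry and the transformed cloud. The lines below are a hedged
  // --- reconstruction: the publisher/broadcaster pointer names, the
  // --- "/sensor_at_scan" frame id, and transformToMap being a
  // --- tf::StampedTransform are assumptions to be verified against
  // --- sensorScanGeneration.cpp.

  // republish the odometry stamped at the scan time (/state_estimation_at_scan)
  odometryIn.header.stamp = laserCloud2->header.stamp;
  odometryIn.header.frame_id = "/map";
  odometryIn.child_frame_id = "/sensor_at_scan";
  pubOdometryPointer->publish(odometryIn);

  // broadcast the TF from /map to /sensor_at_scan
  transformToMap.stamp_ = laserCloud2->header.stamp;
  transformToMap.frame_id_ = "/map";
  transformToMap.child_frame_id_ = "/sensor_at_scan";
  tfBroadcasterPointer->sendTransform(transformToMap);

  // publish the scan expressed in the sensor frame (/sensor_scan)
  sensor_msgs::PointCloud2 scan_data;
  pcl::toROSMsg(*laserCLoudInSensorFrame, scan_data);
  scan_data.header.stamp = laserCloud2->header.stamp;
  scan_data.header.frame_id = "/sensor_at_scan";
  pubLaserCloudPointer->publish(scan_data);
}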

2. sensor_scan_generation summary

It subscribes to /state_estimation and /registered_scan and executes the callback above.

It publishes /state_estimation_at_scan (equivalent to /state_estimation).

It publishes /sensor_scan (the point cloud transformed from the world frame into the sensor/lidar frame).

3. terrain_analysis

This node subscribes to two topics:

3.1 Subscribe to /state_estimation and execute odometryHandler

ros::Subscriber subOdometry = nh.subscribe<nav_msgs::Odometry>("/state_estimation", 5, odometryHandler);
void odometryHandler(const nav_msgs::Odometry::ConstPtr &odom) {
  double roll, pitch, yaw;
  geometry_msgs::Quaternion geoQuat = odom->pose.pose.orientation;
  tf::Matrix3x3(tf::Quaternion(geoQuat.x, geoQuat.y, geoQuat.z, geoQuat.w))
      .getRPY(roll, pitch, yaw);

  vehicleRoll = roll;
  vehiclePitch = pitch;
  vehicleYaw = yaw;
  vehicleX = odom->pose.pose.position.x;
  vehicleY = odom->pose.pose.position.y;
  vehicleZ = odom->pose.pose.position.z;

  sinVehicleRoll = sin(vehicleRoll);
  cosVehicleRoll = cos(vehicleRoll);
  sinVehiclePitch = sin(vehiclePitch);
  cosVehiclePitch = cos(vehiclePitch);
  sinVehicleYaw = sin(vehicleYaw);
  cosVehicleYaw = cos(vehicleYaw);

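  // record the vehicle position from the first odometry message; noDataInited
  // becomes 2 once the vehicle has driven at least noDecayDis away from it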
  if (noDataInited == 0) {
    vehicleXRec = vehicleX;
    vehicleYRec = vehicleY;
    noDataInited = 1;
  }
  if (noDataInited == 1) {
    float dis = sqrt((vehicleX - vehicleXRec) * (vehicleX - vehicleXRec) +
                     (vehicleY - vehicleYRec) * (vehicleY - vehicleYRec));
    if (dis >= noDecayDis)
      noDataInited = 2;
  }
}

3.2 Subscribe to /registered_scan and execute the callback laserCloudHandler

void laserCloudHandler(const sensor_msgs::PointCloud2ConstPtr &laserCloud2) {
  laserCloudTime = laserCloud2->header.stamp.toSec();

  if (!systemInited) {
    systemInitTime = laserCloudTime;
    systemInited = true;
  }

  laserCloud->clear();
  pcl::fromROSMsg(*laserCloud2, *laserCloud);

  pcl::PointXYZI point;
  laserCloudCrop->clear();
  int laserCloudSize = laserCloud->points.size();
  for (int i = 0; i < laserCloudSize; i++) {
    point = laserCloud->points[i];

    float pointX = point.x;
    float pointY = point.y;
    float pointZ = point.z;

    float dis = sqrt((pointX - vehicleX) * (pointX - vehicleX) +
                     (pointY - vehicleY) * (pointY - vehicleY));
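    // keep only points within a height band around the vehicle (the band widens
    // with horizontal distance) and inside the terrain analysis area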
    if (pointZ - vehicleZ > minRelZ - disRatioZ * dis &&
        pointZ - vehicleZ < maxRelZ + disRatioZ * dis &&
        dis < terrainVoxelSize * (terrainVoxelHalfWidth + 1)) {
      point.x = pointX;
      point.y = pointY;
      point.z = pointZ;
      point.intensity = laserCloudTime - systemInitTime;
      laserCloudCrop->push_back(point);
    }
  }

  newlaserCloud = true;
}

Points of the current scan that satisfy these conditions are saved into laserCloudCrop, and each point's intensity is set to the time elapsed since the first scan.

Back in the main loop, for every point in laserCloudCrop, its 2D voxel grid indices indX and indY are computed:

 

// stack registered laser scans
      pcl::PointXYZI point;
      int laserCloudCropSize = laserCloudCrop->points.size();
      for (int i = 0; i < laserCloudCropSize; i++) {
        point = laserCloudCrop->points[i];
        // point.x - vehicleX is the offset of the terrain point relative to the vehicle
        int indX = int((point.x - vehicleX + terrainVoxelSize / 2) / terrainVoxelSize) + terrainVoxelHalfWidth;
        int indY = int((point.y - vehicleY + terrainVoxelSize / 2) / terrainVoxelSize) + terrainVoxelHalfWidth;
        // compensate for int() truncating toward zero when the offset is negative, e.g. -0.8 + 0.5 = -0.3, int() gives 0 but the index should be -1
        if (point.x - vehicleX + terrainVoxelSize / 2 < 0)
          indX--;
        if (point.y - vehicleY + terrainVoxelSize / 2 < 0)
          indY--;

Points whose indices are within bounds are stored in terrainVoxelCloud:

        if (indX >= 0 && indX < terrainVoxelWidth && indY >= 0 && indY < terrainVoxelWidth) {
          terrainVoxelCloud[terrainVoxelWidth * indX + indY]->push_back(point);
          terrainVoxelUpdateNum[terrainVoxelWidth * indX + indY]++;
        }
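
To sanity-check the index math, here is a tiny standalone example (terrainVoxelSize = 1.0 and terrainVoxelHalfWidth = 10 are assumed default values, not quoted from the file):

#include <cstdio>

// standalone sketch of the voxel index computation used above
int toIndex(float coord, float vehicleCoord, float voxelSize, int halfWidth) {
  int ind = int((coord - vehicleCoord + voxelSize / 2) / voxelSize) + halfWidth;
  // int() truncates toward zero, so negative offsets need an extra decrement
  if (coord - vehicleCoord + voxelSize / 2 < 0)
    ind--;
  return ind;
}

int main() {
  // a point 0.8 m behind the vehicle: -0.8 + 0.5 = -0.3 -> int() gives 0,
  // the correction makes it -1, so the final index is 10 - 1 = 9
  printf("%d\n", toIndex(-0.8f, 0.0f, 1.0f, 10)); // 9
  // a point 0.8 m ahead of the vehicle: 0.8 + 0.5 = 1.3 -> 1, index 11
  printf("%d\n", toIndex(0.8f, 0.0f, 1.0f, 10));  // 11
  return 0;
}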

Next, every voxel in terrainVoxelCloud is traversed. A voxel is only processed when the number of points newly accumulated in it exceeds a threshold, or when too much time has passed since its last update (the latter is triggered for later scans that arrive far apart in time):

1. Downsample the voxel's point cloud terrainVoxelCloudPtr into laserCloudDwz, then clear() terrainVoxelCloudPtr.

2. For each point in laserCloudDwz, put it back into terrainVoxelCloudPtr only if it satisfies the conditions listed in the code below.

That is, each voxel of terrainVoxelCloud keeps only the points that satisfy those conditions.

for (int ind = 0; ind < terrainVoxelNum; ind++) {
        // process a voxel only when it has accumulated at least voxelPointUpdateThre
        // points, or when it has not been updated for longer than voxelTimeUpdateThre
        // (systemInitTime is the time of the first laser scan)
        if (terrainVoxelUpdateNum[ind] >= voxelPointUpdateThre ||
            laserCloudTime - systemInitTime - terrainVoxelUpdateTime[ind] >= voxelTimeUpdateThre || clearingCloud) {
          pcl::PointCloud<pcl::PointXYZI>::Ptr terrainVoxelCloudPtr = terrainVoxelCloud[ind];

          laserCloudDwz->clear();
          downSizeFilter.setInputCloud(terrainVoxelCloudPtr);
          downSizeFilter.filter(*laserCloudDwz);

          terrainVoxelCloudPtr->clear();
          int laserCloudDwzSize = laserCloudDwz->points.size();
          for (int i = 0; i < laserCloudDwzSize; i++) {
            point = laserCloudDwz->points[i];
            float dis = sqrt((point.x - vehicleX) * (point.x - vehicleX) + (point.y - vehicleY) * (point.y - vehicleY));
            // A point is kept in the voxel (and later placed into terrainCloud for
            // ground segmentation) only if it satisfies all of the following:
            // 1. its height relative to the vehicle is above the minimum threshold
            // 2. its height relative to the vehicle is below the maximum threshold
            // 3. it is newer than decayTime, or it is closer than noDecayDis
            // 4. clearing is not active, or the point lies outside the clearing distance
            if (point.z - vehicleZ > minRelZ - disRatioZ * dis && point.z - vehicleZ < maxRelZ + disRatioZ * dis &&
                (laserCloudTime - systemInitTime - point.intensity < decayTime || dis < noDecayDis) &&
                !(dis < clearingDis && clearingCloud)) {
              // keep only the points that satisfy the conditions above
              terrainVoxelCloudPtr->push_back(point);
            }
          }

 

All points in terrainVoxelCloud are then gathered into terrainCloud;

and every point in terrainCloud is traversed:

1. Compute the point's 2D cell indices indX and indY (now at the finer planar resolution).

2. If the point's Z value relative to the vehicle is within the thresholds, spread it into a 3x3 neighborhood of cells and record its elevation in planarPointElev.

3. Take the vector from the current vehicle position to the point, rotate it into the sensor frame, and evaluate the point's elevation angle in that frame; when the angle lies between the two thresholds, increment planarVoxelDyObs[planarVoxelWidth * indX + indY] (a sketch of this step follows the code excerpt below).

// Build a second 2D grid that stores the elevation of each point, this time at a finer resolution (the cell size drops from 1 to 0.2)
      int terrainCloudSize = terrainCloud->points.size();
      for (int i = 0; i < terrainCloudSize; i++) {
        point = terrainCloud->points[i];

        int indX = int((point.x - vehicleX + planarVoxelSize / 2) / planarVoxelSize) + planarVoxelHalfWidth;
        int indY = int((point.y - vehicleY + planarVoxelSize / 2) / planarVoxelSize) + planarVoxelHalfWidth;

        if (point.x - vehicleX + planarVoxelSize / 2 < 0)
          indX--;
        if (point.y - vehicleY + planarVoxelSize / 2 < 0)
          indY--;
        // for points whose relative Z value is within the thresholds, record their elevation
        if (point.z - vehicleZ > minRelZ && point.z - vehicleZ < maxRelZ) {
          // each point is not placed into a single cell only, but copied into a 3x3 neighborhood
          for (int dX = -1; dX <= 1; dX++) {
            for (int dY = -1; dY <= 1; dY++) {
              // planarVoxelWidth = 51
              if (indX + dX >= 0 && indX + dX < planarVoxelWidth &&
                  indY + dY >= 0 && indY + dY < planarVoxelWidth) {
                // elevation value
                planarPointElev[planarVoxelWidth * (indX + dX) + indY + dY].push_back(point.z);
              }
            }
          }
        }
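
Step 3 above (counting potential dynamic-obstacle points) is not part of the excerpt. The following is only a rough sketch of the idea, not the repository's code: it rotates the vehicle-to-point vector into the sensor frame by undoing yaw, pitch and roll, and counts the point when its elevation angle falls inside a band. The threshold names dyObsAngleLow and dyObsAngleHigh are hypothetical placeholders for the two thresholds mentioned above.

#include <cmath>

// hedged sketch of step 3, not the original implementation
void countDynamicObstacle(float dx, float dy, float dz,        // point minus vehicle position (world frame)
                          float roll, float pitch, float yaw,  // vehicle attitude from odometryHandler
                          float dyObsAngleLow, float dyObsAngleHigh,
                          int cellIndex, int* planarVoxelDyObs) {
  // undo yaw
  float x1 = dx * cos(yaw) + dy * sin(yaw);
  float y1 = -dx * sin(yaw) + dy * cos(yaw);
  float z1 = dz;
  // undo pitch
  float x2 = x1 * cos(pitch) - z1 * sin(pitch);
  float y2 = y1;
  float z2 = x1 * sin(pitch) + z1 * cos(pitch);
  // undo roll
  float x3 = x2;
  float y3 = y2 * cos(roll) + z2 * sin(roll);
  float z3 = -y2 * sin(roll) + z2 * cos(roll);

  // elevation angle of the point as seen from the sensor, in degrees
  float angle = atan2(z3, sqrt(x3 * x3 + y3 * y3)) * 180.0 / M_PI;
  if (angle > dyObsAngleLow && angle < dyObsAngleHigh)
    planarVoxelDyObs[cellIndex]++;
}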

 

Then every point in laserCloudCrop is traversed again, computing its indX and indY indices; when a point's elevation angle exceeds the threshold, planarVoxelDyObs[planarVoxelWidth * indX + indY] is reset to 0;

I think this can be understood as a re-check of the previous step.

if (clearDyObs) {
        for (int i = 0; i < laserCloudCropSize; i++) {
          point = laserCloudCrop->points[i];

          int indX = int((point.x - vehicleX + planarVoxelSize / 2) / planarVoxelSize) + planarVoxelHalfWidth;
          int indY = int((point.y - vehicleY + planarVoxelSize / 2) / planarVoxelSize) + planarVoxelHalfWidth;

          if (point.x - vehicleX + planarVoxelSize / 2 < 0)
            indX--;
          if (point.y - vehicleY + planarVoxelSize / 2 < 0)
            indY--;

          if (indX >= 0 && indX < planarVoxelWidth && indY >= 0 && indY < planarVoxelWidth) {
            float pointX1 = point.x - vehicleX;
            float pointY1 = point.y - vehicleY;
            float pointZ1 = point.z - vehicleZ;

            float dis1 = sqrt(pointX1 * pointX1 + pointY1 * pointY1);
            float angle1 = atan2(pointZ1 - minDyObsRelZ, dis1) * 180.0 / PI;
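            // if the current scan still sees a point at a sufficiently high elevation
            // angle in this cell, clear its dynamic-obstacle count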
            if (angle1 > minDyObsAngle) {
              planarVoxelDyObs[planarVoxelWidth * indX + indY] = 0;
            }
          }
        }
      }

 
