Obstacle Avoidance with a Stereo-Camera Obstacle Layer

1. Obtain the depth-camera point cloud

The point cloud is normally published by the camera's ROS driver. Taking the RealSense D435i as an example, change the enable_pointcloud argument in rs_rgbd.launch or rs_camera.launch from false to true:

<arg name="enable_pointcloud" default="true"/>

Point cloud topic: /camera/depth/color/points
Refresh rate: about 10 Hz

2. Point cloud filtering

The raw camera point cloud is fairly noisy, and obstacle avoidance usually only needs points within a limited distance and angular range, so the cloud must be filtered; a voxel filter is used here. The cloud is also filtered along the y axis, i.e. restricted to a certain height band, cropping out the points that hit the ground. Voxel filter node: realsense_cloud_voxel_filtering.cpp

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>
#include <iostream>
#include <pcl/filters/voxel_grid.h>

typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;
typedef pcl::PointXYZ PointXYZ;

class FilterAndPublish
{
    public:
        FilterAndPublish()
        {
            printf("Made object\n");
            pub = nh.advertise<PointCloud>("points_cloud", 1); // matches the cloud_in remap in the launch file below
            sub = nh.subscribe<PointCloud>("/camera/depth/color/points", 1, &FilterAndPublish::callback, this);
            this->thresh = 15; // This is the minimum number of points that have to occupy a voxel in order for it to survive the downsample.
        }

        void callback(const PointCloud::ConstPtr& msg)
        {
            PointCloud::Ptr cloud (new PointCloud);
            PointCloud::Ptr cloud_filtered (new PointCloud);
            *cloud = *msg;

            // What to do here: 
            // 1. Take cloud and put it in a voxel grid while restricting the bounding box
            // 2. Go through the voxels and remove all points in a voxel that has less than this.thresh points
            // 3. Publish resulting cloud

            pcl::VoxelGrid<PointXYZ> vox;
            vox.setInputCloud(cloud);
            // Leaf size is the voxel edge length (5 cm here); note that it affects
            // what a sensible per-voxel point threshold is.
            vox.setLeafSize(0.05f, 0.05f, 0.05f);
            // Filter only along the camera's y axis (height): keep a band that crops
            // out ground hits, as described above. Without setFilterFieldName the
            // limits below would be ignored.
            vox.setFilterFieldName("y");
            vox.setFilterLimits(-1.0, 1.0);
            // The line below is perhaps the most important as it reduces ghost points.
            vox.setMinimumPointsNumberPerVoxel(this->thresh);
            vox.filter(*cloud_filtered);
            
            pub.publish (cloud_filtered);
        }

    private:
        ros::NodeHandle nh;
        ros::Publisher pub;
        ros::Subscriber sub;
        int thresh;

};


int main(int argc, char** argv)
{
    ros::init(argc, argv, "pcl_filtering");
    FilterAndPublish f;
    ros::spin();
    return 0;
}

Subscribed topic: /camera/depth/color/points; published topic: points_cloud

3. Point cloud to laser scan

Convert the cropped point cloud to LaserScan data with the pointcloud_to_laserscan package (melodic branch: pointcloud_to_laserscan_melodic). A few parameters in the launch file deserve attention. sample_node.launch:

<?xml version="1.0"?>
<launch>
    <arg name="camera" default="camera" />
    <!-- run pointcloud_to_laserscan node -->
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
        <remap from="cloud_in" to="points_cloud"/>
        <remap from="scan" to="$(arg camera)/scan"/>
        <rosparam>
            target_frame: camera_link 
            transform_tolerance: 0.01
            min_height: -0.15
            max_height: 0.7
            # angular limits: -M_PI/2 .. M_PI/2 (a narrower ±1.0467, i.e. ±60°, also works)
            angle_min: -1.5708   
            angle_max: 1.5708 
            angle_increment: 0.0087 
            scan_time: 0.3333
            range_min: 0.25
            range_max: 3.0
            use_inf: true
            inf_epsilon: 1.0
            # Concurrency level: affects the number of point clouds queued for
            # processing and the number of threads used.
            # 0: detect number of cores; 1: single-threaded; >=2: parallelism level
            concurrency_level: 1
        </rosparam>
    </node>
</launch>

Subscribed topic: points_cloud; published topic: /camera/scan

4. Configure the costmap plugins

(1) costmap_common_params.yaml

camera_layer:
  enabled: true
  max_obstacle_height: 2.0
  mark_threshold: 0
  combination_method: 1
  inflation_radius: 0.5 #1.5
  track_unknown_space: true 
  obstacle_range: 2.5 # obstacles are only marked if they lie within this range
  raytrace_range: 3.0 # obstacles no longer observed within this range are cleared
  observation_sources: laser_scan_sensor
  laser_scan_sensor:
    data_type: LaserScan
    topic: camera/scan
    marking: true
    clearing: true
    inf_is_valid: true

(2) global_costmap_params.yaml

global_costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 1.0
  publish_frequency: 1.0
  static_map: true
  transform_tolerance: 2.0 # 0.5
  track_unknown_space: true
  resolution: 0.05
  plugins:
    - { name: static_layer, type: "costmap_2d::StaticLayer" }
    - { name: obstacle_layer, type: "costmap_2d::VoxelLayer" }
    - { name: camera_layer, type: "costmap_2d::VoxelLayer" }
    # - { name: range_sensor_layer, type: "range_sensor_layer::RangeSensorLayer" }
    # - { name: vectormap_layer, type: "vectormap_layer::VectorMapLayer" }
    - { name: global_inflation_layer, type: "costmap_2d::InflationLayer" }
    # - { name: polygon_obstacle_layer, type: "polygon_obstacle_layer_namespace::PolygonObstacleLayer" }

(3) local_costmap_params.yaml

local_costmap:
  global_frame: odom
  robot_base_frame: base_link
  update_frequency: 7.0
  publish_frequency: 7.0
  static_map: false # the local costmap should generally not use the static map, since that hinders local obstacle avoidance, so static_map is set to false
  rolling_window: true
  track_unknown_space: true
  width: 6.0
  height: 6.0
  resolution: 0.05
  transform_tolerance: 0.5
  plugins:
    - { name: static_layer, type: "costmap_2d::StaticLayer" }
    - { name: obstacle_layer, type: "costmap_2d::VoxelLayer" }
    - { name: camera_layer, type: "costmap_2d::VoxelLayer" }

    # - { name: range_sensor_layer, type: "range_sensor_layer::RangeSensorLayer" }
    # - { name: vectormap_layer, type: "vectormap_layer::VectorMapLayer" }
    - { name: local_inflation_layer, type: "costmap_2d::InflationLayer" }
    # - { name: polygon_obstacle_layer, type: "polygon_obstacle_layer_namespace::PolygonObstacleLayer" }

5. Verification

rospack plugins --attrib=plugin costmap_2d

This lists the costmap plugins registered on the system. If the layer types used above (costmap_2d::VoxelLayer and costmap_2d::InflationLayer) appear, the setup is available; the camera_layer output can also be inspected in rviz by displaying the local and global costmaps.
