follow in rbx1

GitHub: https://github.com/robotBing/openrobot_follower.git

Video demo: http://blog.jiutucao.com:9000/upload/2018/07/color_follow.mp4

1. Basic features

1) Follow a walking person

2) Start and stop via voice commands

3) Color calibration

2. Follower implementation

1) Code:

#!/usr/bin/env python

import rospy

from roslib import message

from std_msgs.msg import String

from sensor_msgs import point_cloud2

from sensor_msgs.msg import PointCloud2

from geometry_msgs.msg import Twist

from math import copysign 

class Follower():

    status = 0  # 0 = stopped, 1 = following; set by the voice callback

    def __init__(self):

        rospy.init_node("follower")

 

        rospy.on_shutdown(self.shutdown)

 

        self.goal_z = rospy.get_param("~goal_z", 0.6)

 

        self.z_threshold = rospy.get_param("~z_threshold", 0.05)

 

        self.x_threshold = rospy.get_param("~x_threshold", 0.05)

        self.z_scale = rospy.get_param("~z_scale", 1.0)     

        self.x_scale = rospy.get_param("~x_scale", 2.5)

   

        self.max_angular_speed = rospy.get_param("~max_angular_speed", 2.0)

        self.min_angular_speed = rospy.get_param("~min_angular_speed", 0.0)

        self.max_linear_speed = rospy.get_param("~max_linear_speed", 0.3)

        self.min_linear_speed = rospy.get_param("~min_linear_speed", 0.1

        self.slow_down_factor = rospy.get_param("~slow_down_factor", 0.8)

        self.move_cmd = Twist()

        self.cmd_vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=5)

        self.depth_subscriber = rospy.Subscriber('point_cloud', PointCloud2, self.set_cmd_vel, queue_size=1)

        rospy.Subscriber('/recognizer/output', String, self.speechcb)

        rospy.loginfo("Subscribing to point cloud...")

        rospy.wait_for_message('point_cloud', PointCloud2)

        rospy.loginfo("Ready to follow!"

    def speechcb(self, msg):

        command = msg.data

        if command == 'stop':

            self.status = 0

        elif command == 'move':

            self.status = 1

    def set_cmd_vel(self, msg):

        x = y = z = n = 0

        for point in point_cloud2.read_points(msg, skip_nans=True):

            pt_x = point[0]

            pt_y = point[1]

            pt_z = point[2]

            x += pt_x

            y += pt_y

            z += pt_z

            n += 1

        if n:

            x /= n

            y /= n

            z /= n

            if (abs(z - self.goal_z) > self.z_threshold):

                linear_speed = (z - self.goal_z) * self.z_scale

                self.move_cmd.linear.x = copysign(max(self.min_linear_speed,

                                        min(self.max_linear_speed, abs(linear_speed))), linear_speed)

            else:

                self.move_cmd.linear.x *= self.slow_down_factor

               

            if (abs(x) > self.x_threshold):    

                angular_speed = -x * self.x_scale

                self.move_cmd.angular.z = copysign(max(self.min_angular_speed,

                                        min(self.max_angular_speed, abs(angular_speed))), angular_speed)

            else:

                self.move_cmd.angular.z *= self.slow_down_factor

        else:

            self.move_cmd.linear.x *= self.slow_down_factor

            self.move_cmd.angular.z *= self.slow_down_factor

        if self.status == 0:

            self.cmd_vel_pub.publish(Twist())

        else:

            self.cmd_vel_pub.publish(self.move_cmd)

    def shutdown(self):

        rospy.loginfo("Stopping the robot...")

        self.depth_subscriber.unregister()

        rospy.sleep(1)

        self.cmd_vel_pub.publish(Twist())

        rospy.sleep(1)     

if __name__ == '__main__':

    try:

        Follower()

        rospy.spin()

    except rospy.ROSInterruptException:

        rospy.loginfo("Follower node terminated.")

 

2) Basic idea

Subscribe to the point cloud (PointCloud2) topic; the cloud message carries the image width and height plus the individual points.

Compute the centroid of the person from the point cloud, then compare that centroid against the image center and the configured goal distance: a left/right offset publishes a turning command on /cmd_vel, and a front/back offset publishes a forward/backward command.
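For example, with the defaults above (goal_z = 0.6, z_scale = 1.0, x_scale = 2.5) and a hypothetical centroid at z = 0.9 m, x = 0.1 m, the control law in set_cmd_vel works out as follows (a standalone sketch of the same formulas, not part of the node):

from math import copysign

goal_z, z_scale, x_scale = 0.6, 1.0, 2.5   # defaults from the node above
min_linear, max_linear = 0.1, 0.3

z, x = 0.9, 0.1                            # hypothetical centroid of the person

# Too far away: a positive error drives the robot forward.
linear = (z - goal_z) * z_scale
linear = copysign(max(min_linear, min(max_linear, abs(linear))), linear)

# Centroid right of center: a negative angular.z turns toward it.
angular = -x * x_scale

print(round(linear, 2), round(angular, 2))  # 0.3 -0.25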

3) Launch file

  <launch>

  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen" />

  <node pkg="nodelet" type="nodelet" name="voxel_grid_z" args="load pcl/VoxelGrid pcl_manager" output="screen">

    <remap from="~input" to="/camera/depth_registered/points" />

    <remap from="~output" to="/z_filtered" />

    <rosparam>

      filter_field_name: z

      filter_limit_min: 0.3

      filter_limit_max: 1.6

      filter_limit_negative: False

      leaf_size: 0.02

    </rosparam>

  </node>

  <node pkg="nodelet" type="nodelet" name="passthrough_x" args="load pcl/PassThrough pcl_manager" output="screen">

    <remap from="~input" to="/z_filtered" />

    <remap from="~output" to="/x_filtered" />

    <rosparam>

      filter_field_name: x

      filter_limit_min: -0.3

      filter_limit_max: 0.3

      filter_limit_negative: False

    </rosparam>

  </node>

  <node pkg="nodelet" type="nodelet" name="passthrough_y" args="load pcl/PassThrough pcl_manager" output="screen">

    <remap from="~input" to="/x_filtered" />

    <remap from="~output" to="/search_cloud" />

    <rosparam>

      filter_field_name: y

      filter_limit_min: -0.5

      filter_limit_max: -0.1

      filter_limit_negative: False

    </rosparam>

  </node>

 

  <node pkg="openrobot_follow" name="follower" type="follower2.py" output="screen">

    <remap from="point_cloud" to="search_cloud" />

    <rosparam>

       goal_z: 0.8

       z_threshold: 0.025

       x_threshold: 0.025

       z_scale: 1.0

       x_scale: 3.0

       max_angular_speed: 5.0

       min_angular_speed: 0.1

       max_linear_speed: 0.4

       min_linear_speed: 0.05

    </rosparam>

  </node>

</launch>

 

 

4) Launch file explained

nodelet is the node-management mechanism that ships with ROS: it loads several nodes into a single process, roughly analogous to multithreading in Java.

 

A VoxelGrid filter downsamples the point cloud and filters it on depth (z).

 

PassThrough filters then crop the cloud along the x and y axes.

 

Filtering cuts the amount of computation without affecting the follow result.

 

Finally, the follower node is launched.

 

remap maps one topic name onto another; in effect it renames the topic.
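To sanity-check the filter chain, a minimal standalone sketch (not part of the project) can subscribe to the final /search_cloud topic and log how many points survive the three filters:

#!/usr/bin/env python
import rospy
from sensor_msgs import point_cloud2
from sensor_msgs.msg import PointCloud2

def cloud_cb(msg):
    # Count the points that remain after the VoxelGrid/PassThrough chain
    n = sum(1 for _ in point_cloud2.read_points(msg, skip_nans=True))
    rospy.loginfo("filtered cloud: %d points", n)

rospy.init_node("cloud_check")
rospy.Subscriber("/search_cloud", PointCloud2, cloud_cb, queue_size=1)
rospy.spin()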

 

5) Parameters explained

filter_field_name:   axis to filter on (x, y, or z)

filter_limit_min:    lower filter bound, i.e. the minimum detection distance

filter_limit_max:    upper filter bound, i.e. the maximum detection distance

leaf_size:           size of the voxels used for downsampling

goal_z:              distance to keep from the target

z_threshold:         dead zone on the depth error

x_threshold:         dead zone on the x offset, i.e. how far off-center the person may drift

z_scale:             gain on the distance error; controls the linear speed

x_scale:             gain on the x offset; controls the angular speed

max_angular_speed:   maximum angular speed

min_angular_speed:   minimum angular speed

max_linear_speed:    maximum linear speed

min_linear_speed:    minimum linear speed
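Note also the role of slow_down_factor (0.8 in the node, not exposed in this launch file): whenever the centroid sits inside the z_threshold / x_threshold dead zones, the previous command is multiplied by 0.8 on every cloud callback, so the robot coasts to a stop geometrically instead of braking hard. A quick sketch of that decay:

v = 0.3                  # last commanded linear speed, m/s
for _ in range(10):      # ten consecutive callbacks inside the dead zone
    v *= 0.8             # slow_down_factor
print(round(v, 3))       # ~0.032 m/s left after 10 callbacks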

 

3. Voice control

1) Overview

Use pocketsphinx, the speech-recognition package available for ROS: subscribe to the /recognizer/output topic to receive recognition results, give the follower an enable flag, and set or clear that flag inside the speech callback.

 

2) Environment setup

sudo apt-get install ros-indigo-turtlebot-bringup \

ros-indigo-turtlebot-create-desktop ros-indigo-openni-* \

ros-indigo-openni2-* ros-indigo-freenect-* ros-indigo-usb-cam \

ros-indigo-laser-* ros-indigo-hokuyo-node \

ros-indigo-audio-common gstreamer0.10-pocketsphinx \

ros-indigo-pocketsphinx ros-indigo-slam-gmapping \

ros-indigo-joystick-drivers python-rosinstall \

ros-indigo-orocos-kdl ros-indigo-python-orocos-kdl \

python-setuptools ros-indigo-dynamixel-motor-* \

libopencv-dev python-opencv ros-indigo-vision-opencv \

ros-indigo-depthimage-to-laserscan ros-indigo-arbotix-* \

ros-indigo-turtlebot-teleop ros-indigo-move-base \

ros-indigo-map-server ros-indigo-fake-localization \

ros-indigo-amcl git subversion mercurial

 

 

3) Code rewrite

 

#!/usr/bin/env python

import rospy

from roslib import message

from sensor_msgs import point_cloud2

from sensor_msgs.msg import PointCloud2

from geometry_msgs.msg import Twist

from std_msgs.msg import String

from math import copysign

class Follower():

    def __init__(self):

        rospy.init_node("follower")

        rospy.on_shutdown(self.shutdown)

        self.goal_z = rospy.get_param("~goal_z", 1.5)

        self.z_threshold = rospy.get_param("~z_threshold", 0.05)

        self.x_threshold = rospy.get_param("~x_threshold", 0.05)

        self.z_scale = rospy.get_param("~z_scale", 0.7) 

        self.x_scale = rospy.get_param("~x_scale", 2.2)

        self.max_angular_speed = rospy.get_param("~max_angular_speed", 2.0)

        self.min_angular_speed = rospy.get_param("~min_angular_speed", 0.0)

        self.max_linear_speed = rospy.get_param("~max_linear_speed", 0.3)

        self.min_linear_speed = rospy.get_param("~min_linear_speed", 0.1)

        self.slow_down_factor = rospy.get_param("~slow_down_factor", 0.8)

        self.move_cmd = Twist()

        self.cmd_vel_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)

        self.sta = 1  # 1 = following, 0 = stopped; toggled by the voice callback

        self.speech_sub = rospy.Subscriber('recognizer/output', String, self.speechcb, queue_size=1)

 

        # Subscribe to the point cloud

        self.depth_subscriber = rospy.Subscriber('point_cloud', PointCloud2, self.set_cmd_vel, queue_size=10)

 

        rospy.loginfo("Subscribing to point cloud...")

       

        # Wait for the pointcloud topic to become available

        rospy.wait_for_message('point_cloud', PointCloud2)

 

        rospy.loginfo("Ready to follow!")

 

    def speechcb(self, msg):

        if msg.data == 'stop':

            self.sta = 0

        elif msg.data == 'move':

            self.sta = 1

       

    def set_cmd_vel(self, msg):

        # Initialize the centroid coordinates point count

        x = y = z = n = 0

       

        # Read in the x, y, z coordinates of all points in the cloud

        for point in point_cloud2.read_points(msg, skip_nans=True):

            pt_x = point[0]

            pt_y = point[1]

            pt_z = point[2]

           

            x += pt_x

            y += pt_y

            z += pt_z

            n += 1

        if n:

            x /= n

            y /= n

            z /= n

 

            if (abs(z - self.goal_z) > self.z_threshold):

   

                linear_speed = (z - self.goal_z) * self.z_scale

                self.move_cmd.linear.x = copysign(max(self.min_linear_speed,

                                        min(self.max_linear_speed, abs(linear_speed))), linear_speed)

            else:

                self.move_cmd.linear.x *= self.slow_down_factor

               

            if (abs(x) > self.x_threshold):    

                angular_speed = -x * self.x_scale

                

                self.move_cmd.angular.z = copysign(max(self.min_angular_speed,

                                        min(self.max_angular_speed, abs(angular_speed))), angular_speed)

            else:

                self.move_cmd.angular.z *= self.slow_down_factor

               

        else:

            self.move_cmd.linear.x *= self.slow_down_factor

            self.move_cmd.angular.z *= self.slow_down_factor

           

     

        if self.sta == 0:

            self.cmd_vel_pub.publish(Twist())

        else:

            self.cmd_vel_pub.publish(self.move_cmd)

       

    def shutdown(self):

        rospy.loginfo("Stopping the robot...")

       

        self.depth_subscriber.unregister()

        rospy.sleep(1)

       

        self.cmd_vel_pub.publish(Twist())

        rospy.sleep(1)     

                  

if __name__ == '__main__':

    try:

        Follower()

        rospy.spin()

    except rospy.ROSInterruptException:

        rospy.loginfo("Follower node terminated.")

 

4) Launch rewrite

<launch>

 

<node name="recognizer" pkg="pocketsphinx" type="recognizer.py" output="screen">

    <param name="lm" value="$(find rbx1_speech)/config/nav_commands.lm"/>

    <param name="dict" value="$(find rbx1_speech)/config/nav_commands.dic"/>

  </node>

 

  <node pkg="nodelet" type="nodelet" name="pcl_manager" args="manager" output="screen" />

 

  <!-- Run a VoxelGrid filter on the z axis -->

  <node pkg="nodelet" type="nodelet" name="voxel_grid_z" args="load pcl/VoxelGrid pcl_manager" output="screen">

    <remap from="~input" to="/kinect2/qhd/points" />

    <remap from="~output" to="/z_filtered" />

    <rosparam>

      filter_field_name: z

      filter_limit_min: 0.3

      filter_limit_max: 1.6

      filter_limit_negative: False

      leaf_size: 0.02

    </rosparam>

  </node>

 

  <!-- Run a passthrough filter on the x axis -->

  <node pkg="nodelet" type="nodelet" name="passthrough_x" args="load pcl/PassThrough pcl_manager" output="screen">

    <remap from="~input" to="/z_filtered" />

    <remap from="~output" to="/x_filtered" />

    <rosparam>

      filter_field_name: x

      filter_limit_min: -0.3

      filter_limit_max: 0.3

      filter_limit_negative: False

    </rosparam>

  </node>

 

  <!-- Run a passthrough filter on the y axis -->

  <node pkg="nodelet" type="nodelet" name="passthrough_y" args="load pcl/PassThrough pcl_manager" output="screen">

    <remap from="~input" to="/x_filtered" />

    <remap from="~output" to="/search_cloud" />

    <rosparam>

      filter_field_name: y

      filter_limit_min: -0.5

      filter_limit_max: -0.1

      filter_limit_negative: False

    </rosparam>

  </node>

 

  <node pkg="openrobot_follow" name="follower" type="follower2.py" output="screen">

    <remap from="point_cloud" to="search_cloud" />

   

    <rosparam>

       goal_z: 1.2

       z_threshold: 0.025

       x_threshold: 0.025

       z_scale: 0.6

       x_scale: 2.1

       max_angular_speed: 3.0

       min_angular_speed: 0.1

       max_linear_speed: 0.4

       min_linear_speed: 0.05

    </rosparam>

   

  </node>

</launch>

 

 

4. Color calibration implementation

1) Overview

Use OpenCV's Python interface to process the images, and use cv_bridge to convert the camera messages published on ROS topics into image data that OpenCV can work with.
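The core conversion step looks roughly like this (a minimal sketch assuming the standard cv_bridge API and a bgr8 color stream; the /camera/rgb/image_raw topic name is an assumption, not taken from the project):

#!/usr/bin/env python
import rospy
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_cb(msg):
    try:
        # Convert the ROS Image message into an OpenCV BGR array
        frame = bridge.imgmsg_to_cv2(msg, "bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return
    # CamShift tracking would run on `frame` here
    rospy.loginfo("got a %dx%d frame", frame.shape[1], frame.shape[0])

rospy.init_node("cv_bridge_demo")
rospy.Subscriber("/camera/rgb/image_raw", Image, image_cb, queue_size=1)
rospy.spin()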

 

2) For the code, see the GitHub project.

 

 

 

3) Launch

<launch>

  <node name="recognizer" pkg="pocketsphinx" type="recognizer.py" output="screen">

    <param name="lm" value="/home/tt/my_ws/src/openrobot_follow/speech_conf/command.lm"/>

    <param name="dict" value="/home/tt/my_ws/src/openrobot_follow/speech_conf/command.dic"/>

  </node>

  <node pkg="openrobot_follow" name="color_follower" type="color_follower.py" output="screen">

<remap from="camera_info" to="/camera/rgb/camera_info" />

  <remap from="depth_image" to="/camera/depth_registered/image_raw" />

    <rosparam>

       rate: 20

       max_z: 2.0  # How far out do we want to detect

       min_z: 0.1

       goal_z: 0.7

       z_threshold: 0.1

       x_threshold: 0.3

       z_scale: 1.0 # forward/back scale

       x_scale: 1.3 # left/right scale

       max_rotation_speed: 1.0

       min_rotation_speed: 0.1

       max_linear_speed: 0.2

       min_linear_speed: 0.02

       scale_roi: 0.9

    </rosparam>

  </node>

</launch>

4) Launch file explained

rate:                 callback rate; normally leave it alone, lower it only if the video stutters

max_z:                maximum detection distance

min_z:                minimum detection distance

goal_z:               distance to keep between the robot and the target

z_threshold:          dead zone on the distance

x_threshold:          dead zone on the image: a rectangle on screen that the target should stay inside

z_scale:              gain on the linear speed

x_scale:              gain on the angular speed

max_rotation_speed:   maximum angular speed

min_rotation_speed:   minimum angular speed

max_linear_speed:     maximum linear speed

min_linear_speed:     minimum linear speed

scale_roi:            how much the region published on the /roi topic is shrunk before use
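Shrinking the ROI keeps background pixels out of the distance estimate. A rough sketch of what that looks like (a hypothetical helper for illustration; the actual code is in the GitHub project):

from sensor_msgs.msg import RegionOfInterest

def shrink_roi(roi, scale=0.9):
    # Return a copy of `roi` shrunk toward its center by `scale`
    out = RegionOfInterest()
    out.width = int(roi.width * scale)
    out.height = int(roi.height * scale)
    out.x_offset = roi.x_offset + (roi.width - out.width) // 2
    out.y_offset = roi.y_offset + (roi.height - out.height) // 2
    return out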

5) Running it

1. Start the base driver:

rosrun openrobot Node_p_chassis

2. Start the camera driver (the lab uses an Orbbec depth camera):

roslaunch astra_launch astra.launch

3. Start CamShift, which publishes /roi:

roslaunch openrobot_follow camshift.launch

Three windows now appear. In the window showing the camera image, select the object to follow; the program computes the object's color, the other two windows each display a histogram of that color, and a green box tracks the object as it moves. Choose a brightly colored object if possible.

4. Start the follower:

roslaunch openrobot_follow color_follow.launch
