A clear guide to robot_localization: implementing ROS robot localization on an autonomous driving mini car

Environment:

    Hardware:

        Jetson Nano / Raspberry Pi / Horizon X3

        Beijing Duzhong Robotics (渡众机器人) autonomous driving mini car

        IMU, wheel odometry (odom), UWB simulating a GPS signal

    Software:

        ROS2

        C++

Overview

The point of Kalman filtering here is to exploit the strengths of each sensor: combine all the data so that the sensors compensate for each other's weaknesses and produce a better localization result.

Today we use a miniature ROS robot car to reproduce the fused localization algorithm of a real autonomous vehicle. The outdoor RTK signal is replaced with UWB, published on the topic fix.

robot_localization is a collection of state estimation nodes, each of which implements a nonlinear state estimator for a robot moving in 3D space. It contains two state estimation nodes, ekf_localization_node and ukf_localization_node. In addition, it provides navsat_transform_node for integrating GPS data.

Advantages:

  • Fusion of an arbitrary number of sensors. The nodes do not restrict the number of input sources. For example, if your robot has several IMUs or several sources of odometry, the state estimation nodes in robot_localization can ingest all of them.
  • Support for multiple ROS message types. All state estimation nodes in robot_localization can accept nav_msgs/Odometry, sensor_msgs/Imu, geometry_msgs/PoseWithCovarianceStamped, or geometry_msgs/TwistWithCovarianceStamped messages.
  • Per-sensor input customization. If a given sensor message contains data you do not want in the state estimate, the state estimation nodes in robot_localization let you exclude that data on a per-sensor basis.
  • Continuous estimation. Every state estimation node in robot_localization begins estimating the vehicle's state as soon as it receives a single measurement. If there is a long period with no sensor data, the filter continues to estimate the robot's state through its internal motion model.
  • Can fuse GPS/BeiDou data.
  • Supports both ROS 1 and ROS 2.

All state estimation nodes track the vehicle's 15-dimensional state: (x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az).

In groups of three, these are:

  • the x/y/z coordinates (position),
  • the rotation about the x/y/z axes (orientation),
  • the linear velocity along the x/y/z axes,
  • the angular velocity about the x/y/z axes,
  • the linear acceleration along the x/y/z axes.

This is also the order used by the sensor _config arrays in the configuration below.

Uses of robot_localization

1. Extended Kalman filter fusing IMU and Odom, outputting the odom->base_link tf, using the ekf_localization_node node.

2. Extended Kalman filter fusing IMU, Odom, and a global position source (GPS/BeiDou/UWB/landmarks), outputting the tf from the global frame to base_link, using ekf_localization_node together with navsat_transform_node. A sketch of the map-frame EKF instance used in this case follows.
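A minimal sketch of that second, map-frame EKF instance, assuming it is named ekf_filter_node_map and that navsat_transform_node publishes on its default odometry/gps topic; the wheel-odometry and IMU inputs simply mirror the odom-frame instance configured in the next section:

### map-frame EKF instance for use case 2 (a sketch; node name and values are illustrative) ###
ekf_filter_node_map:
    ros__parameters:
        frequency: 20.0
        two_d_mode: true
        publish_tf: true              # publishes map->odom, so map->base_link is available through tf

        map_frame: map
        odom_frame: odom
        base_link_frame: base_link
        world_frame: map              # key difference from the odom-frame instance below

        # same wheel odometry and IMU inputs as the odom-frame instance configured below
        odom0: odom
        odom0_config: [true,  true,  false,
                       false, false, true,
                       true,  true,  false,
                       false, false, true,
                       false, false, false]

        imu0: imu_data
        imu0_config: [false, false, false,
                      false, false, true,
                      true,  true,  false,
                      false, false, true,
                      true,  true,  false]

        # global position from navsat_transform_node (UWB-simulated GPS in this setup)
        odom1: odometry/gps
        odom1_config: [true,  true,  false,   # fuse absolute x and y only
                       false, false, false,
                       false, false, false,
                       false, false, false,
                       false, false, false]
        odom1_differential: false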

 

Usage in detail

Fusing IMU and Odom

Let's start with the final configuration parameters:

### ekf config file ###
ekf_filter_node:
    ros__parameters:
        frequency: 20.0
        sensor_timeout: 0.01
        two_d_mode: true
        transform_time_offset: 0.0
        transform_timeout: 0.0
        print_diagnostics: true
        debug: false
        debug_out_file: /path/to/debug/file.txt
        permit_corrected_publication: false
        publish_acceleration: false
        publish_tf: true

        map_frame: map              # Defaults to "map" if unspecified
        odom_frame: odom            # Defaults to "odom" if unspecified
        base_link_frame: base_link  # Defaults to "base_link" if unspecified
        world_frame: odom           # Defaults to the value of odom_frame if unspecified

        odom0: odom
        odom0_config: [true,  true,  false,
                       false, false, true,
                       true, true, false,
                       false, false, true,
                       false, false, false]
        odom0_queue_size: 1
        odom0_nodelay: false
        odom0_differential: true
        odom0_relative: false
        odom0_pose_rejection_threshold: 5.0
        odom0_twist_rejection_threshold: 5.0

        imu0: imu_data
        imu0_config: [false, false, false,
                      false,  false, true,
                      true, true, false,
                      false,  false,  true,
                      true, true,  false]
        imu0_nodelay: false
        imu0_differential: false
        imu0_relative: true
        imu0_queue_size: 1
        imu0_pose_rejection_threshold: 2.8                 # Note the difference in parameter names
        imu0_twist_rejection_threshold: 2.8                #
        imu0_linear_acceleration_rejection_threshold: 0.8  #

# [ADVANCED] Some IMUs automatically remove acceleration due to gravity, and others don't. If yours doesn't, please set
# this to true, and *make sure* your data conforms to REP-103, specifically, that the data is in ENU frame.
        imu0_remove_gravitational_acceleration: true

# [ADVANCED]  The EKF and UKF models follow a standard predict/correct cycle. During prediction, if there is no
# acceleration reference, the velocity at time t+1 is simply predicted to be the same as the velocity at time t. During
# correction, this predicted value is fused with the measured value to produce the new velocity estimate. This can be
# problematic, as the final velocity will effectively be a weighted average of the old velocity and the new one. When
# this velocity is then integrated into a new pose, the result can be sluggish convergence. This effect is especially
# noticeable with LIDAR data during rotations. To get around it, users can try inflating the process_noise_covariance
# for the velocity variable in question, or decrease the  variance of the variable in question in the measurement
# itself. In addition, users can also take advantage of the control command being issued to the robot at the time we
# make the prediction. If control is used, it will get converted into an acceleration term, which will be used during
# prediction. Note that if an acceleration measurement for the variable in question is available from one of the
# inputs, the control term will be ignored.
# Whether or not we use the control input during prediction. Defaults to false.
        use_control: false
# Whether the input (assumed to be cmd_vel) is a geometry_msgs/Twist or geometry_msgs/TwistStamped message. Defaults to
# false.
        stamped_control: false
# The last issued control command will be used in prediction for this period. Defaults to 0.2.
        control_timeout: 0.2
# Which velocities are being controlled. Order is vx, vy, vz, vroll, vpitch, vyaw.
        control_config: [true, false, false, false, false, true]
# Places limits on how large the acceleration term will be. Should match your robot's kinematics.
        acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
# Acceleration and deceleration limits are not always the same for robots.
        deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
# If your robot cannot instantaneously reach its acceleration limit, the permitted change can be controlled with these
# gains
        acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
# If your robot cannot instantaneously reach its deceleration limit, the permitted change can be controlled with these
# gains
        deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
# [ADVANCED] The process noise covariance matrix can be difficult to tune, and can vary for each application, so it is
# exposed as a configuration parameter. This matrix represents the noise we add to the total error after each
# prediction step. The better the omnidirectional motion model matches your system, the smaller these values can be.
# However, if users find that a given variable is slow to converge, one approach is to increase the
# process_noise_covariance diagonal value for the variable in question, which will cause the filter's predicted error
# to be larger, which will cause the filter to trust the incoming measurement more during correction. The values are
# ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. Defaults to the matrix below if
# unspecified.
        process_noise_covariance: [1.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    1.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    10.0,   0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.03, 0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.03, 0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.01, 0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.25, 0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.25, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.04, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.01, 0.0,    0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.01, 0.0,    0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.25, 0.0,    0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.01, 0.0,    0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.01, 0.0,
                                   0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.015]
# [ADVANCED] This represents the initial value for the state estimate error covariance matrix. Setting a diagonal
# value (variance) to a large value will result in rapid convergence for initial measurements of the variable in
# question. Users should take care not to use large values for variables that will not be measured directly. The values
# are ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. Defaults to the matrix below
# if unspecified.
        initial_estimate_covariance: [1e-9, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    1e-9, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    1e-9, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    1e-9, 0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    1e-9, 0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    1e-9, 0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1e-9, 0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1e-9, 0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1e-9, 0.0,     0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    1e-9,  0.0,     0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     1e-9,  0.0,     0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     1e-9,  0.0,    0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     1e-9, 0.0,    0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    1e-9, 0.0,
                                      0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,    0.0,     0.0,     0.0,     0.0,    0.0,    1e-9]

The things to pay special attention to here are odom0_config and imu0_config, together with the parameters in the two covariance matrices at the bottom.

Look closely at odom0_config: with dual encoders, i.e. one wheel speed sensor per side, the yaw rate caused by the speed difference between the left and right wheels can be estimated, which is why vyaw is fused from the odometry. The rows are annotated below.
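Reading the two arrays against the 15-state order given earlier (the values below are copied from the configuration above; only the comments are added):

### annotated copy of the sensor configs above ###
odom0_config: [true,  true,  false,    # x,  y,  z          -> fuse x and y position
               false, false, true,     # roll, pitch, yaw    -> fuse yaw
               true,  true,  false,    # vx, vy, vz          -> fuse vx and vy
               false, false, true,     # vroll, vpitch, vyaw -> fuse yaw rate (from the wheel speed difference)
               false, false, false]    # ax, ay, az          -> wheel odometry gives no accelerations

imu0_config:  [false, false, false,    # no position from the IMU
               false, false, true,     # yaw (absolute orientation)
               true,  true,  false,    # vx, vy
               false, false, true,     # vyaw (gyro)
               true,  true,  false]    # ax, ay (accelerometer)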

The state estimation nodes in robot_localization let the user fuse an arbitrary number of sensors, so several sources can measure the same state variables, in particular the pose variables. If the robot obtains absolute orientation from multiple IMUs, or has multiple sources that estimate its absolute position, you can simply define imu0 and imu1 (with imu0_config and imu1_config) side by side, as sketched below.
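A minimal sketch of a two-IMU setup; imu_data_2 is a hypothetical topic name for a second IMU, and the config values simply repeat the imu0 settings above:

### sketch: fusing two IMUs (imu_data_2 is a hypothetical second topic) ###
imu0: imu_data
imu0_config: [false, false, false,
              false, false, true,      # absolute yaw from IMU 0
              true,  true,  false,
              false, false, true,
              true,  true,  false]

imu1: imu_data_2
imu1_config: [false, false, false,
              false, false, true,      # absolute yaw from IMU 1 as well
              true,  true,  false,
              false, false, true,
              true,  true,  false]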

Fusing a global position source (navsat_transform_node)

navsat_transform_node converts sensor_msgs/NavSatFix data (here the fix topic simulated with UWB) into the robot's world frame. Its main topics and transforms are listed below, followed by an example configuration.

Subscribed Topics

  • imu/data - sensor_msgs/Imu message with an absolute (earth-referenced) heading.

  • odometry/filtered - nav_msgs/Odometry message, the output of one of the robot_localization state estimation nodes.

  • gps/fix - sensor_msgs/NavSatFix message from the navigation satellite device (remapped here from the UWB fix topic).

Published Topics

  • odometry/gps - nav_msgs/Odometry message containing the GPS data transformed into the robot's world frame. This message can be fused directly into the state estimation nodes.

  • gps/filtered - sensor_msgs/NavSatFix message containing the robot's world-frame position transformed into GPS coordinates.

Published Transforms

  • world_frame->utm (optional) - when broadcast_utm_transform is set to true, navsat_transform_node calculates the transform from the utm frame to the frame of the input odometry data. By default, the utm frame is published as a child of the odometry frame.
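A minimal navsat_transform_node configuration sketch for this setup. The node name navsat_transform and the zero values for magnetic_declination_radians and yaw_offset are assumptions and must be adjusted for the actual site and IMU; the node's gps/fix, imu/data and odometry/filtered inputs also need to be remapped (for example in the launch file) to the UWB fix topic, the imu_data topic and the EKF output used here:

### navsat_transform_node -- a sketch; declination and yaw_offset are placeholders ###
navsat_transform:
    ros__parameters:
        frequency: 20.0
        delay: 3.0
        magnetic_declination_radians: 0.0   # set for your location
        yaw_offset: 0.0                     # add pi/2 if your IMU reads 0 facing magnetic north instead of east
        zero_altitude: true                 # keep output at z = 0, matches two_d_mode above
        broadcast_utm_transform: false      # set true to publish the world_frame->utm transform described above
        publish_filtered_gps: true          # publish gps/filtered
        use_odometry_yaw: false
        wait_for_datum: false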

I will keep updating this post.

If you have questions about autonomous driving, autonomous navigation, robotics, artificial intelligence, intelligent transportation, intelligent connected vehicles, or vehicle-road cooperation, feel free to follow 渡众机器人 (Duzhong Robotics) and visit the company website.
