Probabilistic Robotics Notes, Chapter 7: Mobile Robot Localization

1. Introduction

Localization can be viewed as a coordinate transformation problem: finding the transform between the global map frame and the robot's local frame.
Sensors usually cannot measure the pose directly; it has to be inferred from other data.
Data usually has to be accumulated over some period of time in order to localize.
Localization algorithms are typically tied to a specific map representation, so they come in many varieties.

2. A Taxonomy of Localization Problems

Local vs. global

Localization problems can be divided into three classes by the information available at initialization and at run time, in increasing order of difficulty:

  1. Position tracking: the initial pose is assumed known. This is a local problem, since the uncertainty is local and confined to a region around the robot's true pose.
  2. Global localization: the initial pose is unknown. In this case the pose error cannot be assumed to be bounded.
  3. Kidnapped robot problem: a variant of global localization, but harder, because the robot keeps its previous pose estimate and does not know that it is no longer valid. Handling this case is essential for recovering from localization failures.

Static vs. dynamic environments

  1. Static environments: the robot's pose is the only state variable. This case has some elegant mathematical properties and admits efficient probabilistic estimation.
  2. Dynamic environments: objects other than the robot change their position or configuration over time. Short-lived changes are best treated as noise; changes that persist and affect multiple measurements, such as people, daylight, movable furniture, or doors, are the ones that matter most.

There are two ways to cope with dynamic environments: add the dynamic objects to the state vector so that the state satisfies the Markov assumption, at the cost of higher computational load and more complex models; or filter the sensor data so that the effects of unmodeled dynamics are rejected.

Passive vs. active

The distinction is whether the localization algorithm controls the robot's motion.
In passive localization, the robot's motion is not aimed at improving localization. In active localization, the robot moves so as to reduce localization error, or to reduce the cost of moving a poorly localized robot into a hazardous place.
Purely active localization is of limited use on its own, since the robot ultimately exists to accomplish a task. Some active localization techniques are built on top of passive ones; others are integrated with the task itself.
This chapter covers only passive localization; active localization appears in later chapters.

Single robot vs. multi-robot

Single-robot localization has no communication issues.
In multi-robot localization, robots can detect one another, so one robot's belief can influence another robot's belief.

3. Markov Localization

Probabilistic localization algorithms are all variants of the Bayes filter.
Markov localization is the direct application of the Bayes filter to the localization problem; in static environments it can address global localization, position tracking, and the kidnapped robot problem.

Algorithm Markov_localization($bel(x_{t-1}), u_t, z_t, m$):
1. for all $x_t$ do
2.     $\hat{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1}, m)\, bel(x_{t-1})\, dx_{t-1}$
3.     $bel(x_t) = \eta\, p(z_t \mid x_t, m)\, \hat{bel}(x_t)$
4. endfor
5. return $bel(x_t)$

  1. For position tracking: if the initial pose is known, initialize with a narrow Gaussian centered on it.
  2. For global localization: if the initial pose is unknown, initialize with a uniform distribution over the entire state space.
  3. For other localization problems: any partial knowledge about the robot's pose can be translated into a corresponding initial belief distribution.
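A minimal sketch of this update on a discrete 1D grid (a histogram version of the algorithm above), assuming hypothetical `motion_model` and `measurement_model` callables that stand in for $p(x_t \mid u_t, x_{t-1}, m)$ and $p(z_t \mid x_t, m)$; the `initial_belief` helper covers the first two initialization choices listed above:

```python
import numpy as np

def markov_localization_step(bel, u, z, motion_model, measurement_model):
    """One Markov localization update over a discrete grid of poses.

    bel:  1D array holding bel(x_{t-1}) over grid cells (sums to 1)
    u, z: control and measurement at time t
    motion_model(x_t, u, x_prev):  returns p(x_t | u, x_prev, m)
    measurement_model(z, x_t):     returns p(z | x_t, m)
    """
    n = len(bel)
    # Prediction: bel_bar(x_t) = sum over x_{t-1} of p(x_t | u, x_{t-1}, m) * bel(x_{t-1})
    bel_bar = np.array([
        sum(motion_model(x_t, u, x_prev) * bel[x_prev] for x_prev in range(n))
        for x_t in range(n)
    ])
    # Correction: bel(x_t) = eta * p(z | x_t, m) * bel_bar(x_t)
    bel_new = np.array([measurement_model(z, x_t) * bel_bar[x_t] for x_t in range(n)])
    return bel_new / bel_new.sum()

def initial_belief(n_cells, known_cell=None, sigma=1.0):
    """Narrow Gaussian for position tracking, uniform for global localization."""
    if known_cell is None:                     # global localization: uniform
        return np.full(n_cells, 1.0 / n_cells)
    cells = np.arange(n_cells)                 # position tracking: narrow Gaussian
    bel = np.exp(-0.5 * ((cells - known_cell) / sigma) ** 2)
    return bel / bel.sum()
```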

4. Illustration of Markov Localization

The one-dimensional robot-sensing-doors example; not transcribed in detail here.

5. EKF Localization

EKF localization is a special case of Markov localization in which all distributions are assumed Gaussian and are represented by their first two moments (mean and covariance).

EKF localization assumes the map is a set of features. At time $t$ the robot observes a set of features, each observation containing a range, a bearing, and a signature $c_t^i$, so the measurement is $z_t = \{z_t^1, z_t^2, \dots\}$. For now every feature is assumed distinguishable; the indistinguishable case is treated later.
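As a sketch of how a single feature observation $z_t^i = (r, \phi)$ can be evaluated against such a feature-based map with known correspondence (the signature $c_t^i$ is only used to look up the landmark), assuming independent Gaussian noise on range and bearing; function and parameter names are hypothetical:

```python
import numpy as np

def predict_feature_measurement(pose, landmark):
    """Expected (range, bearing) of a known landmark seen from pose = (x, y, theta)."""
    x, y, theta = pose
    m_x, m_y = landmark
    dx, dy = m_x - x, m_y - y
    r_hat = np.hypot(dx, dy)                      # expected range
    phi_hat = np.arctan2(dy, dx) - theta          # expected bearing in the robot frame
    phi_hat = np.arctan2(np.sin(phi_hat), np.cos(phi_hat))  # wrap to [-pi, pi]
    return r_hat, phi_hat

def feature_likelihood(z, pose, landmark, sigma_r=0.1, sigma_phi=0.05):
    """p(z_t^i | x_t, c_t^i, m) for one observation z = (range, bearing),
    assuming independent Gaussian noise on both components."""
    r, phi = z
    r_hat, phi_hat = predict_feature_measurement(pose, landmark)
    dphi = np.arctan2(np.sin(phi - phi_hat), np.cos(phi - phi_hat))
    p_r = np.exp(-0.5 * ((r - r_hat) / sigma_r) ** 2) / (sigma_r * np.sqrt(2 * np.pi))
    p_phi = np.exp(-0.5 * (dphi / sigma_phi) ** 2) / (sigma_phi * np.sqrt(2 * np.pi))
    return p_r * p_phi
```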

5.1 Illustration

Again the robot moves in one dimension past three doors. Assume the features are distinguishable, i.e. each door carries a unique label, so the measurement model is $p(z_t \mid x_t, c_t)$, a narrow Gaussian in $x_t$, where $c_t$ takes the values 1, 2, or 3. Also assume the initial pose is known (a narrow Gaussian).

5.2 The EKF Localization Algorithm

The inputs are the previous estimate (a Gaussian), the control $u_t$, the map $m$, and the set of features $z_t = \{z_t^1, z_t^2, \dots\}$ observed at time $t$.
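The notes break off before the algorithm body. As a rough sketch under the same assumptions, here is the correction step for one range-bearing feature with known correspondence, applied after the prediction step has produced the Gaussian $(\mu, \Sigma)$; it linearizes the measurement function via its Jacobian, which is what makes it an EKF. Variable names and the measurement noise covariance `Q` are hypothetical:

```python
import numpy as np

def ekf_correct_one_feature(mu, Sigma, z, landmark, Q):
    """EKF correction for one (range, bearing) observation z of a known landmark.

    mu, Sigma: predicted mean (x, y, theta) and 3x3 covariance
    landmark:  known map feature (m_x, m_y), selected via the signature c_t^i
    Q:         2x2 measurement noise covariance
    """
    x, y, theta = mu
    m_x, m_y = landmark
    dx, dy = m_x - x, m_y - y
    q = dx ** 2 + dy ** 2
    r_hat = np.sqrt(q)
    phi_hat = np.arctan2(dy, dx) - theta
    z_hat = np.array([r_hat, np.arctan2(np.sin(phi_hat), np.cos(phi_hat))])

    # Jacobian of the (range, bearing) measurement function w.r.t. the pose
    H = np.array([[-dx / r_hat, -dy / r_hat,  0.0],
                  [ dy / q,     -dx / q,     -1.0]])

    S = H @ Sigma @ H.T + Q               # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)    # Kalman gain
    nu = np.asarray(z, dtype=float) - z_hat
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))  # wrap bearing innovation

    mu_new = np.asarray(mu, dtype=float) + K @ nu
    Sigma_new = (np.eye(3) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

With distinguishable features, this update would be applied once per observed feature $z_t^i$, using $c_t^i$ to select the corresponding landmark.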

