Reading DP-SLAM (4)
As I said last time, the naive approach updates a huge amount of data on every iteration, on the order of several gigabytes. In the paper's words,
the naive approach is doing too much work
Let's walk through one iteration of the particle filter. First, some terminology:
When a particle is sampled at iteration i to produce a successor particle at iteration i+1:
Parent: the generation-i particle
Children: the generation-(i+1) particles
Siblings: two children with the same parent
The parameter M is the size of the occupancy grid map.
The parameter P is the number of particles.
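The parent/child/sibling bookkeeping above can be sketched in a few lines. This is only an illustration of the terminology; the class and attribute names are my own, not from the DP-SLAM code.

```python
# Minimal sketch of the particle-filter ancestry terminology.
# Names (Particle, parent, children) are illustrative assumptions.

class Particle:
    def __init__(self, pose, parent=None):
        self.pose = pose        # hypothesised robot pose at this generation
        self.parent = parent    # the generation-i particle this one was sampled from
        self.children = []      # generation-(i+1) particles sampled from this one
        if parent is not None:
            parent.children.append(self)

# A generation-i parent and two generation-(i+1) siblings:
p = Particle(pose=(0.0, 0.0, 0.0))
s1 = Particle(pose=(0.10, 0.00, 0.01), parent=p)
s2 = Particle(pose=(0.11, 0.02, -0.01), parent=p)

assert s1.parent is s2.parent   # s1 and s2 are siblings
```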
For how siblings update their maps, the paper gives this description:
Suppose the laser sweeps out an area of size A (A<<M) and consider two siblings s_1 and s_2.
Each sibling will correspond to a different robot pose and will make at most A updates to the map it inherits
from its parent.
Thus, s_1 and s_2 can differ in at most A map positions
In plain terms: although s1 and s2 are both sampled from the same parent particle, they correspond to different poses, so the updates they make to the map from the same observation also differ. And since they differ, each needs its own storage, i.e.
recording a list of changes that each particle makes to its parent's map
But this creates a new problem:
while this would solve the problem of making efficient map updates,
it would create a bad computational problem for localization
The authors give an example:
Tracking a line through the map to look for an obstacle would require:
a. working through the current particle's entire ancestry
b. consulting the stored list of differences for each particle in the ancestry
The algorithm's complexity grows:
The complexity of this operation would be linear in the number of iterations of the particles
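The cost of the diff-list scheme is easy to see in code. Below is a sketch, with hypothetical names, of the approach the paper criticises: each particle stores only the cells it changed relative to its parent, so a single map query must walk the whole ancestry chain, and the cost grows linearly with the number of iterations.

```python
# Sketch of the naive diff-list scheme (hypothetical names, not the paper's code).
# Each particle records only its own changes; reading a cell means walking up
# the ancestry until some ancestor is found that wrote that cell.

class DiffParticle:
    def __init__(self, parent=None):
        self.parent = parent
        self.diff = {}          # (x, y) -> occupancy value changed by this particle

    def query(self, cell):
        """Look up a cell by walking the ancestry: O(chain depth) per query."""
        node = self
        while node is not None:
            if cell in node.diff:
                return node.diff[cell]
            node = node.parent
        return None             # cell never observed by any ancestor

# Build a chain of 1000 generations; a query for an old cell touches every link.
root = DiffParticle()
root.diff[(0, 0)] = 1.0
leaf = root
for _ in range(1000):
    leaf = DiffParticle(parent=leaf)

print(leaf.query((0, 0)))   # 1.0, found only after walking all 1000 ancestors
```

Ray tracing for localization performs many such queries per laser reading, which is why the paper treats this per-query ancestry walk as unacceptable.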
So here is the challenge:
The challenge is, therefore, to provide data structures that permit efficient updates to the map and efficient
localization queries with time complexity that is independent of the number of iterations of the particle filter
To solve this problem, the authors write:
We call our solution to this problem Distributed Particle Mapping or DP-Mapping
Two data structures are maintained:
a. the ancestry tree
b. the map itself
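As a rough sketch of how these two structures fit together (hypothetical names; the real implementation stores a balanced tree per cell and prunes the ancestry tree so its depth is bounded by the number of particles P, not the number of iterations): a single shared grid is kept, and each cell records the observation of every particle that updated it, keyed by particle id. A particle reads a cell by finding its nearest ancestor among the ids stored there.

```python
# Simplified sketch of DP-Mapping's two structures: the ancestry tree and one
# shared map whose cells hold per-particle observations. Hypothetical names.

import itertools

_next_id = itertools.count()

class AncestryNode:
    """A node in the ancestry tree: one particle generation."""
    def __init__(self, parent=None):
        self.id = next(_next_id)
        self.parent = parent

# the map itself: cell -> {particle id: occupancy value}
grid = {}

def update(node, cell, value):
    """A particle writes only into its own slot of the shared cell."""
    grid.setdefault(cell, {})[node.id] = value

def query(node, cell):
    """Value seen by `node`: the entry of its nearest ancestor in this cell."""
    entries = grid.get(cell, {})
    while node is not None:
        if node.id in entries:
            return entries[node.id]
        node = node.parent
    return None

# Parent writes one cell; the two siblings then diverge independently.
p = AncestryNode()
update(p, (3, 4), 0.9)
s1, s2 = AncestryNode(parent=p), AncestryNode(parent=p)
update(s1, (3, 5), 0.8)

print(query(s1, (3, 4)))   # 0.9, inherited from the parent
print(query(s2, (3, 5)))   # None, s2 never observed this cell
```

Note that siblings share the parent's map for free: no copying happens at resampling time, which is exactly the waste the naive approach suffered from.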
Reading this far, I finally started to see the point:
the overall framework of this paper is still a particle filter, but by redesigning the data structures that store the particles and the map, the authors lower the complexity of the whole SLAM algorithm.
That's all for today.