GraphSLAM (3)

In GraphSLAM, the map and the path are obtained from the linearized information matrix Ω and the information vector ϵ, via the equations Σ = Ω⁻¹ and μ = Σϵ. This operation requires us to solve a system of linear equations, which raises the question of how efficiently we can recover the map estimate μ.
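To make this concrete, here is a minimal sketch (function and variable names such as `recover_mean` are illustrative, not from the original) of recovering μ by solving the linear system Ωμ = ϵ directly, rather than forming Σ = Ω⁻¹ explicitly:

```python
import numpy as np

# Minimal sketch: recover the mean mu from the information form.
# Omega is the (linearized) information matrix, epsilon the information vector.
def recover_mean(Omega, epsilon):
    # Solving Omega * mu = epsilon is cheaper and numerically more stable
    # than explicitly computing Sigma = inv(Omega) and multiplying.
    return np.linalg.solve(Omega, epsilon)

# Tiny made-up example with a symmetric positive-definite information matrix.
Omega = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
epsilon = np.array([1.0, 2.0, 3.0])
print(recover_mean(Omega, epsilon))
```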
The answer to this complexity question depends on the topology of the world. If each feature is seen only locally in time, the graph represented by the constraints is linear. Thus, Ω can be reordered so that it becomes a band-diagonal matrix, that is, all non-zero values occur near its diagonal. The equation μ = Ω⁻¹ϵ can then be computed in linear time. This intuition carries over to a cycle-free world that is traversed once, so that each feature is seen for a short, consecutive period of time.
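As a sketch of the band-diagonal case (the matrix values below are made up for illustration), one can store only the bands of a small symmetric Ω and solve for μ with SciPy's banded solver; for a fixed bandwidth this runs in time linear in the number of variables:

```python
import numpy as np
from scipy.linalg import solveh_banded

# Upper banded storage (one superdiagonal) for the symmetric matrix
#   [[4, 1, 0],
#    [1, 3, 1],
#    [0, 1, 2]]
ab = np.array([[0.0, 1.0, 1.0],   # superdiagonal, padded on the left
               [4.0, 3.0, 2.0]])  # main diagonal
epsilon = np.array([1.0, 2.0, 3.0])

mu = solveh_banded(ab, epsilon)   # O(n) for a fixed bandwidth
print(mu)
```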
The more common case, however, involves features that are observed multiple times, with large time delays in between. This might be the case because the robot goes back and forth through a corridor, or because the world possesses cycles. In either situation, there will exist features m_j that are seen at drastically different time steps x_t1 and x_t2, with t2 ≫ t1. In our constraint graph, this introduces a cyclic dependence: x_t1 and x_t2 are linked through the sequence of controls u_{t1+1}, u_{t1+2}, ..., u_{t2} and through the joint observation links between x_t1 and m_j, and x_t2 and m_j, respectively. Such links make our variable-reordering trick inapplicable, and recovering the map becomes more complex. In fact, since the inverse of Ω is multiplied with a vector, the result can be computed with optimization techniques such as conjugate gradient, without explicitly computing the full inverse matrix. Since most worlds possess cycles, this is the case of interest.
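For the cyclic case, the following sketch (with an invented small matrix purely for illustration) uses SciPy's conjugate gradient routine to compute Ω⁻¹ϵ as the solution of a sparse system, never forming the inverse:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# When the constraint graph has cycles, Omega is sparse but not band-diagonal.
# Since we only need Omega^{-1} * epsilon, an iterative solver such as
# conjugate gradient avoids computing the inverse explicitly.
Omega = csr_matrix(np.array([[4.0, 1.0, 0.0, 1.0],
                             [1.0, 3.0, 1.0, 0.0],
                             [0.0, 1.0, 2.0, 1.0],
                             [1.0, 0.0, 1.0, 3.0]]))
epsilon = np.array([1.0, 2.0, 3.0, 4.0])

mu, info = cg(Omega, epsilon)   # info == 0 means the iteration converged
print(mu, info)
```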
The GraphSLAM algorithm now employs an important factorization trick, which we can think of as propagating information through the information matrix (in fact, it is a generalization of the well-known variable elimination algorithm for matrix inversion). Suppose we would like to remove a feature m_j from the information matrix Ω and the information state ϵ. In our spring-mass model, this is equivalent to removing the node and all springs attached to this node. As we shall see below, this is possible by a remarkably simple operation: we can remove all those springs between m_j and the poses at which m_j was observed, by introducing new springs between any pair of such poses.
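One way to write this elimination step is as a Schur complement over the blocks of Ω and ϵ. The sketch below is an illustration under assumptions (dense matrices, illustrative function name `marginalize`), not the book's reduction routine verbatim:

```python
import numpy as np

# Eliminate the variables in `remove` (e.g. the indices of feature m_j) from
# the information form. The springs attached to m_j are replaced by new
# springs between the poses that observed it.
def marginalize(Omega, epsilon, remove):
    n = Omega.shape[0]
    keep = np.setdiff1d(np.arange(n), remove)
    O_kk = Omega[np.ix_(keep, keep)]
    O_kr = Omega[np.ix_(keep, remove)]
    O_rr = Omega[np.ix_(remove, remove)]
    # Schur complement: fold the removed block back into the kept block.
    gain = O_kr @ np.linalg.inv(O_rr)
    Omega_new = O_kk - gain @ Omega[np.ix_(remove, keep)]
    eps_new = epsilon[keep] - gain @ epsilon[remove]
    return Omega_new, eps_new

# Example: eliminate the last variable of a small made-up 3x3 system.
Omega = np.array([[4.0, 1.0, 1.0],
                  [1.0, 3.0, 1.0],
                  [1.0, 1.0, 2.0]])
epsilon = np.array([1.0, 2.0, 3.0])
Omega_r, eps_r = marginalize(Omega, epsilon, remove=[2])
print(Omega_r)
print(eps_r)
```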
This process is illustrated in Figure 3, which shows the removal of two map features, m_1 and m_3 (the removal of m_2 and m_4 is trivial in this example). In both cases, the feature removal modifies the link between any pair of poses from which the feature was originally observed. As illustrated in Figure 3(b), this operation may lead to the introduction of new links in the graph. In the example shown there, the removal of m_3 leads to a new link between x_2 and x_4.
[Figure 3: removal of map features m_1 and m_3 from the constraint graph]
