SLAM -- Kalman Filter and Particle Filter

I. Bayes Filter

1. Probability basics

a. Bayes' formula
b. Normalized product
Reference:
机器人学中的概率估计, pp. 11-34
 

2. Bayes filter

Define the motion and observation models:

Motion model: $\bm x_k = f(\bm x_{k-1}, \bm v_k, \bm\omega_k)$

Observation model: $\bm y_k = g(\bm x_k, \bm n_k)$

By the Chapman–Kolmogorov equation:
$$p(x_k \mid v_{1:k}, y_{0:k-1}) = \int p(x_k \mid x_{k-1}, v_{1:k}, y_{0:k-1})\, p(x_{k-1} \mid v_{1:k}, y_{0:k-1})\, \mathrm dx_{k-1}$$
$$p(x_k \mid v_{1:k}, y_{0:k}) = \mu\, p(y_k \mid x_k)\, p(x_k \mid v_{1:k}, y_{0:k-1})$$
This is computationally very expensive: inferring the current state requires carrying along the entire history of inputs and measurements.
We therefore assume the system is Markovian, i.e. the state at time k is determined by the state at time k-1:
$$p(x_k \mid x_{k-1}, v_{1:k}, y_{0:k-1}) = p(x_k \mid x_{k-1}, v_k), \qquad p(x_{k-1} \mid v_{1:k}, y_{0:k-1}) = p(x_{k-1} \mid v_{k-1}, y_{k-1})$$
This yields the posterior estimate:
$$p(x_k \mid v_k, y_k) = \mu\, p(y_k \mid x_k) \int p(x_k \mid x_{k-1}, v_k)\, p(x_{k-1} \mid v_{k-1}, y_{k-1})\, \mathrm dx_{k-1}$$
With the Markov assumption in place, the extended Kalman filter can be derived from the Bayes filter.
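As a concrete illustration of the recursion, here is a minimal sketch of a Bayes filter on a discretized 1-D state space; the grid size, the shift-by-one motion model, and the Gaussian-shaped likelihood are made-up assumptions for the example, not part of the derivation above.

```python
import numpy as np

# Hypothetical 1-D grid: the state lives on N discrete cells.
N = 100
belief = np.full(N, 1.0 / N)  # initial belief p(x_0)

def predict(belief, transition):
    # Chapman-Kolmogorov step: sum over x_{k-1}.
    # transition[i, j] = p(x_k = i | x_{k-1} = j, v_k)
    return transition @ belief

def update(belief_pred, likelihood):
    # Bayes correction: mu * p(y_k | x_k) * prior, with mu fixed by normalization.
    posterior = likelihood * belief_pred
    return posterior / posterior.sum()

# One filter step: deterministic "move one cell" motion, measurement near cell 42.
transition = np.roll(np.eye(N), 1, axis=0)
likelihood = np.exp(-0.5 * ((np.arange(N) - 42) / 3.0) ** 2)
belief = update(predict(belief, transition), likelihood)
```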
 

II. Kalman Filter

1. Conditional Gaussian PDF

Let $(\bm x, \bm y)$ be jointly Gaussian; the joint density is:
$$p(\bm x, \bm y) = \mathcal N\!\left(\begin{bmatrix} \bm\mu_x \\ \bm\mu_y \end{bmatrix}, \begin{bmatrix} \bm\Sigma_{xx} & \bm\Sigma_{xy} \\ \bm\Sigma_{yx} & \bm\Sigma_{yy} \end{bmatrix}\right)$$
where $\bm\Sigma_{yx} = \bm\Sigma_{xy}^T$.

By Bayes' rule:
$$p(\bm x, \bm y) = p(\bm x \mid \bm y)\, p(\bm y), \qquad p(\bm x \mid \bm y) = \frac{p(\bm x, \bm y)}{p(\bm y)}$$
Decomposing the exponent of $p(\bm x, \bm y)$:
$$\begin{aligned}
&\left(\begin{bmatrix} \bm x \\ \bm y \end{bmatrix} - \begin{bmatrix} \bm\mu_x \\ \bm\mu_y \end{bmatrix}\right)^T \begin{bmatrix} \bm\Sigma_{xx} & \bm\Sigma_{xy} \\ \bm\Sigma_{yx} & \bm\Sigma_{yy} \end{bmatrix}^{-1} \left(\begin{bmatrix} \bm x \\ \bm y \end{bmatrix} - \begin{bmatrix} \bm\mu_x \\ \bm\mu_y \end{bmatrix}\right) \\
&= \left(\bm x - \bm\mu_x - \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}(\bm y - \bm\mu_y)\right)^T \left(\bm\Sigma_{xx} - \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}\bm\Sigma_{yx}\right)^{-1} \left(\bm x - \bm\mu_x - \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}(\bm y - \bm\mu_y)\right) \\
&\quad + (\bm y - \bm\mu_y)^T \bm\Sigma_{yy}^{-1} (\bm y - \bm\mu_y)
\end{aligned}$$
From this decomposition and $p(\bm x, \bm y) = p(\bm x \mid \bm y)\, p(\bm y)$ we obtain:
$$p(\bm x \mid \bm y) = \mathcal N\!\left(\bm\mu_x + \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}(\bm y - \bm\mu_y),\ \bm\Sigma_{xx} - \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}\bm\Sigma_{yx}\right)$$
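A small numerical sketch of this conditioning formula using NumPy; the block means and covariances below are made-up values chosen only to illustrate the computation.

```python
import numpy as np

# Assumed joint Gaussian over (x, y), each 2-dimensional.
mu_x = np.array([1.0, 2.0])
mu_y = np.array([0.5, -1.0])
Sxx = np.array([[2.0, 0.3], [0.3, 1.5]])
Sxy = np.array([[0.4, 0.1], [0.0, 0.2]])
Syy = np.array([[1.0, 0.2], [0.2, 0.8]])

y_obs = np.array([0.7, -0.4])  # a particular observed y

# p(x | y) = N(mu_x + Sxy Syy^{-1} (y - mu_y),  Sxx - Sxy Syy^{-1} Syx)
K = Sxy @ np.linalg.inv(Syy)
cond_mean = mu_x + K @ (y_obs - mu_y)
cond_cov = Sxx - K @ Sxy.T  # Syx = Sxy^T
print(cond_mean)
print(cond_cov)
```

The factor $\bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}$ computed here is exactly the gain $\bm K_k$ that appears in the update steps below.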

2. Generalized Kalman update steps

$$\begin{aligned}
\bm K_k &= \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1} \\
\hat{\bm P}_k &= \bm\Sigma_{xx} - \bm\Sigma_{xy}\bm\Sigma_{yy}^{-1}\bm\Sigma_{yx} \\
\hat{\bm x}_k &= \check{\bm x}_k + \bm K_k(\bm y_k - \bm\mu_y)
\end{aligned}$$

The linear Kalman filter

State and observation equations:
Motion model: $\bm x_k = \bm A_k\bm x_{k-1} + \bm u_k + \bm\omega_k,\quad \bm\omega_k \sim \mathcal N(0, \bm Q_k)$

Observation model: $\bm y_k = \bm C_k\bm x_k + \bm n_k,\quad \bm n_k \sim \mathcal N(0, \bm R_k)$

Then:
$$\begin{aligned}
\bm K_k &= \check{\bm P}_k \bm C_k^T(\bm C_k\check{\bm P}_k\bm C_k^T + \bm R_k)^{-1} \\
\hat{\bm P}_k &= (\bm 1 - \bm K_k\bm C_k)\check{\bm P}_k \\
\hat{\bm x}_k &= \check{\bm x}_k + \bm K_k(\bm y_k - \bm C_k\check{\bm x}_k)
\end{aligned}$$
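As a sketch, one predict-and-update cycle of the linear filter above could be written as follows; the prediction $\check{\bm P}_k = \bm A_k\hat{\bm P}_{k-1}\bm A_k^T + \bm Q_k$ is the standard propagation step implied by the motion model, and all matrices are assumed to be supplied with consistent dimensions.

```python
import numpy as np

def kf_step(x_hat, P_hat, u, y, A, C, Q, R):
    """One linear Kalman filter predict + update, following the equations above."""
    # Predict: propagate the mean and covariance through x_k = A x_{k-1} + u_k + w_k.
    x_check = A @ x_hat + u
    P_check = A @ P_hat @ A.T + Q
    # Update: K_k = P_check C^T (C P_check C^T + R)^{-1}
    K = P_check @ C.T @ np.linalg.inv(C @ P_check @ C.T + R)
    x_hat_new = x_check + K @ (y - C @ x_check)
    P_hat_new = (np.eye(len(x_hat)) - K @ C) @ P_check
    return x_hat_new, P_hat_new
```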

3. Extended Kalman filter (EKF)

With nonlinear state and observation equations:

Motion model: $\bm x_k = f(\bm x_{k-1}, \bm v_k) + \bm\omega_k,\quad \bm\omega_k \sim \mathcal N(0, \bm Q_k)$

Observation model: $\bm y_k = g(\bm x_k) + \bm n_k,\quad \bm n_k \sim \mathcal N(0, \bm R_k)$

Linearize the motion and observation models:
$$\begin{aligned}
f(x_{k-1}, v_k) &\approx \check x_k + \bm F_{k-1}(x_{k-1} - \hat x_{k-1}) = \check x_k + \left.\frac{\partial f}{\partial \bm x_{k-1}}\right|_{\hat{\bm x}_{k-1}, \bm v_k}(x_{k-1} - \hat x_{k-1}) \\
g(x_k) &\approx \check y_k + \bm G_k(x_k - \check x_k) = \check y_k + \left.\frac{\partial g}{\partial \bm x_k}\right|_{\check{\bm x}_k}(x_k - \check x_k)
\end{aligned}$$
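When analytic Jacobians are inconvenient, $\bm F_{k-1}$ and $\bm G_k$ can be approximated numerically. A forward-difference sketch (the step size eps is an arbitrary assumption):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Forward-difference approximation of d(func)/dx evaluated at x."""
    fx = np.atleast_1d(func(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] += eps
        J[:, j] = (np.atleast_1d(func(x_pert)) - fx) / eps
    return J

# e.g. G_k ~ numerical_jacobian(g, x_check); for F_{k-1} wrap f as lambda x: f(x, v_k).
```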

Prediction (prior):
$$p(x_k \mid x_{k-1}, v_k) \approx \mathcal N(\check x_k + \bm F_{k-1}(x_{k-1} - \hat x_{k-1}),\ \bm Q_k)$$
Likelihood:
$$p(y_k \mid x_k) \approx \mathcal N(\check y_k + \bm G_k(x_k - \check x_k),\ \bm R_k)$$
Substituting these into the Bayes filter:

Fusing the Gaussians gives:
$$p(x_k \mid \check x_0, v_{1:k}, y_{0:k}) = \mathcal N\!\left(\check x_k + \bm K_k(y_k - \check y_k),\ (\bm 1 - \bm K_k\bm G_k)(\bm F_{k-1}\bm P_{k-1}\bm F_{k-1}^T + \bm Q_k)\right)$$
Steps:
$$\begin{aligned}
\bm K_k &= \check{\bm P}_k\bm G_k^T(\bm G_k\check{\bm P}_k\bm G_k^T + \bm R_k)^{-1} \\
\hat{\bm P}_k &= (\bm 1 - \bm K_k\bm G_k)\check{\bm P}_k \\
\hat{\bm x}_k &= \check{\bm x}_k + \bm K_k(\bm y_k - g(\check{\bm x}_k))
\end{aligned}$$
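A minimal EKF sketch corresponding to these steps; the functions f, g and their Jacobians F_jac, G_jac are assumed to be supplied by the caller and are not defined in the original text.

```python
import numpy as np

def ekf_step(x_hat, P_hat, v, y, f, F_jac, g, G_jac, Q, R):
    """One EKF predict + update following the steps above.

    f(x, v)     : nonlinear motion model
    F_jac(x, v) : Jacobian of f w.r.t. x, evaluated at (x_hat, v)
    g(x)        : nonlinear observation model
    G_jac(x)    : Jacobian of g w.r.t. x, evaluated at x_check
    """
    # Predict: x_check = f(x_hat, v), P_check = F P F^T + Q
    F = F_jac(x_hat, v)
    x_check = f(x_hat, v)
    P_check = F @ P_hat @ F.T + Q
    # Update: K_k = P_check G^T (G P_check G^T + R)^{-1}
    G = G_jac(x_check)
    K = P_check @ G.T @ np.linalg.inv(G @ P_check @ G.T + R)
    x_hat_new = x_check + K @ (y - g(x_check))
    P_hat_new = (np.eye(len(x_hat)) - K @ G) @ P_check
    return x_hat_new, P_hat_new
```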

 

III. Particle Filter

The particle filter algorithm

1. First draw M samples from the joint density of the prior and the process noise:
$$\left[\hat x_{k-1,m}\quad \omega_{k,m}\right]$$

2. Pass each prior particle, together with its noise sample, through the nonlinear motion model:
$$\check x_{k,m} = f(\hat x_{k-1,m}, v_k, \omega_{k,m})$$
3. Correct the posterior using the measurement $y_k$:
Reweight each particle according to the measurement error $\lVert g(\check x_{k,m}, 0) - y_k\rVert$, assigning it a weight $\omega_{k,m}$:
$$\omega_{k,m} = \frac{p(\check x_{k,m} \mid v_k, y_k)}{p(\check x_{k,m} \mid v_k, y_{k-1})} = \mu\, p(y_k \mid \check x_{k,m})$$
Here we can take $p(y_k \mid \check x_{k,m}) = p(y_k \mid \check y_{k,m})$ with $\check y_{k,m} = g(\check x_{k,m}, 0)$, so the weight follows a Gaussian density:
$$q(y_k \mid \check x_{k,m}) = \frac{1}{(2\pi)^{\frac n 2}(\det \bm R_k)^{\frac 1 2}}\exp\!\left(-\frac 1 2(y_k - \check y_{k,m})^T \bm R_k^{-1}(y_k - \check y_{k,m})\right)$$
The closer $\check y_{k,m}$ is to $y_k$, the larger the weight.

4. Importance resampling
Keep the particles with large weights and regenerate the particle set from them.
Reference:
https://blog.csdn.net/qq_29796781/article/details/80259339?utm_source=app&app_version=5.0.1&code=app_1562916241&uLinkId=usr1mkqgl919blen
5. Expectation (state estimate):
First normalize the weights:
$$\bar\omega_{k,m} = \frac{\omega_{k,m}}{\sum\limits_{i=1}^{M}\omega_{k,i}}$$
Then the estimate is the weighted mean of the predicted particles:
$$\hat x_k = \sum_{m=1}^{M}\bar\omega_{k,m}\,\check x_{k,m}$$
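Putting steps 1-5 together, one particle filter iteration could look like the sketch below; f, g, Q and R are assumed to be provided by the caller, and multinomial resampling is used as one common choice for the resampling step.

```python
import numpy as np

def pf_step(particles, v, y, f, g, Q, R, rng=None):
    """One particle filter iteration for an (M, n) array of particles."""
    rng = np.random.default_rng() if rng is None else rng
    M, n = particles.shape
    # Steps 1-2: draw process-noise samples and propagate through the motion model.
    noise = rng.multivariate_normal(np.zeros(n), Q, size=M)
    pred = np.array([f(particles[m], v, noise[m]) for m in range(M)])
    # Step 3: weight each particle by the Gaussian measurement likelihood.
    y_pred = np.array([g(pred[m]) for m in range(M)])
    err = y - y_pred
    R_inv = np.linalg.inv(R)
    w = np.exp(-0.5 * np.einsum('mi,ij,mj->m', err, R_inv, err))
    # Step 5: normalized weights and the weighted-mean state estimate.
    w_bar = w / w.sum()
    x_hat = w_bar @ pred
    # Step 4: importance resampling -- keep high-weight particles, regenerate the set.
    idx = rng.choice(M, size=M, p=w_bar)
    return pred[idx], x_hat
```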
