IKFoM Study Notes (2): The State Update

IKFoM State Update

Notation:

- $\delta \hat x_k = x_k - \hat x_k$: the true state at time $k$ minus the nominal state at time $k$;
- $\delta \hat x^j_k = x_k - \hat x^j_k$: the true state at time $k$ minus the prior state of the $j$-th iteration at time $k$; for $j = 0$, $\hat x^j_k = \hat x_k$;
- $\delta x^j_{k+1}$: the error state of the $j$-th update.

1. Observation Model

First, the observation model:

$$
r^j_{k+1}=z_{k+1} \boxminus h(\hat x^j_{k+1},0)
= h(x_{k+1}, v_{k+1}) \boxminus h(\hat x^j_{k+1},0)
\approx D^j_{k+1}v_{k+1}+H^j_{k+1}\,\delta \hat x^j_{k+1}
$$
where

$$
H^j_{k+1}=\left.\frac{\partial\left(h(\hat x^j_{k+1} \boxplus \delta x,\, 0)\boxminus h(\hat x^j_{k+1},0)\right)}{\partial \delta x}\right|_{\delta x=0}
=\left.\frac{\partial h(\hat x^j_{k+1} \boxplus \delta x,\, 0)}{\partial \delta x}\right|_{\delta x=0}
$$

$$
D^j_{k+1}=\left.\frac{\partial\left(h(\hat x^j_{k+1} \boxplus \delta x,\, v)\boxminus h(\hat x^j_{k+1},0)\right)}{\partial v}\right|_{\delta x=0,\,v=0}
=\left.\frac{\partial h(\hat x^j_{k+1} \boxplus \delta x,\, v)}{\partial v}\right|_{\delta x=0,\,v=0}
$$
It then follows that

$$
D^j_{k+1}v_{k+1}=r^j_{k+1}-H^j_{k+1}\,\delta \hat x^j_{k+1} \sim \mathcal{N}(0,\overline R_{k+1}),\qquad
\overline R_{k+1}=D^j_{k+1} R_{k+1} (D^j_{k+1})^T
$$
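As a concrete sketch (the landmark model $h(R, v) = Rp + v$, the numbers, and the use of finite differences are all illustrative assumptions, not part of IKFoM itself), $H^j_{k+1}$ can be obtained numerically by perturbing the iterate through $\boxplus$, and $\overline R_{k+1}$ follows from $D$:

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix so that hat(w) @ p == np.cross(w, p).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    # Rodrigues' formula: exponential map from so(3) to SO(3).
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

# Hypothetical observation model: a landmark p observed as h(R, v) = R p + v.
p = np.array([1.0, 2.0, 0.5])
R_hat = so3_exp(np.array([0.1, -0.2, 0.3]))   # current iterate \hat x^j_{k+1}

def h(R):
    return R @ p

# H^j_{k+1}: finite-difference Jacobian of h(\hat x ⊞ δx, 0) at δx = 0,
# where ⊞ is the right perturbation R exp(δx).
eps = 1e-6
H = np.zeros((3, 3))
for i in range(3):
    dx = np.zeros(3)
    dx[i] = eps
    H[:, i] = (h(R_hat @ so3_exp(dx)) - h(R_hat)) / eps

# D^j_{k+1} = ∂h/∂v = I for additive noise, so R_bar = D R D^T = R here.
D = np.eye(3)
R_noise = np.diag([0.01, 0.01, 0.04])
R_bar = D @ R_noise @ D.T
```

For this particular model the analytic Jacobian is $H = -\hat R\,[p]_\times$, which the finite-difference result matches to first order.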

2. Tangent-Space Projection


$$
\delta \hat x_{k+1}=x_{k+1} \boxminus \hat x_{k+1}
=(\hat x^j_{k+1} \boxplus \delta \hat x^j_{k+1}) \boxminus \hat x_{k+1}
\approx (\hat x^j_{k+1} \boxminus \hat x_{k+1}) + (J^j_{k+1})^{-1}\,\delta \hat x^j_{k+1}
$$
where

$$
J^j_{k+1}=\left.\frac{\partial\left(((x \boxplus u) \oplus v)\boxminus y\right)}{\partial u}\right|_{x=\hat x_{k+1},\; u=\hat x^j_{k+1} \boxminus \hat x_{k+1},\; v=0,\; y=\hat x^j_{k+1}},
\qquad
\delta \hat x_{k+1} \sim \mathcal{N}(0, \hat P_{k+1})
$$

i.e. $J^j_{k+1}$ is the Jacobian relating perturbations of $\hat x_{k+1}$ to perturbations of $\hat x^j_{k+1}$.

Hence:

$$
\delta \hat x^j_{k+1}\sim \mathcal{N}\left(-J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1}),\; J^j_{k+1} \hat P_{k+1} (J^j_{k+1})^T\right)
$$
where the mean $-J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1})$ follows from setting the mean of $\delta \hat x_{k+1}$ to zero:

$$
0=(\hat x^j_{k+1} \boxminus \hat x_{k+1}) + (J^j_{k+1})^{-1}\,\delta \hat x^j_{k+1}
$$

$$
-(\hat x^j_{k+1} \boxminus \hat x_{k+1}) = (J^j_{k+1})^{-1}\,\delta \hat x^j_{k+1}
\;\Longrightarrow\;
\delta \hat x^j_{k+1}=-J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1})
$$
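A minimal numeric sketch of this projection (all matrices below are made-up placeholders): given $J^j_{k+1}$ and the accumulated drift $u = \hat x^j_{k+1} \boxminus \hat x_{k+1}$, the prior mean and covariance of $\delta \hat x^j_{k+1}$ are computed as

```python
import numpy as np

# Hypothetical values: J is near identity because the iterate stays close
# to the predicted state; u is the drift of the j-th iterate.
J = np.array([[1.0, 0.05, 0.0],
              [0.0, 1.0, 0.02],
              [0.0, 0.0, 1.0]])        # J^j_{k+1}
P_hat = np.diag([0.1, 0.1, 0.2])      # \hat P_{k+1}
u = np.array([0.03, -0.01, 0.02])     # \hat x^j_{k+1} ⊟ \hat x_{k+1}

mean_prior = -J @ u                   # mean of δ\hat x^j_{k+1}
cov_prior = J @ P_hat @ J.T           # covariance J \hat P J^T
```

The projected covariance stays symmetric positive definite, as any valid covariance must.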

3. Iterated Update

Putting the above together, the iterated error-state Kalman filter is equivalent to a maximum a posteriori (MAP) estimate of $\delta \hat x^j_{k+1}$, where:

the prior distribution is $\delta \hat x^j_{k+1}\sim \mathcal{N}\left(-J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1}),\, J^j_{k+1} \hat P_{k+1} (J^j_{k+1})^T\right)$,
and the likelihood is $D^j_{k+1}v_{k+1}\sim \mathcal{N}(0,\overline R_{k+1})$.

The MAP estimate is:

$$
\arg \max_{\delta \hat x^j_{k+1}} \log\left( \mathcal{N}(\delta \hat x^j_{k+1})\, \mathcal{N}\!\left(D^j_{k+1}v_{k+1}\mid\delta \hat x^j_{k+1}\right) \right)
= \arg \min_{\delta \hat x^j_{k+1}} g(\delta \hat x^j_{k+1}),
$$

$$
g(\delta \hat x^j_{k+1})= \left\lVert r_{k+1}^j - H_{k+1}^j\, \delta \hat x^j_{k+1}\right\rVert^2_{\overline R_{k+1}} + \left\lVert(\hat x_{k+1}^j \boxminus \hat x_{k+1}) + (J_{k+1}^j)^{-1} \delta \hat x_{k+1}^j \right\rVert^2_{\hat P_{k+1}}
$$
The Kalman update then reads:

$$
\delta x^j_{k+1}=\delta\hat x^j_{k+1}+K^j_{k+1}\left(r_{k+1}^j - H_{k+1}^j\, \delta \hat x^j_{k+1}\right)
=-J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1}) +K^j_{k+1}\left( r^j_{k+1}+ H^j_{k+1} J^j_{k+1}(\hat x^j_{k+1} \boxminus \hat x_{k+1})\right)
$$
with the Kalman gain:

$$
K^j_{k+1}=J^j_{k+1} \hat P_{k+1} (J^j_{k+1})^T (H^j_{k+1})^T\left(H^j_{k+1} J^j_{k+1} \hat P_{k+1} (J^j_{k+1})^T (H^j_{k+1})^T+ \overline R_{k+1}\right)^{-1}
$$
The state is then updated:

$$
\hat x^{j+1}_{k+1}=\hat x^{j}_{k+1} \boxplus \delta x^j_{k+1}
$$
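To make the iteration concrete, here is a toy sketch on a plain vector state, so that $\boxplus$ is ordinary addition and $J^j = I$; the scalar measurement model $h(x) = x_0 + 0.1\,x_1^2$ and all numbers are invented for illustration:

```python
import numpy as np

x_hat = np.array([0.0, 0.0])          # predicted state \hat x_{k+1}
P_hat = np.eye(2) * 0.5               # predicted covariance \hat P_{k+1}
z = np.array([1.0])                   # measurement z_{k+1}
R_bar = np.array([[0.1]])             # projected noise covariance \bar R_{k+1}

def h(x):
    # Hypothetical nonlinear measurement model.
    return np.array([x[0] + 0.1 * x[1] ** 2])

x_j = x_hat.copy()
for _ in range(5):                    # iterate (here: fixed count instead of a tolerance)
    H = np.array([[1.0, 0.2 * x_j[1]]])        # ∂h/∂x re-linearised at the iterate
    J = np.eye(2)                              # vector state: J^j = I
    r = z - h(x_j)                             # residual r^j_{k+1}
    u = x_j - x_hat                            # \hat x^j ⊟ \hat x
    S = H @ J @ P_hat @ J.T @ H.T + R_bar
    K = J @ P_hat @ J.T @ H.T @ np.linalg.inv(S)   # Kalman gain K^j_{k+1}
    dx = -J @ u + K @ (r + H @ J @ u)          # update formula from the text
    x_j = x_j + dx                             # \hat x^{j+1} = \hat x^j ⊞ δx^j
P = (np.eye(2) - K @ H) @ J @ P_hat @ J.T      # converged covariance
```

Because the drift $u$ is folded back in through $-Ju + K(r + HJu)$, the filter re-linearises $h$ at every iterate; this repeated re-linearisation is what distinguishes the iterated update from a single EKF step.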

4. State Update

After the iteration converges, first update the covariance:

$$
P_{k+1}^j =(I - K_{k+1}^j H_{k+1}^j)\, J_{k+1}^j \hat P_{k+1} (J_{k+1}^j)^\top
$$
Finally, the covariance is projected between tangent spaces. The updated covariance $\hat P^j_{k+1}$ lives in the tangent space at the converged iterate $x^{k}_{k+1|k+1}$, but the next filter step needs it as a prior covariance in the tangent space at $x^{k+1}_{k+1|k+1}$, so it is projected:

$$
P_{k+1}=L_{k+1}\hat P^j_{k+1}L_{k+1}^T
$$

In the iterated Kalman filter, $L_{k+1}=I$ typically holds once the iteration has converged.
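A trivial sketch of this final projection (numbers hypothetical); with $L_{k+1} = I$ after convergence, the covariance passes through unchanged:

```python
import numpy as np

P_j = np.diag([0.02, 0.03, 0.01])   # converged \hat P^j_{k+1}
L = np.eye(3)                        # L_{k+1} ≈ I once the iteration converged
P_next = L @ P_j @ L.T               # P_{k+1} = L \hat P^j L^T, prior for the next step
```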
