Extended Kalman Filter

The extended Kalman filter is essentially the Bayes filter with a few approximations added to make it practical: the belief and the noise are restricted to Gaussian distributions, and the motion and observation models are linearized so that the integral (and the normalized product) in the Bayes filter can be computed in closed form.
Assume the probability density function of $\boldsymbol{x}_k$ is restricted to a Gaussian:
$$p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right)=\mathcal{N}\left(\hat{\boldsymbol{x}}_k,\hat{\boldsymbol{P}}_k\right)$$
where $\hat{\boldsymbol{x}}_{k}$ is the mean and $\hat{\boldsymbol{P}}_k$ is the covariance. Next, we assume that the noise variables $\boldsymbol{w}_k$ and $\boldsymbol{n}_k$ are also Gaussian:
$$\begin{aligned} \boldsymbol{w}_k&\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{Q}_k) \\ \boldsymbol{n}_k&\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{R}_k) \end{aligned}$$
Note that after passing through a nonlinear function, a Gaussian PDF is in general no longer Gaussian.
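As a quick sanity check of this claim, one can push samples of a Gaussian through a nonlinear function and look at the skewness of the output (a minimal sketch; the quadratic map below is an arbitrary illustrative choice, not part of the derivation):

```python
import numpy as np

# Push Gaussian samples through a nonlinear function and check that the
# output distribution is skewed, i.e. no longer Gaussian.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)     # x ~ N(0, 1)

y = x**2 + x                                         # arbitrary nonlinear map

skewness = np.mean((y - y.mean())**3) / y.std()**3   # would be ~0 for a Gaussian
print(f"mean={y.mean():.3f}, std={y.std():.3f}, skewness={skewness:.3f}")
```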

Linearizing the motion model:

$$\boldsymbol{x}_{k}=f\left( \boldsymbol{x}_{k-1},\boldsymbol{u}_{k},\boldsymbol{w}_{k}\right) \approx \check{\boldsymbol{x}}_{k}+\boldsymbol{F}_{k-1}\left( \boldsymbol{x}_{k-1}-\hat{\boldsymbol{x}}_{k-1} \right) +\boldsymbol{w}_{k}'$$
where
$$\begin{aligned} \check{\boldsymbol{x}}_{k} &= f\left( \hat{\boldsymbol{x}}_{k-1},\boldsymbol{u}_{k},\boldsymbol{0}\right) \\ \boldsymbol{F}_{k-1} &= \left. \dfrac{\partial f\left( \boldsymbol{x}_{k-1},\boldsymbol{u}_{k},\boldsymbol{w}_{k}\right) }{\partial \boldsymbol{x}_{k-1}}\right\rvert_{\hat{\boldsymbol{x}}_{k-1},\boldsymbol{u}_{k},\boldsymbol{0}}\\ \boldsymbol{w}_{k}' &= \left. \dfrac{\partial f\left( \boldsymbol{x}_{k-1},\boldsymbol{u}_{k},\boldsymbol{w}_{k}\right) }{\partial \boldsymbol{w}_{k}}\right\rvert_{\hat{\boldsymbol{x}}_{k-1},\boldsymbol{u}_{k},\boldsymbol{0}} \boldsymbol{w}_{k}=\boldsymbol{L}_k\boldsymbol{w}_k \end{aligned}\tag{state predict}$$
Given the past state and the latest input, the statistics of the current state $\boldsymbol{x}_k$ are:
$$\begin{aligned} E\left( \boldsymbol{x}_{k}\right) &\approx \check{\boldsymbol{x}}_{k}+\boldsymbol{F}_{k-1}\underbrace{E\left( \boldsymbol{x}_{k-1}-\hat{\boldsymbol{x}}_{k-1}\right)}_{E(\boldsymbol{x}_{k-1})-\hat{\boldsymbol{x}}_{k-1}=\boldsymbol{0}} +\underbrace{E\left( \boldsymbol{w}_{k}'\right) }_{\boldsymbol{0}}\\ E\left[ \left( \boldsymbol{x}_{k}-E\left( \boldsymbol{x}_{k}\right) \right) \left( \boldsymbol{x}_{k}-E\left( \boldsymbol{x}_{k}\right) \right)^{T}\right] &\approx \underbrace{E\left[ \boldsymbol{w}_{k}'\boldsymbol{w}_{k}'^{T}\right]}_{\boldsymbol{Q}'_{k}}\\ p\left( \boldsymbol{x}_{k}\vert \boldsymbol{x}_{k-1},\boldsymbol{u}_{k}\right) &\approx \mathcal{N}\left( \check{\boldsymbol{x}}_{k}+\boldsymbol{F}_{k-1}\left( \boldsymbol{x}_{k-1}-\hat{\boldsymbol{x}}_{k-1}\right) ,\boldsymbol{Q}'_{k}\right) \end{aligned}$$
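As a concrete illustration of these Jacobians (a hypothetical example, not taken from the text above), consider a planar unicycle-style motion model with state $(p_x, p_y, \theta)$ and input $(v, \omega)$, where the noise $\boldsymbol{w}_k$ enters through the inputs:

```python
import numpy as np

def motion_model(x, u, w, dt=0.1):
    """x = (px, py, theta), u = (v, omega), w = noise on the inputs."""
    px, py, th = x
    v, om = u + w
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + om * dt])

def motion_jacobians(x_hat, u, dt=0.1):
    """F = df/dx and L = df/dw, both evaluated at (x_hat, u, w = 0)."""
    _, _, th = x_hat
    v, _ = u
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    L = np.array([[np.cos(th) * dt, 0.0],
                  [np.sin(th) * dt, 0.0],
                  [0.0,             dt]])
    return F, L

x_hat = np.array([1.0, 2.0, 0.3])
u = np.array([1.0, 0.1])
Q = np.diag([0.05**2, 0.01**2])          # covariance of w
F, L = motion_jacobians(x_hat, u)
Q_prime = L @ Q @ L.T                    # Q'_k = L_k Q_k L_k^T
x_check = motion_model(x_hat, u, np.zeros(2))
```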

Linearizing the observation model:

$$\boldsymbol{y}_{k}=\boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) \approx \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) +\boldsymbol{n}_{k}'$$
where
$$\begin{aligned} \check{\boldsymbol{y}}_{k}&=\boldsymbol{h}\left( \check{\boldsymbol{x}}_{k},\boldsymbol{0}\right) \\ \boldsymbol{H}_{k} &= \left.\dfrac{\partial \boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) }{\partial \boldsymbol{x}_{k}}\right\vert_{\check{\boldsymbol{x}}_{k},\boldsymbol{0}}\\ \boldsymbol{n}_{k}' &= \left.\dfrac{\partial \boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) }{\partial \boldsymbol{n}_{k}}\right\vert_{\check{\boldsymbol{x}}_{k},\boldsymbol{0}}\boldsymbol{n}_{k} \end{aligned}$$
Given the current state, the statistics of the current observation $\boldsymbol{y}_k$ are:
$$\begin{aligned} E\left( \boldsymbol{y}_{k}\right) &\approx \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) +\underbrace{E\left( \boldsymbol{n}_{k}'\right) }_{\boldsymbol{0}}\\ E\left[\left( \boldsymbol{y}_{k}-E\left( \boldsymbol{y}_{k}\right) \right) \left( \boldsymbol{y}_{k}-E\left( \boldsymbol{y}_{k}\right) \right)^{T}\right]&\approx \underbrace{E\left[ \boldsymbol{n}'_{k}{\boldsymbol{n}'}_{k}^{T}\right] }_{\boldsymbol{R}_{k}'}\\ p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right) &\approx \mathcal{N}\left( \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) ,\boldsymbol{R}_{k}'\right) \end{aligned}$$
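As the matching observation-side illustration (again hypothetical), a range–bearing measurement of a known landmark yields the following $\boldsymbol{H}_k$; since the noise is additive here, the noise Jacobian is the identity and $\boldsymbol{R}'_k=\boldsymbol{R}_k$:

```python
import numpy as np

def obs_model(x, n, landmark):
    """Range and bearing to a known landmark; n is additive noise."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]]) + n

def obs_jacobians(x_check, landmark):
    """H = dh/dx and M = dh/dn, evaluated at (x_check, n = 0)."""
    dx, dy = landmark[0] - x_check[0], landmark[1] - x_check[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    M = np.eye(2)        # additive noise, so R'_k = M R_k M^T = R_k
    return H, M
```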

Substituting into the Bayes filter framework

From the Bayes filter, we know:
$$\begin{aligned} \overbrace{p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right)}^{\text{posterior at time }k} &=\eta\, \overbrace{p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right)}^{\text{observation model at time }k}\; \overbrace{p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}^{\text{prior at time }k}\\ \underbrace{p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right)}_{\mathcal{N}\left( \hat{\boldsymbol{x}}_{k},\hat{\boldsymbol{P}}_{k}\right)} &=\overbrace{\eta \underbrace{p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right)}_{\mathcal{N}\left( \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_k\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) ,\boldsymbol{R}'_k\right)} \times \overbrace{\int \underbrace{p\left( \boldsymbol{x}_{k}\vert \boldsymbol{x}_{k-1},\boldsymbol{u}_{k}\right)}_{\mathcal{N}\left( \check{\boldsymbol{x}}_{k}+\boldsymbol{F}_{k-1}\left( \boldsymbol{x}_{k-1}-\hat{\boldsymbol{x}}_{k-1}\right) ,\boldsymbol{Q}_{k}'\right)} \underbrace{p\left( \boldsymbol{x}_{k-1}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k-1},\boldsymbol{y}_{0:k-1}\right)}_{\mathcal{N}\left( \hat{\boldsymbol{x}}_{k-1},\hat{\boldsymbol{P}}_{k-1}\right) } {\rm d}\boldsymbol{x}_{k-1}}^{\text{prediction step}}}^{\text{update step}} \end{aligned}$$

From the identity for passing a Gaussian distribution through a (linearized) transformation, we know:
$$\underbrace{p\left( \boldsymbol{y}\right)}_{\mathcal{N}(\boldsymbol{\mu}_y,\,\boldsymbol{R}+\boldsymbol{G}\boldsymbol{\Sigma}_{xx}\boldsymbol{G}^T)} =\int_{-\infty }^{+\infty }\underbrace{p\left( \boldsymbol{y}\vert \boldsymbol{x}\right)}_{\mathcal{N}(\boldsymbol{\mu}_{y}+\boldsymbol{G}\left( \boldsymbol{x}-\boldsymbol{\mu}_x\right),\,\boldsymbol{R})}\;\underbrace{p\left( \boldsymbol{x}\right)}_{\mathcal{N}(\boldsymbol{\mu}_x,\,\boldsymbol{\Sigma}_{xx})}\,{\rm d}\boldsymbol{x}$$

Making the following substitutions in this identity:
$$\begin{aligned} \boldsymbol{x} \to \boldsymbol{x}_{k-1} &\quad \boldsymbol{y}\to \boldsymbol{x}_k\\ \boldsymbol{\mu}_x\to\hat{\boldsymbol{x}}_{k-1} &\quad \boldsymbol{\mu}_y \to \check{\boldsymbol{x}}_k\\ \boldsymbol{G} \to \boldsymbol{F}_{k-1} &\quad \boldsymbol{R} \to \boldsymbol{Q}'_k\\ \boldsymbol{\Sigma}_{xx} \to \hat{\boldsymbol{P}}_{k-1}&\quad \boldsymbol{\Sigma}_{yy}\to\check{\boldsymbol{P}}_k \end{aligned}$$
we obtain the result of the integral of the two Gaussians under the integral sign (i.e., the prior estimate at step $k$ produced by the prediction step):
$$\underbrace{p\left(\boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}(\check{\boldsymbol{x}}_k,\,\boldsymbol{Q}'_k+\boldsymbol{F}_{k-1}\hat{\boldsymbol{P}}_{k-1} \boldsymbol{F}_{k-1}^T)} = \int \underbrace{p\left( \boldsymbol{x}_{k}\vert \boldsymbol{x}_{k-1},\boldsymbol{u}_{k}\right)}_{\mathcal{N}\left( \check{\boldsymbol{x}}_{k}+\boldsymbol{F}_{k-1}\left( \boldsymbol{x}_{k-1}-\hat{\boldsymbol{x}}_{k-1}\right) ,\,\boldsymbol{Q}_{k}'\right)} \underbrace{p\left( \boldsymbol{x}_{k-1}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k-1},\boldsymbol{y}_{0:k-1}\right)}_{\mathcal{N}\left( \hat{\boldsymbol{x}}_{k-1},\,\hat{\boldsymbol{P}}_{k-1}\right) } {\rm d}\boldsymbol{x}_{k-1}$$
$$\underbrace{p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}\left( \hat{\boldsymbol{x}}_{k},\,\hat{\boldsymbol{P}}_{k}\right)} =\eta \underbrace{p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right)}_{\boldsymbol{y}_k\sim\mathcal{N}\left( \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_k\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) ,\,\boldsymbol{R}'_k\right)} \times \underbrace{p\left(\boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}(\check{\boldsymbol{x}}_k,\,\boldsymbol{Q}'_k+\boldsymbol{F}_{k-1}\hat{\boldsymbol{P}}_{k-1} \boldsymbol{F}_{k-1}^T)}$$
The prior covariance matrix is
$$\check{\boldsymbol{P}}_{k}=\boldsymbol{Q}_{k}'+\boldsymbol{F}_{k-1}\hat{\boldsymbol{P}}_{k-1}\boldsymbol{F}_{k-1}^{T}\tag{cov predict}$$
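Collecting (state predict) and (cov predict), the prediction step can be sketched as follows (`f` and `jac_f` stand for a user-supplied motion model and its Jacobians, as in the earlier illustrative snippet):

```python
import numpy as np

def ekf_predict(x_hat, P_hat, u, f, jac_f, Q):
    """EKF prediction: x_check = f(x_hat, u, 0), P_check = F P_hat F^T + Q'.

    f(x, u, w)       : nonlinear motion model
    jac_f(x_hat, u)  : returns (F, L) evaluated at (x_hat, u, w = 0)
    Q                : covariance of the process noise w
    """
    F, L = jac_f(x_hat, u)
    x_check = f(x_hat, u, np.zeros(Q.shape[0]))   # mean pushed through f
    P_check = F @ P_hat @ F.T + L @ Q @ L.T       # Q' = L Q L^T
    return x_check, P_check
```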

p ( y k ∣ x k ) p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right) p(ykxk)看做以 x k \boldsymbol{x}_k xk为变量的函数:
$$\begin{aligned} p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right)&=\rho\exp\left(-\tfrac{1}{2}\left(\boldsymbol{y}_k-\left( \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_k\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right)\right)\right)^T {\boldsymbol{R}'}_k^{-1} \left(\boldsymbol{y}_k-\left( \check{\boldsymbol{y}}_{k}+\boldsymbol{H}_k\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right)\right)\right)\right)\\ &=\rho\exp\left(-\tfrac{1}{2}\left(\boldsymbol{H}_k \boldsymbol{x}_{k}-\left(\boldsymbol{H}_k\check{\boldsymbol{x}}_{k}+(\boldsymbol{y}_k-\check{\boldsymbol{y}}_{k})\right)\right)^T {\boldsymbol{R}'}_k^{-1} \left(\boldsymbol{H}_k \boldsymbol{x}_{k}-\left(\boldsymbol{H}_k\check{\boldsymbol{x}}_{k}+(\boldsymbol{y}_k-\check{\boldsymbol{y}}_{k})\right)\right)\right)\\ &=\rho\exp\left(-\tfrac{1}{2}\left(\boldsymbol{x}_{k}-\left(\check{\boldsymbol{x}}_{k}+ \boldsymbol{H}_k^{-1} (\boldsymbol{y}_k-\check{\boldsymbol{y}}_{k})\right)\right)^T\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\left(\boldsymbol{x}_{k}-\left(\check{\boldsymbol{x}}_{k}+ \boldsymbol{H}_k^{-1} (\boldsymbol{y}_k-\check{\boldsymbol{y}}_{k})\right)\right)\right) \end{aligned}$$
If $\boldsymbol{H}_k$ is invertible, the expression can be transformed as follows:
$$\underbrace{p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}\left( \hat{\boldsymbol{x}}_{k},\,\hat{\boldsymbol{P}}_{k}\right)} =\eta \underbrace{p\left( \boldsymbol{y}_{k}\vert \boldsymbol{x}_{k}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}\left( \check{\boldsymbol{x}}_{k}+\boldsymbol{H}_k^{-1}\left( \boldsymbol{y}_{k}-\check{\boldsymbol{y}}_{k}\right) ,\,\left(\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\right)^{-1}\right)} \times \underbrace{p\left(\boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}_{\boldsymbol{x}_k\sim\mathcal{N}(\check{\boldsymbol{x}}_k,\,\boldsymbol{Q}'_k+\boldsymbol{F}_{k-1}\hat{\boldsymbol{P}}_{k-1} \boldsymbol{F}_{k-1}^T)}$$
However, $\boldsymbol{H}_k$ may not be invertible. A more rigorous approach is to use the normalized-product formula for Gaussian random variables under a linear transformation, which gives

$$\begin{aligned} \hat{\boldsymbol{P}}_{k}^{-1} &= \check{\boldsymbol{P}}_k^{-1}+\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k &(1) \\ \hat{\boldsymbol{P}}_{k}^{-1}\hat{\boldsymbol{x}}_{k} &=\check{\boldsymbol{P}}_k^{-1}\check{\boldsymbol{x}}_k+ \boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \left( \boldsymbol{H}_k\check{\boldsymbol{x}}_{k}+\left( \boldsymbol{y}_{k}-\check{\boldsymbol{y}}_{k}\right)\right) &(2) \end{aligned}$$
Applying identity (2) of the matrix inversion lemma, $$\left(\boldsymbol{D}+\boldsymbol{C}\boldsymbol{A}\boldsymbol{B}\right)^{-1} \equiv \boldsymbol{D}^{-1}-\boldsymbol{D}^{-1}\boldsymbol{C}\left( \boldsymbol{A}^{-1}+\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right)^{-1}\boldsymbol{B}\boldsymbol{D}^{-1},$$ we get
$$\begin{aligned} \hat{\boldsymbol{P}}_{k} &=\left(\check{\boldsymbol{P}}_k^{-1}+\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\right)^{-1} \\ &= \check{\boldsymbol{P}}_{k}-\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\left( \boldsymbol{R}_{k}'+\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\right)^{-1}\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\\ &=\left( \boldsymbol{I}-\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\left( {\boldsymbol{R}'}_{k}+\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\right)^{-1}\boldsymbol{H}_{k}\right) \check{\boldsymbol{P}}_{k} \end{aligned}$$
Substituting the normalized-product equation (1) into (2) and simplifying gives
$$\begin{aligned} \boldsymbol{I} &= \hat{\boldsymbol{P}}_{k}\left(\check{\boldsymbol{P}}_k^{-1}+\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\right)\\ \hat{\boldsymbol{x}}_{k}&=\hat{\boldsymbol{P}}_{k}\left(\check{\boldsymbol{P}}_{k}^{-1}+ \boldsymbol{H}_{k}^{T} {\boldsymbol{R}'}_{k}^{-1}\boldsymbol{H}_{k}\right) \check{\boldsymbol{x}}_{k}+\hat{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}{\boldsymbol{R}'}_{k}^{-1}\left( \boldsymbol{y}_{k}-\check{\boldsymbol{y}}_{k}\right)\\ &=\check{\boldsymbol{x}}_{k}+\hat{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}{\boldsymbol{R}'}_{k}^{-1}\left( \boldsymbol{y}_{k}-\check{\boldsymbol{y}}_{k}\right)\\ &=\check{\boldsymbol{x}}_{k}+\left(\check{\boldsymbol{P}}_k^{-1}+\boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\right)^{-1}\boldsymbol{H}_{k}^{T}{\boldsymbol{R}'}_{k}^{-1}\left( \boldsymbol{y}_{k}-\check{\boldsymbol{y}}_{k}\right) \end{aligned}$$
By identity (4) of the matrix inversion lemma, $$\left( \boldsymbol{D}+\boldsymbol{C}\boldsymbol{A}\boldsymbol{B}\right)^{-1}\boldsymbol{C}\boldsymbol{A} \equiv \boldsymbol{D}^{-1}\boldsymbol{C}\left( \boldsymbol{A}^{-1}+\boldsymbol{B}\boldsymbol{D}^{-1}\boldsymbol{C}\right)^{-1},$$ we obtain
$$\left(\check{\boldsymbol{P}}_k^{-1} + \boldsymbol{H}_k^T {\boldsymbol{R}'}_k^{-1} \boldsymbol{H}_k\right)^{-1}\boldsymbol{H}_{k}^{T}{\boldsymbol{R}'}_{k}^{-1}=\check{\boldsymbol{P}}_{k}\boldsymbol{H}_k^{T}\left( \boldsymbol{R}_{k}'+\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\right)^{-1}$$
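Both identities are easy to verify numerically; the sketch below uses random symmetric positive-definite matrices purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2                                    # state / measurement dimensions

A = rng.standard_normal((n, n)); P_check = A @ A.T + n * np.eye(n)   # SPD prior cov
B = rng.standard_normal((m, m)); R_prime = B @ B.T + m * np.eye(m)   # SPD meas. cov
H = rng.standard_normal((m, n))

# Information form vs. gain form of the posterior covariance.
P_hat_info = np.linalg.inv(np.linalg.inv(P_check) + H.T @ np.linalg.inv(R_prime) @ H)
K = P_check @ H.T @ np.linalg.inv(R_prime + H @ P_check @ H.T)
P_hat_gain = (np.eye(n) - K @ H) @ P_check
print(np.allclose(P_hat_info, P_hat_gain))     # True

# Identity (4): (P^-1 + H^T R'^-1 H)^-1 H^T R'^-1 == P H^T (R' + H P H^T)^-1
print(np.allclose(P_hat_info @ H.T @ np.linalg.inv(R_prime), K))     # True
```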
Now combine this with the earlier prediction equations (state predict) and (cov predict).

Define the Kalman gain as
$$\boldsymbol{K}_k=\check{\boldsymbol{P}}_{k}\boldsymbol{H}_k^{T}\left( \boldsymbol{R}_{k}'+\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\right)^{-1}$$
Then the update step is:
$$\begin{aligned} \hat{\boldsymbol{P}}_{k} &=\left( \boldsymbol{I}-\boldsymbol{K}_k \boldsymbol{H}_{k} \right) \underbrace{\check{\boldsymbol{P}}_{k}}_{\boldsymbol{Q}_{k}'+\boldsymbol{F}_{k-1}\hat{\boldsymbol{P}}_{k-1}\boldsymbol{F}_{k-1}^{T}}\\ \hat{\boldsymbol{x}}_k &=\underbrace{\check{\boldsymbol{x}}_{k}}_{\boldsymbol{f}\left( \hat{\boldsymbol{x}}_{k-1},\boldsymbol{u}_{k},\boldsymbol{0}\right)}+\boldsymbol{K}_k\big( \boldsymbol{y}_{k}-\underbrace{\check{\boldsymbol{y}}_{k}}_{\boldsymbol{h}\left( \check{\boldsymbol{x}}_{k},\boldsymbol{0}\right)}\big) \end{aligned}$$
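Putting the prediction and update steps together, one full EKF cycle might look like the following sketch (the same generic `f`, `h`, `jac_f`, `jac_h` interface as in the earlier snippets is assumed):

```python
import numpy as np

def ekf_step(x_hat, P_hat, u, y, f, jac_f, h, jac_h, Q, R):
    """One EKF cycle: prediction (state/cov predict) followed by the update step.

    f(x, u, w), h(x, n)         : nonlinear motion / observation models
    jac_f(x_hat, u)  -> (F, L)  : Jacobians of f at (x_hat, u, 0)
    jac_h(x_check)   -> (H, M)  : Jacobians of h at (x_check, 0)
    """
    # Prediction step
    F, L = jac_f(x_hat, u)
    x_check = f(x_hat, u, np.zeros(Q.shape[0]))
    P_check = F @ P_hat @ F.T + L @ Q @ L.T            # P_check = F P_hat F^T + Q'

    # Update step
    H, M = jac_h(x_check)
    R_prime = M @ R @ M.T
    y_check = h(x_check, np.zeros(R.shape[0]))
    K = P_check @ H.T @ np.linalg.inv(R_prime + H @ P_check @ H.T)   # Kalman gain
    x_hat_new = x_check + K @ (y - y_check)
    P_hat_new = (np.eye(len(x_hat)) - K @ H) @ P_check
    return x_hat_new, P_hat_new
```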

Two remarks on these equations:
1. The estimated mean is propagated through the full nonlinear motion and observation models.
2. The noise covariances $\boldsymbol{Q}'_k$ and $\boldsymbol{R}'_k$ contain Jacobians, because we allow the noise to enter the nonlinear models.

The EKF is not guaranteed to perform adequately on a general nonlinear system. Its main problem is that the operating point of the linearization is the mean of the estimated state rather than the true state.

Generalized Gaussian filter

In general, we start from the Gaussian posterior at time $k-1$:
$$p\left( \boldsymbol{x}_{k-1}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k-1},\boldsymbol{y}_{0:k-1}\right)=\mathcal{N}\left( \hat{\boldsymbol{x}}_{k-1},\hat{\boldsymbol{P}}_{k-1}\right)$$

and propagate it forward in time through the nonlinear motion model $\boldsymbol{f}(\cdot)$ to obtain the Gaussian prior at time $k$:
$$p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)=\mathcal{N}\left( \check{\boldsymbol{x}}_{k},\check{\boldsymbol{P}}_{k}\right)$$
This is the prediction step, which incorporates the latest input $\boldsymbol{u}_k$.

For the correction step, we use the approach of forming a joint Gaussian PDF and then factoring it for inference: write the joint Gaussian distribution of the state at time $k$ and the latest measurement as
$$p\left( \boldsymbol{x}_{k},\boldsymbol{y}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right) =\mathcal{N}\left( \begin{bmatrix} \boldsymbol{\mu}_{x,k} \\ \boldsymbol{\mu}_{y,k} \end{bmatrix},\begin{bmatrix} \boldsymbol{\Sigma}_{xx,k} & \boldsymbol{\Sigma}_{xy,k} \\ \boldsymbol{\Sigma}_{yx,k} & \boldsymbol{\Sigma}_{yy,k} \end{bmatrix}\right)$$

The conditional PDF of $\boldsymbol{x}_k$ is then:
$$p\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k}\right) = \mathcal{N}\Big( \underbrace{\boldsymbol{\mu}_{x,k}+\boldsymbol{\Sigma}_{xy,k}\boldsymbol{\Sigma}_{yy,k}^{-1}\left( \boldsymbol{y}_k-\boldsymbol{\mu}_{y,k}\right)}_{\hat{\boldsymbol{x}}_k} ,\; \underbrace{\boldsymbol{\Sigma}_{xx,k}-\boldsymbol{\Sigma}_{xy,k}\boldsymbol{\Sigma}_{yy,k}^{-1}\boldsymbol{\Sigma}_{yx,k}}_{\hat{\boldsymbol{P}}_k}\Big)$$
Defining $\hat{\boldsymbol{x}}_k$ as the mean and $\hat{\boldsymbol{P}}_k$ as the covariance, we can write the correction-step equations of the generalized Gaussian filter:
$$\begin{aligned} \boldsymbol{K}_k&=\boldsymbol{\Sigma}_{xy,k}\boldsymbol{\Sigma}_{yy,k}^{-1}\\ \hat{\boldsymbol{P}}_k&=\check{\boldsymbol{P}}_k-\boldsymbol{K}_k\boldsymbol{\Sigma}_{xy,k}^T\\ \hat{\boldsymbol{x}}_k&=\check{\boldsymbol{x}}_k+\boldsymbol{K}_k\left(\boldsymbol{y}_k-\boldsymbol{\mu}_{y,k}\right) \end{aligned}\tag{GGF update}$$

where
$$\begin{aligned} \boldsymbol{\mu}_{x,k}&= E\left(\boldsymbol{x}_k\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)=\check{\boldsymbol{x}}_{k}\\ \boldsymbol{\Sigma}_{xx,k}&=E\left((\boldsymbol{x}_k-\boldsymbol{\mu}_{x,k})(\boldsymbol{x}_k-\boldsymbol{\mu}_{x,k})^T\,\vert\, \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)=\check{\boldsymbol{P}}_{k} \end{aligned}$$

Linearizing the nonlinear observation model about an arbitrary operating point $\boldsymbol{x}_{op,k}$ gives:
$$\boldsymbol{y}_{k}=\boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) \approx \boldsymbol{y}_{op,k}+\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\boldsymbol{x}_{op,k}\right) +\boldsymbol{n}_{k}'$$
where
$$\begin{aligned} \boldsymbol{y}_{op,k}&=\boldsymbol{h}\left( \boldsymbol{x}_{op,k},\boldsymbol{0}\right) \\ \boldsymbol{H}_{k} &= \left.\dfrac{\partial \boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) }{\partial \boldsymbol{x}_{k}}\right\vert_{\boldsymbol{x}_{op,k},\boldsymbol{0}}\\ \boldsymbol{n}_{k}' &= \left.\dfrac{\partial \boldsymbol{h}\left( \boldsymbol{x}_{k},\boldsymbol{n}_{k}\right) }{\partial \boldsymbol{n}_{k}}\right\vert_{\boldsymbol{x}_{op,k},\boldsymbol{0}}\boldsymbol{n}_{k} \end{aligned}$$

$$\begin{aligned} \boldsymbol{\mu}_{y,k}&=E\left(\boldsymbol{y}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right) \\ &= \underbrace{E\left(\boldsymbol{y}_{op,k}-\boldsymbol{H}_{k}\boldsymbol{x}_{op,k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}_{\text{constant}}+\underbrace{E\left(\boldsymbol{H}_{k} \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}_{\boldsymbol{H}_{k}E\left( \boldsymbol{x}_{k}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)}+\underbrace{E\left(\boldsymbol{n}_{k}'\right)}_{\boldsymbol{0}}\\ &= \boldsymbol{y}_{op,k}-\boldsymbol{H}_{k}\boldsymbol{x}_{op,k}+\boldsymbol{H}_{k}\check{\boldsymbol{x}}_k\\ &= \boldsymbol{y}_{op,k}+\boldsymbol{H}_{k}\left(\check{\boldsymbol{x}}_k-\boldsymbol{x}_{op,k}\right)\\[6pt] \boldsymbol{\Sigma}_{yy,k}&=E\left(\left[\boldsymbol{y}_{k}-\boldsymbol{\mu}_{y,k}\right]\left[\boldsymbol{y}_{k}-\boldsymbol{\mu}_{y,k}\right]^T\,\vert\, \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right) \\ &=E\left(\underbrace{\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) \left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right)^{T}\boldsymbol{H}_{k}^{T}}_{\boldsymbol{H}_{k}E\left[\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) \left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right)^{T}\right]\boldsymbol{H}_{k}^T}+\underbrace{\boldsymbol{n}_{k}'\boldsymbol{n}_{k}'^{T}}_{E(\boldsymbol{n}_{k}'\boldsymbol{n}_{k}'^{T})}+\underbrace{\boldsymbol{n}_{k}'\left[ \boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) \right]^{T}}_{E(\boldsymbol{n}_k')E(\cdot)=\boldsymbol{0}}+\underbrace{\boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\check{\boldsymbol{x}}_{k}\right) \boldsymbol{n}_{k}'^{T}}_{E(\cdot)E(\boldsymbol{n}'_k)=\boldsymbol{0}}\,\vert\, \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right)\\ &=\boldsymbol{H}_k\check{\boldsymbol{P}}_k \boldsymbol{H}_k^T+\boldsymbol{R}_k'\\[6pt] \boldsymbol{\Sigma}_{xy,k}&=E\left[ \left( \boldsymbol{x}_{k}-\boldsymbol{\mu}_{x,k}\right) \left( \boldsymbol{y}_{k}-\boldsymbol{\mu}_{y,k}\right)^{T}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right] \\ &=E\left[ \left( \boldsymbol{x}_{k}-\boldsymbol{\mu}_{x,k}\right) \left( \boldsymbol{H}_{k}\left( \boldsymbol{x}_{k}-\boldsymbol{\mu}_{x,k}\right) \right)^T \vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right] \\ &=E\left[ \left( \boldsymbol{x}_{k}-\boldsymbol{\mu}_{x,k}\right) \left( \boldsymbol{x}_{k}-\boldsymbol{\mu}_{x,k}\right)^{T}\vert \check{\boldsymbol{x}}_{0},\boldsymbol{u}_{1:k},\boldsymbol{y}_{0:k-1}\right] \boldsymbol{H}_{k}^{T}\\ &=\check{\boldsymbol{P}}_k\boldsymbol{H}_k^T\\[6pt] \boldsymbol{\Sigma}_{yx,k}&=\boldsymbol{\Sigma}_{xy,k}^T=\boldsymbol{H}_{k}\check{\boldsymbol{P}}_k \end{aligned}$$

Substituting the four expressions above into the (GGF update) equations gives:
$$\begin{aligned} \boldsymbol{K}_k&=\check{\boldsymbol{P}}_{k}\boldsymbol{H}_k^{T}\left( \boldsymbol{R}_{k}'+\boldsymbol{H}_{k}\check{\boldsymbol{P}}_{k}\boldsymbol{H}_{k}^{T}\right)^{-1}\\ \hat{\boldsymbol{P}}_k&=\left(\boldsymbol{I}-\boldsymbol{K}_k\boldsymbol{H}_k\right)\check{\boldsymbol{P}}_k\\ \hat{\boldsymbol{x}}_k&=\check{\boldsymbol{x}}_k+\boldsymbol{K}_k\left(\boldsymbol{y}_k-\boldsymbol{y}_{op,k}-\boldsymbol{H}_k\left(\check{\boldsymbol{x}}_k-\boldsymbol{x}_{op,k}\right)\right) \end{aligned}\tag{iterate update}$$
These equations are very similar to the gain and correction equations of the extended Kalman filter; the only difference is the operating point of the linearization. If we set the operating point to the mean of the predicted prior, i.e. $\boldsymbol{x}_{op,k}=\check{\boldsymbol{x}}_k$, the two are identical.

If we evaluate the (iterate update) equations iteratively, setting the operating point in each iteration to the posterior mean from the previous iteration, we obtain a better result:
$$\hat{\boldsymbol{x}}_k\to\boldsymbol{x}_{op,k}$$
In the first iteration, we set $\boldsymbol{x}_{op,k}=\check{\boldsymbol{x}}_k$. We stop iterating once the change in $\boldsymbol{x}_{op,k}$ is sufficiently small. Note that the covariance equation only needs to be evaluated once, after the Kalman gain and the posterior state have converged, as in the sketch below.
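A minimal sketch of this iterated correction (IEKF), under the same generic `h`/`jac_h` interface as before:

```python
import numpy as np

def iekf_update(x_check, P_check, y, h, jac_h, R, tol=1e-6, max_iters=20):
    """Iterated EKF correction: relinearize h about the latest posterior mean."""
    x_op = np.array(x_check, dtype=float)       # first iteration: x_op = x_check
    for _ in range(max_iters):
        H, M = jac_h(x_op)                      # Jacobians at the operating point
        R_prime = M @ R @ M.T
        y_op = h(x_op, np.zeros(R.shape[0]))
        K = P_check @ H.T @ np.linalg.inv(R_prime + H @ P_check @ H.T)
        x_new = x_check + K @ (y - y_op - H @ (x_check - x_op))
        converged = np.linalg.norm(x_new - x_op) < tol
        x_op = x_new
        if converged:                           # stop once the change is small
            break
    P_hat = (np.eye(len(x_op)) - K @ H) @ P_check   # covariance computed once
    return x_op, P_hat
```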

Relationship between the EKF, IEKF, and MAP

The IEKF solution corresponds to a (local) maximum of the posterior. Because the correction step of the EKF is not iterated, its estimate may end up far from that local maximum.
The mean of the IEKF corresponds to the mode (maximum) of the MAP solution.
