Derivation of the Kalman Filter Equations

Introduction

The Kalman filter (KF) is a state-space model: the state is an internal, unobserved quantity, while the observation is an external quantity that can be treated as known. The model is shown in the figure below:
[Figure: Kalman filter state-space model]
The state $X_n$ depends only on the state $X_{n-1}$ at the previous instant, not on the earlier history. Taking the linear case as an example, the model is:

$$
\left\{\begin{array}{l}
X_{n}=F_{n} X_{n-1}+V_{n} \\
Y_{n}=H_{n} X_{n}+W_{n}
\end{array}\right.
$$
where $X_n$ is the state at time $n$, $Y_n$ is the observation at time $n$, $V_n$ is the state (process) noise, and $W_n$ is the observation noise. Assume $V_n$ and $W_n$ are zero-mean, mutually independent Gaussian white noises, i.e.:

$$
\left\{\begin{array}{l}
\mathrm{E}\left(V_{n}\right)=\mathrm{E}\left(W_{n}\right)=0 \\
\mathrm{E}\left(V_{n} W_{m}^{T}\right)=0 \\
\mathrm{E}\left(W_{n} W_{m}^{T}\right)=Q_{n}\, \delta_{n m} \\
\mathrm{E}\left(V_{n} V_{m}^{T}\right)=T_{n}\, \delta_{n m}
\end{array}\right.
$$

where $Q_n$ and $T_n$ are the autocorrelation (covariance) matrices of $W_n$ and $V_n$ respectively, and $\delta_{nm}$ denotes

$$
\delta_{n m}=\left\{\begin{array}{ll}
1 & n=m \\
0 & \text{otherwise}
\end{array}\right.
$$
$F_n$, $H_n$, $Q_n$, and $T_n$ are coefficients that can be regarded as deterministic constants (they contain no randomness), whereas $X_n$, $Y_n$, $V_n$, and $W_n$ are random variables (when working with random processes, always be clear about which quantities are deterministic and which are random!). The task of the Kalman filter is to estimate the current state as accurately as possible from the noisy observations.
We briefly introduce the notion of projection. The estimate of the true state $X_n$ at time $n$ based on the observations $Y_1, Y_2, \ldots, Y_m$ of the first $m$ instants is denoted $\hat{X}_{n \mid m}$ and can be written as

$$
\hat{X}_{n \mid m}=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{m}\right\}} X_{n} \quad(n \geq m)
$$

i.e. the projection of $X_n$ onto the $m$-dimensional subspace spanned by $Y_1, Y_2, \ldots, Y_m$.
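This projection can be made concrete with a small numerical sketch (the matrix `Y` and vector `x` below are made-up illustrative values): projecting onto the span of a set of observation vectors via least squares leaves a residual orthogonal to every observation, which is the property the derivation below relies on.

```python
import numpy as np

# Illustrative values only: columns of Y play the role of the observations
# Y_1, ..., Y_m; x plays the role of the state X_n.
Y = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([2.0, 3.0, 4.0])

# Proj_{span(Y)} x: least-squares coefficients, then recombine.
coef, *_ = np.linalg.lstsq(Y, x, rcond=None)
proj = Y @ coef
resid = x - proj

# Orthogonality of the projection error to the subspace: Y^T resid ≈ 0.
print(np.round(Y.T @ resid, 10))
```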
The derivation of the Kalman filter proceeds in two steps:

  • Prediction: $\hat{X}_{n-1 \mid n-1} \rightarrow \hat{X}_{n \mid n-1}$, predict the state at time $n$ from the state estimate at time $n-1$;
  • Correction: $\hat{X}_{n \mid n-1} \rightarrow \hat{X}_{n \mid n}$, correct the predicted value using the new observation.

Prediction and Correction

Prediction: $\hat{X}_{n-1 \mid n-1} \rightarrow \hat{X}_{n \mid n-1}$

$$
\begin{aligned}
\hat{X}_{n \mid n-1} &=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} X_{n}=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}}\left(F_{n} X_{n-1}+V_{n}\right) \\
&=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}}\left(F_{n} X_{n-1}\right)+\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} V_{n} \\
&=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}}\left(F_{n} X_{n-1}\right) \\
&=F_{n} \operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} X_{n-1} \\
&=F_{n} \hat{X}_{n-1 \mid n-1}
\end{aligned}
$$

Note that $V_n$ is orthogonal to $Y_1, \ldots, Y_{n-1}$, so $\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} V_{n}=0$.
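As a minimal sketch of this prediction step (the transition matrix `F` and the previous estimate `x_est` are made-up illustrative values, here a constant-velocity model):

```python
import numpy as np

def predict_state(F, x_prev):
    """Prediction step: x̂_{n|n-1} = F_n x̂_{n-1|n-1}."""
    return F @ x_prev

# Illustrative constant-velocity model: state = (position, velocity).
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x_est = np.array([2.0, 0.5])        # x̂_{n-1|n-1}

x_pred = predict_state(F, x_est)    # x̂_{n|n-1}
print(x_pred)                       # → [2.5 0.5]
```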

Correction: $\hat{X}_{n \mid n-1} \rightarrow \hat{X}_{n \mid n}$

We split $Y_n$ into two parts, $Y_{n}=Z_{n}+O_{n}$, where

$$
\begin{aligned}
O_{n} &=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} Y_{n} \\
&=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}}\left(H_{n} X_{n}+W_{n}\right) \\
&=H_{n} \operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} X_{n} \\
&=H_{n} \hat{X}_{n \mid n-1}
\end{aligned}
$$

and $Z_{n}=Y_{n}-O_{n}=Y_{n}-H_{n} \hat{X}_{n \mid n-1}$ is the projection error, also called the innovation (the newly arrived information).
Since $Z_n$ is orthogonal to the subspace spanned by $Y_1, \ldots, Y_{n-1}$, the projection onto $\left\{Y_{1}, \ldots, Y_{n}\right\}$ decomposes as:

$$
\begin{aligned}
\hat{X}_{n \mid n} &=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n}\right\}} X_{n} \\
&=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} X_{n}+\operatorname{Proj}_{Z_{n}} X_{n} \\
&=\hat{X}_{n \mid n-1}+\operatorname{Proj}_{Z_{n}} X_{n}
\end{aligned}
$$

$\operatorname{Proj}_{Z_{n}} X_{n}$ is the correction term; since the projection onto $Z_n$ is linear in $Z_n$, it can be written as:

$$
\operatorname{Proj}_{Z_{n}} X_{n}=K_{n}\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)
$$
where $K_n$ is called the Kalman gain and is derived in the next section. The corrected estimate is therefore:

$$
\hat{X}_{n \mid n}=\hat{X}_{n \mid n-1}+K_{n}\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)
$$
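A sketch of this correction step (the gain `K`, observation matrix `H`, and all numbers are made-up illustrative values; computing `K` properly is the subject of the next section):

```python
import numpy as np

def correct_state(x_pred, K, H, y):
    """Correction step: x̂_{n|n} = x̂_{n|n-1} + K_n (Y_n - H_n x̂_{n|n-1})."""
    innovation = y - H @ x_pred      # Z_n: the new information in Y_n
    return x_pred + K @ innovation

# Illustrative values: observe the first state component only.
x_pred = np.array([2.5, 0.5])        # x̂_{n|n-1}
H = np.array([[1.0, 0.0]])
K = np.array([[0.6], [0.2]])         # assumed gain for this sketch
y = np.array([3.0])                  # observation Y_n

x_corr = correct_state(x_pred, K, H, y)
print(x_corr)                        # → [2.8 0.6]
```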

The Kalman Gain $K_n$

Note that the corrected estimate above consists of two parts: the first comes from the prediction model and the second from the observation model, with the Kalman gain $K_n$ balancing the two. A larger gain means the observation model is trusted more; a smaller gain means the prediction model is trusted more. We now solve for this gain:

$$
K_{n}=\mathrm{E}\left(X_{n} Z_{n}^{T}\right)\left(\mathrm{E}\left(Z_{n} Z_{n}^{T}\right)\right)^{-1}
$$

where $\mathrm{E}$ denotes expectation (mean) and the superscript $T$ denotes matrix transpose.

First term

$$
\begin{aligned}
\mathrm{E}\left(X_{n} Z_{n}^{T}\right) &=\mathrm{E}\left(X_{n}\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)^{T}\right) \\
&=\mathrm{E}\left(X_{n}\left(H_{n} X_{n}+W_{n}-H_{n} \hat{X}_{n \mid n-1}\right)^{T}\right) \\
&=\mathrm{E}\left(X_{n}\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T} H_{n}^{T}\right)+\mathrm{E}\left(X_{n} W_{n}^{T}\right) \\
&=\mathrm{E}\left(X_{n}\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T} H_{n}^{T}\right) \\
&=\mathrm{E}\left(X_{n}\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right) H_{n}^{T}
\end{aligned}
$$
Note that $\hat{X}_{n \mid n-1}=\operatorname{Proj}_{\left\{Y_{1}, Y_{2}, \ldots, Y_{n-1}\right\}} X_{n}$, i.e. $\hat{X}_{n \mid n-1}$ is the projection of $X_n$ onto the hyperplane spanned by $Y_1, Y_2, \ldots, Y_{n-1}$, as shown in the figure:
[Figure: projection of $X_n$ onto the observation subspace]
The projection error $X_{n}-\hat{X}_{n \mid n-1}$ is orthogonal to the projection $\hat{X}_{n \mid n-1}$ (this is the orthogonality principle: the error of the optimal estimate is orthogonal to the data the estimate is built from), i.e. $\mathrm{E}\left(\hat{X}_{n \mid n-1}\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right)=0$, so the expression above can be rewritten as:
$$
\mathrm{E}\left(X_{n} Z_{n}^{T}\right)=\mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n-1}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right) H_{n}^{T}
$$

Denoting by $\hat{R}_{n \mid n-1}=\mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n-1}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right)$ the covariance matrix of the prediction error, we obtain:

$$
\mathrm{E}\left(X_{n} Z_{n}^{T}\right)=\hat{R}_{n \mid n-1} H_{n}^{T}
$$

Second term

$$
\begin{aligned}
\mathrm{E}\left(Z_{n} Z_{n}^{T}\right) &=\mathrm{E}\left(\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)^{T}\right) \\
&=\mathrm{E}\left(\left(H_{n} X_{n}+W_{n}-H_{n} \hat{X}_{n \mid n-1}\right)\left(H_{n} X_{n}+W_{n}-H_{n} \hat{X}_{n \mid n-1}\right)^{T}\right) \\
&=H_{n} \mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n-1}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right) H_{n}^{T}+\mathrm{E}\left(W_{n} W_{n}^{T}\right) \\
&=H_{n} \hat{R}_{n \mid n-1} H_{n}^{T}+Q_{n}
\end{aligned}
$$

(the cross terms vanish because $W_n$ is independent of the prediction error $X_{n}-\hat{X}_{n \mid n-1}$).
Combining the two terms gives the Kalman gain:

$$
K_{n}=\hat{R}_{n \mid n-1} H_{n}^{T}\left(H_{n} \hat{R}_{n \mid n-1} H_{n}^{T}+Q_{n}\right)^{-1}
$$
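The gain formula translates directly into code (the covariance `R_pred`, `H`, and `Q` below are made-up illustrative values; note that in this article $Q_n$ denotes the measurement-noise covariance):

```python
import numpy as np

def kalman_gain(R_pred, H, Q):
    """K_n = R̂_{n|n-1} H_n^T (H_n R̂_{n|n-1} H_n^T + Q_n)^{-1}."""
    S = H @ R_pred @ H.T + Q         # innovation covariance E(Z_n Z_n^T)
    return R_pred @ H.T @ np.linalg.inv(S)

# Illustrative values.
R_pred = np.eye(2)                   # prediction-error covariance R̂_{n|n-1}
H = np.array([[1.0, 0.0]])           # observe the first component
Q = np.array([[1.0]])                # measurement-noise covariance Q_n

K = kalman_gain(R_pred, H, Q)
print(K)
```

With equal prediction-error and measurement-noise variances on the observed component, the gain comes out to 0.5: prediction and observation are trusted equally.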

The Prediction-Error Covariance Matrix $\hat{R}_{n \mid n-1}$

As with the state recursion, the covariance matrix $\hat{R}_{n \mid n-1}$ is computed in two steps, correction and prediction:

  • $\hat{R}_{n \mid n-1} \rightarrow \hat{R}_{n \mid n}$
  • $\hat{R}_{n \mid n} \rightarrow \hat{R}_{n+1 \mid n}$

$$
\begin{aligned}
\hat{R}_{n+1 \mid n} &=\mathrm{E}\left(\left(X_{n+1}-\hat{X}_{n+1 \mid n}\right)\left(X_{n+1}-\hat{X}_{n+1 \mid n}\right)^{T}\right) \\
&=\mathrm{E}\left(\left(F_{n+1} X_{n}+V_{n+1}-F_{n+1} \hat{X}_{n \mid n}\right)\left(F_{n+1} X_{n}+V_{n+1}-F_{n+1} \hat{X}_{n \mid n}\right)^{T}\right) \\
&=\mathrm{E}\left(\left(F_{n+1}\left(X_{n}-\hat{X}_{n \mid n}\right)+V_{n+1}\right)\left(F_{n+1}\left(X_{n}-\hat{X}_{n \mid n}\right)+V_{n+1}\right)^{T}\right) \\
&=F_{n+1} \mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n}\right)\left(X_{n}-\hat{X}_{n \mid n}\right)^{T}\right) F_{n+1}^{T}+\mathrm{E}\left(V_{n+1} V_{n+1}^{T}\right) \\
&=F_{n+1} \hat{R}_{n \mid n} F_{n+1}^{T}+T_{n+1}
\end{aligned}
$$

$$
\begin{aligned}
\hat{R}_{n \mid n} &=\mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n}\right)\left(X_{n}-\hat{X}_{n \mid n}\right)^{T}\right) \\
&=\mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n-1}-K_{n}\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)\right)\left(X_{n}-\hat{X}_{n \mid n-1}-K_{n}\left(Y_{n}-H_{n} \hat{X}_{n \mid n-1}\right)\right)^{T}\right) \\
&=\mathrm{E}\left(\left(\left(I-K_{n} H_{n}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)-K_{n} W_{n}\right)\left(\left(I-K_{n} H_{n}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)-K_{n} W_{n}\right)^{T}\right) \\
&=\left(I-K_{n} H_{n}\right) \mathrm{E}\left(\left(X_{n}-\hat{X}_{n \mid n-1}\right)\left(X_{n}-\hat{X}_{n \mid n-1}\right)^{T}\right)\left(I-K_{n} H_{n}\right)^{T}+K_{n} Q_{n} K_{n}^{T} \\
&=\left(I-K_{n} H_{n}\right) \hat{R}_{n \mid n-1}\left(I-K_{n} H_{n}\right)^{T}+K_{n} Q_{n} K_{n}^{T} \\
&=\left(I-K_{n} H_{n}\right) \hat{R}_{n \mid n-1}
\end{aligned}
$$

The last equality holds for the optimal gain: since $K_{n}\left(H_{n} \hat{R}_{n \mid n-1} H_{n}^{T}+Q_{n}\right)=\hat{R}_{n \mid n-1} H_{n}^{T}$, expanding the previous line shows that the terms $-\hat{R}_{n \mid n-1} H_{n}^{T} K_{n}^{T}$ and $K_{n}\left(H_{n} \hat{R}_{n \mid n-1} H_{n}^{T}+Q_{n}\right) K_{n}^{T}$ cancel.
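Putting the prediction and correction recursions together, a complete filter loop on a toy scalar random walk might look like this (all model parameters are made-up illustrative values; following this article's notation, `T` is the process-noise covariance and `Q` the measurement-noise covariance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar model (illustrative): X_n = X_{n-1} + V_n,  Y_n = X_n + W_n.
F = np.array([[1.0]])
H = np.array([[1.0]])
T = np.array([[1e-3]])               # process-noise covariance T_n
Q = np.array([[1e-1]])               # measurement-noise covariance Q_n

x_est = np.array([0.0])              # x̂_{0|0}
R_est = np.array([[1.0]])            # R̂_{0|0}
truth = 1.0                          # constant true state to track

for _ in range(50):
    y = np.array([truth]) + rng.normal(scale=np.sqrt(Q[0, 0]), size=1)
    # Predict: x̂_{n|n-1} = F x̂_{n-1|n-1},  R̂_{n|n-1} = F R̂_{n-1|n-1} F^T + T.
    x_pred = F @ x_est
    R_pred = F @ R_est @ F.T + T
    # Correct: gain, state, covariance.
    K = R_pred @ H.T @ np.linalg.inv(H @ R_pred @ H.T + Q)
    x_est = x_pred + K @ (y - H @ x_pred)
    R_est = (np.eye(1) - K @ H) @ R_pred

print(float(x_est[0]), float(R_est[0, 0]))
```

After 50 noisy measurements the estimate settles close to the true value, and the error covariance $\hat{R}_{n \mid n}$ shrinks from its initial value of 1 toward a small steady state.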
