Derivation of the Kalman Filter

Unlike the Wiener filter, the Kalman filter does not simply treat the system as a black box. Instead, it assumes that a state-space model can be written down for the system, which brings additional knowledge about how the system evolves into the estimation problem.

This point of view is in fact quite general: a rocket's trajectory strictly obeys the laws of dynamics, weather forecasts are governed by the physics of the atmosphere, and so on.

The Kalman filter model rests on two equations, the state equation and the observation equation of the system:
$$X_n = F_n(X_{n-1}, V_n), \qquad Y_n = H_n(X_n, W_n)$$
Here $X_n$ is the system state at time $n$, $Y_n$ is the observation at time $n$, and $V_n$, $W_n$ are the state (process) noise and the observation noise, respectively. If the system is linear and the observation depends linearly on the true state, the model can be specialized as follows:
$$X_n = F_n X_{n-1} + V_n \qquad \text{(state equation)}$$
$$Y_n = H_n X_n + W_n \qquad \text{(observation equation)}$$
To simplify the calculations, we make the following assumptions about the noise:
$$E(V_n) = E(W_n) = 0, \qquad E(V_n V_m^T) = P_n \delta_{mn}, \qquad E(W_n W_m^T) = Q_n \delta_{mn}, \qquad E(V_n W_m^T) = 0$$
where $P_n$ is the covariance matrix of the state noise and $Q_n$ that of the observation noise.
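As a concrete illustration (not part of the original derivation), the following sketch simulates this linear-Gaussian model for a hypothetical constant-velocity target; the matrices `F`, `H` and the covariances `P_proc` and `Q_obs` are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state (position, velocity) model: X_n = F X_{n-1} + V_n, Y_n = H X_n + W_n
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # state transition F_n (time-invariant in this toy example)
H = np.array([[1.0, 0.0]])          # only the position is observed
P_proc = 0.01 * np.eye(2)           # E[V_n V_n^T] = P_n  (state noise covariance)
Q_obs = np.array([[0.25]])          # E[W_n W_n^T] = Q_n  (observation noise covariance)

N = 50
X = np.zeros((N, 2))
Y = np.zeros((N, 1))
x = np.array([0.0, 1.0])            # initial true state
for n in range(N):
    x = F @ x + rng.multivariate_normal(np.zeros(2), P_proc)   # state equation
    y = H @ x + rng.multivariate_normal(np.zeros(1), Q_obs)    # observation equation
    X[n], Y[n] = x, y
```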
Our goal is an iterative algorithm that turns the state estimate at time $n-1$ into the state estimate at time $n$, i.e. $\hat{X}_{n-1|n-1} \rightarrow \hat{X}_{n|n}$.

The procedure has two stages: first use the estimate at time $n-1$ to predict the state at time $n$, i.e. $\hat{X}_{n-1|n-1} \rightarrow \hat{X}_{n|n-1}$; then use the observation at time $n$ to correct the prediction and obtain the estimate at time $n$, i.e. $\hat{X}_{n|n-1} \rightarrow \hat{X}_{n|n}$.

Prediction: $\hat{X}_{n-1|n-1} \rightarrow \hat{X}_{n|n-1}$

We first show how to predict the state at time $n$ from the observations available up to time $n-1$.

By the orthogonality principle, the optimal prediction at time $n$ is the projection of the state $X_n$ onto the space spanned by the observations $\{Y_1, \cdots, Y_{n-1}\}$:
$$
\begin{aligned}
\hat{X}_{n|n-1} &= \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}} X_n \\
&= \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}}(F_n X_{n-1} + V_n) \\
&= \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}}(F_n X_{n-1}) \\
&= F_n \, \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}} X_{n-1} \\
&= F_n \hat{X}_{n-1|n-1}
\end{aligned}
$$
where the $V_n$ term drops out because it is zero-mean and uncorrelated with $Y_1, \cdots, Y_{n-1}$.
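In code the prediction step is a single matrix-vector product; a minimal sketch with hypothetical example values for `F` and the previous estimate:

```python
import numpy as np

def predict_state(F, x_est):
    """Prediction step: x_{n|n-1} = F_n x_{n-1|n-1} (projection of X_n onto the past observations)."""
    return F @ x_est

# Hypothetical example values
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x_prev = np.array([2.0, 0.5])        # \hat X_{n-1|n-1}
x_pred = predict_state(F, x_prev)    # \hat X_{n|n-1} = F \hat X_{n-1|n-1}
```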

Correction: $\hat{X}_{n|n-1} \rightarrow \hat{X}_{n|n}$

Once the prediction is available and the observation $Y_n$ at time $n$ has arrived, the prediction can be corrected:
$$\hat{X}_{n|n} = \mathrm{Proj}_{\{Y_1,\cdots,Y_n\}} X_n$$
Y n Y_n Yn投影到 Y 1 , ⋯   , Y n − 1 {Y_1,\cdots,Y_{n-1}} Y1,,Yn1上可得 O n = P r o j { Y 1 , ⋯   , Y n − 1 } Y n O_n=Proj_{\{Y_1,\cdots,Y_{n-1}\}}Y_n On=Proj{Y1,,Yn1}Yn,则数据 Y n Y_n Yn带来的有效信息为 Z n = Y n − O n = Y n − P r o j { Y 1 , ⋯   , Y n − 1 } Y n Z_n=Y_n-O_n=Y_n-Proj_{\{Y_1,\cdots,Y_{n-1}\}}Y_n Zn=YnOn=YnProj{Y1,,Yn1}Yn,下面计算冗余信息 O n O_n On
$$
\begin{aligned}
O_n &= \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}} Y_n \\
&= \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}}(H_n X_n + W_n) \\
&= H_n \, \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}} X_n \\
&= H_n \hat{X}_{n|n-1}
\end{aligned}
$$
$$Z_n = Y_n - H_n \hat{X}_{n|n-1}$$

The projection of $X_n$ onto $Z_n$ can be written as a linear combination of $Z_n$ with gain matrix $K_n$:
$$\mathrm{Proj}_{Z_n} X_n = K_n Z_n = K_n (Y_n - H_n \hat{X}_{n|n-1})$$
Since $Z_n$ is orthogonal to the space spanned by $\{Y_1,\cdots,Y_{n-1}\}$, the orthogonality principle gives
$$
\begin{aligned}
\hat{X}_{n|n} &= \mathrm{Proj}_{\{Y_1,\cdots,Y_n\}} X_n = \mathrm{Proj}_{\{Y_1,\cdots,Y_{n-1}\}} X_n + \mathrm{Proj}_{Z_n} X_n \\
&= \hat{X}_{n|n-1} + K_n (Y_n - H_n \hat{X}_{n|n-1})
\end{aligned}
$$
In words: the filtered estimate at time $n$ equals the prediction made from time $n-1$ plus $K_n$ times the innovation $Y_n - H_n \hat{X}_{n|n-1}$.
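A minimal sketch of the correction step, assuming the gain $K_n$ is already available (its formula is derived in the next section); the numerical values are placeholders:

```python
import numpy as np

def correct_state(x_pred, y, H, K):
    """Correction step: x_{n|n} = x_{n|n-1} + K_n (Y_n - H_n x_{n|n-1})."""
    innovation = y - H @ x_pred      # Z_n, the new information carried by Y_n
    return x_pred + K @ innovation

# Hypothetical placeholder values
H = np.array([[1.0, 0.0]])
K = np.array([[0.6], [0.1]])
x_pred = np.array([2.5, 0.5])        # \hat X_{n|n-1}
y = np.array([2.9])                  # Y_n
x_est = correct_state(x_pred, y, H, K)   # \hat X_{n|n}
```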

Computing the correction gain $K_n$

The projection of $X_n$ onto the innovation $Z_n$ introduced the gain matrix $K_n$, i.e. the weight applied to the innovation. We now derive it.

By the orthogonality principle,
$$K_n = E[X_n Z_n^T] \, \big(E[Z_n Z_n^T]\big)^{-1}$$
Compute the first factor:
$$
\begin{aligned}
E[X_n Z_n^T] &= E[X_n (Y_n - H_n \hat{X}_{n|n-1})^T] \\
&= E[X_n (H_n X_n + W_n - H_n \hat{X}_{n|n-1})^T] \\
&= E[X_n (H_n (X_n - \hat{X}_{n|n-1}) + W_n)^T] \\
&= E[X_n (X_n - \hat{X}_{n|n-1})^T] H_n^T \qquad \text{since } W_n \text{ is zero-mean and uncorrelated with } X_n \\
&= E[(X_n - \hat{X}_{n|n-1})(X_n - \hat{X}_{n|n-1})^T] H_n^T \qquad \text{since } E[\hat{X}_{n|n-1}(X_n - \hat{X}_{n|n-1})^T] = 0
\end{aligned}
$$
Define $R_{n|n-1} = E[(X_n - \hat{X}_{n|n-1})(X_n - \hat{X}_{n|n-1})^T]$, the prediction error covariance; then $E[X_n Z_n^T] = R_{n|n-1} H_n^T$.

Compute the second factor:
$$
\begin{aligned}
E[Z_n Z_n^T] &= E[(Y_n - H_n \hat{X}_{n|n-1})(Y_n - H_n \hat{X}_{n|n-1})^T] \\
&= E[(H_n X_n + W_n - H_n \hat{X}_{n|n-1})(H_n X_n + W_n - H_n \hat{X}_{n|n-1})^T] \\
&= E[(H_n (X_n - \hat{X}_{n|n-1}) + W_n)(H_n (X_n - \hat{X}_{n|n-1}) + W_n)^T] \\
&= H_n E[(X_n - \hat{X}_{n|n-1})(X_n - \hat{X}_{n|n-1})^T] H_n^T + E[W_n W_n^T] \\
&= H_n R_{n|n-1} H_n^T + Q_n
\end{aligned}
$$
where the cross terms vanish because $W_n$ is uncorrelated with the prediction error. Therefore
$$K_n = R_{n|n-1} H_n^T \big(H_n R_{n|n-1} H_n^T + Q_n\big)^{-1}$$
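The gain formula maps directly onto a linear solve; a sketch with made-up covariance values (`np.linalg.solve` is used instead of an explicit inverse purely as a numerical choice):

```python
import numpy as np

def kalman_gain(R_pred, H, Q):
    """K_n = R_{n|n-1} H^T (H R_{n|n-1} H^T + Q_n)^{-1}."""
    S = H @ R_pred @ H.T + Q                           # innovation covariance E[Z_n Z_n^T]
    return np.linalg.solve(S.T, (R_pred @ H.T).T).T    # equivalent to R_pred @ H.T @ inv(S)

# Hypothetical example values
R_pred = np.array([[1.0, 0.2],
                   [0.2, 0.5]])                        # R_{n|n-1}
H = np.array([[1.0, 0.0]])
Q = np.array([[0.25]])
K = kalman_gain(R_pred, H, Q)                          # shape (2, 1)
```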

Computing the covariance matrix $R_{n|n-1}$

The gain computation above introduced the prediction error covariance $R_{n|n-1} = E[(X_n - \hat{X}_{n|n-1})(X_n - \hat{X}_{n|n-1})^T]$. We now give its recursion, i.e. $R_{n|n-1} \rightarrow R_{n|n} \rightarrow R_{n+1|n}$.

First the second step, $R_{n|n} \rightarrow R_{n+1|n}$:
$$
\begin{aligned}
R_{n+1|n} &= E[(X_{n+1} - \hat{X}_{n+1|n})(X_{n+1} - \hat{X}_{n+1|n})^T] \\
&= E[(F_{n+1} X_n + V_{n+1} - F_{n+1} \hat{X}_{n|n})(F_{n+1} X_n + V_{n+1} - F_{n+1} \hat{X}_{n|n})^T] \\
&= E[(F_{n+1}(X_n - \hat{X}_{n|n}) + V_{n+1})(F_{n+1}(X_n - \hat{X}_{n|n}) + V_{n+1})^T] \\
&= F_{n+1} E[(X_n - \hat{X}_{n|n})(X_n - \hat{X}_{n|n})^T] F_{n+1}^T + E[V_{n+1} V_{n+1}^T] \\
&= F_{n+1} R_{n|n} F_{n+1}^T + P_{n+1}
\end{aligned}
$$
where the cross terms vanish because $V_{n+1}$ is uncorrelated with the filtering error at time $n$.
Then the first step, $R_{n|n-1} \rightarrow R_{n|n}$:
$$
\begin{aligned}
R_{n|n} &= E[(X_n - \hat{X}_{n|n})(X_n - \hat{X}_{n|n})^T] \\
&= E[(X_n - \hat{X}_{n|n-1} - K_n(Y_n - H_n \hat{X}_{n|n-1}))(X_n - \hat{X}_{n|n-1} - K_n(Y_n - H_n \hat{X}_{n|n-1}))^T] \\
&= E[(X_n - \hat{X}_{n|n-1} - K_n(H_n X_n + W_n - H_n \hat{X}_{n|n-1}))(X_n - \hat{X}_{n|n-1} - K_n(H_n X_n + W_n - H_n \hat{X}_{n|n-1}))^T] \\
&= E[((I - K_n H_n)(X_n - \hat{X}_{n|n-1}) - K_n W_n)((I - K_n H_n)(X_n - \hat{X}_{n|n-1}) - K_n W_n)^T] \\
&= (I - K_n H_n) E[(X_n - \hat{X}_{n|n-1})(X_n - \hat{X}_{n|n-1})^T] (I - K_n H_n)^T + K_n E[W_n W_n^T] K_n^T \\
&= (I - K_n H_n) R_{n|n-1} (I - K_n H_n)^T + K_n Q_n K_n^T
\end{aligned}
$$
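Both covariance recursions in a short sketch, keeping the Joseph form for $R_{n|n}$ as in the derivation; the matrices are placeholder examples:

```python
import numpy as np

def covariance_update(R_pred, K, H, Q):
    """R_{n|n} = (I - K_n H_n) R_{n|n-1} (I - K_n H_n)^T + K_n Q_n K_n^T  (Joseph form)."""
    I = np.eye(R_pred.shape[0])
    A = I - K @ H
    return A @ R_pred @ A.T + K @ Q @ K.T

def covariance_predict(R_filt, F, P):
    """R_{n+1|n} = F_{n+1} R_{n|n} F_{n+1}^T + P_{n+1}."""
    return F @ R_filt @ F.T + P

# Hypothetical example values
R_pred = np.eye(2)                         # R_{n|n-1}
K = np.array([[0.6], [0.1]])
H = np.array([[1.0, 0.0]])
Q = np.array([[0.25]])
F = np.array([[1.0, 1.0], [0.0, 1.0]])
P = 0.01 * np.eye(2)

R_filt = covariance_update(R_pred, K, H, Q)    # R_{n|n}
R_next = covariance_predict(R_filt, F, P)      # R_{n+1|n}
```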

Summary

The derivation above can be summarized by the following five equations:
$$
\begin{cases}
\hat{X}_{n|n-1} = F_n \hat{X}_{n-1|n-1} \\
\hat{X}_{n|n} = \hat{X}_{n|n-1} + K_n (Y_n - H_n \hat{X}_{n|n-1}) \\
K_n = R_{n|n-1} H_n^T (H_n R_{n|n-1} H_n^T + Q_n)^{-1} \\
R_{n+1|n} = F_{n+1} R_{n|n} F_{n+1}^T + P_{n+1} \\
R_{n|n} = (I - K_n H_n) R_{n|n-1} (I - K_n H_n)^T + K_n Q_n K_n^T
\end{cases}
$$
Here $P_n$ is the covariance matrix of the state (process) noise and $Q_n$ is the covariance matrix of the observation noise.
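Putting the five equations together, a minimal end-to-end sketch of the filter loop; the model matrices, noise covariances, initial values, and the synthetic measurement sequence are illustrative assumptions, not taken from the text:

```python
import numpy as np

def kalman_filter(Y, F, H, P, Q, x0, R0):
    """Run the five summary equations over a sequence of observations Y (one row per time step)."""
    x_est, R_filt = x0, R0
    estimates = []
    for y in Y:
        # Prediction
        x_pred = F @ x_est                          # \hat X_{n|n-1} = F \hat X_{n-1|n-1}
        R_pred = F @ R_filt @ F.T + P               # R_{n|n-1} = F R_{n-1|n-1} F^T + P
        # Gain
        S = H @ R_pred @ H.T + Q                    # innovation covariance
        K = R_pred @ H.T @ np.linalg.inv(S)         # K_n
        # Correction
        x_est = x_pred + K @ (y - H @ x_pred)       # \hat X_{n|n}
        I = np.eye(len(x_est))
        R_filt = (I - K @ H) @ R_pred @ (I - K @ H).T + K @ Q @ K.T   # Joseph form
        estimates.append(x_est)
    return np.array(estimates)

# Hypothetical constant-velocity example with synthetic measurements
rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
P = 0.01 * np.eye(2)
Q = np.array([[0.25]])
Y = np.cumsum(rng.normal(1.0, 0.5, size=50)).reshape(-1, 1)   # noisy ramp as fake measurements
est = kalman_filter(Y, F, H, P, Q, x0=np.zeros(2), R0=np.eye(2))
```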
