Wiener Filtering
Before introducing the two algorithms, let us briefly review the Wiener filtering problem. Let $x(n)$ and $y(n)$ be zero-mean stationary discrete-time signals with known second-order moments. Under the minimum mean-squared error (MMSE) criterion, an optimal linear filter can be designed.
The mean-squared error $J$ is

$$J=E\left[ e^{2}(n) \right]=E\left\{ \left[ y(n)-\tilde{y}(n) \right]^{2} \right\}$$
Let the filter's impulse response be $h(n)$, written $h_n$ for brevity. The filter output is then

$$\tilde{y}(n)=h_n * x(n)=\sum_i h_i x(n-i)$$
and the error is

$$e(n)=y(n)-\tilde{y}(n)=y(n)-\sum_i h_i x(n-i)$$
Differentiating the mean-squared error with respect to each filter coefficient and setting the result to zero gives

$$\frac{\partial J}{\partial h_j}=2E\left[ e(n)\frac{\partial e(n)}{\partial h_j} \right]=-2E\left[ e(n)x(n-j) \right]=0$$
Therefore

$$E\left[ e(n)x(n-j) \right]=0$$
and, since $\tilde{y}(n)$ is a linear combination of the $x(n-i)$, it likewise follows that

$$E\left[ e(n)\tilde{y}(n) \right]=0$$
Next, define the cross-correlation and autocorrelation

$$r_c(j)=E\left[ y(n)x(n-j) \right]$$

$$r(j)=E\left[ x(n)x(n-j) \right]$$
Substituting into the orthogonality condition yields

$$r_c(j)=\sum_i h_i r(j-i)$$
This is the famous Wiener–Hopf equation. Linear optimal filtering (Wiener filtering) is equivalent to decomposing the reference signal into two orthogonal components, the error signal and the filter output: the error signal is orthogonal to the input signal, while the filter output is not.
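As a quick numerical check (a NumPy sketch; the 200000-sample length and the noise-free reference are illustrative choices, not from the original), solving the Wiener–Hopf equations from sample correlations makes the error empirically orthogonal to the input:

```python
import numpy as np

L, N = 200000, 2                                    # sample count, filter order
n = np.arange(L)
x = np.sqrt(2) * np.sin(2 * np.pi * n / 16)         # input x(n)
y = np.sin(2 * np.pi * n / 16 + np.pi / 10)         # reference y(n) (noise-free here)

# Sample estimates of r(j) = E[x(n)x(n-j)] and r_c(j) = E[y(n)x(n-j)]
r = np.array([np.mean(x[N:] * x[N - j:L - j]) for j in range(N)])
rc = np.array([np.mean(y[N:] * x[N - j:L - j]) for j in range(N)])

# Wiener–Hopf: sum_i h_i r(j - i) = r_c(j), a Toeplitz linear system
R = np.array([[r[abs(j - i)] for i in range(N)] for j in range(N)])
h = np.linalg.solve(R, rc)

# Orthogonality principle: E[e(n) x(n-j)] vanishes for j = 0, ..., N-1
e = y[N:] - (h[0] * x[N:] + h[1] * x[N - 1:L - 1])
for j in range(N):
    assert abs(np.mean(e * x[N - j:L - j])) < 1e-4
```

With these two taps the solution lands on the same coefficients the numerical example later derives analytically, roughly [1.200, −0.571].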
Now consider the solution for an $N$-tap FIR Wiener filter. The input vector can be written as

$$\mathbf{X}(n)=\left[ x(n),x(n-1),\cdots ,x(n-N+1) \right]^{T}$$
and the filter coefficient vector as

$$\mathbf{H}=\left[ h_0,h_1,\cdots ,h_{N-1} \right]^{T}$$
The filter output can then be expressed as

$$\tilde{y}(n)=\mathbf{H}^{T}\mathbf{X}(n)=\mathbf{X}^{T}(n)\mathbf{H}$$
From the orthogonality condition $E\left[ e(n)x(n-j) \right]=0$ we obtain

$$E\left[ e(n)\mathbf{X}(n) \right]=\mathbf{0}$$
$$E\left\{ \left[ y(n)-\mathbf{H}^{T}\mathbf{X}(n) \right]\mathbf{X}(n) \right\}=\mathbf{0}$$
that is,

$$E\left[ y(n)\mathbf{X}(n) \right]-E\left[ \mathbf{X}(n)\mathbf{X}^{T}(n) \right]\mathbf{H}=\mathbf{0}$$
The optimal solution can therefore be expressed as

$$\mathbf{H}_{\text{opt}}=\mathbf{R}_{xx}^{-1}\mathbf{r}_{yx}$$
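For intuition, here is a hypothetical system-identification sketch (the filter `h_true`, noise level, and lengths are invented for illustration): estimate $\mathbf{R}_{xx}$ and $\mathbf{r}_{yx}$ from data, and a single linear solve recovers the unknown filter.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 100000, 3
x = rng.standard_normal(L)                      # white input
h_true = np.array([0.5, -0.3, 0.2])             # unknown system (illustrative)
y = np.convolve(x, h_true)[:L] + 0.1 * rng.standard_normal(L)

# Sample correlations r(j) and r_c(j)
r = np.array([np.mean(x[N:] * x[N - j:L - j]) for j in range(N)])
rc = np.array([np.mean(y[N:] * x[N - j:L - j]) for j in range(N)])

Rxx = np.array([[r[abs(j - i)] for i in range(N)] for j in range(N)])
H_opt = np.linalg.solve(Rxx, rc)                # H_opt = Rxx^{-1} r_yx
print(H_opt)                                    # close to h_true
```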
where

$$\mathbf{R}_{xx}=\begin{bmatrix} r(0) & r(1) & \cdots & r(N-1) \\ r(1) & r(0) & \cdots & r(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ r(N-1) & r(N-2) & \cdots & r(0) \end{bmatrix}$$
$$\mathbf{r}_{yx}=\left[ r_c(0),r_c(1),\cdots ,r_c(N-1) \right]^{T}$$
$$r_c(p)=E\left[ y(n)x(n-p) \right]$$
The mean-squared error $J$ can be expanded as

$$\begin{aligned} J&=E\left[ e^{2}(n) \right]=E\left\{ \left[ y(n)-\tilde{y}(n) \right]^{2} \right\} \\ &=E\left[ y^{2}(n) \right]-2E\left[ y(n)\mathbf{H}^{T}\mathbf{X}(n) \right]+E\left[ \mathbf{H}^{T}\mathbf{X}(n)\mathbf{X}^{T}(n)\mathbf{H} \right] \\ &=E\left[ y^{2}(n) \right]-2\mathbf{r}_{yx}^{T}\mathbf{H}+\mathbf{H}^{T}\mathbf{R}_{xx}\mathbf{H} \end{aligned}$$
The minimum mean-squared error is

$$\begin{aligned} J_{\min}&=E\left[ y^{2}(n) \right]-2\mathbf{r}_{yx}^{T}\mathbf{H}_{\text{opt}}+\mathbf{H}_{\text{opt}}^{T}\mathbf{R}_{xx}\mathbf{H}_{\text{opt}} \\ &=E\left[ y^{2}(n) \right]-\mathbf{H}_{\text{opt}}^{T}\mathbf{R}_{xx}\mathbf{H}_{\text{opt}} \\ &=E\left[ y^{2}(n) \right]-\mathbf{H}_{\text{opt}}^{T}\mathbf{r}_{yx} \end{aligned}$$
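The second and third lines both use $\mathbf{R}_{xx}\mathbf{H}_{\text{opt}}=\mathbf{r}_{yx}$; a quick check with arbitrary synthetic second-order statistics (all values invented for illustration) confirms the three forms agree:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Rxx = A @ A.T + 4 * np.eye(4)        # any symmetric positive-definite R_xx
ryx = rng.standard_normal(4)
Ey2 = 10.0                           # stands in for E[y^2(n)]

H_opt = np.linalg.solve(Rxx, ryx)
J1 = Ey2 - 2 * ryx @ H_opt + H_opt @ Rxx @ H_opt
J2 = Ey2 - H_opt @ Rxx @ H_opt
J3 = Ey2 - H_opt @ ryx
assert np.isclose(J1, J2) and np.isclose(J2, J3)
```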
The error performance surface can be written as

$$\begin{aligned} J&=E\left[ y^{2}(n) \right]-2\mathbf{r}_{yx}^{T}\mathbf{H}+\mathbf{H}^{T}\mathbf{R}_{xx}\mathbf{H} \\ &=J_{\min}+\left( \mathbf{H}-\mathbf{H}_{\text{opt}} \right)^{T}\mathbf{R}_{xx}\left( \mathbf{H}-\mathbf{H}_{\text{opt}} \right) \end{aligned}$$
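This completed-square form says the surface is a paraboloid over coefficient space with its minimum at $\mathbf{H}_{\text{opt}}$; it can be verified for arbitrary coefficient vectors (again with synthetic statistics, invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
Rxx = A @ A.T + 3 * np.eye(3)        # synthetic positive-definite R_xx
ryx = rng.standard_normal(3)
Ey2 = 5.0                            # synthetic E[y^2(n)]

H_opt = np.linalg.solve(Rxx, ryx)
J_min = Ey2 - H_opt @ ryx

for _ in range(5):
    H = rng.standard_normal(3)
    J = Ey2 - 2 * ryx @ H + H @ Rxx @ H
    d = H - H_opt
    assert np.isclose(J, J_min + d @ Rxx @ d)    # completed square
    assert J >= J_min                            # minimum is at H_opt
```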
The Steepest Descent Algorithm
For the Wiener filtering problem, the goal is to find the optimal filter coefficients, which satisfy

$$J\left( \mathbf{H}_{\text{opt}} \right)\le J\left( \mathbf{H} \right)$$
We can therefore construct an iterative algorithm that updates the filter coefficients $\mathbf{H}(0),\mathbf{H}(1),\cdots ,\mathbf{H}(k+1)$ so that the mean-squared error decreases with each update of the coefficient vector, i.e.

$$J\left[ \mathbf{H}(k) \right]>J\left[ \mathbf{H}(k+1) \right]$$
This can be done by updating the filter coefficients along the direction of steepest descent of the mean-squared error, i.e. opposite to its gradient.
The gradient of the mean-squared error with respect to the coefficient vector is

$$\begin{aligned} \mathbf{V}_{G}(n)&=\left.\frac{\partial E\left[ e^{2}(n) \right]}{\partial \mathbf{H}}\right|_{\mathbf{H}=\mathbf{H}(n)}=\left.\frac{\partial J(\mathbf{H})}{\partial \mathbf{H}}\right|_{\mathbf{H}=\mathbf{H}(n)} \\ &=-2E\left[ e(n)\mathbf{X}(n) \right]=2\mathbf{R}_{xx}\mathbf{H}(n)-2\mathbf{r}_{yx} \end{aligned}$$
The coefficient update can then be written as

$$\mathbf{H}(n+1)=\mathbf{H}(n)-\frac{1}{2}\delta \mathbf{V}_{G}(n)=\mathbf{H}(n)+\delta \left[ \mathbf{r}_{yx}-\mathbf{R}_{xx}\mathbf{H}(n) \right]$$
where $\delta >0$ is the iteration step size.
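A minimal NumPy sketch of the recursion, using the $\mathbf{R}_{xx}$, $\mathbf{r}_{yx}$, step size, and initial value from this article's numerical example:

```python
import numpy as np

Rxx = np.array([[1.0, 0.9239], [0.9239, 1.0]])
ryx = np.array([0.6725, 0.5377])
delta = 0.4
H = np.array([3.0, -4.0])                 # H(0)

for _ in range(500):
    V_G = 2 * Rxx @ H - 2 * ryx           # gradient at H(n)
    H = H - 0.5 * delta * V_G             # equivalently H + delta*(ryx - Rxx @ H)

print(H)                                  # ≈ [1.2002, -0.5712], the Wiener solution
```

A standard result (not derived in this article) is that this recursion converges for $0<\delta <2/\lambda_{\max}$, where $\lambda_{\max}$ is the largest eigenvalue of $\mathbf{R}_{xx}$; here $\lambda_{\max}\approx 1.92$, so $\delta =0.4$ is safe.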
As the procedure above shows, the steepest descent method depends on the filter's initial value $\mathbf{H}(0)$, the gradient $\mathbf{V}_{G}$, and the step size $\delta$; when the step size is sufficiently small, it converges to the Wiener optimal solution.
The Least Mean Square (LMS) Algorithm
Although the Wiener filter is optimal, it requires prior knowledge of the signal statistics, so the method no longer applies when that prior knowledge is unavailable. LMS is an important gradient-based adaptive algorithm that replaces the statistical expectations with instantaneous values. The LMS procedure is:
Step 1: choose an initial value $\mathbf{H}(0)$.
Step 2: iterate

$$\left\{ \begin{aligned} & e(n+1)=y(n+1)-\mathbf{H}^{T}(n)\mathbf{X}(n+1) \\ & \mathbf{H}(n+1)=\mathbf{H}(n)+\delta e(n+1)\mathbf{X}(n+1) \end{aligned} \right.$$
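A NumPy sketch of this recursion on the signal model used in the numerical example below ($\sqrt{2}\sin$ input, noisy sinusoidal reference); the smaller step size here is my own choice to keep the steady-state fluctuation small:

```python
import numpy as np

rng = np.random.default_rng(4)
L = 2000
n = np.arange(L)
x = np.sqrt(2) * np.sin(2 * np.pi * n / 16)
y = np.sin(2 * np.pi * n / 16 + np.pi / 10) + np.sqrt(0.05) * rng.standard_normal(L)

delta = 0.05
H = np.array([3.0, -4.0])                 # H(0)

for i in range(1, L):
    X = np.array([x[i], x[i - 1]])        # X(n+1)
    e = y[i] - H @ X                      # instantaneous error replaces the expectation
    H = H + delta * e * X                 # H(n+1) = H(n) + delta * e(n+1) * X(n+1)

print(H)                                  # hovers around [1.200, -0.571]
```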
A Second-Order FIR LMS Adaptive Filter
The figure below shows a scheme that uses a second-order FIR LMS adaptive filter to cancel a sinusoidal interference.
Numerical Analysis
The steepest descent iteration is

$$\mathbf{H}(n+1)=\mathbf{H}(n)-\frac{1}{2}\delta \mathbf{V}_{G}(n),\qquad n=0,1,2,\cdots$$
where $\delta =0.4$ is the iteration step size and $\mathbf{H}(0)=[3,-4]^{T}$.
The gradient is computed as

$$\begin{aligned} \mathbf{V}_{G}(n)&=\left.\frac{\partial E\left[ e^{2}(n) \right]}{\partial \mathbf{H}}\right|_{\mathbf{H}=\mathbf{H}(n)}=\left.\frac{\partial J(\mathbf{H})}{\partial \mathbf{H}}\right|_{\mathbf{H}=\mathbf{H}(n)} \\ &=-2E\left[ e(n)\mathbf{X}(n) \right]=2\mathbf{R}_{xx}\mathbf{H}(n)-2\mathbf{r}_{yx} \end{aligned}$$
where

$$\mathbf{R}_{xx}(k)=\frac{1}{16}\sum_{i=0}^{15}\left( \sqrt{2}\sin \frac{2\pi i}{16} \right)\left( \sqrt{2}\sin \frac{2\pi (i-k)}{16} \right)=\cos \frac{2\pi k}{16}$$
$$\mathbf{r}_{yx}(k)=\frac{1}{16}\sum_{i=0}^{15}\left[ \sin \left( \frac{2\pi i}{16}+\frac{\pi }{10} \right) \right]\left( \sqrt{2}\sin \frac{2\pi (i-k)}{16} \right)=\frac{1}{\sqrt{2}}\cos \left( \frac{2\pi k}{16}+\frac{\pi }{10} \right)$$
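Both closed forms follow from the product-to-sum identity $2\sin A\sin B=\cos (A-B)-\cos (A+B)$ and the fact that the double-frequency cosine sums to zero over a full 16-sample period; a quick numerical check:

```python
import numpy as np

i = np.arange(16)
for k in range(4):
    # Autocorrelation sum vs. closed form cos(2*pi*k/16)
    Rxx_k = np.mean((np.sqrt(2) * np.sin(2 * np.pi * i / 16))
                    * (np.sqrt(2) * np.sin(2 * np.pi * (i - k) / 16)))
    assert np.isclose(Rxx_k, np.cos(2 * np.pi * k / 16))

    # Cross-correlation sum vs. closed form cos(2*pi*k/16 + pi/10)/sqrt(2)
    ryx_k = np.mean(np.sin(2 * np.pi * i / 16 + np.pi / 10)
                    * (np.sqrt(2) * np.sin(2 * np.pi * (i - k) / 16)))
    assert np.isclose(ryx_k, np.cos(2 * np.pi * k / 16 + np.pi / 10) / np.sqrt(2))
```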
We can then evaluate

$$\begin{aligned} \mathbf{V}_{G}(0)&=2\begin{bmatrix} \mathbf{R}_{xx}(0) & \mathbf{R}_{xx}(1) \\ \mathbf{R}_{xx}(1) & \mathbf{R}_{xx}(0) \end{bmatrix}\mathbf{H}(0)-2\begin{bmatrix} \mathbf{r}_{yx}(0) \\ \mathbf{r}_{yx}(1) \end{bmatrix} \\ &=2\begin{bmatrix} 1 & 0.9239 \\ 0.9239 & 1 \end{bmatrix}\begin{bmatrix} 3 \\ -4 \end{bmatrix}-2\begin{bmatrix} 0.6725 \\ 0.5377 \end{bmatrix} \\ &=\begin{bmatrix} -2.7362 \\ -3.5320 \end{bmatrix} \end{aligned}$$
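This evaluation is easy to reproduce (with the rounded correlation values, the first component works out to about −2.736):

```python
import numpy as np

Rxx = np.array([[1.0, 0.9239], [0.9239, 1.0]])
ryx = np.array([0.6725, 0.5377])
H0 = np.array([3.0, -4.0])

V_G0 = 2 * Rxx @ H0 - 2 * ryx   # gradient at the initial point H(0)
print(V_G0)                     # ≈ [-2.7362, -3.5320]
```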
At the same time, we can compute

$$E\left[ y^{2}(n) \right]=\mathbf{r}_{yy}(0)=\mathbf{r}_{ss}(0)+\mathbf{r}_{N_0 N_0}(0)=0.05+0.5=0.55$$
which gives the optimal filter coefficient formula

$$\mathbf{H}_{\text{opt}}=\mathbf{R}_{xx}^{-1}\mathbf{r}_{yx}=\left[ h_{0}^{*},h_{1}^{*} \right]^{T}$$
Substituting the values above yields

$$\begin{bmatrix} h_{0}^{*} \\ h_{1}^{*} \end{bmatrix}=\begin{bmatrix} \mathbf{R}_{xx}(0) & \mathbf{R}_{xx}(1) \\ \mathbf{R}_{xx}(1) & \mathbf{R}_{xx}(0) \end{bmatrix}^{-1}\begin{bmatrix} \mathbf{r}_{yx}(0) \\ \mathbf{r}_{yx}(1) \end{bmatrix}=\begin{bmatrix} 1.200 \\ -0.571 \end{bmatrix}$$
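Reproducing the solve numerically (a NumPy sketch with the rounded correlation values):

```python
import numpy as np

Rxx = np.array([[1.0, 0.9239], [0.9239, 1.0]])
ryx = np.array([0.6725, 0.5377])
H_opt = np.linalg.solve(Rxx, ryx)   # prefer solve() to an explicit inverse
print(H_opt)                        # ≈ [1.200, -0.571]
```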
The error function is

$$\begin{aligned} J(n)&=E\left[ y^{2}(n) \right]-2\mathbf{r}_{yx}^{T}\mathbf{H}+\mathbf{H}^{T}\mathbf{R}_{xx}\mathbf{H} \\ &=0.55+h_{0}^{2}+h_{1}^{2}+2h_{0}h_{1}\cos \frac{\pi }{8}-\sqrt{2}h_{0}\cos \frac{\pi }{10}-\sqrt{2}h_{1}\cos \frac{9\pi }{40} \end{aligned}$$
from which the minimum error follows:

$$J_{\min}=E\left[ y^{2}(n) \right]-\mathbf{H}_{\text{opt}}^{T}\mathbf{R}_{xx}\mathbf{H}_{\text{opt}}=\mathbf{r}_{yy}(0)-\mathbf{H}_{\text{opt}}^{T}\mathbf{r}_{yx}=0.05$$
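The minimum error can be checked numerically with the same rounded values:

```python
import numpy as np

Rxx = np.array([[1.0, 0.9239], [0.9239, 1.0]])
ryx = np.array([0.6725, 0.5377])
Ey2 = 0.55                          # r_yy(0) = 0.05 + 0.5

H_opt = np.linalg.solve(Rxx, ryx)
J_min = Ey2 - H_opt @ ryx
print(J_min)                        # ≈ 0.05
```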
Based on the analysis above, the error performance surface can be plotted as follows:
The figure shows that the error performance surface does have a minimum, consistent with the analysis above. For easier viewing, its two-dimensional projection, i.e. the contour plot, can also be given.
The minimum clearly lies at the center.
To better illustrate the difference between the two algorithms, their search trajectories are shown below.
The figures show that steepest descent approaches the final result closely, while the LMS algorithm fluctuates back and forth around it and also wanders over a fairly wide range during the search. This can be mitigated by averaging over repeated trials; the result of 100 trials is shown below.
After averaging over many trials, the LMS algorithm improves markedly, with its search trajectory gradually approaching the final solution, again consistent with the analysis above.
The code is as follows:
```matlab
clear;
close all;
clc;
N = 1000;                     % number of input samples
n = 0:1:N-1;
iter = 100;                   % number of trials
H_g1 = zeros(N,iter);         % steepest-descent h0 history
H_g2 = zeros(N,iter);         % steepest-descent h1 history
H0_a = zeros(N-1,iter);       % LMS h0 history
H1_a = zeros(N-1,iter);       % LMS h1 history
for j = 1:iter
    sn = sqrt(0.05)*randn(1,N);        % zero-mean Gaussian white noise, variance 0.05
    xn = sqrt(2)*sin(2*pi*n/16);       % input signal x(n)
    yn = sn + sin(2*pi*n/16 + pi/10);  % reference signal y(n)
    delta = 0.4;                       % iteration step size
    %% Steepest descent
    H_g = zeros(2,N);                  % filter coefficient history
    H_g(:,1) = [3 -4];                 % initial value H(0)
    Rxx = [1 0.9239; 0.9239 1];        % autocorrelation matrix
    Ryx = [0.6725; 0.5377];            % cross-correlation vector
    for i = 1:N
        V_G = 2.*Rxx*H_g(:,i) - 2.*Ryx;            % gradient at H(n)
        H_g(:,i+1) = H_g(:,i) - 1/2.*delta.*V_G;   % coefficient update
        H_g1(i,j) = H_g(1,i);
        H_g2(i,j) = H_g(2,i);
    end
    %% LMS algorithm
    H_L = zeros(2,N);                  % filter coefficient history
    H_L(:,1) = [3;-4];                 % initial value H(0)
    e = zeros(1,N);                    % error sequence
    for i = 1:N-1
        e(1,i) = yn(i+1) - H_L(:,i).'*[xn(i+1);xn(i)];         % instantaneous error
        H_L(:,i+1) = H_L(:,i) + delta.*e(1,i).*[xn(i+1);xn(i)];
        H0_a(i,j) = H_L(1,i);
        H1_a(i,j) = H_L(2,i);
    end
end
%% Averages over trials
for i = 1:N-1
    H0_g(i) = sum(H_g1(i,:))./iter;
    H1_g(i) = sum(H_g2(i,:))./iter;
    H0(i) = sum(H0_a(i,:))./iter;
    H1(i) = sum(H1_a(i,:))./iter;
end
[h0,h1] = meshgrid(-2:0.1:4, -4:0.1:2);   % coordinate grid
J_n = 0.55 + h0.^2 + h1.^2 + 2*h0.*h1.*cos(pi/8) ...
      - sqrt(2).*h0.*cos(pi/10) - sqrt(2).*h1.*cos(9*pi/40);  % error surface
figure(1);
v = 0:0.2:2;                               % contour levels
contour(h0,h1,J_n,v,'linewidth',1.5);
set(gca,'xtick',-2:1:3);                   % axis ticks
set(gca,'ytick',-4:1:2);
xlabel('h0','fontsize',20);
ylabel('h1','position',[-2.5 -1],'fontsize',20,'rotation',360);
hold on;
plot(H0_g,H1_g,'k','linewidth',3)
plot(H0,H1,'r','linewidth',3);
xlim([-2 4]);
h0_i = 1.200; h1_i = -0.571;
line([-2,4],[h1_i,h1_i],'linewidth',2);    % mark the optimum (h0*, h1*)
line([h0_i,h0_i],[-4,2],'linewidth',2);
plot(h0_i,h1_i,'b*')
l = legend('error contours','steepest descent','LMS');
set(l,'fontsize',10);
```