The Levinson Recursion

References:
Slides of EE4C03, TU Delft
Hayes, M. H., Statistical Digital Signal Processing and Modeling, Wiley, 1996

Development of the Recursion

All-pole modeling using Prony's method or the autocorrelation method requires that we solve the normal equations which, for a $p$th-order model, are
$$r_x(k)+\sum_{l=1}^p a_p(l)r_x(k-l)=0;\quad k=1,2,\cdots ,p\tag{D.1}$$
where the modeling error is
$$\epsilon_p=r_x(0)+\sum_{l=1}^p a_p(l)r_x(l)\tag{D.2}$$
Combining $(D.1)$ and $(D.2)$ into matrix form we have
$$\left[\begin{array}{ccccc} r_{x}(0) & r_{x}^{*}(1) & r_{x}^{*}(2) & \cdots & r_{x}^{*}(p) \\ r_{x}(1) & r_{x}(0) & r_{x}^{*}(1) & \cdots & r_{x}^{*}(p-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \cdots & r_{x}^{*}(p-2) \\ \vdots & \vdots & \vdots & & \vdots \\ r_{x}(p) & r_{x}(p-1) & r_{x}(p-2) & \cdots & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 1 \\ a_{p}(1) \\ a_{p}(2) \\ \vdots \\ a_{p}(p) \end{array}\right]=\epsilon_{p}\left[\begin{array}{c} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{array}\right] \tag{D.3}$$
which is a set of $p+1$ linear equations in the $p+1$ unknowns $a_p(1),a_p(2),\cdots,a_p(p)$ and $\epsilon_p$. Equivalently, $(D.3)$ may be written as
$$\mathbf R_p \mathbf a_p=\epsilon_p \mathbf u_1\tag{D.4}$$


The Levinson-Durbin recursion for solving $(D.4)$ is an algorithm that is recursive in the model order. In other words, the coefficients of the $(j+1)$st-order all-pole model, $\mathbf a_{j+1}$, are found from the coefficients of the $j$th-order model, $\mathbf a_j$.

Let $a_j(i)$ be the solution to the $j$th-order normal equations
$$\left[\begin{array}{ccccc} r_{x}(0) & r_{x}^{*}(1) & r_{x}^{*}(2) & \cdots & r_{x}^{*}(j) \\ r_{x}(1) & r_{x}(0) & r_{x}^{*}(1) & \cdots & r_{x}^{*}(j-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \cdots & r_{x}^{*}(j-2) \\ \vdots & \vdots & \vdots & & \vdots \\ r_{x}(j) & r_{x}(j-1) & r_{x}(j-2) & \cdots & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \end{array}\right]=\left[\begin{array}{c} \epsilon_{j} \\ 0 \\ 0 \\ \vdots \\ 0 \end{array}\right] \tag{D.5}$$
which, in matrix notation, is
$$\mathbf R_j \mathbf a_j=\epsilon_j \mathbf u_1\tag{D.6}$$
Given $\mathbf a_j$, we want to derive the solution to the $(j+1)$st-order normal equations,
$$\mathbf R_{j+1} \mathbf a_{j+1}=\epsilon_{j+1} \mathbf u_1\tag{D.7}$$
Suppose we append a zero to the vector $\mathbf a_j$ and multiply the resulting vector by $\mathbf R_{j+1}$. The result is
$$\left[\begin{array}{cccccc} r_{x}(0) & r_{x}^{*}(1) & r_{x}^{*}(2) & \cdots & r_{x}^{*}(j) & r_{x}^{*}(j+1) \\ r_{x}(1) & r_{x}(0) & r_{x}^{*}(1) & \cdots & r_{x}^{*}(j-1) & r_{x}^{*}(j) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \cdots & r_{x}^{*}(j-2) & r_{x}^{*}(j-1) \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ r_{x}(j) & r_{x}(j-1) & r_{x}(j-2) & \cdots & r_{x}(0) & r_{x}^{*}(1) \\ r_{x}(j+1) & r_{x}(j) & r_{x}(j-1) & \cdots & r_{x}(1) & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \\ 0 \end{array}\right]=\left[\begin{array}{c} \epsilon_{j} \\ 0 \\ 0 \\ \vdots \\ 0 \\ \gamma_{j} \end{array}\right] \tag{D.8}$$
where the parameter $\gamma_j$ is
$$\gamma_j=r_x(j+1)+\sum_{i=1}^j a_j(i)r_x(j+1-i)\tag{D.9}$$
Note that if $\gamma_j=0$, then the right side of $(D.8)$ is a scaled unit vector and $\mathbf a_{j+1}=\left[1,a_j(1),\ldots,a_j(j),0\right]^T$ is the solution to the $(j+1)$st-order normal equations $(D.7)$. In general, however, $\gamma_j\neq 0$ and $\left[1,a_j(1),\ldots,a_j(j),0\right]^T$ is not the solution to $(D.7)$.

The key step in the derivation of the Levinson-Durbin recursion is to note that the Hermitian Toeplitz property of $\mathbf R_{j+1}$ allows us to rewrite $(D.8)$ in the equivalent form
$$\left[\begin{array}{cccccc} r_{x}(0) & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(j) & r_{x}(j+1) \\ r_{x}^{*}(1) & r_{x}(0) & r_{x}(1) & \cdots & r_{x}(j-1) & r_{x}(j) \\ r_{x}^{*}(2) & r_{x}^{*}(1) & r_{x}(0) & \cdots & r_{x}(j-2) & r_{x}(j-1) \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ r_{x}^{*}(j) & r_{x}^{*}(j-1) & r_{x}^{*}(j-2) & \cdots & r_{x}(0) & r_{x}(1) \\ r_{x}^{*}(j+1) & r_{x}^{*}(j) & r_{x}^{*}(j-1) & \cdots & r_{x}^{*}(1) & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 0 \\ a_{j}(j) \\ a_{j}(j-1) \\ \vdots \\ a_{j}(1) \\ 1 \end{array}\right]=\left[\begin{array}{c} \gamma_{j} \\ 0 \\ 0 \\ \vdots \\ 0 \\ \epsilon_{j} \end{array}\right] \tag{D.10}$$
Taking the complex conjugate of $(D.10)$ gives
$$\left[\begin{array}{cccccc} r_{x}(0) & r_{x}^{*}(1) & r_{x}^{*}(2) & \cdots & r_{x}^{*}(j) & r_{x}^{*}(j+1) \\ r_{x}(1) & r_{x}(0) & r_{x}^{*}(1) & \cdots & r_{x}^{*}(j-1) & r_{x}^{*}(j) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \cdots & r_{x}^{*}(j-2) & r_{x}^{*}(j-1) \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ r_{x}(j) & r_{x}(j-1) & r_{x}(j-2) & \cdots & r_{x}(0) & r_{x}^{*}(1) \\ r_{x}(j+1) & r_{x}(j) & r_{x}(j-1) & \cdots & r_{x}(1) & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 0 \\ a_{j}^*(j) \\ a_{j}^*(j-1) \\ \vdots \\ a_{j}^*(1) \\ 1 \end{array}\right]=\left[\begin{array}{c} \gamma_{j}^* \\ 0 \\ 0 \\ \vdots \\ 0 \\ \epsilon_{j}^* \end{array}\right] \tag{D.11}$$
Although $\left[1,a_j(1),\ldots,a_j(j),0\right]^T$ is not the solution to $(D.7)$, an appropriate linear combination of $\left[1,a_j(1),\ldots,a_j(j),0\right]^T$ and $\left[0,a_j^*(j),\ldots,a_j^*(1),1\right]^T$ can be made the solution. To see this, consider
$$\mathbf{R}_{j+1}\left\{\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \\ 0 \end{array}\right]+\Gamma_{j+1}\left[\begin{array}{c} 0 \\ a_{j}^{*}(j) \\ a_{j}^{*}(j-1) \\ \vdots \\ a_{j}^{*}(1) \\ 1 \end{array}\right]\right\}=\left[\begin{array}{c} \epsilon_{j} \\ 0 \\ 0 \\ \vdots \\ 0 \\ \gamma_{j} \end{array}\right]+\Gamma_{j+1}\left[\begin{array}{c} \gamma_{j}^{*} \\ 0 \\ 0 \\ \vdots \\ 0 \\ \epsilon_{j}^{*} \end{array}\right]\tag{D.12}$$
If we set
$$\Gamma_{j+1}=-\frac{\gamma_j}{\epsilon_j^*} \tag{D.13}$$
then $(D.12)$ becomes
$$\mathbf R_{j+1}\mathbf a_{j+1}=\epsilon_{j+1}\mathbf u_1$$
where
$$\mathbf{a}_{j+1}=\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \\ 0 \end{array}\right]+\Gamma_{j+1}\left[\begin{array}{c} 0 \\ a_{j}^{*}(j) \\ a_{j}^{*}(j-1) \\ \vdots \\ a_{j}^{*}(1) \\ 1 \end{array}\right]\tag{D.14}$$
Furthermore,
$$\epsilon_{j+1}=\epsilon_j+\Gamma_{j+1}\gamma_j^*=\epsilon_j\left[1-|\Gamma_{j+1}|^2\right]\tag{D.15}$$
The whole process can also be presented as
$$\mathbf{R}_{j+1} \left[\begin{array}{cc} 1 & 0 \\ a_{j}(1) & a_{j}^*(j) \\ \vdots & \vdots \\ a_{j}(j) & a_{j}^*(1) \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} 1 & \Gamma_{j+1}^* \\ \Gamma_{j+1} & 1 \end{array}\right]=\mathbf{R}_{j+1} \left[\begin{array}{cc} 1 & a_{j+1}^*(j+1) \\ a_{j+1}(1) & a_{j+1}^*(j) \\ \vdots & \vdots \\ a_{j+1}(j) & a_{j+1}^*(1) \\ a_{j+1}(j+1) & 1 \end{array}\right] =\left[\begin{array}{cc} \epsilon_{j} & \gamma_{j}^* \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \\ \gamma_{j} & \epsilon_{j}^* \end{array}\right]\left[\begin{array}{cc} 1 & \Gamma_{j+1}^* \\ \Gamma_{j+1} & 1 \end{array}\right]=\left[\begin{array}{cc} \epsilon_{j+1} & 0\\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \\ 0 & \epsilon_{j+1}^* \end{array}\right]\tag{D.16}$$
All that is required to complete the recursion is to define the conditions necessary to initialize the recursion:
$$a_0(0)=1,\quad \epsilon_0=r_x(0) \tag{D.17}$$
In summary, the steps of the Levinson-Durbin recursion are as follows:
$$\left\{r_{x}(0), r_{x}(1), \ldots, r_{x}(p)\right\} \stackrel{\text{LEV}}{\longrightarrow}\left\{\begin{array}{c} \Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{p}, \epsilon_{p}\\ a_{p}(1), a_{p}(2), \ldots, a_{p}(p), b(0) \end{array}\right.$$

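To make the recursion concrete, here is a minimal NumPy sketch of $(D.9)$, $(D.13)$, $(D.14)$, and $(D.15)$ with the initialization $(D.17)$; the function name and interface are my own choice, not from the book or the slides.

```python
import numpy as np

def levinson_durbin(r, p):
    """Minimal sketch of the Levinson-Durbin recursion.

    r : autocorrelation sequence [r_x(0), ..., r_x(p)]
    p : model order
    Returns a = [1, a_p(1), ..., a_p(p)], the modeling error eps_p,
    and the reflection coefficients [Gamma_1, ..., Gamma_p].
    """
    r = np.asarray(r, dtype=complex)
    a = np.zeros(p + 1, dtype=complex)
    a[0] = 1.0
    eps = float(np.real(r[0]))                  # eps_0 = r_x(0), (D.17)
    gammas = np.zeros(p, dtype=complex)
    for j in range(p):
        # gamma_j = r_x(j+1) + sum_{i=1}^{j} a_j(i) r_x(j+1-i), (D.9)
        gamma = r[j + 1] + np.dot(a[1:j + 1], r[j:0:-1])
        G = -gamma / eps                        # (D.13); eps_j is real
        # (D.14): a_{j+1} = [a_j; 0] + Gamma_{j+1} [0; a_j^R]
        a[:j + 2] = a[:j + 2] + G * np.conj(a[j + 1::-1])
        eps = eps * (1.0 - abs(G) ** 2)         # (D.15)
        gammas[j] = G
    return a, eps, gammas
```

For instance, with the (made-up) correlation sequence `r = [2.0, 1.0, 0.5, 0.25]` of an AR(1)-type process, this sketch should return $\Gamma_1=-0.5$ and $\Gamma_2=\Gamma_3=0$.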

Alternatively, using the notation in $(D.16)$:

  1. Get $\epsilon_j,\gamma_j$ from
    $$\mathbf{R}_{j+1} \left[\begin{array}{cc} 1 & 0 \\ a_{j}(1) & a_{j}^*(j) \\ \vdots & \vdots \\ a_{j}(j) & a_{j}^*(1) \\ 0 & 1 \end{array}\right] =\left[\begin{array}{cc} \epsilon_{j} & \gamma_{j}^* \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \\ \gamma_{j} & \epsilon_{j}^* \end{array}\right]$$

  2. Obtain $\Gamma_{j+1}=-\gamma_j/\epsilon_j^*=-\gamma_j/\epsilon_j$ (the two coincide since $\epsilon_j$ is real).

  3. Obtain $\mathbf a_{j+1}$ from
    $$\left[\begin{array}{cc} 1 & 0 \\ a_{j}(1) & a_{j}^*(j) \\ \vdots & \vdots \\ a_{j}(j) & a_{j}^*(1) \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} 1 & \Gamma_{j+1}^* \\ \Gamma_{j+1} & 1 \end{array}\right]= \left[\begin{array}{cc} 1 & a_{j+1}^*(j+1) \\ a_{j+1}(1) & a_{j+1}^*(j) \\ \vdots & \vdots \\ a_{j+1}(j) & a_{j+1}^*(1) \\ a_{j+1}(j+1) & 1 \end{array}\right]$$

The Lattice Filter

Define the reciprocal vector $\mathbf a_j^R$, which is the vector formed by reversing the order of the elements in $\mathbf a_j$ and taking the complex conjugate:
$$\mathbf{a}_{j}=\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j-1) \\ a_{j}(j) \end{array}\right] \Longrightarrow\left[\begin{array}{c} a_{j}^{*}(j) \\ a_{j}^{*}(j-1) \\ a_{j}^{*}(j-2) \\ \vdots \\ a_{j}^{*}(1) \\ 1 \end{array}\right]=\mathbf{a}_{j}^{R} \tag{LF.1}$$
or
$$a_j^R(i)=a_j^*(j-i),\text{ for } i=0,1,\cdots,j \tag{LF.2}$$

FIR lattice filter

From $(D.16)$, we have
$$\left[\begin{array}{cc} 1 & 0 \\ a_{j}(1) & a_{j}^*(j) \\ \vdots & \vdots \\ a_j(i) & a_j^*(j-i+1) \\ \vdots & \vdots \\ a_{j}(j) & a_{j}^*(1) \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} 1 & \Gamma_{j+1}^* \\ \Gamma_{j+1} & 1 \end{array}\right]= \left[\begin{array}{cc} 1 & a_{j+1}^*(j+1) \\ a_{j+1}(1) & a_{j+1}^*(j) \\ \vdots & \vdots \\ a_{j+1}(i) & a_{j+1}^*(j+1-i) \\ \vdots & \vdots \\ a_{j+1}(j) & a_{j+1}^*(1) \\ a_{j+1}(j+1) & 1 \end{array}\right]$$

i.e.,
$$\begin{aligned} & a_{j+1}(n)=a_j(n)+\Gamma_{j+1}a_j^R(n-1)\\ & a_{j+1}^R(n)=a_j^R(n-1)+\Gamma^*_{j+1}a_j(n) \end{aligned} \tag{LF.3}$$


We can define filter functions corresponding to the filter coefficients
$$\left\{\begin{array}{l} A_{j}(z)=1+a_{j}(1) z^{-1}+\cdots+a_{j}(j) z^{-j} \\ A_{j}^{R}(z)=a_{j}^*(j)+a_{j}^*(j-1) z^{-1}+\cdots+z^{-j} \end{array}\right. \tag{LF.4}$$
which are the $z$-transforms of the sequences $\mathbf a_j=[1,a_j(1),\cdots,a_j(j)]^T$ and $\mathbf a_j^R=[a_j^*(j),a_j^*(j-1),\cdots,1]^T$.

According to $(LF.2)$, it can be verified that
$$A_j^R(z)=z^{-j}A_j^*(1/z^*) \tag{LF.5}$$
From $(LF.3)$, we obtain
$$\begin{aligned} & A_{j+1}(z)=A_j(z)+z^{-1}\Gamma_{j+1}A_j^R(z)\\ & A_{j+1}^R(z)=z^{-1}A_j^R(z)+\Gamma_{j+1}^*A_j(z) \end{aligned} \tag{LF.6}$$
(Note that the notation in the slides differs from the book: $\rho$ equals $-\Gamma$, and the slides consider only real-valued signals.)


Thus, the resulting system has system function $A_p(z)$ (and also $A_p^R(z)$, which is not used here).

If we apply this system to the input $x[n]$, we obtain the prediction error sequence $v_p[n]$. (Note that in this process we only need to know the $\rho_i$; the coefficients $a_p(n)$ are not needed.)


This filter structure is known as an FIR lattice filter: the filter response is $A_p(z)$.
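As a sketch of this structure (in the book's $\Gamma$ notation rather than the slides' $\rho=-\Gamma$), the following function pushes a signal through the lattice stages of $(LF.3)$/$(LF.6)$; the names and interface are illustrative only.

```python
import numpy as np

def fir_lattice(x, gammas):
    """FIR lattice: propagate forward/backward prediction errors, per (LF.6).

    x      : input signal x[n]
    gammas : reflection coefficients [Gamma_1, ..., Gamma_p]
    Returns the order-p forward error, i.e. the response of A_p(z) to x.
    """
    f = np.asarray(x, dtype=complex)              # forward error,  order 0
    b = f.copy()                                  # backward error, order 0
    for G in gammas:
        b_del = np.concatenate(([0.0], b[:-1]))   # z^{-1} applied to b
        f, b = f + G * b_del, b_del + np.conj(G) * f
    return f
```

Only the reflection coefficients enter; the direct-form coefficients $a_p(n)$ are never formed, matching the remark above.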


IIR lattice filter

Rearrange $(LF.6)$ as
$$\begin{aligned} & A_{j}(z)=A_{j+1}(z)-z^{-1}\Gamma_{j+1}A_j^R(z)\\ & A_{j+1}^R(z)=z^{-1}A_j^R(z)+\Gamma_{j+1}^*A_j(z) \end{aligned} \tag{LF.7}$$


This allows us to compute $x[n]$ from $v_p[n]$: the filter response is $\frac{1}{A_p(z)}$. This is an IIR filter, and the filter coefficients of $\frac{1}{A_p(z)}$ are never explicitly computed.
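A sample-by-sample sketch of the corresponding all-pole lattice, under the same notational assumptions; here `b[j]` holds the delayed backward error of stage $j$.

```python
import numpy as np

def iir_lattice(v, gammas):
    """All-pole (IIR) lattice 1/A_p(z), built from the rearrangement (LF.7).

    v      : input sequence (e.g. the prediction error v_p[n])
    gammas : [Gamma_1, ..., Gamma_p]
    Returns x[n] such that X(z) = V(z) / A_p(z).
    """
    p = len(gammas)
    b = np.zeros(p + 1, dtype=complex)   # b[j] stores b_j[n-1]
    x = np.zeros(len(v), dtype=complex)
    for n in range(len(v)):
        f = v[n]                                      # f_p[n]
        for j in range(p - 1, -1, -1):
            f = f - gammas[j] * b[j]                  # f_j = f_{j+1} - Gamma_{j+1} b_j[n-1]
            b[j + 1] = b[j] + np.conj(gammas[j]) * f  # b_{j+1}[n]
        x[n] = f                                      # x[n] = f_0[n]
        b[0] = f                                      # b_0[n] = x[n]
    return x
```

Feeding the output of the FIR lattice sketch back through this filter should reproduce the original $x[n]$, since the two structures implement $A_p(z)$ and $1/A_p(z)$.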

From the correlation sequence $\{r_x(0),\cdots,r_x(p)\}$ of $x[n]$, we can obtain $\rho_1,\cdots,\rho_p$ and the variance $\epsilon_p$ of $v_p[n]$. If we replace $v_p[n]$ by any white noise sequence with variance $\epsilon_p$, then the resulting output signal is a random process with the same correlation sequence $\{r_x(0),\cdots,r_x(p)\}$ as $x[n]$.

Properties

(More details in Chapter 5.2.3 of the book.)

To show that the Levinson recursion works and does not break down, we need to prove that we can always compute a suitable $\rho$. This will follow from the main assumption:
$$\mathbf{R}_{x}\succ0 \text{ for all } p.$$

  • $\epsilon_{p}>0$.

    This follows from the YW equations:
    $$\left[\begin{array}{c} 1 \\ a_{p}(1) \\ a_{p}(2) \\ \vdots \\ a_{p}(p) \end{array}\right]=\left[\begin{array}{ccccc} r_{x}(0) & r_{x}^*(1) & r_{x}^*(2) & \cdots & r_{x}^*(p) \\ r_{x}(1) & r_{x}(0) & r_{x}^*(1) & \cdots & r_{x}^*(p-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & r_{x}^*(1) \\ r_{x}(p) & r_{x}(p-1) & \cdots & r_{x}(1) & r_{x}(0) \end{array}\right]^{-1}\left[\begin{array}{c} \epsilon_{p} \\ 0 \\ 0 \\ \vdots \\ 0 \end{array}\right]$$
    In particular, $1=\left[\mathbf{R}_{x}^{-1}\right]_{0,0}\epsilon_{p}$. Since $\mathbf{R}_{x}$ is strictly positive definite (hence also invertible), the inverse $\mathbf{R}_{x}^{-1}$ exists and is also strictly positive definite. This implies that $\left[\mathbf{R}_{x}^{-1}\right]_{0,0}>0$. Hence $\epsilon_p>0$ for any $p$. In particular, $\epsilon_p\neq 0$, so that we can compute $\rho_{p+1}$.

  • $\left|\rho_{p+1}\right|<1$.
    The update equation is
    $$\epsilon_{p+1}=\epsilon_{p}-\rho_{p+1} \gamma_{p}=\epsilon_{p}-\frac{\gamma_{p}^{2}}{\epsilon_{p}}=\frac{\epsilon_{p}^{2}-\gamma_{p}^{2}}{\epsilon_{p}}>0 \Rightarrow\left|\epsilon_{p}\right|>\left|\gamma_{p}\right|$$
    so that $\left|\rho_{p+1}\right|<1$. From this we also see that $\epsilon_{p+1}\leq\epsilon_{p}$: the modeling/prediction error never increases.

    Special case: if we have a true AR process of order $p$, then we will find $\epsilon_{p+1}=\epsilon_p$ and $\gamma_p=0$. At this point the recursion stops (we can take $\rho_{p+1}=0$, so that the AR coefficients no longer change).

    Special case: if $\left|\rho_{p+1}\right|=1$, then $\epsilon_{p+1}=0$. The prediction error is zero: $x[n]$ can be exactly predicted from its past, and the process is called deterministic. This can occur only if $\mathbf{R}_{x}$ is singular (thus not strictly positive definite).

  • The roots of $A_p(z)$ lie inside the unit circle, i.e., the IIR filter $1/A_p(z)$ is stable.

The Schur Algorithm

  • The Levinson algorithm involves two times $2p$ multiplications and $p$ additions. With $p$ computational processors, this work can be done in parallel. However, the computation of $\gamma_p$ ($p$ additions) is not easily parallelized.
  • The Schur algorithm is an alternative to Levinson for solving the same equations. It is based on the idea that we do not need the filter coefficients $\left\{a_p(1),\cdots,a_p(p)\right\}$ for all $p$; the reflection coefficients $\left\{\rho_1,\cdots,\rho_p\right\}$ completely specify the filter.

The development of the Schur recursion begins with the autocorrelation normal equations for a $j$th-order model (see $(D.1)$):
$$r_x(k)+\sum_{l=1}^j a_j(l)r_x(k-l)=0;\quad k=1,2,\cdots,j$$
If we set $a_j(0)=1$, then the $j$th-order normal equations may be written as
$$\sum_{l=0}^j a_j(l)r_x(k-l)=0;\quad k=1,2,\cdots,j$$
which we will refer to as the orthogonality condition.


By introducing new variables $g_j(k)$ and $g_j^R(k)$, we can avoid the computation of the filter coefficients $a_j(k)$.

Define
$$g_j(k)=\sum_{l=0}^j a_j(l)r_x(k-l)=a_j(k)*r_x(k) \tag{S.1}$$
i.e., $g_j(k)$ is the response of the filter $A_j(z)$ to the input $r_x(k)$. Then the orthogonality condition can be written as
$$g_j(k)=0;\quad k=1,2,\cdots,j\tag{S.2}$$
In addition, from $(D.2)$, we obtain that $g_j(0)$ is equal to the $j$th-order modeling error:
$$g_j(0)=\sum_{l=0}^j a_j(l)r_x(l)=\epsilon_j \tag{S.3}$$
(Note that the length of $\mathbf g_j$ is not $j+1$ but $p+1$.)

Similarly, we can define
$$g_{j}^{R}(k)=\sum_{l=0}^{j} a_{j}^{R}(l) r_{x}(k-l)=a_{j}^{R}(k) * r_{x}(k)\tag{S.4}$$
Thus, $g_j^R(k)$ is the response of the filter $A_j^R(z)$ to the input $r_x(k)$. Since $a_j^R(k)=a_j^*(j-k)$,
$$g_{j}^{R}(k)=\sum_{l=0}^{j} a_{j}^{*}(j-l) r_{x}(k-l)=\sum_{l=0}^{j} a_{j}^{*}(l) r_{x}(k-j+l)\tag{S.5}$$
Using the conjugate symmetry of the autocorrelation sequence, $(S.5)$ becomes
$$g_{j}^{R}(k)=\sum_{l=0}^{j} a_{j}^{*}(l) r_{x}^{*}([j-k]-l)=g_{j}^{*}(j-k)\tag{S.6}$$
Therefore, it follows from $(S.2)$ and $(S.3)$ that
$$g_j^R(k)=0;\quad k=0,1,\cdots,j-1 \tag{S.7}$$
and
$$g_j^R(j)=\epsilon_j \tag{S.8}$$
To see this in matrix form,
$$\left[\begin{matrix}\mathbf g_j^T\\(\mathbf g_j^R)^T\end{matrix}\right]=\left[\begin{matrix}\epsilon_j & \cdots & 0 & 0& g_j(j+1)& \cdots& g_j(p)\\0 & \cdots & 0 & \epsilon_j& g_j^R(j+1)& \cdots& g^R_j(p)\end{matrix}\right]\tag{S.9}$$


The next step is to use the Levinson-Durbin recursion to show how the sequences $g_j(k)$ and $g_j^R(k)$ may be updated to form $g_{j+1}(k)$ and $g_{j+1}^R(k)$.

Using the Levinson order-update equation $(LF.3)$, we have
$$\begin{aligned} &g_{j+1}(k)=a_{j+1}(k)*r_x(k)=[a_j(k)+\Gamma_{j+1}a_j^R(k-1)]*r_x(k)=g_j(k)+\Gamma_{j+1}g_j^R(k-1)\\ &g^R_{j+1}(k)=a^R_{j+1}(k)*r_x(k)=[a_j^R(k-1)+\Gamma_{j+1}^*a_j(k)]*r_x(k)=g_j^R(k-1)+\Gamma_{j+1}^*g_j(k) \end{aligned} \tag{S.10}$$
Note that the recursive equations $(S.10)$ for $g_j(k),g^R_j(k)$ and the recursive equations $(LF.3)$ for $a_j(k),a^R_j(k)$ are identical. The only difference between the two recursions is in the initial condition. Specifically, for $a_j(k)$ we have $a_0(k)=a_0^R(k)=\delta(k)$, whereas for $g_j(k)$ we have $g_0(k)=g_0^R(k)=r_x(k)$.

Taking the $z$-transform of $(S.10)$ and putting it in matrix form gives
$$\left[\begin{matrix}G_{j+1}(z)\\ G_{j+1}^R(z)\end{matrix}\right]=\left[\begin{matrix}1 & \Gamma_{j+1}z^{-1}\\ \Gamma^*_{j+1} & z^{-1}\end{matrix}\right]\left[\begin{matrix}G_{j}(z)\\ G_{j}^R(z)\end{matrix}\right]\tag{S.11}$$



Recall that our goal is to derive a recursion that will take a sequence of autocorrelation values and generate the corresponding sequence of reflection coefficients:
$$\left\{r_{x}(0), r_{x}(1), \ldots, r_{x}(p)\right\} \stackrel{\text{Schur}}{\longrightarrow}\left\{\Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{p}, \epsilon_{p}\right\}$$
Therefore, we need a method to find the reflection coefficient $\Gamma_{j+1}$ from $g_j(k)$ and $g_j^R(k)$. Since $g_{j+1}(j+1)=0$, evaluating $(S.10)$ for $k=j+1$ we have
$$\Gamma_{j+1}=-\frac{g_j(j+1)}{g_j^R(j)}.$$



Additional insight into the operation of the Schur recursion may be gained if it is formulated using vector notation.

From $(S.9)$:
$$\left[\begin{matrix}\mathbf g_j^T\\(\mathbf g_j^R)^T\end{matrix}\right]=\left[\begin{matrix}\epsilon_j & \cdots & 0 & 0& g_j(j+1)& \cdots& g_j(p)\\0 & \cdots & 0 & g_j^R(j)& g_j^R(j+1)& \cdots& g^R_j(p)\end{matrix}\right]$$

  1. Shift the second row of the matrix to the right by one, with $g_j^R(-1)$ entering as the first element of the second row (this corresponds to a delay of $g_j^R(k)$ by one, or a multiplication of $G_j^R(z)$ by $z^{-1}$):
    $$\left[\begin{matrix}\epsilon_j & \cdots & 0 & 0& g_j(j+1)& \cdots& g_j(p)\\g_j^R(-1) & \cdots & 0 &0 & g_j^R(j)& \cdots& g^R_j(p-1)\end{matrix}\right]$$
    (Note that $g_j^R(-1)=g_j^*(j+1)=\gamma_j^*$ and $g_j^R(j)=\epsilon_j$.) After this shift, the ratio of the two terms in column $j+2$ is equal to $-\Gamma_{j+1}$.

  2. Applying the relationship in $(S.10)$, multiply the shifted matrix by $\boldsymbol{\Theta}_{j+1}=\left[\begin{matrix}1 & \Gamma_{j+1}\\ \Gamma_{j+1}^* & 1\end{matrix}\right]$:
    $$\begin{aligned} \left[\begin{matrix}\mathbf g_{j+1}^T\\(\mathbf g_{j+1}^R)^T\end{matrix}\right]&=\left[\begin{matrix}1 & \Gamma_{j+1}\\ \Gamma_{j+1}^* & 1\end{matrix}\right]\left[\begin{matrix}\epsilon_j & \cdots & 0& g_j(j+1)& \cdots& g_j(p)\\g_j^R(-1) & \cdots &0 & g_j^R(j)& \cdots& g^R_j(p-1)\end{matrix}\right]\\ &=\left[\begin{matrix}\epsilon_{j+1} & \cdots & 0 & 0& g_{j+1}(j+2)&\cdots& g_{j+1}(p)\\0 & \cdots & 0 & g_{j+1}^R(j+1)& g^R_{j+1}(j+2)&\cdots& g^R_{j+1}(p)\end{matrix}\right] \end{aligned}$$

This completes one step of the recursion.

Note that the first column of the matrix in $(S.9)$ never enters into the calculations, so we may suppress the evaluation of these entries by initializing the first element in the first column to zero and by bringing a zero into the second row each time it is shifted to the right. Since $g_j^R(j)=\epsilon_j$ and $g_j^R(-1)=g_j^*(j+1)$, no information is discarded with this simplification.

In summary, the steps described above lead to the following matrix formulation of the Schur recursion. Beginning with
$$\mathbf{G}_{0}=\left[\begin{array}{ccccc} 0 & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(p) \\ r_{x}(0) & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(p) \end{array}\right]$$
which is referred to as the generator matrix, a new matrix, $\widetilde{\mathbf{G}}_{0}$, is formed by shifting the second row of $\mathbf{G}_{0}$ to the right by one:
$$\widetilde{\mathbf{G}}_{0}=\left[\begin{array}{ccccc} 0 & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(p) \\ 0 & r_{x}(0) & r_{x}(1) & \cdots & r_{x}(p-1) \end{array}\right]$$
Setting $\Gamma_1$ equal to the negative of the ratio of the two terms in the second column of $\widetilde{\mathbf{G}}_{0}$, we then form the matrix
$$\boldsymbol{\Theta}_{1}=\left[\begin{array}{cc} 1 & \Gamma_{1} \\ \Gamma_{1}^{*} & 1 \end{array}\right]$$
and evaluate $\mathbf{G}_{1}$ as follows:
$$\mathbf{G}_{1}=\boldsymbol{\Theta}_{1} \widetilde{\mathbf{G}}_{0}=\left[\begin{array}{ccccc} 0 & 0 & g_{1}(2) & \cdots & g_{1}(p) \\ 0 & g_{1}^{R}(1) & g_{1}^{R}(2) & \cdots & g_{1}^{R}(p) \end{array}\right]$$
The recursion then repeats these three steps where, in general, at the $j$th step we

  1. Shift the second row of $\mathbf{G}_{j}$ to the right by one to form $\widetilde{\mathbf{G}}_{j}$,
  2. Set $\Gamma_{j+1}$ equal to the negative of the ratio of the two terms in column $j+2$ of $\widetilde{\mathbf{G}}_{j}$,
  3. Multiply $\widetilde{\mathbf{G}}_{j}$ by $\boldsymbol{\Theta}_{j+1}$ to form $\mathbf{G}_{j+1}$.


($\mathbf H$ in the slides corresponds to $\widetilde{\mathbf G}$ in the book.)
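The generator-matrix recursion translates almost line by line into code. Below is a minimal sketch (interface my own) that keeps the two rows as vectors, shifts, and applies $\boldsymbol{\Theta}_{j+1}$:

```python
import numpy as np

def schur(r, p):
    """Schur recursion: autocorrelation -> reflection coefficients.

    r : [r_x(0), ..., r_x(p)]
    Returns ([Gamma_1, ..., Gamma_p], eps_p), never forming the a_p(k).
    """
    r = np.asarray(r, dtype=complex)
    g = r.copy();  g[0] = 0.0            # first row of G_0 (first entry suppressed)
    gR = r.copy()                        # second row of G_0
    gammas = np.zeros(p, dtype=complex)
    for j in range(p):
        gR = np.concatenate(([0.0], gR[:-1]))     # shift second row right by one
        G = -g[j + 1] / gR[j + 1]                 # Gamma_{j+1} = -g_j(j+1)/g_j^R(j)
        g, gR = g + G * gR, gR + np.conj(G) * g   # multiply by Theta_{j+1}
        gammas[j] = G
    return gammas, float(np.real(gR[p]))          # eps_p = g_p^R(p)
```

On the same example correlation sequence as before, this should produce the same reflection coefficients as the Levinson-Durbin sketch, but using only row operations that parallelize easily.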

Cholesky Factorization of a Toeplitz Matrix

Definition: For a positive definite matrix $C$, the Cholesky factorization is a factorization
$$C=LDL^T,\quad L:\text{lower triangular, }D:\text{diagonal}$$
The Levinson recursion or Schur algorithm provides a factorization of the Toeplitz matrix $\mathbf R_x$ as follows:

Define the matrix $\mathbf A_p$ in terms of the solutions of the Yule-Walker equations of orders $0$ through $p$:
$$\mathbf{A}_{p}=\left[\begin{array}{ccccc} 1 & a_{1}(1) & a_{2}(2) & \cdots & a_{p}(p) \\ & 1 & a_{2}(1) & & \vdots \\ & & 1 & & a_{p}(2) \\ & & & \ddots & a_{p}(1) \\ & & & & 1 \end{array}\right]$$
Recall that for any order $p$ the Yule-Walker equation on the reversed sequence is
$$\left[\begin{array}{ccccc} r_{x}(0) & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(p) \\ r_{x}(1) & r_{x}(0) & r_{x}(1) & \cdots & r_{x}(p-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & r_{x}(1) \\ r_{x}(p) & r_{x}(p-1) & \cdots & r_{x}(1) & r_{x}(0) \end{array}\right]\left[\begin{array}{c} a_{p}(p) \\ \vdots \\ a_{p}(2) \\ a_{p}(1) \\ 1 \end{array}\right]=\left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \\ \epsilon_{p} \end{array}\right]$$
It follows that
$$\left[\begin{array}{ccccc} r_{x}(0) & r_{x}(1) & r_{x}(2) & \cdots & r_{x}(p) \\ r_{x}(1) & r_{x}(0) & r_{x}(1) & \cdots & r_{x}(p-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & r_{x}(1) \\ r_{x}(p) & r_{x}(p-1) & \cdots & r_{x}(1) & r_{x}(0) \end{array}\right]\left[\begin{array}{ccccc} 1 & a_{1}(1) & a_{2}(2) & \cdots & a_{p}(p) \\ & 1 & a_{2}(1) & & \vdots \\ & & 1 & & a_{p}(2) \\ & & & \ddots & a_{p}(1) \\ & & & & 1 \end{array}\right]=\left[\begin{array}{ccccc} \epsilon_0 & & & & \\ * & \epsilon_1 & & & \\ * & * & \epsilon_2 & & \\ \vdots & \vdots & \vdots & \ddots & \\ * & * & * & \cdots & \epsilon_p \end{array}\right]$$
$$\Leftrightarrow \quad \mathbf R_{x} \mathbf A_{p}=\mathbf E$$
Consider now $\mathbf A_p^T\mathbf R_x\mathbf A_p$. Because $\mathbf A_p^T$ and $\mathbf R_x\mathbf A_p$ are lower triangular matrices, the product must be lower triangular. But it is also a symmetric matrix; hence it is a diagonal matrix. The entries on the main diagonal are seen to be $\left\{\epsilon_0,\cdots,\epsilon_p\right\}$.
We thus found
$$\mathbf A_{p}^{T} \mathbf R_{x} \mathbf A_{p}=\mathbf D_{p} \ \text{(diagonal)} \quad \Rightarrow \quad \mathbf R_{x}=\mathbf A_{p}^{-T} \mathbf D_{p} \mathbf A_{p}^{-1}, \quad \mathbf R_{x}^{-1}=\mathbf A_{p} \mathbf D_{p}^{-1} \mathbf A_{p}^{T}$$
These are Cholesky factorizations of $\mathbf R_x$ and $\mathbf R_x^{-1}$.
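A quick numerical check of this factorization, assuming the `levinson_durbin` sketch from earlier in this note and an illustrative real-valued correlation sequence:

```python
import numpy as np

# assumes levinson_durbin(r, p) from the sketch earlier in this note
r = np.array([2.0, 1.0, 0.5, 0.25])          # example positive definite correlation
p = len(r) - 1
idx = np.arange(p + 1)
R = r[np.abs(np.subtract.outer(idx, idx))]   # symmetric Toeplitz R_x

A = np.eye(p + 1)
eps = [r[0]]
for j in range(1, p + 1):
    a, e, _ = levinson_durbin(r, j)
    A[:j + 1, j] = np.real(a[::-1])          # column j: [a_j(j), ..., a_j(1), 1]^T
    eps.append(e)

D = A.T @ R @ A                              # should equal diag(eps_0, ..., eps_p)
print(np.allclose(D, np.diag(eps)))          # expect: True
```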

The Step-Up and Step-Down Recursions

Step-Up

$$\left\{\Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{p}, \epsilon_{p}\right\} \stackrel{\text{Step-up}}{\longrightarrow}\left\{a_{p}(1), a_{p}(2), \ldots, a_{p}(p), b(0)\right\}$$

The Levinson order-update equation given in $(LF.3)$ is a recursion for deriving the filter coefficients $a_p(k)$ from the reflection coefficients $\Gamma_i$. Specifically, since
$$a_{j+1}(i)=a_{j}(i)+\Gamma_{j+1} a_{j}^{*}(j-i+1)$$
the filter coefficients $a_{j+1}(i)$ may easily be found from $a_j(i)$ and $\Gamma_{j+1}$. The recursion is initialized by setting $a_0(0)=1$ and, after the coefficients $a_p(k)$ have been determined, the recursion is completed by setting $b(0)=\sqrt{\epsilon_p}$.

In matrix form, we have
$$\left[\begin{array}{c} 1 \\ a_{j+1}(1) \\ a_{j+1}(2) \\ \vdots \\ a_{j+1}(j) \\ a_{j+1}(j+1) \end{array}\right]=\left[\begin{array}{c} 1 \\ a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \\ 0 \end{array}\right]+\Gamma_{j+1}\left[\begin{array}{c} 0 \\ a_{j}^{*}(j) \\ a_{j}^{*}(j-1) \\ \vdots \\ a_{j}^{*}(1) \\ 1 \end{array}\right]$$
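In code, the step-up recursion is just this order update applied repeatedly; a minimal sketch (function name my own):

```python
import numpy as np

def step_up(gammas):
    """Step-up: [Gamma_1, ..., Gamma_p] -> a_p = [1, a_p(1), ..., a_p(p)]."""
    a = np.array([1.0 + 0.0j])
    for G in gammas:
        a_ext = np.concatenate((a, [0.0]))    # [a_j; 0]
        a = a_ext + G * np.conj(a_ext[::-1])  # + Gamma_{j+1} [0; a_j^R]
    return a
```

For example, `step_up([-0.5, 0.0, 0.0])` gives `[1, -0.5, 0, 0]`, consistent with the earlier numerical example.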



Step-Down

$$\left\{a_{p}(1), a_{p}(2), \ldots, a_{p}(p), b(0)\right\} \stackrel{\text{Step-down}}{\longrightarrow}\left\{\Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{p}, \epsilon_{p}\right\}$$

Since
$$\Gamma_j=a_j(j)$$
the reflection coefficients may be computed by running the Levinson-Durbin recursion backwards. Specifically, beginning with $\mathbf a_p$, we set $\Gamma_p=a_p(p)$. Then we recursively find each of the lower-order models $\mathbf a_j$, for $j=p-1,p-2,\cdots,1$, and set $\Gamma_j=a_j(j)$.


From $(LF.6)$ we have
$$\left[\begin{array}{c} A_{j+1}(z) \\ A_{j+1}^{R}(z) \end{array}\right]=\left[\begin{array}{cc} 1 & \Gamma_{j+1} \\ \Gamma_{j+1}^* & 1 \end{array}\right]\left[\begin{array}{cc} 1 & 0 \\ 0 & z^{-1} \end{array}\right]\left[\begin{array}{c} A_{j}(z) \\ A_{j}^{R}(z) \end{array}\right]$$
Hence
$$\left[\begin{array}{c} A_{j}(z) \\ A_{j}^{R}(z) \end{array}\right]=\left[\begin{array}{cc} 1 & 0 \\ 0 & z \end{array}\right]\frac{1}{1-|\Gamma_{j+1}|^2} \left[\begin{array}{cc} 1 & -\Gamma_{j+1} \\ -\Gamma_{j+1}^* & 1 \end{array}\right] \left[\begin{array}{c} A_{j+1}(z) \\ A_{j+1}^{R}(z) \end{array}\right]$$


Therefore
$$A_{j}(z)=\frac{1}{1-\left|\Gamma_{j+1}\right|^{2}}\left[A_{j+1}(z)-\Gamma_{j+1} A_{j+1}^{R}(z)\right]$$
or, by taking the inverse $z$-transform,
$$a_{j}(i)=\frac{1}{1-\left|\Gamma_{j+1}\right|^{2}}\left[a_{j+1}(i)-\Gamma_{j+1} a_{j+1}^{*}(j-i+1)\right]$$
which is the step-down recursion. This recursion may also be written in vector form as follows:
$$\left[\begin{array}{c} a_{j}(1) \\ a_{j}(2) \\ \vdots \\ a_{j}(j) \end{array}\right]=\frac{1}{1-\left|\Gamma_{j+1}\right|^{2}}\left\{\left[\begin{array}{c} a_{j+1}(1) \\ a_{j+1}(2) \\ \vdots \\ a_{j+1}(j) \end{array}\right]-\Gamma_{j+1}\left[\begin{array}{c} a_{j+1}^{*}(j) \\ a_{j+1}^{*}(j-1) \\ \vdots \\ a_{j+1}^{*}(1) \end{array}\right]\right\}$$
(Note that $\Gamma_{j+1}=a_{j+1}(j+1)$ and $\Gamma_j=a_j(j)$.)
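A sketch of the step-down recursion in code; note the division by $1-|\Gamma_j|^2$, which fails exactly when $|\Gamma_j|=1$ (the singular case discussed under Properties). The interface is my own choice.

```python
import numpy as np

def step_down(a):
    """Step-down: a_p = [1, a_p(1), ..., a_p(p)] -> [Gamma_1, ..., Gamma_p]."""
    a = np.asarray(a, dtype=complex)
    p = len(a) - 1
    gammas = np.zeros(p, dtype=complex)
    for j in range(p, 0, -1):
        G = a[j]                         # Gamma_j = a_j(j)
        gammas[j - 1] = G
        # recover the lower-order model a_{j-1} (valid for |Gamma_j| != 1)
        a = (a[:j] - G * np.conj(a[j:0:-1])) / (1.0 - abs(G) ** 2)
    return gammas
```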


Application: The Schur-Cohn Stability Test

This test is based on Property 2 (p.226), which states that the roots of a polynomial will lie inside the unit circle if and only if the magnitudes of the reflection coefficients are less than one.

Therefore, given a causal, linear shift-invariant filter with a rational system function
$$H(z)=\frac{B(z)}{A(z)}$$
the filter may be tested for stability as follows:

First, the step-down recursion is applied to the coefficients of the denominator polynomial $A(z)$ to generate a reflection coefficient sequence $\Gamma_j$. The filter is then stable if and only if all of the reflection coefficients are less than one in magnitude.
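Combining the test with the step-down sketch above gives a compact stability check; the polynomial coefficients below are illustrative examples:

```python
import numpy as np

def is_stable(a):
    """Schur-Cohn test: True iff all roots of A(z) lie inside the unit circle."""
    a = np.asarray(a, dtype=complex)
    for j in range(len(a) - 1, 0, -1):
        G = a[j]                          # Gamma_j = a_j(j)
        if abs(G) >= 1.0:                 # |Gamma_j| >= 1  ->  not stable
            return False
        a = (a[:j] - G * np.conj(a[j:0:-1])) / (1.0 - abs(G) ** 2)
    return True

print(is_stable([1.0, -0.9]))             # True:  single root at z = 0.9
print(is_stable([1.0, -2.5, 1.0]))        # False: roots at z = 2 and z = 0.5
```

Checking $|\Gamma_j|\geq 1$ before dividing also avoids the division by zero that occurs in the deterministic case $|\Gamma_j|=1$.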
