Eigenvalues and Eigenvectors
Eigenvalues
The eigenvalues of a triangular matrix are the entries on its main diagonal.
If $v_1,\cdots,v_r$ are eigenvectors that correspond to distinct eigenvalues $\lambda_1,\cdots,\lambda_r$ of an $n\times n$ matrix $A$, then the set $\{v_1,\cdots,v_r\}$ is linearly independent.
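A minimal NumPy check of the two facts above, with an arbitrarily chosen triangular matrix: its eigenvalues are the diagonal entries, and since those are distinct, the computed eigenvectors come out linearly independent.

```python
import numpy as np

# Upper triangular matrix: its eigenvalues should be the diagonal entries 2, 3, 5.
A = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))                  # [2. 3. 5.] -- the diagonal entries

# The eigenvalues are distinct, so the eigenvectors (the columns returned by eig)
# should be linearly independent, i.e. the matrix of eigenvectors has full rank.
print(np.linalg.matrix_rank(eigenvectors))   # 3
```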
The characteristic equation
Let $A$ be an $n \times n$ matrix. Then $A$ is invertible if and only if:
s. the number 0 is not an eigenvalue of $A$;
t. the determinant of $A$ is not zero.
A scalar $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ if and only if $\lambda$ satisfies the characteristic equation $\det(A-\lambda I)=0$.
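As a small numerical illustration (the matrix is an arbitrary example), NumPy can form the characteristic polynomial and confirm that its roots are exactly the values $\lambda$ with $\det(A-\lambda I)=0$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.poly applied to a square matrix returns the coefficients of the
# characteristic polynomial det(A - lambda*I); np.roots recovers its roots.
coeffs = np.poly(A)            # here: lambda^2 - 7*lambda + 10
print(np.roots(coeffs))        # [5. 2.] -- the eigenvalues of A

# Each eigenvalue lambda satisfies det(A - lambda*I) = 0 (up to round-off).
for lam in np.roots(coeffs):
    print(np.linalg.det(A - lam * np.eye(2)))   # approximately 0
```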
Similarity
$A$ is similar to $B$ if there is an invertible matrix $P$ such that $P^{-1}AP=B$.
If the $n \times n$ matrices $A$ and $B$ are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities).
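A quick sketch of this fact, using an arbitrary matrix $A$ and a random invertible $P$: the similar matrix $B = P^{-1}AP$ has the same eigenvalues as $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = rng.random((2, 2)) + np.eye(2)      # almost surely invertible
B = np.linalg.inv(P) @ A @ P            # B = P^{-1} A P, so B is similar to A

print(np.sort(np.linalg.eigvals(A)))    # [2. 3.]
print(np.sort(np.linalg.eigvals(B)))    # [2. 3.] (up to round-off)
```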
Diagonalization
An $n \times n$ matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors.
In fact, $A = PDP^{-1}$, with $D$ a diagonal matrix, if and only if the columns of $P$ are $n$ linearly independent eigenvectors of $A$. In this case, the diagonal entries of $D$ are eigenvalues of $A$ that correspond, respectively, to the eigenvectors in $P$.
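A minimal NumPy sketch of the factorization $A = PDP^{-1}$ for an arbitrary diagonalizable example; the columns of $P$ are eigenvectors and the diagonal of $D$ holds the corresponding eigenvalues.

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [-4.0, 1.0]])            # eigenvalues 5 and 3

eigenvalues, P = np.linalg.eig(A)      # columns of P are eigenvectors of A
D = np.diag(eigenvalues)               # diagonal entries are the eigenvalues

# Verify A = P D P^{-1} (up to floating-point round-off).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```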
An $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable.
Let $A$ be an $n \times n$ matrix whose distinct eigenvalues are $\lambda_1, \cdots, \lambda_p$.
a. For $1 \leq k \leq p$, the dimension of the eigenspace for $\lambda_k$ is less than or equal to the multiplicity of the eigenvalue $\lambda_k$.
b. The matrix $A$ is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals $n$, and this happens if and only if (i) the characteristic polynomial factors completely into linear factors and (ii) the dimension of the eigenspace for each $\lambda_k$ equals the multiplicity of $\lambda_k$.
c. If $A$ is diagonalizable and $\beta_k$ is a basis for the eigenspace corresponding to $\lambda_k$ for each $k$, then the total collection of vectors in the sets $\beta_1,\cdots,\beta_p$ forms an eigenvector basis for $R^n$.
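A small check of part (b), using an arbitrary example: here the eigenvalue 2 has multiplicity 2 in the characteristic polynomial, but its eigenspace is only 1-dimensional, so the matrix is not diagonalizable.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

lam = 2.0
# The eigenspace for lambda is the null space of A - lambda*I; its dimension
# (the geometric multiplicity) is n minus the rank of A - lambda*I.
geometric_multiplicity = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geometric_multiplicity)   # 1, which is less than the algebraic multiplicity 2
```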
Eigenvectors and Linear Transformations
Suppose there are two vector spaces: an $n$-dimensional space $V$ with basis $\beta$ and an $m$-dimensional space $W$ with basis $C$, and let $T$ be a linear transformation from $V$ to $W$. The coordinate vector $[x]_{\beta}$ is in $R^n$ and $[T(x)]_{C}$ is in $R^m$.
Let $\{b_1,\cdots,b_n\}$ be the basis $\beta$ for $V$. Then the matrix for $T$ relative to $\beta$ and $C$ is $M=[[T(b_1)]_C,\cdots,[T(b_n)]_C]$, and the action of $T$ on $x$ may be viewed as left-multiplication by $M$: $[T(x)]_C = M[x]_{\beta}$.
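A small worked example of $M=[[T(b_1)]_C,\cdots,[T(b_n)]_C]$; the transformation (differentiation of polynomials) and the bases are chosen here only for illustration.

```python
import numpy as np

# Take T to be differentiation from polynomials of degree <= 2 to degree <= 1,
# with bases beta = {1, t, t^2} for V and C = {1, t} for W.
# Columns are [T(b_1)]_C, [T(b_2)]_C, [T(b_3)]_C for T(1)=0, T(t)=1, T(t^2)=2t.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# For x = 3 + 5t + 7t^2, the coordinate vector is [x]_beta = (3, 5, 7), and
# T(x) = 5 + 14t, so [T(x)]_C should be (5, 14).
x_beta = np.array([3.0, 5.0, 7.0])
print(M @ x_beta)    # [ 5. 14.] -- the action of T is left-multiplication by M
```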
Linear Transformations on $R^n$
Suppose $A=PDP^{-1}$, where $D$ is a diagonal $n\times n$ matrix. If $\beta$ is the basis for $R^n$ formed from the columns of $P$, then $D$ is the $\beta$-matrix for the transformation $x\mapsto Ax$.
Similarity of Matrix Representations
More generally, if $A=PCP^{-1}$, then $C$ is the $\beta$-matrix for the transformation $x\mapsto Ax$, where $\beta$ is the basis for $R^n$ formed from the columns of $P$.
Iterative Estimates For Eigenvalues
The power method applies to an $n \times n$ matrix $A$ with a strictly dominant eigenvalue $\lambda_1$, meaning that $\lambda_1$ is larger in absolute value than all the other eigenvalues; that is, $|\lambda_1|>|\lambda_2|\geq |\lambda_3| \geq \cdots \geq |\lambda_n|$. The following steps (sketched in code after the list) produce estimates of the dominant eigenvalue and a corresponding eigenvector:
1. Select an initial vector $x_0$ whose largest entry is 1.
2. For $k=0,1,\cdots,$
   a. Compute $Ax_k$.
   b. Let $\mu_k$ be an entry in $Ax_k$ whose absolute value is as large as possible.
   c. Compute $x_{k+1}=(1/\mu_k)Ax_k$.
3. For almost all choices of $x_0$, the sequence $\{\mu_k\}$ approaches the dominant eigenvalue, and the sequence $\{x_k\}$ approaches a corresponding eigenvector.
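A minimal Python sketch of the power method above, using NumPy; the matrix, starting vector, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def power_method(A, x0, num_iterations=50):
    """Estimate the strictly dominant eigenvalue of A and a corresponding eigenvector."""
    x = x0
    mu = 0.0
    for _ in range(num_iterations):
        y = A @ x                        # a. compute A x_k
        mu = y[np.argmax(np.abs(y))]     # b. entry of largest absolute value
        x = y / mu                       # c. x_{k+1} = (1/mu_k) A x_k
    return mu, x

A = np.array([[6.0, 5.0],
              [1.0, 2.0]])     # eigenvalues 7 and 1, so 7 is strictly dominant
x0 = np.array([1.0, 0.0])      # initial vector whose largest entry is 1

mu, x = power_method(A, x0)
print(mu)    # approximately 7.0
print(x)     # approximately an eigenvector for 7, scaled so its largest entry is 1
```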
The Inverse Power Method for Estimating an Eigenvalue $\lambda$ of $A$
- Select an initial estimate $\alpha$ sufficiently close to $\lambda$.
- Select an initial vector $x_0$ whose largest entry is 1.
- For $k = 0,1,\cdots,$
  a. Solve $(A-\alpha I)y_k=x_k$ for $y_k$.
  b. Let $\mu_k$ be an entry in $y_k$ whose absolute value is as large as possible.
  c. Compute $v_k=\alpha+(1/\mu_k)$.
  d. Compute $x_{k+1}=(1/\mu_k)y_k$.
- For almost all choices of $x_0$, the sequence $\{v_k\}$ approaches the eigenvalue $\lambda$ of $A$, and the sequence $\{x_k\}$ approaches a corresponding eigenvector.
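A minimal Python sketch of the inverse power method above; the matrix $A$, the estimate $\alpha$, the starting vector, and the iteration count are arbitrary illustrative choices.

```python
import numpy as np

def inverse_power_method(A, alpha, x0, num_iterations=20):
    """Estimate the eigenvalue of A closest to alpha and a corresponding eigenvector."""
    n = A.shape[0]
    x = x0
    v = alpha
    for _ in range(num_iterations):
        y = np.linalg.solve(A - alpha * np.eye(n), x)   # a. solve (A - alpha*I) y_k = x_k
        mu = y[np.argmax(np.abs(y))]                    # b. entry of largest absolute value
        v = alpha + 1.0 / mu                            # c. v_k = alpha + 1/mu_k
        x = y / mu                                      # d. x_{k+1} = (1/mu_k) y_k
    return v, x

A = np.array([[6.0, 5.0],
              [1.0, 2.0]])     # eigenvalues 7 and 1
alpha = 1.2                    # initial estimate close to the eigenvalue 1
x0 = np.array([1.0, 0.0])      # initial vector whose largest entry is 1

v, x = inverse_power_method(A, alpha, x0)
print(v)    # approximately 1.0
print(x)    # approximately an eigenvector for 1, scaled so its largest entry is 1 in absolute value
```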