Chapter 5 (Eigenvalues and Eigenvectors): Iterative Estimates for Eigenvalues

This post is a set of reading notes on *Linear Algebra and Its Applications*.

  • In scientific applications of linear algebra, eigenvalues are seldom known precisely. Fortunately, a close numerical approximation is usually quite satisfactory.

The Power Method

  • In fact, some applications require only a rough approximation to the largest eigenvalue. The first algorithm described below can work well for this case. Also, it provides a foundation for a more powerful method that can give fast estimates for other eigenvalues as well.

  • The power method applies to an $n \times n$ matrix $A$ with a strictly dominant eigenvalue $\lambda_1$, which means that $\lambda_1$ must be larger in absolute value than all the other eigenvalues. In this case, the power method produces a scalar sequence that approaches $\lambda_1$ and a vector sequence that approaches a corresponding eigenvector.
  • Assume for simplicity that $A$ is diagonalizable and $\mathbb R^n$ has a basis of eigenvectors $\boldsymbol v_1,...,\boldsymbol v_n$, arranged so their corresponding eigenvalues $\lambda_1,...,\lambda_n$ decrease in size, with the strictly dominant eigenvalue first. That is,
    $$|\lambda_1| > |\lambda_2| \geq \cdots \geq |\lambda_n|$$
    If $\boldsymbol x$ in $\mathbb R^n$ is written as $\boldsymbol x = c_1\boldsymbol v_1 + \cdots + c_n\boldsymbol v_n$, then
    $$A^k\boldsymbol x = c_1\lambda_1^k\boldsymbol v_1 + c_2\lambda_2^k\boldsymbol v_2 + \cdots + c_n\lambda_n^k\boldsymbol v_n \quad (k = 1, 2, ...)$$
    Assume $c_1 \neq 0$. Then,
    $$\frac{1}{\lambda_1^k}A^k\boldsymbol x = c_1\boldsymbol v_1 + c_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\boldsymbol v_2 + \cdots + c_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\boldsymbol v_n \quad (k = 1, 2, ...)$$
    The fractions $\lambda_2/\lambda_1,...,\lambda_n/\lambda_1$ are all less than $1$ in magnitude, and so their powers go to zero. Hence
    $$\frac{1}{\lambda_1^k}A^k\boldsymbol x \to c_1\boldsymbol v_1 \quad \text{as } k \to \infty$$
    Thus, for large $k$, a scalar multiple of $A^k\boldsymbol x$ determines almost the same direction as the eigenvector $c_1\boldsymbol v_1$, provided $c_1 \neq 0$.

EXAMPLE 1

  • Let $A=\begin{bmatrix} 1.8 & .8 \\ .2 & 1.2 \end{bmatrix}$, $\boldsymbol v_1=\begin{bmatrix} 4 \\ 1 \end{bmatrix}$, and $\boldsymbol x=\begin{bmatrix} -.5 \\ 1 \end{bmatrix}$. Then $A$ has eigenvalues $2$ and $1$, and the eigenspace for $\lambda_1 = 2$ is the line through $\boldsymbol 0$ and $\boldsymbol v_1$.
    [Figure 1: as $k$ increases, the directions of the vectors $A^k\boldsymbol x$ approach the eigenspace for $\lambda_1 = 2$.]
  • We can scale each $A^k\boldsymbol x$ to make its largest entry a $1$. It turns out that the resulting sequence $\{\boldsymbol x_k\}$ will converge to a multiple of $\boldsymbol v_1$ whose largest entry is $1$. Figure 2 shows the scaled sequence for Example 1. The eigenvalue $\lambda_1$ can be estimated from the sequence $\{\boldsymbol x_k\}$, too.
    [Figure 2: the scaled sequence $\{\boldsymbol x_k\}$, converging to $(1, .25)$, a multiple of $\boldsymbol v_1$.]

THE POWER METHOD FOR ESTIMATING A STRICTLY DOMINANT EIGENVALUE

  1. Select an initial vector $\boldsymbol x_0$ whose largest entry is $1$.
  2. For $k = 0, 1, ...$,
     a. Compute $A\boldsymbol x_k$.
     b. Let $\mu_k$ be an entry in $A\boldsymbol x_k$ whose absolute value is as large as possible.
     c. Compute $\boldsymbol x_{k+1} = (1/\mu_k)A\boldsymbol x_k$.
  3. For almost all choices of $\boldsymbol x_0$, the sequence $\{\mu_k\}$ approaches the dominant eigenvalue, and the sequence $\{\boldsymbol x_k\}$ approaches a corresponding eigenvector.

PROOF

Reference: https://wenku.baidu.com/view/ee7ecbeca98271fe910ef9fc.html?from=search

  • In general, the rate of convergence depends on the ratio $|\lambda_2/\lambda_1|$. If $|\lambda_2/\lambda_1|$ is close to $1$, then $\mu_k$ and $\boldsymbol x_k$ can converge very slowly, and other approximation methods may be preferred.
  • With the power method, there is a slight chance that the chosen initial vector $\boldsymbol x$ will have no component in the $\boldsymbol v_1$ direction (when $c_1 = 0$). But computer rounding errors during the calculations of the $\boldsymbol x_k$ are likely to create a vector with at least a small component in the direction of $\boldsymbol v_1$. If that occurs, the $\boldsymbol x_k$ will start to converge to a multiple of $\boldsymbol v_1$.
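As a concrete sketch, the power method described above can be coded in a few lines of plain Python (no external libraries); the matrix and starting vector below are those of Example 1:

```python
def power_method(A, x, iters=50):
    """Estimate the strictly dominant eigenvalue of A (a list of rows):
    repeatedly compute A x_k, then rescale by the entry mu_k of largest
    absolute value so that the largest entry of x_{k+1} is 1."""
    mu = None
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y = A x_k
        mu = max(y, key=abs)          # entry of largest absolute value
        x = [yi / mu for yi in y]     # x_{k+1} = (1/mu_k) A x_k
    return mu, x

# Example 1: eigenvalues 2 and 1, dominant eigenvector v1 = (4, 1)
A = [[1.8, 0.8], [0.2, 1.2]]
mu, x = power_method(A, [-0.5, 1.0])
print(mu, x)   # mu -> 2, x -> (1, 0.25), the multiple of v1 with largest entry 1
```

Since $|\lambda_2/\lambda_1| = 1/2$ here, the error shrinks by roughly half per iteration, so 50 iterations reach machine precision.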

The Inverse Power Method

  • This method provides an approximation for *any* eigenvalue, provided a good initial estimate $\alpha$ of the eigenvalue $\lambda$ is known.
  • In this case, we let $B = (A - \alpha I)^{-1}$ and apply the power method to $B$. It can be shown that if the eigenvalues of $A$ are $\lambda_1,...,\lambda_n$, then the eigenvalues of $B$ are
    $$\frac{1}{\lambda_1 - \alpha},\ \frac{1}{\lambda_2 - \alpha},\ ...,\ \frac{1}{\lambda_n - \alpha}$$
    and the corresponding eigenvectors are the same as those for $A$.
  • If $\alpha$ is really close to $\lambda_k$, then $1/(\lambda_k - \alpha)$ is *much* larger in magnitude than the other eigenvalues of $B$, and the inverse power method produces a very rapid approximation to $\lambda_k$ for almost all choices of $\boldsymbol x_0$. The following algorithm gives the details.

THE INVERSE POWER METHOD FOR ESTIMATING AN EIGENVALUE $\lambda$ OF $A$

  1. Select an initial estimate $\alpha$ sufficiently close to $\lambda$.
  2. Select an initial vector $\boldsymbol x_0$ whose largest entry is $1$.
  3. For $k = 0, 1, ...$,
     a. Solve $(A - \alpha I)\boldsymbol y_k = \boldsymbol x_k$ for $\boldsymbol y_k$.
     b. Let $\mu_k$ be an entry in $\boldsymbol y_k$ whose absolute value is as large as possible.
     c. Compute $\nu_k = \alpha + (1/\mu_k)$.
     d. Compute $\boldsymbol x_{k+1} = (1/\mu_k)\boldsymbol y_k$.
  4. For almost all choices of $\boldsymbol x_0$, the sequence $\{\nu_k\}$ approaches the eigenvalue $\lambda$ of $A$, and the sequence $\{\boldsymbol x_k\}$ approaches a corresponding eigenvector.

  • Notice that instead of computing $(A - \alpha I)^{-1}\boldsymbol x_k$ to get the next vector in the sequence, it is better to solve the equation $(A - \alpha I)\boldsymbol y_k = \boldsymbol x_k$ for $\boldsymbol y_k$. Since this equation must be solved for each $k$, an $LU$ factorization of $A - \alpha I$ will speed up the process.
  • The inverse power method can also be used to approximate the eigenvalue of $A$ with the smallest absolute value: apply the power method to $A^{-1}$, i.e., take $\alpha = 0$.
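Continuing the plain-Python sketch (the 2×2 system is solved directly by Cramer's rule here; a real implementation would use an $LU$ factorization), the inverse power method with the estimate $\alpha = 1.9$ homes in on the eigenvalue $\lambda = 2$ of the matrix from Example 1:

```python
def solve2(M, b):
    """Solve the 2x2 system M y = b by Cramer's rule (assumes det M != 0)."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - b[0] * M[1][0]) / det]

def inverse_power_method(A, x, alpha, iters=30):
    """Power method applied to B = (A - alpha I)^{-1}: instead of inverting,
    solve (A - alpha I) y_k = x_k at each step; nu_k = alpha + 1/mu_k
    estimates the eigenvalue of A closest to alpha."""
    M = [[A[0][0] - alpha, A[0][1]],
         [A[1][0], A[1][1] - alpha]]     # A - alpha I
    nu = alpha
    for _ in range(iters):
        y = solve2(M, x)                 # y_k = (A - alpha I)^{-1} x_k
        mu = max(y, key=abs)             # largest-magnitude entry of y_k
        nu = alpha + 1.0 / mu            # eigenvalue estimate for A
        x = [yi / mu for yi in y]        # rescale so largest entry is 1
    return nu, x

A = [[1.8, 0.8], [0.2, 1.2]]
nu, x = inverse_power_method(A, [1.0, 1.0], alpha=1.9)
print(nu, x)   # nu -> 2, x -> (1, 0.25)
```

The eigenvalues of $B$ are $1/(2 - 1.9) = 10$ and $1/(1 - 1.9) \approx -1.1$, so the convergence ratio is about $0.11$ per step, much faster than the plain power method on $A$.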

The QR Algorithm

  • A more robust and widely used iterative method is the QR algorithm. For instance, it is the heart of the MATLAB command eig(A), which rapidly computes eigenvalues and eigenvectors of A A A. A brief description of the QR algorithm was given in Section 5.2. Further details are presented in most modern numerical analysis texts.
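For a taste of the idea, here is a bare-bones *unshifted* QR iteration for a 2×2 matrix, a didactic sketch only: production implementations (such as the one behind eig) first reduce to Hessenberg form and use shifts for speed and robustness.

```python
def qr2(A):
    """QR factorization of a 2x2 matrix (list of rows) via Gram-Schmidt."""
    a = [A[0][0], A[1][0]]                        # first column
    b = [A[0][1], A[1][1]]                        # second column
    r11 = (a[0] ** 2 + a[1] ** 2) ** 0.5
    q1 = [a[0] / r11, a[1] / r11]
    r12 = q1[0] * b[0] + q1[1] * b[1]
    u = [b[0] - r12 * q1[0], b[1] - r12 * q1[1]]  # b minus its projection on q1
    r22 = (u[0] ** 2 + u[1] ** 2) ** 0.5
    q2 = [u[0] / r22, u[1] / r22]
    Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
    R = [[r11, r12], [0.0, r22]]
    return Q, R

def qr_algorithm(A, iters=60):
    """Unshifted QR iteration: each A_{k+1} = R_k Q_k is similar to A_k
    (same eigenvalues); for a matrix with distinct real eigenvalues it
    converges to upper triangular form, eigenvalues on the diagonal in
    decreasing magnitude."""
    for _ in range(iters):
        Q, R = qr2(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return A

T = qr_algorithm([[1.8, 0.8], [0.2, 1.2]])
print(T[0][0], T[1][1])   # diagonal approaches the eigenvalues 2 and 1
```

As with the power method, the subdiagonal entry shrinks by a factor of about $|\lambda_2/\lambda_1| = 1/2$ per iteration for this matrix.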