The Power Method for Eigenvectors




 Source: http://mathfaculty.fullerton.edu/mathews//n2003/PowerMethodMod.html

Power Method

    
We now describe the power method for computing the dominant eigenpair.  Its extension, the inverse power method, is practical for finding any eigenvalue provided that a good initial approximation is known.  Some schemes for finding eigenvalues use other methods that converge quickly but have limited precision; the inverse power method is then invoked to refine the numerical values and attain full precision.  To discuss the situation, we will need the following definitions.

 

Definition  If  λ_1  is an eigenvalue of  A  that is larger in absolute value than any other eigenvalue, it is called the dominant eigenvalue.  An eigenvector  V_1  corresponding to  λ_1  is called a dominant eigenvector.

 

Definition  An eigenvector  V  is said to be normalized if the coordinate of largest magnitude is equal to unity (i.e., the largest coordinate in the vector  V  is the number 1).

Remark.  It is easy to normalize an eigenvector  V  by forming a new vector  U = (1/c) V,  where  c = v_j  and  |v_j| = max_{1<=i<=n} |v_i|  (that is, c is the coordinate of  V  of largest magnitude).
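This normalization can be sketched in a few lines of Python (the function name normalize is our own):

```python
def normalize(v):
    """Scale v so that its coordinate of largest magnitude becomes 1."""
    c = max(v, key=abs)  # the coordinate of largest magnitude
    return [x / c for x in v]

print(normalize([3.0, -6.0, 1.5]))  # [-0.5, 1.0, -0.25]
```

Note that dividing by the coordinate itself (rather than its absolute value) makes that coordinate exactly 1.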

 

Theorem (Power Method)  Assume that the n×n matrix  A  has  n  distinct eigenvalues  λ_1, λ_2, ..., λ_n  and that they are ordered in decreasing magnitude; that is,  |λ_1| > |λ_2| >= ... >= |λ_n|.  If  X_0  is chosen appropriately, then the sequences  {X_k}  and  {c_k}  generated recursively by

        Y_k = A X_k
    and
        X_{k+1} = (1/c_{k+1}) Y_k,

where  c_{k+1}  is the coordinate of  Y_k  of largest magnitude, will converge to the dominant eigenvector  V_1  and eigenvalue  λ_1,  respectively.  That is,

        lim_{k→∞} X_k = V_1  and  lim_{k→∞} c_k = λ_1.

Remark.  If  X_0  is an eigenvector and  X_0 ≠ V_1,  then some other starting vector must be chosen.


 

Speed of Convergence

    
The iteration in the theorem uses the equation

        A^k X_0 = a_1 λ_1^k V_1 + a_2 λ_2^k V_2 + ... + a_n λ_n^k V_n,

and the coefficient of  V_j  that is used to form  X_{k+1}  goes to zero in proportion to  (λ_j / λ_1)^k.  Hence, the speed of convergence of  {X_k}  to  V_1  is governed by the terms  (λ_2 / λ_1)^k.  Consequently, the rate of convergence is linear.  Similarly, the convergence of the sequence of constants  {c_k}  to  λ_1  is linear.  The Aitken Δ² method can be used for any linearly convergent sequence  {p_k}  to form a new sequence,

        q_k = p_k - (p_{k+1} - p_k)^2 / (p_{k+2} - 2 p_{k+1} + p_k),

that converges faster.  The Aitken Δ² method can be adapted to speed up the convergence of the power method.
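As a sketch, the Aitken Δ² transform  p_k ↦ p_k − (p_{k+1} − p_k)² / (p_{k+2} − 2 p_{k+1} + p_k)  can be applied to any stored linearly convergent sequence; the Python helper and the geometric test sequence below are our own:

```python
def aitken(p):
    """Apply Aitken's delta-squared acceleration to the sequence p."""
    out = []
    for k in range(len(p) - 2):
        denom = p[k + 2] - 2 * p[k + 1] + p[k]
        if denom == 0:          # sequence already (numerically) converged
            break
        out.append(p[k] - (p[k + 1] - p[k]) ** 2 / denom)
    return out

# A linearly convergent sequence p_k = 2 + 0.5**k with limit 2:
p = [2 + 0.5 ** k for k in range(8)]
print(aitken(p))   # every accelerated term is already ≈ 2
```

For a purely geometric error term the transform recovers the limit immediately, which is why it pairs well with the linearly convergent sequences {c_k} produced by the power method.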

 

Shifted-Inverse Power Method

    
We will now discuss the shifted inverse power method.  It requires a good starting approximation for an eigenvalue, and then iteration is used to obtain a precise solution.  Other procedures, such as the QM and Givens' methods, are used first to obtain the starting approximations.  Cases involving complex eigenvalues, multiple eigenvalues, or two eigenvalues with the same or approximately the same magnitude will cause computational difficulties and require more advanced methods.  Our illustrations will focus on the case where the eigenvalues are distinct.  The shifted inverse power method is based on the following three results (the proofs are left as exercises).

 

Theorem (Shifting Eigenvalues)  Suppose that  λ, V  is an eigenpair of  A.  If  α  is any constant, then  λ - α, V  is an eigenpair of the matrix  A - αI.

 

Theorem (Inverse Eigenvalues)  Suppose that  λ, V  is an eigenpair of  A.  If  λ ≠ 0,  then  1/λ, V  is an eigenpair of the matrix  A^(-1).

 

Theorem (Shifted-Inverse Eigenvalues)  Suppose that  λ, V  is an eigenpair of  A.  If  α ≠ λ,  then  1/(λ - α), V  is an eigenpair of the matrix  (A - αI)^(-1).
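These three results can be checked numerically.  Below is a small Python sketch; the 2×2 matrix, the eigenpair λ = 4, V = (1, 1), and the shift α = 1 are our own choices, and the inverse relations are verified without forming any inverse, using the equivalence  A^(-1) V = (1/λ) V  ⇔  A ((1/λ) V) = V:

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def shift(A, alpha):
    """Return A - alpha * I."""
    n = len(A)
    return [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)]
            for i in range(n)]

# Eigenpair of A: lam = 4, V = (1, 1); shift alpha = 1.
A = [[3.0, 1.0], [1.0, 3.0]]
V = [1.0, 1.0]
lam, alpha = 4.0, 1.0

print(mat_vec(A, V))                # [4.0, 4.0] = lam * V
print(mat_vec(shift(A, alpha), V))  # [3.0, 3.0] = (lam - alpha) * V
# A^(-1) V = (1/lam) V        <=>  A ((1/lam) V) = V
print(mat_vec(A, [v / lam for v in V]))                          # [1.0, 1.0]
# (A - alpha I)^(-1) V = (1/(lam - alpha)) V  <=>  (A - alpha I)((1/(lam - alpha)) V) = V
print(mat_vec(shift(A, alpha), [v / (lam - alpha) for v in V]))  # ≈ [1.0, 1.0]
```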

 

Theorem (Shifted-Inverse Power Method)  Assume that the n×n matrix  A  has distinct eigenvalues  λ_1, λ_2, ..., λ_n  and consider the eigenvalue  λ_j.  Then a constant  α  can be chosen so that  μ_1 = 1/(λ_j - α)  is the dominant eigenvalue of  (A - αI)^(-1).  Furthermore, if  X_0  is chosen appropriately, then the sequences  {X_k}  and  {c_k}  generated recursively by

        Y_k = (A - αI)^(-1) X_k
    and
        X_{k+1} = (1/c_{k+1}) Y_k,

where  c_{k+1}  is the coordinate of  Y_k  of largest magnitude, will converge to the dominant eigenpair  μ_1, V_j  of the matrix  (A - αI)^(-1).  Finally, the corresponding eigenvalue for the matrix  A  is given by the calculation

        λ_j = 1/μ_1 + α.

Remark.  For practical implementations of this theorem, a linear system solver is used to compute  Y_k  at each step by solving the linear system  (A - αI) Y_k = X_k.


 

Computer Programs

Mathematica Subroutine (Power Method).  To compute the dominant eigenvalue  λ_1  and its associated eigenvector  V_1  for the n×n matrix  A.  It is assumed that the n eigenvalues have the dominance property  |λ_1| > |λ_2| >= ... >= |λ_n|.

[Graphics:Images/PowerMethodMod_gr_69.gif]
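The subroutine itself appears only as an image in the original source.  As an illustrative stand-in, here is a Python sketch of the iteration from the theorem (Y_k = A X_k, then scale by the coordinate of largest magnitude); the test matrix, with eigenvalues 4 and 2, is our own choice:

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_method(A, x0, tol=1e-10, max_iter=200):
    """Return (c, x): dominant eigenvalue estimate and normalized eigenvector."""
    x = x0[:]
    c = 0.0
    for _ in range(max_iter):
        y = mat_vec(A, x)
        c = max(y, key=abs)           # coordinate of largest magnitude
        x_new = [yi / c for yi in y]  # normalize so largest coordinate is 1
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return c, x_new
        x = x_new
    return c, x

# A has eigenvalues 4 and 2; the dominant eigenpair is 4, (1, 1).
A = [[3.0, 1.0],
     [1.0, 3.0]]
lam, v = power_method(A, [1.0, 0.0])
print(lam, v)   # ≈ 4.0, [1.0, 1.0]
```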

Example 1.  Use the power method to find the dominant eigenvalue and eigenvector for the matrix  [Graphics:Images/PowerMethodMod_gr_70.gif].  
Solution 1.

 

Example 2.  Use the power method to find the dominant eigenvalue and eigenvector for the matrix  [Graphics:Images/PowerMethodMod_gr_131.gif].  
Solution 2.

 

 

Shifted Inverse Power Method   

    If a good approximation to an eigenvalue is known, then the shifted inverse power method can be used, and convergence is faster.  Other methods, such as the QM and Givens' methods, are used to obtain approximate starting values.

Program (Shifted Inverse Power Method).  To compute the eigenvalue  λ_j  nearest the shift  α  and its associated eigenvector  V_j  for the n×n matrix  A.  It is assumed that the n eigenvalues are  λ_1, λ_2, ..., λ_n  and that  α  is a real number such that  |λ_j - α| < |λ_i - α|  for each  i ≠ j.

[Graphics:Images/PowerMethodMod_gr_215.gif]
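As with the power method above, this program appears only as an image in the original.  A Python sketch of the shifted-inverse iteration follows, solving  (A - αI) Y_k = X_k  by Gaussian elimination at each step and recovering  λ_j = 1/μ_1 + α;  the test matrix and the shift α = 1.9 (which targets the eigenvalue 2) are our own choices:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def shifted_inverse_power(A, alpha, x0, tol=1e-10, max_iter=200):
    """Return the eigenvalue of A nearest alpha, and its eigenvector."""
    n = len(A)
    B = [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)]
         for i in range(n)]                 # B = A - alpha I
    x = x0[:]
    mu = 1.0
    for _ in range(max_iter):
        y = solve(B, x)                     # (A - alpha I) Y_k = X_k
        mu = max(y, key=abs)                # dominant eigenvalue of B^(-1)
        x_new = [yi / mu for yi in y]
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            x = x_new
            break
        x = x_new
    return 1.0 / mu + alpha, x              # lambda_j = 1/mu_1 + alpha

# A has eigenvalues 4 and 2; alpha = 1.9 makes 2 the targeted eigenvalue.
A = [[3.0, 1.0], [1.0, 3.0]]
lam, v = shifted_inverse_power(A, 1.9, [1.0, 0.0])
print(lam, v)   # ≈ 2.0, eigenvector proportional to (1, -1)
```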

Example 3.  Find the dominant eigenvalue and eigenvector for the matrix  [Graphics:Images/PowerMethodMod_gr_216.gif].  
Use the shift  [Graphics:Images/PowerMethodMod_gr_217.gif] in the shifted inverse power method.
Solution 3.

 

 

Application to Markov Chains

    In the study of Markov chains the elements of the transition matrix are the probabilities of moving from any state to any other state.  A Markov process can be described by a square matrix whose entries are all positive and the column sums are all equal to 1.  For example, a 3×3 transition matrix looks like

        [Graphics:Images/PowerMethodMod_gr_257.gif]

where   [Graphics:Images/PowerMethodMod_gr_258.gif],  [Graphics:Images/PowerMethodMod_gr_259.gif]  and  [Graphics:Images/PowerMethodMod_gr_260.gif].  The initial state vector is  [Graphics:Images/PowerMethodMod_gr_261.gif].

The computation  [Graphics:Images/PowerMethodMod_gr_262.gif]  shows how the  [Graphics:Images/PowerMethodMod_gr_263.gif]  is redistributed in the next state.  Similarly, we see that

    [Graphics:Images/PowerMethodMod_gr_264.gif]  shows how the  [Graphics:Images/PowerMethodMod_gr_265.gif]  is redistributed in the next state,
and
    [Graphics:Images/PowerMethodMod_gr_266.gif]  shows how the  [Graphics:Images/PowerMethodMod_gr_267.gif]  is redistributed in the next state.
    
Therefore, the distribution for the next state is

    [Graphics:Images/PowerMethodMod_gr_268.gif]

A recursive sequence is generated using the general rule

    [Graphics:Images/PowerMethodMod_gr_269.gif]     for  k = 0, 1, 2, ... .

We wish to know the limiting distribution  P = lim_{k→∞} P_k.  Since we will also have  lim_{k→∞} P_{k+1} = P,  we obtain the relation

    P = A P,

from which it follows that

    A P = (1) P.

Therefore, the limiting distribution  P  is the eigenvector corresponding to the dominant eigenvalue  λ = 1.  The following subroutine reminds us of the iteration used in the power method.

 

Mathematica Subroutine (Markov Process).  

[Graphics:Images/PowerMethodMod_gr_275.gif]

Example 4.  Let  [Graphics:Images/PowerMethodMod_gr_276.gif]  record the number of people in a certain city who use brands  X, Y, and Z,  respectively.  Each month people decide to keep using the same brand or to switch brands.  The probability that a user of brand  X  will switch to brand  Y  or  Z  is 0.3 and 0.3, respectively.  The probability that a user of brand  Y  will switch to brand  X  or  Z  is 0.3 and 0.2, respectively.  The probability that a user of brand  Z  will switch to brand  X  or  Y  is 0.1 and 0.3, respectively.  The transition matrix for this process is  [Graphics:Images/PowerMethodMod_gr_277.gif]  or
        [Graphics:Images/PowerMethodMod_gr_278.gif]
Assume that the initial distribution is  [Graphics:Images/PowerMethodMod_gr_279.gif].
4 (a).  Find the first few terms in the sequence [Graphics:Images/PowerMethodMod_gr_280.gif].  
4 (b).  Verify that  [Graphics:Images/PowerMethodMod_gr_281.gif] is the dominant eigenvector of  A.  
4 (c).  Verify that a corresponding eigenvector is  [Graphics:Images/PowerMethodMod_gr_282.gif].
4 (d).  Conclude that the limiting distribution is  [Graphics:Images/PowerMethodMod_gr_283.gif].  
Solution 4.
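For a concrete illustration, the transition matrix of Example 4 can be assembled from the stated switching probabilities (the diagonal stay-probabilities 0.4, 0.5, and 0.6 follow because each column must sum to 1).  Iterating  P_{k+1} = A P_k  from any starting distribution approaches the eigenvector of  A  for  λ = 1;  the uniform start below is our own choice:

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Entry A[i][j] = probability of moving from brand j to brand i
# (columns and rows ordered X, Y, Z); each column sums to 1.
A = [[0.4, 0.3, 0.1],
     [0.3, 0.5, 0.3],
     [0.3, 0.2, 0.6]]

P = [1 / 3, 1 / 3, 1 / 3]   # any starting distribution works here
for _ in range(100):
    P = mat_vec(A, P)       # P_{k+1} = A P_k

print(P)   # ≈ [0.25, 0.375, 0.375], the eigenvector of A for eigenvalue 1
```

One can check directly that  A P = P  for  P = (0.25, 0.375, 0.375),  so this is the limiting distribution.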

 

Example 5.  Let  [Graphics:Images/PowerMethodMod_gr_319.gif]  record the number of people in a certain city who use brands  X, Y, and Z,  respectively.  Each month people decide to keep using the same brand or to switch brands.  The probability that a user of brand  X  will switch to brand  Y  or  Z  is 0.4 and 0.2, respectively.  The probability that a user of brand  Y  will switch to brand  X  or  Z  is 0.3 and 0.2, respectively.  The probability that a user of brand  Z  will switch to brand  X  or  Y  is 0.1 and 0.3, respectively.  The transition matrix for this process is  [Graphics:Images/PowerMethodMod_gr_320.gif]  or
        [Graphics:Images/PowerMethodMod_gr_321.gif]
Assume that the initial distribution is  [Graphics:Images/PowerMethodMod_gr_322.gif].
5 (a).  Find the first few terms in the sequence [Graphics:Images/PowerMethodMod_gr_323.gif].  
5 (b).  Verify that  [Graphics:Images/PowerMethodMod_gr_324.gif] is the dominant eigenvector of  A.  
5 (c).  Verify that a corresponding eigenvector is  [Graphics:Images/PowerMethodMod_gr_325.gif].
5 (d).  Conclude that the limiting distribution is  [Graphics:Images/PowerMethodMod_gr_326.gif].  
Solution 5.

 

Research Experience for Undergraduates

Internet hyperlinks to web sites and a bibliography of articles.

 

Download this Mathematica Notebook: The Power Method for Eigenvectors

 

Return to Numerical Methods - Numerical Analysis

 

