Conjugate gradient method

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive definite. The conjugate gradient method is an iterative method, so it can be applied to sparse systems which are too large to be handled by direct methods such as the Cholesky decomposition. Such systems arise regularly when numerically solving partial differential equations.

The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization.

The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.

Description of the method

Suppose we want to solve the following system of linear equations

Ax = b

where the n-by-n matrix A is symmetric (i.e., A^T = A), positive definite (i.e., x^T A x > 0 for all non-zero vectors x in R^n), and real.

We denote the unique solution of this system by x*.

The conjugate gradient method as a direct method

We say that two non-zero vectors u and v are conjugate (with respect to A) if

 \mathbf{u}^{\mathrm{T}} \mathbf{A} \mathbf{v} = 0.

Since A is symmetric and positive definite, the left-hand side defines an inner product

 \langle \mathbf{u},\mathbf{v} \rangle_\mathbf{A} := \langle \mathbf{A}^{\mathrm{T}} \mathbf{u}, \mathbf{v}\rangle = \langle \mathbf{A} \mathbf{u}, \mathbf{v}\rangle = \langle \mathbf{u}, \mathbf{A}\mathbf{v} \rangle = \mathbf{u}^{\mathrm{T}} \mathbf{A} \mathbf{v}.

So, two vectors are conjugate if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: if u is conjugate to v, then v is conjugate to u. (Note: This notion of conjugate is not related to the notion of complex conjugate.)
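The definition can be checked numerically. A small illustrative sketch (the matrix and vectors below are example values, not taken from the article): given u, any vector perpendicular to Au is conjugate to u.

```python
import numpy as np

# A small symmetric positive-definite matrix (illustrative values).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Fix a non-zero u, then pick v orthogonal to A u, so that u^T A v = 0.
u = np.array([1.0, 0.0])
Au = A @ u
v = np.array([-Au[1], Au[0]])   # perpendicular to A u

print(u @ A @ v)   # 0.0: u and v are conjugate with respect to A
print(v @ A @ u)   # 0.0: conjugacy is symmetric because A is symmetric
```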

Suppose that {pk} is a sequence of n mutually conjugate directions. Then the pk form a basis of R^n, so we can expand the solution x* of Ax = b in this basis:

 \mathbf{x}_* = \sum^{n}_{i=1} \alpha_i \mathbf{p}_i

The coefficients are given by

 \mathbf{A}\mathbf{x}_* = \sum^{n}_{i=1} \alpha_i \mathbf{A} \mathbf{p}_i = \mathbf{b}.
 \mathbf{p}_k^{\mathrm{T}} \mathbf{A}\mathbf{x}_* = \sum^{n}_{i=1} \alpha_i\mathbf{p}_k^{\mathrm{T}} \mathbf{A} \mathbf{p}_i= \mathbf{p}_k^{\mathrm{T}} \mathbf{b}.
 \alpha_k = \frac{\mathbf{p}_k^{\mathrm{T}} \mathbf{b}}{\mathbf{p}_k^{\mathrm{T}} \mathbf{A} \mathbf{p}_k} = \frac{\langle \mathbf{p}_k, \mathbf{b}\rangle}{\,\,\,\langle \mathbf{p}_k, \mathbf{p}_k\rangle_\mathbf{A}} = \frac{\langle \mathbf{p}_k, \mathbf{b}\rangle}{\,\,\,\|\mathbf{p}_k\|_\mathbf{A}^2}.

This result is perhaps most transparent by considering the inner product defined above.

This gives the following method for solving the equation Ax = b. We first find a sequence of n conjugate directions and then we compute the coefficients αk.
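As a concrete sketch of the direct method: for a symmetric positive-definite A, the eigenvectors are mutually A-conjugate (they are orthogonal and satisfy A p = λ p), so they serve as a ready-made set of n conjugate directions. The matrix and right-hand side below are illustrative values.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

_, P = np.linalg.eigh(A)        # columns of P are mutually conjugate directions

# alpha_k = <p_k, b> / <p_k, p_k>_A, then x* = sum_k alpha_k p_k
x = np.zeros_like(b)
for k in range(A.shape[0]):
    p = P[:, k]
    alpha = (p @ b) / (p @ A @ p)
    x += alpha * p

print(np.allclose(A @ x, b))    # True: x solves A x = b
```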

The conjugate gradient method as an iterative method

If we choose the conjugate vectors pk carefully, then we may not need all of them to obtain a good approximation to the solution x*. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to solve systems where n is so large that the direct method would take too much time.

We denote the initial guess for x* by x0. We can assume without loss of generality that x0 = 0 (otherwise, consider the system Az = b − Ax0 instead). Note that the solution x* is also the unique minimizer of the quadratic form

 f(\mathbf{x}) = \frac12 \mathbf{x}^{\mathrm{T}} \mathbf{A}\mathbf{x} - \mathbf{b}^{\mathrm{T}} \mathbf{x} , \quad \mathbf{x}\in\mathbf{R}^n.

This suggests taking the first basis vector p1 to be the gradient of f at x = x0, which equals Ax0 − b or, since x0 = 0, simply −b. The other vectors in the basis will be conjugate to the gradient, hence the name conjugate gradient method.

Let rk be the residual at the kth step:

 \mathbf{r}_k = \mathbf{b} - \mathbf{Ax}_k. \,

Note that rk is the negative gradient of f at x = xk, so the gradient descent method would be to move in the direction rk. Here, we insist that the directions pk are conjugate to each other, so we take the direction closest to the gradient rk under the conjugacy constraint. This gives the following expression:

 \mathbf{p}_{k+1} = \mathbf{r}_k - \frac{\mathbf{p}_k^{\mathrm{T}} \mathbf{A} \mathbf{r}_k}{\mathbf{p}_k^{\mathrm{T}}\mathbf{A} \mathbf{p}_k} \mathbf{p}_k

(see the picture at the top of the article for the effect of the conjugacy constraint on convergence).
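The update above is one step of Gram-Schmidt in the A-inner product: it strips from the residual its component along pk, leaving a direction conjugate to pk. A quick numerical check (direction and residual values below are illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
p_k = np.array([1.0, 2.0])      # current direction (illustrative)
r_k = np.array([0.5, -1.0])     # current residual (illustrative)

# p_{k+1} = r_k - (p_k^T A r_k / p_k^T A p_k) p_k
p_next = r_k - (p_k @ A @ r_k) / (p_k @ A @ p_k) * p_k

print(p_k @ A @ p_next)   # ~0: the new direction is conjugate to p_k
```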

The resulting algorithm

After some simplifications, this results in the following algorithm for solving Ax = b where A is a real, symmetric, positive-definite matrix. The input vector x0 can be an approximate initial solution or 0.

r_0 := b - A x_0
p_0 := r_0
k := 0
repeat
    \alpha_k := \frac{r_k^\top r_k}{p_k^\top A p_k}
    x_{k+1} := x_k + \alpha_k p_k
    r_{k+1} := r_k - \alpha_k A p_k
    if r_{k+1} is "sufficiently small" then exit loop end if
    \beta_k := \frac{r_{k+1}^\top r_{k+1}}{r_k^\top r_k}
    p_{k+1} := r_{k+1} + \beta_k p_k
    k := k + 1
end repeat
The result is x_{k+1}
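The pseudocode above translates directly into, for example, NumPy. This is an illustrative sketch (the function name and defaults are our own, not part of the original algorithm statement):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A, following the pseudocode."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    max_iter = n if max_iter is None else max_iter
    r = b - A @ x          # r_0 := b - A x_0
    p = r.copy()           # p_0 := r_0
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # alpha_k := r_k^T r_k / p_k^T A p_k
        x += alpha * p              # x_{k+1} := x_k + alpha_k p_k
        r -= alpha * Ap             # r_{k+1} := r_k - alpha_k A p_k
        rs_next = r @ r
        if np.sqrt(rs_next) < tol:  # r_{k+1} sufficiently small?
            break
        beta = rs_next / rs         # beta_k := r_{k+1}^T r_{k+1} / r_k^T r_k
        p = r + beta * p            # p_{k+1} := r_{k+1} + beta_k p_k
        rs = rs_next
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))   # True
```

In exact arithmetic the loop terminates after at most n iterations, which is why n is a natural default for the iteration cap.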

Example of conjugate gradient method for Octave

function [x] = conjgrad(A, b, x0)
    % Conjugate gradient for symmetric positive-definite A.
    r = b - A*x0;           % initial residual
    w = -r;                 % search direction
    z = A*w;
    a = (r'*w)/(w'*z);      % step length
    x = x0 + a*w;
    B = 0;
    for i = 1:size(A)(1)
        r = r - a*z;        % update residual
        if (norm(r) < 1e-10)
            break;
        endif
        B = (r'*z)/(w'*z);  % conjugation coefficient
        w = -r + B*w;       % new direction, conjugate to the previous one
        z = A*w;
        a = (r'*w)/(w'*z);
        x = x + a*w;
    end
end

Preconditioner

A preconditioner is a matrix P such that P^{-1}A has a smaller condition number κ than A, so that solving P^{-1}Ax = P^{-1}b converges faster than solving Ax = b (see preconditioned conjugate gradient method).
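One common and simple choice is the Jacobi preconditioner P = diag(A), whose inverse is applied elementwise. The sketch below (function name and defaults are our own) shows the preconditioned iteration under that assumption:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=None):
    """Preconditioned CG with the Jacobi preconditioner P = diag(A)."""
    n = b.shape[0]
    max_iter = n if max_iter is None else max_iter
    Pinv = 1.0 / np.diag(A)        # applying P^{-1} is elementwise here
    x = np.zeros(n)
    r = b.copy()                   # residual for x_0 = 0
    z = Pinv * r                   # z_0 := P^{-1} r_0
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Pinv * r               # precondition the new residual
        rz_next = r @ z
        beta = rz_next / rz
        p = z + beta * p
        rz = rz_next
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg_jacobi(A, b)
print(np.allclose(A @ x, b))   # True
```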

Conjugate gradient on the normal equations

The conjugate gradient method can be applied to an arbitrary n-by-m matrix by applying it to the normal equations A^T A and right-hand side vector A^T b, since A^T A is a symmetric positive (semi-)definite matrix for any A. The result is conjugate gradient on the normal equations (CGNR).

A^T A x = A^T b

As an iterative method, it is not necessary to form A^T A explicitly in memory but only to perform the matrix-vector and transpose matrix-vector multiplications. Therefore, CGNR is particularly useful when A is a sparse matrix, since these operations are usually extremely efficient. However, the downside of forming the normal equations is that the condition number κ(A^T A) is equal to κ(A)^2, so the rate of convergence of CGNR may be slow. Finding a good preconditioner is often an important part of using the CGNR method.
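The matvec-only structure can be sketched as follows: the term p^T (A^T A) p is computed as ||A p||^2, and A^T A is never formed. The function name and test system below are illustrative, not from the original.

```python
import numpy as np

def cgnr(A, b, tol=1e-10, max_iter=None):
    """CG on the normal equations A^T A x = A^T b,
    using only products with A and A^T."""
    m = A.shape[1]
    max_iter = m if max_iter is None else max_iter
    x = np.zeros(m)
    r = A.T @ (b - A @ x)       # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (Ap @ Ap)  # p^T A^T A p computed as ||A p||^2
        x += alpha * p
        r -= alpha * (A.T @ Ap)
        rs_next = r @ r
        if np.sqrt(rs_next) < tol:
            break
        beta = rs_next / rs
        p = r + beta * p
        rs = rs_next
    return x

# Overdetermined system: x is the least-squares solution.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
x = cgnr(A, b)
print(np.allclose(A.T @ A @ x, A.T @ b))   # True
```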

Several algorithms have been proposed (e.g., CGLS, LSQR). The LSQR algorithm purportedly has the best numerical stability when A is ill-conditioned, i.e., A has a large condition number.

References

The conjugate gradient method was originally proposed in

Descriptions of the method can be found in the following text books:

  • Kendell A. Atkinson (1988), An introduction to numerical analysis (2nd ed.), Section 8.9, John Wiley and Sons. ISBN 0-471-50023-2.
  • Mordecai Avriel (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 0-486-43227-0.
  • Gene H. Golub and Charles F. Van Loan, Matrix computations (3rd ed.), Chapter 10, Johns Hopkins University Press. ISBN 0-8018-5414-8.
