Numerical Analysis Homework (Chapter 3): code and handwritten calculations for the Jacobi iteration, Gauss-Seidel iteration, SOR, steepest descent, and conjugate gradient methods

Numerical experiment homework for Chapter 3 of 《数值计算方法》 (丁丽娟), implemented in MATLAB.

Written homework, P87: 2, 5, 6, 7, 9, 12
Numerical experiments, P89: 1, 2

Keywords: Jacobi iteration, Gauss-Seidel iteration, SOR, steepest descent, conjugate gradient (CG), convergence criteria

Recommended references
共轭梯度法通俗讲义 (a plain-language tutorial on the conjugate gradient method)
最速下降法、牛顿法、共轭梯度法原理及对比 (steepest descent, Newton's method, and conjugate gradient: principles and comparison)
机器学习|共轭梯度法 (Machine Learning: the conjugate gradient method)
Video: 数值分析6(3共轭梯度法)苏州大学 (Numerical Analysis 6, Part 3: the conjugate gradient method, Soochow University)

Numerical Experiments Homework (Chapter 3)

Code repository: https://github.com/sylvanding/bit-numerical-analysis-hw

P89. 1

Use (1) the Jacobi iteration method, (2) the Gauss-Seidel iteration method, and (3) the conjugate gradient method to solve the linear system

$$
\left[\begin{array}{rrrrr}
10 & 1 & 2 & 3 & 4 \\
1 & 9 & -1 & 2 & -3 \\
2 & -1 & 7 & 3 & -5 \\
3 & 2 & 3 & 12 & -1 \\
4 & -3 & -5 & -1 & 15
\end{array}\right]
\left[\begin{array}{c}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5
\end{array}\right]
=
\left[\begin{array}{r}
12 \\ -27 \\ 14 \\ -17 \\ 12
\end{array}\right]
$$

Take the initial vector $\boldsymbol{x}^{(0)}=(0,0,0,0,0)^{\top}$.

Experiment content, steps, and results

chap-3\jacobi.m:

function [x, iter, last_error] = jacobi(A, b, max_iter, x0, tol)
    % Jacobi Iteration Method
    % Inputs:
    % A - Coefficient matrix
    % b - Right-hand side vector
    % max_iter - Maximum number of iterations
    % x0 - Initial guess (optional)
    % tol - Tolerance for convergence (optional)
    % Outputs:
    % x - Solution vector
    % iter - Number of iterations performed
    % last_error - Error of the last iteration

    % Set default initial guess
    if nargin < 4 || isempty(x0)
        x0 = zeros(size(b));
    end

    % Set default tolerance
    if nargin < 5 || isempty(tol)
        tol = 0;
    end

    % % disp the inputs
    % disp('--- Jacobi Iteration Method Conf. Start ---');
    % disp('Coefficient matrix A:');
    % disp(A);
    % disp('Right-hand side vector b:');
    % disp(b);
    % disp('Maximum number of iterations:');
    % disp(max_iter);
    % disp('Initial guess x0:');
    % disp(x0);
    % disp('Tolerance for convergence:');
    % disp(tol);
    % disp('--- Jacobi Iteration Method Conf. End ---');

    % Initial guess
    x = x0;
    n = length(b);
    iter = 0;
    last_error = inf;
    
    % Iterate
    for k = 1:max_iter
        x_new = zeros(size(x));
        for i = 1:n
            s = 0; % accumulate the off-diagonal terms (avoid shadowing the built-in sum)
            for j = 1:n
                if j ~= i
                    s = s + A(i, j) * x(j);
                end
            end
            x_new(i) = (b(i) - s) / A(i, i);
        end
        
        % Check for convergence
        last_error = norm(x_new - x, inf);
        if tol > 0
            if last_error < tol
                x = x_new;
                iter = k;
                return;
            end
        end
        
        x = x_new;
        iter = k;
    end
    
    warning('Jacobi method did not converge within the maximum number of iterations');
end
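For reference, the component-wise loop above is equivalent to the matrix splitting form $\boldsymbol{x}^{(k+1)} = D^{-1}\big(\boldsymbol{b} - (L+U)\boldsymbol{x}^{(k)}\big)$ with $A = D + L + U$. A minimal vectorized sketch of one Jacobi sweep (illustrative only, assuming A, b, x are already defined and diag(A) has no zeros):

% One vectorized Jacobi sweep (illustrative sketch, not the homework implementation above).
D = diag(diag(A));          % diagonal part of A
R = A - D;                  % off-diagonal part (L + U)
x_new = D \ (b - R * x);    % x^{(k+1)} = D^{-1} (b - (L+U) x^{(k)})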

chap-3\gauss_seidel.m:

function [x, iter, last_error] = gauss_seidel(A, b, max_iter, x0, tol)
    % Gauss-Seidel Iteration Method
    % Inputs:
    % A - Coefficient matrix
    % b - Right-hand side vector
    % max_iter - Maximum number of iterations
    % x0 - Initial guess (optional)
    % tol - Tolerance for convergence (optional)
    % Outputs:
    % x - Solution vector
    % iter - Number of iterations performed
    % last_error - Error of the last iteration

    % Set default initial guess
    if nargin < 4 || isempty(x0)
        x0 = zeros(size(b));
    end

    % Set default tolerance
    if nargin < 5 || isempty(tol)
        tol = 0;
    end

    % % disp the inputs
    % disp('--- Gauss-Seidel Iteration Method Conf. Start ---');
    % disp('Coefficient matrix A:');
    % disp(A);
    % disp('Right-hand side vector b:');
    % disp(b);
    % disp('Maximum number of iterations:');
    % disp(max_iter);
    % disp('Initial guess x0:');
    % disp(x0);
    % disp('Tolerance for convergence:');
    % disp(tol);
    % disp('--- Gauss-Seidel Iteration Method Conf. End ---');

    % Initial guess
    x = x0;
    n = length(b);
    iter = 0;
    last_error = inf;
    
    % Iterate
    for k = 1:max_iter
        x_old = x;
        for i = 1:n
            s = 0; % accumulate the off-diagonal terms (avoid shadowing the built-in sum)
            for j = 1:n
                if j ~= i
                    s = s + A(i, j) * x(j);
                end
            end
            x(i) = (b(i) - s) / A(i, i);
        end
        
        % Check for convergence
        last_error = norm(x - x_old, inf);
        if tol > 0
            if last_error < tol
                iter = k;
                return;
            end
        end
        
        iter = k;
    end
    
    warning('Gauss-Seidel method did not converge within the maximum number of iterations');
end
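Similarly, one Gauss-Seidel sweep corresponds to the matrix form $\boldsymbol{x}^{(k+1)} = (D+L)^{-1}\big(\boldsymbol{b} - U\boldsymbol{x}^{(k)}\big)$. A vectorized sketch (illustrative only, same assumptions as the Jacobi sketch above):

% One Gauss-Seidel sweep in matrix form (illustrative sketch).
D  = diag(diag(A));
Lp = tril(A, -1);                  % strictly lower triangular part
Up = triu(A, 1);                   % strictly upper triangular part
x_new = (D + Lp) \ (b - Up * x);   % x^{(k+1)} = (D+L)^{-1} (b - U x^{(k)})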

chap-3\cg.m:

function [x, iter, last_error] = cg(A, b, max_iter, x0, tol)
    % Conjugate Gradient Method
    % Inputs:
    % A - Coefficient matrix (must be symmetric positive definite)
    % b - Right-hand side vector
    % max_iter - Maximum number of iterations
    % x0 - Initial guess (optional)
    % tol - Tolerance for convergence (optional)
    % Outputs:
    % x - Solution vector
    % iter - Number of iterations performed
    % last_error - Error of the last iteration

    % Set default initial guess
    if nargin < 4 || isempty(x0)
        x0 = zeros(size(b));
    end

    % Set default tolerance
    if nargin < 5 || isempty(tol)
        tol = 0;
    end

    % % disp the inputs
    % disp('--- Conjugate Gradient Method Conf. Start ---');
    % disp('Coefficient matrix A:');
    % disp(A);
    % disp('Right-hand side vector b:');
    % disp(b);
    % disp('Maximum number of iterations:');
    % disp(max_iter);
    % disp('Initial guess x0:');
    % disp(x0);
    % disp('Tolerance for convergence:');
    % disp(tol);
    % disp('--- Conjugate Gradient Method Conf. End ---');

    % Initial guess
    x = x0;
    r = b - A * x;
    p = r;
    rsold = r' * r;
    iter = 0;
    last_error = inf;

    for k = 1:max_iter
        Ap = A * p;
        alpha = rsold / (p' * Ap);
        x = x + alpha * p;
        r = r - alpha * Ap; % r = b - A * x
        rsnew = r' * r;
        last_error = sqrt(rsnew);

        % Check for convergence
        if tol > 0
            if last_error < tol
                iter = k;
                return;
            end
        end

        p = r + (rsnew / rsold) * p;
        rsold = rsnew;
        iter = k;
    end

    warning('Conjugate Gradient method did not converge within the maximum number of iterations');
end
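For reference, each pass of the loop above realizes the standard CG recurrences

$$
\alpha_k=\frac{r_k^{\top}r_k}{p_k^{\top}Ap_k},\quad
x_{k+1}=x_k+\alpha_k p_k,\quad
r_{k+1}=r_k-\alpha_k Ap_k,\quad
\beta_k=\frac{r_{k+1}^{\top}r_{k+1}}{r_k^{\top}r_k},\quad
p_{k+1}=r_{k+1}+\beta_k p_k,
$$

and last_error is the residual norm $\|r_{k+1}\|_2 = \sqrt{\texttt{rsnew}}$.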

chap-3\P89_Q1.m:

% Define the coefficient matrix A and the right-hand side vector b
A = [10, 1, 2, 3, 4; 1, 9, -1, 2, -3; 2, -1, 7, 3, -5; 3, 2, 3, 12, -1; 4, -3, -5, -1, 15];
b = [12; -27; 14; -17; 12];

max_iter = inf;
x0 = zeros(size(b));
epsilon = 1e-16;

% Call the Jacobi function
[x, iter, last_error] = jacobi(A, b, max_iter, x0, epsilon);
disp('--- Jacobi Iteration Method Outp. Start ---');
disp('Solution vector x:');
disp(x);
disp('Number of iterations:');
disp(iter);
disp('Error of the last iteration:');
disp(last_error);
disp('--- Jacobi Iteration Method Outp. End ---');

% Call the Gauss-Seidel function
[x, iter, last_error] = gauss_seidel(A, b, max_iter, x0, epsilon);
disp('--- Gauss-Seidel Iteration Method Outp. Start ---');
disp('Solution vector x:');
disp(x);
disp('Number of iterations:');
disp(iter);
disp('Error of the last iteration:');
disp(last_error);
disp('--- Gauss-Seidel Iteration Method Outp. End ---');

% Call the CG function
[x, iter, last_error] = cg(A, b, max_iter, x0, epsilon);
disp('--- CG Method Outp. Start ---');
disp('Solution vector x:');
disp(x);
disp('Number of iterations:');
disp(iter);
disp('Error of the last iteration:');
disp(last_error);
disp('--- CG Method Outp. End ---');

Analysis of results

>> P89_Q1
Warning: Too many FOR loop iterations. Stopping after 9223372036854775806 iterations.
> In jacobi (line 45)
In P89_Q1 (line 10)

--- Jacobi Iteration Method Outp. Start ---
Solution vector x:
     1
    -2
     3
    -2
     1

Number of iterations:
   190

Error of the last iteration:
     0

--- Jacobi Iteration Method Outp. End ---
Warning: Too many FOR loop iterations. Stopping after 9223372036854775806 iterations.
> In gauss_seidel (line 45)
In P89_Q1 (line 21)

--- Gauss-Seidel Iteration Method Outp. Start ---
Solution vector x:
     1
    -2
     3
    -2
     1

Number of iterations:
    99

Error of the last iteration:
     0

--- Gauss-Seidel Iteration Method Outp. End ---
Warning: Too many FOR loop iterations. Stopping after 9223372036854775806 iterations.
> In cg (line 46)
In P89_Q1 (line 32)

--- CG Method Outp. Start ---
Solution vector x:
    1.0000
   -2.0000
    3.0000
   -2.0000
    1.0000

Number of iterations:
     9

Error of the last iteration:
   2.4446e-17

--- CG Method Outp. End ---

At each step CG searches along an A-conjugate direction p, so it converges fastest (9 iterations); Gauss-Seidel comes next (99 iterations), and the Jacobi iteration is slowest (190 iterations).
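This ordering matches the spectral radii of the iteration matrices: the smaller $\rho(M)$, the faster the asymptotic linear convergence. A quick check for the P89.1 matrix (a sketch reusing A from P89_Q1.m; MJ and MGS are illustrative names):

% Spectral radii of the Jacobi and Gauss-Seidel iteration matrices (illustrative check).
D   = diag(diag(A));
Lp  = tril(A, -1);
Up  = triu(A, 1);
MJ  = -(D \ (Lp + Up));          % Jacobi iteration matrix:        -D^{-1}(L+U)
MGS = -((D + Lp) \ Up);          % Gauss-Seidel iteration matrix:  -(D+L)^{-1}U
fprintf('rho(Jacobi) = %.4f, rho(Gauss-Seidel) = %.4f\n', ...
        max(abs(eig(MJ))), max(abs(eig(MGS))));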


The conjugate gradient method (CG) was originally proposed for solving linear systems $\mathbf{A}\mathbf{x}=\mathbf{b}$ with large, sparse, symmetric positive definite coefficient matrices. Although the method is usually presented as an optimization technique for quadratic functions, its basic idea and procedure carry over directly to linear systems, because solving the system and minimizing the quadratic are naturally equivalent.

Concretely, this equivalence can be understood from two sides (a small numerical check follows the list):

  1. Quadratic optimization:
    Suppose we minimize the quadratic $f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^{\top}\mathbf{A}\mathbf{x} - \mathbf{b}^{\top}\mathbf{x}$, where $\mathbf{A}$ is symmetric positive definite. Its gradient is $\nabla f(\mathbf{x}) = \mathbf{A}\mathbf{x} - \mathbf{b}$, and at the minimizer the gradient vanishes, i.e. $\mathbf{A}\mathbf{x} - \mathbf{b} = \mathbf{0}$, which is exactly the linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$.

  2. Solving the linear system:
    Conversely, for a symmetric positive definite matrix $\mathbf{A}$ and a vector $\mathbf{b}$ we can define the quadratic $f(\mathbf{x})$ above. CG minimizes $f(\mathbf{x})$ by an iterative optimization process, and the minimizer it approaches satisfies $\mathbf{A}\mathbf{x} - \mathbf{b} = \mathbf{0}$, i.e. it is the solution of $\mathbf{A}\mathbf{x} = \mathbf{b}$.
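As a small numerical check of this equivalence for the P89.1 system (a sketch reusing cg.m and the A, b defined in P89_Q1.m; f and grad are illustrative handles, not part of the assignment):

% Check that the CG solution (numerically) zeroes the gradient of the quadratic.
f    = @(x) 0.5 * x' * A * x - b' * x;    % quadratic objective f(x)
grad = @(x) A * x - b;                    % gradient of f: A*x - b
[x_cg, ~, ~] = cg(A, b, 100, zeros(size(b)), 1e-12);
fprintf('||A*x_cg - b|| = %.2e, f(x_cg) = %.6f\n', norm(grad(x_cg)), f(x_cg));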

P89. 2

Storing only the nonzero entries of the coefficient matrix $\boldsymbol{A}=\left(a_{ij}\right)_{n \times n}$, use the conjugate gradient method to solve the system $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$ of order $n=10^{5}$, where

$$
a_{ij}=\begin{cases}
3, & i=j \\
-1, & |i-j|=1 \\
\frac{1}{2}, & i+j=n+1,\ i\neq\frac{n}{2},\frac{n}{2}+1 \\
0, & \text{otherwise}
\end{cases}
\qquad (i,j=1,2,\cdots,n)
$$

$\boldsymbol{b}=(2.5,\,1.5,\,\cdots,\,1.5,\,1.0,\,1.0,\,1.5,\,\cdots,\,1.5,\,2.5)^{\mathrm{T}}$.

Experiment content, steps, and results

chap-3\P89_Q2.m:

n = 10 ^ 5;
disp(['n: ', num2str(n)]);

% Initialize matrix A with zeros
if n <= 10
    A = zeros(n, n);
else
    A = sparse(n, n);
end

% Fill the diagonal with 3
for i = 1:n
    A(i, i) = 3;
end

% Fill the sub-diagonal and super-diagonal with -1
for i = 1:n - 1
    A(i, i + 1) = -1;
    A(i + 1, i) = -1;
end

% Fill the anti-diagonal with 1/2, except for the middle elements
for i = 1:n
    j = n + 1 - i;

    if i ~= n / 2 && i ~= n / 2 + 1
        A(i, j) = 0.5;
    end

end

if n <= 10
    disp('A');
    disp(A);
end

% Initialize vector b
b = ones(n, 1) * 1.5;
b(1) = 2.5;
b(n) = 2.5;
b(floor(n / 2)) = 1.0;
b(floor(n / 2) + 1) = 1.0;

if n <= 10
    disp('b');
    disp(b);
end

max_iter = inf;
x0 = zeros(size(b));
epsilon = 1e-16;

% Call the CG function
[x, iter, last_error] = cg(A, b, max_iter, x0, epsilon);
disp('--- CG Method Outp. Start ---');
if n <= 10
    disp('Solution vector x:');
    disp(x);
else
    disp('Solution vector x: (only the first 10 elements)');
    disp(x(1:10));
end
disp('Number of iterations:');
disp(iter);
disp('Error of the last iteration:');
disp(last_error);
disp('--- CG Method Outp. End ---');
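Filling a sparse matrix entry by entry inside for-loops repeatedly reallocates its internal storage; for $n=10^5$ it is usually much faster to build the matrix from its diagonals or from index/value triplets. A possible alternative construction of the same A (a sketch, assuming n is even as in the exercise; A2 and i_anti are illustrative names):

% Vectorized sparse construction of the same matrix (illustrative sketch).
e  = ones(n, 1);
A2 = spdiags([-e, 3 * e, -e], [-1, 0, 1], n, n);   % tridiagonal part: 3 on the diagonal, -1 beside it
i_anti = setdiff((1:n)', [n / 2; n / 2 + 1]);      % anti-diagonal rows, skipping the two middle ones
A2 = A2 + sparse(i_anti, n + 1 - i_anti, 0.5 * ones(size(i_anti)), n, n);
% norm(A - A2, 'fro') should return 0 when run after the loop-based construction above.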

Analysis of results

>> P89_Q2
n: 10
A
    3.0000   -1.0000         0         0         0         0         0         0         0    0.5000
   -1.0000    3.0000   -1.0000         0         0         0         0         0    0.5000         0
         0   -1.0000    3.0000   -1.0000         0         0         0    0.5000         0         0
         0         0   -1.0000    3.0000   -1.0000         0    0.5000         0         0         0
         0         0         0   -1.0000    3.0000   -1.0000         0         0         0         0
         0         0         0         0   -1.0000    3.0000   -1.0000         0         0         0
         0         0         0    0.5000         0   -1.0000    3.0000   -1.0000         0         0
         0         0    0.5000         0         0         0   -1.0000    3.0000   -1.0000         0
         0    0.5000         0         0         0         0         0   -1.0000    3.0000   -1.0000
    0.5000         0         0         0         0         0         0         0   -1.0000    3.0000


b
    2.5000
    1.5000
    1.5000
    1.5000
    1.0000
    1.0000
    1.5000
    1.5000
    1.5000
    2.5000


--- CG Method Outp. Start ---
Solution vector x:
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000


Number of iterations:
     5


Error of the last iteration:
   2.3716e-17


--- CG Method Outp. End ---

>> P89_Q2
n: 100000
Warning: Too many FOR loop iterations. Stopping after 9223372036854775806 iterations.
> In cg (line 46)
In P89_Q2 (line 54)


--- CG Method Outp. Start ---
Solution vector x: (only the first 10 elements)
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000
    1.0000


Number of iterations:
    35


Error of the last iteration:
   4.3128e-17


--- CG Method Outp. End ---

Although the order of the matrix A is very large, the conjugate gradient method converges within a small finite set of A-conjugate search directions (35 iterations).
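The fast convergence is consistent with the classical CG error bound $\|x_*-x_k\|_A\le 2\big(\tfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\big)^{k}\|x_*-x_0\|_A$, which depends on the condition number $\kappa(A)$ rather than on $n$. A rough check (a sketch; condest estimates the 1-norm condition number, which only approximates the 2-norm $\kappa$ appearing in the bound):

% Rough conditioning check for the sparse system (illustrative sketch).
kappa_est = condest(A);                                 % 1-norm condition number estimate
rate = (sqrt(kappa_est) - 1) / (sqrt(kappa_est) + 1);   % per-iteration factor in the CG bound
fprintf('condest(A) = %.2f, CG error-bound rate = %.4f\n', kappa_est, rate);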

Supplementary code: P87, Q2 and Q7

chap-3\sor.m:

function [x, iter, last_error] = sor(A, b, omega, max_iter, x0, tol)
    % Successive Over-Relaxation (SOR) Iteration Method
    % Inputs:
    % A - Coefficient matrix
    % b - Right-hand side vector
    % omega - Relaxation factor
    % max_iter - Maximum number of iterations
    % x0 - Initial guess (optional)
    % tol - Tolerance for convergence (optional)
    % Outputs:
    % x - Solution vector
    % iter - Number of iterations performed
    % last_error - Error of the last iteration

    % Set default initial guess
    if nargin < 5 || isempty(x0)
        x0 = zeros(size(b));
    end

    % Set default tolerance
    if nargin < 6 || isempty(tol)
        tol = 0;
    end

    % % disp the inputs
    % disp('--- SOR Iteration Method Conf. Start ---');
    % disp('Coefficient matrix A:');
    % disp(A);
    % disp('Right-hand side vector b:');
    % disp(b);
    % disp('Relaxation factor omega:');
    % disp(omega);
    % disp('Maximum number of iterations:');
    % disp(max_iter);
    % disp('Initial guess x0:');
    % disp(x0);
    % disp('Tolerance for convergence:');
    % disp(tol);
    % disp('--- SOR Iteration Method Conf. End ---');

    % Initial guess
    x = x0;
    n = length(b);
    iter = 0;
    last_error = inf;
    
    % Iterate
    for k = 1:max_iter
        x_old = x;
        for i = 1:n
            sum1 = 0;
            sum2 = 0;
            for j = 1:i-1
                sum1 = sum1 + A(i, j) * x(j);
            end
            for j = i+1:n
                sum2 = sum2 + A(i, j) * x_old(j);
            end
            x(i) = (1 - omega) * x_old(i) + omega * (b(i) - sum1 - sum2) / A(i, i);
        end
        
        % Check for convergence
        last_error = norm(x - x_old, inf);
        if tol > 0
            if last_error < tol
                iter = k;
                return;
            end
        end
        
        iter = k;
    end
    
    warning('SOR method did not converge within the maximum number of iterations');
end
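The inner loop implements the component-wise SOR update

$$
x_i^{(k+1)}=(1-\omega)\,x_i^{(k)}+\frac{\omega}{a_{ii}}\Big(b_i-\sum_{j<i}a_{ij}x_j^{(k+1)}-\sum_{j>i}a_{ij}x_j^{(k)}\Big),\qquad i=1,\dots,n,
$$

which reduces to the Gauss-Seidel iteration for $\omega=1$.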

chap-3\P87_Q2.m:

% Define the coefficient matrix A and the right-hand side vector b
A = [2, -1, 0, 0; -1, 2, -1, 0; 0, -1, 2, -1; 0, 0, -1, 2];
b = [1; 0; 1; 0];

% Call the Jacobi function
for max_iter = [1, 2, 3, inf]
    [x, iter, last_error] = jacobi(A, b, max_iter, ones(size(b)), 1e-3);
    disp('--- Jacobi Iteration Method Outp. Start ---');
    disp('Solution vector x:');
    disp(x);
    disp('Number of iterations:');
    disp(iter);
    disp('Error of the last iteration:');
    disp(last_error);
    disp('--- Jacobi Iteration Method Outp. End ---');
end

% Call the Gauss-Seidel function
for max_iter = [1, 2, 3, inf]
    [x, iter, last_error] = gauss_seidel(A, b, max_iter, ones(size(b)), 1e-3);
    disp('--- Gauss-Seidel Iteration Method Outp. Start ---');
    disp('Solution vector x:');
    disp(x);
    disp('Number of iterations:');
    disp(iter);
    disp('Error of the last iteration:');
    disp(last_error);
    disp('--- Gauss-Seidel Iteration Method Outp. End ---');
end

% Call the SOR function
for max_iter = [1, 2, 3, inf]
    [x, iter, last_error] = sor(A, b, 1.46, max_iter, ones(size(b)), 1e-3);
    disp('--- SOR Iteration Method Outp. Start ---');
    disp('Solution vector x:');
    disp(x);
    disp('Number of iterations:');
    disp(iter);
    disp('Error of the last iteration:');
    disp(last_error);
    disp('--- SOR Iteration Method Outp. End ---');
end
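The SOR runs above use $\omega=1.46$, presumably as specified in the exercise. For comparison, for a consistently ordered matrix whose Jacobi iteration matrix has real eigenvalues, the theoretically optimal relaxation factor is $\omega_{\mathrm{opt}} = 2/\big(1+\sqrt{1-\rho_J^2}\big)$; it can be checked numerically with a small sketch reusing A from the script above:

% Optimal SOR factor from the Jacobi spectral radius (illustrative sketch;
% the formula assumes a consistently ordered matrix with real Jacobi eigenvalues).
D = diag(diag(A));
BJ = eye(size(A)) - D \ A;                 % Jacobi iteration matrix
rho_J = max(abs(eig(BJ)));
omega_opt = 2 / (1 + sqrt(1 - rho_J ^ 2));
fprintf('rho_J = %.4f, omega_opt = %.4f\n', rho_J, omega_opt);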

chap-3\P87_Q7.m:

D = diag([10, 10, 10]);
L = [0, 0, 0; 4, 0, 0; 4, 8, 0];
U = L';
b = [13; 11; 25];
I = eye(size(D));
A = D + L + U;

% Jacobi

M = I - D \ A;
g = D \ b;

disp('--- Jacobi M & g ---');
disp(rats(M));
disp(rats(g));
eigenvalues = eig(M);
spectral_radius = max(abs(eigenvalues));
disp(spectral_radius);

% Gauss-Seidel
M = -(D + L) \ U;
g = (D + L) \ b;

disp('--- Gauss-Seidel M & g ---');
disp(rats(M));
disp(rats(g));
eigenvalues = eig(M);
spectral_radius = max(abs(eigenvalues));
disp(spectral_radius);

% SOR
omega = 1.35;
M = (D + omega * L) \ ((1 - omega) * D - omega * U); % SOR iteration matrix: (D + wL)^{-1}((1-w)D - wU)
g = omega * ((D + omega * L) \ b); % parenthesized so that g = w*(D + wL)^{-1}*b (* and \ share precedence)

disp('--- SOR M & g ---');
disp(rats(M));
disp(rats(g));
eigenvalues = eig(M);
spectral_radius = max(abs(eigenvalues));
disp(spectral_radius);
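The title also lists the steepest descent method (最速梯度下降法), for which no listing appears above. A minimal sketch for symmetric positive definite systems, written in the same style as cg.m (steepest_descent is a hypothetical helper name, not a file in the repository):

function [x, iter, last_error] = steepest_descent(A, b, max_iter, x0, tol)
    % Steepest Descent Method for SPD systems (illustrative sketch,
    % not part of the repository listings above).
    if nargin < 4 || isempty(x0), x0 = zeros(size(b)); end
    if nargin < 5 || isempty(tol), tol = 0; end
    x = x0;
    r = b - A * x;                      % residual = negative gradient of f(x)
    iter = 0;
    last_error = norm(r);
    for k = 1:max_iter
        Ar = A * r;
        alpha = (r' * r) / (r' * Ar);   % exact line search along the residual direction
        x = x + alpha * r;
        r = r - alpha * Ar;
        last_error = norm(r);
        iter = k;
        if tol > 0 && last_error < tol
            return;
        end
    end
    warning('Steepest descent did not converge within the maximum number of iterations');
end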

Handwritten Homework

Some of the written problems were computed with the code above.

(Images of the handwritten solutions were attached here.)
