Classical Gram-Schmidt and Modified Gram-Schmidt


1. Classical Gram-Schmidt (CGS)

For a vector set $\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\cdots,\boldsymbol{x}_{m}\}$, CGS computes an orthonormal set $\{\boldsymbol{q}_{1},\cdots,\boldsymbol{q}_{m}\}$ as follows: for $j=1,\cdots,m$,

$$\hat{\boldsymbol{q}}_{j}=\boldsymbol{x}_{j}-\sum_{k=1}^{j-1}(\boldsymbol{x}_{j},\boldsymbol{q}_{k})\boldsymbol{q}_{k},\qquad \boldsymbol{q}_{j}=\hat{\boldsymbol{q}}_{j}/\|\hat{\boldsymbol{q}}_{j}\|_{2}.$$
Two MATLAB implementations of this algorithm are given below. (The first was written by myself; the second was revised by my supervisor.)

  • CGS Algorithm implementation (v1)
function [Q] = my_cgs(A)
% A is an n-by-n matrix
% Q is a column-orthonormal matrix. If A is nonsingular, Q will be n-by-n.
V = zeros(size(A));
Q = zeros(size(A));
r = zeros(size(A));
V(:,1) = A(:,1);
Q(:,1) = V(:,1)/sqrt(V(:,1)'*V(:,1));
for j=2:size(A,2)
    proj_sum = zeros(size(A,1),1);
    V(:,j)=A(:,j);
    for k=1:j-1
        r(j,k)=V(:,j)'*Q(:,k);
        proj_sum = proj_sum  + r(j,k)*Q(:,k);
    end
    V(:,j)=V(:,j)-proj_sum;
    if norm(V(:,j)) < eps*norm(A(:,j)) % an exact zero test is unreliable in floating point
        break
    end
    Q(:,j)=V(:,j)/sqrt(V(:,j)'*V(:,j));
end
  • CGS Algorithm implementation (v2)
function [Q] = my_cgs_v2(A)
% A is an n-by-n matrix
% Q is a column-orthonormal matrix. If A is nonsingular, Q will be n-by-n.
Q = zeros(size(A));
Q(:,1) = A(:,1)/sqrt(A(:,1)'*A(:,1));
for j=2:size(A,2)
    v = A(:,j);
    r = v'*Q(:,1:j-1); % row vector of all projection coefficients at once
    q_j = v - Q(:,1:j-1)*r';
    if norm(q_j) < eps*norm(v) % an exact zero test is unreliable in floating point
        break
    end
    Q(:,j)=q_j/norm(q_j);
end
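For readers working outside MATLAB, the classical algorithm can also be sketched in NumPy. This is a hedged illustration, not part of the original post: the name `classical_gram_schmidt` is mine, and the structure mirrors `my_cgs_v2` above.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt on the columns of A.

    Each new column is projected against all previously computed q's
    in one shot, using the *original* column (the hallmark of CGS).
    """
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    Q = np.zeros((n, m))
    Q[:, 0] = A[:, 0] / np.linalg.norm(A[:, 0])
    for j in range(1, m):
        v = A[:, j]
        r = Q[:, :j].T @ v            # all projection coefficients at once
        q = v - Q[:, :j] @ r          # subtract the summed projection
        nq = np.linalg.norm(q)
        if nq < 1e-14 * np.linalg.norm(v):
            break                     # columns are numerically dependent
        Q[:, j] = q / nq
    return Q
```

For a nonsingular input, `Q.T @ Q` should be close to the identity matrix.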

2. Modified Gram-Schmidt (MGS)

For a vector set $\{\boldsymbol{x}_{1},\boldsymbol{x}_{2},\cdots,\boldsymbol{x}_{m}\}$, MGS produces the same $\boldsymbol{q}_{j}$ in exact arithmetic, but subtracts each projection immediately: for $j=1,\cdots,m$,

$$\hat{\boldsymbol{q}}_{j}^{(0)}=\boldsymbol{x}_{j};\qquad \hat{\boldsymbol{q}}_{j}^{(k)}=\hat{\boldsymbol{q}}_{j}^{(k-1)}-(\boldsymbol{q}_{k},\hat{\boldsymbol{q}}_{j}^{(k-1)})\boldsymbol{q}_{k},\quad k=1,\cdots,j-1;\qquad \boldsymbol{q}_{j}=\hat{\boldsymbol{q}}_{j}^{(j-1)}/\|\hat{\boldsymbol{q}}_{j}^{(j-1)}\|_{2}.$$
I also implemented the MGS algorithm in MATLAB.

  • MGS Algorithm implementation
function [Q] = my_mgs(A)
% A is an n-by-n matrix
% Q is a column-orthonormal matrix. If A is nonsingular, Q will be n-by-n.
V = zeros(size(A));
Q = zeros(size(A));
r = zeros(size(A));
V(:,1) = A(:,1);
Q(:,1) = V(:,1)/sqrt(V(:,1)'*V(:,1));

for j=2:size(A,2)
    V(:,j)=A(:,j);
    for k=1:j-1
        % unlike CGS, the coefficient is computed against the partially
        % orthogonalized V(:,j), and each projection is subtracted immediately
        r(j,k)=V(:,j)'*Q(:,k);
        V(:,j)=V(:,j)-r(j,k)*Q(:,k);
    end
    if norm(V(:,j)) < eps*norm(A(:,j)) % an exact zero test is unreliable in floating point
        break
    end
    Q(:,j)=V(:,j)/sqrt(V(:,j)'*V(:,j));
end
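The same loop translates directly to NumPy. Again, this is an illustrative sketch; the name `modified_gram_schmidt` is mine, not from the original post.

```python
import numpy as np

def modified_gram_schmidt(A):
    """Modified Gram-Schmidt on the columns of A.

    Each projection is subtracted immediately, so every later inner
    product sees the already partially orthogonalized vector.
    """
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    Q = np.zeros((n, m))
    for j in range(m):
        v = A[:, j].copy()
        for k in range(j):
            v -= (Q[:, k] @ v) * Q[:, k]   # project out q_k from the current v
        nv = np.linalg.norm(v)
        if nv < 1e-14 * np.linalg.norm(A[:, j]):
            break                          # columns are numerically dependent
        Q[:, j] = v / nv
    return Q
```

In exact arithmetic this returns the same Q as the classical routine; the two differ only in how rounding errors propagate, as discussed next.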

3. Differences in Error Accumulation and Elimination

Suppose an error is made in computing $\boldsymbol{q}_{2}$, so that $(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=\delta$ is small but nonzero.

  • In CGS, this error is never corrected in any of the computations that follow. For example,
    $$\hat{\boldsymbol{q}}_{3}=\boldsymbol{x}_{3}-(\boldsymbol{x}_{3},\boldsymbol{q}_{1})\boldsymbol{q}_{1}-(\boldsymbol{x}_{3},\boldsymbol{q}_{2})\boldsymbol{q}_{2},$$
    so that
    $$(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3})=(\boldsymbol{q}_{1},\boldsymbol{x}_{3})-(\boldsymbol{x}_{3},\boldsymbol{q}_{1})-(\boldsymbol{x}_{3},\boldsymbol{q}_{2})\delta=-(\boldsymbol{x}_{3},\boldsymbol{q}_{2})\delta,$$
    $$(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3})=(\boldsymbol{q}_{2},\boldsymbol{x}_{3})-(\boldsymbol{x}_{3},\boldsymbol{q}_{1})\delta-(\boldsymbol{x}_{3},\boldsymbol{q}_{2})=-(\boldsymbol{x}_{3},\boldsymbol{q}_{1})\delta;$$
    that is, $\hat{\boldsymbol{q}}_{3}$ is orthogonal to neither $\boldsymbol{q}_{1}$ nor $\boldsymbol{q}_{2}$.

  • In MGS, this error is eliminated in $(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3})$ and in all subsequent inner products. Here
    $$\begin{aligned} \hat{\boldsymbol{q}}_{3}^{(0)}&=\boldsymbol{x}_{3};\\ \hat{\boldsymbol{q}}_{3}^{(1)}&=\hat{\boldsymbol{q}}_{3}^{(0)}-(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(0)})\boldsymbol{q}_{1};\\ \hat{\boldsymbol{q}}_{3}&=\hat{\boldsymbol{q}}_{3}^{(1)}-(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(1)})\boldsymbol{q}_{2}. \end{aligned}$$

    • For $\boldsymbol{q}_{2}$,
      $$(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3})=(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(1)})-(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(1)})=0,$$
      so $\hat{\boldsymbol{q}}_{3}$ preserves orthogonality to $\boldsymbol{q}_{2}$.
    • For $\boldsymbol{q}_{1}$, using $(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(1)})=0$ and $(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(1)})=(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(0)})-(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(0)})\delta$,
      $$\begin{aligned} (\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3})&=(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(1)})-(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(1)})\delta\\ &=-\big((\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(0)})-(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(0)})\delta\big)\delta\\ &=-(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3}^{(0)})\delta+(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3}^{(0)})\delta^{2}. \end{aligned}$$
      Since $\delta$ is very small, $\delta^{2}$ is far smaller, so the error in $(\boldsymbol{q}_{1},\hat{\boldsymbol{q}}_{3})$ is no worse than in CGS, while MGS has eliminated the error in $(\boldsymbol{q}_{2},\hat{\boldsymbol{q}}_{3})$.
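The error analysis above can be checked numerically. Below is a NumPy sketch (the compact routine names `cgs` and `mgs` are mine, not from the MATLAB code earlier) that orthogonalizes the columns of an ill-conditioned Hilbert matrix with both methods and measures the loss of orthogonality $\|I-Q^{T}Q\|$:

```python
import numpy as np

def cgs(A):
    """Classical GS: coefficients computed against the original column."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j] - Q[:, :j] @ (Q[:, :j].T @ A[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs(A):
    """Modified GS: coefficients computed against the running residual."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].copy()
        for k in range(j):
            v -= (Q[:, k] @ v) * Q[:, k]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

n = 8
# Hilbert matrix: a classic ill-conditioned test case
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
err = lambda Q: np.linalg.norm(np.eye(n) - Q.T @ Q)
print(f"CGS loss of orthogonality: {err(cgs(H)):.2e}")
print(f"MGS loss of orthogonality: {err(mgs(H)):.2e}")
```

On this 8×8 Hilbert matrix, MGS typically retains noticeably more digits of orthogonality than CGS, consistent with the $\delta$-analysis above.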

