Deriving the Modified Gram-Schmidt QR Decomposition Algorithm from Gram-Schmidt Orthogonalisation (part 2)

All rights reserved. Please don't share this article without notifying me. Email address: westonhunter@zju.edu.cn

From eq. 10 in part 1 we can derive the Classical Gram-Schmidt algorithm, which is numerically unstable, i.e. sensitive to rounding errors and perturbations:
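For reference, these are the projection steps the routine below implements, written out explicitly (a standard restatement; the exact form of eq. 10 in part 1 may differ). For each column k:

r_{ik} = q_i^T a_k, \quad i = 1, \dots, k-1
\tilde{a}_k = a_k - \sum_{i=1}^{k-1} r_{ik} \, q_i
r_{kk} = \| \tilde{a}_k \|_2, \qquad q_k = \tilde{a}_k / r_{kk}

so that A = QR, with orthonormal columns in Q and an upper triangular R.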

import numpy as np

def classical_gram_schmidt(A):
    # Classical Gram-Schmidt QR: A is m x n with full column rank,
    # Q is m x n with (ideally) orthonormal columns, R is n x n upper triangular.
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        tmp = A[:, k].copy()                # start from the k-th column of A
        for i in range(k):
            R[i, k] = Q[:, i] @ A[:, k]     # project the ORIGINAL column a_k onto q_i
            tmp -= R[i, k] * Q[:, i]        # subtract the projection
        R[k, k] = np.linalg.norm(tmp)
        Q[:, k] = tmp / R[k, k]
    return Q, R

The Q computed by the Classical Gram-Schmidt algorithm can be far from orthogonal due to rounding errors. An example is given by X. Jiao [1].
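Below is a minimal numerical sketch of this effect, reusing the classical_gram_schmidt routine above. The matrix is a commonly used near-degenerate example in the spirit of [1] (not necessarily the exact one from the slides); eps is chosen near the square root of machine precision so that 1 + eps**2 rounds to 1 in double precision.

import numpy as np

eps = 1e-8                       # eps**2 ~ 1e-16 is lost when added to 1.0 in double precision
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

Q, R = classical_gram_schmidt(A)
print(Q.T @ Q)                               # the entry coupling q2 and q3 is about 0.5
print(np.abs(Q.T @ Q - np.eye(3)).max())     # roughly 0.5 instead of ~1e-16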


The error introduced in q1 also causes errors in q2 and q3. We will discuss how to avoid this in the next part.

[1] X. Jiao, AMS 526 lecture notes (Fall 2011), Lecture 6: http://www.ams.sunysb.edu/~jiao/teaching/ams526_fall11/lectures/lecture06.pdf

Reposted from: https://www.cnblogs.com/cxxszz/p/8512517.html
