深蓝学院《从零开始手写VIO》Homework 3

1. The Levenberg-Marquardt Algorithm


The LM algorithm has two derivations and implementations: the damping method used by the algorithm's inventors, and the trust-region method contributed by later researchers; see https://zhuanlan.zhihu.com/p/136143299.
The VIO course slides present the damping method, while the second edition of 《视觉SLAM十四讲》 uses the trust-region method.
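
For orientation, both routes end at the same damped normal equations. A minimal side-by-side, in the notation used in section 3 below:

$$\text{Damping:}\qquad \left(\mathbf{J}^\top\mathbf{J}+\mu\mathbf{I}\right)\Delta\mathbf{x}_{\mathrm{lm}}=-\mathbf{J}^\top\mathbf{f},\qquad \mu>0$$

$$\text{Trust region:}\qquad \min_{\Delta\mathbf{x}}\ \frac{1}{2}\left\|\mathbf{f}+\mathbf{J}\,\Delta\mathbf{x}\right\|^2\quad \text{s.t.}\quad \left\|\Delta\mathbf{x}\right\|^2\le r$$

In the trust-region view, $\mu$ arises as the Lagrange multiplier of the step-size constraint.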

Levenberg-Marquardt Method: derivation of the damping approach


The Marquardt strategy clearly oscillates quite severely, adding useless computation; the Nielsen strategy was later adopted to approximate the curve smoothly and avoid the oscillation:

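For reference, a compact statement of the Nielsen rule as implemented by the sample code (see section 1.3.3), where $\rho$ is the gain ratio defined in section 1.3:

$$\rho>0:\quad \mu\leftarrow\mu\cdot\max\!\left(\tfrac{1}{3},\,\min\!\left(\tfrac{2}{3},\,1-(2\rho-1)^3\right)\right),\quad \nu\leftarrow 2$$

$$\rho\le 0:\quad \mu\leftarrow\mu\,\nu,\quad \nu\leftarrow 2\nu$$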

Levenberg-Marquardt Method: derivation of the trust-region approach


1.1 Plot how the LM damping factor µ in the sample code changes over the iterations

Add the following CSV-export code to problem.cc:

    /// save to a CSV file (needs <fstream> and <sstream>)
    ostringstream csvfilename;
    std::ofstream Lambda_data;
    csvfilename << "Lambda.csv";
    Lambda_data.open(csvfilename.str(), ios_base::out);

    while (!stop && (iter < iterations)) {
        std::cout << "iter: " << iter << " , chi= " << currentChi_ << " , Lambda= " << currentLambda_
                  << std::endl;

        // one row per outer iteration: chi2, lambda (call Lambda_data.close() after the loop)
        Lambda_data << currentChi_ << "," << currentLambda_ << endl;

Plotting:

import csv
import matplotlib.pyplot as plt

Chi_ = []
Lambda_ = []

# read the CSV file written by problem.cc
with open('/home/jevin/code/vio-homework/hw3/Lambda.csv', 'r') as csvFile:
    reader = csv.reader(csvFile)
    for line in reader:
        print(line)
        row = [float(v) for v in line]  # convert the fields to floats
        Chi_.append(row[0])
        Lambda_.append(row[1])

plt.figure(1)
plt.plot(Chi_)

plt.figure(2)
plt.plot(Lambda_)
plt.show()

[Figure: residual vs. iteration]
[Figure: damping factor $\mu$ vs. iteration]

Note: the plots show the residual and the damping factor $\mu$ only after each successful iteration; intermediate results that fail the update criterion are omitted.

1.2 Change the curve model to $y = ax^2 + bx + c$, modify the residual and Jacobian computations in the sample code, and estimate the curve parameters

    // construct N observations
    for (int i = 0; i < N; ++i) {
        double x = i / 100.;
        double n = noise(generator);
        // observed y with Gaussian noise
        double y = a*x*x + b*x + c + n;
        // ... (build the edge for this observation as in the sample code)
    }

    // residual of the curve model
    virtual void ComputeResidual() override
    {
        Vec3 abc = verticies_[0]->Parameters();  // estimated parameters
        residual_(0) = abc(0)*x_*x_ + abc(1)*x_ + abc(2) - y_;  // build the residual
    }

    // Jacobian of the residual w.r.t. the states
    virtual void ComputeJacobians() override
    {
        Eigen::Matrix<double, 1, 3> jaco_abc;  // 1-D residual, 3 states, hence a 1x3 Jacobian
        jaco_abc << x_ * x_, x_, 1;
        jacobians_[0] = jaco_abc;
    }
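
In equation form, for one observation $(x_i, y_i)$ the residual and Jacobian implemented above are

$$r_i = a x_i^2 + b x_i + c - y_i,\qquad \frac{\partial r_i}{\partial (a,b,c)}=\begin{bmatrix} x_i^2 & x_i & 1 \end{bmatrix}$$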

With only 100 points the data is scarce, noise dominates, and the fit is poor.

With 1000 points the estimate improves noticeably.

Increasing the point count further to 3000, the solver stops after a single iteration when the initial guess is poor; modify the initial parameters as follows:

    // initial values for the parameters a, b, c to be estimated
    vertex->SetParameters(Eigen::Vector3d(1., 1., 1.));

The computed results:
[Figure: fitting result with the new initial values]

1.3 Implement other, better damping-factor strategies

Refer to section 4.1.1 of Henri Gavin, "The Levenberg-Marquardt method for nonlinear least squares curve-fitting problems".
If the error increases after an iteration (the step was not small enough and the local approximation is poor, so the Gauss-Newton second-order step should not be used), increase $\lambda$ and discard the step: larger damping shortens the step and moves the update closer to steepest descent. Conversely, while iterations keep succeeding, $\lambda$ steadily decreases.
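
All three strategies below are driven by the gain ratio $\rho$, the actual cost reduction divided by the reduction predicted by the local quadratic model $L$; a step is accepted only when $\rho>0$ and the new error is finite:

$$\rho=\frac{F(\mathbf{x})-F(\mathbf{x}+\Delta\mathbf{x}_{\mathrm{lm}})}{L(\mathbf{0})-L(\Delta\mathbf{x}_{\mathrm{lm}})}$$

In the sample code the denominator is evaluated as `delta_x_.transpose() * (currentLambda_ * delta_x_ + b_)`, plus a small constant to avoid division by zero.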

[Figures: the $\lambda$ update strategies from section 4.1.1 of Gavin's paper, including Eqs. 13 and 16 referenced below]

1.3.1 Strategy 1

Strategy 1 uses Eqs. 13 and 16 shown above. Unlike the other two strategies, it requires modifying AddLambdatoHessianLM() and RemoveLambdaHessianLM() in addition to IsGoodStepInLM():
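
In formula form (matching the code below; cf. Eqs. 13 and 16 in Gavin's paper), the damping scales the diagonal of the Hessian instead of adding $\mu\mathbf{I}$, and $\rho$ uses the corresponding predicted reduction:

$$\left(\mathbf{H}+\lambda\operatorname{diag}(\mathbf{H})\right)\Delta\mathbf{x}=\mathbf{b},\qquad \rho=\frac{\chi^2(\mathbf{x})-\chi^2(\mathbf{x}+\Delta\mathbf{x})}{\Delta\mathbf{x}^\top\left(\lambda\operatorname{diag}(\mathbf{H})\,\Delta\mathbf{x}+\mathbf{b}\right)}$$

$\lambda$ is divided by $L_{\text{down}}=9$ on acceptance and multiplied by $L_{\text{up}}=11$ on rejection, clamped to $[10^{-7},\,10^{7}]$.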

void Problem::AddLambdatoHessianLM() {
    ulong size = Hessian_.cols();
    assert(Hessian_.rows() == Hessian_.cols() && "Hessian is not square");
    for (ulong i = 0; i < size; ++i) {
        Hessian_(i, i) *= (1. + currentLambda_);   // H + lambda * diag(H)
    }
}

void Problem::RemoveLambdaHessianLM() {
    ulong size = Hessian_.cols();
    assert(Hessian_.rows() == Hessian_.cols() && "Hessian is not square");
    // TODO: rather than dividing the factor back out (repeated add/remove can lose
    // numerical precision), save the pre-damping diagonal and restore it here directly
    for (ulong i = 0; i < size; ++i) {
        Hessian_(i, i) /= (1. + currentLambda_);
    }
}

bool Problem::IsGoodStepInLM() {
    // recompute and accumulate all residuals after the state update
    double tempChi = 0.0;
    for (auto edge: edges_) {
        edge.second->ComputeResidual();
        tempChi += edge.second->Chi2();
    }

    ulong size = Hessian_.cols();
    MatXX diag_hessian(MatXX::Zero(size, size));
    for (ulong i = 0; i < size; ++i) {
        diag_hessian(i, i) = Hessian_(i, i);
    }

    // predicted reduction (Gavin Eq. 16)
    double scale = delta_x_.transpose() * (currentLambda_ * diag_hessian * delta_x_ + b_);
    scale += 1e-3;    // make sure it's non-zero :)

    double rho = (currentChi_ - tempChi) / scale;

    double L_down = 9.0;
    double L_up = 11.0;

    if (rho > 0 && isfinite(tempChi))   // last step was good: the error decreased
    {
        currentLambda_ = std::max(currentLambda_ / L_down, 1e-7);
        currentChi_ = tempChi;
        return true;
    } else {
        currentLambda_ = std::min(currentLambda_ * L_up, 1e7);
        return false;
    }
}


1.3.2 Strategy 2
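
Strategy 2 (cf. section 4.1.2 of Gavin's paper) first applies the tentative step, computes a line-search factor $\alpha$ from a quadratic fit of the cost along $\Delta\mathbf{x}$, then rolls the state back and re-applies the scaled step $\alpha\,\Delta\mathbf{x}$. As computed in the code below:

$$\alpha=\frac{\mathbf{b}^\top\Delta\mathbf{x}}{\frac{1}{2}\left(\chi^2(\mathbf{x})-\chi^2(\mathbf{x}+\Delta\mathbf{x})\right)+2\,\mathbf{b}^\top\Delta\mathbf{x}}$$

On acceptance $\lambda\leftarrow\max\!\left(\lambda/(1+\alpha),\,10^{-7}\right)$; on rejection $\lambda\leftarrow\lambda+\left|\chi^2_{\text{old}}-\chi^2_{\text{new}}\right|/(2\alpha)$.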
bool Problem::IsGoodStepInLM() {
    // recompute and accumulate all residuals after the tentative state update
    double tempChi = 0.0;
    for (auto edge: edges_) {
        edge.second->ComputeResidual();
        tempChi += edge.second->Chi2();
    }

    // line-search factor alpha from a quadratic fit of the cost along delta_x_
    double Numerator = b_.transpose() * delta_x_;
    double alpha = Numerator / ((currentChi_ - tempChi) / 2. + 2. * Numerator);
    alpha = std::max(alpha, 1e-1);

    // roll back, then re-apply the scaled step alpha * delta_x_
    RollbackStates();
    delta_x_ *= alpha;
    UpdateStates();

    // recompute the total error for the scaled step
    tempChi = 0.0;
    for (auto edge: edges_) {
        edge.second->ComputeResidual();
        tempChi += edge.second->Chi2();
    }

    double scale = delta_x_.transpose() * (currentLambda_ * delta_x_ + b_);
    scale += 1e-3;    // make sure it's non-zero :)
    double rho = (currentChi_ - tempChi) / scale;

    if (rho > 0 && isfinite(tempChi))   // last step was good: the error decreased
    {
        currentLambda_ = std::max(currentLambda_ / (1. + alpha), 1e-7);
        currentChi_ = tempChi;
        return true;
    } else {
        currentLambda_ += std::fabs(currentChi_ - tempChi) / (2. * alpha);  // std::fabs, not the int abs()
        return false;
    }
}


1.3.3 Strategy 3 (the original code)

The sample code already implements strategy 3, the Nielsen strategy:

bool Problem::IsGoodStepInLM() {
    // predicted reduction: delta_x^T (lambda * delta_x + b)
    double scale = delta_x_.transpose() * (currentLambda_ * delta_x_ + b_);
    scale += 1e-3;    // make sure it's non-zero :)

    // recompute and accumulate all residuals after the state update
    double tempChi = 0.0;
    for (auto edge: edges_) {
        edge.second->ComputeResidual();
        tempChi += edge.second->Chi2();
    }

    double rho = (currentChi_ - tempChi) / scale;
    if (rho > 0 && isfinite(tempChi))   // last step was good: the error decreased
    {
        double alpha = 1. - pow((2 * rho - 1), 3);
        alpha = std::min(alpha, 2. / 3.);
        double scaleFactor = (std::max)(1. / 3., alpha);
        currentLambda_ *= scaleFactor;
        ni_ = 2;
        currentChi_ = tempChi;
        return true;
    } else {
        currentLambda_ *= ni_;
        ni_ *= 2;
        return false;
    }
}
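
Note the design: $\nu$ doubles on every consecutive rejection, so $\mu$ grows geometrically until a step is finally accepted, while a single good step resets $\nu$ to 2; this is what gives the smooth $\mu$ curve compared with the oscillating Marquardt up/down factors.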


2. Formula derivations: derive the two entries $f_{15}$ and $g_{12}$ of F and G

Reference: 《预积分总结与公式推导》 (preintegration summary and formula derivations).


2.1 Deriving $f_{15}$

By analogy with the derivation of $f_{35}$:

$$\boldsymbol{\alpha}_{b_i b_{k+1}}=\boldsymbol{\alpha}_{b_i b_k}+\boldsymbol{\beta}_{b_i b_k}\,\delta t+\frac{1}{2}\,\mathbf{a}\,\delta t^2$$

Only the term containing $\boldsymbol{\omega}$ depends on $\delta\mathbf{b}_k^g$; the first two terms do not, and can be dropped. Here

$$\begin{aligned} \mathbf{a} &=\frac{1}{2}\left(\mathbf{q}_{b_i b_k}\left(\mathbf{a}^{b_k}-\mathbf{b}_k^a\right)+\mathbf{q}_{b_i b_{k+1}}\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\right) \\ &=\frac{1}{2}\left(\mathbf{q}_{b_i b_k}\left(\mathbf{a}^{b_k}-\mathbf{b}_k^a\right)+\mathbf{q}_{b_i b_k}\otimes\begin{bmatrix} 1 \\ \frac{1}{2}\boldsymbol{\omega}\,\delta t \end{bmatrix}\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\right) \end{aligned}$$

where

$$\boldsymbol{\omega}=\frac{1}{2}\left(\left(\overline{\boldsymbol{\omega}}^{b_k}+\mathbf{n}_k^g-\mathbf{b}_k^g\right)+\left(\overline{\boldsymbol{\omega}}^{b_{k+1}}+\mathbf{n}_{k+1}^g-\mathbf{b}_k^g\right)\right)$$

The noise terms $\mathbf{n}_k^g$ and $\mathbf{n}_{k+1}^g$ show up in the matrix $\mathbf{G}$, so here we consider

$$\boldsymbol{\omega}=\frac{1}{2}\left(\overline{\boldsymbol{\omega}}^{b_k}+\overline{\boldsymbol{\omega}}^{b_{k+1}}\right)-\mathbf{b}_k^g$$

Keeping only the increment $\delta\mathbf{b}_k^g$:

$$\begin{aligned} \mathbf{f}_{15} &=\frac{\partial \delta\boldsymbol{\alpha}_{b_i b_{k+1}}}{\partial \delta\mathbf{b}_k^g} \\ &=\frac{\partial\,\frac{1}{4}\,\mathbf{q}_{b_i b_k}\otimes\begin{bmatrix} 1 \\ \frac{1}{2}\left(\boldsymbol{\omega}-\delta\mathbf{b}_k^g\right)\delta t \end{bmatrix}\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{b}_k^g} \\ &=\frac{1}{4}\,\frac{\partial\,\mathbf{R}_{b_i b_k}\exp\left(\left[\left(\boldsymbol{\omega}-\delta\mathbf{b}_k^g\right)\delta t\right]_\times\right)\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{b}_k^g} \\ &=\frac{1}{4}\,\frac{\partial\,\mathbf{R}_{b_i b_k}\exp\left(\left[\boldsymbol{\omega}\,\delta t\right]_\times\right)\exp\left(\left[-\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\delta\mathbf{b}_k^g\,\delta t\right]_\times\right)\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{b}_k^g} \\ &\approx\frac{1}{4}\,\frac{\partial\left(-\mathbf{R}_{b_i b_{k+1}}\left[\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2\right]_\times\left(-\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\delta\mathbf{b}_k^g\,\delta t\right)\right)}{\partial \delta\mathbf{b}_k^g} \\ &=-\frac{1}{4}\left(\mathbf{R}_{b_i b_{k+1}}\left[\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right]_\times\delta t^2\right)\left(-\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\delta t\right) \\ &\approx-\frac{1}{4}\left(\mathbf{R}_{b_i b_{k+1}}\left[\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right]_\times\delta t^2\right)\left(-\delta t\right) \end{aligned}$$

where

$$\mathbf{R}_{b_i b_k}\exp\left(\left[\boldsymbol{\omega}\,\delta t\right]_\times\right)=\mathbf{R}_{b_i b_{k+1}}$$
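
The approximations used above are standard properties of $SO(3)$: the first-order BCH/right-Jacobian expansion, the first-order expansion of the exponential map, and $\mathbf{J}_r\to\mathbf{I}$ for a small rotation:

$$\exp\left(\left[\boldsymbol{\theta}+\delta\boldsymbol{\theta}\right]_\times\right)\approx\exp\left(\left[\boldsymbol{\theta}\right]_\times\right)\exp\left(\left[\mathbf{J}_r(\boldsymbol{\theta})\,\delta\boldsymbol{\theta}\right]_\times\right)$$

$$\mathbf{R}\exp\left(\left[\boldsymbol{\phi}\right]_\times\right)\mathbf{a}\approx\mathbf{R}\left(\mathbf{I}+\left[\boldsymbol{\phi}\right]_\times\right)\mathbf{a}=\mathbf{R}\,\mathbf{a}-\mathbf{R}\left[\mathbf{a}\right]_\times\boldsymbol{\phi},\qquad \mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\approx\mathbf{I}\ \text{for small}\ \boldsymbol{\omega}\,\delta t$$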

2.2 Deriving $g_{12}$

Similarly,

$$\boldsymbol{\omega}=\frac{1}{2}\left(\left(\overline{\boldsymbol{\omega}}^{b_k}+\mathbf{n}_k^g-\mathbf{b}_k^g\right)+\left(\overline{\boldsymbol{\omega}}^{b_{k+1}}+\mathbf{n}_{k+1}^g-\mathbf{b}_k^g\right)\right)$$

and for the matrix $\mathbf{G}$ we consider

$$\boldsymbol{\omega}=\frac{1}{2}\left(\overline{\boldsymbol{\omega}}^{b_k}+\overline{\boldsymbol{\omega}}^{b_{k+1}}\right)+\frac{1}{2}\mathbf{n}_k^g+\frac{1}{2}\mathbf{n}_{k+1}^g$$

Keeping only the increment $\delta\mathbf{n}_k^g$:

$$\begin{aligned} \mathbf{g}_{12} &=\frac{\partial \delta\boldsymbol{\alpha}_{b_i b_{k+1}}}{\partial \delta\mathbf{n}_k^g} \\ &=\frac{\partial\,\frac{1}{4}\,\mathbf{q}_{b_i b_k}\otimes\begin{bmatrix} 1 \\ \frac{1}{2}\left(\boldsymbol{\omega}+\frac{1}{2}\delta\mathbf{n}_k^g\right)\delta t \end{bmatrix}\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{n}_k^g} \\ &=\frac{1}{4}\,\frac{\partial\,\mathbf{R}_{b_i b_k}\exp\left(\left[\left(\boldsymbol{\omega}+\frac{1}{2}\delta\mathbf{n}_k^g\right)\delta t\right]_\times\right)\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{n}_k^g} \\ &=\frac{1}{4}\,\frac{\partial\,\mathbf{R}_{b_i b_k}\exp\left(\left[\boldsymbol{\omega}\,\delta t\right]_\times\right)\exp\left(\left[\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\frac{1}{2}\delta\mathbf{n}_k^g\,\delta t\right]_\times\right)\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2}{\partial \delta\mathbf{n}_k^g} \\ &\approx\frac{1}{4}\,\frac{\partial\left(-\mathbf{R}_{b_i b_{k+1}}\left[\left(\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right)\delta t^2\right]_\times\left(\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\frac{1}{2}\delta\mathbf{n}_k^g\,\delta t\right)\right)}{\partial \delta\mathbf{n}_k^g} \\ &=-\frac{1}{4}\left(\mathbf{R}_{b_i b_{k+1}}\left[\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right]_\times\delta t^2\right)\left(\frac{1}{2}\mathbf{J}_r(\boldsymbol{\omega}\,\delta t)\,\delta t\right) \\ &\approx-\frac{1}{4}\left(\mathbf{R}_{b_i b_{k+1}}\left[\mathbf{a}^{b_{k+1}}-\mathbf{b}_k^a\right]_\times\delta t^2\right)\left(\frac{1}{2}\delta t\right) \end{aligned}$$

3. Prove Equation (9)

Equation (9) states that the LM step can be written in the eigenbasis of $\mathbf{J}^\top\mathbf{J}$ as

$$\Delta\mathbf{x}_{\mathrm{lm}}=-\sum_{j=1}^{n}\frac{\mathbf{v}_j^\top\mathbf{F}'^\top}{\lambda_j+\mu}\mathbf{v}_j$$

Using the eigendecomposition $\mathbf{J}^\top\mathbf{J}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^\top$ with orthonormal $\mathbf{V}$, i.e. $\mathbf{V}\mathbf{V}^\top=\mathbf{I}$, and writing $\mathbf{F}'^\top=\mathbf{J}^\top\mathbf{f}$:
$$\begin{aligned} \left(\mathbf{J}^\top\mathbf{J}+\mu\mathbf{I}\right)\Delta\mathbf{x}_{\mathrm{lm}} &=-\mathbf{J}^\top\mathbf{f} \\ \left(\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^\top+\mu\mathbf{I}\right)\Delta\mathbf{x}_{\mathrm{lm}} &=-\mathbf{F}'^\top \\ \left(\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^\top+\mu\mathbf{V}\mathbf{V}^\top\right)\Delta\mathbf{x}_{\mathrm{lm}} &=-\mathbf{F}'^\top \\ \mathbf{V}\left(\boldsymbol{\Lambda}+\mu\mathbf{I}\right)\mathbf{V}^\top\Delta\mathbf{x}_{\mathrm{lm}} &=-\mathbf{F}'^\top \end{aligned}$$

$$\begin{aligned} \Delta\mathbf{x}_{\mathrm{lm}} &=-\mathbf{V}\left(\boldsymbol{\Lambda}+\mu\mathbf{I}\right)^{-1}\mathbf{V}^\top\mathbf{F}'^\top \\ &=-\begin{bmatrix}\mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n\end{bmatrix}\begin{bmatrix}\frac{1}{\lambda_1+\mu} & & & \\ & \frac{1}{\lambda_2+\mu} & & \\ & & \ddots & \\ & & & \frac{1}{\lambda_n+\mu}\end{bmatrix}\begin{bmatrix}\mathbf{v}_1^\top \\ \mathbf{v}_2^\top \\ \vdots \\ \mathbf{v}_n^\top\end{bmatrix}\mathbf{F}'^\top \\ &=-\begin{bmatrix}\mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n\end{bmatrix}\begin{bmatrix}\frac{\mathbf{v}_1^\top\mathbf{F}'^\top}{\lambda_1+\mu} \\ \frac{\mathbf{v}_2^\top\mathbf{F}'^\top}{\lambda_2+\mu} \\ \vdots \\ \frac{\mathbf{v}_n^\top\mathbf{F}'^\top}{\lambda_n+\mu}\end{bmatrix} \\ &=-\left(\frac{\mathbf{v}_1^\top\mathbf{F}'^\top}{\lambda_1+\mu}\mathbf{v}_1+\frac{\mathbf{v}_2^\top\mathbf{F}'^\top}{\lambda_2+\mu}\mathbf{v}_2+\cdots+\frac{\mathbf{v}_n^\top\mathbf{F}'^\top}{\lambda_n+\mu}\mathbf{v}_n\right)=-\sum_{j=1}^{n}\frac{\mathbf{v}_j^\top\mathbf{F}'^\top}{\lambda_j+\mu}\mathbf{v}_j \end{aligned}$$
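
As a quick numerical sanity check (a minimal sketch using Eigen; J, f, and the dimensions are made up for illustration), the eigendecomposition form agrees with solving the damped normal equations directly:

#include <iostream>
#include <Eigen/Dense>

int main() {
    const int n = 3;
    const double mu = 0.5;  // damping factor

    // hypothetical J and f, just for the check
    Eigen::MatrixXd J = Eigen::MatrixXd::Random(5, n);
    Eigen::VectorXd f = Eigen::VectorXd::Random(5);
    Eigen::MatrixXd H = J.transpose() * J;
    Eigen::VectorXd g = J.transpose() * f;   // F'^T = J^T f

    // direct solve of (J^T J + mu I) dx = -J^T f
    Eigen::VectorXd dx_direct =
        (H + mu * Eigen::MatrixXd::Identity(n, n)).ldlt().solve(-g);

    // eigendecomposition form: dx = -sum_j (v_j^T F'^T) / (lambda_j + mu) * v_j
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(H);
    Eigen::VectorXd dx_eig = Eigen::VectorXd::Zero(n);
    for (int j = 0; j < n; ++j) {
        Eigen::VectorXd v = es.eigenvectors().col(j);
        dx_eig -= v.dot(g) / (es.eigenvalues()(j) + mu) * v;
    }

    std::cout << "difference: " << (dx_direct - dx_eig).norm() << std::endl;  // ~0
    return 0;
}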
