Learning "14 Lectures on Visual SLAM", Part 6: Ceres Solver (1) Getting Started

Preface

This post begins the hands-on side of learning SLAM, starting with Ceres Solver.

Hello World!

Building the problem

Ceres Solver is a widely used C++ nonlinear optimization library. Solving a nonlinear optimization problem with Ceres takes three steps: build the optimization problem, configure the solver, and solve.

Consider solving a simple least-squares problem with Ceres:

$$\min_x \frac{1}{2}\|10 - x\|^2$$
First, define a cost functor for Ceres and overload its () operator:

struct LinearFunc {
    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        residual[0] = 10.0 - x[0];
        return true;
    }
};

Here, residual is the least-squares residual and x is the variable being optimized.

Then add the cost function to the optimization problem:

double x = 5.0;
ceres::CostFunction* cost_function = new ceres::AutoDiffCostFunction<LinearFunc, 1, 1>(new LinearFunc);
ceres::Problem problem;
problem.AddResidualBlock(cost_function, nullptr, &x);

Here, AutoDiffCostFunction wraps LinearFunc in Ceres's automatic-differentiation cost-function type (Ceres supports three kinds of differentiation: numeric, analytic, and automatic), which is then added to the problem as a residual block.
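As a side note, the analytic ("manual") option is not used anywhere in this post. A minimal sketch of what it would look like for the same residual, with a class name of my own choosing, is:

class AnalyticLinearFunc : public ceres::SizedCostFunction<1, 1> {
public:
    // parameters[0][0] is x; the residual is 10 - x and its Jacobian w.r.t. x is -1.
    bool Evaluate(double const* const* parameters,
                  double* residuals,
                  double** jacobians) const override {
        residuals[0] = 10.0 - parameters[0][0];
        if (jacobians != nullptr && jacobians[0] != nullptr) {
            jacobians[0][0] = -1.0;
        }
        return true;
    }
};

// Used the same way as the auto-diff version:
// problem.AddResidualBlock(new AnalyticLinearFunc, nullptr, &x);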

Solver settings

This step mainly chooses the solver and how the results are reported:

ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
ceres::Solver::Summary summary;

Here, ceres::Solver::Options configures the solver: the linear solver type, the nonlinear minimization method, line search options, progress printing, and so on. ceres::Solver::Summary is used to retrieve information about the solve.
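A few more settings I find useful (my own additions, not part of the original example; all are standard fields of ceres::Solver::Options):

options.max_num_iterations = 100;                                  // cap on iterations
options.function_tolerance = 1e-8;                                 // stop when the cost change is tiny
options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;   // nonlinear method (the default)
// options.minimizer_type = ceres::LINE_SEARCH;                    // switch from trust region to line search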

Solving

ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";

Finally, call ceres::Solve() to run the optimization, and print summary.BriefReport() to std::cout to see how the optimization went.
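Besides BriefReport() (and the more verbose FullReport()), Solver::Summary also exposes individual fields. A small sketch of checking the result this way:

std::cout << "initial cost: " << summary.initial_cost << "\n";
std::cout << "final cost:   " << summary.final_cost << "\n";
std::cout << "converged:    " << (summary.termination_type == ceres::CONVERGENCE) << "\n";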

Complete code

#include "ceres/ceres.h"

struct CostFunctor
{
    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        residual[0] = 10.0 - x[0];
        return true;
    }
};

struct NumericCostFunctor{
    template <typename T>
    bool operator()(const T* const x, T* residual) const {
        residual[0] = 10.0 - x[0];
        return true;
    }
};

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);

    double x_init = 5.;
    double x = x_init;

    ceres::Problem problem;

    // ceres::CostFunction* cost_function = new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
    ceres::CostFunction* cost_function = new ceres::NumericDiffCostFunction<NumericCostFunctor, ceres::CENTRAL, 1, 1>(new NumericCostFunctor);
    problem.AddResidualBlock(cost_function, nullptr, &x);

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_QR;
    options.minimizer_progress_to_stdout = true;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout<<summary.BriefReport()<<"\n";
    std::cout<<"x: "<< x_init << " -> " << x << " \n";

    return 0;
}

Least squares with multiple residuals and variables

For the following least-squares problem with several residual functions and variables, we want $\min \|F(x)\|^2$, where

$$
\begin{aligned}
f_1(x) &= x_1 + 10x_2 \\
f_2(x) &= \sqrt{5}\,(x_3 - x_4) \\
f_3(x) &= (x_2 - 2x_3)^2 \\
f_4(x) &= \sqrt{10}\,(x_1 - x_4)^2 \\
F(x) &= [f_1(x),\, f_2(x),\, f_3(x),\, f_4(x)]
\end{aligned}
$$
The Ceres solution is much like the Hello World example above. First, build the four cost functors:

struct F1
{
    template <typename T>
    bool operator()(const T* const x1, const T* const x2, T* residual) const {
        residual[0] = x1[0] + x2[0] * 10.;
        return true;
    }
};

struct F2
{
    template <typename T>
    bool operator()(const T* const x3, const T* const x4, T* residual) const {
        residual[0] = sqrt(5) * (x3[0] - x4[0]);
        return true;
    }
};

struct F3{
    template <typename T>
    bool operator()(const T* const x2, const T* const x3, T* residual) const {
        residual[0] = pow(x2[0] - x3[0] * 2., 2);
        return true;
    }
};

struct F4{
    template <typename T>
    bool operator()(const T* const x1, const T* const x4, T* residual) const {
        residual[0] = sqrt(10) * pow(x1[0] - x4[0], 2);
        return true;
    }
};

Then add the cost functions to the problem as residual blocks and run the optimization.

int main(int argc, char **argv) {
    google::InitGoogleLogging(argv[0]);

    double x1 = 3.0, x2 = -1.0, x3 = 0.0, x4 = 1.0;
    ceres::Problem problem;
    problem.AddResidualBlock(new ceres::AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
    problem.AddResidualBlock(new ceres::AutoDiffCostFunction<F2, 1, 1, 1>(new F2), nullptr, &x3, &x4);
    problem.AddResidualBlock(new ceres::AutoDiffCostFunction<F3, 1, 1, 1>(new F3), nullptr, &x2, &x3);
    problem.AddResidualBlock(new ceres::AutoDiffCostFunction<F4, 1, 1, 1>(new F4), nullptr, &x1, &x4);

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_QR;
    options.minimizer_progress_to_stdout = true;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout<< summary.BriefReport()<<std::endl;
    std::cout<< "[3.0, -1.0, 0.0, 1.0]  ->  [" <<x1<<", "<<x2<<", "<<x3<<", "<<x4<<", "<<std::endl;

    return 0;
}

Curve fitting

Cost function

Another common optimization problem is fitting: given a set of sample points, estimate the parameters of a function. Consider the following curve:

$$f(x) = \exp(0.7x + 0.5)$$

As before, start by building the cost functor:

struct ExpCostFunction
{
    ExpCostFunction(double x, double y) {
        x_ = x;
        y_ = y;
    }

    template <typename T>
    bool operator()(const T* const m, const T* const c, T* residual) const {
        residual[0] = y_ - exp( m[0] * x_ + c[0] );
        return true;
    }

    private:
        double x_;
        double y_;
};

The cost functor here has a constructor that takes one sample point (x, y); these values are used to form the residual $r = y - e^{mx + c}$. When the cost function is added to a residual block, the sample is passed in through this constructor.
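A common convenience (my own habit, not required by Ceres and not in the original snippet, though the same pattern appears in the RosenbrockFunction example later) is a static Create() factory inside the functor, so the AutoDiffCostFunction template sizes stay next to the functor they describe:

// Hypothetical helper added to ExpCostFunction:
static ceres::CostFunction* Create(double x, double y) {
    return new ceres::AutoDiffCostFunction<ExpCostFunction, 1, 1, 1>(
        new ExpCostFunction(x, y));
}

// Adding a residual block then reads:
// problem.AddResidualBlock(ExpCostFunction::Create(x[i], y[i]), nullptr, &m, &c);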

Optimization problem

Next, add the cost functions as residual blocks to build the optimization problem:

ceres::Problem problem;
for (size_t i = 0; i < data_num; i++)
{
    problem.AddResidualBlock(new ceres::AutoDiffCostFunction<ExpCostFunction, 1, 1, 1>(new ExpCostFunction(x[i], y[i])), new ceres::CauchyLoss(0.5), &m, &c);
}

A for loop adds one residual block per sample point. CauchyLoss is used as the loss function so that outliers have less influence on the optimization.
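CauchyLoss is only one choice; swapping in another robust loss is a one-line change. A sketch using Ceres's HuberLoss with the same (arbitrary) scale of 0.5:

problem.AddResidualBlock(
    new ceres::AutoDiffCostFunction<ExpCostFunction, 1, 1, 1>(new ExpCostFunction(x[i], y[i])),
    new ceres::HuberLoss(0.5),   // robust loss; nullptr would mean plain squared error
    &m, &c);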

Complete code

#include "ceres/ceres.h"

struct ExpCostFunction
{
    ExpCostFunction(double x, double y) {
        x_ = x;
        y_ = y;
    }

    template <typename T>
    bool operator()(const T* const m, const T* const c, T* residual) const {
        residual[0] = y_ - exp( m[0] * x_ + c[0] );
        return true;
    }

    private:
        double x_;
        double y_;
};

template <typename T>
bool generateData(std::vector<T>& x, std::vector<T>& y, double* param, int data_num=100){
    for(int i = 0; i < data_num; i++) {
        double x_temp = i / 20.;
        double y_temp = exp(param[0] * x_temp + param[1]) * (1 + (rand() % 200 + 1 - 100) / 1000.);
        x.push_back(x_temp);
        y.push_back(y_temp);
    }

    return true;
}

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);

    double m = 0.0;
    double c = 0.0;

    std::vector<double> x, y;
    double param[2] = {0.7, 0.5};
    int data_num = 100;
    generateData(x, y, param);

    ceres::Problem problem;
    for (size_t i = 0; i < data_num; i++)
    {
        // problem.AddResidualBlock(new ceres::AutoDiffCostFunction<ExpCostFunction, 1, 1, 1>(new ExpCostFunction(x[i], y[i])), nullptr, &m, &c);
        problem.AddResidualBlock(new ceres::AutoDiffCostFunction<ExpCostFunction, 1, 1, 1>(new ExpCostFunction(x[i], y[i])), new ceres::CauchyLoss(0.5), &m, &c);
    }
    ceres::Solver::Options options;
    options.minimizer_progress_to_stdout = true;
    options.linear_solver_type = ceres::DENSE_QR;
    ceres::Solver::Summary summary;

    ceres::Solve(options, &problem, &summary);
    std::cout<< summary.BriefReport() << std::endl;
    std::cout<<" [m, c] -> [" << m << ", " << c << "]"<<std::endl; 

    return 0;
}

General unconstrained optimization

The examples above all solve least-squares problems. Besides nonlinear least squares, Ceres can also handle general unconstrained optimization problems.

Consider the following problem:

$$\mathop{\arg\min}_{x,y}\; \left[(1-x)^2 + 100\,(y - x^2)^2\right]$$

As before, first construct the cost functor:

struct RosenbrockFunction
{
    template <typename T>
    bool operator()(const T* const params, T* cost) const {
        const T x = params[0];
        const T y = params[1];
        cost[0] = pow(1.0 - x, 2) + 100. * pow(y - pow(x, 2), 2);

        return true;
    }
};

Then build the optimization problem. This time it must be a gradient problem, so the cost functor is wrapped as a first-order function:

ceres::FirstOrderFunction* function = new ceres::AutoDiffFirstOrderFunction<RosenbrockFunction, 2>(new RosenbrockFunction);
ceres::GradientProblem problem(function);

Then configure the gradient-problem solver:

ceres::GradientProblemSolver::Options options;
options.minimizer_progress_to_stdout = true;
ceres::GradientProblemSolver::Summary summary;

ceres::Solve(options, problem, params, &summary);

Complete code

#include "ceres/ceres.h"

struct RosenbrockFunction
{
    template <typename T>
    bool operator()(const T* const params, T* cost) const {
        const T x = params[0];
        const T y = params[1];
        cost[0] = pow(1.0 - x, 2) + 100. * pow(y - pow(x, 2), 2);

        return true;
    }

    static ceres::FirstOrderFunction* Create(){
        constexpr int kNumParameters = 2;
        return new ceres::AutoDiffFirstOrderFunction<RosenbrockFunction, kNumParameters>(new RosenbrockFunction);
    }
};

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);

    double params[2] = {-1.2, 1.0};

    ceres::GradientProblem problem(RosenbrockFunction::Create());

    ceres::GradientProblemSolver::Options options;
    options.minimizer_progress_to_stdout = true;
    ceres::GradientProblemSolver::Summary summary;
    
    ceres::Solve(options, problem, params, &summary);
    std::cout<<summary.FullReport()<<std::endl;

    return 0;
}

Afterword

This post is an introduction to Ceres Solver. The main workflow with Ceres is: construct the cost functors, build the optimization problem and add the residual blocks, and configure the solver.

The next post will use Ceres for bundle adjustment (BA).
