Reference Series 2: The Ceres Optimization Library (Part 3): Practical Examples

Practical Example 1

This first example uses Ceres to minimize the simple cost function f(x) = 0.5 * (10 - x)^2, i.e. to find the value of x that drives the residual r(x) = 10 - x to zero.

1.1 CMakeLists.txt configuration

cmake_minimum_required(VERSION 2.8)
project(ceres)

find_package(Ceres REQUIRED)
include_directories(${CERES_INCLUDE_DIRS})

add_executable(test test.cpp)
target_link_libraries(test ${CERES_LIBRARIES})
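
Note that newer Ceres releases (2.x) also export an imported CMake target, so the include/link lines above can be collapsed into a single target_link_libraries call. A minimal sketch, assuming a Ceres 2.x install (the project and target names here are placeholders):

cmake_minimum_required(VERSION 3.10)
project(ceres_demo)

find_package(Ceres REQUIRED)

add_executable(helloworld test.cpp)
# The imported target already carries the include directories and dependencies
target_link_libraries(helloworld Ceres::ceres)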

1.2 Example: a first Ceres program

#include <iostream>
#include <ceres/ceres.h>

using namespace std;
using namespace ceres;

// Part 1: build the cost function by overloading operator() (the functor trick)
struct CostFunctor {
   template <typename T>
   bool operator()(const T* const x, T* residual) const {
     residual[0] = T(10.0) - x[0];
     return true;
   }
};

// main
int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  // initial value of the optimization variable x: 5.0
  double initial_x = 5.0;
  double x = initial_x;

  // Part 2: build the optimization problem
  Problem problem;
  // Use automatic differentiation and wrap the cost functor defined above.
  // The first template number "1" is the output dimension (the residual),
  // the second "1" is the input dimension (the optimization variable x).
  CostFunction* cost_function =
      new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor); 
  // Add the residual block to the problem; this simple problem needs only one.
  problem.AddResidualBlock(cost_function, NULL, &x); 

  // Part 3: configure and run the solver
  Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR; // how the linearized (increment) equations are solved
  options.minimizer_progress_to_stdout = true;  // print progress to stdout
  Solver::Summary summary;                      // optimization summary
  Solve(options, &problem, &summary);           // solve!

  std::cout << summary.BriefReport() << "\n";   // print a brief report
  // final result
  std::cout << "x : " << initial_x << " -> " << x << "\n";
  return 0;
}

A closer look at Example 1:

Part 1: Cost Function

(1) AutoDiffCostFunction (automatic differentiation) - Case 1

  • Define a cost-function struct (for example struct CostFunctor) and overload the templated operator() inside it to compute the residual;
  • In the argument list of the overloaded operator(), the last argument is the residual and the arguments before it are the states being optimized.
struct CostFunctor {
    template<typename T>
    bool operator()(const T *const x, T *residual) const {
        residual[0] = T(10.0) - x[0]; // r(x) = 10 - x
        return true;
    }
};
  • Create an instance of the cost function. Among the numeric template arguments, the first is the dimension of the residual and the ones after it are the dimensions of the states being optimized (a hypothetical multi-block sketch follows the snippet below).
ceres::CostFunction *cost_function;
cost_function = new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
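
For problems with more than one parameter block, the template argument list simply grows: after the residual dimension comes one dimension per parameter block, in the same order as the arguments of operator(). A hypothetical sketch (TwoBlockFunctor and its residual expressions are invented purely to illustrate the template arguments):

struct TwoBlockFunctor {
    template <typename T>
    bool operator()(const T* const a, const T* const b, T* residual) const {
        // a is a 3-D block, b is a 4-D block, and the residual is 2-D
        residual[0] = a[0] + a[1] + a[2] - b[0];
        residual[1] = b[1] * b[2] - b[3];
        return true;
    }
};

// <functor, residual dim 2, dim of block a = 3, dim of block b = 4>
ceres::CostFunction *cost_function =
    new ceres::AutoDiffCostFunction<TwoBlockFunctor, 2, 3, 4>(new TwoBlockFunctor);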

(2) NumericDiffCostFunction (numeric differentiation) - Case 2

  • Numeric differentiation also starts from a cost-function struct, but operator() is overloaded without a template, operating directly on double;
struct CostFunctorNum {
    bool operator()(const double *const x, double *residual) const {
        residual[0] = 10.0 - x[0]; // r(x) = 10 - x
        return true;
    }
};
  • Instantiating the cost function also differs slightly: there is one extra template argument, ceres::CENTRAL, which selects the finite-difference scheme.
ceres::CostFunction *cost_function;
cost_function =
new ceres::NumericDiffCostFunction<CostFunctorNum, ceres::CENTRAL, 1, 1>(new CostFunctorNum);

(3) A hand-written CostFunction (you compute the residual yourself) - Case 3

  • Write a class that inherits from ceres::SizedCostFunction<1,1>. As before, the first template number is the dimension of the residual and the remaining numbers are the dimensions of the states being optimized;
  • Override the virtual function virtual bool Evaluate(double const* const* parameters, double *residuals, double **jacobians) const and, from the optimization variables, compute both the residual and the Jacobian.
class SimpleCostFunctor : public ceres::SizedCostFunction<1,1> {
public:
    virtual ~SimpleCostFunctor() {};

    virtual bool Evaluate(
      double const* const* parameters, double *residuals, double** jacobians) const {
        const double x = parameters[0][0];

        residuals[0] = 10 - x; // r(x) = 10 - x

        if(jacobians != NULL && jacobians[0] != NULL) {
            jacobians[0][0] = -1; // r'(x) = -1
        }

        return true;
    }
};

Part 2: AddResidualBlock

  • Declare a ceres::Problem problem;
  • Use AddResidualBlock to add the cost function, the loss function, and the states being optimized to the problem (a robust-loss variant is sketched after the snippet below).
ceres::CostFunction *cost_function;
cost_function = new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);

ceres::Problem problem;
problem.AddResidualBlock(cost_function, NULL, &x);
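
Passing NULL as the second argument means no robust loss is applied, i.e. plain squared error. If the measurements may contain outliers, one of Ceres' built-in robust losses can be dropped in instead; a minimal sketch using ceres::HuberLoss (the scale 1.0 is an arbitrary illustrative value):

// Residuals larger than the Huber scale are down-weighted instead of fully squared
problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), &x);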

Part 3: Config & Solve

Configure the solver, run the optimization, and print the result (an optional fuller report is sketched after the snippet):

ceres::Solver::Options options;
options.max_num_iterations = 25;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
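
BriefReport() prints a one-line summary. In recent Ceres versions the summary also provides FullReport() and a termination_type field, which can be used to verify that the solver actually converged; a small optional addition to the snippet above:

std::cout << summary.FullReport() << "\n"; // detailed per-iteration and timing report
if (summary.termination_type != ceres::CONVERGENCE) {
    std::cerr << "Warning: the solver did not converge" << std::endl;
}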

Simple Example Code

#include "ceres/ceres.h"
#include "glog/logging.h"

struct CostFunctor {
    template<typename T>
    bool operator()(const T *const x, T *residual) const {
        residual[0] = 10.0 - x[0]; // f(x) = 10 - x
        return true;
    }
};

struct CostFunctorNum {
    bool operator()(const double *const x, double *residual) const {
        residual[0] = 10.0 - x[0]; // f(x) = 10 - x
        return true;
    }
};

class SimpleCostFunctor : public ceres::SizedCostFunction<1,1> {
public:
    virtual ~SimpleCostFunctor() {};

    virtual bool Evaluate(
      double const* const* parameters, double *residuals, double **jacobians) const {
        const double x = parameters[0][0];

        residuals[0] = 10 - x; // f(x) = 10 - x

        if(jacobians != NULL && jacobians[0] != NULL) {
            jacobians[0][0] = -1; // f'(x) = -1
        }

        return true;
    }
};

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);

    double x = 0.5;
    const double initial_x = x;

    ceres::Problem problem;

    // Set up the only cost function (also known as residual)
    ceres::CostFunction *cost_function;

    // auto-differentiation
//    cost_function = new ceres::AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);

    // numeric differentiation
//    cost_function =
//    new ceres::NumericDiffCostFunction<CostFunctorNum, ceres::CENTRAL, 1, 1>(
//      new CostFunctorNum);

    cost_function = new SimpleCostFunctor;

    // Add the cost function cost_function and the loss function (NULL); x is the state to optimize
    problem.AddResidualBlock(cost_function, NULL, &x);

    ceres::Solver::Options options;
    options.minimizer_progress_to_stdout = true;

    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout << summary.BriefReport() << "\n";
    std::cout << "x : " << initial_x << " -> " << x << "\n";

    return 0;
}

Practical Example 2: Visual SLAM Pose Optimization on the Lie Algebra (Analytic Derivatives)

The following is based on and adapted from:

Ceres-Solver 从入门到上手视觉SLAM位姿优化问题

Below, the visual SLAM pose-optimization problem parameterized on the Lie algebra is used as an example to walk through Ceres Solver.

(1) The residual (observation minus prediction, i.e. the reprojection error)
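
Written out, the residual computed in Evaluate() below is (with P' = (X', Y', Z')^T denoting the 3D point transformed into the camera frame):

$$
P' = \exp(\xi^{\wedge}) P, \qquad
r = p_{\mathrm{obs}} - \begin{bmatrix} f_x X'/Z' + c_x \\ f_y Y'/Z' + c_y \end{bmatrix}
$$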

(2) The Jacobian matrix
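
The 2x6 Jacobian block that the code below fills in, using the Sophus se(3) ordering (translation first, rotation last), is:

$$
J =
\begin{bmatrix}
f_x/Z' & 0 & -f_x X'/Z'^2 & -f_x X' Y'/Z'^2 & f_x + f_x X'^2/Z'^2 & -f_x Y'/Z' \\
0 & f_y/Z' & -f_y Y'/Z'^2 & -f_y - f_y Y'^2/Z'^2 & f_y X' Y'/Z'^2 & f_y X'/Z'
\end{bmatrix}
$$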

(3) Core code

  • Constructing the cost function
// Reprojection error: the residual is 2-dimensional (pixel error) and the optimization variable is the 6-dimensional pose
class BAGNCostFunctor : public ceres::SizedCostFunction<2, 6> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW // aligned allocation for the fixed-size Eigen members
    // First argument: the observed 2D pixel; second argument: the 3D point to be reprojected
    BAGNCostFunctor(Eigen::Vector2d observed_p, Eigen::Vector3d observed_P) :
            observed_p_(observed_p), observed_P_(observed_P) {}

    virtual ~BAGNCostFunctor() {}

    virtual bool Evaluate(
        // parameters, residuals, Jacobians
      double const* const* parameters, double *residuals, double **jacobians) const {

        Eigen::Map<const Eigen::Matrix<double,6,1>> T_se3(*parameters);

        Sophus::SE3 T_SE3 = Sophus::SE3::exp(T_se3);

        Eigen::Vector3d Pc = T_SE3 * observed_P_;

        Eigen::Matrix3d K;
        double fx = 520.9, fy = 521.0, cx = 325.1, cy = 249.7;
        K << fx, 0, cx, 0, fy, cy, 0, 0, 1;
        
        // compute the residual
        Eigen::Vector2d residual =  observed_p_ - (K * Pc).hnormalized();

        residuals[0] = residual[0];
        residuals[1] = residual[1];

        if(jacobians != NULL) {

            if(jacobians[0] != NULL) {
                // 2x6 Jacobian block (row-major, as Ceres expects)
                Eigen::Map<Eigen::Matrix<double, 2, 6, Eigen::RowMajor>> J(jacobians[0]);

                double x = Pc[0];
                double y = Pc[1];
                double z = Pc[2];

                double x2 = x*x;
                double y2 = y*y;
                double z2 = z*z;

                J(0,0) =  fx/z;
                J(0,1) =  0;
                J(0,2) = -fx*x/z2;
                J(0,3) = -fx*x*y/z2;
                J(0,4) =  fx+fx*x2/z2;
                J(0,5) = -fx*y/z;
                J(1,0) =  0;
                J(1,1) =  fy/z;
                J(1,2) = -fy*y/z2;
                J(1,3) = -fy-fy*y2/z2;
                J(1,4) =  fy*x*y/z2;
                J(1,5) =  fy*x/z;
            }
        }

        return true;
    }

private:
    const Eigen::Vector2d observed_p_;
    const Eigen::Vector3d observed_P_;
};
  • Construct the optimization problem and solve for the camera pose (the input data this snippet assumes is sketched after it)
Sophus::Vector6d se3;

// Add one residual block per point to the problem. The loss function is NULL, i.e. the default least-squares (L2) error is used; the optimization variable is se3
ceres::Problem problem;
for(int i=0; i<n_points; ++i) {
    ceres::CostFunction *cost_function;
    cost_function = new BAGNCostFunctor(p2d[i], p3d[i]); // p2d[i], p3d[i] are known measurements
    problem.AddResidualBlock(cost_function, NULL, se3.data()); // se3.data() is the parameter block being optimized
}

ceres::Solver::Options options;
options.dynamic_sparsity = true;
options.max_num_iterations = 100; // at most 100 iterations
options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
options.minimizer_type = ceres::TRUST_REGION;
options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
options.trust_region_strategy_type = ceres::DOGLEG;
options.minimizer_progress_to_stdout = true;
options.dogleg_type = ceres::SUBSPACE_DOGLEG;

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";

std::cout << "estimated pose: \n" << Sophus::SE3::exp(se3).matrix() << std::endl;

Notes:

  1. p2d[i] and p3d[i] are known measurements;
  2. se3.data() is the parameter being optimized; it is what the parameters argument of Evaluate() points to.
