1 Numeric Derivatives
Sometimes automatic differentiation is inconvenient, for example when the residual depends on an external library that cannot be called with templated types; in such cases, numeric differentiation can be used instead.
struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};
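Before handing the functor to Ceres, it can help to sanity-check it in isolation. A minimal sketch in plain C++ (no Ceres needed; the helper `EvaluateResidual` is illustrative, not part of the tutorial):

```cpp
#include <cassert>

// Same functor as above: residual r = 10 - x.
struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

// Illustrative helper: evaluate the residual at a given x.
inline double EvaluateResidual(double x) {
  NumericDiffCostFunctor functor;
  double residual = 0.0;
  functor(&x, &residual);
  return residual;  // 10 - x
}
```

At the initial value x = 0.5 the residual is 9.5, and it vanishes at the optimum x = 10.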
2 Adding It to the Problem
CostFunction* cost_function =
    new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
        new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
Note the difference from automatic differentiation, whose code looks like this:
CostFunction* cost_function = new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
Recommendation: prefer automatic differentiation. C++ templates make it efficient, whereas numeric differentiation is expensive, prone to numeric error, and converges more slowly.
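The accuracy claim is easy to check by hand. The sketch below (plain C++, using f(x) = sin(x) as an assumed test function, not from the tutorial) compares the one-sided forward difference with the central difference that the CENTRAL template argument selects; the central scheme's truncation error is O(h^2) versus O(h) for the forward scheme:

```cpp
#include <cassert>
#include <cmath>

// Assumed test function for illustration: f(x) = sin(x), so f'(x) = cos(x).
double F(double x) { return std::sin(x); }

// Forward difference: truncation error O(h).
double ForwardDiff(double (*f)(double), double x, double h) {
  return (f(x + h) - f(x)) / h;
}

// Central difference (the scheme ceres::CENTRAL selects): error O(h^2).
double CentralDiff(double (*f)(double), double x, double h) {
  return (f(x + h) - f(x - h)) / (2.0 * h);
}
```

With h = 1e-4 at x = 1, the forward difference is off by roughly 4e-5 while the central difference is accurate to better than 1e-7, which is why CENTRAL is a sensible default when numeric differentiation is unavoidable.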
3 The Complete Program
#include <iostream>

#include "ceres/ceres.h"
#include "glog/logging.h"
using ceres::NumericDiffCostFunction;
using ceres::CENTRAL;
using ceres::CostFunction;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;
// A cost functor that implements the residual r = 10 - x.
struct CostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};
int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  // Set the initial value.
  double x = 0.5;
  const double initial_x = x;

  // Build the problem.
  Problem problem;
  // Cost function using central-difference numeric derivatives.
  CostFunction* cost_function =
      new NumericDiffCostFunction<CostFunctor, CENTRAL, 1, 1>(new CostFunctor);
  problem.AddResidualBlock(cost_function, NULL, &x);

  // Configure the solver.
  Solver::Options options;
  // Print progress to stdout.
  options.minimizer_progress_to_stdout = true;
  // Summary of the optimization.
  Solver::Summary summary;

  // Run the solver.
  Solve(options, &problem, &summary);

  std::cout << summary.BriefReport() << "\n";
  std::cout << "x : " << initial_x << " -> " << x << "\n";
  return 0;
}
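For this particular problem the solver's behavior is easy to reproduce by hand: minimizing (10 - x)^2 with residual r = 10 - x and Jacobian J = dr/dx = -1, a single Gauss-Newton update lands exactly on x = 10. A plain C++ sketch of that one iteration (illustrative only, not Ceres internals):

```cpp
#include <cassert>

// One Gauss-Newton step for the scalar least-squares problem
// min_x 0.5 * (10 - x)^2, with residual r(x) = 10 - x and Jacobian J = -1.
inline double GaussNewtonStep(double x) {
  const double r = 10.0 - x;  // residual
  const double J = -1.0;      // Jacobian dr/dx
  // 1-D normal equations: dx = -(J*J)^{-1} * J * r.
  const double dx = -(J * r) / (J * J);
  return x + dx;
}
```

Starting from x = 0.5, one step gives x = 10, which is why the solver converges in very few iterations on this problem.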
The CMakeLists.txt file:
cmake_minimum_required(VERSION 2.8)
project(ceres)
find_package(Ceres REQUIRED)
include_directories(${CERES_INCLUDE_DIRS})
add_executable(use_ceres main.cpp)
target_link_libraries(use_ceres ${CERES_LIBRARIES})
Run result
Recommendation
The construction of the cost function looks almost identical to the one used for automatic differentiation, except for an extra template parameter that indicates the kind of finite-differencing scheme to be used for computing the numeric derivatives [3]. For more details, see the documentation for NumericDiffCostFunction.
In general, we recommend automatic differentiation instead of numeric differentiation. The use of C++ templates makes automatic differentiation efficient, whereas numeric differentiation is expensive, prone to numeric errors, and leads to slower convergence.