【slam十四讲第二版】【课后习题】【第七讲~视觉里程计Ⅰ】

【slam十四讲第二版】【课后习题】【第七讲~视觉里程计Ⅰ】

0 前言

由于课本前六讲基本都是公式推导,所以就写在了平板上,而后面的多是理论知识以及代码的实现,所以特此在csdn上记录分享,

1 课后习题

1.1. 除了本书介绍的ORB 特征点外,你还能找到哪些其他的特征点?请说说SIFT 或SURF 的原理,对比它们与ORB 之间的优劣。

  • 除了ORB之外,一般还有还有SIFTSURFBRISKAKAZE,这些都是在OpenCV中已经实现了的。

  • SIFT算法,又称为尺度不变特征转换(Scale-invariant feature transform),大致方法是首先搜索所有尺度下图像的位置,通过高斯微分函数来识别潜在的对于尺度和旋转不变的兴趣点,然后在候选位置通过拟合精细的模型来确定位置和尺度,再基于图像梯度,分配给关键点一个或多个方向,最后在每个关键点的周围邻域,在选定的尺度下测量图像局部梯度来描述关键点。使用斑点检测方法和浮点型特征描述子,在建立高斯差分空间金字塔的基础上提取初具有尺度不变性的特征点,然后对特征点邻域内的点的梯度方向进行直方图统计。特征点的主方向就是直方图中比重最大的方向,必要时可选一个辅方向。SIFT特征集旋转不变性,尺度不变性,对图像变形和光照鲁棒鲁等优点于一身,不足之处是计算量大,计算速度慢,需要GPU加速才可满足实时性需求。

  • SURF算法,又称为加速稳健特征(Speeded Up Robust Features),其方法和构建图像金字塔的方法相反,其主要方法是通过修改滤波器的尺寸和模糊度从而得到不同层级的影像,但是寻找特征点的方法和SIFT类似。使用基于DoH的斑点特征检测方法,在特征点的描述上,SURF算法通过积分图,利用两个方向上的Harr小波模型进行梯度计算,然后对邻域内点的梯度方向以扇形的方式进行统计,得到特征点的主方向。其算法速度快且稳定性好。

  • SIFT和SURF能够比较稳定地提供较高的准确率,其中SIFT比SURF更准确一点,但是二者都特别慢。相较而言ORB速度更快,但是更容易出现问题,而且错误率远比其他二者大。

  • 计算速度: ORB>>SURF>>SIFT(各差一个量级)

  • 旋转鲁棒性: SURF>ORB~SIFT(表示差不多)

  • 模糊鲁棒性: SURF>ORB~SIFT

  • 尺度变换鲁棒性: SURF>SIFT>ORB(ORB并不具备尺度变换性)

  • 综上所述,如果对计算实时性要求非常高,可选用ORB算法,但基本要保证正对拍摄;如果对实行性要求稍高,可以选择SURF;基本不用SIFT


1.2. 设计程序,调用OpenCV 中的其他种类特征点。统计在提取1000 个特征点时,在你的机器上所用的时间。


1.3. 我们发现OpenCV 提供的ORB 特征点,在图像当中分布不够均匀。你是否能够找到或提出让特征点分布更加均匀的方法?

  • 用熵来度量局部信息量大小,将图像分为若干网格,计算每个网格图像的熵,在熵较大的区域内提取特征点。若局部依旧集中可再次划分网格,在没有特征的网格中,强行提取一个兴趣值最大的特征点。
  • 2016年出的ORB_SLAM2中已经用四叉树实现的特征点均匀分布算法。四叉树均匀划分的方法跟经典ORB算法变化不大,一般还是:1.构建图像金字塔;2.将每层图像划分网格,FAST算法提取特征点;3.对网格内的特征点进行评估,选取质量最好的留下作为代表关键点;4.用强度质心法计算每个关键点的方向;5对金字塔图层进行高斯滤波;6.使用BRIEF计算各个关键点的描述子(包含论文中的启发式搜索算法得到的256对匹配对坐标进行优化);7.保留关键点和描述子。

1.4. 研究FLANN 为何能够快速处理匹配问题。除了FLANN 之外,还能哪些可以加速匹配的手段?

  • FLANN的全称为Fast Library for Approximate Nearest Neighbors。它是一个对大数据集和高维特征进行最近邻近的算法的集合。在面对大数据集时它的性能要好于BFMatcher。匹配问题实际上就是一个特征向量求相似度问题。对于最简单的办法,就是逐个匹配对计算距离。明显这种遍历的方式是效率极低的,对于大数据情况下不适用。因此经典kd-tree的搜索回溯的搜索办法在这里派上了用场,减少了不必要的计算过程,提高了效率,但是对于多维数据而言,其效果往往不升反降。在经典kd-tree算法上提出了随机kd-tree算法,随即选出方差较大的维度,建立kd-tree,再进行搜索回溯。还有一种分级k-means tree,与之前不同的是先通过k-means算法(之后回环检测时会用到)来进行先数据聚类,然后再进行kd-tree的建立。这些方法相交于传统的直接匹配法优势都比较大。
  • 加速匹配的方法:预排序图像检索;GPU加速,可以使得匹配速度提高十多倍;再后来就是用FPGA加速,其匹配速度能提升10倍;再后来的VF-SIFT(very fast SIFT)算法,其核心思想是从SIFT特征中提取4个特征角,根据特征角区间的不同,避免了大量不必要的搜索,这样据说是普通搜索的1250倍。

1.5. 把演示程序使用的EPnP 改成其他PnP 方法,并研究它们的工作原理。

1.5.1 前言

  • 目前除了EPnP外,OpenCV还提供了另外两种方法:迭代法和P3P法,其中,在OpenCV3中还另外提供了DLS法和UPnP法。

  • EPnP(Efficient PnP)的思路是将空间中的任意3D点可以用4个不共面的3D点加权表示,然后通过n个3D点在相机平面的投影关系以及四个控制点的权重关系构建一个的矩阵,求这个矩阵的特征向量,就可以得到这个相机平面的坐标,再用POSIT正交投影变换算法得到相机位姿。

  • 迭代法实质是迭代求出重投影误差的最小解先用DLT直接线性变换求解,然后用LM算法进行优化,这个解显然不是正解,而且这个方法只能使用4个共面特征点才能求得。

  • P3P法,它需要3对3D-2D的匹配点,然后通过三角形投影关系得到的方程组,然后将方程组进行变换得到两个二元二次方程进行求解。

  • DLS(Direct Least-Squares)算法整体思路是首先对PnP非线性最小贰乘建模,重新定义LS模型以便于参数降维然后构造Maculy矩阵进行求解,将PnP问题重构为无约束LSM问题然后优化求解。

  • UPnP(Uncalibrated PnP)算法跟EPnP差不了太多,仅仅是多估计了焦距。因此,比较适合未标定场合。

  • 代码基础是【slam十四讲第二版】【课本例题代码向】【第七讲~视觉里程计Ⅰ】【1OpenCV的ORB特征】【2手写ORB特征】【3对极约束求解相机运动】【4三角测量】【5求解PnP】【3D-3D:ICP】中的第5小节,这里我的本工程的代码自取:链接:链接: https://pan.baidu.com/s/1t63wx7OInkjuu3NaZqM2Uw 提取码: 9a9v

1.5.2 pnps.cpp



// Created by czy on 2022/4/7
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <chrono>

using namespace std;

void find_feature_matches(const cv::Mat& img_1, const cv::Mat& img_2,
                          std::vector<cv::KeyPoint>& keypoints_1, std::vector<cv::KeyPoint>& keypoints_2,
                          std::vector<cv::DMatch>& matches);


// 像素坐标转相机归一化坐标
cv::Point2d pixel2cam(const cv::Point2d& p, const cv::Mat& K);

int main(int argc, char ** argv) {
    cv::Mat img_1,img_2,d1,d2;
    if(argc!=5)
    {
        //读取图片
        img_1 = cv::imread("../src/1.png", CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread("../src/2.png",1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread("../src/1_depth.png", CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread("../src/2_depth.png", -1);
    }
    else
    {
        //读取图片
        img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread(argv[2],1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread(argv[4], -1);
    }
    assert(img_1.data != nullptr && img_2.data != nullptr);//若读取的图片没有内容,就终止程序

    vector<cv::KeyPoint> keypoints_1, keypoints_2;
    vector<cv::DMatch> matches;
    find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);//得到两个图片的特征匹配点
    cout << "一共找到了" << matches.size() << "组匹配点" << endl;

    cv::Mat K = (cv::Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);

    vector<cv::Point3f> pts_3d;//创建容器pts_3d存放3d点(图1对应的特征点的相机坐标下的3d点)
    vector<cv::Point2f> pts_2d;//创建容器pts_2d存放图2的特征点

    for (cv::DMatch m: matches) {
        //把对应的图1的特征点的深度信息拿出来
        ushort d = d1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
        if (d == 0) // bad depth
            continue;
        float dd = d / 5000.0;//用dd存放换算过尺度的深度信息
        cv::Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);//p1里面放的是图1特征点在相机坐标下的归一化坐标(只包含 x,y)
        pts_3d.push_back(cv::Point3f(p1.x * dd, p1.y * dd, dd));//得到图1特征点在相机坐标下的3d坐标
        pts_2d.push_back(keypoints_2[m.trainIdx].pt);//得到图2特张点的像素坐标
    }

    cout << "3d-2d pairs: " << pts_3d.size() << endl;//3d-2d配对个数得用pts_3d的size


    cout << "使用cv_PnP求解 位姿" << endl;

    cv::Mat r, t;
    cv::Mat R;
    chrono::steady_clock::time_point t1,t2;
    chrono::duration<double> time_used;

    cout << "***********************************SOLVEPNP_ITERATIVE***********************************" << endl;
    t1 = chrono::steady_clock::now();
    //Mat()这个参数指的是畸变系数向量
    cv::solvePnP(pts_3d, pts_2d, K, cv::Mat(), r, t, false,cv::SOLVEPNP_ITERATIVE); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cv::Rodrigues(r, R);//r为旋转向量形式,利用cv的Rodrigues()函数将旋转向量转换为旋转矩阵
    cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl;
    cout << "calling bundle adjustment" << endl;
    cout << "***********************************SOLVEPNP_ITERATIVE***********************************" << endl;

    cout << "***********************************SOLVEPNP_EPNP***********************************" << endl;
    t1 = chrono::steady_clock::now();
    //Mat()这个参数指的是畸变系数向量
    cv::solvePnP(pts_3d, pts_2d, K, cv::Mat(), r, t, false,cv::SOLVEPNP_EPNP); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cv::Rodrigues(r, R);//r为旋转向量形式,利用cv的Rodrigues()函数将旋转向量转换为旋转矩阵
    cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl;
    cout << "calling bundle adjustment" << endl;
    cout << "***********************************SOLVEPNP_EPNP***********************************" << endl;

    cout << "***********************************SOLVEPNP_UPNP***********************************" << endl;
    t1 = chrono::steady_clock::now();
    //Mat()这个参数指的是畸变系数向量
    cv::solvePnP(pts_3d, pts_2d, K, cv::Mat(), r, t, false,cv::SOLVEPNP_UPNP); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cv::Rodrigues(r, R);//r为旋转向量形式,利用cv的Rodrigues()函数将旋转向量转换为旋转矩阵
    cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl;
    cout << "calling bundle adjustment" << endl;
    cout << "***********************************SOLVEPNP_UPNP***********************************" << endl;

    cout << "***********************************SOLVEPNP_DLS***********************************" << endl;
    t1 = chrono::steady_clock::now();
    //Mat()这个参数指的是畸变系数向量
    cv::solvePnP(pts_3d, pts_2d, K, cv::Mat(), r, t, false,cv::SOLVEPNP_DLS); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cv::Rodrigues(r, R);//r为旋转向量形式,利用cv的Rodrigues()函数将旋转向量转换为旋转矩阵
    cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl;
    cout << "calling bundle adjustment" << endl;
    cout << "***********************************SOLVEPNP_DLS***********************************" << endl;

    cout << "***********************************SOLVEPNP_P3P***********************************" << endl;

    vector<cv::Point3f> pts_p3p_3d;//创建容器pts_3d存放3d点(图1对应的特征点的相机坐标下的3d点)
    vector<cv::Point2f> pts_p3p_2d;//创建容器pts_2d存放图2的特征点

    //取出其中的4个点对
    for (int i = 0; i < 4; i++)
    {
        pts_p3p_3d.push_back(pts_3d[i]);
        pts_p3p_2d.push_back(pts_2d[i]);
    }

/*    cv::Mat pts_p3p = (cv::Mat_<double>(4,3) <<
            pts_p3p_3d[0].x,pts_p3p_3d[0].y,pts_p3p_3d[0].z,
            pts_p3p_3d[1].x,pts_p3p_3d[1].y,pts_p3p_3d[1].z,
            pts_p3p_3d[2].x,pts_p3p_3d[2].y,pts_p3p_3d[2].z,
            pts_p3p_3d[3].x,pts_p3p_3d[3].y,pts_p3p_3d[3].z
        );*/

    t1 = chrono::steady_clock::now();
    //Mat()这个参数指的是畸变系数向量
    cv::solvePnP(pts_p3p_3d, pts_p3p_2d, K, cv::Mat(), r, t, false,cv::SOLVEPNP_P3P); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    t2 = chrono::steady_clock::now();
    time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
    cv::Rodrigues(r, R);//r为旋转向量形式,利用cv的Rodrigues()函数将旋转向量转换为旋转矩阵
    cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;
    cout << "R=" << endl << R << endl;
    cout << "t=" << endl << t << endl;
    cout << "calling bundle adjustment" << endl;
    cout << "***********************************SOLVEPNP_P3P***********************************" << endl;
}



//实现特征匹配
void find_feature_matches(const cv::Mat& img_1, const cv::Mat& img_2,
                          std::vector<cv::KeyPoint>& keypoints_1, std::vector<cv::KeyPoint>& keypoints_2,
                          std::vector<cv::DMatch>& matches)
{
    //-- 初始化
    cv::Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    cv::Ptr<cv::FeatureDetector> detector = cv::ORB::create();
    cv::Ptr<cv::DescriptorExtractor> descriptor = cv::ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );

    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("BruteForce-Hamming");

    //-- 第一步:检测 Oriented FAST 角点位置
    detector->detect(img_1, keypoints_1);
    detector->detect(img_2, keypoints_2);

    //-- 第二步:根据角点位置计算 BRIEF 描述子
    descriptor->compute(img_1, keypoints_1, descriptors_1);
    descriptor->compute(img_2, keypoints_2, descriptors_2);

    //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
    vector<cv::DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match(descriptors_1, descriptors_2, match);

    //-- 第四步:匹配点对筛选
    double min_dist = 10000, max_dist = 0;

    //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
    for (int i = 0; i < descriptors_1.rows;i++)
    {
        double dist = match[i].distance;
        if(dist > max_dist) max_dist = dist;
        if(dist < min_dist) min_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);;
    printf("-- Min dist : %f \n", min_dist);

    //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ))
        {
            matches.push_back ( match[i] );
        }
    }
}

//实现像素坐标到相机坐标的转换(求出来的只是包含相机坐标下的x,y的二维点)
cv::Point2d pixel2cam(const cv::Point2d& p, const cv::Mat& K)
{
    return cv::Point2d(
            ((p.x - K.at<double>(0,2))/K.at<double>(0,0)),
            ((p.y - K.at<double>(1,2))/K.at<double>(1,1))
    );
}

1.5.3 CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(pnps)

set( CMAKE_CXX_STANDARD 14)
set( CMAKE_BUILD_TYPE Release )

find_package(OpenCV 3.1 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRECTORIES})

include_directories("/usr/include/eigen3")

add_executable(pnps src/pnps.cpp)

target_link_libraries( pnps
        ${OpenCV_LIBS}
        )

1.5.4 输出结果为

/home/bupo/shenlan/zuoye/cap7/pnps/cmake-build-release/pnps ./src/1.png ./src/2.png ./src/1_depth.png ./src/2_depth.png
[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-2d pairs: 77
使用cv_PnP求解 位姿
***********************************SOLVEPNP_ITERATIVE***********************************
solve pnp in opencv cost time: 0.000981819 seconds.
R=
[0.9979193252225089, -0.05138618904650331, 0.03894200717386666;
 0.05033852907733834, 0.9983556574295412, 0.02742286944793203;
 -0.04028712992734059, -0.02540552801469367, 0.9988651091656532]
t=
[-0.1255867099750398;
 -0.007363525258777434;
 0.06099926588678889]
calling bundle adjustment
***********************************SOLVEPNP_ITERATIVE***********************************
***********************************SOLVEPNP_EPNP***********************************
solve pnp in opencv cost time: 0.00010083 seconds.
R=
[0.9978654469053291, -0.05171629787598204, 0.03987448314938394;
 0.05055337177925334, 0.9982813811698801, 0.02964187260119165;
 -0.04133892202484725, -0.02756281087914256, 0.9987649297919227]
t=
[-0.1266609334454452;
 -0.01111413717711941;
 0.05673412814657697]
calling bundle adjustment
***********************************SOLVEPNP_EPNP***********************************
***********************************SOLVEPNP_UPNP***********************************
solve pnp in opencv cost time: 5.0345e-05 seconds.
R=
[0.9978654469053291, -0.05171629787598204, 0.03987448314938394;
 0.05055337177925334, 0.9982813811698801, 0.02964187260119165;
 -0.04133892202484725, -0.02756281087914256, 0.9987649297919227]
t=
[-0.1266609334454452;
 -0.01111413717711941;
 0.05673412814657697]
calling bundle adjustment
***********************************SOLVEPNP_UPNP***********************************
***********************************SOLVEPNP_DLS***********************************
solve pnp in opencv cost time: 4.746e-05 seconds.
R=
[0.9978654469053291, -0.05171629787598204, 0.03987448314938394;
 0.05055337177925334, 0.9982813811698801, 0.02964187260119165;
 -0.04133892202484725, -0.02756281087914256, 0.9987649297919227]
t=
[-0.1266609334454452;
 -0.01111413717711941;
 0.05673412814657697]
calling bundle adjustment
***********************************SOLVEPNP_DLS***********************************
***********************************SOLVEPNP_P3P***********************************
solve pnp in opencv cost time: 2.0098e-05 seconds.
R=
[0.9982328467626991, -0.04009678360638762, 0.0438569445864655;
 0.03857211555269049, 0.9986400289063435, 0.03507541258100014;
 -0.04520371163773695, -0.03332177381773087, 0.9984218967169202]
t=
[-0.1276949195981435;
 -0.02542013264231638;
 0.04538521283166519]
calling bundle adjustment
***********************************SOLVEPNP_P3P***********************************

进程已结束,退出代码0

1.5.5 报错解决

1.5.6 感悟

  • 处理速度上没法比较,可能数据太小,有时这个快,有时那个快

1.6. 在PnP 优化中,将第一个相机的观测也考虑进来,程序应如何书写?最后结果会有何变化?

1.6.1 前言

  • 实际上在PnP例子中,我们可以把第一帧作为世界坐标系,然后在优化过程中对于第一帧的RT我们不做优化,但是我们在添加节点时仍然要将第一帧在世界坐标系下的空间点加入到图中,并且与第一帧的位姿链接起来,然后将第一帧坐标系下的空间点与第二帧的位姿连接起来。
  • 本工程所必需的图片和cmake_modules,以及我的工程实现自取:链接:链接: https://pan.baidu.com/s/1hnpyYyN8SyAsw1wgthcsVg 提取码: 0klt

1.6.2 cap7_6_pnp.cpp

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <chrono>
using namespace std;
using namespace cv;
#define  MyselfBAFunc  1        //1 课后习题6需要的BA优化函数
//0 例程用的
void find_feature_matches (
        const Mat& img_1, const Mat& img_2,
        std::vector<KeyPoint>& keypoints_1,
        std::vector<KeyPoint>& keypoints_2,
        std::vector< DMatch >& matches );

// 像素坐标转相机归一化坐标
Point2d pixel2cam ( const Point2d& p, const Mat& K );
#if MyselfBAFunc
void MyselfBAFun(
        const vector< cv::Point3f> &points1_3d,  //第一帧3d点(匹配好且有深度信息的点)
        const vector< cv::Point2f> &points1_2d,  //第一帧像素平面2d点(匹配好的点)
        const vector< cv::Point2f> &points2_2d,  //第二帧像素平面2d点(匹配好的点)
        const Mat&K,                             //因为里面没有修改相应的值所以用const
        const Mat&R,
        const Mat&t
);
#else
void bundleAdjustment (
    const vector<Point3f> &points_3d,
    const vector<Point2f> &points_2d,
    const Mat& K,
    Mat& R, Mat& t
);
#endif
int main ( int argc, char** argv )
{
    cv::Mat img_1,img_2,d1,d2;
    if(argc!=5)
    {
        //读取图片
        img_1 = cv::imread("../src/1.png", CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread("../src/2.png",1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread("../src/1_depth.png", CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread("../src/2_depth.png", -1);
    }
    else
    {
        //读取图片
        img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread(argv[2],1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread(argv[4], -1);
    }
    assert(img_1.data != nullptr && img_2.data != nullptr);//若读取的图片没有内容,就终止程序


    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches ( img_1, img_2, keypoints_1, keypoints_2, matches );
    cout<<"一共找到了"<<matches.size() <<"组匹配点"<<endl;

    Mat K = ( Mat_<double> ( 3,3 ) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1 );
    vector<Point3f> pts_3d;
    vector<Point2f> pts_2d;
#if MyselfBAFunc
    vector<Point2f> pts1_2d;                                  //第一帧下的像素坐标
#endif
    for ( DMatch m:matches )
    {
        //可以参考书上101页对应的程序,表示获取对应位置的深度图片的深度值
        ushort d = d1.ptr<unsigned short> (int ( keypoints_1[m.queryIdx].pt.y )) [ int ( keypoints_1[m.queryIdx].pt.x ) ];
        if ( d == 0 )   // bad depth
            continue;
        float dd = d/5000.0;
        Point2d p1 = pixel2cam ( keypoints_1[m.queryIdx].pt, K );
        pts_3d.push_back ( Point3f ( p1.x*dd, p1.y*dd, dd ) );//表示的是 相机坐标系下的3D坐标 //这里是通过RGBD获取的深度信息,但是我们可以用三角测量获得第一帧下的3D坐标
        pts_2d.push_back ( keypoints_2[m.trainIdx].pt );      //第二帧 匹配好的点 2D像素坐标
#if MyselfBAFunc
        pts1_2d.push_back( keypoints_1[m.queryIdx].pt );      //第一帧 匹配好的点 2D像素坐标
#endif
    }
    //上面已经获得了第一帧坐标系的下的3d坐标 相当于世界坐标系下的坐标 (因为仅仅有两针图像帧 所以 以第一帧为世界坐标,也就是说 世界坐标到第一帧图像的R=I T=0 )
    cout<<"3d-2d pairs: "<<pts_3d.size() <<endl;
    Mat r, t; //定义旋转和平移变量
    //参数信息: 第一帧3D 第二帧像素2D 内参矩阵k 无失真补偿  旋转向量r 平移向量t false表示输入的r t不作为初始化值 如果是true则此时会把t r作为初始值进行迭代
    solvePnP ( pts_3d, pts_2d, K, Mat(), r, t, false,SOLVEPNP_EPNP ); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法SOLVEPNP_EPNP
    Mat R;
    cv::Rodrigues ( r, R );                                 // r为旋转向量形式,用Rodrigues公式转换为矩阵

    cout<<"R="<<endl<<R<<endl;
    cout<<"t="<<endl<<t<<endl;
    cout<<"calling bundle adjustment"<<endl;
#if MyselfBAFunc
    MyselfBAFun( pts_3d, pts1_2d, pts_2d, K, R, t);
#else
    bundleAdjustment ( pts_3d, pts_2d, K, R, t );
#endif

}

void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                            std::vector<KeyPoint>& keypoints_1,
                            std::vector<KeyPoint>& keypoints_2,
                            std::vector< DMatch >& matches )
{
    //-- 初始化
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create ( "BruteForce-Hamming" );
    //-- 第一步:检测 Oriented FAST 角点位置
    detector->detect ( img_1,keypoints_1 );
    detector->detect ( img_2,keypoints_2 );

    //-- 第二步:根据角点位置计算 BRIEF 描述子
    descriptor->compute ( img_1, keypoints_1, descriptors_1 );
    descriptor->compute ( img_2, keypoints_2, descriptors_2 );

    //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match ( descriptors_1, descriptors_2, match );

    //-- 第四步:匹配点对筛选
    double min_dist=10000, max_dist=0;

    //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = match[i].distance;
        if ( dist < min_dist ) min_dist = dist;
        if ( dist > max_dist ) max_dist = dist;
    }

    printf ( "-- Max dist : %f \n", max_dist );
    printf ( "-- Min dist : %f \n", min_dist );

    //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
        {
            matches.push_back ( match[i] );
        }
    }
}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
            (
                    ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
                    ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
            );
}

#if MyselfBAFunc
void MyselfBAFun(
        const vector< cv::Point3f> &points1_3d,  //第一帧3d点(匹配好且有深度信息的点)
        const vector< cv::Point2f> &points1_2d,  //第一帧像素平面2d点(匹配好的点)
        const vector< cv::Point2f> &points2_2d,  //第二帧像素平面2d点(匹配好的点)
        const Mat&K,                             //因为里面没有修改相应的值所以用const
        const Mat&R,
        const Mat&t
){
#define PoseVertexNum 2                          //定义位姿节点个数 本试验仅仅有2帧图

//设置优化器
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<6,3> > Block;  //优化位姿6维  优化路标点3维
    std::unique_ptr<Block::LinearSolverType> linearSolver=g2o::make_unique < g2o::LinearSolverCSparse<Block::PoseMatrixType> >();//线性求解设为CSparse
    std::unique_ptr<Block> solver_ptr (new Block(std::move(linearSolver) ) );
    g2o::OptimizationAlgorithmLevenberg *solver = new g2o::OptimizationAlgorithmLevenberg( std::move(solver_ptr) );

/*  Block::LinearSolverType *linearSolver = new g2o::LinearSolverCSparse<Block::PoseMatrixType>();  //设置线性求解器类型
    Block *solver_ptr = new Block( std::unique_ptr<Block::LinearSolverType>(linearSolver) );  //矩阵块求解器
    g2o::OptimizationAlgorithmLevenberg *solver = new g2o::OptimizationAlgorithmLevenberg( std::unique_ptr<g2o::Solver>(solver_ptr) ); //LM优化算法
*/
    g2o::SparseOptimizer optimizer;     //设置稀疏优化器
    optimizer.setAlgorithm(solver);     //设置优化算法

//向优化器中添加节点和边
    //添加节点 Vertex
    //(1)添加位姿节点 第一帧作为世界坐标系(不优化) 同时也是相机坐标系
    int poseVertexIndex = 0;                                       //位姿节点索引为0  总共两个位姿态节点 第一帧和第二帧
    Eigen::Matrix3d R2Init;
    R2Init <<
           R.at<double> ( 0,0 ), R.at<double> ( 0,1 ), R.at<double> ( 0,2 ) ,
            R.at<double> ( 1,0 ), R.at<double> ( 1,1 ), R.at<double> ( 1,2 ) ,
            R.at<double> ( 2,0 ), R.at<double> ( 2,1 ), R.at<double> ( 2,2 ) ;
    for( ; poseVertexIndex < PoseVertexNum ; poseVertexIndex++ )
    {
        auto pose = new g2o::VertexSE3Expmap();  //相机位姿
        pose->setId( poseVertexIndex );                            //设置节点标号
        pose->setFixed( poseVertexIndex == 0 );                    //如果是第一帧 则固定住 不优化
        if( poseVertexIndex == 1 )                                 //第二帧时让RT估计值为pnp得到的值 加快优化速度!
            pose->setEstimate(
                    g2o::SE3Quat( R2Init,
                                  Eigen::Vector3d( t.at<double> ( 0,0 ), t.at<double> ( 1,0 ), t.at<double> ( 2,0 ) )
                    )
            );                                    //两帧图像的位姿预设值都为 r=单位矩阵 t=0(当然这里可以填写自己设定的预设值 比如Pnp估计值)
        else
            pose->setEstimate( g2o::SE3Quat() );
        optimizer.addVertex( pose );                               //位姿节点加入优化器
    }
    //(2)添加路标节点
    int landmarkVertexIndex = PoseVertexNum ;
    for( int i = 0;  i < points1_3d.size() ; i ++ ){
        auto point = new g2o::VertexPointXYZ();                 //路标点
        point->setId( landmarkVertexIndex + i );                   //设置路标点节点标号
        point->setMarginalized( true );                            //设置边缘化
        point->setEstimate( Eigen::Vector3d( points1_3d[i].x, points1_3d[i].y, points1_3d[i].z ) ); //设置估计值 实际上就是第一帧坐标下的3d点坐标(也是世界坐标系下的坐标)
        optimizer.addVertex( point );                              //路标节点加入优化器
    }
    //加入相机参数(当然有另一种方式:查看笔记两种边的不同点)(最后一项为0 默认fx = fy 然后优化位姿 与g2o::EdegeSE3ProjectXYZ不同 笔记以记载 )
    g2o::CameraParameters *camera = new g2o::CameraParameters(
            K.at< double >(0,0), Eigen::Vector2d( K.at< double >(0,2), K.at< double >(1,2) ),0
    );
    camera->setId( 0 );
    optimizer.addParameter( camera );

    //加入边edge
    //世界坐标下路标点连接到第一帧位姿节点(因为以第一帧坐标系当做世界坐标系 所以我们前面没有优化第一帧的RT  仅仅优化第一帧到第二帧的RT)
    for(int i= 0 ;i < points1_2d.size() ; i++ ){
        auto edge = new g2o::EdgeProjectXYZ2UV();               //设置连接到第一帧的边
        //二元边 连接节点
        edge->setVertex( 0, dynamic_cast< g2o::VertexPointXYZ * >( optimizer.vertex( landmarkVertexIndex + i ) ) ); //世界坐标系下的路标节点
        edge->setVertex( 1, dynamic_cast< g2o::VertexSE3Expmap * >( optimizer.vertex(0) ) );
        edge->setMeasurement( Eigen::Vector2d ( points1_2d[i].x, points1_2d[i].y ) );   //设置测量值为第一帧下的相机归一化平面坐标
        edge->setParameterId(0,0); //最后一位设置使用的相机参数(因为上面仅仅输入了一个相机参数id=0, 对应上面camer->setId(0),第一个参数0不知道是什么,但是必须为0)
        edge->setInformation ( Eigen::Matrix2d::Identity() );   //信息矩阵2x2的单位阵
        optimizer.addEdge( edge );
    }
    //第一帧路标点链接到第二帧位姿节点
    for( int i=0 ;i < points1_2d.size() ; i++){
        auto edge = new g2o::EdgeProjectXYZ2UV();   //设置链接到第二帧的边
        edge->setVertex( 0, dynamic_cast< g2o::VertexPointXYZ * >( optimizer.vertex( landmarkVertexIndex + i) ) ); //第一帧坐标系下的路标点
        edge->setVertex( 1, dynamic_cast< g2o::VertexSE3Expmap *> ( optimizer.vertex(1) ) ); //连接到第二个位姿节点
        edge->setMeasurement( Eigen::Vector2d ( points2_2d[i].x, points2_2d[i].y ) );        //设置测量值为第二帧下的相机归一化平面坐标
        edge->setInformation( Eigen::Matrix2d::Identity() ); //信息矩阵为2x2 实际上就是误差的加权为1:1的
        edge->setParameterId(0,0);
        optimizer.addEdge( edge );
    }

//run 算法
    cout<<"开始优化!"<<endl;
    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    optimizer.setVerbose ( true );          //设置详细信息
    optimizer.initializeOptimization( );    //优化器初始化
    optimizer.optimize( 100 );              //进行最多100次的优化
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    cout<<"优化结束!"<<endl;
    chrono::duration< double > time_used = chrono::duration_cast< chrono::duration<double> >( t2 -t1 );
    cout<<"optimization costs time: "<<time_used.count() << "seconds."<<endl;
    cout<<endl<<"after optimization:"<<endl;
    //输出优化节点的位姿 estimate()输出的是SE3类型   Eigen::Isometry3d 实际上就是4x4的一个矩阵表示变换矩阵 这个类初始化可以是李代数
    //这里有一点不明白的是 Eigen::Isometry3d()为什么可以用estimate()返回的SE3Quat类型初始化???????
    cout<<"T="<<endl<<Eigen::Isometry3d ( dynamic_cast<g2o::VertexSE3Expmap *>(optimizer.vertex(1))->estimate()).matrix() <<endl;
/*    g2o::SE3Quat  a();
    Eigen::Isometry3d( a);*/
}
#else
void bundleAdjustment (
    const vector< Point3f > &points_3d,
    const vector< Point2f > &points_2d,//这里加不加引用 都是会报错
    const Mat& K,
    Mat& R, Mat& t )
{
      // 初始化g2o
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<6,3> > Block;  // pose 维度为 6, landmark 维度为 3
    std::unique_ptr<Block::LinearSolverType> linearSolver( new g2o::LinearSolverCSparse<Block::PoseMatrixType>() );
    std::unique_ptr<Block> solver_ptr ( new Block ( std::move(linearSolver) ) );
    g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg ( std::move(solver_ptr) );
/*    Block::LinearSolverType* linearSolver = new g2o::LinearSolverCSparse<Block::PoseMatrixType>(); // 线性方程求解器
    Block* solver_ptr = new Block ( std::unique_ptr<Block::LinearSolverType>(linearSolver) );     // 矩阵块求解器
    g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg ( std::unique_ptr<g2o::Solver>(solver_ptr) );*/
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm ( solver );

    // vertex
    g2o::VertexSE3Expmap* pose = new g2o::VertexSE3Expmap(); // camera pose
    Eigen::Matrix3d R_mat;
    R_mat <<
          R.at<double> ( 0,0 ), R.at<double> ( 0,1 ), R.at<double> ( 0,2 ),
               R.at<double> ( 1,0 ), R.at<double> ( 1,1 ), R.at<double> ( 1,2 ),
               R.at<double> ( 2,0 ), R.at<double> ( 2,1 ), R.at<double> ( 2,2 );
    pose->setId ( 0 );
    //设置顶点估计值 然后才会用BA再次优化
    pose->setEstimate ( g2o::SE3Quat (
                            R_mat,
                            Eigen::Vector3d ( t.at<double> ( 0,0 ), t.at<double> ( 1,0 ), t.at<double> ( 2,0 ) )
                        ) );
    optimizer.addVertex ( pose );

    int index = 1;
    for ( const Point3f &p:points_3d )   // landmarks 在执行这里的时候,实际上是所有空间点(匹配好的)组成的顶点
    {
        g2o::VertexPointXYZ* point = new g2o::VertexPointXYZ();
        point->setId ( index++ );
        point->setEstimate ( Eigen::Vector3d ( p.x, p.y, p.z ) );
        point->setMarginalized ( true ); // g2o 中必须设置 marg 参见第十讲内容   待注释??
        optimizer.addVertex ( point );
    }

    // parameter: camera intrinsics
    g2o::CameraParameters* camera = new g2o::CameraParameters (
        K.at<double> ( 0,0 ), Eigen::Vector2d ( K.at<double> ( 0,2 ), K.at<double> ( 1,2 ) ), 0
    );
    camera->setId ( 0 );
    optimizer.addParameter ( camera );

    // edges
    index = 1;
    for ( const Point2f &p:points_2d )//每个点都会与位姿节点链接 所以就有那么多的边
    {
        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setId ( index );
        //下行转化 要用动态类型转换
        //将2元边连接上顶点
        edge->setVertex ( 0, dynamic_cast<g2o::VertexPointXYZ*> ( optimizer.vertex ( index ) ) );//空间点类型指针
        edge->setVertex ( 1, pose );                                                                //位姿类型指针
        edge->setMeasurement ( Eigen::Vector2d ( p.x, p.y ) );                                      //设置测量值
        edge->setParameterId ( 0,0 );
        edge->setInformation ( Eigen::Matrix2d::Identity() ); //因为误差向量为2维,所以信息矩阵也是2维,这里设置加权为1 即单位阵
        optimizer.addEdge ( edge );
        index++;
    }

    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    optimizer.setVerbose ( true );
    optimizer.initializeOptimization();
    optimizer.optimize ( 100 );
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>> ( t2-t1 );
    cout<<"optimization costs time: "<<time_used.count() <<" seconds."<<endl;

    cout<<endl<<"after optimization:"<<endl;
    cout<<"T="<<endl<<Eigen::Isometry3d ( pose->estimate() ).matrix() <<endl;
}
#endif

1.6.3 CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(cap7_6_pnp)

set( CMAKE_CXX_STANDARD 14)
#SET(G2O_LIBS g2o_cli g2o_ext_freeglut_minimal g2o_simulator g2o_solver_slam2d_linear g2o_types_icp g2o_types_slam2d g2o_core g2o_interface g2o_solver_csparse g2o_solver_structure_only g2o_types_sba g2o_types_slam3d g2o_csparse_extension g2o_opengl_helper g2o_solver_dense g2o_stuff g2o_types_sclam2d g2o_parser g2o_solver_pcg g2o_types_data g2o_types_sim3 cxsparse )
set( CMAKE_BUILD_TYPE Release )
# 添加cmake模块以使用g2o
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

find_package(OpenCV 3 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRECTORIES})

include_directories("/usr/include/eigen3")

find_package(G2O REQUIRED)
include_directories(${G2O_INCLUDE_DIRECTORIES})

find_package(CSparse REQUIRED)
include_directories(${CSPARSE_INCLUDE_DIR})

find_package(Sophus  REQUIRED)
include_directories(${Sophus_INCLUDE_DIRS})

add_executable(cap7_6_pnp src/cap7_6_pnp.cpp)

target_link_libraries( cap7_6_pnp
        ${OpenCV_LIBS}
        g2o_core g2o_stuff g2o_types_sba g2o_csparse_extension
        g2o_types_slam3d
        #${G2O_LIBS}
        ${CSPARSE_LIBRARY}
        ${Sophus_LIBRARIES} fmt
        )

1.6.4 输出结果为

/home/bupo/shenlan/zuoye/cap7/cap7_6_pnp/cmake-build-release/cap7_6_pnp /home/bupo/shenlan/zuoye/cap7/cap7_6_pnp/src/1.png /home/bupo/shenlan/zuoye/cap7/cap7_6_pnp/src/2.png /home/bupo/shenlan/zuoye/cap7/cap7_6_pnp/src/1_depth.png /home/bupo/shenlan/zuoye/cap7/cap7_6_pnp/src/2_depth.png
[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-2d pairs: 77
R=
[0.9978654469053291, -0.05171629787598204, 0.03987448314938394;
 0.05055337177925334, 0.9982813811698801, 0.02964187260119165;
 -0.04133892202484725, -0.02756281087914256, 0.9987649297919227]
t=
[-0.1266609334454452;
 -0.01111413717711941;
 0.05673412814657697]
calling bundle adjustment
开始优化!
优化结束!
optimization costs time: 0.00311775seconds.

after optimization:
T=
  0.997695 -0.0513271  0.0443762  -0.133492
 0.0498358   0.998176  0.0340829 -0.0182294
-0.0460446 -0.0317928   0.998433  0.0413093
         0          0          0          1
iteration= 0	 chi2= 73.911450	 time= 0.000859681	 cumTime= 0.000859681	 edges= 154	 schur= 1	 lambda= 80.062423	 levenbergIter= 1
iteration= 1	 chi2= 44.088323	 time= 5.2844e-05	 cumTime= 0.000912525	 edges= 154	 schur= 1	 lambda= 26.687474	 levenbergIter= 1
iteration= 2	 chi2= 40.590311	 time= 4.6602e-05	 cumTime= 0.000959127	 edges= 154	 schur= 1	 lambda= 8.895825	 levenbergIter= 1
iteration= 3	 chi2= 39.648871	 time= 4.6151e-05	 cumTime= 0.00100528	 edges= 154	 schur= 1	 lambda= 2.965275	 levenbergIter= 1
iteration= 4	 chi2= 39.446579	 time= 6.8225e-05	 cumTime= 0.0010735	 edges= 154	 schur= 1	 lambda= 0.988425	 levenbergIter= 1
iteration= 5	 chi2= 39.428292	 time= 4.6492e-05	 cumTime= 0.00111999	 edges= 154	 schur= 1	 lambda= 0.329475	 levenbergIter= 1
iteration= 6	 chi2= 39.427612	 time= 4.6212e-05	 cumTime= 0.00116621	 edges= 154	 schur= 1	 lambda= 0.219650	 levenbergIter= 1
iteration= 7	 chi2= 39.427596	 time= 4.6332e-05	 cumTime= 0.00121254	 edges= 154	 schur= 1	 lambda= 0.146433	 levenbergIter= 1
iteration= 8	 chi2= 39.427596	 time= 4.581e-05	 cumTime= 0.00125835	 edges= 154	 schur= 1	 lambda= 0.097622	 levenbergIter= 1
iteration= 9	 chi2= 39.427596	 time= 5.9337e-05	 cumTime= 0.00131769	 edges= 154	 schur= 1	 lambda= 0.065081	 levenbergIter= 1
iteration= 10	 chi2= 39.427596	 time= 4.5951e-05	 cumTime= 0.00136364	 edges= 154	 schur= 1	 lambda= 0.043388	 levenbergIter= 1
iteration= 11	 chi2= 39.427596	 time= 4.579e-05	 cumTime= 0.00140943	 edges= 154	 schur= 1	 lambda= 0.028925	 levenbergIter= 1
iteration= 12	 chi2= 39.427596	 time= 4.571e-05	 cumTime= 0.00145514	 edges= 154	 schur= 1	 lambda= 0.019283	 levenbergIter= 1
iteration= 13	 chi2= 39.427596	 time= 4.552e-05	 cumTime= 0.00150066	 edges= 154	 schur= 1	 lambda= 0.012856	 levenbergIter= 1
iteration= 14	 chi2= 39.427596	 time= 5.2464e-05	 cumTime= 0.00155312	 edges= 154	 schur= 1	 lambda= 0.008570	 levenbergIter= 1
iteration= 15	 chi2= 39.427596	 time= 4.6152e-05	 cumTime= 0.00159927	 edges= 154	 schur= 1	 lambda= 0.005714	 levenbergIter= 1
iteration= 16	 chi2= 39.427596	 time= 4.555e-05	 cumTime= 0.00164482	 edges= 154	 schur= 1	 lambda= 0.003809	 levenbergIter= 1
iteration= 17	 chi2= 39.427596	 time= 8.2714e-05	 cumTime= 0.00172754	 edges= 154	 schur= 1	 lambda= 0.020315	 levenbergIter= 3
iteration= 18	 chi2= 39.427596	 time= 0.000217741	 cumTime= 0.00194528	 edges= 154	 schur= 1	 lambda= 731925863083109.000000	 levenbergIter= 10

进程已结束,退出代码0

1.6.5 感悟

  • 从这个例子中可以学到如何使用g2o构建多种顶点和边,并选择优化对象,实现优化

参考:视觉slam十四讲第七章课后习题6


1.7. 在ICP 程序中,将空间点也作为优化变量考虑进来,程序应如何书写?最后结果会有何变化?

1.7.1 前言

1.7.2 cap7_7_icp.cpp

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <Eigen/SVD>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <chrono>
#include <g2o/core/optimization_algorithm_levenberg.h>

using namespace std;
using namespace cv;
/*+++++++++++++++++++++++++++++++++++++++++配置信息+++++++++++++++++++++++++++++++++++++++++++++***/

#define  MyselfOptimizerMethod  1   //  1: 课后习题7代码结果
//  0: 3d-3d书上例子
#define  ISBAProvideInitValue   0   //  1: 用ICP结果为BA提供初值
//  0: 用单位矩阵I和0平移向量作为初始值

#define  BAOptimizer            1   //  1: 加入BA优化
//  0: 不加入BA优化

/*+++++++++++++++++++++++++++++++++++++++++End配置信息+++++++++++++++++++++++++++++++++++++++++++++*/

void find_feature_matches (
        const Mat& img_1, const Mat& img_2,
        std::vector<KeyPoint>& keypoints_1,
        std::vector<KeyPoint>& keypoints_2,
        std::vector< DMatch >& matches );

// 像素坐标转相机归一化坐标
Point2d pixel2cam ( const Point2d& p, const Mat& K );

void pose_estimation_3d3d (
        const vector<Point3f>& pts1,
        const vector<Point3f>& pts2,
        Mat& R, Mat& t
);

#if BAOptimizer
void bundleAdjustment (
        const vector< Point3f >& pts1,
        const vector< Point3f >& pts2,
        Mat& R, Mat& t );

#if MyselfOptimizerMethod
//课后习题7题
class EdgeProjectXYZRGBDPoseOnly : public g2o::BaseBinaryEdge<3, Eigen::Vector3d,g2o::VertexPointXYZ, g2o::VertexSE3Expmap>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
    EdgeProjectXYZRGBDPoseOnly( ) {}

    virtual void computeError()
    {
        const g2o::VertexPointXYZ * point  = dynamic_cast< const g2o::VertexPointXYZ *> ( _vertices[0] );
        const g2o::VertexSE3Expmap * pose = dynamic_cast<const g2o::VertexSE3Expmap*> ( _vertices[1] );
        // measurement is p, point is p'
        //pose->estimate().map( _point );中用estimate()估计一个值后,然后用映射函数 就是旋转加平移 将其_point映射到另一个相机坐标系下去
        //观测值 - 映射值 因为我们做的试验是3D-3D 所以这里把第一帧的3D坐标当做测量值,然后把第二帧坐标映射到第一帧坐标系中
        _error = _measurement - pose->estimate().map( point->estimate() );
    }
    //表示线性化 误差函数 就是要计算雅克比矩阵
    virtual void linearizeOplus()override final                //这里override 表示override覆盖基类的同名同参函数, final表示派生类的某个函数不能覆盖这个函数
    {
        g2o::VertexSE3Expmap* pose = dynamic_cast<g2o::VertexSE3Expmap *> (_vertices[1]);
        g2o::VertexPointXYZ * point = dynamic_cast< g2o::VertexPointXYZ * > (_vertices[0] );
        g2o::SE3Quat T(pose->estimate());
        Eigen::Vector3d xyz_trans = T.map( point->estimate() );//映射到第二帧相机坐标系下的坐标值
        double x = xyz_trans[0];                               //第一帧到第二帧坐标系下变换后的坐标值
        double y = xyz_trans[1];
        double z = xyz_trans[2];

        //关于空间点的雅克比矩阵-R, 误差对空间点,也就是第二张图片的空间点求导,de/dp',可以看书上p187
        _jacobianOplusXi = -T.rotation().toRotationMatrix();

        //3x6的关于优化变量的雅克比矩阵 可以看书上p187/p86页 自己推到的结果
        _jacobianOplusXj(0,0) = 0;
        _jacobianOplusXj(0,1) = -z;
        _jacobianOplusXj(0,2) = y;
        _jacobianOplusXj(0,3) = -1;
        _jacobianOplusXj(0,4) = 0;
        _jacobianOplusXj(0,5) = 0;

        _jacobianOplusXj(1,0) = z;
        _jacobianOplusXj(1,1) = 0;
        _jacobianOplusXj(1,2) = -x;
        _jacobianOplusXj(1,3) = 0;
        _jacobianOplusXj(1,4) = -1;
        _jacobianOplusXj(1,5) = 0;

        _jacobianOplusXj(2,0) = -y;
        _jacobianOplusXj(2,1) = x;
        _jacobianOplusXj(2,2) = 0;
        _jacobianOplusXj(2,3) = 0;
        _jacobianOplusXj(2,4) = 0;
        _jacobianOplusXj(2,5) = -1;
    }

    bool read ( istream& in ) {}
    bool write ( ostream& out ) const {}
};
#else
// g2o edge 边代表误差  所以在里面要override一个computerError函数
class EdgeProjectXYZRGBDPoseOnly : public g2o::BaseUnaryEdge<3, Eigen::Vector3d, g2o::VertexSE3Expmap>
{
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
    EdgeProjectXYZRGBDPoseOnly( const Eigen::Vector3d& point ) : _point(point) {}

    virtual void computeError()
    {
        const g2o::VertexSE3Expmap* pose = dynamic_cast<const g2o::VertexSE3Expmap*> ( _vertices[0] );
        // measurement is p, point is p'
        //pose->estimate().map( _point );中用estimate()估计一个值后,然后用映射函数 就是旋转加平移 将其_point映射到另一个相机坐标系下去
        //观测值 - 映射值 因为我们做的试验是3D-3D 所以这里把第一帧的3D坐标当做测量值,然后把第二帧坐标映射到第一帧坐标系中
        _error = _measurement - pose->estimate().map( _point );
    }
    //表示线性化 误差函数 就是要计算雅克比矩阵
    virtual void linearizeOplus()
    {
        g2o::VertexSE3Expmap* pose = dynamic_cast<g2o::VertexSE3Expmap *>(_vertices[0]);
        g2o::SE3Quat T(pose->estimate());
        Eigen::Vector3d xyz_trans = T.map(_point);//映射到第二帧相机坐标系下的坐标值
        double x = xyz_trans[0];                  //第一帧到第二帧坐标系下变换后的坐标值
        double y = xyz_trans[1];
        double z = xyz_trans[2];

        //3x6的关于优化变量的雅克比矩阵 可以看书上p179页 自己推到的结果
        _jacobianOplusXi(0,0) = 0;
        _jacobianOplusXi(0,1) = -z;
        _jacobianOplusXi(0,2) = y;
        _jacobianOplusXi(0,3) = -1;
        _jacobianOplusXi(0,4) = 0;
        _jacobianOplusXi(0,5) = 0;

        _jacobianOplusXi(1,0) = z;
        _jacobianOplusXi(1,1) = 0;
        _jacobianOplusXi(1,2) = -x;
        _jacobianOplusXi(1,3) = 0;
        _jacobianOplusXi(1,4) = -1;
        _jacobianOplusXi(1,5) = 0;

        _jacobianOplusXi(2,0) = -y;
        _jacobianOplusXi(2,1) = x;
        _jacobianOplusXi(2,2) = 0;
        _jacobianOplusXi(2,3) = 0;
        _jacobianOplusXi(2,4) = 0;
        _jacobianOplusXi(2,5) = -1;
    }

    bool read ( istream& in ) {}
    bool write ( ostream& out ) const {}
protected:
    Eigen::Vector3d _point;                     //设立误差点 之后将其与测量值进行相减 求得误差
};
#endif

#endif
//自己设置的打印配置信息
void printConfigInfo(){
    cout<<"This is the example from GaoXiang's book : ICP结果是第二帧到第一帧的RT."<<endl;
#if BAOptimizer

    cout<<"BA Optimizer is Provided!"<<endl;
#if ISBAProvideInitValue
    cout<<"The RT from ICP's result is Provided for BA as a initialization."<<endl;
#else
    cout<<" But,Not provide initialization for BA!"<<endl;
#endif

#if  MyselfOptimizerMethod
    cout<<"优化了空间点和位姿节点!"<<endl;
#else
    cout<<"未对空间点进行优化!"<<endl;
#endif

#endif
}
int main ( int argc, char** argv )
{
    cv::Mat img_1,img_2,depth1,depth2;
    if(argc!=5)
    {
        //读取图片
        img_1 = cv::imread("../src/1.png", CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread("../src/2.png",1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        depth1 = cv::imread("../src/1_depth.png", CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        depth2 = cv::imread("../src/2_depth.png", -1);
    }
    else
    {
        //读取图片
        img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread(argv[2],1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        depth1 = cv::imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        depth2 = cv::imread(argv[4], -1);
    }
    assert(img_1.data != nullptr && img_2.data != nullptr);//若读取的图片没有内容,就终止程序

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches ( img_1, img_2, keypoints_1, keypoints_2, matches );
    cout<<"一共找到了"<<matches.size() <<"组匹配点"<<endl;

    Mat K = ( Mat_<double> ( 3,3 ) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1 );
    vector<Point3f> pts1, pts2;

    for ( DMatch m:matches )
    {
        ushort d1 = depth1.ptr<unsigned short> ( int ( keypoints_1[m.queryIdx].pt.y ) ) [ int ( keypoints_1[m.queryIdx].pt.x ) ];
        ushort d2 = depth2.ptr<unsigned short> ( int ( keypoints_2[m.trainIdx].pt.y ) ) [ int ( keypoints_2[m.trainIdx].pt.x ) ];
        if ( d1==0 || d2==0 )   // bad depth
            continue;
        Point2d p1 = pixel2cam ( keypoints_1[m.queryIdx].pt, K );
        Point2d p2 = pixel2cam ( keypoints_2[m.trainIdx].pt, K );
        float dd1 = float ( d1 ) /5000.0;
        float dd2 = float ( d2 ) /5000.0;

        //存储特征点的3D坐标
        pts1.push_back ( Point3f ( p1.x*dd1, p1.y*dd1, dd1 ) );
        pts2.push_back ( Point3f ( p2.x*dd2, p2.y*dd2, dd2 ) );
    }

    cout<<"3d-3d pairs: "<<pts1.size() <<endl;
    Mat R, t;
    pose_estimation_3d3d ( pts1, pts2, R, t );                      //ICP方式的位姿估计
    cout<<"ICP via SVD results: "<<endl;
    cout<<"R = "<<R<<endl;
    cout<<"t = "<<t<<endl;
    //实际上是第一帧到第二帧的R T
    cout<<"第一帧到第二帧的旋转和平移:" << endl << "R_inv = "<<R.t() <<endl;
    cout<<"t_inv = "<<-R.t() *t<<endl;

    cout<<"calling bundle adjustment"<<endl;

#if  BAOptimizer
    bundleAdjustment( pts1, pts2, R, t );                          //BA优化估计相机位姿  RT 是提供优化的初始值
#endif

    // verify p1 = R*p2 + t
/*    for ( int i=0; i<5; i++ )
    {
        cout<<"p1 = "<<pts1[i]<<endl;
        cout<<"p2 = "<<pts2[i]<<endl;
        cout<<"(R*p2+t) = "<<
            R * (Mat_<double>(3,1)<<pts2[i].x, pts2[i].y, pts2[i].z) + t
            <<endl;
        cout<<endl;
    }
  */
}

void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                            std::vector<KeyPoint>& keypoints_1,
                            std::vector<KeyPoint>& keypoints_2,
                            std::vector< DMatch >& matches )
{
    //-- 初始化
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create("BruteForce-Hamming");
    //-- 第一步:检测 Oriented FAST 角点位置
    detector->detect ( img_1,keypoints_1 );
    detector->detect ( img_2,keypoints_2 );

    //-- 第二步:根据角点位置计算 BRIEF 描述子
    descriptor->compute ( img_1, keypoints_1, descriptors_1 );
    descriptor->compute ( img_2, keypoints_2, descriptors_2 );

    //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match ( descriptors_1, descriptors_2, match );

    //-- 第四步:匹配点对筛选
    double min_dist=10000, max_dist=0;

    //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = match[i].distance;
        if ( dist < min_dist ) min_dist = dist;
        if ( dist > max_dist ) max_dist = dist;
    }

    printf ( "-- Max dist : %f \n", max_dist );
    printf ( "-- Min dist : %f \n", min_dist );

    //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
        {
            matches.push_back ( match[i] );
        }
    }
}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
            (
                    ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
                    ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
            );
}

void pose_estimation_3d3d (
        const vector<Point3f>& pts1,
        const vector<Point3f>& pts2,
        Mat& R, Mat& t
)
{
    Point3f p1, p2;     // center of mass
    int N = pts1.size();
    for ( int i=0; i<N; i++ )
    {
        p1 += pts1[i];
        p2 += pts2[i];
    }
    p1 = Point3f( Vec3f(p1) /  N);
    p2 = Point3f( Vec3f(p2) / N);
    vector<Point3f>     q1 ( N ), q2 ( N ); // remove the center 去质心坐标
    for ( int i=0; i<N; i++ )
    {
        q1[i] = pts1[i] - p1;
        q2[i] = pts2[i] - p2;
    }

    // compute q1*q2^T
    Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
    for ( int i=0; i<N; i++ )
    {
        W += Eigen::Vector3d ( q1[i].x, q1[i].y, q1[i].z ) * Eigen::Vector3d ( q2[i].x, q2[i].y, q2[i].z ).transpose();
    }
    cout<<"W="<<W<<endl;

    // SVD on W
    Eigen::JacobiSVD<Eigen::Matrix3d> svd ( W, Eigen::ComputeFullU|Eigen::ComputeFullV );//因为知道矩阵W的类型 所以在分解的时候直接是FullU | FullV
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();
    cout<<"U="<<U<<endl;
    cout<<"V="<<V<<endl;

    //例程本身的实现方式 求出的R T是第二帧到第一帧的
    Eigen::Matrix3d R_ = U* ( V.transpose() );
    Eigen::Vector3d t_ = Eigen::Vector3d ( p1.x, p1.y, p1.z ) - R_ * Eigen::Vector3d ( p2.x, p2.y, p2.z );

    // convert to cv::Mat
    R = ( Mat_<double> ( 3,3 ) <<
                               R_ ( 0,0 ), R_ ( 0,1 ), R_ ( 0,2 ),
            R_ ( 1,0 ), R_ ( 1,1 ), R_ ( 1,2 ),
            R_ ( 2,0 ), R_ ( 2,1 ), R_ ( 2,2 )
    );
    t = ( Mat_<double> ( 3,1 ) << t_ ( 0,0 ), t_ ( 1,0 ), t_ ( 2,0 ) );
}
#if  BAOptimizer
void bundleAdjustment (
        const vector< Point3f >& pts1,
        const vector< Point3f >& pts2,
        Mat& R, Mat& t )                //实际上 R t在这里并不是必要的 这个只是用来提供BA初始值
{
    // 初始化g2o
    typedef g2o::BlockSolver< g2o::BlockSolverTraits<6,3> > Block;  // pose维度为 6, landmark 维度为 3
    std::unique_ptr<Block::LinearSolverType>linearSolver = g2o::make_unique<g2o::LinearSolverEigen<Block::PoseMatrixType>>();   //线性方程求解器
    std::unique_ptr<Block>solver_ptr = g2o::make_unique<Block>( std::move(linearSolver) );                                      //矩阵块求解器
    g2o::OptimizationAlgorithmLevenberg * solver = new g2o::OptimizationAlgorithmLevenberg( std::move(solver_ptr) );            //LM法

/*  //另一种修改错误的方式
    Block::LinearSolverType* linearSolver = new g2o::LinearSolverEigen<Block::PoseMatrixType>();  // 线性方程求解器
    Block* solver_ptr = new Block( std::unique_ptr<Block::LinearSolverType>(linearSolver) );      // 矩阵块求解器
    g2o::OptimizationAlgorithmLevenberg * solver = new g2o::OptimizationAlgorithmLevenberg ( std::unique_ptr <g2o::Solver>(solver_ptr) ); //LM法
*/
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm( solver );

    // vertex   这里仅仅添加了第二帧节点
    auto pose = new g2o::VertexSE3Expmap();                  // camera pose
    pose->setId(0);                                          // 位姿节点编号为0
#if ISBAProvideInitValue                                     // 为图优化提供ICP结果作为初值
    Eigen::Matrix3d R_mat;
    R_mat <<
          R.at<double> ( 0,0 ), R.at<double> ( 0,1 ), R.at<double> ( 0,2 ),
            R.at<double> ( 1,0 ), R.at<double> ( 1,1 ), R.at<double> ( 1,2 ),
            R.at<double> ( 2,0 ), R.at<double> ( 2,1 ), R.at<double> ( 2,2 );
    pose->setEstimate(  g2o::SE3Quat(
            R_mat,
            Eigen::Vector3d( t.at<double> ( 0,0 ),t.at<double> ( 0,1 ),t.at<double> ( 0,2 ) )
                                    )
                     );
#else                                                       // 提供默认初值
    pose->setEstimate( g2o::SE3Quat(
            Eigen::Matrix3d::Identity(),
            Eigen::Vector3d( 0,0,0 )
    ) );
#endif
    optimizer.addVertex( pose );                            //向优化器中添加节点
#if MyselfOptimizerMethod                                   //添加空间节点  并且按照书上的方式进行优化的
    int pointIndex = 1;                                     //因为上面的位姿节点就1个  所以我们这里标号为1
    for( auto &p: pts2 ){
        auto point = new g2o::VertexPointXYZ();
        point->setId( pointIndex++ );
        point->setMarginalized( true );                     //设置边缘化(必须设置 否则出错)
        point->setEstimate( Eigen::Vector3d( p.x, p.y, p.z ) );
        optimizer.addVertex( point );
    }
#endif
    // edges
    int index = 0;
    vector<EdgeProjectXYZRGBDPoseOnly*> edges;
    for ( size_t i=0; i<pts1.size(); i++ )
    {
#if MyselfOptimizerMethod
        auto edge = new EdgeProjectXYZRGBDPoseOnly( );      //在课后习题中修改的EdgeProjectXYZRGBDPoseOnly可以去掉_point成员变量
#else
        auto edge = new EdgeProjectXYZRGBDPoseOnly(
                Eigen::Vector3d(pts2[i].x, pts2[i].y, pts2[i].z) );
#endif
        edge->setId( index );
#if  MyselfOptimizerMethod
        edge->setVertex( 0 , dynamic_cast< g2o::VertexPointXYZ *> ( optimizer.vertex(index)) );
        edge->setVertex( 1 , dynamic_cast< g2o::OptimizableGraph::Vertex *> ( optimizer.vertex(0) ) );
#else   //参考ORB_SLAM:这里将原来的g2o::VertexSE3Expmap* 替换为g2o::OptimizableGraph::Vertex *
        edge->setVertex( 0 , dynamic_cast< g2o::OptimizableGraph::Vertex *> ( optimizer.vertex(0) ) );
#endif
        //这里以第一帧为测量值 说明优化的是第二帧到第一帧的旋转r和平移t
        edge->setMeasurement( Eigen::Vector3d (
                pts1[i].x, pts1[i].y, pts1[i].z)
        );
        edge->setInformation( Eigen::Matrix3d::Identity()*1e4 );
        optimizer.addEdge(edge);                            //向优化器中添加边
        index++;
        edges.push_back(edge);                              //存储边指针
    }

    chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
    optimizer.setVerbose( true );
    optimizer.initializeOptimization();
    optimizer.optimize(100);
    chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
    chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2-t1);
    cout<<"optimization costs time: "<<time_used.count()<<" seconds."<<endl;

    cout<<endl<<"after optimization:"<<endl;
    cout<<"T="<<endl<<Eigen::Isometry3d( pose->estimate() ).matrix()<<endl;
/*
    cout<<"输出优化后的point值:"<<endl;
    for (int j = 0; j <3 ; ++j) {
        cout<< dynamic_cast< g2o::VertexPointXYZ * > ( optimizer.vertex(j+1) )->estimate()<<endl<<endl;//Eigen::Vector3d
    }
    cout<<endl<<"优化前的点:"<<endl;
    for (int i = 0; i <3 ; ++i) {
        cout<<pts2[i]<<endl<<endl;
    }
*/

}
#endif

1.7.3 CMakeLists.txt

cmake_minimum_required(VERSION 2.8)

project(cap7_7_icp)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_BUILD_TYPE Release )

list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)

# eigen3
include_directories("/usr/include/eigen3")

#opencv2
find_package(OpenCV 3.1 REQUIRED)

#g2o
find_package(G2O REQUIRED)
find_package(CSparse REQUIRED)



include_directories(
        ${OpenCV_INCLUDE_DIRECTORIES}
        ${G2O_INCLUDE_DIRECTORIES}
        ${CSPARSE_INCLUDE_DIR}

)

add_executable(cap7_7_icp src/cap7_7_icp.cpp)

target_link_libraries(cap7_7_icp
        ${OpenCV_LIBRARIES}
        g2o_core g2o_stuff g2o_types_sba
        g2o_csparse_extension
        g2o_types_slam3d
        ${CSPARSE_LIBRARY}
        )

1.7.4 输出结果为

/home/bupo/shenlan/zuoye/cap7/cap7_7_icp/cmake-build-release/cap7_7_icp /home/bupo/shenlan/zuoye/cap7/cap7_7_icp/src/1.png /home/bupo/shenlan/zuoye/cap7/cap7_7_icp/src/2.png /home/bupo/shenlan/zuoye/cap7/cap7_7_icp/src/1_depth.png /home/bupo/shenlan/zuoye/cap7/cap7_7_icp/src/2_depth.png
This is the example from GaoXiang's book : ICP结果是第二帧到第一帧的RT.
BA Optimizer is Provided!
 But,Not provide initialization for BA!
优化了空间点和位姿节点!
[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-3d pairs: 75
W=  11.8688 -0.717698   1.89486
 -1.88065   3.83391  -5.78219
  3.25846  -5.82734   9.65267
U=  0.592295  -0.805658 -0.0101195
 -0.418171  -0.318113   0.850845
  0.688709   0.499719   0.525319
V=   0.64736  -0.761401 -0.0345329
 -0.388765  -0.368829   0.844291
  0.655581   0.533135   0.534772
ICP via SVD results: 
R = [0.9972065647956201, 0.05834273442898391, -0.04663895869192625;
 -0.05787745545449196, 0.998260122172866, 0.01126626067193237;
 0.04721511705620757, -0.008535444848613793, 0.9988482762174666]
t = [0.1379879629890433;
 -0.06551699470729988;
 -0.02981649388290575]
第一帧到第二帧的旋转和平移:
R_inv = [0.9972065647956201, -0.05787745545449196, 0.04721511705620757;
 0.05834273442898391, 0.998260122172866, -0.008535444848613793;
 -0.04663895869192625, 0.01126626067193237, 0.9988482762174666]
t_inv = [-0.1399866702492459;
 0.05709791102272541;
 0.03695589996443214]
calling bundle adjustment
optimization costs time: 0.00367217 seconds.

after optimization:
T=
  0.998656  0.0272915 -0.0440561   0.132469
-0.0282053   0.999397  -0.020255 0.00437166
 0.0434768  0.0214704   0.998824 -0.0484505
         0          0          0          1
iteration= 0	 chi2= 12417.623861	 time= 8.9301e-05	 cumTime= 8.9301e-05	 edges= 74	 schur= 1	 lambda= 7.813187	 levenbergIter= 1
iteration= 1	 chi2= 159.861027	 time= 6.7209e-05	 cumTime= 0.00015651	 edges= 74	 schur= 1	 lambda= 2.604396	 levenbergIter= 1
iteration= 2	 chi2= 0.003082	 time= 3.7542e-05	 cumTime= 0.000194052	 edges= 74	 schur= 1	 lambda= 0.868132	 levenbergIter= 1
iteration= 3	 chi2= 0.000000	 time= 3.693e-05	 cumTime= 0.000230982	 edges= 74	 schur= 1	 lambda= 0.578755	 levenbergIter= 1
iteration= 4	 chi2= 0.000000	 time= 3.6379e-05	 cumTime= 0.000267361	 edges= 74	 schur= 1	 lambda= 0.385836	 levenbergIter= 1
iteration= 5	 chi2= 0.000000	 time= 3.6459e-05	 cumTime= 0.00030382	 edges= 74	 schur= 1	 lambda= 0.257224	 levenbergIter= 1
iteration= 6	 chi2= 0.000000	 time= 0.00014618	 cumTime= 0.00045	 edges= 74	 schur= 1	 lambda= 46032074.362190	 levenbergIter= 8
iteration= 7	 chi2= 0.000000	 time= 3.6359e-05	 cumTime= 0.000486359	 edges= 74	 schur= 1	 lambda= 92064148.724380	 levenbergIter= 1

进程已结束,退出代码0

1.7.5 感悟

  • 学到了二元边的使用方法,以及二元便的雅可比矩阵的求解方法

参考:视觉slam十四讲第七章课后习题7


1.8. 在特征点匹配过程中,不可避免地会遇到误匹配的情况。如果我们把错误匹配输入到PnP 或ICP 中,会发生怎样的情况?你能想到哪些避免误匹配的方法?

  • 目前书中用的是根据汉明距离的暴力匹配方法,然后根据经验参数(30或者是最小距离的两倍)对匹配子根据其距离进行筛选。
  • 如果误匹配情况输入到PnP或是ICP中,再加上迭代算法选择不正确,初值估计不准确,就很容易导致计算结果产生误差,更有甚者会让迭代过程不稳定,甚至报错。
  • 目前比较流行的避免误匹配方法有:
  1. 交叉匹配(在暴力匹配的基础上再匹配一次,如果两次结果一致,则认为是个特征点,如果不一致则滤掉,BFMatcher XX (NORM_HAMMING, true) )
  2. KNN匹配(K邻近匹配,匹配时候选择K个与特征点相似的点,一般K是2,如果区别足够大,则选择最相似的点作为匹配点,bfMatcher->knnMatch(descriptors1, descriptors2, knnMatches, 2) )
  3. RANSAC(随机采样一致性,利用两个图像之间的单应矩阵,根据重投影误差判定某个匹配是不是正确匹配,findHomography)
  • 等等,一般可以跟觉已有的成熟框架如ORB_SLAM2等等观察其对于不同场景所采取的避免误匹配的方法。同样,对于后端,在优化时可以用Huber损失函数等等增强优化算法的鲁棒性。

1.9. 使用Sophus 的SE3 类,自己设计g2o 的节点与边,实现PnP 和ICP 的优化。


1.10. 在Ceres 中实现PnP 和ICP 的优化。

1.10.1 在Ceres 中实现PnP

1.10.1.1 前言
1.10.1.2 cap7_10_ceres_pnp.cpp
  • 这个是仅仅优化位姿的,主要实现的还是这个
#include <iostream>
#include <opencv2/core/core.hpp>
#include <ceres/ceres.h>
#include <chrono>

#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <Eigen/SVD>

#include "rotation.h"
using namespace std;
using namespace cv;

void find_feature_matches (
        const Mat& img_1, const Mat& img_2,
        std::vector<KeyPoint>& keypoints_1,
        std::vector<KeyPoint>& keypoints_2,
        std::vector< DMatch >& matches );

// 像素坐标转相机归一化坐标
Point2d pixel2cam ( const Point2d& p, const Mat& K );

/*ceres求解BA的话:
(1)构建cost fuction,即代价函数,也就是寻优的目标式。
 这个部分需要使用仿函数(functor)这一技巧来实现,
 做法是定义一个cost function的结构体,在结构体内重载()运算符。
*/
struct cost_function_define
{
    cost_function_define(Point3f p1,Point2f p2):_p1(p1),_p2(p2){}

    /*这里目标函数是重投影误差,
     * 将第一帧观测到的3D点坐标先通过R,T变化到第二帧的坐标系下,
     * 然后用内参转到图像坐标系下,即重投影的坐标u,v。
     * 然后残差是第二帧观测到的该三维点的坐标u1,v1分别减去u,v。
    */

    template<typename T>
    bool operator()(const T* const cere_r,const T* const cere_t,T* residual)const
    {
        /*这里因为有模板,因此要将Point3f类型的_p1转为模板类型p_1,
         * 这样才可以在模板类型中的元素进行运算。否则会报错。*/
        T p_1[3];
        T p_2[3];
        p_1[0]=T(_p1.x);
        p_1[1]=T(_p1.y);
        p_1[2]=T(_p1.z);

/*        这里的R不是旋转矩阵,也不是四元数表示的,而是用欧拉角表示的。
        通过函数 AngleAxisRotatePoint(cere_r,p_1,p_2)可以对3D点进行旋转。相当于用旋转矩阵去左乘。*/
        //cout<<"point_3d: "<<p_1[0]<<" "<<p_1[1]<<"  "<<p_1[2]<<endl;
        AngleAxisRotatePoint(cere_r,p_1,p_2);

        p_2[0]=p_2[0]+cere_t[0];
        p_2[1]=p_2[1]+cere_t[1];
        p_2[2]=p_2[2]+cere_t[2];

        const T x=p_2[0]/p_2[2];
        const T y=p_2[1]/p_2[2];
        //三维点重投影计算的像素坐标
        /*这里相机内参没有进行优化,而是直接写入,要一起优化可以稍加修改即可。这里优化的只有相机外参。*/
        const T u=x*520.9+325.1;
        const T v=y*521.0+249.7;


        //观测的在图像坐标下的值
        T u1=T(_p2.x);
        T v1=T(_p2.y);
        //都可以
/*        residual[0]=u-u1;
        residual[1]=v-v1;*/
        //都可以
        residual[0]=u1-u;
        residual[1]=v1-v;
        return true;
    }
    /*如果将观测变量_p1由类型 Point3f改为double*,则优化结果完全错误。
     * 调试发现,传入的观测_p1始终是第一次的值,后面没有再改变。
     * 猜测可能是数组必须按地址传递造成的。我将其类型改为自己写的struct类型则正确,
     * 初步验证了我的猜想,但是ceres内部怎么写的造成这样,还不太清楚。
    */
    Point3f _p1;
    Point2f _p2;
};


int main ( int argc, char** argv )
{
    if ( argc != 5 )
    {
        cout<<"usage: pose_estimation_3d2d img1 img2 depth1 depth2"<<endl;
        return 1;
    }
    //-- 读取图像
    Mat img_1 = imread ( argv[1], CV_LOAD_IMAGE_COLOR );
    Mat img_2 = imread ( argv[2], CV_LOAD_IMAGE_COLOR );

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches ( img_1, img_2, keypoints_1, keypoints_2, matches );
    cout<<"一共找到了"<<matches.size() <<"组匹配点"<<endl;

    // 建立3D点
    Mat d1 = imread ( argv[3], CV_LOAD_IMAGE_UNCHANGED );       // 深度图为16位无符号数,单通道图像
    Mat K = ( Mat_<double> ( 3,3 ) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1 );
    vector<Point3f> pts_3d;
    vector<Point2f> pts_2d;
    for ( DMatch m:matches )
    {
        ushort d = d1.ptr<unsigned short> (int ( keypoints_1[m.queryIdx].pt.y )) [ int ( keypoints_1[m.queryIdx].pt.x ) ];
        if ( d == 0 )   // bad depth
            continue;
        float dd = d/1000.0;
        Point2d p1 = pixel2cam ( keypoints_1[m.queryIdx].pt, K );
        pts_3d.push_back ( Point3f ( p1.x*dd, p1.y*dd, dd ) );
        pts_2d.push_back ( keypoints_2[m.trainIdx].pt );
    }

    cout<<"3d-2d pairs: "<<pts_3d.size() <<endl;

    Mat r, t;
    solvePnP ( pts_3d, pts_2d, K, Mat(), r, t, false ); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    Mat R;
    cv::Rodrigues ( r, R ); // r为旋转向量形式,用Rodrigues公式转换为矩阵
    cout<<"----------------optional berore--------------------"<<endl;
    cout<<"R="<<endl<<R<<endl;
    cout<<"t="<<endl<<t<<endl;

    cout<<"R_inv = "<<R.t() <<endl;
    cout<<"t_inv = "<<-R.t() *t<<endl;


    cout<<"calling bundle adjustment"<<endl;

    //给rot,和tranf初值
    /*传入的观测是第一帧坐标系下的3D点坐标,和第二帧图像坐标系下的二维点。
     * 因为类型不统一,而又必须要用模板参数,因此这里统一类型,修改为double*。*/
    double cere_rot[3],cere_tranf[3];
    //  cere_rot[0]=r.at<double>(0,0);
    //  cere_rot[1]=r.at<double>(1,0);
    //  cere_rot[2]=r.at<double>(2,0);
    cere_rot[0]=0;
    cere_rot[1]=1;
    cere_rot[2]=2;

    cere_tranf[0]=t.at<double>(0,0);
    cere_tranf[1]=t.at<double>(1,0);
    cere_tranf[2]=t.at<double>(2,0);


    ceres::Problem problem;
    for(int i=0;i<pts_3d.size();i++)
    {
        /*这里可以看到,待优化的变量为cere_rot和cere_tranf,都是3维的变量,残差是2维的,
         * 因此ceres::AutoDiffCostFunction<cost_function_define,2,3,3>对应2,3,3.
        */
        ceres::CostFunction* costfunction=new ceres::AutoDiffCostFunction<cost_function_define,2,3,3>(new cost_function_define(pts_3d[i],pts_2d[i]));
        problem.AddResidualBlock( // 向问题中添加误差项
                costfunction, // 使用自动求导,模板参数:误差类型,输出维度,输入维度,维数要与前面struct中一致
                NULL,  // 核函数,这里不使用,为空
                cere_rot,cere_tranf);//注意,cere_rot不能为Mat类型 // 待估计参数
    }

    // 配置求解器
    ceres::Solver::Options option; // 这里有很多配置项可以填
    option.linear_solver_type=ceres::DENSE_SCHUR;// 增量方程如何求解 使用稠密的Cholesky求解器求解简化的线性系统
    //输出迭代信息到屏幕
    option.minimizer_progress_to_stdout=true; // 输出到cout
    //显示优化信息
    ceres::Solver::Summary summary; // 优化信息
    //开始求解
    ceres::Solve(option,&problem,&summary); // 开始优化
    //显示优化信息
    cout<<summary.BriefReport()<<endl;

    cout<<"----------------optional after--------------------"<<endl;

    Mat cam_3d = ( Mat_<double> ( 3,1 )<<cere_rot[0],cere_rot[1],cere_rot[2]);
    Mat cam_9d;
    /*注意cere_rot是旋转向量,因此可以Rodrigues公式转换为旋转矩阵*/
    cv::Rodrigues ( cam_3d, cam_9d ); // r为旋转向量形式,用Rodrigues公式转换为矩阵

    cout<<"cam_9d:"<<endl<<cam_9d<<endl;
    /*输出结果和EPnP求解的基本一致。*/
    cout<<"cam_t:"<<cere_tranf[0]<<"  "<<cere_tranf[1]<<"  "<<cere_tranf[2]<<endl;
    Mat tranf_3d=(Mat_<double>(3,1)<<cere_tranf[0],cere_tranf[1],cere_tranf[2]);
    Mat tf_3d_2 = (Mat_<double>(3,1) << 0,0,0);
    Mat tf_3d_2_1 = (Mat_<double>(3,1) << 0,0,0);
    Mat tf_2d_2 = (Mat_<double>(2,1) << 0,0);
    for(int i=0;i<5;++i)
     {
        cout<<"p1= "<<pts_3d[i]<<endl;
        cout<<"p2= "<<pts_2d[i]<<endl;
        tf_3d_2 = cam_9d*(Mat_<double>(3,1)<<pts_3d[i].x,pts_3d[i].y,pts_3d[i].z)+tranf_3d;
        tf_3d_2_1 = K * tf_3d_2/tf_3d_2.at<double>(2);
        tf_2d_2 = (Mat_<double>(2,1) << tf_3d_2_1.at<double>(0), tf_3d_2_1.at<double>(1));
        cout<<"(R*p1+t)= " << tf_2d_2.t() << endl;
        cout<<endl;
     }

}

void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                            std::vector<KeyPoint>& keypoints_1,
                            std::vector<KeyPoint>& keypoints_2,
                            std::vector< DMatch >& matches )
{
    //-- 初始化
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create ( "BruteForce-Hamming" );
    //-- 第一步:检测 Oriented FAST 角点位置
    detector->detect ( img_1,keypoints_1 );
    detector->detect ( img_2,keypoints_2 );

    //-- 第二步:根据角点位置计算 BRIEF 描述子
    descriptor->compute ( img_1, keypoints_1, descriptors_1 );
    descriptor->compute ( img_2, keypoints_2, descriptors_2 );

    //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match ( descriptors_1, descriptors_2, match );

    //-- 第四步:匹配点对筛选
    double min_dist=10000, max_dist=0;

    //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = match[i].distance;
        if ( dist < min_dist ) min_dist = dist;
        if ( dist > max_dist ) max_dist = dist;
    }

    printf ( "-- Max dist : %f \n", max_dist );
    printf ( "-- Min dist : %f \n", min_dist );

    //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
        {
            matches.push_back ( match[i] );
        }
    }
}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
            (
                    ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
                    ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
            );
}

  • 这个是看到一个大佬把K相机内参也给优化了,记录一下,直接替换cpp文件就可以
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <ceres/ceres.h>
#include <Eigen/SVD>
#include <chrono>

#include "rotation.h"


using namespace std;
using namespace cv;


//第一步:定义Cost Function函数
struct cost_function
{
    cost_function(Point3f p_3D,Point2f p_p):_p_3D(p_3D),_p_p(p_p) {}//3D-2D:知道n个3D空间点以及其投影位置,然后估计相机位姿
    //计算残差
    template <typename T>//模板:使得在定义时可以模糊类型
    bool operator() (
            const T* const r, const T* const t,//r,t为待优化的参数
            const T* K,//k为待优化的参数
            T* residual ) const //殘差
    {
        T p_3d[3];
        T p_Cam[3];//相机坐标系下空间点的坐标
        p_3d[0]=T(_p_3D.x);
        p_3d[1]=T(_p_3D.y);
        p_3d[2]=T(_p_3D.z);
//         通过tool文件夹中的rotation.h中的AngleAxisRotatePoint()函数
//         计算在相机仅旋转的情况下,新坐标系下的坐标
//         也就是p_Cam=R*p_3d
        //cout<<"point_3d: "<<p_3d[0]<<" "<<p_3d[1]<<"  "<<p_3d[2]<<endl;
        AngleAxisRotatePoint(r,p_3d,p_Cam);

        p_Cam[0]=p_Cam[0]+t[0];
        p_Cam[1]=p_Cam[1]+t[1];
        p_Cam[2]=p_Cam[2]+t[2];
        //R,t计算T
        //Eigen::Isometry3d T_w_c;
//        T_w_c.rotate(r);

        const T x=p_Cam[0]/p_Cam[2];
        const T y=p_Cam[1]/p_Cam[2];
        //3D点投影后的像素坐标
//         const T u=x*520.9+325.1;
//         const T v=y*521.0+249.7;
        const T u=x*K[0]+K[1];
        const T v=y*K[2]+K[3];

        //观测到的投影位置的像素坐标
        T u1=T(_p_p.x);
        T v1=T(_p_p.y);

        //残差
        residual[0]=u-u1;
        residual[1]=v-v1;
        return true;
    }
    Point3f _p_3D;
    Point2f _p_p;
};

void find_feature_matches (
        const Mat& img_1, const Mat& img_2,
        std::vector<KeyPoint>& keypoints_1,
        std::vector<KeyPoint>& keypoints_2,
        std::vector< DMatch >& matches );

// 像素坐标转相机归一化坐标
Point2d pixel2cam ( const Point2d& p, const Mat& K );

void bundleAdjustment(const vector<Point3f> points_3d,
                      const vector<Point2f> points_2d, Mat K, Mat &r, Mat &t);

int main ( int argc, char** argv )
{
    cv::Mat img_1,img_2,d1,d2;
    if(argc!=5)
    {
        //读取图片
        img_1 = cv::imread("../src/1.png", CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread("../src/2.png",1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread("../src/1_depth.png", CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread("../src/2_depth.png", -1);
    }
    else
    {
        //读取图片
        img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread(argv[2],1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        d1 = cv::imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        d2 = cv::imread(argv[4], -1);
    }
    assert(img_1.data != nullptr && img_2.data != nullptr);//若读取的图片没有内容,就终止程序

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches ( img_1, img_2, keypoints_1, keypoints_2, matches );
    cout<<"一共找到了"<<matches.size() <<"组匹配点"<<endl;

    Mat K = ( Mat_<double> ( 3,3 ) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1 );
    vector<Point3f> pts_3d;
    vector<Point2f> pts_2d;
    for ( DMatch m:matches )
    {
        ushort d = d1.ptr<unsigned short> (int ( keypoints_1[m.queryIdx].pt.y )) [ int ( keypoints_1[m.queryIdx].pt.x ) ];
        if ( d == 0 )   // bad depth
            continue;
        float dd = d/1000.0;
        Point2d p1 = pixel2cam ( keypoints_1[m.queryIdx].pt, K );
        pts_3d.push_back ( Point3f ( p1.x*dd, p1.y*dd, dd ) );
        pts_2d.push_back ( keypoints_2[m.trainIdx].pt );
    }

    cout<<"3d-2d pairs: "<<pts_3d.size() <<endl;

    Mat r, t;
//    solvePnP ( pts_3d, pts_2d, K, Mat(), r, t, false,cv::SOLVEPNP_EPNP ); // 调用OpenCV 的 PnP 求解,可选择EPNP,DLS等方法
    // solvePnP ( pts_3d, pts_2d, K, Mat(), r, t, false,CV_ITERATIVE );
    solvePnP ( pts_3d, pts_2d, K, Mat(), r, t, false);
    Mat R;
    cv::Rodrigues ( r, R ); // r为旋转向量形式,用Rodrigues公式转换为矩阵

    cout<<"optional before: "<<endl;
    cout<<"R="<<endl<<R<<endl;
    cout<<"t="<<endl<<t<<endl<<endl;

    cout<<"calling bundle adjustment"<<endl;

    bundleAdjustment(pts_3d,pts_2d,K,r,t);
}
void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                            std::vector<KeyPoint>& keypoints_1,
                            std::vector<KeyPoint>& keypoints_2,
                            std::vector< DMatch >& matches )
{
    //-- 初始化
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create ( "BruteForce-Hamming" );
    //-- 第一步:检测 Oriented FAST 角点位置
    detector->detect ( img_1,keypoints_1 );
    detector->detect ( img_2,keypoints_2 );

    //-- 第二步:根据角点位置计算 BRIEF 描述子
    descriptor->compute ( img_1, keypoints_1, descriptors_1 );
    descriptor->compute ( img_2, keypoints_2, descriptors_2 );

    //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
    vector<DMatch> match;
    // BFMatcher matcher ( NORM_HAMMING );
    matcher->match ( descriptors_1, descriptors_2, match );

    //-- 第四步:匹配点对筛选
    double min_dist=10000, max_dist=0;

    //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = match[i].distance;
        if ( dist < min_dist ) min_dist = dist;
        if ( dist > max_dist ) max_dist = dist;
    }

    printf ( "-- Max dist : %f \n", max_dist );
    printf ( "-- Min dist : %f \n", min_dist );

    //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
        {
            matches.push_back ( match[i] );
        }
    }
}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
            (
                    ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
                    ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
            );
}

//构建最小二乘问题
void bundleAdjustment(const vector<Point3f> points_3d,
                      const vector<Point2f> points_2d,Mat K, Mat &r, Mat &t)
{
//    cout<<"R = "<<endl<<R<<endl;
    cout<<"start:"<<endl;
    double rotation_vector[3],tranf[3];//旋转向量r,平移t
    double k[4];
    rotation_vector[0]=r.at<double>(0,0);
    rotation_vector[1]=r.at<double>(0,1);
    rotation_vector[2]=r.at<double>(0,2);

    tranf[0]=t.at<double>(0,0);
    tranf[1]=t.at<double>(1,0);
    tranf[2]=t.at<double>(2,0);

    k[0]=520.9;
    k[1]=325.1;
    k[2]=521.0;
    k[3]=249.7;

    ceres::Problem problem;
    for(int i=0;i<points_3d.size();++i)
    {
        ceres::CostFunction* costfunction=new ceres::AutoDiffCostFunction<cost_function,2,3,3,4>(new cost_function(points_3d[i],points_2d[i]));
        problem.AddResidualBlock(costfunction,NULL,rotation_vector,tranf,k);
    }
//    cout<<rotation_vector[0]<<" "<<rotation_vector[1]<<" "<<rotation_vector[2]<<endl;
    //配置求解器
    ceres::Solver::Options option;
    option.linear_solver_type=ceres::DENSE_QR;//DENSE_SCHUR
    //true:迭代信息输出到屏幕.false:不输出
    option.minimizer_progress_to_stdout=true;

    ceres::Solver::Summary summary;//优化信息
    //可以和g2o优化做对比
    chrono::steady_clock::time_point t1=chrono::steady_clock::now();
    //开始优化
    ceres::Solve(option,&problem,&summary);
    chrono::steady_clock::time_point t2=chrono::steady_clock::now();
    chrono::duration<double> time_used=chrono::duration_cast<chrono::duration<double>>(t2-t1);
    cout<<"solve time cost = "<<time_used.count()<<" second."<<endl;

    //输出结果
    cout<<summary.BriefReport()<<endl;
    Mat Rotaion_vector=(Mat_<double>(3,1)<<rotation_vector[0],rotation_vector[1],rotation_vector[2]);
//    cout<<rotation_vector[0]<<" "<<rotation_vector[1]<<""<<rotation_vector[2]<<endl;//0,1,2
    Mat Rotation_matrix;
    Rodrigues(Rotaion_vector,Rotation_matrix);//r为旋转向量形式,用Rodrigues公式转换为矩阵
    cout<<"after optional:"<<endl;
    cout<<"R = "<<endl<<Rotation_matrix<<endl;
//    cout<<"R = "<<endl<<<<endl;
    cout<<"t = "<<tranf[0]<<" "<<tranf[1]<<" "<<tranf[2]<<endl;

    cout << "k = " << k[0] << " " << k[1] << " " << k[2] << " " << k[3] << endl;
}
1.10.1.3 CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
project(cap7_10_ceres_pnp)

set( CMAKE_CXX_STANDARD 14)
set( CMAKE_BUILD_TYPE Release )

# 添加cmake模块以使用ceres库
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

find_package(OpenCV 3.1 REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRECTORIES})

# 寻找Ceres库并添加它的头文件
find_package( Ceres REQUIRED )
include_directories( ${CERES_INCLUDE_DIRS} )

include_directories("/usr/include/eigen3")

add_executable(cap7_10_ceres_pnp src/cap7_10_ceres_pnp.cpp)

target_link_libraries( cap7_10_ceres_pnp
        ${OpenCV_LIBS}
        ${CERES_LIBRARIES}
        )
1.10.1.4 输出结果为:
/home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_pnp/cmake-build-release/cap7_10_ceres_pnp /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_pnp/src/1.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_pnp/src/2.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_pnp/src/1_depth.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_pnp/src/2_depth.png
[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-2d pairs: 77
----------------optional berore--------------------
R=
[0.9979193252686379, -0.05138618892180048, 0.03894200615632972;
 0.05033852889628791, 0.9983556573826996, 0.02742287148559438;
 -0.04028712901094069, -0.0254055301076531, 0.9988651091493811]
t=
[-0.6279335377101095;
 -0.03681764073299486;
 0.3049963487769202]
R_inv = [0.9979193252686379, 0.05033852889628791, -0.04028712901094069;
 -0.05138618892180048, 0.9983556573826996, -0.0254055301076531;
 0.03894200615632972, 0.02742287148559438, 0.9988651091493811]
t_inv = [0.6407677853881939;
 0.01223858243973925;
 -0.2791875740897244]
calling bundle adjustment
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  1.748561e+07    0.00e+00    5.22e+07   0.00e+00   0.00e+00  1.00e+04        0    3.70e-05    1.18e-04
   1  5.467247e+06    1.20e+07    9.71e+06   6.54e+00   6.90e-01  1.06e+04        1    7.10e-05    2.03e-04
   2  1.636783e+06    3.83e+06    2.81e+06   7.28e+00   7.03e-01  1.13e+04        1    3.89e-05    2.47e-04
   3  1.852413e+07   -1.69e+07    2.81e+06   3.16e+01  -1.03e+01  5.67e+03        1    1.91e-05    2.70e-04
   4  1.870843e+07   -1.71e+07    2.81e+06   3.15e+01  -1.05e+01  1.42e+03        1    1.81e-05    2.92e-04
   5  1.988564e+07   -1.82e+07    2.81e+06   3.12e+01  -1.12e+01  1.77e+02        1    1.69e-05    3.14e-04
   6  4.437442e+07   -4.27e+07    2.81e+06   2.86e+01  -2.62e+01  1.11e+01        1    3.19e-05    3.50e-04
   7  6.460253e+05    9.91e+05    2.78e+06   1.04e+01   6.78e-01  1.16e+01        1    1.18e-04    4.74e-04
   8  5.770896e+04    5.88e+05    9.14e+05   3.49e+00   1.00e+00  3.48e+01        1    7.89e-05    5.73e-04
   9  8.993651e+03    4.87e+04    7.95e+04   1.30e+00   9.95e-01  1.04e+02        1    7.80e-05    6.68e-04
  10  1.356294e+03    7.64e+03    7.48e+03   7.89e-01   9.81e-01  3.13e+02        1    7.70e-05    7.61e-04
  11  2.032478e+02    1.15e+03    3.89e+03   4.08e-01   9.96e-01  9.39e+02        1    8.70e-05    8.64e-04
  12  1.600523e+02    4.32e+01    3.45e+02   9.20e-02   1.00e+00  2.82e+03        1    7.61e-05    9.55e-04
  13  1.597797e+02    2.73e-01    7.35e+00   7.88e-03   1.00e+00  8.45e+03        1    7.61e-05    1.05e-03
  14  1.597795e+02    2.67e-04    8.87e-02   2.56e-04   1.00e+00  2.54e+04        1    7.70e-05    1.14e-03
Ceres Solver Report: Iterations: 15, Initial cost: 1.748561e+07, Final cost: 1.597795e+02, Termination: CONVERGENCE
----------------optional after--------------------
cam_9d:
[0.9979193163023975, -0.05138607093001038, 0.03894239161802245;
 0.0503383924215025, 0.9983556583913398, 0.0274230852825386;
 -0.04028752162859244, -0.02540572912495342, 0.9988650882519897]
cam_t:-0.627937  -0.0368194  0.304997
p1= [-0.187062, -4.15408, 13.724]
p2= [323, 109]
(R*p1+t)= [322.6404327753651, 108.9202588469664]

p1= [-1.21849, -0.588597, 7.924]
p2= [231, 219]
(R*p1+t)= [230.4614792571009, 220.2360932450788]

p1= [-3.13876, 0.800932, 6.698]
p2= [65, 308]
(R*p1+t)= [65.40258102155319, 307.546289342931]

p1= [-1.61722, 0.524364, 7.133]
p2= [185, 292]
(R*p1+t)= [186.4838654527213, 291.5433045317255]

p1= [-3.13611, 0.50727, 6.558]
p2= [61, 287]
(R*p1+t)= [61.38406743181415, 286.4510353335888]


进程已结束,退出代码0


1.10.1.5 感悟
  • 更深入明白了一点如何使用ceres实现求解
  • 原理仍是一概不知

1.10.2 在ceres中实现icp

1.10.2.1 前言
1.10.2.2 cap7_10_ceres_icp.cpp
 #include <iostream>
 #include <opencv2/core/core.hpp>
 #include <ceres/ceres.h>

 #include <opencv2/highgui/highgui.hpp>
 #include <opencv2/calib3d/calib3d.hpp>
 #include <opencv2/features2d/features2d.hpp>
 #include <Eigen/Core>
 #include <Eigen/Geometry>
 #include <Eigen/SVD>

 #include <chrono>

 #include "rotation.h"

 using namespace std;
 using namespace cv;

 void find_feature_matches(const Mat& img_1,const Mat& img_2,
                           vector<KeyPoint>& keypoints_1,
                           vector<KeyPoint>& keypoints_2,
                           vector<DMatch>& matches);

 //像素坐标转相机归一化坐标
 Point2d pixel2cam(const Point2d& p,const Mat& K);

 void pose_estimation_3d3d(const vector<Point3f>& pts1,
                           const vector<Point3f>& pts2,
                           Mat& r,Mat& t_inv);

 void bundleAdjustment(const vector<Point3f>& points_3f,
                       const vector<Point3f>& points_2f,
                       Mat& R, Mat& t_inv);//test 声明的行参和定义的不同是否可行:可以的!

 struct cost_function_define
 {
     cost_function_define(Point3f p1,Point3f p2):_p1(p1),_p2(p2){}
     template<typename T>
     bool operator()(const T* const cere_r,const T* const cere_t,T* residual) const
     {
         T p_1[3];
         T p_2[3];
         p_1[0]=T(_p1.x);
         p_1[1]=T(_p1.y);
         p_1[2]=T(_p1.z);
         AngleAxisRotatePoint(cere_r,p_1,p_2);
         p_2[0]=p_2[0]+cere_t[0];
         p_2[1]=p_2[1]+cere_t[1];
         p_2[2]=p_2[2]+cere_t[2];
         const T x=p_2[0]/p_2[2];
         const T y=p_2[1]/p_2[2];
         const T u=x*520.9+325.1;
         const T v=y*521.0+249.7;
         T p_3[3];
         p_3[0]=T(_p2.x);
         p_3[1]=T(_p2.y);
         p_3[2]=T(_p2.z);

         const T x1=p_3[0]/p_3[2];
         const T y1=p_3[1]/p_3[2];

         const T u1=x1*520.9+325.1;
         const T v1=y1*521.0+249.7;

         residual[0]=u-u1;
         residual[1]=v-v1;
         return true;
     }
     Point3f _p1,_p2;
 };

int main(int argc,char** argv)
{
    cv::Mat img_1,img_2,depth_1,depth_2;
    if(argc!=5)
    {
        //读取图片
        img_1 = cv::imread("../src/1.png", CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread("../src/2.png",1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        depth_1 = cv::imread("../src/1_depth.png", CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        depth_2 = cv::imread("../src/2_depth.png", -1);
    }
    else
    {
        //读取图片
        img_1 = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);//读取彩色图
        img_2 = cv::imread(argv[2],1);
        //接下来的是建立3d点 利用深度图可以获取深度信息
        //depth1是图1对应的深度图 depth2是图2对应的深度图
        depth_1 = cv::imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);// 深度图为16位无符号数,单通道图像
        depth_2 = cv::imread(argv[4], -1);
    }
    assert(img_1.data != nullptr && img_2.data != nullptr);//若读取的图片没有内容,就终止程序

    vector<KeyPoint> keypoints_1,keypoints_2;
    vector<DMatch> matches;
    find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
    cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;
    
    Mat K=(Mat_<double>(3,3)<<520.9,0,325.1,
            0,521.0,249.7,
            0,0,1);
    vector<Point3f> pts1,pts2;
    for(DMatch m:matches)
    {
        ushort d1=depth_1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
        ushort d2=depth_2.ptr<unsigned short>(int(keypoints_2[m.trainIdx].pt.y))[int(keypoints_2[m.trainIdx].pt.x)];
        if(d1==0 || d2==0)//bad depth
            continue;
        Point2d p1=pixel2cam(keypoints_1[m.queryIdx].pt,K);
        Point2d p2=pixel2cam(keypoints_2[m.trainIdx].pt,K);
        float dd1=float(d1)/5000.0;
        float dd2=float(d2)/5000.0;
        pts1.push_back(Point3f(p1.x*dd1,p1.y*dd1,dd1));
        pts2.push_back(Point3f(p2.x*dd2,p2.y*dd2,dd2));
    }

    cout<<"3d-3d pairs: "<<pts1.size()<<endl;
    Mat R,t;
    pose_estimation_3d3d(pts1,pts2,R,t);
    cout<<"ICP via SVD results: "<<endl;
    cout<<"R ="<<endl<<R<<endl;
    cout<<"t ="<<endl<<t<<endl;
    Mat R_inv=R.t();
    Mat t_inv=-R.t()*t;
    cout<<"R_inv ="<<endl<<R_inv<<endl;//R^(-1)
    cout<<"t_inv ="<<endl<<t_inv<<endl;

    Mat r;
    Rodrigues(R_inv,r);//R_inv->r
    cout<<"r= "<<endl<<r<<endl;

    for(int i=0;i<5;++i)
    {
        cout<<"p1= "<<pts1[i]<<endl;//??
        cout<<"p2= "<<pts2[i]<<endl;//??
        cout<<"(R*p2+t) = "<<
            R*(Mat_<double>(3,1)<<pts2[i].x,pts2[i].y,pts2[i].z)+t
            <<endl;cout<<endl;
    }

    cout<<"calling bundle adjustment"<<endl;
    bundleAdjustment(pts1,pts2,r,t_inv);

    return 0;
}
 void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                             std::vector<KeyPoint>& keypoints_1,
                             std::vector<KeyPoint>& keypoints_2,
                             std::vector< DMatch >& matches )
 {
     //-- 初始化
     Mat descriptors_1, descriptors_2;
     // used in OpenCV3
     Ptr<FeatureDetector> detector = ORB::create();
     Ptr<DescriptorExtractor> descriptor = ORB::create();
     // use this if you are in OpenCV2
     // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
     // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
     Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create ( "BruteForce-Hamming" );
     //-- 第一步:检测 Oriented FAST 角点位置
     detector->detect ( img_1,keypoints_1 );
     detector->detect ( img_2,keypoints_2 );

     //-- 第二步:根据角点位置计算 BRIEF 描述子
     descriptor->compute ( img_1, keypoints_1, descriptors_1 );
     descriptor->compute ( img_2, keypoints_2, descriptors_2 );

     //-- 第三步:对两幅图像中的BRIEF描述子进行匹配,使用 Hamming 距离
     vector<DMatch> match;
     // BFMatcher matcher ( NORM_HAMMING );
     matcher->match ( descriptors_1, descriptors_2, match );

     //-- 第四步:匹配点对筛选
     double min_dist=10000, max_dist=0;

     //找出所有匹配之间的最小距离和最大距离, 即是最相似的和最不相似的两组点之间的距离
     for ( int i = 0; i < descriptors_1.rows; i++ )
     {
         double dist = match[i].distance;
         if ( dist < min_dist ) min_dist = dist;
         if ( dist > max_dist ) max_dist = dist;
     }

     printf ( "-- Max dist : %f \n", max_dist );
     printf ( "-- Min dist : %f \n", min_dist );

     //当描述子之间的距离大于两倍的最小距离时,即认为匹配有误.但有时候最小距离会非常小,设置一个经验值30作为下限.
     for ( int i = 0; i < descriptors_1.rows; i++ )
     {
         if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
         {
             matches.push_back ( match[i] );
         }
     }
 }

 Point2d pixel2cam ( const Point2d& p, const Mat& K )
 {
     return Point2d
             (
                     ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
                     ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
             );
 }
 void pose_estimation_3d3d(const vector<cv::Point3f> &pts1,
                           const vector<cv::Point3f> &pts2,
                           cv::Mat &R, cv::Mat &t)
 {
     int N = pts1.size();//匹配的3d点个数
     cv::Point3f p1,p2;//质心
     for (int i = 0; i < N; i++)
     {
         p1+=pts1[i];
         p2+=pts2[i];
     }
     p1 = cv::Point3f(cv::Vec3f(p1)/N);//得到质心
     p2 = cv::Point3f(cv::Vec3f(p2)/N);
     vector<cv::Point3f> q1(N), q2(N);

     for(int i = 0; i < N; i++)
     {
         //去质心
         q1[i] = pts1[i] - p1;
         q2[i] = pts2[i] - p2;
     }
     //计算 W+=q1*q2^T(求和)
     Eigen::Matrix3d W = Eigen::Matrix3d::Zero();//初始化
     for(int i = 0; i < N; i++)
     {
         W += Eigen::Vector3d(q1[i].x,q1[i].y,q1[i].z) * Eigen::Vector3d(q2[i].x,q2[i].y,q2[i].z).transpose();
     }
     cout << "W = " << endl << W << endl;

     //利用svd分解 W=U*sigema*V
     //Eigen::ComputeFullU : 在JacobiSVD中使用,表示要计算方阵U
     //Eigen::ComputeFullV : 在JacobiSVD中使用,表示要计算方阵V
     Eigen::JacobiSVD<Eigen::Matrix3d> svd(W,Eigen::ComputeFullU | Eigen::ComputeFullV);
     Eigen::Matrix3d U = svd.matrixU();//得到U矩阵
     Eigen::Matrix3d V = svd.matrixV();//得到V矩阵
     cout << "U=" << U << endl;
     cout << "V=" << V << endl;
     Eigen::Matrix3d R_ = U * (V.transpose());
     if(R_.determinant() < 0)//若旋转矩阵R_的行列式<0 则取负号
     {
         R_ = -R_;
     }
     //得到平移向量
     Eigen::Vector3d t_ = Eigen::Vector3d (p1.x,p1.y,p1.z) - R_ * Eigen::Vector3d(p2.x,p2.y,p2.z);
     //把 Eigen形式的 r 和 t_ 转换为CV 中的Mat格式
     R = (cv::Mat_<double>(3,3) <<
                                R_(0, 0), R_(0, 1), R_(0, 2),
             R_(1, 0), R_(1, 1), R_(1, 2),
             R_(2, 0), R_(2, 1), R_(2, 2)
     );
     t = (cv::Mat_<double>(3, 1) << t_(0, 0), t_(1, 0), t_(2, 0));

 }


 void bundleAdjustment(const vector<Point3f> &pts1, const vector<Point3f> &pts2, Mat &r, Mat &t_inv)
 {

    double cere_rot[3], cere_tranf[3];
     //关于初值的选取有疑问,随便选择一个初值时,结果不正确,why??
     cere_rot[0]=r.at<double>(0,0);
     cere_rot[1]=r.at<double>(1,0);
     cere_rot[2]=r.at<double>(2,0);

     cere_tranf[0]=t_inv.at<double>(0,0);
     cere_tranf[1]=t_inv.at<double>(1,0);
     cere_tranf[2]=t_inv.at<double>(2,0);


    ceres::Problem problem;
     for(int i=0;i<pts1.size();++i)
     {
         ceres::CostFunction* costfunction=new ceres::AutoDiffCostFunction<cost_function_define,2,3,3>(new cost_function_define(pts1[i],pts2[i]));
         problem.AddResidualBlock(costfunction,NULL,cere_rot,cere_tranf);
     }

    ceres::Solver::Options option;
    option.linear_solver_type=ceres::DENSE_SCHUR;
     option.minimizer_progress_to_stdout=true;
     ceres::Solver::Summary summary;
     ceres::Solve(option,&problem,&summary);
     cout<<summary.BriefReport()<<endl;
     
    //std::cout << summary.FullReport() << "\n";
    
     cout<<"optional after: "<<endl;
     Mat cam_3d=(Mat_<double>(3,1)<<cere_rot[0],cere_rot[1],cere_rot[2]);
 //    cout<<"cam_3d : "<<endl<<cam_3d<<endl;
     Mat cam_9d;
     Rodrigues(cam_3d,cam_9d);

     cout<<"cam_9d: "<<endl<<cam_9d<<endl;
     cout<<"cam_t: "<<endl<<cere_tranf[0]<<" "<<cere_tranf[1]<<" "<<cere_tranf[2]<<endl;
     Mat tranf_3d=(Mat_<double>(3,1)<<cere_tranf[0],cere_tranf[1],cere_tranf[2]);

     for(int i=0;i<5;++i)
     {
         cout<<"p1= "<<pts1[i]<<endl;
         cout<<"p2= "<<pts2[i]<<endl;
         cout<<"(R*p1+t)= "<<
               cam_9d*(Mat_<double>(3,1)<<pts1[i].x,pts1[i].y,pts1[i].z)+tranf_3d<<endl;
         cout<<endl;
     }
 }
1.10.2.3 CMakeLists.txt
cmake_minimum_required(VERSION 2.8)

project(cap7_10_ceres_icp)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_BUILD_TYPE Release)

# 添加cmake模块以使用ceres库
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)

find_package(OpenCV 3.1 REQUIRED)
find_package(Ceres REQUIRED)

include_directories(
        ${OpenCV_INCLUDE_DIRECTORIES}
        "/usr/include/eigen3"
        ${CERES_INCLUDE_DIRECTORIES}
)

add_executable(cap7_10_ceres_icp src/cap7_10_ceres_icp.cpp)

target_link_libraries(cap7_10_ceres_icp
        ${OpenCV_LIBRARIES}
        ${CERES_LIBRARIES}
        )

1.10.2.4 输出结果
/home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_icp/cmake-build-release/cap7_10_ceres_icp /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_icp/src/1.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_icp/src/2.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_icp/src/1_depth.png /home/bupo/shenlan/zuoye/cap7/cap7_10_ceres_icp/src/2_depth.png
[ INFO:0] Initialize OpenCL runtime...
-- Max dist : 95.000000 
-- Min dist : 7.000000 
一共找到了81组匹配点
3d-3d pairs: 75
W = 
  11.8688 -0.717698   1.89486
 -1.88065   3.83391  -5.78219
  3.25846  -5.82734   9.65267
U=  0.592295  -0.805658 -0.0101195
 -0.418171  -0.318113   0.850845
  0.688709   0.499719   0.525319
V=   0.64736  -0.761401 -0.0345329
 -0.388765  -0.368829   0.844291
  0.655581   0.533135   0.534772
ICP via SVD results: 
R =
[0.9972065647956201, 0.05834273442898391, -0.04663895869192625;
 -0.05787745545449196, 0.998260122172866, 0.01126626067193237;
 0.04721511705620757, -0.008535444848613793, 0.9988482762174666]
t =
[0.1379879629890433;
 -0.06551699470729988;
 -0.02981649388290575]
R_inv =
[0.9972065647956201, -0.05787745545449196, 0.04721511705620757;
 0.05834273442898391, 0.998260122172866, -0.008535444848613793;
 -0.04663895869192625, 0.01126626067193237, 0.9988482762174666]
t_inv =
[-0.1399866702492459;
 0.05709791102272541;
 0.03695589996443214]
r= 
[0.009910244558423291;
 0.04697155214755139;
 0.05816521729285219]
p1= [-0.0374123, -0.830816, 2.7448]
p2= [-0.0111479, -0.746763, 2.7652]
(R*p2+t) = [-0.04566300488482969;
 -0.7791822151698653;
 2.738046267661636]

p1= [-0.243698, -0.117719, 1.5848]
p2= [-0.299118, -0.0975683, 1.6558]
(R*p2+t) = [-0.243212054430598;
 -0.1269486029625337;
 1.610786345002579]

p1= [-0.627753, 0.160186, 1.3396]
p2= [-0.709645, 0.159033, 1.4212]
(R*p2+t) = [-0.6266796777024644;
 0.1503229238263245;
 1.354883323538178]

p1= [-0.323443, 0.104873, 1.4266]
p2= [-0.399079, 0.12047, 1.4838]
(R*p2+t) = [-0.3221508525590339;
 0.09455772952719482;
 1.432403794814274]

p1= [-0.627221, 0.101454, 1.3116]
p2= [-0.709709, 0.100216, 1.3998]
(R*p2+t) = [-0.6291763602679332;
 0.09137127679150181;
 1.334006907588644]

calling bundle adjustment
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  1.386936e+03    0.00e+00    7.80e+04   0.00e+00   0.00e+00  1.00e+04        0    2.70e-04    3.74e-04
   1  1.581843e+02    1.23e+03    1.74e+03   7.83e-02   9.99e-01  3.00e+04        1    1.07e-04    6.52e-04
   2  1.573157e+02    8.69e-01    1.57e+00   1.71e-03   1.00e+00  9.00e+04        1    5.98e-05    7.43e-04
Ceres Solver Report: Iterations: 3, Initial cost: 1.386936e+03, Final cost: 1.573157e+02, Termination: CONVERGENCE
optional after: 
cam_9d: 
[0.9979163765916491, -0.0513269110025363, 0.03909544138555834;
 0.05027495981204224, 0.9983586154877093, 0.02743179354985397;
 -0.04043925995974022, -0.02540911427818745, 0.9988588704944791]
cam_t: 
-0.125957 -0.00740074 0.0609871
p1= [-0.0374123, -0.830816, 2.7448]
p2= [-0.0111479, -0.746763, 2.7652]
(R*p1+t)= [-0.01333943894784084;
 -0.7634388360382136;
 2.825278250152581]

p1= [-0.243698, -0.117719, 1.5848]
p2= [-0.299118, -0.0975683, 1.6558]
(R*p1+t)= [-0.3011474122086973;
 -0.09370484161307237;
 1.656824782674852]

p1= [-0.627753, 0.160186, 1.3396]
p2= [-0.709645, 0.159033, 1.4212]
(R*p1+t)= [-0.7082517178879888;
 0.1577100606269968;
 1.420374084677415]

p1= [-0.323443, 0.104873, 1.4266]
p2= [-0.399079, 0.12047, 1.4838]
(R*p1+t)= [-0.3983357466712113;
 0.1201731356860113;
 1.49637422656784]

p1= [-0.627221, 0.101454, 1.3116]
p2= [-0.709709, 0.100216, 1.3998]
(R*p1+t)= [-0.7058015075804205;
 0.09833264832376766;
 1.393876887667482]


进程已结束,退出代码0

1.10.2.5 感悟
  • 从输出结果中可以看出,使用SVD求解出来的位姿不够准确,但是经过非线性优化之后精确了很多

1.10.3 参考文章

  1. ceres求解PnP–SLAM 十四讲第七章课后题(这个pnp的代码运行)
  2. 在ceres中实现ICP优化(仅优化位姿
  3. 《视觉SLAM十四讲》课后习题—ch7(更新中……)这个有点会问题,pnp的代码会与大盘报错WARNING: Logging before InitGoogleLogging() is written to STDERR E0408 11:45:04.339355 7237 trust_region_minimizer.cc:73] Terminating: Residual and Jacobian evaluation failed.,找不到合适的解决方案(补充,一一排查找到原因了,是因为大佬王家在自定义的仿函数类中,返回布尔量了,可以看我的1.10.1.2)

1.10.4 rotation.h

#ifndef ROTATION_H
#define ROTATION_H
 
#include <algorithm>
#include <cmath>
#include <limits>
 
//
// math functions needed for rotation conversion. 
 
// dot and cross production 
 
template<typename T> 
inline T DotProduct(const T x[3], const T y[3]) {
  return (x[0] * y[0] + x[1] * y[1] + x[2] * y[2]);
}
 
template<typename T>
inline void CrossProduct(const T x[3], const T y[3], T result[3]){
  result[0] = x[1] * y[2] - x[2] * y[1];
  result[1] = x[2] * y[0] - x[0] * y[2];
  result[2] = x[0] * y[1] - x[1] * y[0];
}
 
 
//
 
 
// Converts from a angle anxis to quaternion : 
template<typename T>
inline void AngleAxisToQuaternion(const T* angle_axis, T* quaternion){
  const T& a0 = angle_axis[0];
  const T& a1 = angle_axis[1];
  const T& a2 = angle_axis[2];
  const T theta_squared = a0 * a0 + a1 * a1 + a2 * a2;
  
  
  if(theta_squared > T(std::numeric_limits<double>::epsilon()) ){
    const T theta = sqrt(theta_squared);
    const T half_theta = theta * T(0.5);
    const T k = sin(half_theta)/theta;
    quaternion[0] = cos(half_theta);
    quaternion[1] = a0 * k;
    quaternion[2] = a1 * k;
    quaternion[3] = a2 * k;
  }
  else{ // in case if theta_squared is zero
    const T k(0.5);
    quaternion[0] = T(1.0);
    quaternion[1] = a0 * k;
    quaternion[2] = a1 * k;
    quaternion[3] = a2 * k;
  }
}
 
 
template<typename T>
inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis){
  const T& q1 = quaternion[1];
  const T& q2 = quaternion[2];
  const T& q3 = quaternion[3];
  const T sin_squared_theta = q1 * q1 + q2 * q2 + q3 * q3;
  
  // For quaternions representing non-zero rotation, the conversion
  // is numercially stable
  if(sin_squared_theta > T(std::numeric_limits<double>::epsilon()) ){
    const T sin_theta = sqrt(sin_squared_theta);
    const T& cos_theta = quaternion[0];
    
    // If cos_theta is negative, theta is greater than pi/2, which
    // means that angle for the angle_axis vector which is 2 * theta
    // would be greater than pi...
    
    const T two_theta = T(2.0) * ((cos_theta < 0.0)
				  ? atan2(-sin_theta, -cos_theta)
				  : atan2(sin_theta, cos_theta));
    const T k = two_theta / sin_theta;
    
    angle_axis[0] = q1 * k;
    angle_axis[1] = q2 * k;
    angle_axis[2] = q3 * k;
  }
  else{
    // For zero rotation, sqrt() will produce NaN in derivative since
    // the argument is zero. By approximating with a Taylor series, 
    // and truncating at one term, the value and first derivatives will be 
    // computed correctly when Jets are used..
    const T k(2.0);
    angle_axis[0] = q1 * k;
    angle_axis[1] = q2 * k;
    angle_axis[2] = q3 * k;
  }
  
}
 
 
template<typename T>
inline void AngleAxisRotatePoint(const T angle_axis[3], const T pt[3], T result[3]) {
  const T theta2 = DotProduct(angle_axis, angle_axis);
  if (theta2 > T(std::numeric_limits<double>::epsilon())) {
    // Away from zero, use the rodriguez formula
    //
    //   result = pt costheta +
    //            (w x pt) * sintheta +
    //            w (w . pt) (1 - costheta)
    //
    // We want to be careful to only evaluate the square root if the
    // norm of the angle_axis vector is greater than zero. Otherwise
    // we get a division by zero.
    //
    const T theta = sqrt(theta2);
    const T costheta = cos(theta);
    const T sintheta = sin(theta);
    const T theta_inverse = 1.0 / theta;
 
    const T w[3] = { angle_axis[0] * theta_inverse,
                     angle_axis[1] * theta_inverse,
                     angle_axis[2] * theta_inverse };
 
    // Explicitly inlined evaluation of the cross product for
    // performance reasons.
    /*const T w_cross_pt[3] = { w[1] * pt[2] - w[2] * pt[1],
                              w[2] * pt[0] - w[0] * pt[2],
                              w[0] * pt[1] - w[1] * pt[0] };*/
    T w_cross_pt[3];
    CrossProduct(w, pt, w_cross_pt);                          
 
 
    const T tmp = DotProduct(w, pt) * (T(1.0) - costheta);
    //    (w[0] * pt[0] + w[1] * pt[1] + w[2] * pt[2]) * (T(1.0) - costheta);
 
    result[0] = pt[0] * costheta + w_cross_pt[0] * sintheta + w[0] * tmp;
    result[1] = pt[1] * costheta + w_cross_pt[1] * sintheta + w[1] * tmp;
    result[2] = pt[2] * costheta + w_cross_pt[2] * sintheta + w[2] * tmp;
  } else {
    // Near zero, the first order Taylor approximation of the rotation
    // matrix R corresponding to a vector w and angle w is
    //
    //   R = I + hat(w) * sin(theta)
    //
    // But sintheta ~ theta and theta * w = angle_axis, which gives us
    //
    //  R = I + hat(w)
    //
    // and actually performing multiplication with the point pt, gives us
    // R * pt = pt + w x pt.
    //
    // Switching to the Taylor expansion near zero provides meaningful
    // derivatives when evaluated using Jets.
    //
    // Explicitly inlined evaluation of the cross product for
    // performance reasons.
    /*const T w_cross_pt[3] = { angle_axis[1] * pt[2] - angle_axis[2] * pt[1],
                              angle_axis[2] * pt[0] - angle_axis[0] * pt[2],
                              angle_axis[0] * pt[1] - angle_axis[1] * pt[0] };*/
    T w_cross_pt[3];
    CrossProduct(angle_axis, pt, w_cross_pt); 
 
    result[0] = pt[0] + w_cross_pt[0];
    result[1] = pt[1] + w_cross_pt[1];
    result[2] = pt[2] + w_cross_pt[2];
  }
}
 
#endif // rotation.h
  • 2
    点赞
  • 8
    收藏
    觉得还不错? 一键收藏
  • 2
    评论
评论 2
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值