RGB-D SLAM Study Notes (7)

Lecture 7: Loop Closure Detection


This lecture mainly adds keyframe selection and loop closure detection.


Keyframe selection: decided from the estimated motion e between the reference frame and the new frame:

  • e > E_{error}: the motion is too large and is probably a mis-estimate; discard the frame;
  • too few matches: the frame's image quality is poor; discard it;
  • e < E_{key}: the frame is too close to the reference frame (the previous frame); discard it.
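
Here e is the magnitude of the PnP motion estimate (rvec, tvec); in the code below it is computed by normofTransform as

$e=\left | \min \left ( \left \| rvec \right \|,\ 2\pi -\left \| rvec \right \| \right ) \right |+\left \| tvec \right \|$

i.e. the rotation angle wrapped into $[0,\pi ]$ plus the length of the translation.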

The basic flow of the SLAM program, in pseudocode:

  1. Initialize the keyframe sequence F and put the first frame f0 into F;
  2. For a newly arrived frame I, decide whether it is a keyframe;
  3. Run loop closure detection. Nearby loops: match I against the last m keyframes in F; whenever a match succeeds, add an edge to the graph;
  4. Random loops: randomly pick n frames from F and match them against I; if a match succeeds, add an edge to the graph;
  5. Append I to the end of F. If there is new data, go back to step 2; otherwise, run the optimization and stitch the map.

Final code:

parameters.txt

# This is a parameter file

# Parameters defined in part 4
detector=ORB
descriptor=ORB
good_match_threshold=10
# camera
camera.cx=325.5
camera.cy=253.5
camera.fx=518.0
camera.fy=519.0
camera.scale=1000.0

# part 5
# data settings
# start and end indices
start_index=1
end_index=780
# data directories
rgb_dir=../data/rgb_png/
rgb_extension=.png
depth_dir=../data/depth_png/
depth_extension=.png
# point cloud resolution
voxel_grid=0.01
# visualize the point cloud in real time
visualize_pointcloud=yes
# minimum number of good matches
min_good_match=10
# minimum number of inliers
min_inliers=5
# maximum motion error
max_norm=0.2

# part 7
keyframe_threshold=0.08
max_norm_lp=2.0
nearby_loops=5
random_loops=5
check_loop_closure=yes

CMakeLists.txt

FIND_PACKAGE(PCL REQUIRED COMPONENTS common io visualization filters)

SET(OpenCV_DIR "/home/lzy/opencv-2.4.13.6/build")
FIND_PACKAGE(OpenCV REQUIRED)
INCLUDE_DIRECTORIES(${OpenCV_INCLUDE_DIRS})
#MESSAGE(STATUS "version:${OpenCV_VERSION}")

ADD_DEFINITIONS(${PCL_DEFINITIONS})
INCLUDE_DIRECTORIES(${PCL_INCLUDE_DIRS})
LINK_DIRECTORIES(${PCL_LIBRARY_DIRS})

# g2o dependency
LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)
SET(G2O_ROOT "/usr/local/include/g2o")
FIND_PACKAGE(G2O)
# CSparse
FIND_PACKAGE(CSparse)
INCLUDE_DIRECTORIES(${G2O_INCLUDE_DIR} ${CSPARSE_INCLUDE_DIR})

#INCLUDE_DIRECTORIES("/usr/include/eigen3")

ADD_EXECUTABLE(generate_pointcloud generatePointCloud.cpp)
TARGET_LINK_LIBRARIES(generate_pointcloud ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_LIBRARY(slambase slamBase.cpp)
TARGET_LINK_LIBRARIES(slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(detect_features detectFeatures.cpp)
TARGET_LINK_LIBRARIES(detect_features slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(join_pointcloud joinPointCloud.cpp)
TARGET_LINK_LIBRARIES(join_pointcloud slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(visual_odometry visualOdometry.cpp)
TARGET_LINK_LIBRARIES(visual_odometry slambase ${OpenCV_LIBS} ${PCL_LIBRARIES})

ADD_EXECUTABLE(slamend slamEnd.cpp)
TARGET_LINK_LIBRARIES(slamend 
	slambase 
	${OpenCV_LIBS} 
	${PCL_LIBRARIES}
	g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY})

ADD_EXECUTABLE(slam slam.cpp)
TARGET_LINK_LIBRARIES(slam 
	slambase 
	${OpenCV_LIBS} 
	${PCL_LIBRARIES}
	g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY})

slamBase.h

#pragma once

// headers
// C++ standard library
#include <fstream>
#include <vector>
#include <map>
using namespace std;
// Eigen
#include <Eigen/Core>
#include <Eigen/Geometry>
// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/eigen.hpp>
#include <opencv2/features2d/features2d.hpp>
//#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/calib3d/calib3d.hpp>
//PCL
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
//#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/filters/voxel_grid.h>
// type definitions
typedef pcl::PointXYZRGBA PointT;
typedef pcl::PointCloud<PointT> PointCloud;

// camera intrinsics structure
struct CAMERA_INTRINSIC_PARAMETERS 
{ 
    double cx, cy, fx, fy, scale;
};

// frame structure
struct FRAME
{
    int frameID;
    cv::Mat rgb, depth; // color and depth images of this frame
    cv::Mat desp;       // feature descriptors
    vector<cv::KeyPoint> kp; // keypoints
};

// PnP result
struct RESULT_OF_PNP
{
    cv::Mat rvec, tvec;
    int inliers;
};

// function interfaces
// image2PointCloud: convert an RGB image and a depth image into a point cloud
PointCloud::Ptr image2PointCloud( cv::Mat& rgb, cv::Mat& depth, CAMERA_INTRINSIC_PARAMETERS& camera );

// point2dTo3d: convert a single point from image coordinates to 3D space
// input: a Point3f holding (u,v,d)
cv::Point3f point2dTo3d( cv::Point3f& point, CAMERA_INTRINSIC_PARAMETERS& camera );

// computeKeyPointsAndDesp: extract keypoints and feature descriptors in one call
void computeKeyPointsAndDesp( FRAME& frame, string detector, string descriptor );

// estimateMotion: compute the motion between two frames
// input: frame1 and frame2, plus the camera intrinsics
RESULT_OF_PNP estimateMotion( FRAME& frame1, FRAME& frame2, CAMERA_INTRINSIC_PARAMETERS& camera );

// rvec, tvec --> T
Eigen::Isometry3d cvMat2Eigen(cv::Mat& rvec, cv::Mat& tvec);

// join point clouds
PointCloud::Ptr joinPointCloud(PointCloud::Ptr original, FRAME& newFrame, Eigen::Isometry3d T, CAMERA_INTRINSIC_PARAMETERS& camera);

// parameter reader class
class ParameterReader
{
public:
    ParameterReader( string filename="../parameters.txt" )
    {
        ifstream fin( filename.c_str() );
        if (!fin)
        {
            cerr<<"parameter file does not exist."<<endl;
            return;
        }
        while(!fin.eof())
        {
            string str;
            getline( fin, str );
            if (str.empty() || str[0] == '#')
            {
                // lines starting with '#' are comments
                continue;
            }

            string::size_type pos = str.find("=");
            if (pos == string::npos)
                continue;
            string key = str.substr( 0, pos );
            string value = str.substr( pos+1, str.length() );
            data[key] = value;

            if ( !fin.good() )
                break;
        }
    }
    string getData( string key )
    {
        map<string, string>::iterator iter = data.find(key);
        if (iter == data.end())
        {
            cerr<<"Parameter name "<<key<<" not found!"<<endl;
            return string("NOT_FOUND");
        }
        return iter->second;
    }
public:
    map<string, string> data;
};

inline static CAMERA_INTRINSIC_PARAMETERS getDefaultCamera()
{
    ParameterReader pd;
    CAMERA_INTRINSIC_PARAMETERS camera;
    camera.fx = atof( pd.getData( "camera.fx" ).c_str() );
    camera.fy = atof( pd.getData( "camera.fy" ).c_str() );
    camera.cx = atof( pd.getData( "camera.cx" ).c_str() );
    camera.cy = atof( pd.getData( "camera.cy" ).c_str() );
    camera.scale = atof( pd.getData( "camera.scale" ).c_str() );
    return camera;
}

// the following are UBUNTU/LINUX ONLY terminal colors
#define RESET "\033[0m"
#define BLACK "\033[30m" /* Black */
#define RED "\033[31m" /* Red */
#define GREEN "\033[32m" /* Green */
#define YELLOW "\033[33m" /* Yellow */
#define BLUE "\033[34m" /* Blue */
#define MAGENTA "\033[35m" /* Magenta */
#define CYAN "\033[36m" /* Cyan */
#define WHITE "\033[37m" /* White */
#define BOLDBLACK "\033[1m\033[30m" /* Bold Black */
#define BOLDRED "\033[1m\033[31m" /* Bold Red */
#define BOLDGREEN "\033[1m\033[32m" /* Bold Green */
#define BOLDYELLOW "\033[1m\033[33m" /* Bold Yellow */
#define BOLDBLUE "\033[1m\033[34m" /* Bold Blue */
#define BOLDMAGENTA "\033[1m\033[35m" /* Bold Magenta */
#define BOLDCYAN "\033[1m\033[36m" /* Bold Cyan */
#define BOLDWHITE "\033[1m\033[37m" /* Bold White */

slamBase.cpp

#include "slamBase.h"

PointCloud::Ptr image2PointCloud( cv::Mat& rgb, cv::Mat& depth, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    PointCloud::Ptr cloud ( new PointCloud );

    for (int m = 0; m < depth.rows; m+=2)
        for (int n=0; n < depth.cols; n+=2)
        {
            // read the depth value at (m,n)
            ushort d = depth.ptr<ushort>(m)[n];
            // d may be missing; if so, skip this point
            if (d == 0)
                continue;
            // d has a value, so add a point to the cloud
            PointT p;

            // compute the 3D position of this point
            p.z = double(d) / camera.scale;
            p.x = (n - camera.cx) * p.z / camera.fx;
            p.y = (m - camera.cy) * p.z / camera.fy;
            
            // get its color from the rgb image
            // rgb is a 3-channel BGR image, so read the channels in this order
            p.b = rgb.ptr<uchar>(m)[n*3];
            p.g = rgb.ptr<uchar>(m)[n*3+1];
            p.r = rgb.ptr<uchar>(m)[n*3+2];

            // add p to the cloud
            cloud->points.push_back( p );
        }
    // set the cloud layout and return it
    cloud->height = 1;
    cloud->width = cloud->points.size();
    cloud->is_dense = false;

    return cloud;
}

cv::Point3f point2dTo3d( cv::Point3f& point, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    cv::Point3f p; // the 3D point
    p.z = double( point.z ) / camera.scale;
    p.x = ( point.x - camera.cx) * p.z / camera.fx;
    p.y = ( point.y - camera.cy) * p.z / camera.fy;
    return p;
}

// computeKeyPointsAndDesp: extract keypoints and feature descriptors in one call
void computeKeyPointsAndDesp( FRAME& frame, string detector, string descriptor )
{
    cv::Ptr<cv::FeatureDetector> _detector;
    cv::Ptr<cv::DescriptorExtractor> _descriptor;

    //cv::initModule_nonfree();
    _detector = cv::FeatureDetector::create( detector.c_str() );
    _descriptor = cv::DescriptorExtractor::create( descriptor.c_str() );

    if (!_detector || !_descriptor)
    {
        cerr<<"Unknown detector or discriptor type !"<<detector<<","<<descriptor<<endl;
        return;
    }

    _detector->detect( frame.rgb, frame.kp );
    _descriptor->compute( frame.rgb, frame.kp, frame.desp );

    return;
}

// estimateMotion: compute the motion between two frames
// input: frame1 and frame2
// output: rvec and tvec
RESULT_OF_PNP estimateMotion( FRAME& frame1, FRAME& frame2, CAMERA_INTRINSIC_PARAMETERS& camera )
{
    RESULT_OF_PNP result;
    static ParameterReader pd;
    vector< cv::DMatch > matches;
    //cv::FlannBasedMatcher matcher;
    cv::BFMatcher matcher;
    matcher.match( frame1.desp, frame2.desp, matches );
   
    //cout<<"find total "<<matches.size()<<" matches."<<endl;
    vector< cv::DMatch > goodMatches;
    double minDis = 9999;
    double good_match_threshold = atof( pd.getData( "good_match_threshold" ).c_str() );
    for ( size_t i=0; i<matches.size(); i++ )
    {
        if ( matches[i].distance < minDis )
            minDis = matches[i].distance;
    }
    //cout << "min dis = " << minDis << endl;
    // guard against an unrealistically small minimum distance
    if ( minDis < 10 ) minDis = 10;

    for ( size_t i=0; i<matches.size(); i++ )
    {
        if (matches[i].distance < good_match_threshold*minDis)
            goodMatches.push_back( matches[i] );
    }
    //cout<<"good matches: "<<goodMatches.size()<<endl;
    if (goodMatches.size() <= 5) 
    {
        result.inliers = -1;
        return result;
    }
    // 3D points in the first frame
    vector<cv::Point3f> pts_obj;
    // 2D image points in the second frame
    vector< cv::Point2f > pts_img;

    // build the 3D-2D point pairs
    for (size_t i=0; i<goodMatches.size(); i++)
    {
        // query is the first descriptor set, train is the second
        cv::Point2f p = frame1.kp[goodMatches[i].queryIdx].pt;
        // be careful when reading d: x goes right and y goes down, so y is the row and x is the column!
        ushort d = frame1.depth.ptr<ushort>( int(p.y) )[ int(p.x) ];
        if (d == 0)
            continue;
        pts_img.push_back( cv::Point2f( frame2.kp[goodMatches[i].trainIdx].pt ) );

        // convert (u,v,d) to (x,y,z)
        cv::Point3f pt ( p.x, p.y, d );
        cv::Point3f pd = point2dTo3d( pt, camera );
        pts_obj.push_back( pd );
    }

    if (pts_obj.size() == 0 || pts_img.size() == 0)
    {
        result.inliers = -1;
        return result;
    }

    double camera_matrix_data[3][3] = {
        {camera.fx, 0, camera.cx},
        {0, camera.fy, camera.cy},
        {0, 0, 1}
    };

    //cout<<"solving pnp"<<endl;
    // build the camera matrix
    cv::Mat cameraMatrix( 3, 3, CV_64F, camera_matrix_data );
    cv::Mat rvec, tvec, inliers;
    // solve PnP with RANSAC
    cv::solvePnPRansac( pts_obj, pts_img, cameraMatrix, cv::Mat(), rvec, tvec, false, 100, 1.0, 100, inliers );

    
    result.rvec = rvec;
    result.tvec = tvec;
    result.inliers = inliers.rows;

    return result;
}

// rvec, tvec --> T
Eigen::Isometry3d cvMat2Eigen(cv::Mat& rvec, cv::Mat& tvec)
{
	cv::Mat R;
	cv::Rodrigues(rvec, R);
	Eigen::Matrix3d r;
	//cv::cv2eigen(R, r);
	for(int i=0; i<3; i++)
		for(int j=0; j<3; j++)
			r(i,j) = R.at<double>(i,j);
	
	Eigen::Isometry3d T = Eigen::Isometry3d::Identity();
	
	Eigen::AngleAxisd angle(r);
	//Eigen::Translation<double, 3> trans(tvec.at<double>(0,0), tvec.at<double>(0,1), tvec.at<double>(0,2));
	T = angle;
	T(0,3) = tvec.at<double>(0,0);
	T(1,3) = tvec.at<double>(1,0);
	T(2,3) = tvec.at<double>(2,0);
	return T;
}

// joinPointCloud
// input: the original cloud, plus the new frame and its pose
// output: the cloud with the new frame merged into the original
PointCloud::Ptr joinPointCloud(PointCloud::Ptr original, FRAME& newFrame, Eigen::Isometry3d T, CAMERA_INTRINSIC_PARAMETERS& camera)
{
	PointCloud::Ptr newCloud = image2PointCloud(newFrame.rgb, newFrame.depth, camera);
	// merge the point clouds
	PointCloud::Ptr output (new PointCloud());
	pcl::transformPointCloud(*original, *output, T.matrix());
	*newCloud += *output;
	// voxel grid filter for downsampling
	static pcl::VoxelGrid<PointT> voxel;
	static ParameterReader pd;
	double gridsize = atof(pd.getData("voxel_grid").c_str());
	voxel.setLeafSize(gridsize, gridsize, gridsize);
	voxel.setInputCloud(newCloud);
	PointCloud::Ptr tmp(new PointCloud());
	voxel.filter(*tmp);
	return tmp;
}

slam.cpp

#include <iostream>
#include <fstream>
#include <sstream>
#include <cstdlib> // rand, srand
#include <ctime>   // time
using namespace std;

#include "slamBase.h"

#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>
// g2o headers
#include <g2o/types/slam3d/types_slam3d.h> // vertex types
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/factory.h>
#include <g2o/core/optimization_algorithm_factory.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
//#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
//#include <g2o/core/robust_kernel_factory.h>
#include <g2o/core/optimization_algorithm_levenberg.h>

// choose the optimization method
typedef g2o::BlockSolver_6_3 SlamBlockSolver;
typedef g2o::LinearSolverEigen< SlamBlockSolver::PoseMatrixType > SlamLinearSolver; 
//typedef g2o::LinearSolverCSparse<SlamBlockSolver::PoseMatrixType> SlamLinearSolver;
// read one frame of data given its index
FRAME readFrame(int index, ParameterReader& pd);
// measure the magnitude of a motion
double normofTransform(cv::Mat rvec, cv::Mat tvec);
// the possible results of checking two frames against each other
enum CHECK_RESULT {NOT_MATCHED=0, TOO_FAR_AWAY, TOO_CLOSE, KEYFRAME};
// function declarations
CHECK_RESULT checkKeyframes(FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops=false);
// detect nearby loop closures
void checkNearbyLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti);
// detect random loop closures
void checkRandomLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti);

int main(int argc, char** argv)
{
	// the first part is the same as the visual odometry
	ParameterReader pd;
	int startIndex = atoi(pd.getData("start_index").c_str());
	int endIndex = atoi(pd.getData("end_index").c_str());

	// all keyframes are kept here
	vector<FRAME> keyframes;
	// initialize
	cout << "Initializing ..." << endl;
	int currIndex = startIndex;
	FRAME currFrame = readFrame(currIndex, pd);

	string detector = pd.getData("detector");
	string descriptor = pd.getData("descriptor");
	CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
	computeKeyPointsAndDesp(currFrame, detector, descriptor);
	PointCloud::Ptr cloud = image2PointCloud(currFrame.rgb, currFrame.depth, camera);
	
	//pcl::visualization::CloudViewer viewer("viewer");
	
	// whether to display the point cloud
	//bool visualize = pd.getData("visualize_pointcloud")==string("yes");
	
	/********************************
	// new: g2o initialization
	********************************/
	// initialize the solver
	SlamLinearSolver* linearSolver = new SlamLinearSolver();
	linearSolver->setBlockOrdering(false);
	SlamBlockSolver* blockSolver = new SlamBlockSolver(linearSolver);
	g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(blockSolver);
	
	g2o::SparseOptimizer globalOptimizer; // this is the optimizer we will actually use
	globalOptimizer.setAlgorithm(solver);
	// don't print debug output
	globalOptimizer.setVerbose(false);

	// add the first vertex to globalOptimizer
	g2o::VertexSE3* v = new g2o::VertexSE3();
	v->setId(currIndex);
	v->setEstimate(Eigen::Isometry3d::Identity());
	v->setFixed(true);
	globalOptimizer.addVertex(v);

	keyframes.push_back(currFrame);

	// new: the keyframe selection threshold
	double keyframe_threshold = atof(pd.getData("keyframe_threshold").c_str());

	bool check_loop_closure = pd.getData("check_loop_closure")==string("yes");
	
	for(currIndex = startIndex+1; currIndex < endIndex; currIndex++)
	{
		cout << "Reading files " << currIndex << endl;
		FRAME currFrame = readFrame(currIndex, pd);
		computeKeyPointsAndDesp(currFrame, detector, descriptor);
		// match this frame against the last keyframe in keyframes
		CHECK_RESULT result = checkKeyframes(keyframes.back(), currFrame, globalOptimizer);
		switch(result)
		{
		case NOT_MATCHED:
			// no match; skip this frame
			cout << RED"Not enough inliers." << endl;
			break;
		case TOO_FAR_AWAY:
			// too far away; probably a mis-estimate
			cout << RED"Too far away, may be an error." << endl;
			break;
		case TOO_CLOSE:
			// too close; no need to add it
			cout << RESET"Too close, not a keyframe." << endl;
			break;
		case KEYFRAME:
			cout << GREEN"This is a new keyframe." << endl;
			// check for loop closures
			if(check_loop_closure)
			{
				checkNearbyLoops(keyframes, currFrame, globalOptimizer);
				checkRandomLoops(keyframes, currFrame, globalOptimizer);
			}
			keyframes.push_back(currFrame);
			break;
		default:
			break;
		}
	}	
	//cout << "共有" << keyframes.size() << "关键帧" << endl;
	// 优化
	cout << "optimizing pose graph, vertices:" << globalOptimizer.vertices().size() << endl;
	globalOptimizer.save("../data/result_before.g2o");
	globalOptimizer.initializeOptimization();
	globalOptimizer.optimize(100);
	globalOptimizer.save("../data/result_after.g2o");
	cout << "Optimization done." << endl;

	// stitch the point cloud map
	cout << "saving the point cloud map..." << endl;
	PointCloud::Ptr output (new PointCloud()); // the global map
	PointCloud::Ptr tmp (new PointCloud());

	pcl::VoxelGrid<PointT> voxel; // voxel filter to control the map resolution
	pcl::PassThrough<PointT> pass; // pass-through filter on z: RGB-D cameras have a limited depth range, so drop points that are too far away
	
	pass.setFilterFieldName("z");
	pass.setFilterLimits(0.0, 4.0); // drop everything farther than 4m
	
	double gridsize = atof(pd.getData("voxel_grid").c_str()); // the map resolution, tunable in parameters.txt
	voxel.setLeafSize(gridsize, gridsize, gridsize);

	for(size_t i=0; i<keyframes.size(); i++)
	{
		// fetch a frame from g2o
		g2o::VertexSE3* vertex = dynamic_cast<g2o::VertexSE3*>(globalOptimizer.vertex(keyframes[i].frameID));
		Eigen::Isometry3d pose = vertex->estimate(); // the optimized pose of this frame
		PointCloud::Ptr newCloud = image2PointCloud(keyframes[i].rgb, keyframes[i].depth, camera); // convert to a point cloud
		// filtering
		voxel.setInputCloud(newCloud);
		voxel.filter(*tmp);
		pass.setInputCloud(tmp);
		pass.filter(*newCloud);
		// transform the cloud and add it to the global map
		pcl::transformPointCloud(*newCloud, *tmp, pose.matrix());
		*output += *tmp;
		tmp->clear();
		newCloud->clear();
	}

	voxel.setInputCloud(output);
	voxel.filter(*tmp);
	// save the map
	pcl::io::savePCDFile("../data/result.pcd", *tmp);

	cout << "Final map is saved." << endl;
	//globalOptimizer.clear();

	return 0;
}

FRAME readFrame(int index, ParameterReader& pd)
{
	FRAME f;
	string rgbDir = pd.getData("rgb_dir");
	string depthDir = pd.getData("depth_dir");
	string rgbExt = pd.getData("rgb_extension");
	string depthExt = pd.getData("depth_extension");

	stringstream ss;
	ss << rgbDir << index << rgbExt;
	string filename;
	ss >> filename;
	f.rgb = cv::imread(filename);
	
	ss.clear();
	filename.clear();
	ss << depthDir << index << depthExt;
	ss >> filename;
	f.depth = cv::imread(filename, -1);
	f.frameID = index;
	return f;
}

double normofTransform(cv::Mat rvec, cv::Mat tvec)
{
	return fabs(min(cv::norm(rvec), 2*M_PI-cv::norm(rvec))) + fabs(cv::norm(tvec));
}

CHECK_RESULT checkKeyframes(FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops)
{
	static ParameterReader pd;
	static int min_inliers = atoi(pd.getData("min_inliers").c_str());
	static double max_norm = atof(pd.getData("max_norm").c_str());
	static double keyframe_threshold = atof(pd.getData("keyframe_threshold").c_str());
	static double max_norm_lp = atof(pd.getData("max_norm_lp").c_str());
	static CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
	//static g2o::RobustKernel* robustKernel = g2o::RobustKernelFactory::instance()->construct("Cauchy");
	// compare f1 and f2
	RESULT_OF_PNP result = estimateMotion(f1, f2, camera);
	// too few inliers: drop this frame
	if(result.inliers<min_inliers)return NOT_MATCHED;
	// check whether the motion is too large
	double norm = normofTransform(result.rvec, result.tvec);
	if(is_loops == false)
	{
		if(norm >= max_norm)return TOO_FAR_AWAY;
	}
	else
	{
		if(norm >= max_norm_lp)return TOO_FAR_AWAY;
	}
	// too close
	if(norm <= keyframe_threshold)return TOO_CLOSE;
	// add this vertex, and the edge linking it to the previous frame, to g2o
	// vertex part
	// the vertex only needs its id set
	if(is_loops == false)
	{
		g2o::VertexSE3 *v = new g2o::VertexSE3();
		v->setId(f2.frameID);
		v->setEstimate(Eigen::Isometry3d::Identity());
		opti.addVertex(v);
	}
	// edge part
	g2o::EdgeSE3* edge = new g2o::EdgeSE3();
	// the ids of the two vertices this edge connects
	//edge->vertices()[0] = opti.vertex(f1.frameID);
	//edge->vertices()[1] = opti.vertex(f2.frameID);
	edge->setVertex(0, opti.vertex(f1.frameID));
	edge->setVertex(1, opti.vertex(f2.frameID));
	//edge->setRobustKernel(robustKernel);
	edge->setRobustKernel(new g2o::RobustKernelHuber());
	// information matrix
	Eigen::Matrix<double, 6, 6> information = Eigen::Matrix<double, 6, 6>::Identity();
	// the information matrix is the inverse of the covariance; it encodes our prior confidence in the edge
	// since the pose is 6D, the information matrix is 6x6; assume the position and angle estimates both have precision 0.1 and are independent
	// the covariance is then diagonal with entries 0.01, and the information matrix has 100 on the diagonal
	information(0,0) = information(1,1) = information(2,2) = 100;
	information(3,3) = information(4,4) = information(5,5) = 100;
	// the angular entries can also be set larger, to express more confidence in the angle estimates
	edge->setInformation(information);
	// the edge's measurement is the result of the PnP solution
	Eigen::Isometry3d T = cvMat2Eigen(result.rvec, result.tvec);
	edge->setMeasurement(T.inverse());
	// add the edge to the graph
	opti.addEdge(edge);
	return KEYFRAME;
}

void checkNearbyLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti)
{
	static ParameterReader pd;
	static int nearby_loops = atoi(pd.getData("nearby_loops").c_str());
	// simply test currFrame against the last few frames in frames
	if(frames.size() <= (size_t)nearby_loops)
	{
		// not enough keyframes; check them all
		for(size_t i=0; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
	else
	{
		// check the nearest ones
		for(size_t i=frames.size()-nearby_loops; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
}

void checkRandomLoops(vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti)
{
	static ParameterReader pd;
	static int random_loops = atoi(pd.getData("random_loops").c_str());
	// seed the random number generator only once
	static bool seeded = false;
	if(!seeded) { srand((unsigned int) time(NULL)); seeded = true; }
	// randomly pick some frames to test
	if(frames.size() <= (size_t)random_loops)
	{
		// not enough keyframes; check them all
		for(size_t i=0; i<frames.size(); i++)
		{
			checkKeyframes(frames[i], currFrame, opti, true);
		}
	}
	else
	{
		// randomly check loops
		for(int i=0; i<random_loops; i++)
		{
			int index = rand()%frames.size();
			checkKeyframes(frames[index], currFrame, opti, true);
		}
	}
}

Note in particular: the author replaced the SIFT features with ORB features, which required the following changes:

1. The parameters in parameters.txt (SIFT -> ORB);

2. In slamBase, the nonfree initialization in computeKeyPointsAndDesp is commented out:

      //cv::initModule_nonfree();

3. In slamBase, the matcher used for feature matching in estimateMotion:

      cv::FlannBasedMatcher matcher;     ->     cv::BFMatcher matcher;    These are approximate nearest-neighbour matching and brute-force matching, respectively.

C++: BFMatcher::BFMatcher(int normType=NORM_L2, bool crossCheck=false )

Parameters:
normType has four options: NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. L1 and L2 are for SIFT and SURF; the latter two are for ORB.
crossCheck: if true, cross-check filtering is enabled (a usage sketch follows).
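
A minimal sketch of how this constructor is used with binary descriptors such as ORB (the helper name matchORB and the descriptor arguments are placeholders, not part of the project code):

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Brute-force matching of two ORB descriptor matrices.
// NORM_HAMMING is the appropriate distance for uchar (binary) descriptors;
// crossCheck=true turns on the cross-check filter described below.
std::vector<cv::DMatch> matchORB( const cv::Mat& desp1, const cv::Mat& desp2 )
{
    cv::BFMatcher matcher( cv::NORM_HAMMING, true );
    std::vector<cv::DMatch> matches;
    matcher.match( desp1, desp2, matches );
    return matches;
}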

Wrong matches easily occur during matching, and they come in two main kinds: a feature is matched to the wrong feature, or a feature in one image has no true counterpart in the other. Common ways to remove wrong matches are:

  • Cross-check filtering: if a feature in the first image matches a feature in the second, the check is also run in reverse, matching that feature in the second image back against the first; only if the reverse match agrees is the pair accepted. OpenCV's BFMatcher already includes this filter: construct it as BFMatcher matcher(NORM_L2, true), i.e. set the second constructor argument to true.
  • Ratio test: with knnMatch you can set K = 2 so that the two nearest descriptors are returned for each query; a match is accepted only when the best match's distance is sufficiently smaller than the second-best one's (see the sketch after this list).
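
A minimal sketch of the ratio test for ORB descriptors (the helper name ratioTest and the 0.7 ratio are illustrative choices, not from the project code):

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Lowe's ratio test: keep a match only when its distance is clearly smaller
// than that of the second-best candidate.
std::vector<cv::DMatch> ratioTest( const cv::Mat& desp1, const cv::Mat& desp2 )
{
    // no crossCheck here: knnMatch with k=2 needs the plain matcher
    cv::BFMatcher matcher( cv::NORM_HAMMING );
    std::vector< std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch( desp1, desp2, knnMatches, 2 ); // two nearest neighbours per query
    std::vector<cv::DMatch> goodMatches;
    for ( size_t i=0; i<knnMatches.size(); i++ )
    {
        if ( knnMatches[i].size() == 2 &&
             knnMatches[i][0].distance < 0.7f * knnMatches[i][1].distance )
            goodMatches.push_back( knnMatches[i][0] );
    }
    return goodMatches;
}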

In addition, the descriptors produced by feature extraction have different data types: some are float (e.g. SurfDescriptorExtractor, SiftDescriptorExtractor), others are uchar (e.g. ORB, BriefDescriptorExtractor). The matching methods for float descriptors are FlannBasedMatcher, NORM_L1, and NORM_L2; those for uchar descriptors are NORM_HAMMING and NORM_HAMMING2. Hence ORB and BRIEF descriptors can only be matched with the brute-force matcher.

4. In the main program, the robust kernel used in graph optimization changed:

      //static g2o::RobustKernel* robustKernel = g2o::RobustKernelFactory::instance()->construct("Cauchy");

      //edge->setRobustKernel(robustKernel);

      edge->setRobustKernel(new g2o::RobustKernelHuber());

along with the corresponding header files:

      //#include <g2o/core/robust_kernel_factory.h>

      #include <g2o/core/robust_kernel_impl.h>

Graph optimization also has a notion of kernel functions. The reason for introducing them is that SLAM can produce wrong edges. Data association in SLAM gave researchers headaches for a long time: because of scene changes, noise, and so on, the robot cannot be certain that a landmark it sees really is a particular landmark in its database. What if it misrecognizes one, and an edge that should never have been added ends up in the graph?

Well, then the optimizer is in trouble. It sees an edge with a huge error and tries to adjust the estimates of the nodes that edge connects, to satisfy its unreasonable demand. Because this edge's error really is large, it often drowns out the influence of the correct edges, making the optimizer concentrate on fixing one wrong value.

Hence kernel functions. A kernel guarantees that no single edge's error can grow so large that it swamps the others. Concretely, the squared-norm error metric is replaced by a function that grows more slowly, while remaining smooth (otherwise it could not be differentiated!). Because they make the whole optimization more robust, they are called robust kernels.

Many robust kernels are piecewise functions that grow only linearly for large inputs, for example the Cauchy kernel and the Huber kernel.

Cauchy kernel: $k\left ( x,y \right )=\frac{1}{\left \| x-y \right \|^{2}/\sigma +1}$

Huber kernel (standard form): $\rho \left ( e \right )=\begin{cases}\frac{1}{2}e^{2}, & \left | e \right |\le \delta \\ \delta \left ( \left | e \right |-\frac{1}{2}\delta \right ), & \left | e \right |> \delta \end{cases}$
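
In g2o the kernel width can also be set explicitly instead of using the default. A small sketch (the helper setHuberKernel and the suggested width are illustrative, not part of the code above):

#include <g2o/core/robust_kernel_impl.h>
#include <g2o/types/slam3d/types_slam3d.h>

// Attach a Huber kernel with an explicit width delta to a pose-graph edge.
// Errors below delta are penalized quadratically, above it only linearly,
// so a single bad loop-closure edge cannot dominate the optimization.
void setHuberKernel( g2o::EdgeSE3* edge, double delta )
{
    g2o::RobustKernelHuber* rk = new g2o::RobustKernelHuber();
    rk->setDelta( delta ); // e.g. delta = 1.0 as a starting point
    edge->setRobustKernel( rk );
}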
