14 Lectures on Visual SLAM — Reading and Programming Notes, Chapter 7: Visual Odometry (1)

The Feature-Point Method

ORB Features

  1. The key point of the ORB (Oriented FAST and Rotated BRIEF) feature is "Oriented FAST", a FAST corner augmented with an orientation.
    Its descriptor is called BRIEF (Binary Robust Independent Elementary Features).
  2. FAST key points
    FAST is a corner detector: it looks for places where the local pixel intensity changes sharply, and it is very fast.
    The main steps are:
    (1) For every pixel p in the image, let its intensity be Ip.
    (2) Set a threshold T (for example, 20% of Ip).
    (3) Take the 16 pixels on a circle of radius 3 centered at p.
    (4) If N consecutive pixels on this circle are all brighter than Ip+T or all darker than Ip-T, p is declared a corner (N is usually 12, giving FAST-12).
  • FAST-12 can be sped up considerably with a pre-test: for every pixel, check only pixels 1, 5, 9 and 13 on the circle first; only if at least 3 of these 4 are simultaneously brighter than Ip+T or darker than Ip-T can the pixel still be a corner, otherwise it is rejected immediately.
  • Raw FAST corners tend to cluster, so after the first pass a non-maximum suppression step keeps only the corners with the locally maximal response, avoiding over-dense corners.
  • The number of raw FAST corners is large and unpredictable, while we usually want a fixed number of features per image. We therefore specify a target number N, compute the Harris response of every raw FAST corner, and keep the N corners with the largest responses as the final set.
  • FAST corners are not scale invariant, because the circle radius is fixed at 3; scale invariance is obtained by building an image pyramid and detecting corners on every level.
  • FAST corners carry no orientation; the orientation is computed with the intensity centroid method.
    The centroid is the center of mass of the image patch with the pixel intensities as weights. The steps are:
    (1) In a small image patch B, define the patch moments as
    m_pq = Σ_{(x,y)∈B} x^p y^q I(x,y),  p, q ∈ {0, 1}
    (2) The moments give the centroid of the patch:
    C = ( m10/m00 , m01/m00 )
    (3) Connect the geometric center O of the patch with the centroid C; the direction of the vector OC defines the orientation of the feature point:
    θ = arctan( m01 / m10 )

This is Oriented FAST.
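To make the intensity centroid concrete, here is a rough sketch (my own illustration, not the book's or OpenCV's code) that computes the orientation of one keypoint patch; the half-patch size of 15 is an arbitrary choice, and the patch is assumed to lie fully inside a CV_8UC1 image.

#include <opencv2/core/core.hpp>
#include <cmath>

double patchOrientation(const cv::Mat& img, const cv::Point2f& pt, int half_patch = 15)
{
    double m01 = 0.0, m10 = 0.0;
    for (int dy = -half_patch; dy <= half_patch; ++dy)
        for (int dx = -half_patch; dx <= half_patch; ++dx)
        {
            // moments m10 = sum x*I(x,y) and m01 = sum y*I(x,y), taken around the keypoint
            uchar I = img.at<uchar>(cvRound(pt.y) + dy, cvRound(pt.x) + dx);
            m10 += dx * I;
            m01 += dy * I;
        }
    // angle of the vector from the patch center O to the intensity centroid C
    return std::atan2(m01, m10);
}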

  1. The BRIEF descriptor
    BRIEF is a binary descriptor: its description vector consists of many 0s and 1s, where each bit encodes the intensity comparison of two pixels near the key point (say p and q): the bit is 1 if p is brighter than q, and 0 otherwise. Picking 128 such pairs (p, q) gives a 128-dimensional binary vector. The pairs can be chosen in several ways; a common choice is to pick them randomly.
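A toy sketch of this idea (illustration only, not OpenCV's implementation): 128 random point pairs are drawn once inside a window around the keypoint (the 31x31 window and the fixed seed are assumptions of the sketch), and bit i is 1 if the first point of pair i is brighter than the second.

#include <opencv2/core/core.hpp>
#include <bitset>
#include <random>
#include <utility>
#include <vector>

std::vector<std::pair<cv::Point, cv::Point>> makePattern()   // drawn once, reused for every keypoint
{
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> d(-15, 15);            // pairs inside a 31x31 window (assumed)
    std::vector<std::pair<cv::Point, cv::Point>> pattern;
    for (int i = 0; i < 128; ++i)
        pattern.push_back({cv::Point(d(rng), d(rng)), cv::Point(d(rng), d(rng))});
    return pattern;
}

std::bitset<128> toyBrief(const cv::Mat& img, const cv::Point2f& kp,
                          const std::vector<std::pair<cv::Point, cv::Point>>& pattern)
{
    std::bitset<128> desc;
    for (int i = 0; i < 128; ++i)
    {
        uchar Ip = img.at<uchar>(cvRound(kp.y) + pattern[i].first.y,  cvRound(kp.x) + pattern[i].first.x);
        uchar Iq = img.at<uchar>(cvRound(kp.y) + pattern[i].second.y, cvRound(kp.x) + pattern[i].second.x);
        desc[i] = (Ip > Iq);                                   // 1 if p is brighter than q, else 0
    }
    return desc;
}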

Feature Matching

When the number of feature points is moderate, brute-force matching can be used: the similarity of two features is measured by the distance between their descriptors.
The descriptor distance can be the Euclidean distance or the Hamming distance; the Hamming distance between two binary strings is the number of bits in which they differ.
When there are many feature points, brute-force matching becomes expensive, and the FLANN (Fast Library for Approximate Nearest Neighbors) algorithm can be used for matching instead.
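The Hamming distance is simply the popcount of the XOR of the two bit strings, as the first function below shows. The second function is an assumption about typical OpenCV usage (not something the book prescribes): a FLANN-based matcher needs an LSH index to work on binary descriptors, and the parameters 12/20/2 are a common choice rather than a rule.

#include <opencv2/features2d/features2d.hpp>
#include <bitset>
#include <vector>

int hammingDistance(const std::bitset<128>& a, const std::bitset<128>& b)
{
    return static_cast<int>((a ^ b).count());   // number of differing bits
}

void flannMatchORB(const cv::Mat& desc1, const cv::Mat& desc2, std::vector<cv::DMatch>& matches)
{
    cv::FlannBasedMatcher matcher(cv::makePtr<cv::flann::LshIndexParams>(12, 20, 2));
    matcher.match(desc1, desc2, matches);
}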

Practice: Feature Extraction and Matching

Complete code:
feature_extracture.cpp

//2019.08.10
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace std;
using namespace cv;

int main(int argc,char **argv)
{
	if(argc != 3)
	{
		cout<<"usage: feature_extraction img1 img2"<<endl;
		return 1;
	}

	//Read the two input images
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);

	//Initialization
	std::vector<KeyPoint> keypoints_1,keypoints_2;
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//Step 1: detect Oriented FAST corner locations
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//Step 2: compute BRIEF descriptors at the detected corners
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//Step 3: match the BRIEF descriptors of the two images using the Hamming distance
	vector<DMatch> matches;
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//Step 4: filter the matched pairs
	double min_dist=10000,max_dist=0;
	//Find the minimum and maximum distance over all matches, i.e. the most and least similar pairs
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//A match is considered wrong when the descriptor distance exceeds twice the minimum distance,
	//but the minimum can be very small, so an empirical lower bound is used as well
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//Step 5: draw the matching results
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);

	waitKey(0);
	
	return 0;

}

CMakeLists.txt

#2019.08.10
# Use the C++11 standard
set(CMAKE_CXX_FLAGS "-std=c++11")

# Find the OpenCV package
find_package(OpenCV REQUIRED)

# Add the OpenCV include directories
include_directories(${OpenCV_INCLUDE_DIRS})


add_executable(feature_extracture feature_extracture.cpp)

# Link against the OpenCV libraries
target_link_libraries(feature_extracture ${OpenCV_LIBS})

Run result:
[screenshots of the detected ORB features and of the raw and filtered match sets omitted]

2D-2D: Epipolar Geometry

Derivation of the Epipolar Constraint

Suppose the same camera, after a change of pose, captures two frames I1 and I2, as in the figure below; together they form an epipolar geometry:

The projections of a 3D point P onto the two frames are p1 and p2, and the two camera centers are O1 and O2.
The plane determined by P, O1 and O2 is called the epipolar plane.
The intersections e1 and e2 of the line O1O2 with the image planes I1 and I2 are called the epipoles.
O1O2 is called the baseline.
l1 and l2 are called the epipolar lines.
[figure: the epipolar geometry of the two views]
The epipolar constraint is derived as follows. Writing x1 = K^-1 p1 and x2 = K^-1 p2 for the normalized coordinates of the two projections, the projection relations p1 = KP and p2 = K(RP + t) give x2 ≃ R x1 + t up to the unknown depths. Left-multiplying by t^ removes the translation term, and left-multiplying again by x2^T makes the left side vanish, which yields the epipolar constraint
x2^T t^ R x1 = 0,   or in pixel coordinates   p2^T K^-T t^ R K^-1 p1 = 0
Defining the essential matrix E = t^R and the fundamental matrix F = K^-T E K^-1, the constraint can be written compactly as x2^T E x1 = 0 and p2^T F p1 = 0.
The epipolar constraint concisely captures the spatial relation between two matched points. Camera pose estimation then becomes the following two steps:

  1. Estimate E or F from the pixel positions of the matched points.
  2. Recover R and t from E or F.

Solving for the Essential Matrix E

By its definition E = t^R, the essential matrix is a 3x3 matrix with the following properties:

  • E is defined through the epipolar constraint, which is an equation equal to zero, so multiplying E by any non-zero constant still satisfies the constraint; E is therefore only defined up to scale.
  • The singular values of E are necessarily of the form [σ, σ, 0]^T, which is called the internal property of the essential matrix.
  • Translation and rotation have three degrees of freedom each, so t^R has six in total; because of the scale equivalence, E effectively has only 5 degrees of freedom.
    Since E has only 5 degrees of freedom, it can in principle be estimated from as few as 5 point pairs, but this makes the computation considerably more involved, so the most common approach is the eight-point algorithm (a small code sketch follows after this list).
    The eight-point algorithm is derived as follows.
    Consider one matched pair with normalized coordinates x1 = [u1, v1, 1]^T and x2 = [u2, v2, 1]^T. Their epipolar constraint, written out in these coordinates, is
    [equation: the epipolar constraint expanded in u1, v1, u2, v2]
    Stack the entries of E into the vector e = [e1, e2, e3, e4, e5, e6, e7, e8, e9]^T.
    The constraint then becomes an inner product: [u1u2, u1v2, u1, v1u2, v1v2, v1, u2, v2, 1] · e = 0.
    This is the constraint of a single pair; with eight pairs we obtain a linear system of the form
    [equation: the 8x9 system A e = 0 stacked from the eight matches]
    Its coefficient matrix is built from the feature positions (u, v) and has size 8x9; if it has full rank, e (and hence E) can be solved up to scale.
    In practice, the E obtained from this linear system need not satisfy the internal property. The usual remedy is to take the SVD of the estimated E and replace its singular value matrix by
    diag( (σ1+σ2)/2 , (σ1+σ2)/2 , 0 )
    As we can see, this amounts to projecting E back onto the manifold it is supposed to lie on.
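A sketch of the eight-point algorithm with Eigen (my own illustration; the example programs below simply call cv::findFundamentalMat / cv::findEssentialMat): one row per match, following the row layout written above, with e taken as E stacked row by row, followed by the projection onto the [σ, σ, 0] form.

#include <Eigen/Dense>
#include <vector>

Eigen::Matrix3d eightPoint(const std::vector<Eigen::Vector2d>& x1,   // normalized coords, image 1
                           const std::vector<Eigen::Vector2d>& x2)   // normalized coords, image 2
{
    const int n = static_cast<int>(x1.size());                       // n >= 8
    Eigen::MatrixXd A(n, 9);
    for (int i = 0; i < n; ++i)
    {
        double u1 = x1[i].x(), v1 = x1[i].y(), u2 = x2[i].x(), v2 = x2[i].y();
        A.row(i) << u1*u2, u1*v2, u1, v1*u2, v1*v2, v1, u2, v2, 1.0;
    }
    // e is the right singular vector of A belonging to the smallest singular value
    Eigen::JacobiSVD<Eigen::MatrixXd> svdA(A, Eigen::ComputeFullV);
    Eigen::Matrix<double, 9, 1> e = svdA.matrixV().col(8);
    Eigen::Matrix3d E;
    E << e(0), e(1), e(2),
         e(3), e(4), e(5),
         e(6), e(7), e(8);

    // Enforce the internal property: replace the singular values by (s, s, 0)
    Eigen::JacobiSVD<Eigen::Matrix3d> svdE(E, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Vector3d sv = svdE.singularValues();
    double s = 0.5 * (sv(0) + sv(1));
    return svdE.matrixU() * Eigen::Vector3d(s, s, 0.0).asDiagonal() * svdE.matrixV().transpose();
}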

Recovering the Camera Motion R, t from E

R and t can be recovered from E by singular value decomposition:
E = U Σ V^T
where U and V are orthogonal matrices and Σ is the singular value matrix.

By the internal property of E, for one and the same E there are two possible pairs (t, R):

t1^ = U R_Z(π/2) Σ U^T,   R1 = U R_Z(π/2)^T V^T
t2^ = U R_Z(-π/2) Σ U^T,   R2 = U R_Z(-π/2)^T V^T
where R_Z(π/2) denotes a rotation by 90° about the Z axis. In addition, since -E and E are equivalent, negating either t also gives a valid decomposition, so there are four possible solutions in total.
[figure: the four possible (R, t) configurations of the two cameras]
Of these four cases, only the first one places P at positive depth in both cameras. It is therefore enough to take any single point, triangulate it with each of the four solutions, and check its depth in both cameras to decide which solution is the correct one.
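In OpenCV (3.x is assumed here), the four candidates can be obtained with cv::decomposeEssentialMat, and cv::recoverPose — which the program below uses — already performs this positive-depth (cheirality) check internally. A minimal sketch:

#include <opencv2/calib3d/calib3d.hpp>

void candidatesFromE(const cv::Mat& E)
{
    cv::Mat R1, R2, t;
    cv::decomposeEssentialMat(E, R1, R2, t);
    // the four candidates are (R1, t), (R1, -t), (R2, t), (R2, -t)
}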

The Homography Matrix

The homography describes the transformation between two images of points that lie on a common plane.
Consider a matched pair p1, p2 in images I1 and I2 whose 3D points lie on a plane satisfying n^T P + d = 0.
Rearranged: -n^T P / d = 1.
From the projection relation,
p2 ≃ K(RP + t) = K(RP + t·(-n^T P/d)) = K(R - t n^T/d)P = K(R - t n^T/d)K^-1 p1
Let H = K(R - t n^T/d)K^-1; then p2 ≃ H p1, so H directly maps the image coordinates p1 to p2 and is called the homography matrix.
H is a 3x3 matrix with entries h1 ... h9 (row by row). Writing p2 ≃ H p1 out component-wise gives
u2 = (h1 u1 + h2 v1 + h3) / (h7 u1 + h8 v1 + h9)
v2 = (h4 u1 + h5 v1 + h6) / (h7 u1 + h8 v1 + h9)
Note that the equality holds only up to a non-zero factor. In practice one usually multiplies by a non-zero factor so that h9 = 1; rearranging then yields
h1 u1 + h2 v1 + h3 - h7 u1 u2 - h8 v1 u2 = u2
h4 u1 + h5 v1 + h6 - h7 u1 v2 - h8 v1 v2 = v2
So one matched pair contributes two constraints, and the homography, with 8 degrees of freedom, can be computed from 4 matched feature pairs by solving the resulting linear system (if h9 happens to be 0, the right-hand side becomes 0):
[equation: the linear system in h1 ... h8 built from the four matched pairs]
This approach treats H as a vector and recovers it by solving a linear system in that vector; it is also known as the Direct Linear Transform (DLT).

Like E, the homography must be decomposed to obtain the rotation R and translation t. The decomposition likewise returns four sets of R and t, together with the normal vector of the scene-point plane in each case. If the mapped points are known to have positive depth (i.e. they lie in front of the camera), two of the solutions can be discarded. The remaining two require additional prior information to disambiguate; usually the normal of the scene plane is assumed known, e.g. if the scene plane is parallel to the image plane, the normal is approximately n^T = (0, 0, 1)^T.
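OpenCV (≥ 3.0, an assumption) also provides a decomposition routine for H; a sketch that returns the candidate rotations, translations and plane normals, which then have to be pruned with the positive-depth / known-normal reasoning described above:

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

void candidatesFromH(const cv::Mat& H, const cv::Mat& K)
{
    std::vector<cv::Mat> Rs, ts, normals;
    int n = cv::decomposeHomographyMat(H, K, Rs, ts, normals);   // at most 4 candidate (R, t, n) triples
    (void)n;
}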

Some Remarks on the Essential Matrix E

  • E itself is equivalent up to scale, and R is a rotation matrix in SO(3) with its own constraints, so the scale ambiguity is attributed to t: multiplying t by any non-zero constant still gives a valid decomposition. t is therefore usually normalized to unit length.
  • Normalizing t directly causes the scale ambiguity of monocular vision. Normalizing t between the two images fixes the scale; taking this fixed scale as the unit, the camera motion and the 3D feature positions are computed in that unit. This is called the initialization of monocular SLAM, and the units of the trajectory and map after initialization are exactly the scale fixed at initialization. Monocular SLAM therefore must be initialized, the two initialization images must have a certain amount of translation between them, and the subsequent trajectory and map are measured in units of that translation.
  • Monocular initialization cannot be done from pure rotation; some translation is required. If t = 0, then E = 0 and the epipolar constraint is satisfied trivially, so R cannot be recovered from it.

Practice: Estimating Camera Motion from the Epipolar Constraint

pose_estimation_2d2d.cpp

//2019.08.11
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace std;
using namespace cv;


void find_feature_matches(Mat& img_1, Mat& img_2, std::vector<KeyPoint>& keypoints_1, std::vector<KeyPoint>& keypoints_2, vector<DMatch>& matches)
{
	//初始化
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//第一步:检测oriented FAST角点位置
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//第二步:根据角点位置计算BRIEF描述子
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//第三步:对两幅图像中的BRIEF描述子进行匹配,使用Hamming距离
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//第四步:匹配点对筛选
	double min_dist=10000,max_dist=0;
	//找出所有匹配之间的最小距离和最大距离,即最相似的和最不相似的两组点之间的距离
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//当描述子之间的距离大于两倍的最小距离时,即认为匹配有误
	//但有时候最小距离可能非常小,设置一个经验值作为下限
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//第五步:绘制匹配结果
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	cout<<matches.size()<<endl;
	matches = good_matches;
	cout<<matches.size()<<endl;
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);
	waitKey(0);
}


void pose_estimation_2d2d(std::vector<KeyPoint> keypoints_1, std::vector<KeyPoint> keypoints_2, vector<DMatch> matches, Mat& R,Mat& t)
{
	//Camera intrinsics, TUM Freiburg2
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	
	//Convert the matched keypoints into vector<Point2f>
	vector<Point2f> points1;
	vector<Point2f> points2;
	
	for(int i=0;i<(int)matches.size();i++)
	{
		points1.push_back(keypoints_1[matches[i].queryIdx].pt);
		points2.push_back(keypoints_2[matches[i].trainIdx].pt);

	}

	//Compute the fundamental matrix F
	Mat fundamental_matrix;
	fundamental_matrix = findFundamentalMat(points1,points2,CV_FM_8POINT);
	cout<<"fundamental_matrix is "<<endl<<fundamental_matrix<<endl;

	//Compute the essential matrix E
	Point2d principal_point(325.1,249.7);//principal point, calibrated value of the TUM dataset
	int focal_length = 521;
	Mat essential_matrix;
	essential_matrix = findEssentialMat(points1,points2,focal_length,principal_point,RANSAC);
	cout<<"essential_matrix is"<<endl<<essential_matrix<<endl;

	//Compute the homography matrix
	Mat homography_matrix = findHomography(points1,points2,RANSAC,3,noArray(),2000,0.99);
	cout<<"homography_matrix is"<<endl<<homography_matrix<<endl;

	//Recover R and t from the essential matrix E
	recoverPose(essential_matrix,points1,points2,R,t,focal_length,principal_point);
	cout<<"R is"<<endl<<R<<endl;
	cout<<"t is"<<endl<<t<<endl;

}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}


int main(int argc,char **argv)
{
	if(argc != 3)
	{
		cout<<"usage: feature_extraction img1 img2"<<endl;
		return 1;
	}

	//读取图像
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);

	//初始化
	std::vector<KeyPoint> keypoints_1,keypoints_2;
	vector<DMatch> matches;
	find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
	cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;

	//Estimate the motion between the two images
	Mat R,t;
	pose_estimation_2d2d(keypoints_1,keypoints_2,matches,R,t);
	
	//Verify E = t^R * scale
	Mat t_x = (Mat_<double>(3,3)<<
			0,			-t.at<double>(2,0),		t.at<double>(1,0),
			t.at<double>(2,0),	0,				-t.at<double>(0,0),
			-t.at<double>(1,0),	t.at<double>(0,0),		0);
	cout<<"t^R="<<endl<<t_x*R<<endl;
	
	//Verify the epipolar constraint
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	for(DMatch m:matches)
	{
		Point2d pt1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
		Mat y1 = (Mat_<double>(3,1)<<pt1.x,pt1.y,1);
		Point2d pt2 = pixel2cam(keypoints_2[m.trainIdx].pt, K);
		Mat y2 = (Mat_<double>(3,1)<<pt2.x,pt2.y,1);
		Mat d = y2.t()*t_x*R*y1;
		cout <<"epipolar constraint = "<<d<<endl;
	}

	return 0;
}
 

CMakeLists.txt

#2019.08.11
# 添加C++标准文件
set(CMAKE_CXX_FLAGS "-std=c++11")

#寻找opencv库
find_package(OpenCV REQUIRED)

#添加头文件
include_directories(${OpenCV_INCLUDE_DIRS})


add_executable(pose_estimation_2d2d pose_estimation_2d2d.cpp)

#链接OpenCV库
target_link_libraries(pose_estimation_2d2d ${OpenCV_LIBS})

Run result:

[screenshots of the computed F, E, H matrices and the recovered R, t omitted]

Triangulation

Triangulation is used to measure the depth of a space point. With the definitions from the epipolar geometry section, let x1, x2 be the normalized coordinates of the two matched features; they satisfy s1 x1 = s2 R x2 + t.
Here R and t are known, and the unknowns are the two depths s1 and s2. Left-multiplying both sides by x1^ gives
s1 x1^ x1 = 0 = s2 x1^ R x2 + x1^ t
This equation can be solved for s2, and substituting s2 back into the original relation gives s1.
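A small Eigen sketch of this step (illustration only; the program below instead calls cv::triangulatePoints): s2 follows in least squares from 0 = s2·(x1^ R x2) + x1^ t, and s1 by substituting s2 back.

#include <Eigen/Dense>

void twoViewDepths(const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                   const Eigen::Vector3d& x1, const Eigen::Vector3d& x2,
                   double& s1, double& s2)
{
    Eigen::Matrix3d x1_hat;                 // the skew-symmetric matrix x1^, so that x1^ * x1 = 0
    x1_hat <<      0.0, -x1.z(),  x1.y(),
               x1.z(),      0.0, -x1.x(),
              -x1.y(),  x1.x(),      0.0;

    Eigen::Vector3d a = x1_hat * R * x2;    // coefficient of s2
    Eigen::Vector3d b = -x1_hat * t;        // right-hand side
    s2 = a.dot(b) / a.dot(a);               // least-squares solution of a*s2 = b

    s1 = x1.dot(s2 * R * x2 + t) / x1.dot(x1);   // substitute back into s1*x1 = s2*R*x2 + t
}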

**The trade-off in triangulation:** triangulation comes from translation; only with translation is there a triangle in the epipolar geometry, and only then can we triangulate at all. The larger the translation t, the smaller the depth uncertainty, but the harder feature matching becomes; the smaller the translation, the easier the matching, but the larger the depth uncertainty.
[figure: relation between the amount of translation (parallax) and the depth uncertainty]

Practice: Triangulation

triangulation.cpp

//2019.08.11
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace std;
using namespace cv;


void find_feature_matches(Mat& img_1, Mat& img_2, std::vector<KeyPoint>& keypoints_1, std::vector<KeyPoint>& keypoints_2, vector<DMatch>& matches)
{
	//初始化
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//第一步:检测oriented FAST角点位置
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//第二步:根据角点位置计算BRIEF描述子
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//第三步:对两幅图像中的BRIEF描述子进行匹配,使用Hamming距离
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//第四步:匹配点对筛选
	double min_dist=10000,max_dist=0;
	//找出所有匹配之间的最小距离和最大距离,即最相似的和最不相似的两组点之间的距离
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//当描述子之间的距离大于两倍的最小距离时,即认为匹配有误
	//但有时候最小距离可能非常小,设置一个经验值作为下限
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//第五步:绘制匹配结果
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	cout<<matches.size()<<endl;
	matches = good_matches;
	cout<<matches.size()<<endl;
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);
	waitKey(0);
}


void pose_estimation_2d2d(std::vector<KeyPoint> keypoints_1, std::vector<KeyPoint> keypoints_2, vector<DMatch> matches, Mat& R,Mat& t)
{
	//相机内参,TUM Freiburg2
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	
	//把匹配点转换成vector<Point2f>的形式
	vector<Point2f> points1;
	vector<Point2f> points2;
	
	for(int i=0;i<(int)matches.size();i++)
	{
		points1.push_back(keypoints_1[matches[i].queryIdx].pt);
		points2.push_back(keypoints_2[matches[i].trainIdx].pt);

	}

	//计算基础矩阵F
	Mat fundamental_matrix;
	fundamental_matrix = findFundamentalMat(points1,points2,CV_FM_8POINT);
	cout<<"fundamental_matrix is "<<endl<<fundamental_matrix<<endl;

	//计算本质矩阵E
	Point2d principal_point(325.1,249.7);//光心,TUM dataset标定值
	int focal_length = 521;
	Mat essential_matrix;
	essential_matrix = findEssentialMat(points1,points2,focal_length,principal_point,RANSAC);
	cout<<"essential_matrix is"<<endl<<essential_matrix<<endl;

	//计算单应矩阵
	Mat homography_matrix = findHomography(points1,points2,RANSAC,3,noArray(),2000,0.99);
	cout<<"homography_matrix is"<<endl<<homography_matrix<<endl;

	//从本质矩阵E中恢复R和t
	recoverPose(essential_matrix,points1,points2,R,t,focal_length,principal_point);
	cout<<"R is"<<endl<<R<<endl;
	cout<<"t is"<<endl<<t<<endl;

}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}


void triangulation(const vector<KeyPoint>& keypoint_1, const vector<KeyPoint>& keypoint_2, const std::vector<DMatch>& matches,
		 const Mat& R, const Mat& t, vector<Point3d>& points)
{
	Mat T1 = (Mat_<float>(3,4)<<
		1,0,0,0,
		0,1,0,0,
		0,0,1,0);
	Mat T2 = (Mat_<float>(3,4)<<
		R.at<double>(0,0),	R.at<double>(0,1),	R.at<double>(0,2),	t.at<double>(0,0),
		R.at<double>(1,0),	R.at<double>(1,1),	R.at<double>(1,2),	t.at<double>(1,0),
		R.at<double>(2,0),	R.at<double>(2,1),	R.at<double>(2,2),	t.at<double>(2,0));

	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);

	vector<Point2f> pts_1,pts_2;
	for(DMatch m:matches)
	{
		//Convert pixel coordinates to normalized camera coordinates
		pts_1.push_back(pixel2cam(keypoint_1[m.queryIdx].pt, K));
		pts_2.push_back(pixel2cam(keypoint_2[m.trainIdx].pt, K));
	}

	Mat pts_4d;
	cv::triangulatePoints(T1,T2,pts_1,pts_2,pts_4d);

	//Convert homogeneous coordinates to non-homogeneous
	for(int i=0;i<pts_4d.cols;i++)
	{
		Mat x = pts_4d.col(i);
		x /= x.at<float>(3,0);//normalize the homogeneous coordinate
		Point3d p(x.at<float>(0,0), x.at<float>(1,0), x.at<float>(2,0));
		points.push_back(p);
	}
	
}



int main(int argc,char **argv)
{
	if(argc != 3)
	{
		cout<<"usage: feature_extraction img1 img2"<<endl;
		return 1;
	}

	//读取图像
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);

	//初始化
	std::vector<KeyPoint> keypoints_1,keypoints_2;
	vector<DMatch> matches;
	find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
	cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;

	//估计两张图像间的运动	
	Mat R,t;
	pose_estimation_2d2d(keypoints_1,keypoints_2,matches,R,t);
	
	//验证 E=t^R*scale
	Mat t_x = (Mat_<double>(3,3)<<
			0,			-t.at<double>(2,0),		t.at<double>(1,0),
			t.at<double>(2,0),	0,				-t.at<double>(0,0),
			-t.at<double>(1,0),	t.at<double>(0,0),		0);
	cout<<"t^R="<<endl<<t_x*R<<endl;
	
	//验证对极约束
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	for(DMatch m:matches)
	{
		Point2d pt1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
		Mat y1 = (Mat_<double>(3,1)<<pt1.x,pt1.y,1);
		Point2d pt2 = pixel2cam(keypoints_2[m.trainIdx].pt, K);
		Mat y2 = (Mat_<double>(3,1)<<pt2.x,pt2.y,1);
		Mat d = y2.t()*t_x*R*y1;
		cout <<"epipolar constraint = "<<d<<endl;
	}

	//Triangulate
	vector<Point3d> points;
	triangulation(keypoints_1, keypoints_2, matches, R, t, points);

	//Check the reprojection of the triangulated points against the features
	for(int i=0;i<matches.size();i++)
	{
		Point2d pt1_cam = pixel2cam(keypoints_1[matches[i].queryIdx].pt, K);
		Point2d pt1_cam_3d(points[i].x/points[i].z, points[i].y/points[i].z);
	

		cout<<"point in the first camera frame:"<<pt1_cam<<endl;
		cout<<"point projected from 3D"<<pt1_cam_3d<<",d="<<points[i].z<<endl;
		if(points[i].z>=14.98 && points[i].z<=14.99)cout<<"++++++++++++++++++++++++++++++"<<endl;

		//second image
		Point2f pt2_cam = pixel2cam(keypoints_2[matches[i].trainIdx].pt, K);
		Mat pt2_trans = R*(Mat_<double>(3,1)<<points[i].x,points[i].y,points[i].z) + t;
		pt2_trans /= pt2_trans.at<double>(2,0);
		cout<<"point in the second camera frame:"<<pt2_cam<<endl;
		cout<<"point reprojected from second frame:"<<pt2_trans.t()<<endl;
	}

	return 0;
}
 

CMakeLists.txt

#2019.08.11
# 添加C++标准文件
set(CMAKE_CXX_FLAGS "-std=c++11")

#寻找opencv库
find_package(OpenCV REQUIRED)

#添加头文件
include_directories(${OpenCV_INCLUDE_DIRS})


add_executable(triangulation triangulation.cpp)

#链接OpenCV库
target_link_libraries(triangulation ${OpenCV_LIBS})

Run result:
[screenshot of the triangulation output omitted]

3D-2D: PnP

PnP (Perspective-n-Point) estimates the motion from 3D-2D point correspondences: it describes how to estimate the camera pose when n 3D points and their projected positions are known. The 3D-2D formulation does not need the epipolar constraint and obtains a good motion estimate from relatively few matches, which makes it one of the most important pose estimation methods.

Direct Linear Transform (DLT)

In DLT, the transform matrix T is treated as 12 unknowns, ignoring the constraints between them.
Derivation of the DLT solution:
[equation: s (u, v, 1)^T = T (X, Y, Z, 1)^T with T a 3x4 matrix; eliminating s gives two linear equations per 3D-2D pair in the 12 entries of T, so 6 pairs determine T]

Because the 12 entries of T are treated as unrelated unknowns, the T obtained this way may not satisfy the SE(3) constraints; in particular the rotation part may not lie in SO(3). This can be fixed by taking the left 3x3 block of T and applying, e.g., a QR decomposition to it, which re-projects the DLT result onto the SE(3) manifold.

P3P

As the name suggests, P3P estimates the pose from three points whose projections from the world frame into the pixel frame are known. Let A, B, C be 3D points in the world frame with corresponding 2D image points a, b, c, plus a verification pair D-d, and let O be the camera's optical center:
[figure: the P3P geometry with the optical center O, the image points a, b, c and the world points A, B, C]

There are three pairs of corresponding triangles sharing the apex O: Oab-OAB, Obc-OBC, Oac-OAC.
Applying the law of cosines to these triangles gives:
OA² + OB² - 2·OA·OB·cos<a,b> = AB²
OB² + OC² - 2·OB·OC·cos<b,c> = BC²
OA² + OC² - 2·OA·OC·cos<a,c> = AC²

Dividing all three equations by OC² and writing x = OA/OC, y = OB/OC:
x² + y² - 2xy·cos<a,b> = AB²/OC²
y² + 1² - 2y·cos<b,c> = BC²/OC²
x² + 1² - 2x·cos<a,c> = AC²/OC²

Writing v = AB²/OC², uv = BC²/OC², wv = AC²/OC², we get:
x² + y² - 2xy·cos<a,b> - v = 0
y² + 1² - 2y·cos<b,c> - uv = 0
x² + 1² - 2x·cos<a,c> - wv = 0

Solving the first equation for v and substituting it into the other two gives:
(1-u)y² - ux² - cos<b,c>·y + 2uxy·cos<a,b> + 1 = 0
(1-w)x² - wy² - cos<a,c>·x + 2wxy·cos<a,b> + 1 = 0
The three cosines cos<a,b>, cos<b,c>, cos<a,c> can be computed from the 2D coordinates and the optical center, and u = BC²/AB², w = AC²/AB² follow from the 3D points in the world frame. In these two equations only x and y are unknown, so they form a system of two quadratic equations in two unknowns. Solving it (which is rather involved) yields up to four possible solutions, and the verification pair D-d is used to pick the most plausible one.

Once x and y are solved, v is known, hence OC is known; from the definitions of x and y, OA and OB follow, which gives the coordinates of A', B', C' in the camera frame.
P3P thus uses the projections of three world points into the image to compute the camera-frame coordinates of those three points, and then solves for the camera pose with ICP.
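In OpenCV, P3P is available through solvePnP with the SOLVEPNP_P3P flag; it expects exactly 4 correspondences, the fourth playing the role of the verification point D-d mentioned above. A minimal sketch under those assumptions, with K the camera intrinsics:

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

void p3pPose(const std::vector<cv::Point3f>& pts_3d,   // exactly 4 world points
             const std::vector<cv::Point2f>& pts_2d,   // their 4 pixel projections
             const cv::Mat& K, cv::Mat& R, cv::Mat& t)
{
    cv::Mat r;
    cv::solvePnP(pts_3d, pts_2d, K, cv::Mat(), r, t, false, cv::SOLVEPNP_P3P);
    cv::Rodrigues(r, R);                                // rotation vector -> rotation matrix
}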

Bundle Adjustment

In PnP, the BA formulation minimizes the reprojection error.
The projection of a space point P_i under the pose ξ (the Lie algebra of T) is s_i u_i = K exp(ξ^) P_i.
The least-squares problem is
ξ* = argmin_ξ (1/2) Σ_i || u_i - (1/s_i) K exp(ξ^) P_i ||²
Nonlinear optimization over the pose: perturbing the pose by δξ and linearizing the error e = u - (1/s) K exp(ξ^) P gives the 2x6 Jacobian (with P' = (X', Y', Z') the point in the camera frame, translation part of the perturbation first):
∂e/∂δξ = - [ fx/Z'   0       -fx·X'/Z'²   -fx·X'Y'/Z'²      fx + fx·X'²/Z'²   -fx·Y'/Z'
              0      fy/Z'   -fy·Y'/Z'²   -fy - fy·Y'²/Z'²   fy·X'Y'/Z'²       fy·X'/Z' ]
Nonlinear optimization over the space point: the derivative of the error with respect to P is
∂e/∂P = - [ fx/Z'   0       -fx·X'/Z'²
             0      fy/Z'   -fy·Y'/Z'² ] · R
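To connect these formulas with code, here is a sketch (my own illustration; the g2o edge used below evaluates this kind of error internally, with its own parameter ordering) of the reprojection error and its 2x6 Jacobian for a single observation:

#include <Eigen/Dense>

void reprojErrorAndJacobian(const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                            const Eigen::Vector3d& P,      // 3D point in the first frame
                            const Eigen::Vector2d& uv,     // measured pixel in the second image
                            double fx, double fy, double cx, double cy,
                            Eigen::Vector2d& e, Eigen::Matrix<double, 2, 6>& J)
{
    Eigen::Vector3d Pc = R * P + t;                        // P' = (X', Y', Z') in the camera frame
    double X = Pc.x(), Y = Pc.y(), Z = Pc.z(), Z2 = Z * Z;

    // e = u_measured - u_projected
    e << uv.x() - (fx * X / Z + cx),
         uv.y() - (fy * Y / Z + cy);

    // The 2x6 Jacobian de/d(delta_xi) written above, translation part first
    J << -fx / Z,      0.0, fx * X / Z2,      fx * X * Y / Z2, -(fx + fx * X * X / Z2),  fx * Y / Z,
              0.0, -fy / Z, fy * Y / Z2, fy + fy * Y * Y / Z2,       -fy * X * Y / Z2,   -fy * X / Z;
}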

Practice: Solving PnP

Solving the pose with EPnP

pose_estimation_3d2d.cpp

//2019.08.12
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace std;
using namespace cv;


void find_feature_matches(Mat& img_1, Mat& img_2, std::vector<KeyPoint>& keypoints_1, std::vector<KeyPoint>& keypoints_2, vector<DMatch>& matches)
{
	//初始化
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//第一步:检测oriented FAST角点位置
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//第二步:根据角点位置计算BRIEF描述子
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//第三步:对两幅图像中的BRIEF描述子进行匹配,使用Hamming距离
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//第四步:匹配点对筛选
	double min_dist=10000,max_dist=0;
	//找出所有匹配之间的最小距离和最大距离,即最相似的和最不相似的两组点之间的距离
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//当描述子之间的距离大于两倍的最小距离时,即认为匹配有误
	//但有时候最小距离可能非常小,设置一个经验值作为下限
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//第五步:绘制匹配结果
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	cout<<matches.size()<<endl;
	matches = good_matches;
	cout<<matches.size()<<endl;
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);
	waitKey(0);
}


void pose_estimation_2d2d(std::vector<KeyPoint> keypoints_1, std::vector<KeyPoint> keypoints_2, vector<DMatch> matches, Mat& R,Mat& t)
{
	//相机内参,TUM Freiburg2
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	
	//把匹配点转换成vector<Point2f>的形式
	vector<Point2f> points1;
	vector<Point2f> points2;
	
	for(int i=0;i<(int)matches.size();i++)
	{
		points1.push_back(keypoints_1[matches[i].queryIdx].pt);
		points2.push_back(keypoints_2[matches[i].trainIdx].pt);

	}

	//计算基础矩阵F
	Mat fundamental_matrix;
	fundamental_matrix = findFundamentalMat(points1,points2,CV_FM_8POINT);
	cout<<"fundamental_matrix is "<<endl<<fundamental_matrix<<endl;

	//计算本质矩阵E
	Point2d principal_point(325.1,249.7);//光心,TUM dataset标定值
	int focal_length = 521;
	Mat essential_matrix;
	essential_matrix = findEssentialMat(points1,points2,focal_length,principal_point,RANSAC);
	cout<<"essential_matrix is"<<endl<<essential_matrix<<endl;

	//计算单应矩阵
	Mat homography_matrix = findHomography(points1,points2,RANSAC,3,noArray(),2000,0.99);
	cout<<"homography_matrix is"<<endl<<homography_matrix<<endl;

	//从本质矩阵E中恢复R和t
	recoverPose(essential_matrix,points1,points2,R,t,focal_length,principal_point);
	cout<<"R is"<<endl<<R<<endl;
	cout<<"t is"<<endl<<t<<endl;

}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}


void triangulation(const vector<KeyPoint>& keypoint_1, const vector<KeyPoint>& keypoint_2, const std::vector<DMatch>& matches,
		 const Mat& R, const Mat& t, vector<Point3d>& points)
{
	Mat T1 = (Mat_<float>(3,4)<<
		1,0,0,0,
		0,1,0,0,
		0,0,1,0);
	Mat T2 = (Mat_<float>(3,4)<<
		R.at<double>(0,0),	R.at<double>(0,1),	R.at<double>(0,2),	t.at<double>(0,0),
		R.at<double>(1,0),	R.at<double>(1,1),	R.at<double>(1,2),	t.at<double>(1,0),
		R.at<double>(2,0),	R.at<double>(2,1),	R.at<double>(2,2),	t.at<double>(2,0));

	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);

	vector<Point2f> pts_1,pts_2;
	for(DMatch m:matches)
	{
		//将像素坐标转换为相机坐标
		pts_1.push_back(pixel2cam(keypoint_1[m.queryIdx].pt, K));
		pts_2.push_back(pixel2cam(keypoint_2[m.trainIdx].pt, K));
	}

	Mat pts_4d;
	cv::triangulatePoints(T1,T2,pts_1,pts_2,pts_4d);

	//转换成非齐次坐标
	for(int i=0;i<pts_4d.cols;i++)
	{
		Mat x = pts_4d.col(i);
		x /= x.at<float>(3,0);//归一化
		Point3d p(x.at<float>(0,0), x.at<float>(1,0), x.at<float>(2,0));
		points.push_back(p);
	}
	
}





int main(int argc,char **argv)
{
	if(argc != 5)
	{
		cout<<"usage: pose_estimation_3d2d img1 img2 depth1 depth2"<<endl;
		return 1;
	}

	//读取图像
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);


	std::vector<KeyPoint> keypoints_1,keypoints_2;
	vector<DMatch> matches;
	find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
	cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;

	//Build the 3D points from the first depth image
	Mat d1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);//the depth image is a 16-bit unsigned, single-channel image
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	vector<Point3f> pts_3d;
	vector<Point2f> pts_2d;
	for(DMatch m:matches)
	{
		unsigned short d = d1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
		if(d == 0)//bad depth
			continue;
		float dd = d/1000.0;
		Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
		pts_3d.push_back(Point3f(p1.x*dd, p1.y*dd, dd));
		pts_2d.push_back(keypoints_2[m.trainIdx].pt);
	}

	cout<<"3d-2d pairs:"<<pts_3d.size()<<endl;
	
	//Solve PnP
	Mat r,t;
	//Call OpenCV's PnP solver; EPNP, DLS and other methods can be chosen
	solvePnP(pts_3d,pts_2d,K,Mat(),r,t,false,cv::SOLVEPNP_EPNP);
	Mat R;
	cv::Rodrigues(r,R);//r is a rotation vector; convert it to the rotation matrix R via the Rodrigues formula
	
	cout <<"R="<<endl<<R<<endl;
	cout <<"t="<<endl<<t<<endl;


	return 0;
}
 

CMakeLists.txt

#2019.08.12
# 添加C++标准文件
set(CMAKE_CXX_FLAGS "-std=c++11")

#寻找opencv库
find_package(OpenCV REQUIRED)

#添加头文件
include_directories(${OpenCV_INCLUDE_DIRS})


add_executable(pose_estimation_3d2d pose_estimation_3d2d.cpp)

#链接OpenCV库
target_link_libraries(pose_estimation_3d2d ${OpenCV_LIBS})

Run result:
[screenshot of the EPnP output omitted]

Refining the result with BA

The PnP result is taken as the initial value of a BA problem, which is then optimized nonlinearly to obtain a more accurate pose.
The graph optimization of PnP BA can be drawn as:
[figure: the BA graph, with the second camera pose and the space points as vertices and the reprojections as edges]

In this graph, the vertices and edges are chosen as follows:

  1. Vertices: the Lie algebra pose se(3) of the second camera, and the 3D positions P of all feature points.
  2. Edges: the projection of each 3D point into the second camera, described by the observation equation
    z = h(ξ, P)

Since the pose of the first camera is fixed to the identity, it is not included among the optimization variables; the first camera is drawn dashed to indicate that it is not considered.

Complete code:
pose_estimation_3d2d.cpp

//2019.08.12
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <chrono>


using namespace std;
using namespace cv;


void find_feature_matches(Mat& img_1, Mat& img_2, std::vector<KeyPoint>& keypoints_1, std::vector<KeyPoint>& keypoints_2, vector<DMatch>& matches)
{
	//初始化
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//第一步:检测oriented FAST角点位置
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//第二步:根据角点位置计算BRIEF描述子
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//第三步:对两幅图像中的BRIEF描述子进行匹配,使用Hamming距离
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//第四步:匹配点对筛选
	double min_dist=10000,max_dist=0;
	//找出所有匹配之间的最小距离和最大距离,即最相似的和最不相似的两组点之间的距离
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//当描述子之间的距离大于两倍的最小距离时,即认为匹配有误
	//但有时候最小距离可能非常小,设置一个经验值作为下限
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//第五步:绘制匹配结果
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	cout<<matches.size()<<endl;
	matches = good_matches;
	cout<<matches.size()<<endl;
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);
	waitKey(0);
}


void pose_estimation_2d2d(std::vector<KeyPoint> keypoints_1, std::vector<KeyPoint> keypoints_2, vector<DMatch> matches, Mat& R,Mat& t)
{
	//相机内参,TUM Freiburg2
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	
	//把匹配点转换成vector<Point2f>的形式
	vector<Point2f> points1;
	vector<Point2f> points2;
	
	for(int i=0;i<(int)matches.size();i++)
	{
		points1.push_back(keypoints_1[matches[i].queryIdx].pt);
		points2.push_back(keypoints_2[matches[i].trainIdx].pt);

	}

	//计算基础矩阵F
	Mat fundamental_matrix;
	fundamental_matrix = findFundamentalMat(points1,points2,CV_FM_8POINT);
	cout<<"fundamental_matrix is "<<endl<<fundamental_matrix<<endl;

	//计算本质矩阵E
	Point2d principal_point(325.1,249.7);//光心,TUM dataset标定值
	int focal_length = 521;
	Mat essential_matrix;
	essential_matrix = findEssentialMat(points1,points2,focal_length,principal_point,RANSAC);
	cout<<"essential_matrix is"<<endl<<essential_matrix<<endl;

	//计算单应矩阵
	Mat homography_matrix = findHomography(points1,points2,RANSAC,3,noArray(),2000,0.99);
	cout<<"homography_matrix is"<<endl<<homography_matrix<<endl;

	//从本质矩阵E中恢复R和t
	recoverPose(essential_matrix,points1,points2,R,t,focal_length,principal_point);
	cout<<"R is"<<endl<<R<<endl;
	cout<<"t is"<<endl<<t<<endl;

}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}


void triangulation(const vector<KeyPoint>& keypoint_1, const vector<KeyPoint>& keypoint_2, const std::vector<DMatch>& matches,
		 const Mat& R, const Mat& t, vector<Point3d>& points)
{
	Mat T1 = (Mat_<float>(3,4)<<
		1,0,0,0,
		0,1,0,0,
		0,0,1,0);
	Mat T2 = (Mat_<float>(3,4)<<
		R.at<double>(0,0),	R.at<double>(0,1),	R.at<double>(0,2),	t.at<double>(0,0),
		R.at<double>(1,0),	R.at<double>(1,1),	R.at<double>(1,2),	t.at<double>(1,0),
		R.at<double>(2,0),	R.at<double>(2,1),	R.at<double>(2,2),	t.at<double>(2,0));

	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);

	vector<Point2f> pts_1,pts_2;
	for(DMatch m:matches)
	{
		//将像素坐标转换为相机坐标
		pts_1.push_back(pixel2cam(keypoint_1[m.queryIdx].pt, K));
		pts_2.push_back(pixel2cam(keypoint_2[m.trainIdx].pt, K));
	}

	Mat pts_4d;
	cv::triangulatePoints(T1,T2,pts_1,pts_2,pts_4d);

	//转换成非齐次坐标
	for(int i=0;i<pts_4d.cols;i++)
	{
		Mat x = pts_4d.col(i);
		x /= x.at<float>(3,0);//归一化
		Point3d p(x.at<float>(0,0), x.at<float>(1,0), x.at<float>(2,0));
		points.push_back(p);
	}
	
}


void bundleAdjustment(const vector<Point3f> points_3d, const vector<Point2f> points_2d, const Mat& K,Mat& R, Mat& t)
{
	//Initialize g2o
	//Block solver: in each error term the pose has dimension 6 and the landmark dimension 3
	typedef g2o::BlockSolver<g2o::BlockSolverTraits<6,3>> Block;
	//Linear solver for the normal equations: use the sparse CSparse solver
	Block::LinearSolverType* linearSolver = new g2o::LinearSolverCSparse<Block::PoseMatrixType>();
	//Build the block solver on top of the linear solver
	Block* solver_ptr = new Block(linearSolver);
	//Use the Levenberg-Marquardt algorithm on the block solver as the final solver
	g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
	//The graph optimizer
	g2o::SparseOptimizer optimizer;
	//Attach the solver to the optimizer
	optimizer.setAlgorithm(solver);


	//Vertex setup
	//The camera pose vertex
	g2o::VertexSE3Expmap* pose = new g2o::VertexSE3Expmap();
	//Rotation matrix used for the initial estimate, filled from R
	Eigen::Matrix3d R_mat;
	R_mat<<
		R.at<double>(0,0),	R.at<double>(0,1),	R.at<double>(0,2),
		R.at<double>(1,0),	R.at<double>(1,1),	R.at<double>(1,2),
		R.at<double>(2,0),	R.at<double>(2,1),	R.at<double>(2,2);
	//ID of the pose vertex
	pose->setId(0);
	//Initial estimate of the pose vertex (the PnP result)
	pose->setEstimate(g2o::SE3Quat(R_mat,Eigen::Vector3d(t.at<double>(0,0), t.at<double>(1,0), t.at<double>(2,0))));
	//Add the pose vertex to the optimizer
	optimizer.addVertex(pose);


	//index is the ID of the space-point vertices, starting from 1 (0 is the pose)
	int index = 1;
	for(const Point3f p:points_3d)//loop over all landmarks
	{
		//a space-point vertex
		g2o::VertexSBAPointXYZ* point = new g2o::VertexSBAPointXYZ();
		//give it the next ID
		point->setId(index++);
		//initialize it with the current landmark position
		point->setEstimate(Eigen::Vector3d(p.x,p.y,p.z));
		//marginalize the point during solving
		point->setMarginalized(true);
		//add it to the optimizer
		optimizer.addVertex(point);
	}


	//Camera parameters: focal length, principal point, baseline 0
	g2o::CameraParameters* camera = new g2o::CameraParameters(K.at<double>(0,0),Eigen::Vector2d(K.at<double>(0,2),K.at<double>(1,2)),0);//(honestly, I did not fully understand this one)
	camera->setId(0);
	optimizer.addParameter(camera);


	//Edge setup
	index=1;
	for(const Point2f p:points_2d)
	{
		//a projection edge
		g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
		//ID of this edge
		edge->setId(index);
		//vertex 0: the space-point vertex with the same index
		edge->setVertex(0,dynamic_cast<g2o::VertexSBAPointXYZ*>(optimizer.vertex(index)));
		//vertex 1: the pose vertex
		edge->setVertex(1,pose);
		//the measurement is the observed pixel
		edge->setMeasurement(Eigen::Vector2d(p.x,p.y));
		//use camera parameter 0
		edge->setParameterId(0,0);
		//information (inverse covariance) matrix
		edge->setInformation(Eigen::Matrix2d::Identity());
		//add the edge to the optimizer
		optimizer.addEdge(edge);
		index++;
	}

	
	chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
	//Print per-iteration information
	optimizer.setVerbose(true);
	//Initialize the optimizer
	optimizer.initializeOptimization();
	//Run up to 100 iterations
	optimizer.optimize(100);
	chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
	chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>> ( t2-t1 );
	cout<<"optimization costs time: "<<time_used.count() <<" seconds."<<endl;
	
	cout<<endl<<"after optimization:"<<endl;
	cout<<"T="<<endl<<Eigen::Isometry3d(pose->estimate()).matrix()<<endl;
	

}





int main(int argc,char **argv)
{
	if(argc != 5)
	{
		cout<<"usage: pose_estimation_3d2d img1 img2 depth1 depth2"<<endl;
		return 1;
	}

	//读取图像
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);


	std::vector<KeyPoint> keypoints_1,keypoints_2;
	vector<DMatch> matches;
	find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
	cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;

	//建立3D点
	Mat d1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);//深度图为16位无符号数,单通道图像
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	vector<Point3f> pts_3d;
	vector<Point2f> pts_2d;
	for(DMatch m:matches)
	{
		unsigned short d = d1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
		if(d == 0)//bad depth
			continue;
		float dd = d/1000.0;
		Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
		pts_3d.push_back(Point3f(p1.x*dd, p1.y*dd, dd));
		pts_2d.push_back(keypoints_2[m.trainIdx].pt);
	}

	cout<<"3d-2d pairs:"<<pts_3d.size()<<endl;
	
	//PnP求解
	Mat r,t;
	//调用openCV的PnP求解,可选择EPNP,DLS等方法
	solvePnP(pts_3d,pts_2d,K,Mat(),r,t,false,cv::SOLVEPNP_EPNP);
	Mat R;
	cv::Rodrigues(r,R);//r为旋转向量形式,利用Rodrigues公式转换为旋转矩阵R
	
	cout <<"R="<<endl<<R<<endl;
	cout <<"t="<<endl<<t<<endl;

	cout<<"calling bundle adjustment"<<endl;
	bundleAdjustment ( pts_3d, pts_2d, K, R, t );

	return 0;
}

CMakeLists.txt

cmake_minimum_required( VERSION 2.8 )
project( vo1 )

set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )

# Add the cmake module path so that g2o can be found
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

find_package( OpenCV 3.1 REQUIRED )
# find_package( OpenCV REQUIRED ) # use this if in OpenCV2 
find_package( G2O REQUIRED )
find_package( CSparse REQUIRED )

include_directories( 
    ${OpenCV_INCLUDE_DIRS} 
    ${G2O_INCLUDE_DIRS}
    ${CSPARSE_INCLUDE_DIR}
    "/usr/include/eigen3/"
)

add_executable( feature_extraction feature_extraction.cpp  )
target_link_libraries( feature_extraction ${OpenCV_LIBS} )

# add_executable( pose_estimation_2d2d pose_estimation_2d2d.cpp extra.cpp ) # use this if in OpenCV2 
add_executable( pose_estimation_2d2d pose_estimation_2d2d.cpp )
target_link_libraries( pose_estimation_2d2d ${OpenCV_LIBS} )

# add_executable( triangulation triangulation.cpp extra.cpp) # use this if in opencv2 
add_executable( triangulation triangulation.cpp )
target_link_libraries( triangulation ${OpenCV_LIBS} )

add_executable( pose_estimation_3d2d pose_estimation_3d2d.cpp )
target_link_libraries( pose_estimation_3d2d 
   ${OpenCV_LIBS}
   ${CSPARSE_LIBRARY}
   g2o_core g2o_stuff g2o_types_sba g2o_csparse_extension
)

add_executable( pose_estimation_3d3d pose_estimation_3d3d.cpp )
target_link_libraries( pose_estimation_3d3d 
   ${OpenCV_LIBS}
   g2o_core g2o_stuff g2o_types_sba g2o_csparse_extension 
   ${CSPARSE_LIBRARY}
)

[screenshot of the run result omitted]

3D-3D: ICP

The SVD Method

Define the error of the i-th pair as e_i = p_i - (R p_i' + t) and minimize the sum of squared errors (1/2) Σ_i || p_i - (R p_i' + t) ||².
Let p and p' be the centroids of the two point sets and q_i = p_i - p, q_i' = p_i' - p' the de-centered points. The problem then decouples: first solve
R* = argmin_R (1/2) Σ_i || q_i - R q_i' ||²
and then set t* = p - R* p'.
The optimal rotation is obtained by forming W = Σ_i q_i q_i'^T, computing its SVD W = U Σ V^T, and taking R = U V^T (when W is of full rank); this is exactly what pose_estimation_3d3d() in the code below implements.

The Nonlinear Optimization Method

Parametrize the pose with the Lie algebra and minimize min_ξ (1/2) Σ_i || p_i - exp(ξ^) p_i' ||² iteratively; the derivative of each error term with respect to the pose perturbation uses the same SE(3) perturbation model as in the PnP case, giving a 3x6 Jacobian per point.

Practice: Solving ICP

Combining the SVD method with nonlinear optimization

The SVD method is used to obtain an initial pose, which is then refined by nonlinear optimization.
For the nonlinear part a custom unary edge class is defined for the 3D-3D constraint: the measurement dimension is 3 (a 3D point), the measurement type is Eigen::Vector3d, and the connected vertex type is g2o::VertexSE3Expmap:
class EdgeProjectXYZRGBDPoseOnly : public g2o::BaseUnaryEdge<3,Eigen::Vector3d,g2o::VertexSE3Expmap>
Its Jacobian matches the earlier derivation: the derivative of the error with respect to the camera pose is a 3x6 matrix.

Complete code:
pose_estimation_3d3d.cpp

//2019.08.12
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/core/base_vertex.h>
#include <g2o/core/base_unary_edge.h>
#include <g2o/solvers/csparse/linear_solver_csparse.h>
#include <g2o/types/sba/types_six_dof_expmap.h>
#include <chrono>


using namespace std;
using namespace cv;


void find_feature_matches(Mat& img_1, Mat& img_2, std::vector<KeyPoint>& keypoints_1, std::vector<KeyPoint>& keypoints_2, vector<DMatch>& matches)
{
	//初始化
	Mat descriptors_1,descriptors_2;
	Ptr<ORB> orb = ORB::create(500,1.2f,8,31,0,2,ORB::HARRIS_SCORE,31,20);

	//第一步:检测oriented FAST角点位置
	orb->detect(img_1,keypoints_1);
	orb->detect(img_2,keypoints_2);

	//第二步:根据角点位置计算BRIEF描述子
	orb->compute(img_1,keypoints_1,descriptors_1);
	orb->compute(img_2,keypoints_2,descriptors_2);
	

	Mat outimg1;
	drawKeypoints(img_1,keypoints_1,outimg1,Scalar::all(-1),DrawMatchesFlags::DEFAULT);
	imshow("ORB特征点",outimg1);

	//第三步:对两幅图像中的BRIEF描述子进行匹配,使用Hamming距离
	BFMatcher matcher(NORM_HAMMING);
	matcher.match(descriptors_1,descriptors_2,matches);

	//第四步:匹配点对筛选
	double min_dist=10000,max_dist=0;
	//找出所有匹配之间的最小距离和最大距离,即最相似的和最不相似的两组点之间的距离
	for(int i=0;i<descriptors_1.rows;i++)
	{
		double dist = matches[i].distance;
		if(dist < min_dist)min_dist = dist;
		if(dist > max_dist)max_dist = dist;
	}

	printf("--Max dist:%f\n",max_dist);
	printf("--Min dist:%f\n",min_dist);	
	//当描述子之间的距离大于两倍的最小距离时,即认为匹配有误
	//但有时候最小距离可能非常小,设置一个经验值作为下限
	std::vector<DMatch> good_matches;
	for(int i=0;i<descriptors_1.rows;i++)
	{
		if(matches[i].distance <= max(2*min_dist,30.0))
		{
			good_matches.push_back(matches[i]);
		}
	}

	//第五步:绘制匹配结果
	Mat img_match;
	Mat img_goodmatch;
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,matches,img_match);
	drawMatches(img_1,keypoints_1,img_2,keypoints_2,good_matches,img_goodmatch);
	cout<<matches.size()<<endl;
	matches = good_matches;
	cout<<matches.size()<<endl;
	imshow("所有匹配点对",img_match);
	imshow("优化后匹配点对",img_goodmatch);
	waitKey(0);
}


//Convert pixel coordinates to normalized camera coordinates using the intrinsics K
Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}


class EdgeProjectXYZRGBDPoseOnly : public g2o::BaseUnaryEdge<3,Eigen::Vector3d,g2o::VertexSE3Expmap>
{
public:
	EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
	EdgeProjectXYZRGBDPoseOnly(const Eigen::Vector3d& point): _point(point){}

	virtual void computeError()
	{
		const g2o::VertexSE3Expmap* pose = static_cast<const g2o::VertexSE3Expmap*>( _vertices[0]);
		//measurement is p, point is p'
		_error = _measurement - pose->estimate().map(_point);
	}

	virtual void linearizeOplus()
	{
		g2o::VertexSE3Expmap* pose = static_cast<g2o::VertexSE3Expmap*>(_vertices[0]);
		g2o::SE3Quat T(pose->estimate());
		Eigen::Vector3d xyz_trans = T.map(_point);
		double x = xyz_trans[0];
		double y = xyz_trans[1];
		double z = xyz_trans[2];

		_jacobianOplusXi(0,0) = 0;
		_jacobianOplusXi(0,1) = -z;
		_jacobianOplusXi(0,2) = y;
		_jacobianOplusXi(0,3) = -1;
		_jacobianOplusXi(0,4) = 0;
		_jacobianOplusXi(0,5) = 0;

		_jacobianOplusXi(1,0) = z;
		_jacobianOplusXi(1,1) = 0;
		_jacobianOplusXi(1,2) = -x;
		_jacobianOplusXi(1,3) = 0;
		_jacobianOplusXi(1,4) = -1;
		_jacobianOplusXi(1,5) = 0;

		_jacobianOplusXi(2,0) = -y;
		_jacobianOplusXi(2,1) = x;
		_jacobianOplusXi(2,2) = 0;
		_jacobianOplusXi(2,3) = 0;
		_jacobianOplusXi(2,4) = 0;
		_jacobianOplusXi(2,5) = -1;

	}


	bool read(istream& in){ return false; }
	bool write(ostream& out)const{ return false; }

protected:
	Eigen::Vector3d _point;

};


void bundleAdjustment(const vector<Point3f> pts1, const vector<Point3f> pts2,Mat& R, Mat& t)
{
	//初始化g2o
	//定义矩阵块,每个误差项中pose维度为6.landmark维度为3
	typedef g2o::BlockSolver<g2o::BlockSolverTraits<6,3>> Block;
	//定义线性方程求解器,选用稀疏的增量方程
	Block::LinearSolverType* linearSolver = new g2o::LinearSolverCSparse<Block::PoseMatrixType>();
	//用线性方程求解器构造矩阵块求解器
	Block* solver_ptr = new Block(linearSolver);
	//在矩阵块求解器上选用LM优化算法,生成最终的求解器
	g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg(solver_ptr);
	//定义图优化器对象
	g2o::SparseOptimizer optimizer;
	//为图优化器设定求解器
	optimizer.setAlgorithm(solver);


	//vertex配置
	//定义相机位姿节点
	g2o::VertexSE3Expmap* pose = new g2o::VertexSE3Expmap();
	//为相机位姿节点设置ID
	pose->setId(0);
	//为相机位姿节点设置初始估计值,因为是凸的,初始值并不重要
	pose->setEstimate(g2o::SE3Quat(Eigen::Matrix3d::Identity(),Eigen::Vector3d( 0,0,0 )));
	//将pose加入到图优化器中
	optimizer.addVertex(pose);


	//定义index作为空间点节点的ID索引,初始值为1
	int index = 1;
	//定义一个边的容器	
	vector<EdgeProjectXYZRGBDPoseOnly*> edges;
	for( size_t i=0; i<pts1.size(); i++ )//遍历所有的landmark
	{
		//定义一条边,以pts2中的一个点初始化它的一个顶点
		EdgeProjectXYZRGBDPoseOnly* edge = new EdgeProjectXYZRGBDPoseOnly(Eigen::Vector3d(pts2[i].x, pts2[i].y, pts2[i].z) );
		//设置这条边的ID
		edge->setId(index);
		//设置这条边的位姿顶点
		edge->setVertex( 0, dynamic_cast<g2o::VertexSE3Expmap*> (pose) );
		//设置这条边的的初始观测值为pts1中对应的点
		edge->setMeasurement( Eigen::Vector3d(pts1[i].x, pts1[i].y, pts1[i].z) );
		edge->setInformation( Eigen::Matrix3d::Identity()*1e4 );
		//将这条边加入到图优化器中
		optimizer.addEdge(edge);	
		index++;
		edges.push_back(edge);
	}


	
	chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
	//设置详细内容
	optimizer.setVerbose(true);
	//优化器初始化
	optimizer.initializeOptimization();
	//开始迭代优化
	optimizer.optimize(10);
	chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
	chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>> ( t2-t1 );
	cout<<"optimization costs time: "<<time_used.count() <<" seconds."<<endl;
	
	cout<<endl<<"after optimization:"<<endl;
	cout<<"T="<<endl<<Eigen::Isometry3d(pose->estimate()).matrix()<<endl;
	

}





void pose_estimation_3d3d(const vector<Point3f>& pts1, const vector<Point3f>& pts2, Mat& R, Mat& t)
{
	Point3f p1,p2;//centroids of the two point sets
	int N = pts1.size();
	for(int i=0; i<N; i++)
	{
		p1 += pts1[i];
		p2 += pts2[i];
	}
	p1 /= N;
	p2 /= N;

	vector<Point3f> q1(N),q2(N);//points with the centroids removed
	for(int i=0; i<N; i++)
	{
		q1[i] = pts1[i] - p1;
		q2[i] = pts2[i]	- p2;
	}

	//Compute W = sum over i of q1_i * q2_i^T
	Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
	for(int i=0; i<N; i++)
	{
		W += Eigen::Vector3d(q1[i].x, q1[i].y, q1[i].z) * Eigen::Vector3d(q2[i].x, q2[i].y, q2[i].z).transpose();
	}
	
	//SVD of W
	Eigen::JacobiSVD<Eigen::Matrix3d> svd(W, Eigen::ComputeFullU|Eigen::ComputeFullV);
	Eigen::Matrix3d U = svd.matrixU();
	Eigen::Matrix3d V = svd.matrixV();
	cout<<"U="<<U<<endl;
	cout<<"V="<<V<<endl;

	Eigen::Matrix3d R_ = U*(V.transpose());
	Eigen::Vector3d t_ = Eigen::Vector3d(p1.x, p1.y, p1.z) - R_ * Eigen::Vector3d(p2.x, p2.y, p2.z);

	//Convert back to OpenCV Mat
	R =(Mat_<double>(3,3)<<
		R_(0,0),	R_(0,1),	R_(0,2),
		R_(1,0),	R_(1,1),	R_(1,2),
		R_(2,0),	R_(2,1),	R_(2,2));
	t = (Mat_<double>(3,1)<<t_(0,0),t_(1,0),t_(2,0));

}



int main(int argc,char **argv)
{
	if(argc != 5)
	{
		cout<<"usage: pose_estimation_3d2d img1 img2 depth1 depth2"<<endl;
		return 1;
	}

	//读取图像
	Mat img_1 = imread(argv[1],CV_LOAD_IMAGE_COLOR);
	Mat img_2 = imread(argv[2],CV_LOAD_IMAGE_COLOR);


	std::vector<KeyPoint> keypoints_1,keypoints_2;
	vector<DMatch> matches;
	find_feature_matches(img_1,img_2,keypoints_1,keypoints_2,matches);
	cout<<"一共找到了"<<matches.size()<<"组匹配点"<<endl;

	//Build the 3D-3D point pairs from the two depth images
	Mat depth1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);//the depth images are 16-bit unsigned, single-channel
	Mat depth2 = imread(argv[4], CV_LOAD_IMAGE_UNCHANGED);
	Mat K = (Mat_<double>(3,3) <<520.9,0,325.1,0,521.0,249.7,0,0,1);
	vector<Point3f> pts1,pts2;

	for(DMatch m:matches)
	{
		unsigned short d1 = depth1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
		unsigned short d2 = depth2.ptr<unsigned short>(int(keypoints_2[m.trainIdx].pt.y))[int(keypoints_2[m.trainIdx].pt.x)];
		if(d1 == 0 || d2 == 0)//bad depth
			continue;
		float dd1 = float(d1)/5000.0;
		float dd2 = float(d2)/5000.0;
		Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
		Point2d p2 = pixel2cam(keypoints_2[m.trainIdx].pt, K);
		pts1.push_back(Point3f(p1.x*dd1, p1.y*dd1, dd1));
		pts2.push_back(Point3f(p2.x*dd2, p2.y*dd2, dd2));
	}

	cout<<"3d-3d pairs:"<<pts1.size()<<endl;
	
	//Solve by SVD
	Mat R,t;
	pose_estimation_3d3d ( pts1, pts2, R, t );

	cout <<"ICP via SVD reault:"<<endl;
	cout <<"R="<<endl<<R<<endl;
	cout <<"t="<<endl<<t<<endl;
	cout <<"R_inv="<<endl<<R.t()<<endl;
	cout <<"t_inv="<<endl<<-R.t()*t<<endl;

	cout<<"calling bundle adjustment"<<endl;
	bundleAdjustment ( pts1, pts2, R, t );

	// verify p1 = R*p2 + t
	for ( int i=0; i<5; i++ )
	{
		cout<<"p1 = "<<pts1[i]<<endl;
		cout<<"p2 = "<<pts2[i]<<endl;
		cout<<"(R*p2+t) = "<<
		R * (Mat_<double>(3,1)<<pts2[i].x, pts2[i].y, pts2[i].z) + t<<endl;
		cout<<endl;
    }
	return 0;
}
 


CMakeLists.txt

cmake_minimum_required( VERSION 2.8 )
project( vo1 )

set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )

# Add the cmake module path so that g2o can be found
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

find_package( OpenCV REQUIRED )
# find_package( OpenCV REQUIRED ) # use this if in OpenCV2 
find_package( G2O REQUIRED )
find_package( CSparse REQUIRED )

include_directories( 
    ${OpenCV_INCLUDE_DIRS} 
    ${G2O_INCLUDE_DIRS}
    ${CSPARSE_INCLUDE_DIR}
    "/usr/include/eigen3/"
)


add_executable( pose_estimation_3d3d pose_estimation_3d3d.cpp )
target_link_libraries( pose_estimation_3d3d 
   ${OpenCV_LIBS}
   g2o_core g2o_stuff g2o_types_sba g2o_csparse_extension 
   ${CSPARSE_LIBRARY}
)

Run result:
[screenshots of the SVD result and the BA refinement omitted]
