Introduction
- LocalMapping.cc mainly does two things: (1) optimization of local keyframes and map points, and (2) IMU initialization.
- According to the ORB-SLAM3 paper and code, IMU initialization has three stages:
  - Vision-only MAP estimation. After the system has run for about 2 s and the map holds roughly 10 keyframes and a few hundred map points, a vision-only local BA optimizes the keyframe poses and map points.
  - Inertial-only MAP estimation. Estimates the IMU velocities, the biases (assumed constant during initialization), the gravity-direction rotation Rwg (rotating the negative z-axis of the world frame, defined by the first camera frame, into alignment with gravity), and the scale s. IMU residuals are built from preintegration and the states are estimated by graph optimization.
  - Visual-inertial joint MAP estimation. Combines the visual and IMU residuals in a global optimization over camera poses, IMU velocities, biases, and map points.
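The gravity-direction rotation in the inertial-only step can be illustrated with a minimal sketch. Assuming an averaged accelerometer direction `dirG` estimated in the initial world frame, the helpers below (hypothetical, not the actual `InitializeIMU` code) build the axis-angle rotation that aligns `dirG` with gI = (0, 0, -1) via the Rodrigues formula:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Rodrigues formula: rotation matrix for the axis-angle vector v.
Mat3 ExpSO3(const Vec3& v) {
    double th = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    Mat3 R{{{1,0,0},{0,1,0},{0,0,1}}};
    if (th < 1e-12) return R;                        // zero rotation
    Vec3 a{v[0]/th, v[1]/th, v[2]/th};               // unit axis
    double c = std::cos(th), s = std::sin(th);
    Mat3 ax{{{0,-a[2],a[1]},{a[2],0,-a[0]},{-a[1],a[0],0}}}; // skew([a])
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)                  // c*I + (1-c)*a*a^T + s*[a]x
            R[i][j] = c*(i == j) + (1 - c)*a[i]*a[j] + s*ax[i][j];
    return R;
}

Vec3 Apply(const Mat3& R, const Vec3& x) {
    return {R[0][0]*x[0]+R[0][1]*x[1]+R[0][2]*x[2],
            R[1][0]*x[0]+R[1][1]*x[1]+R[1][2]*x[2],
            R[2][0]*x[0]+R[2][1]*x[1]+R[2][2]*x[2]};
}

// Rotation taking the estimated gravity direction dirG onto gI = (0,0,-1):
// axis = dirG x gI, angle = acos(dirG . gI).
Mat3 GravityAlign(Vec3 dirG) {
    double n = std::sqrt(dirG[0]*dirG[0] + dirG[1]*dirG[1] + dirG[2]*dirG[2]);
    for (double& x : dirG) x /= n;
    const Vec3 gI{0, 0, -1};
    Vec3 axis{dirG[1]*gI[2] - dirG[2]*gI[1],
              dirG[2]*gI[0] - dirG[0]*gI[2],
              dirG[0]*gI[1] - dirG[1]*gI[0]};
    double na = std::sqrt(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    if (na < 1e-12) return ExpSO3({0, 0, 0});        // already aligned
    double ang = std::acos(dirG[0]*gI[0] + dirG[1]*gI[1] + dirG[2]*gI[2]);
    return ExpSO3({axis[0]/na*ang, axis[1]/na*ang, axis[2]/na*ang});
}
```

Note this only demonstrates the axis-angle construction; the sign convention of Rwg in the actual code (world-to-gravity vs. gravity-to-world) should be checked against the source.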
Some ORB concepts
Keyframes:
- A keyframe is a representative frame chosen among several consecutive ordinary frames.
- Purpose:
  - reduce information redundancy among nearby keyframes
  - since SLAM projects the depth of ordinary frames onto keyframes, a keyframe is to some extent the filtered and optimized result of ordinary frames, keeping invalid or erroneous information out of the optimization
  - a compromise between back-end compute and accuracy, improving the use of computing resources
- Selection criteria:
  - intrinsic quality: a sharp image, enough feature points, evenly distributed features, etc.
  - good connectivity: some covisibility with other keyframes but not too much overlap, i.e. constraints exist while redundancy stays low
- Candidate tests (candidates can then be filtered by the criteria above):
  - whether the time (number of frames) since the last keyframe is suitable
  - whether the distance from the last keyframe is large enough
  - the quality of local-map tracking (number of covisible feature points)
Covisibility Graph
- Covisible keyframes:
  - For any keyframe, the keyframes that observe map points in common with it are its first-level (covisible) keyframes.
  - The first-level keyframes of those first-level keyframes are its second-level keyframes.
  - Note: the degree of covisibility is measured by the number of map points observed in common.
- Covisibility graph:
  - The network built around any keyframe from its covisible keyframes and their links.
  - Nodes are keyframes; if two keyframes share at least 15 observed map points, they are connected by an edge whose weight θ is the number of commonly observed points.
  - Usually only first-level covisibility is used; local map optimization uses two levels of covisibility.
- Purpose:
  - add map-point information for map optimization
  - express how closely keyframes are related
- Typical uses:
  - tracking the local map, enlarging the search range
  - creating new map points between keyframes in local mapping
  - loop detection and relocalization
  - optimization
Spanning Tree
- Built from parent-child keyframe links; commonly used inside the essential graph
Essential Graph
- A graph built over the keyframes.
- Compared with the covisibility graph, the essential graph is sparser:
  - its nodes are still all keyframes, but only the edges between strongly related keyframes are kept, which makes the result more accurate
- Information contained in the graph:
  - spanning-tree connections
  - loop-closure connections
  - connections newly created after a loop closure moves map points
  - strong covisibility connections (at least 100 shared map points)
- Purpose:
  - during loop correction, a similarity transform (Sim3) corrects scale drift and spreads the loop-closure error over the essential graph
- Advantages:
  - after global BA converges, some local regions may still not have converged; the essential graph helps a given local region converge
  - it significantly reduces running time
  - combining the essential graph with local BA can improve accuracy somewhat, at the cost of more time
Overall flow:
- Mark the system as not accepting new keyframes, i.e. the new-keyframe queue must not change while local mapping is processing a keyframe
- Process the new keyframe: build the keyframe and its attributes (compute its BoW vector, attach its map points, and update each point's attributes: observing keyframes, mean viewing direction, best descriptor) and insert it into the global map
- Check whether the current keyframe's recently added map points are good (criteria: the point is not flagged bad; its found ratio exceeds 0.25; the gap between the id of the keyframe that first observed it and the current keyframe id; the number of times it has been observed)
- Detect feature matches under the epipolar constraint and create map points in the global map. Triangulate between the current keyframe and its covisible keyframes (match the current keyframe against each neighbor, triangulate a 3D point per match, then verify it: depth in both frames, reprojection error in both frames, scale consistency). If a match passes, create the map point with its attributes (a. observing keyframes, b. descriptor, c. mean viewing direction and depth range), attach it to the keyframes, and add it to the global map.
- Fuse data. After each batch of keyframes is processed (with no new keyframe pending, a batch is one keyframe), fuse the current keyframe (the last of the batch) with its covisible keyframes: fuse its map points with those of its two-level neighborhood (its covisible keyframes and the covisible keyframes of all of those).
- Hand the keyframe to loop closing
- Mark the system as accepting new keyframes again, i.e. the new-keyframe queue may change
Run
Code flow:
- while(1) loop
  - SetAcceptKeyFrames(false): stop accepting keyframes, controlled by the member variable mbAcceptKeyFrames=false
  - If there is a new keyframe and the IMU is not flagged bad:
    - ProcessNewKeyFrame(): process the new keyframe, build it and its attributes, and insert it into the global map
      - take the first frame from the new-keyframe queue as the current keyframe, then pop it from the queue
      - compute the current keyframe's bag of words, computeBoW
      - associate MapPoints with the new keyframe and update normals and descriptors
        - for each map point matched in the current frame: if the current keyframe is not yet among its observations, add the keyframe observation and update the normal and descriptor
        - otherwise, push the point into mlpRecentAddedMapPoints
      - update the current keyframe's graph connections, UpdateConnections
      - insert the keyframe into the map, mpAtlas->AddKeyFrame
    - MapPointCulling: cull the points in mlpRecentAddedMapPoints. A point is bad if:
      - the point itself is flagged bad
      - its found ratio is below 0.25
      - at least 2 keyframes have passed since it was first observed and its observation count is at most the threshold cnThObs
    - CreateNewMapPoints: create new map points by triangulating the current keyframe against its covisible keyframes; the results are pushed into mlpRecentAddedMapPoints
      - Step 1: among the current keyframe's covisible keyframes, find the nn with the highest covisibility, vpNeighKFs
        - retrieve the neighbor keyframes vpNeighKFs from the covisibility graph, 10 of them (20 for monocular), sorted by the number of shared map points in descending order
        - if mbInertial==true:
          - while vpNeighKFs holds at most nn keyframes, a previous keyframe exists, and count < nn:
            - look up the previous keyframe in vpNeighKFs; if it is not found (iterator at end()), append it to the neighbor set
            - step back to that previous keyframe
        - construct the ORBmatcher with (0.6, false)
        - get the current keyframe's world pose and convert it into a 4x4 matrix
      - Step 2: iterate over the neighbor keyframes vpNeighKFs
        - get each neighbor keyframe's camera center in world coordinates and compute the baseline vector (camera displacement between the two keyframes) and its length
        - Step 2.1: check that the baseline is long enough:
          - if the sensor is not monocular: for stereo, do not generate 3D points when the baseline is shorter than the keyframe's stereo baseline
          - if the sensor is monocular:
            - medianDepthKF2 = median scene depth of the neighbor keyframe
            - compute the ratio of baseline to scene depth, ratioBaselineDepth
            - if the neighbor is very far away (the ratio is very small), skip it and generate no 3D points
        - Step 2.2: compute the fundamental matrix between the two keyframes from their poses, ComputeF12
          - F12 = K1^(-T) * [t12]x * R12 * K2^(-1)
        - Step 2.3: match features with the search range restricted by the epipolar constraint, SearchForTriangulation
          - params: current keyframe + neighbor keyframe + fundamental matrix + matched index pairs (output) + false + bCoarse
        - Step 2.4: triangulate a 3D point for each match, similar to the Triangulate function
          - 2.4.1 fetch the data of each matched feature pair (for loop)
            - indices idx1 and idx2 in the current and neighbor keyframes
            - the matched keypoints kp1 and kp2 in the current and neighbor keyframes
          - 2.4.2 back-project the matched points to obtain the parallax angle
            - projection:
              - forward projection: maps a space point to an image point
              - back-projection: maps an image point to the set of space points along its viewing ray
            - back-project the keypoints to get xn1 and xn2
            - rotate both rays from the camera frames into the world frame; the cosine of the parallax angle is their dot product divided by the product of their norms
            - cosParallaxStereo is initialized to cosParallaxRays+1, i.e. deliberately larger than any valid cosine
          - 2.4.3 for stereo, compute the parallax angle from the stereo observation
            - with stereo and valid depth: cos(2*atan2(mb/2, depth))
          - 2.4.4 triangulate according to the parallax
            - with large ray parallax (cosine smaller than the stereo parallax cosine) recover the 3D point by linear triangulation; with small parallax, unproject using the stereo depth (requires valid stereo/depth)
          - 2.4.5 check that the generated 3D point lies in front of both cameras, i.e. z > 0; otherwise continue
          - 2.4.6 compute the reprojection errors:
            - reprojection error of the 3D point in the current keyframe, x1, y1, invz1
            - reprojection error of the 3D point in the neighbor keyframe, x2, y2, invz2
            - both errors are gated by chi-square thresholds (assuming one pixel of measurement noise)
          - 2.4.7 check scale consistency
            - in world coordinates, form the vector from each camera to the 3D point, e.g. x3D-Ow1
            - ratioDist is the distance ratio ignoring pyramid scale; ratioOctave is the ratio of pyramid scale factors
            - if ratioDist*ratioFactor < ratioOctave or ratioDist > ratioOctave*ratioFactor, the scale change is inconsistent and the point is discarded
          - 2.4.8 triangulation succeeded: construct a MapPoint and set
            - a. the keyframes observing this MapPoint
            - b. the MapPoint's descriptor
            - c. the MapPoint's mean viewing direction and depth range
          - 2.4.9 push the new point into the culling queue
            - these MapPoints will all be checked later by MapPointCulling
  - Once the last keyframe in the queue has been processed (CheckNewKeyFrames returns !mlNewKeyFrames.empty()):
    - check and fuse MapPoints duplicated between the current keyframe and its (two-level) neighbors, SearchInNeighbors
      - Step 1: get the nn neighbor keyframes with the highest covisibility weights
        - collect the current keyframe's first- and second-level covisible keyframes into vpTargetKFs
        - if there are fewer than 20 neighbors, extend to temporal neighbors: add the previous keyframe, the one before it, and so on until there are 20
      - Step 2: fuse the current keyframe's MapPoints with those of the first-level, second-level, and temporal neighbors
        - take the current keyframe's matched map points vpMapPointMatches and, for each neighbor, project them into pKFi and check for duplicates:
          - 1. if a MapPoint matches a keyframe feature that already has a MapPoint, merge the two (keep the one with more observations)
          - 2. if a MapPoint matches a keyframe feature without a MapPoint, attach the MapPoint to that feature
      - Step 3: fuse the first- and second-level neighbors' MapPoints into the current keyframe
      - Step 4: update the descriptors, depth, and mean viewing directions of the current keyframe's MapPoints
        - among all keyframes that observe pMP, pick the best descriptor
        - update the mean viewing direction and observation distance range
      - Step 5: after updating the MapPoints, update the connections to other keyframes, i.e. the covisibility graph
  - mbAbortBA = false; bool b_doneLBA = false;
  - If the queue is empty and loop closing has not requested LocalMapping to stop:
    - if the atlas holds more than 2 keyframes:
      - if an IMU is used and already initialized:
        - compute the sum of camera-center distances from the current keyframe to the previous one and from the previous one to the one before it
        - if the distance exceeds 5 cm, accumulate the time between the current and previous keyframes
        - if inertial BA2 has not yet succeeded for this map (it normally succeeds):
          - if the accumulated time is under 10 s and the distance under 0.02, report "not enough motion" and reset
        - set the large-BA flag: requires more than 75 matched inliers for monocular, more than 100 otherwise
        - run local inertial BA, Optimizer::LocalInertialBA
      - otherwise run standard local BA, Optimizer::LocalBundleAdjustment
      - if the IMU is not initialized yet, initialize it with InitializeIMU; the monocular visual error is smaller, so the IMU prior weights are larger:
        - monocular: 1e2, 1e10
        - otherwise: 1e2, 1e5
      - cull redundant local keyframes, KeyFrameCulling
      - if the accumulated time is under 100 s and an IMU is used:
        - if the IMU is initialized and tracking is OK:
          - if inertial BA1 has not been done yet and enough time has passed, run VIBA 1 (re-initializing the IMU)
          - else if BA1 is done but BA2 is not, run VIBA 2
        - for monocular with at most 100 keyframes: refine the scale, ScaleRefinement
  - Insert the current keyframe into loop closing, mpLoopCloser->InsertKeyFrame(mpCurrentKeyFrame)
  - Else if Stop() and the IMU is not flagged bad:
    - Stop(): if(mbStopRequested && !mbNotStop) return true, otherwise return false
    - loop until fully stopped, sleeping 3000 us per iteration
    - if a finish request is pending (mbFinishRequested==true), break
  - ResetIfRequested:
    - on a full reset, clear the new-keyframe queue and the recently added map points, and skip BA1/BA2
    - on an active-map reset, clear the new-keyframe queue and the recently added map points
  - SetAcceptKeyFrames(true): accept keyframes again
  - if a finish request is pending (mbFinishRequested==true), break
- SetFinish: set both flags, mbFinished=true and mbStopped=true
Code annotations
void LocalMapping::Run()
{
mbFinished = false;
// loop until mbFinishRequested==true, then break
while(1)
{
// Tracking will see that Local Mapping is busy
// stop accepting keyframes, controlled by the member variable mbAcceptKeyFrames=false
SetAcceptKeyFrames(false);
// Check if there are keyframes in the queue
// if there is a new keyframe in the queue and the IMU is not flagged bad:
if(CheckNewKeyFrames() && !mbBadImu)
{
// BoW conversion and insertion in Map
// process the new keyframe: build it and its attributes and insert it into the global map
ProcessNewKeyFrame();
// Check recent MapPoints
// Map-point culling for mlpRecentAddedMapPoints; a point is bad if:
// - the point itself is flagged bad
// - its found ratio is below 0.25
// - at least 2 keyframes have passed since it was first observed and its observation count is at most the threshold cnThObs
MapPointCulling();
// Triangulate new MapPoints
// triangulate new MapPoints with neighbor keyframes as the camera moves; the results go into mlpRecentAddedMapPoints
CreateNewMapPoints();
mbAbortBA = false;
// the last keyframe in the queue has been processed
if(!CheckNewKeyFrames())
{
// Find more matches in neighbor keyframes and fuse point duplications
// check and fuse MapPoints duplicated between the current keyframe and its (two-level) neighbors
SearchInNeighbors();
}
bool b_doneLBA = false;
int num_FixedKF_BA = 0;
int num_OptKF_BA = 0;
int num_MPs_BA = 0;
int num_edges_BA = 0;
// the queue is empty and loop closing has not requested LocalMapping to stop
if(!CheckNewKeyFrames() && !stopRequested())
{
if(mpAtlas->KeyFramesInMap()>2)
{
// if an IMU is used and already initialized:
if(mbInertial && mpCurrentKeyFrame->GetMap()->isImuInitialized())
{
// sum of camera-center distances: current KF to previous KF, plus previous KF to the one before it
float dist = cv::norm(mpCurrentKeyFrame->mPrevKF->GetCameraCenter() - mpCurrentKeyFrame->GetCameraCenter()) +
cv::norm(mpCurrentKeyFrame->mPrevKF->mPrevKF->GetCameraCenter() - mpCurrentKeyFrame->mPrevKF->GetCameraCenter());
// if the distance exceeds 5 cm, accumulate the time between the current and previous keyframes
if(dist>0.05)
mTinit += mpCurrentKeyFrame->mTimeStamp - mpCurrentKeyFrame->mPrevKF->mTimeStamp;
// if inertial BA2 has not yet succeeded for this map (it normally succeeds)
if(!mpCurrentKeyFrame->GetMap()->GetIniertialBA2())
{ // if the accumulated time is under 10 s and the distance under 0.02, report insufficient motion and reset
if((mTinit<10.f) && (dist<0.02))
{
cout << "Not enough motion for initializing. Reseting..." << endl;
unique_lock<mutex> lock(mMutexReset);
mbResetRequestedActiveMap = true;
mpMapToReset = mpCurrentKeyFrame->GetMap();
mbBadImu = true;
}
}
// large-BA flag: requires more than 75 matched inliers for monocular, more than 100 otherwise
bool bLarge = ((mpTracker->GetMatchesInliers()>75)&&mbMonocular)||((mpTracker->GetMatchesInliers()>100)&&!mbMonocular);
// run local inertial BA
Optimizer::LocalInertialBA(mpCurrentKeyFrame, &mbAbortBA, mpCurrentKeyFrame->GetMap(),num_FixedKF_BA,num_OptKF_BA,num_MPs_BA,num_edges_BA, bLarge, !mpCurrentKeyFrame->GetMap()->GetIniertialBA2());
b_doneLBA = true;
}
else // otherwise run standard local BA
{ // VI-D Local BA
Optimizer::LocalBundleAdjustment(mpCurrentKeyFrame,&mbAbortBA, mpCurrentKeyFrame->GetMap(),num_FixedKF_BA,num_OptKF_BA,num_MPs_BA,num_edges_BA);
b_doneLBA = true;
}
}
// Initialize IMU here
// if the IMU is not initialized yet, initialize it; the monocular visual error is smaller, so the IMU prior weights are larger
if(!mpCurrentKeyFrame->GetMap()->isImuInitialized() && mbInertial)
{
if (mbMonocular)
InitializeIMU(1e2, 1e10, true);
else
InitializeIMU(1e2, 1e5, true);
}
// Check redundant local Keyframes
KeyFrameCulling();
// if the accumulated time is under 100 s and an IMU is used
if ((mTinit<100.0f) && mbInertial)
{
// the IMU is initialized and tracking is OK
if(mpCurrentKeyFrame->GetMap()->isImuInitialized() && mpTracker->mState==Tracking::OK)
{
// if inertial BA1 has not been done yet and enough time has passed, run VIBA 1 (re-initializing the IMU)
if(!mpCurrentKeyFrame->GetMap()->GetIniertialBA1()){
if (mTinit>5.0f)
{
cout << "start VIBA 1" << endl;
mpCurrentKeyFrame->GetMap()->SetIniertialBA1();
if (mbMonocular)
InitializeIMU(1.f, 1e5, true);
else
InitializeIMU(1.f, 1e5, true);
cout << "end VIBA 1" << endl;
}
}
else if(!mpCurrentKeyFrame->GetMap()->GetIniertialBA2()){
if (mTinit>15.0f){
cout << "start VIBA 2" << endl;
mpCurrentKeyFrame->GetMap()->SetIniertialBA2();
if (mbMonocular)
InitializeIMU(0.f, 0.f, true);
else
InitializeIMU(0.f, 0.f, true);
cout << "end VIBA 2" << endl;
}
}
// scale refinement
if (((mpAtlas->KeyFramesInMap())<=100) &&
((mTinit>25.0f && mTinit<25.5f)||
(mTinit>35.0f && mTinit<35.5f)||
(mTinit>45.0f && mTinit<45.5f)||
(mTinit>55.0f && mTinit<55.5f)||
(mTinit>65.0f && mTinit<65.5f)||
(mTinit>75.0f && mTinit<75.5f))){
cout << "start scale ref" << endl;
if (mbMonocular)
ScaleRefinement();
cout << "end scale ref" << endl;
}
}
}
}
// pass the current keyframe to loop closing
mpLoopCloser->InsertKeyFrame(mpCurrentKeyFrame);
}
else if(Stop() && !mbBadImu)
// Stop(): if(mbStopRequested && !mbNotStop) return true, otherwise false
{
// Safe area to stop
// loop until fully stopped, sleeping 3000 us per iteration
while(isStopped() && !CheckFinish())
{
usleep(3000);
}
// if a finish request is pending (mbFinishRequested==true), break
if(CheckFinish())
break;
}
// if a reset was requested:
// - on a full reset, clear the new-keyframe queue and the recently added map points, and skip BA1/BA2
// - on an active-map reset, clear the new-keyframe queue and the recently added map points
ResetIfRequested();
// Tracking will see that Local Mapping is busy
SetAcceptKeyFrames(true); // accept keyframes again
// if a finish request is pending (mbFinishRequested==true), break
if(CheckFinish())
break;
usleep(3000);
}
// set both flags: mbFinished=true, mbStopped=true
SetFinish();
}
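The timing gates in the loop above (VIBA 1 after 5 s, VIBA 2 after 15 s, scale refinement in half-second windows up to 100 s) can be distilled into a small function. `InertialStage` is a hypothetical illustration only; the real code additionally checks the tracking state, map size, and monocular flag:

```cpp
#include <cassert>
#include <string>

// Distilled gating logic from Run(): which inertial refinement step fires
// for a given accumulated well-excited time mTinit.
std::string InertialStage(float tinit, bool ba1Done, bool ba2Done) {
    if (tinit >= 100.0f) return "none";              // past the refinement window
    if (!ba1Done) return tinit > 5.0f  ? "VIBA1" : "wait";
    if (!ba2Done) return tinit > 15.0f ? "VIBA2" : "wait";
    // scale refinement fires in half-second windows at 25 s, 35 s, ..., 75 s
    for (float t0 = 25.0f; t0 <= 75.0f; t0 += 10.0f)
        if (tinit > t0 && tinit < t0 + 0.5f) return "ScaleRefinement";
    return "none";
}
```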
Other functions
ProcessNewKeyFrame: process a new keyframe
- build the keyframe and its attributes and insert it into the global map
void LocalMapping::ProcessNewKeyFrame()
{
/// 1. Take the first frame from the new-keyframe queue as the current keyframe, then pop it from the queue
{
unique_lock<mutex> lock(mMutexNewKFs);
mpCurrentKeyFrame = mlNewKeyFrames.front();
mlNewKeyFrames.pop_front();
}
// Compute Bags of Words structures
// compute the current keyframe's bag of words, ComputeBoW
mpCurrentKeyFrame->ComputeBoW();
/// 2. Associate MapPoints with the new keyframe and update normals and descriptors
// Associate MapPoints to the new keyframe and update normal and descriptor
// fetch the map points matched in the current frame
const vector<MapPoint*> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
// iterate over the map points
for(size_t i=0; i<vpMapPointMatches.size(); i++)
{
MapPoint* pMP = vpMapPointMatches[i];
if(pMP)
{ // the map point is non-null and not bad
if(!pMP->isBad())
{
// if the current keyframe is not yet among the point's observations, add the observation and update the normal and descriptor
// otherwise, push the point into mlpRecentAddedMapPoints
if(!pMP->IsInKeyFrame(mpCurrentKeyFrame))
{
pMP->AddObservation(mpCurrentKeyFrame, i);
pMP->UpdateNormalAndDepth();
pMP->ComputeDistinctiveDescriptors();
}
else // this can only happen for new stereo points inserted by the Tracking
{
mlpRecentAddedMapPoints.push_back(pMP);
}
}
}
}
// Update links in the Covisibility Graph
// update the current keyframe's graph connections, UpdateConnections
mpCurrentKeyFrame->UpdateConnections();
// Insert Keyframe in Map
// insert the keyframe into the map, mpAtlas->AddKeyFrame
mpAtlas->AddKeyFrame(mpCurrentKeyFrame);
}
MapPointCulling: cull recently added map points
- operates on mlpRecentAddedMapPoints
void LocalMapping::MapPointCulling()
{
// Check Recent Added MapPoints
list<MapPoint*>::iterator lit = mlpRecentAddedMapPoints.begin();
const unsigned long int nCurrentKFid = mpCurrentKeyFrame->mnId;
int nThObs;
if(mbMonocular)
nThObs = 2;
else
nThObs = 3;
const int cnThObs = nThObs;
int borrar = mlpRecentAddedMapPoints.size();
while(lit!=mlpRecentAddedMapPoints.end())
{
MapPoint* pMP = *lit;
if(pMP->isBad()) // the point itself is bad
lit = mlpRecentAddedMapPoints.erase(lit);
else if(pMP->GetFoundRatio()<0.25f) // found ratio below 0.25
{
pMP->SetBadFlag();
lit = mlpRecentAddedMapPoints.erase(lit);
}
// at least 2 keyframes since the first observation and the observation count is at most the threshold cnThObs
else if(((int)nCurrentKFid-(int)pMP->mnFirstKFid)>=2 && pMP->Observations()<=cnThObs)
{
pMP->SetBadFlag();
lit = mlpRecentAddedMapPoints.erase(lit);
}
else if(((int)nCurrentKFid-(int)pMP->mnFirstKFid)>=3)
lit = mlpRecentAddedMapPoints.erase(lit);
else
{
lit++;
borrar--;
}
}
//cout << "erase MP: " << borrar << endl;
}
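The culling conditions above can be distilled into a small predicate. `ShouldCull` and the `MP` record are hypothetical helpers mirroring only the fields the function inspects; a point older than 3 keyframes that passed all tests merely leaves the recent list without being flagged bad:

```cpp
#include <cassert>

// Toy map-point record mirroring the fields MapPointCulling() looks at.
struct MP {
    bool bad;          // SetBadFlag() already called
    float foundRatio;  // nFound / nVisible
    int firstKFid;     // id of the keyframe that first observed the point
    int nObs;          // number of keyframe observations
};

// Distilled culling decision (monocular: cnThObs = 2): returns true when
// the point should be flagged bad and erased from the recent list.
bool ShouldCull(const MP& p, int currentKFid, int cnThObs = 2) {
    if (p.bad) return true;
    if (p.foundRatio < 0.25f) return true;
    if (currentKFid - p.firstKFid >= 2 && p.nObs <= cnThObs) return true;
    return false;
}
```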
CreateNewMapPoints: create new map points
void LocalMapping::CreateNewMapPoints()
{
// Retrieve neighbor keyframes in covisibility graph
// 20 neighbor keyframes for monocular, otherwise 10
int nn = 10;
// For stereo inertial case
if(mbMonocular)
nn=20;
// Step 1: among the current keyframe's covisible keyframes, find the nn with the highest covisibility, vpNeighKFs
// retrieve the neighbor keyframes vpNeighKFs from the covisibility graph, 10 (20 for monocular), sorted by shared map points in descending order
vector<KeyFrame*> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(nn);
if (mbInertial) // if inertial mode is enabled
{
KeyFrame* pKF = mpCurrentKeyFrame;
int count=0;
// while vpNeighKFs holds at most nn keyframes, a previous keyframe exists, and count < nn:
while((vpNeighKFs.size()<=nn)&&(pKF->mPrevKF)&&(count++<nn))
{
vector<KeyFrame*>::iterator it = std::find(vpNeighKFs.begin(), vpNeighKFs.end(), pKF->mPrevKF);
// look up the previous keyframe in vpNeighKFs; if it is not found, append it to the neighbor set
if(it==vpNeighKFs.end())
vpNeighKFs.push_back(pKF->mPrevKF);
pKF = pKF->mPrevKF; // step back to the previous keyframe
}
}
// construct the ORBmatcher with (0.6, false)
float th = 0.6f;
ORBmatcher matcher(th,false);
// get the current keyframe's world pose and convert it into a 4x4 matrix
auto Rcw1 = mpCurrentKeyFrame->GetRotation_();
auto Rwc1 = Rcw1.t();
auto tcw1 = mpCurrentKeyFrame->GetTranslation_();
cv::Matx44f Tcw1{Rcw1(0,0),Rcw1(0,1),Rcw1(0,2),tcw1(0),
Rcw1(1,0),Rcw1(1,1),Rcw1(1,2),tcw1(1),
Rcw1(2,0),Rcw1(2,1),Rcw1(2,2),tcw1(2),
0.f,0.f,0.f,1.f};
auto Ow1 = mpCurrentKeyFrame->GetCameraCenter_();
const float &fx1 = mpCurrentKeyFrame->fx;
const float &fy1 = mpCurrentKeyFrame->fy;
const float &cx1 = mpCurrentKeyFrame->cx;
const float &cy1 = mpCurrentKeyFrame->cy;
const float &invfx1 = mpCurrentKeyFrame->invfx;
const float &invfy1 = mpCurrentKeyFrame->invfy;
const float ratioFactor = 1.5f*mpCurrentKeyFrame->mfScaleFactor;
// Step 2: iterate over the neighbor keyframes vpNeighKFs
// Search matches with epipolar restriction and triangulate
for(size_t i=0; i<vpNeighKFs.size(); i++)
{
if(i>0 && CheckNewKeyFrames())
return;
// get the neighbor keyframe's camera center in world coordinates, the baseline vector (camera displacement), and its length
KeyFrame* pKF2 = vpNeighKFs[i];
GeometricCamera* pCamera1 = mpCurrentKeyFrame->mpCamera, *pCamera2 = pKF2->mpCamera;
// Check first that baseline is not too short
auto Ow2 = pKF2->GetCameraCenter_();
auto vBaseline = Ow2-Ow1;
const float baseline = cv::norm(vBaseline);
// Step 2.1: check that the baseline is long enough:
if(!mbMonocular)
{ // for stereo, do not generate 3D points when the baseline is shorter than the keyframe's stereo baseline
if(baseline<pKF2->mb)
continue;
}
else
{ // medianDepthKF2 = median scene depth of the neighbor keyframe
const float medianDepthKF2 = pKF2->ComputeSceneMedianDepth(2);
// ratio of baseline to scene depth, ratioBaselineDepth
const float ratioBaselineDepth = baseline/medianDepthKF2;
// if the neighbor is very far away (the ratio is very small), skip it and generate no 3D points
if(ratioBaselineDepth<0.01)
continue;
}
// Compute Fundamental Matrix
// Step 2.2: compute the fundamental matrix between the two keyframes from their poses
// F12 = K1^(-T) * [t12]x * R12 * K2^(-1)
auto F12 = ComputeF12_(mpCurrentKeyFrame,pKF2);
// Search matches that fullfil epipolar constraint
// Step 2.3: match features with the search range restricted by the epipolar constraint
vector<pair<size_t,size_t> > vMatchedIndices;
bool bCoarse = mbInertial &&
((!mpCurrentKeyFrame->GetMap()->GetIniertialBA2() && mpCurrentKeyFrame->GetMap()->GetIniertialBA1())||
mpTracker->mState==Tracking::RECENTLY_LOST);
// params: current keyframe + neighbor keyframe + fundamental matrix + matched index pairs (output) + false + bCoarse
matcher.SearchForTriangulation_(mpCurrentKeyFrame,pKF2,F12,vMatchedIndices,false,bCoarse);
// Step 2.4: triangulate a 3D point for each match, similar to the Triangulate function
auto Rcw2 = pKF2->GetRotation_();
auto Rwc2 = Rcw2.t();
auto tcw2 = pKF2->GetTranslation_();
cv::Matx44f Tcw2{Rcw2(0,0),Rcw2(0,1),Rcw2(0,2),tcw2(0),
Rcw2(1,0),Rcw2(1,1),Rcw2(1,2),tcw2(1),
Rcw2(2,0),Rcw2(2,1),Rcw2(2,2),tcw2(2),
0.f,0.f,0.f,1.f};
const float &fx2 = pKF2->fx;
const float &fy2 = pKF2->fy;
const float &cx2 = pKF2->cx;
const float &cy2 = pKF2->cy;
const float &invfx2 = pKF2->invfx;
const float &invfy2 = pKF2->invfy;
// Triangulate each match
const int nmatches = vMatchedIndices.size();
for(int ikp=0; ikp<nmatches; ikp++)
{
/// 2.4.1 fetch the data of each matched feature pair (for loop)
// indices idx1 and idx2 in the current and neighbor keyframes
// the matched keypoints kp1 and kp2 in the current and neighbor keyframes
const int &idx1 = vMatchedIndices[ikp].first;
const int &idx2 = vMatchedIndices[ikp].second;
const cv::KeyPoint &kp1 = (mpCurrentKeyFrame -> NLeft == -1) ? mpCurrentKeyFrame->mvKeysUn[idx1]
: (idx1 < mpCurrentKeyFrame -> NLeft) ? mpCurrentKeyFrame -> mvKeys[idx1]
: mpCurrentKeyFrame -> mvKeysRight[idx1 - mpCurrentKeyFrame -> NLeft];
const float kp1_ur=mpCurrentKeyFrame->mvuRight[idx1];
bool bStereo1 = (!mpCurrentKeyFrame->mpCamera2 && kp1_ur>=0);
const bool bRight1 = (mpCurrentKeyFrame -> NLeft == -1 || idx1 < mpCurrentKeyFrame -> NLeft) ? false
: true;
const cv::KeyPoint &kp2 = (pKF2 -> NLeft == -1) ? pKF2->mvKeysUn[idx2]
: (idx2 < pKF2 -> NLeft) ? pKF2 -> mvKeys[idx2]
: pKF2 -> mvKeysRight[idx2 - pKF2 -> NLeft];
const float kp2_ur = pKF2->mvuRight[idx2];
bool bStereo2 = (!pKF2->mpCamera2 && kp2_ur>=0);
const bool bRight2 = (pKF2 -> NLeft == -1 || idx2 < pKF2 -> NLeft) ? false
: true;
if(mpCurrentKeyFrame->mpCamera2 && pKF2->mpCamera2){
if(bRight1 && bRight2){
Rcw1 = mpCurrentKeyFrame->GetRightRotation_();
Rwc1 = Rcw1.t();
tcw1 = mpCurrentKeyFrame->GetRightTranslation_();
Tcw1 = mpCurrentKeyFrame->GetRightPose_();
Ow1 = mpCurrentKeyFrame->GetRightCameraCenter_();
Rcw2 = pKF2->GetRightRotation_();
Rwc2 = Rcw2.t();
tcw2 = pKF2->GetRightTranslation_();
Tcw2 = pKF2->GetRightPose_();
Ow2 = pKF2->GetRightCameraCenter_();
pCamera1 = mpCurrentKeyFrame->mpCamera2;
pCamera2 = pKF2->mpCamera2;
}
else if(bRight1 && !bRight2){
Rcw1 = mpCurrentKeyFrame->GetRightRotation_();
Rwc1 = Rcw1.t();
tcw1 = mpCurrentKeyFrame->GetRightTranslation_();
Tcw1 = mpCurrentKeyFrame->GetRightPose_();
Ow1 = mpCurrentKeyFrame->GetRightCameraCenter_();
Rcw2 = pKF2->GetRotation_();
Rwc2 = Rcw2.t();
tcw2 = pKF2->GetTranslation_();
Tcw2 = pKF2->GetPose_();
Ow2 = pKF2->GetCameraCenter_();
pCamera1 = mpCurrentKeyFrame->mpCamera2;
pCamera2 = pKF2->mpCamera;
}
else if(!bRight1 && bRight2){
Rcw1 = mpCurrentKeyFrame->GetRotation_();
Rwc1 = Rcw1.t();
tcw1 = mpCurrentKeyFrame->GetTranslation_();
Tcw1 = mpCurrentKeyFrame->GetPose_();
Ow1 = mpCurrentKeyFrame->GetCameraCenter_();
Rcw2 = pKF2->GetRightRotation_();
Rwc2 = Rcw2.t();
tcw2 = pKF2->GetRightTranslation_();
Tcw2 = pKF2->GetRightPose_();
Ow2 = pKF2->GetRightCameraCenter_();
pCamera1 = mpCurrentKeyFrame->mpCamera;
pCamera2 = pKF2->mpCamera2;
}
else{
Rcw1 = mpCurrentKeyFrame->GetRotation_();
Rwc1 = Rcw1.t();
tcw1 = mpCurrentKeyFrame->GetTranslation_();
Tcw1 = mpCurrentKeyFrame->GetPose_();
Ow1 = mpCurrentKeyFrame->GetCameraCenter_();
Rcw2 = pKF2->GetRotation_();
Rwc2 = Rcw2.t();
tcw2 = pKF2->GetTranslation_();
Tcw2 = pKF2->GetPose_();
Ow2 = pKF2->GetCameraCenter_();
pCamera1 = mpCurrentKeyFrame->mpCamera;
pCamera2 = pKF2->mpCamera;
}
}
/// 2.4.2 back-project the matched points to obtain the parallax angle
// Check parallax between rays
// back-project the keypoints to get xn1 and xn2
auto xn1 = pCamera1->unprojectMat_(kp1.pt);
auto xn2 = pCamera2->unprojectMat_(kp2.pt);
// rotate the rays into the world frame; the parallax cosine is their normalized dot product
auto ray1 = Rwc1*xn1;
auto ray2 = Rwc2*xn2;
const float cosParallaxRays = ray1.dot(ray2)/(cv::norm(ray1)*cv::norm(ray2));
// adding 1 initializes the stereo parallax cosine to a value larger than any real cosine
float cosParallaxStereo = cosParallaxRays+1;
float cosParallaxStereo1 = cosParallaxStereo;
float cosParallaxStereo2 = cosParallaxStereo;
/// 2.4.3 for stereo, compute the parallax angle from the stereo observation: cos(2*atan2(mb/2, depth)) with valid depth
if(bStereo1)
cosParallaxStereo1 = cos(2*atan2(mpCurrentKeyFrame->mb/2,mpCurrentKeyFrame->mvDepth[idx1]));
else if(bStereo2)
cosParallaxStereo2 = cos(2*atan2(pKF2->mb/2,pKF2->mvDepth[idx2]));
cosParallaxStereo = min(cosParallaxStereo1,cosParallaxStereo2);
cv::Matx31f x3D;
bool bEstimated = false;
/// 2.4.4 triangulate according to the parallax
// large ray parallax: linear triangulation; small parallax: unproject with stereo depth (stereo/depth must be valid)
if(cosParallaxRays<cosParallaxStereo && cosParallaxRays>0 && (bStereo1 || bStereo2 ||
(cosParallaxRays<0.9998 && mbInertial) || (cosParallaxRays<0.9998 && !mbInertial)))
{
// Linear Triangulation Method
cv::Matx14f A_r0 = xn1(0) * Tcw1.row(2) - Tcw1.row(0);
cv::Matx14f A_r1 = xn1(1) * Tcw1.row(2) - Tcw1.row(1);
cv::Matx14f A_r2 = xn2(0) * Tcw2.row(2) - Tcw2.row(0);
cv::Matx14f A_r3 = xn2(1) * Tcw2.row(2) - Tcw2.row(1);
cv::Matx44f A{A_r0(0), A_r0(1), A_r0(2), A_r0(3),
A_r1(0), A_r1(1), A_r1(2), A_r1(3),
A_r2(0), A_r2(1), A_r2(2), A_r2(3),
A_r3(0), A_r3(1), A_r3(2), A_r3(3)};
cv::Matx44f u,vt;
cv::Matx41f w;
cv::SVD::compute(A,w,u,vt,cv::SVD::MODIFY_A| cv::SVD::FULL_UV);
cv::Matx41f x3D_h = vt.row(3).t();
if(x3D_h(3)==0)
continue;
// Euclidean coordinates
x3D = x3D_h.get_minor<3,1>(0,0) / x3D_h(3);
bEstimated = true;
}
else if(bStereo1 && cosParallaxStereo1<cosParallaxStereo2)
{
x3D = mpCurrentKeyFrame->UnprojectStereo_(idx1);
bEstimated = true;
}
else if(bStereo2 && cosParallaxStereo2<cosParallaxStereo1)
{
x3D = pKF2->UnprojectStereo_(idx2);
bEstimated = true;
}
else
{
continue; //No stereo and very low parallax
}
cv::Matx13f x3Dt = x3D.t();
if(!bEstimated) continue;
//Check triangulation in front of cameras
/// 2.4.5 check that the 3D point lies in front of both cameras, i.e. z > 0; otherwise continue
float z1 = Rcw1.row(2).dot(x3Dt)+tcw1(2);
if(z1<=0)
continue;
float z2 = Rcw2.row(2).dot(x3Dt)+tcw2(2);
if(z2<=0)
continue;
//Check reprojection error in first keyframe
/// 2.4.6 compute the reprojection errors:
// reprojection error of the 3D point in the current keyframe, x1, y1, invz1
const float &sigmaSquare1 = mpCurrentKeyFrame->mvLevelSigma2[kp1.octave];
const float x1 = Rcw1.row(0).dot(x3Dt)+tcw1(0);
const float y1 = Rcw1.row(1).dot(x3Dt)+tcw1(1);
const float invz1 = 1.0/z1;
// chi-square threshold assuming one pixel of measurement noise
if(!bStereo1)
{
cv::Point2f uv1 = pCamera1->project(cv::Point3f(x1,y1,z1));
float errX1 = uv1.x - kp1.pt.x;
float errY1 = uv1.y - kp1.pt.y;
if((errX1*errX1+errY1*errY1)>5.991*sigmaSquare1)
continue;
}
else
{
float u1 = fx1*x1*invz1+cx1;
float u1_r = u1 - mpCurrentKeyFrame->mbf*invz1;
float v1 = fy1*y1*invz1+cy1;
float errX1 = u1 - kp1.pt.x;
float errY1 = v1 - kp1.pt.y;
float errX1_r = u1_r - kp1_ur;
if((errX1*errX1+errY1*errY1+errX1_r*errX1_r)>7.8*sigmaSquare1)
continue;
}
//Check reprojection error in second keyframe
// reprojection error of the 3D point in the neighbor keyframe, x2, y2, invz2
const float sigmaSquare2 = pKF2->mvLevelSigma2[kp2.octave];
const float x2 = Rcw2.row(0).dot(x3Dt)+tcw2(0);
const float y2 = Rcw2.row(1).dot(x3Dt)+tcw2(1);
const float invz2 = 1.0/z2;
// chi-square threshold assuming one pixel of measurement noise
if(!bStereo2)
{
cv::Point2f uv2 = pCamera2->project(cv::Point3f(x2,y2,z2));
float errX2 = uv2.x - kp2.pt.x;
float errY2 = uv2.y - kp2.pt.y;
if((errX2*errX2+errY2*errY2)>5.991*sigmaSquare2)
continue;
}
else
{
float u2 = fx2*x2*invz2+cx2;
float u2_r = u2 - mpCurrentKeyFrame->mbf*invz2;
float v2 = fy2*y2*invz2+cy2;
float errX2 = u2 - kp2.pt.x;
float errY2 = v2 - kp2.pt.y;
float errX2_r = u2_r - kp2_ur;
if((errX2*errX2+errY2*errY2+errX2_r*errX2_r)>7.8*sigmaSquare2)
continue;
}
//Check scale consistency
/// 2.4.7 check scale consistency
// in world coordinates, the vector from camera to 3D point, x3D-Ow1
auto normal1 = x3D-Ow1;
float dist1 = cv::norm(normal1);
auto normal2 = x3D-Ow2;
float dist2 = cv::norm(normal2);
if(dist1==0 || dist2==0)
continue;
if(mbFarPoints && (dist1>=mThFarPoints||dist2>=mThFarPoints))
continue;
// ratioDist is the distance ratio ignoring pyramid scale
const float ratioDist = dist2/dist1;
// ratioOctave is the ratio of pyramid scale factors
const float ratioOctave = mpCurrentKeyFrame->mvScaleFactors[kp1.octave]/pKF2->mvScaleFactors[kp2.octave];
// if ratioDist*ratioFactor < ratioOctave or ratioDist > ratioOctave*ratioFactor, the scale change is inconsistent: discard
if(ratioDist*ratioFactor<ratioOctave || ratioDist>ratioOctave*ratioFactor)
continue;
// Triangulation is succesfull
/// 2.4.8 triangulation succeeded: construct a MapPoint
cv::Mat x3D_(x3D);
MapPoint* pMP = new MapPoint(x3D_,mpCurrentKeyFrame,mpAtlas->GetCurrentMap());
// a. the keyframes observing this MapPoint
pMP->AddObservation(mpCurrentKeyFrame,idx1);
pMP->AddObservation(pKF2,idx2);
// b. the MapPoint's descriptor
mpCurrentKeyFrame->AddMapPoint(pMP,idx1);
pKF2->AddMapPoint(pMP,idx2);
pMP->ComputeDistinctiveDescriptors();
// c. the MapPoint's mean viewing direction and depth range
pMP->UpdateNormalAndDepth();
/// 2.4.9 push the new point into the culling queue
// these MapPoints will all be checked later by MapPointCulling
mpAtlas->AddMapPoint(pMP);
mlpRecentAddedMapPoints.push_back(pMP);
}
}
}
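The parallax test in steps 2.4.2 to 2.4.4 can be illustrated numerically. These are toy helpers (not the ORB-SLAM3 code): the cosine of the angle between the two viewing rays is compared against the "virtual" parallax of a stereo observation of baseline b at a given depth, and linear triangulation is used only when the ray parallax is the larger angle, i.e. the smaller cosine:

```cpp
#include <cassert>
#include <cmath>

// Cosine of the angle between two viewing rays (step 2.4.2).
double CosRayParallax(const double r1[3], const double r2[3]) {
    double dot = r1[0]*r2[0] + r1[1]*r2[1] + r1[2]*r2[2];
    double n1 = std::sqrt(r1[0]*r1[0] + r1[1]*r1[1] + r1[2]*r1[2]);
    double n2 = std::sqrt(r2[0]*r2[0] + r2[1]*r2[1] + r2[2]*r2[2]);
    return dot / (n1 * n2);
}

// A stereo match of baseline b at depth d subtends an angle of
// 2*atan2(b/2, d) (step 2.4.3).
double CosStereoParallax(double b, double depth) {
    return std::cos(2.0 * std::atan2(b / 2.0, depth));
}
```

When `CosRayParallax(...) < CosStereoParallax(...)` the ray parallax angle exceeds the stereo one, so the linear-triangulation branch of step 2.4.4 is taken.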
SearchInNeighbors: check and fuse MapPoints duplicated between the current keyframe and its (two-level) neighbors
void LocalMapping::SearchInNeighbors()
{
// Retrieve neighbor keyframes
// Step 1: get the nn neighbor keyframes with the highest covisibility weights
int nn = 10;
if(mbMonocular)
nn=20;
/// collect the current keyframe's first-level covisible keyframes
const vector<KeyFrame*> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(nn);
vector<KeyFrame*> vpTargetKFs;
for(vector<KeyFrame*>::const_iterator vit=vpNeighKFs.begin(), vend=vpNeighKFs.end(); vit!=vend; vit++)
{
KeyFrame* pKFi = *vit;
if(pKFi->isBad() || pKFi->mnFuseTargetForKF == mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi); // add a first-level neighbor
pKFi->mnFuseTargetForKF = mpCurrentKeyFrame->mnId; // and mark it as added
}
// Add some covisible of covisible
// Extend to some second neighbors if abort is not requested
for(int i=0, imax=vpTargetKFs.size(); i<imax; i++)
{
const vector<KeyFrame*> vpSecondNeighKFs = vpTargetKFs[i]->GetBestCovisibilityKeyFrames(20);
for(vector<KeyFrame*>::const_iterator vit2=vpSecondNeighKFs.begin(), vend2=vpSecondNeighKFs.end(); vit2!=vend2; vit2++)
{
KeyFrame* pKFi2 = *vit2;
if(pKFi2->isBad() || pKFi2->mnFuseTargetForKF==mpCurrentKeyFrame->mnId || pKFi2->mnId==mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi2); // add a second-level neighbor
pKFi2->mnFuseTargetForKF=mpCurrentKeyFrame->mnId;
}
if (mbAbortBA)
break;
}
// Extend to temporal neighbors
// extend to temporal neighbors: while fewer than 20, add the previous keyframe, the one before it, and so on
if(mbInertial)
{
KeyFrame* pKFi = mpCurrentKeyFrame->mPrevKF;
while(vpTargetKFs.size()<20 && pKFi)
{
if(pKFi->isBad() || pKFi->mnFuseTargetForKF==mpCurrentKeyFrame->mnId)
{
pKFi = pKFi->mPrevKF;
continue;
}
vpTargetKFs.push_back(pKFi);
pKFi->mnFuseTargetForKF=mpCurrentKeyFrame->mnId;
pKFi = pKFi->mPrevKF;
}
}
// Search matches by projection from current KF in target KFs
ORBmatcher matcher;
// Step 2: fuse the current keyframe's MapPoints with those of the first-level, second-level, and temporal neighbors
// fetch the current keyframe's matched map points, vpMapPointMatches
vector<MapPoint*> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
for(vector<KeyFrame*>::iterator vit=vpTargetKFs.begin(), vend=vpTargetKFs.end(); vit!=vend; vit++)
{
KeyFrame* pKFi = *vit;
// project the current keyframe's MapPoints into the neighbor keyframe pKFi and check for duplicates
// 1. if a MapPoint matches a feature that already has a MapPoint, merge the two (keep the one with more observations)
// 2. if a MapPoint matches a feature without a MapPoint, attach the MapPoint to that feature
matcher.Fuse(pKFi,vpMapPointMatches);
if(pKFi->NLeft != -1) matcher.Fuse(pKFi,vpMapPointMatches,true);
}
if (mbAbortBA)
return;
// Search matches by projection from target KFs in current KF
vector<MapPoint*> vpFuseCandidates;
vpFuseCandidates.reserve(vpTargetKFs.size()*vpMapPointMatches.size());
// Step 3: fuse the first- and second-level neighbors' MapPoints into the current keyframe
for(vector<KeyFrame*>::iterator vitKF=vpTargetKFs.begin(), vendKF=vpTargetKFs.end(); vitKF!=vendKF; vitKF++)
{
KeyFrame* pKFi = *vitKF;
vector<MapPoint*> vpMapPointsKFi = pKFi->GetMapPointMatches();
// iterate over all MapPoints of the first- and second-level neighbor keyframes
for(vector<MapPoint*>::iterator vitMP=vpMapPointsKFi.begin(), vendMP=vpMapPointsKFi.end(); vitMP!=vendMP; vitMP++)
{
MapPoint* pMP = *vitMP;
if(!pMP)
continue;
// skip bad points and points already pushed into vpFuseCandidates
if(pMP->isBad() || pMP->mnFuseCandidateForKF == mpCurrentKeyFrame->mnId)
continue;
// add the point to the candidate set and mark it as added
pMP->mnFuseCandidateForKF = mpCurrentKeyFrame->mnId;
vpFuseCandidates.push_back(pMP);
}
}
// project the fuse candidates into the current keyframe and check for duplicates
matcher.Fuse(mpCurrentKeyFrame,vpFuseCandidates);
if(mpCurrentKeyFrame->NLeft != -1) matcher.Fuse(mpCurrentKeyFrame,vpFuseCandidates,true);
// Step 4: update the descriptors, depth, and mean viewing directions of the current keyframe's MapPoints
// Update points
vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
for(size_t i=0, iend=vpMapPointMatches.size(); i<iend; i++)
{
MapPoint* pMP=vpMapPointMatches[i];
if(pMP)
{
if(!pMP->isBad())
{ // among all keyframes that observe pMP, pick the best descriptor
pMP->ComputeDistinctiveDescriptors();
// update the mean viewing direction and observation distance range
pMP->UpdateNormalAndDepth();
}
}
}
// Update connections in covisibility graph
// Step 5: after updating the MapPoints, update the connections to other keyframes,
// i.e. the covisibility graph
mpCurrentKeyFrame->UpdateConnections();
}
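The two Fuse cases from Step 2 can be sketched with toy data structures. This mirrors only the decision rule described above, not `ORBmatcher::Fuse` itself; `Pt`, `slots`, and the hypothetical `Fuse` below are illustrations:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Toy map point: an id and its observation count.
struct Pt { int id; int nObs; };

// slots[i] is the map point attached to feature i (id < 0 means empty).
// matches[k] = {feature index, projected map point}.
void Fuse(std::vector<Pt>& slots, const std::vector<std::pair<int, Pt>>& matches) {
    for (const auto& m : matches) {
        Pt& slot = slots[m.first];
        if (slot.id < 0)                     // case 2: no map point yet -> attach
            slot = m.second;
        else if (m.second.nObs > slot.nObs)  // case 1: duplicate -> keep the more-observed one
            slot = m.second;
    }
}
```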
ComputeF12: compute the fundamental matrix between two keyframes
- Essential matrix: E = [t12]x * R12
- Fundamental matrix: F12 = K1^(-T) * E * K2^(-1)
cv::Mat LocalMapping::ComputeF12(KeyFrame *&pKF1, KeyFrame *&pKF2)
{
cv::Mat R1w = pKF1->GetRotation();
cv::Mat t1w = pKF1->GetTranslation();
cv::Mat R2w = pKF2->GetRotation();
cv::Mat t2w = pKF2->GetTranslation();
cv::Mat R12 = R1w*R2w.t();
cv::Mat t12 = -R1w*R2w.t()*t2w+t1w;
cv::Mat t12x = SkewSymmetricMatrix(t12);
const cv::Mat &K1 = pKF1->mpCamera->toK();
const cv::Mat &K2 = pKF2->mpCamera->toK();
return K1.t().inv()*t12x*R12*K2.inv();
}
cv::Matx33f LocalMapping::ComputeF12_(KeyFrame *&pKF1, KeyFrame *&pKF2)
{
auto R1w = pKF1->GetRotation_();
auto t1w = pKF1->GetTranslation_();
auto R2w = pKF2->GetRotation_();
auto t2w = pKF2->GetTranslation_();
auto R12 = R1w*R2w.t();
auto t12 = -R1w*R2w.t()*t2w+t1w;
auto t12x = SkewSymmetricMatrix_(t12);
const auto &K1 = pKF1->mpCamera->toK_();
const auto &K2 = pKF2->mpCamera->toK_();
return K1.t().inv()*t12x*R12*K2.inv();
}
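The formula can be checked numerically. With K1 = K2 = I the fundamental matrix reduces to the essential matrix E = [t12]x * R12, and for the two projections of any 3D point the residual x1^T * E * x2 must vanish. A self-contained sketch (hypothetical helper; camera 1 at the world origin, camera 2 with R2w = I and translation t2w, so R12 = I and t12 = -t2w):

```cpp
#include <cassert>
#include <cmath>

// Epipolar residual x1^T * E * x2 with E = [t12]x (R12 = I, K = I).
// x1 is the point in camera 1 (= world) coordinates, x2 = X + t2w is the
// point in camera 2 coordinates; both serve as homogeneous bearings.
double EpipolarResidual(const double X[3], const double t2w[3]) {
    double x1[3] = {X[0], X[1], X[2]};
    double x2[3] = {X[0] + t2w[0], X[1] + t2w[1], X[2] + t2w[2]};
    double t12[3] = {-t2w[0], -t2w[1], -t2w[2]};
    // E*x2 = t12 x x2 (cross product), then dot with x1
    double Ex2[3] = {t12[1]*x2[2] - t12[2]*x2[1],
                     t12[2]*x2[0] - t12[0]*x2[2],
                     t12[0]*x2[1] - t12[1]*x2[0]};
    return x1[0]*Ex2[0] + x1[1]*Ex2[1] + x1[2]*Ex2[2];
}
```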
SearchInNeighbors 检查并融合当前关键帧与相邻帧重复的MapPoints
void LocalMapping::SearchInNeighbors()
{
// Retrieve neighbor keyframes
int nn = 10;
if(mbMonocular)
nn=20;
/// 步骤1:获得当前关键帧在covisibility图中权重排名前nn的邻接关键帧
const vector<KeyFrame*> vpNeighKFs = mpCurrentKeyFrame->GetBestCovisibilityKeyFrames(nn);
vector<KeyFrame*> vpTargetKFs;
// 遍历相邻关键帧,将其push进 vpTargetKFs 一级相邻
for(vector<KeyFrame*>::const_iterator vit=vpNeighKFs.begin(), vend=vpNeighKFs.end(); vit!=vend; vit++)
{
KeyFrame* pKFi = *vit;
if(pKFi->isBad() || pKFi->mnFuseTargetForKF == mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi);
pKFi->mnFuseTargetForKF = mpCurrentKeyFrame->mnId;
}
// Add some covisible of covisible
// Extend to some second neighbors if abort is not requested
// 如果没有请求中止,则扩展到第二个邻居
for(int i=0, imax=vpTargetKFs.size(); i<imax; i++)
{
const vector<KeyFrame*> vpSecondNeighKFs = vpTargetKFs[i]->GetBestCovisibilityKeyFrames(20);
for(vector<KeyFrame*>::const_iterator vit2=vpSecondNeighKFs.begin(), vend2=vpSecondNeighKFs.end(); vit2!=vend2; vit2++)
{
KeyFrame* pKFi2 = *vit2;
if(pKFi2->isBad() || pKFi2->mnFuseTargetForKF==mpCurrentKeyFrame->mnId || pKFi2->mnId==mpCurrentKeyFrame->mnId)
continue;
vpTargetKFs.push_back(pKFi2);
pKFi2->mnFuseTargetForKF=mpCurrentKeyFrame->mnId;
}
if (mbAbortBA)
break;
}
// Extend to temporal neighbors
// <font color="blue">邻域个数小于20时</font>,扩展到时间邻域 ,当前帧的前一帧、前前一帧加入等加入容器,直到满足20
if(mbInertial)
{
KeyFrame* pKFi = mpCurrentKeyFrame->mPrevKF;
while(vpTargetKFs.size()<20 && pKFi)
{
if(pKFi->isBad() || pKFi->mnFuseTargetForKF==mpCurrentKeyFrame->mnId)
{
pKFi = pKFi->mPrevKF;
continue;
}
vpTargetKFs.push_back(pKFi);
pKFi->mnFuseTargetForKF=mpCurrentKeyFrame->mnId;
pKFi = pKFi->mPrevKF;
}
}
// Search matches by projection from current KF in target KFs
ORBmatcher matcher;
vector<MapPoint*> vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
/// 步骤2:将当前帧的MapPoints分别与一级、二级、时间相邻帧(的MapPoints)进行融合
for(vector<KeyFrame*>::iterator vit=vpTargetKFs.begin(), vend=vpTargetKFs.end(); vit!=vend; vit++)
{
KeyFrame* pKFi = *vit;
//- 1. If a MapPoint matches a feature in the target keyframe that already has a MapPoint, merge the two (keep the one with more observations)
//- 2. If it matches a feature without an associated MapPoint, attach it as that feature's MapPoint
matcher.Fuse(pKFi,vpMapPointMatches);
if(pKFi->NLeft != -1) matcher.Fuse(pKFi,vpMapPointMatches,true);
}
if (mbAbortBA)
return;
// Search matches by projection from target KFs in current KF
vector<MapPoint*> vpFuseCandidates;
vpFuseCandidates.reserve(vpTargetKFs.size()*vpMapPointMatches.size());
/// Step 3: fuse the MapPoints of the first- and second-level neighbors into the current keyframe
for(vector<KeyFrame*>::iterator vitKF=vpTargetKFs.begin(), vendKF=vpTargetKFs.end(); vitKF!=vendKF; vitKF++)
{
KeyFrame* pKFi = *vitKF;
vector<MapPoint*> vpMapPointsKFi = pKFi->GetMapPointMatches();
for(vector<MapPoint*>::iterator vitMP=vpMapPointsKFi.begin(), vendMP=vpMapPointsKFi.end(); vitMP!=vendMP; vitMP++)
{
MapPoint* pMP = *vitMP;
if(!pMP)
continue;
if(pMP->isBad() || pMP->mnFuseCandidateForKF == mpCurrentKeyFrame->mnId)
continue;
pMP->mnFuseCandidateForKF = mpCurrentKeyFrame->mnId;
vpFuseCandidates.push_back(pMP);
}
}
matcher.Fuse(mpCurrentKeyFrame,vpFuseCandidates);
if(mpCurrentKeyFrame->NLeft != -1) matcher.Fuse(mpCurrentKeyFrame,vpFuseCandidates,true);
// Update points
/// Step 4: update attributes of the current keyframe's MapPoints (descriptor, depth, mean viewing direction, etc.)
vpMapPointMatches = mpCurrentKeyFrame->GetMapPointMatches();
for(size_t i=0, iend=vpMapPointMatches.size(); i<iend; i++)
{
MapPoint* pMP=vpMapPointMatches[i];
if(pMP)
{
if(!pMP->isBad())
{ // Among all keyframes observing pMP, pick the most representative descriptor
pMP->ComputeDistinctiveDescriptors();
// Update the mean viewing direction and depth range
pMP->UpdateNormalAndDepth();
}
}
}
// Update connections in covisibility graph
/// Step 5: after updating the current keyframe's MapPoints, refresh its connections in the covisibility graph
mpCurrentKeyFrame->UpdateConnections();
}
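The neighbor-collection loops above avoid a `std::set` by stamping each candidate keyframe with `mnFuseTargetForKF = mpCurrentKeyFrame->mnId`, so a "have I already taken this one?" test is a single integer compare. A minimal sketch of that pattern (`Node` and `collectUnique` are hypothetical names, not ORB-SLAM3 API):

```cpp
#include <cassert>
#include <vector>

// Hypothetical miniature of the stamp-based visited check used above:
// each node remembers the id of the query that last collected it, so a
// membership test is one integer compare instead of a set lookup.
struct Node {
    long id;
    long lastCollectedBy = -1;  // plays the role of mnFuseTargetForKF
};

// Gather each node at most once across overlapping neighbor lists.
std::vector<Node*> collectUnique(const std::vector<std::vector<Node*>>& lists,
                                 long queryId) {
    std::vector<Node*> out;
    for (const auto& list : lists)
        for (Node* n : list) {
            if (n->lastCollectedBy == queryId) continue;  // already taken
            n->lastCollectedBy = queryId;                 // stamp it
            out.push_back(n);
        }
    return out;
}
```

The trade-off is that the stamp lives on the keyframe itself, which is cheap but only works because each query uses a fresh id.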
ScaleRefinement
void LocalMapping::ScaleRefinement()
{
// Minimum number of keyframes to compute a solution
// Minimum time (seconds) between first and last keyframe to compute a solution. Make the difference between monocular and stereo
// unique_lock<mutex> lock0(mMutexImuInit);
if (mbResetRequested)
return;
// Retrieve all keyframes in temporal order
list<KeyFrame*> lpKF;
KeyFrame* pKF = mpCurrentKeyFrame;
while(pKF->mPrevKF)
{
lpKF.push_front(pKF);
pKF = pKF->mPrevKF;
}
lpKF.push_front(pKF);
// Copy the list into a vector
vector<KeyFrame*> vpKF(lpKF.begin(),lpKF.end());
// While new keyframes are pending, process them and append them as well
while(CheckNewKeyFrames())
{
ProcessNewKeyFrame();
vpKF.push_back(mpCurrentKeyFrame);
lpKF.push_back(mpCurrentKeyFrame);
}
const int N = vpKF.size();
mRwg = Eigen::Matrix3d::Identity();
mScale=1.0;
std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
// Inertial-only optimization of the gravity rotation and scale
Optimizer::InertialOptimization(mpAtlas->GetCurrentMap(), mRwg, mScale);
std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
if (mScale<1e-1)
{
cout << "scale too small" << endl;
bInitializing=false;
return;
}
// Before this line we are not changing the map
unique_lock<mutex> lock(mpAtlas->GetCurrentMap()->mMutexMapUpdate);
std::chrono::steady_clock::time_point t2 = std::chrono::steady_clock::now();
// If the optimized scale deviates from 1, or the sensor is not monocular:
if ((fabs(mScale-1.f)>0.00001)||!mbMonocular)
{
// Apply the scaled rotation to the current map
mpAtlas->GetCurrentMap()->ApplyScaledRotation(Converter::toCvMat(mRwg).t(),mScale,true);
// Propagate the new scale and the IMU bias to the tracker
mpTracker->UpdateFrameIMU(mScale,mpCurrentKeyFrame->GetImuBias(),mpCurrentKeyFrame);
}
std::chrono::steady_clock::time_point t3 = std::chrono::steady_clock::now();
for(list<KeyFrame*>::iterator lit = mlNewKeyFrames.begin(), lend=mlNewKeyFrames.end(); lit!=lend; lit++)
{
(*lit)->SetBadFlag();
delete *lit;
}
mlNewKeyFrames.clear();
double t_inertial_only = std::chrono::duration_cast<std::chrono::duration<double> >(t1 - t0).count();
// To perform pose-inertial opt w.r.t. last keyframe
mpCurrentKeyFrame->GetMap()->IncreaseChangeIndex();
return;
}
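The effect of `ApplyScaledRotation` above can be pictured as mapping every map point by x' = s·R·x (keyframe poses get a matching update), so pairwise distances scale by s while the shape of the map is preserved. A minimal numeric sketch with a hypothetical `applyScaledRotation` helper (not the ORB-SLAM3 function):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Hypothetical helper mirroring what ApplyScaledRotation does to a map
// point: rotate by R and scale by s, i.e. x' = s * R * x.
Vec3 applyScaledRotation(const std::array<Vec3, 3>& R, double s, const Vec3& x) {
    Vec3 y{};
    for (int i = 0; i < 3; ++i)
        y[i] = s * (R[i][0]*x[0] + R[i][1]*x[1] + R[i][2]*x[2]);
    return y;
}

// Euclidean distance between two points.
double dist(const Vec3& a, const Vec3& b) {
    double d = 0;
    for (int i = 0; i < 3; ++i) d += (a[i]-b[i]) * (a[i]-b[i]);
    return std::sqrt(d);
}
```

Because R is a rotation, only the global scale changes: this is exactly the monocular scale correction being applied to the whole map at once.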
InitializeIMU
- IMU initialization only runs when all of the following hold:
- the atlas contains more keyframes than a minimum (nMinKF)
- the keyframes gathered backwards from the current one also exceed that minimum
- the time span between the first and the current keyframe exceeds a minimum (minTime)
void LocalMapping::InitializeIMU(float priorG, float priorA, bool bFIBA)
{
if (mbResetRequested)
return;
float minTime;
int nMinKF;
if (mbMonocular)
{
minTime = 2.0;
nMinKF = 10;
}
else
{
minTime = 1.0;
nMinKF = 10;
}
// Return if the atlas has fewer keyframes than the minimum
if(mpAtlas->KeyFramesInMap()<nMinKF)
return;
// Retrieve all keyframes in temporal order and store them in a vector
list<KeyFrame*> lpKF;
KeyFrame* pKF = mpCurrentKeyFrame;
while(pKF->mPrevKF)
{
lpKF.push_front(pKF);
pKF = pKF->mPrevKF;
}
lpKF.push_front(pKF);
vector<KeyFrame*> vpKF(lpKF.begin(),lpKF.end());
// Return if there are fewer keyframes than the minimum
if(vpKF.size()<nMinKF)
return;
// Return if the time span between the first and the current keyframe is below the threshold
mFirstTs=vpKF.front()->mTimeStamp;
if(mpCurrentKeyFrame->mTimeStamp-mFirstTs<minTime)
return;
bInitializing = true;
// Process any pending new keyframes and append them
while(CheckNewKeyFrames())
{
ProcessNewKeyFrame();
vpKF.push_back(mpCurrentKeyFrame);
lpKF.push_back(mpCurrentKeyFrame);
}
// Default bias of zero
const int N = vpKF.size();
IMU::Bias b(0,0,0,0,0,0);
// Estimate the KF velocities and the gravity rotation mRwg
// If the IMU of the current map has not been initialized yet:
if (!mpCurrentKeyFrame->GetMap()->isImuInitialized())
{
cv::Mat cvRwg; // Rwg: rotation relating the gravity-aligned frame to the world frame set by the first camera, i.e. Rwg*(0,0,-1) = gravity direction in world
cv::Mat dirG = cv::Mat::zeros(3,1,CV_32F);
for(vector<KeyFrame*>::iterator itKF = vpKF.begin(); itKF!=vpKF.end(); itKF++)
{
if (!(*itKF)->mpImuPreintegrated) // no preintegration: skip
continue;
if (!(*itKF)->mPrevKF) // no previous keyframe: skip
continue;
// Accumulate the negative preintegrated velocity deltas in the world frame (dominated by gravity)
dirG -= (*itKF)->mPrevKF->GetImuRotation()*(*itKF)->mpImuPreintegrated->GetUpdatedDeltaVelocity();
// Average velocity between the two keyframes from the position difference
cv::Mat _vel = ((*itKF)->GetImuPosition() - (*itKF)->mPrevKF->GetImuPosition())/(*itKF)->mpImuPreintegrated->dT;
// Initial velocity estimate before optimization
(*itKF)->SetVelocity(_vel);
(*itKF)->mPrevKF->SetVelocity(_vel);
}
// Normalize: keep only the direction
dirG = dirG/cv::norm(dirG);
// Gravity direction in the gravity-aligned frame
cv::Mat gI = (cv::Mat_<float>(3,1) << 0.0f, 0.0f, -1.0f);
// Cross product of the two unit vectors: v is perpendicular to both (the rotation axis)
cv::Mat v = gI.cross(dirG);
const float nv = cv::norm(v); // its norm
const float cosg = gI.dot(dirG); // dot product of unit vectors: cosine of the angle
const float ang = acos(cosg); // angle between gI and dirG
cv::Mat vzg = v*ang/nv; // rotation vector taking gI to dirG
cvRwg = IMU::ExpSO3(vzg); // exponential map: rotation vector -> rotation matrix
mRwg = Converter::toMatrix3d(cvRwg);
mTinit = mpCurrentKeyFrame->mTimeStamp-mFirstTs;
}
else // Already initialized: take Rwg, bg, ba directly
{
mRwg = Eigen::Matrix3d::Identity(); // gravity already aligned: identity
mbg = Converter::toVector3d(mpCurrentKeyFrame->GetGyroBias());
mba = Converter::toVector3d(mpCurrentKeyFrame->GetAccBias());
}
mScale=1.0;
mInitTime = mpTracker->mLastFrame.mTimeStamp-vpKF.front()->mTimeStamp;
std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
/// Inertial-only BA (IMU-only MAP estimation)
Optimizer::InertialOptimization(mpAtlas->GetCurrentMap(), mRwg, mScale, mbg, mba, mbMonocular, infoInertial, false, false, priorG, priorA);
std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
// Bail out if the estimated scale is too small
if (mScale<1e-1)
{
cout << "scale too small" << endl;
bInitializing=false;
return;
}
// Before this line we are not changing the map
unique_lock<mutex> lock(mpAtlas->GetCurrentMap()->mMutexMapUpdate);
std::chrono::steady_clock::time_point t2 = std::chrono::steady_clock::now();
/// The previous step estimated the scale s and the gravity rotation Rwg.
/// Now correct the scale of the poses, velocities and map points, and rotate them
/// so that the old world frame's -z axis coincides with the gravity direction;
/// IMU data expressed in this world frame can then subtract gravity g directly.
// If the scale deviates from 1, or the sensor is not monocular
if ((fabs(mScale-1.f)>0.00001)||!mbMonocular)
{
mpAtlas->GetCurrentMap()->ApplyScaledRotation(Converter::toCvMat(mRwg).t(),mScale,true);
mpTracker->UpdateFrameIMU(mScale,vpKF[0]->GetImuBias(),mpCurrentKeyFrame);
}
std::chrono::steady_clock::time_point t3 = std::chrono::steady_clock::now();
// Check if initialization OK
if (!mpAtlas->isImuInitialized())
for(int i=0;i<N;i++)
{
KeyFrame* pKF2 = vpKF[i];
pKF2->bImu = true;
}
std::chrono::steady_clock::time_point t4 = std::chrono::steady_clock::now();
if (bFIBA) // Run a full visual-inertial BA if requested
{
if (priorA!=0.f)
Optimizer::FullInertialBA(mpAtlas->GetCurrentMap(), 100, false, 0, NULL, true, priorG, priorA);
else
Optimizer::FullInertialBA(mpAtlas->GetCurrentMap(), 100, false, 0, NULL, false);
}
std::chrono::steady_clock::time_point t5 = std::chrono::steady_clock::now();
// If initialization is OK
/// Initialization succeeded: propagate the IMU state to the tracker
mpTracker->UpdateFrameIMU(1.0,vpKF[0]->GetImuBias(),mpCurrentKeyFrame);
/// If the map was not flagged as IMU-initialized yet, flag it now
if (!mpAtlas->isImuInitialized())
{
cout << "IMU in Map " << mpAtlas->GetCurrentMap()->GetId() << " is initialized" << endl;
mpAtlas->SetImuInitialized();
mpTracker->t0IMU = mpTracker->mCurrentFrame.mTimeStamp;
mpCurrentKeyFrame->bImu = true;
}
mbNewInit=true;
mnKFs=vpKF.size();
mIdxInit++;
for(list<KeyFrame*>::iterator lit = mlNewKeyFrames.begin(), lend=mlNewKeyFrames.end(); lit!=lend; lit++)
{
(*lit)->SetBadFlag();
delete *lit;
}
mlNewKeyFrames.clear();
mpTracker->mState=Tracking::OK;
bInitializing = false;
mpCurrentKeyFrame->GetMap()->IncreaseChangeIndex();
return;
}
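The Rwg construction in the middle of InitializeIMU (cross product, acos, ExpSO3) can be reproduced in a few lines. A self-contained sketch, using plain arrays instead of cv::Mat, a simplified re-implementation of ExpSO3 rather than ORB-SLAM3's, and ignoring the degenerate case where the gravity estimate is already parallel to (0,0,-1):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Rodrigues formula: exponential map from a rotation vector w to a matrix,
// R = cos(t)*I + (1-cos(t))*a*a^T + sin(t)*skew(a), t = |w|, a = w/t.
Mat3 ExpSO3(const Vec3& w) {
    double t = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
    Vec3 a{w[0]/t, w[1]/t, w[2]/t};  // unit axis (assumes t > 0)
    double c = std::cos(t), s = std::sin(t);
    Mat3 K{};                        // skew-symmetric matrix of a
    K[0] = {0.0, -a[2], a[1]};
    K[1] = {a[2], 0.0, -a[0]};
    K[2] = {-a[1], a[0], 0.0};
    Mat3 R{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            R[i][j] = c*(i == j) + (1.0 - c)*a[i]*a[j] + s*K[i][j];
    return R;
}

// Rotation Rwg with Rwg*(0,0,-1) = dirG, mirroring the InitializeIMU code:
// axis = gI x dirG, angle = acos(gI . dirG). dirG must be a unit vector.
Mat3 RwgFromGravityDir(const Vec3& dirG) {
    Vec3 gI{0.0, 0.0, -1.0};
    Vec3 v{gI[1]*dirG[2] - gI[2]*dirG[1],
           gI[2]*dirG[0] - gI[0]*dirG[2],
           gI[0]*dirG[1] - gI[1]*dirG[0]};
    double nv = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    double ang = std::acos(gI[0]*dirG[0] + gI[1]*dirG[1] + gI[2]*dirG[2]);
    Vec3 vzg{v[0]*ang/nv, v[1]*ang/nv, v[2]*ang/nv};
    return ExpSO3(vzg);
}
```

Applying the resulting Rwg to (0,0,-1) recovers the measured gravity direction, which is exactly the property the later `ApplyScaledRotation(Rwg.t(), ...)` call relies on.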
Atlas
Member variables
std::set<Map*> mspMaps; // all maps
std::set<Map*> mspBadMaps; // maps marked as bad
Map* mpCurrentMap; // current map
std::vector<GeometricCamera*> mvpCameras;
std::vector<KannalaBrandt8*> mvpBackupCamKan;
std::vector<Pinhole*> mvpBackupCamPin;
std::mutex mMutexAtlas;
unsigned long int mnLastInitKFidMap;
Viewer* mpViewer;
bool mHasViewer;
// Class references for the map reconstruction from the save file
KeyFrameDatabase* mpKeyFrameDB;
ORBVocabulary* mpORBVocabulary;
Constructors
// Can be constructed with or without an initial KF id (the id version creates the first map on initialization)
Atlas::Atlas() {
mpCurrentMap = static_cast<Map*>(NULL);
}
Atlas::Atlas(int initKFid): mnLastInitKFidMap(initKFid), mHasViewer(false)
{
mpCurrentMap = static_cast<Map*>(NULL);
CreateNewMap();
}
Other functions
Creating a new map
void Atlas::CreateNewMap()
{
unique_lock<mutex> lock(mMutexAtlas);
cout << "Creation of new map with id: " << Map::nNextId << endl;
// If a current map already exists
if(mpCurrentMap){
cout << "Exits current map " << endl;
// If the atlas is not empty and the last init KF id is below the current map's max KF id, set it to max+1
if(!mspMaps.empty() && mnLastInitKFidMap < mpCurrentMap->GetMaxKFid())
mnLastInitKFidMap = mpCurrentMap->GetMaxKFid()+1; //The init KF is the next of current maximum
// Mark the current map as stored
mpCurrentMap->SetStoredMap();
cout << "Saved map with ID: " << mpCurrentMap->GetId() << endl;
//if(mHasViewer)
// mpViewer->AddMapToCreateThumbnail(mpCurrentMap);
}
cout << "Creation of new map with last KF id: " << mnLastInitKFidMap << endl;
// Create a new map, set it as current, and insert it into the atlas
mpCurrentMap = new Map(mnLastInitKFidMap);
mpCurrentMap->SetCurrentMap();
mspMaps.insert(mpCurrentMap);
}
Changing the current map
// If a current map exists, store it; then assign and activate the new one
void Atlas::ChangeMap(Map* pMap)
{
unique_lock<mutex> lock(mMutexAtlas);
cout << "Chage to map with id: " << pMap->GetId() << endl;
if(mpCurrentMap){
mpCurrentMap->SetStoredMap();
}
mpCurrentMap = pMap;
mpCurrentMap->SetCurrentMap();
}
Setting reference map points
// Set the reference map points of the current map
void Atlas::SetReferenceMapPoints(const std::vector<MapPoint*> &vpMPs)
{
unique_lock<mutex> lock(mMutexAtlas);
mpCurrentMap->SetReferenceMapPoints(vpMPs);
}
Getting all maps
// Return all maps, sorted by id
vector<Map*> Atlas::GetAllMaps()
{
unique_lock<mutex> lock(mMutexAtlas);
struct compFunctor
{
inline bool operator()(Map* elem1 ,Map* elem2)
{
return elem1->GetId() < elem2->GetId();
}
};
vector<Map*> vMaps(mspMaps.begin(),mspMaps.end());
sort(vMaps.begin(), vMaps.end(), compFunctor());
return vMaps;
}
Clearing the atlas
void Atlas::clearAtlas()
{
unique_lock<mutex> lock(mMutexAtlas);
/*for(std::set<Map*>::iterator it=mspMaps.begin(), send=mspMaps.end(); it!=send; it++)
{
(*it)->clear();
delete *it;
}*/
mspMaps.clear();
mpCurrentMap = static_cast<Map*>(NULL);
mnLastInitKFidMap = 0;
}
Counting keyframes and map points
long unsigned int Atlas::GetNumLivedKF()
{
unique_lock<mutex> lock(mMutexAtlas);
long unsigned int num = 0;
for(Map* mMAPi : mspMaps)
{
num += mMAPi->GetAllKeyFrames().size();
}
return num;
}
long unsigned int Atlas::GetNumLivedMP() {
unique_lock<mutex> lock(mMutexAtlas);
long unsigned int num = 0;
for (Map *mMAPi : mspMaps) {
num += mMAPi->GetAllMapPoints().size();
}
return num;
}
Map
Member variables
public:
vector<KeyFrame*> mvpKeyFrameOrigins; // origin keyframes of the map
vector<unsigned long int> mvBackupKeyFrameOriginsId; // backup of the origin keyframe ids
KeyFrame* mpFirstRegionKF; // first keyframe of the region
std::mutex mMutexMapUpdate; // map update mutex
// This avoid that two points are created simultaneously in separate threads (id conflict)
std::mutex mMutexPointCreation;
bool mbFail; // whether this map has failed
// Size of the thumbnail (always in power of 2)
static const int THUMB_WIDTH = 512;
static const int THUMB_HEIGHT = 512;
static long unsigned int nNextId;
protected:
long unsigned int mnId;
std::set<MapPoint*> mspMapPoints; // map points
std::set<KeyFrame*> mspKeyFrames; // keyframes
KeyFrame* mpKFinitial;
KeyFrame* mpKFlowerID;
std::vector<MapPoint*> mvpReferenceMapPoints; // reference map points
bool mbImuInitialized; // has the IMU been initialized?
int mnMapChange; // map change counter
int mnMapChangeNotified; // last notified map change
long unsigned int mnInitKFid;
long unsigned int mnMaxKFid; // max keyframe id
long unsigned int mnLastLoopKFid; // keyframe id of the last loop closure
// Index related to a big change in the map (loop closure, global BA)
int mnBigChangeIdx;
// View of the map in aerial sight (for the AtlasViewer)
GLubyte* mThumbnail;
bool mIsInUse;
bool mHasTumbnail;
bool mbBad = false;
bool mbIsInertial; // inertial (IMU) map
bool mbIMU_BA1;
bool mbIMU_BA2;
std::mutex mMutexMap;
Rotating the map
void Map::RotateMap(const cv::Mat &R)
{
unique_lock<mutex> lock(mMutexMap);
// 4x4 transformation matrix
cv::Mat Txw = cv::Mat::eye(4,4,CV_32F);
R.copyTo(Txw.rowRange(0,3).colRange(0,3));
KeyFrame* pKFini = mvpKeyFrameOrigins[0];
cv::Mat Twc_0 = pKFini->GetPoseInverse();
cv::Mat Txc_0 = Txw*Twc_0;
cv::Mat Txb_0 = Txc_0*pKFini->mImuCalib.Tcb;
cv::Mat Tyx = cv::Mat::eye(4,4,CV_32F);
Tyx.rowRange(0,3).col(3) = -Txb_0.rowRange(0,3).col(3);
cv::Mat Tyw = Tyx*Txw;
cv::Mat Ryw = Tyw.rowRange(0,3).colRange(0,3);
cv::Mat tyw = Tyw.rowRange(0,3).col(3);
// Multiply every keyframe's pose and velocity by the transformation
for(set<KeyFrame*>::iterator sit=mspKeyFrames.begin(); sit!=mspKeyFrames.end(); sit++)
{
KeyFrame* pKF = *sit;
cv::Mat Twc = pKF->GetPoseInverse();
cv::Mat Tyc = Tyw*Twc;
cv::Mat Tcy = cv::Mat::eye(4,4,CV_32F);
Tcy.rowRange(0,3).colRange(0,3) = Tyc.rowRange(0,3).colRange(0,3).t();
Tcy.rowRange(0,3).col(3) = -Tcy.rowRange(0,3).colRange(0,3)*Tyc.rowRange(0,3).col(3);
pKF->SetPose(Tcy);
cv::Mat Vw = pKF->GetVelocity();
pKF->SetVelocity(Ryw*Vw);
}
// Transform every map point accordingly
for(set<MapPoint*>::iterator sit=mspMapPoints.begin(); sit!=mspMapPoints.end(); sit++)
{
MapPoint* pMP = *sit;
pMP->SetWorldPos(Ryw*pMP->GetWorldPos()+tyw);
pMP->UpdateNormalAndDepth();
}
}
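RotateMap obtains each new pose Tcy by inverting Tyc in place: for a rigid transform T = [R t; 0 1], the inverse is [Rᵀ −Rᵀt; 0 1], which is exactly what the two `rowRange` lines compute. A standalone sketch of that identity (plain 4x4 arrays and hypothetical `InverseSE3`/`Mul` helpers, not cv::Mat):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Invert a rigid-body transform T = [R t; 0 1] without a general 4x4
// inverse: T^-1 = [R^T, -R^T t; 0 1], as done for Tcy in RotateMap.
Mat4 InverseSE3(const Mat4& T) {
    Mat4 inv{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv[i][j] = T[j][i];                      // R^T
    for (int i = 0; i < 3; ++i) {
        inv[i][3] = 0.0;
        for (int j = 0; j < 3; ++j)
            inv[i][3] -= inv[i][j] * T[j][3];         // -R^T t
    }
    inv[3] = {0.0, 0.0, 0.0, 1.0};
    return inv;
}

// Plain 4x4 matrix product.
Mat4 Mul(const Mat4& A, const Mat4& B) {
    Mat4 C{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```

This closed form is both cheaper and numerically better behaved than a general matrix inverse, which is why pose code uses it throughout.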