ORB-SLAM2: The Tracking Thread

Overall Tracking Flow

As the first of ORB-SLAM2's three major threads, the Tracking thread receives every frame from the sensor (camera) and passes the processed keyframes to the LocalMapping thread. Tracking is in fact the system's main thread, and it runs by calling the Track function repeatedly.
Main responsibilities of the Tracking module:

  • monocular map initialization
  • pose estimation for the current frame
  • local-map tracking of the current frame
  • generation of candidate keyframes

(Tracking flowchart figures omitted)

System::TrackMonocular

Passes an image into the SLAM system.

cv::Mat System::TrackMonocular(const cv::Mat &im, const double &timestamp)
{
    if(mSensor!=MONOCULAR)
    {
        cerr << "ERROR: you called TrackMonocular but input sensor was not set to Monocular." << endl;
        exit(-1);
    }

    // Check mode change
    {
        unique_lock<mutex> lock(mMutexMode);
        if(mbActivateLocalizationMode)
        {
            mpLocalMapper->RequestStop();

            // Wait until Local Mapping has effectively stopped
            while(!mpLocalMapper->isStopped())
            {
                usleep(1000);
            }

            mpTracker->InformOnlyTracking(true);
            mbActivateLocalizationMode = false;
        }
        if(mbDeactivateLocalizationMode)
        {
            mpTracker->InformOnlyTracking(false);
            mpLocalMapper->Release();
            mbDeactivateLocalizationMode = false;
        }
    }

    // Check reset
    {
    unique_lock<mutex> lock(mMutexReset);
    if(mbReset)
    {
        mpTracker->Reset();
        mbReset = false;
    }
    }

    cv::Mat Tcw = mpTracker->GrabImageMonocular(im,timestamp);

    unique_lock<mutex> lock2(mMutexState);
    mTrackingState = mpTracker->mState;
    mTrackedMapPoints = mpTracker->mCurrentFrame.mvpMapPoints;
    mTrackedKeyPointsUn = mpTracker->mCurrentFrame.mvKeysUn;

    return Tcw;
}
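
For context, this is a minimal sketch of the frame loop that feeds TrackMonocular, modeled on the mono_* examples shipped with ORB-SLAM2; the vocabulary/settings paths and the two-image list here are placeholders, not part of the original post.

#include <string>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "System.h"

int main()
{
    // Vocabulary file, camera settings file, sensor type, viewer flag (placeholder paths)
    ORB_SLAM2::System SLAM("ORBvoc.txt","camera.yaml",ORB_SLAM2::System::MONOCULAR,true);

    std::vector<std::string> vImageFilenames = {"000000.png","000001.png"};// placeholder image list
    std::vector<double> vTimestamps = {0.0,0.033};

    for(size_t i=0; i<vImageFilenames.size(); i++)
    {
        cv::Mat im = cv::imread(vImageFilenames[i],cv::IMREAD_UNCHANGED);
        if(im.empty())
            continue;
        // Returns the world-to-camera transform Tcw estimated by Tracking (empty if tracking failed)
        cv::Mat Tcw = SLAM.TrackMonocular(im,vTimestamps[i]);
    }

    SLAM.Shutdown();
    return 0;
}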

Tracking::GrabImageMonocular

Converts the RGB/RGBA image to grayscale, then builds a new Frame with Frame(), which extracts the ORB features. The data enters the Track() function in the form of a Frame; the output is the transform from the world coordinate system to the camera coordinate system of this frame (note: the returned transform is Tracking's pose estimate).

cv::Mat Tracking::GrabImageMonocular(const cv::Mat &im, const double &timestamp)
{
    mImGray = im;
    // Step 1: convert the RGB or RGBA image to grayscale
    if(mImGray.channels()==3)
    {
        if(mbRGB)
            cvtColor(mImGray,mImGray,CV_RGB2GRAY);
        else
            cvtColor(mImGray,mImGray,CV_BGR2GRAY);
    }
    else if(mImGray.channels()==4)
    {
        if(mbRGB)
            cvtColor(mImGray,mImGray,CV_RGBA2GRAY);
        else
            cvtColor(mImGray,mImGray,CV_BGRA2GRAY);
    }
    // Step 2: construct the Frame
    if(mState==NOT_INITIALIZED || mState==NO_IMAGES_YET)// the state right before successful initialization is NO_IMAGES_YET
        mCurrentFrame = Frame(mImGray,timestamp,mpIniORBextractor,mpORBVocabulary,mK,mDistCoef,mbf,mThDepth);
    else
        mCurrentFrame = Frame(mImGray,timestamp,mpORBextractorLeft,mpORBVocabulary,mK,mDistCoef,mbf,mThDepth);

    Track();

    return mCurrentFrame.mTcw.clone();
}

Frame::Frame

The monocular Frame constructor: it extracts ORB features, undistorts the keypoints, and assigns them to an image grid for fast matching.

Frame::Frame(const cv::Mat &imGray, const double &timeStamp, ORBextractor* extractor,ORBVocabulary* voc, cv::Mat &K, cv::Mat &distCoef, const float &bf, const float &thDepth)
    :mpORBvocabulary(voc),mpORBextractorLeft(extractor),mpORBextractorRight(static_cast<ORBextractor*>(NULL)),
     mTimeStamp(timeStamp), mK(K.clone()),mDistCoef(distCoef.clone()), mbf(bf), mThDepth(thDepth)
{
    // Frame ID
    mnId=nNextId++;

    // Scale Level Info
    mnScaleLevels = mpORBextractorLeft->GetLevels();
    mfScaleFactor = mpORBextractorLeft->GetScaleFactor();
    mfLogScaleFactor = log(mfScaleFactor);
    mvScaleFactors = mpORBextractorLeft->GetScaleFactors();
    mvInvScaleFactors = mpORBextractorLeft->GetInverseScaleFactors();
    mvLevelSigma2 = mpORBextractorLeft->GetScaleSigmaSquares();
    mvInvLevelSigma2 = mpORBextractorLeft->GetInverseScaleSigmaSquares();

    // ORB extraction
    ExtractORB(0,imGray);

    N = mvKeys.size();

    if(mvKeys.empty())
        return;

    UndistortKeyPoints();

    // Set no stereo information
    mvuRight = vector<float>(N,-1);
    mvDepth = vector<float>(N,-1);

    mvpMapPoints = vector<MapPoint*>(N,static_cast<MapPoint*>(NULL));
    mvbOutlier = vector<bool>(N,false);

    // This is done only for the first Frame (or after a change in the calibration)
    if(mbInitialComputations)
    {
        ComputeImageBounds(imGray);

        mfGridElementWidthInv=static_cast<float>(FRAME_GRID_COLS)/static_cast<float>(mnMaxX-mnMinX);
        mfGridElementHeightInv=static_cast<float>(FRAME_GRID_ROWS)/static_cast<float>(mnMaxY-mnMinY);

        fx = K.at<float>(0,0);
        fy = K.at<float>(1,1);
        cx = K.at<float>(0,2);
        cy = K.at<float>(1,2);
        invfx = 1.0f/fx;
        invfy = 1.0f/fy;

        mbInitialComputations=false;
    }

    mb = mbf/fx;

    AssignFeaturesToGrid();
}
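
The constructor ends with AssignFeaturesToGrid(), which buckets the undistorted keypoints into a FRAME_GRID_COLS x FRAME_GRID_ROWS grid, so that projection-based matchers can look up candidate features by image region instead of scanning all N keypoints. For reference, its body in the ORB-SLAM2 source is essentially:

void Frame::AssignFeaturesToGrid()
{
    // Reserve roughly half the average number of features per cell
    int nReserve = 0.5f*N/(FRAME_GRID_COLS*FRAME_GRID_ROWS);
    for(unsigned int i=0; i<FRAME_GRID_COLS;i++)
        for (unsigned int j=0; j<FRAME_GRID_ROWS;j++)
            mGrid[i][j].reserve(nReserve);

    // Drop each undistorted keypoint's index into its grid cell
    for(int i=0;i<N;i++)
    {
        const cv::KeyPoint &kp = mvKeysUn[i];

        int nGridPosX, nGridPosY;
        if(PosInGrid(kp,nGridPosX,nGridPosY))
            mGrid[nGridPosX][nGridPosY].push_back(i);
    }
}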

void Tracking::Track()

  1. Check the tracking state: if it is NOT_INITIALIZED, run MonocularInitialization() or StereoInitialization() for the monocular and non-monocular cases respectively, then update the map viewer.

  2. Once initialization has succeeded, tracking proceeds. ORB-SLAM offers two tracking modes (selected by mbOnlyTracking):

    (1) tracking only, without mapping

    (2) tracking and mapping simultaneously

After initialization, ORB-SLAM can choose among three tracking models:

  • a. TrackWithMotionModel(): motion model. Estimate the current pose from the constant-velocity motion model, track the last frame's map points under that model, then optimize the pose.

  • b. TrackReferenceKeyFrame(): keyframe model. Search BoW matches between the current frame and the reference keyframe, take the last frame's pose as the initial value for the current frame, then obtain the pose by optimizing the 3D-2D reprojection error.

  • c. Relocalization(): relocalization model. Compute the current frame's BoW, detect candidate keyframes satisfying the relocalization conditions, search BoW matches between the current frame and each candidate, run PnP pose estimation whenever there are at least 15 matches, then optimize.

How the three models are chosen (see the sketch after this paragraph):

First assume constant camera velocity (i.e. the same R, t increment as between the previous two frames) and count the resulting matches; if there are enough, tracking succeeded. If the matches are too few, the constant-velocity model has failed, so switch to the reference-keyframe model (feature matching plus PnP). If that cannot track either, the two frames share no correlation and relocalization is needed: match against the keyframes created so far (checking whether we have returned to a previously visited place) to determine the camera pose. If relocalization also fails, tracking is completely lost; either wait for the camera to come back, or reset.
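
Condensed into code, the cascade in normal SLAM mode (mbOnlyTracking == false) looks like the following sketch; it simply summarizes the Track() excerpt quoted below:

// Condensed sketch of the model-selection cascade in Track()
// (normal SLAM mode, mbOnlyTracking == false)
bool bOK;
if(mState==OK)
{
    CheckReplacedInLastFrame();
    if(mVelocity.empty() || mCurrentFrame.mnId<mnLastRelocFrameId+2)
        bOK = TrackReferenceKeyFrame();     // b. no usable velocity: keyframe model
    else
    {
        bOK = TrackWithMotionModel();       // a. constant-velocity model first
        if(!bOK)
            bOK = TrackReferenceKeyFrame(); // fall back to the keyframe model
    }
}
else
    bOK = Relocalization();                 // c. tracking lost: relocalize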

Initialization

See: https://blog.csdn.net/weixin_42905141/article/details/103566499

Tracking

(tracking-stage flowchart figure omitted)

Tracking step 1

  • Local mapping active (!mbOnlyTracking): if (mState==OK), CheckReplacedInLastFrame() first updates the MapPoints of the last frame that were replaced. Then, if the velocity is empty or the frame is right after a relocalization, match against the reference keyframe, bOK = TrackReferenceKeyFrame(); otherwise match with the constant-velocity model, bOK = TrackWithMotionModel(), falling back to the reference keyframe if that fails. In the special case where tracking has failed after initialization, relocalize: bOK = Relocalization();
  • Local mapping inactive, tracking only against the map points of the existing map:

    1. if (mState==LOST): relocalization, Relocalization(), is needed.
    2. else: when too few features were matched (fewer than 10), compute two camera poses, one from the motion model and one from relocalization; if relocalization fails, keep the VO result, otherwise trust the relocalization result.

    else//Step 2: tracking
    {
        // System is initialized. Track Frame.
        // bOK is a temporary flag indicating whether each call succeeded
        bool bOK;

        // Initial camera pose estimation using motion model or relocalization (if tracking is lost)
        // The viewer has a switch, menuLocalizationMode, which controls ActivateLocalizationMode
        // and ultimately governs mbOnlyTracking
        // mbOnlyTracking==false is the normal VO mode (with map updates);
        // mbOnlyTracking==true means the user manually selected localization-only mode
        if(!mbOnlyTracking)//mbOnlyTracking==false
        {
            // Local Mapping is activated. This is the normal behaviour, unless
            // you explicitly activate the "only tracking" mode.

            // initialization succeeded normally
            if(mState==OK)
            {
                // Local Mapping might have changed some MapPoints tracked in last frame
                // check and update the MapPoints of the last frame that were replaced,
                // i.e. those substituted by the Fuse and SearchAndFuse functions
                CheckReplacedInLastFrame();

                // Step 2.1: track the last frame or the reference keyframe, or relocalize
                // if the velocity is empty, or the current frame is within 2 frames of the
                // last relocalization, track the reference keyframe
                // mnLastRelocFrameId is the frame of the last relocalization
                if(mVelocity.empty() || mCurrentFrame.mnId<mnLastRelocFrameId+2)
                {
                    // use the last frame's pose as the initial pose of the current frame
                    // find matches for the current frame's features in the reference keyframe via BoW
                    // optimizing the reprojection error of each feature's 3D point yields the pose
                    bOK = TrackReferenceKeyFrame();//track the reference keyframe
                }
                else
                {
                    // set the current frame's initial pose from the constant-velocity model
                    // find matches for the current frame's features in the last frame by projection
                    // optimizing the reprojection error of each feature's 3D point yields the pose
                    // as long as mVelocity is non-empty, TrackWithMotionModel is preferred
                    bOK = TrackWithMotionModel();//predict the current pose with the constant-velocity model
                    if(!bOK)
                        // TrackReferenceKeyFrame tracks the reference keyframe when the constant-velocity
                        // prediction fails, using BoW-accelerated matching (SearchByBoW)
                        // followed by pose optimization
                        bOK = TrackReferenceKeyFrame();
                }
            }
            else//tracking was lost
            {
                // BoW search, PnP pose solving
                bOK = Relocalization();
            }
        }
        else//local mapping deactivated: tracking only
        {
            // Localization Mode: Local Mapping is deactivated
            // Step 2.1: track the last frame or the reference keyframe, or relocalize
            // tracking is lost
            if(mState==LOST)
            {
                bOK = Relocalization();//flag for relocalization success
            }
            else
            {
                // mbVO only exists when mbOnlyTracking is true
                // mbVO==false: this frame matched plenty of MapPoints, tracking is healthy
                // mbVO==true: this frame matched very few MapPoints (fewer than 10), tracking is about to fail
                if(!mbVO)//tracking normal: plenty of 3D map point matches
                {
                    // In last frame we tracked enough MapPoints in the map
                    if(!mVelocity.empty())//last frame has a velocity: use the motion model
                    {
                        bOK = TrackWithMotionModel();
                    }
                    else//no velocity: track the reference keyframe
                    {
                        bOK = TrackReferenceKeyFrame();
                    }
                }
                else//too few matched features (fewer than 10)
                {
                    // In last frame we tracked mainly "visual odometry" points.
                    // We compute two camera poses, one from motion model and one doing relocalization.
                    // If relocalization is successful we choose that solution, otherwise we retain the "visual odometry" solution.

                    // compute poses from both the motion model and relocalization;
                    // if relocalization fails, keep the VO result
                    // mbVO==true: few 3D map point matches (fewer than 10), so do both tracking and localization
                    bool bOKMM = false;//motion-model success flag
                    bool bOKReloc = false;//relocalization success flag
                    vector<MapPoint*> vpMPsMM;//saved map points
                    vector<bool> vbOutMM;//saved outlier flags
                    cv::Mat TcwMM;//saved pose
                    if(!mVelocity.empty())//velocity available
                    {
                        bOKMM = TrackWithMotionModel();//track with the motion model
                        vpMPsMM = mCurrentFrame.mvpMapPoints;//save the map points
                        vbOutMM = mCurrentFrame.mvbOutlier;//save the outlier flags
                        TcwMM = mCurrentFrame.mTcw.clone();//save the current frame's pose
                    }
                    bOKReloc = Relocalization();//relocalize
                    // relocalization failed, but motion-model tracking succeeded
                    if(bOKMM && !bOKReloc)
                    {
                        mCurrentFrame.SetPose(TcwMM);
                        mCurrentFrame.mvpMapPoints = vpMPsMM;
                        mCurrentFrame.mvbOutlier = vbOutMM;

                        if(mbVO)
                        {
                            // update the observation statistics of the current frame's MapPoints
                            for(int i =0; i<mCurrentFrame.N; i++)
                            {
                                if(mCurrentFrame.mvpMapPoints[i] && !mCurrentFrame.mvbOutlier[i])
                                {
                                    mCurrentFrame.mvpMapPoints[i]->IncreaseFound();
                                }
                            }
                        }
                    }
                    else if(bOKReloc)// if relocalization succeeded, tracking continues normally (relocalization is trusted more)
                    {
                        mbVO = false;
                    }

                    bOK = bOKReloc || bOKMM;
                }
            }
        }
        // use the latest keyframe as the reference frame
        mCurrentFrame.mpReferenceKF = mpReferenceKF;
void Tracking::CheckReplacedInLastFrame()

The Local Mapping thread may replace some MapPoints in keyframes. Since Tracking uses mLastFrame, this function checks the last frame and updates any MapPoints that were replaced.
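
The body is short; for reference, in the ORB-SLAM2 source it is:

void Tracking::CheckReplacedInLastFrame()
{
    for(int i =0; i<mLastFrame.N; i++)
    {
        MapPoint* pMP = mLastFrame.mvpMapPoints[i];

        if(pMP)
        {
            // If Local Mapping fused this point into another one,
            // GetReplaced() returns the surviving MapPoint; swap it in
            MapPoint* pRep = pMP->GetReplaced();
            if(pRep)
            {
                mLastFrame.mvpMapPoints[i] = pRep;
            }
        }
    }
}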

void Tracking::CreateInitialMapMonocular()
{
    // Create KeyFrames
    // create two keyframes: one from the initial frame, one from the current frame
    KeyFrame* pKFini = new KeyFrame(mInitialFrame,mpMap,mpKeyFrameDB);
    KeyFrame* pKFcur = new KeyFrame(mCurrentFrame,mpMap,mpKeyFrameDB);


    // Step 1: convert the initial keyframe's descriptors to BoW
    pKFini->ComputeBoW();
    // Step 2: convert the current keyframe's descriptors to BoW
    pKFcur->ComputeBoW();

    // Insert KFs in the map
    // Step 3: insert the keyframes into the map
    // every keyframe is inserted into the map
    mpMap->AddKeyFrame(pKFini);
    mpMap->AddKeyFrame(pKFcur);

    // Create MapPoints and associate to keyframes
    // Step 4: wrap the 3D points into MapPoints
    for(size_t i=0; i<mvIniMatches.size();i++)//iterate over all matches
    {
        if(mvIniMatches[i]<0)
            continue;

        //Create MapPoint.
        cv::Mat worldPos(mvIniP3D[i]);

        // Step 4.1: construct a MapPoint from the 3D point
        MapPoint* pMP = new MapPoint(worldPos,pKFcur,mpMap);

        // Step 4.2: add the MapPoint's attributes:
        // a. the keyframes observing this MapPoint
        // b. the MapPoint's descriptor
        // c. the MapPoint's mean viewing direction and depth range
        // Step 4.3: record which feature of each KeyFrame observes which 3D point
        pKFini->AddMapPoint(pMP,i);
        pKFcur->AddMapPoint(pMP,mvIniMatches[i]);

        // a. record which KeyFrame and which feature observe this MapPoint
        pMP->AddObservation(pKFini,i);
        pMP->AddObservation(pKFcur,mvIniMatches[i]);

        // b. among all the features observing this MapPoint, pick the most distinctive descriptor
        pMP->ComputeDistinctiveDescriptors();
        // c. update the MapPoint's mean viewing direction and observation-distance range
        pMP->UpdateNormalAndDepth();

        //Fill Current Frame structure
        mCurrentFrame.mvpMapPoints[mvIniMatches[i]] = pMP;
        mCurrentFrame.mvbOutlier[mvIniMatches[i]] = false;

        //Add to Map
        // Step 4.4: add the MapPoint to the map
        mpMap->AddMapPoint(pMP);
    }

    // Update Connections
    // Step 5: update the connections between keyframes
    // edges are built between 3D points and keyframes; each edge's weight is the number
    // of 3D points the keyframe shares with the current frame
    pKFini->UpdateConnections();
    pKFcur->UpdateConnections();

    // Bundle Adjustment
    cout << "New Map created with " << mpMap->MapPointsInMap() << " points" << endl;
    // Step 6: BA optimization
    Optimizer::GlobalBundleAdjustemnt(mpMap,20);

    // Set median depth to 1
    // Step 7: normalize the median depth of the MapPoints to 1, and normalize the transform between the two frames accordingly
    // estimate the keyframe's scene depth; q=2 means the median
    float medianDepth = pKFini->ComputeSceneMedianDepth(2);
    float invMedianDepth = 1.0f/medianDepth;

    if(medianDepth<0 || pKFcur->TrackedMapPoints(1)<100)
    {
        cout << "Wrong initialization, reseting..." << endl;
        Reset();
        return;
    }

    // Scale initial baseline
    cv::Mat Tc2w = pKFcur->GetPose();
    // scale the translation so that the median depth becomes 1
    Tc2w.col(3).rowRange(0,3) = Tc2w.col(3).rowRange(0,3)*invMedianDepth;
    pKFcur->SetPose(Tc2w);

    // Scale points
    // scale the 3D points to the same normalization
    vector<MapPoint*> vpAllMapPoints = pKFini->GetMapPointMatches();
    for(size_t iMP=0; iMP<vpAllMapPoints.size(); iMP++)
    {
        if(vpAllMapPoints[iMP])
        {
            MapPoint* pMP = vpAllMapPoints[iMP];
            pMP->SetWorldPos(pMP->GetWorldPos()*invMedianDepth);
        }
    }

    // this part is similar to StereoInitialization()
    mpLocalMapper->InsertKeyFrame(pKFini);
    mpLocalMapper->InsertKeyFrame(pKFcur);

    mCurrentFrame.SetPose(pKFcur->GetPose());
    mnLastKeyFrameId=mCurrentFrame.mnId;
    mpLastKeyFrame = pKFcur;

    mvpLocalKeyFrames.push_back(pKFcur);
    mvpLocalKeyFrames.push_back(pKFini);
    mvpLocalMapPoints=mpMap->GetAllMapPoints();
    mpReferenceKF = pKFcur;
    mCurrentFrame.mpReferenceKF = pKFcur;

    mLastFrame = Frame(mCurrentFrame);

    mpMap->SetReferenceMapPoints(mvpLocalMapPoints);

    mpMapDrawer->SetCurrentCameraPose(pKFcur->GetPose());

    mpMap->mvpKeyFrameOrigins.push_back(pKFini);

    mState=OK;// initialization succeeded; the initialization process is complete
}
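
In symbols: with d_med the median scene depth of the initial keyframe, step 7 rescales

    t_c2w <- t_c2w / d_med,    X_i <- X_i / d_med  (for every map point i)

so that the median depth of the initial map becomes the unit length. Monocular SLAM cannot observe absolute scale, so some such gauge-fixing convention is required.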
bool Tracking::TrackReferenceKeyFrame()

In the reference keyframe, find BoW-similar matches to the current frame, then optimize the pose from those matches.

  • 1. Tracking against a keyframe works much like recovering the camera motion with the motion model. First compute the current frame's BoW vector.
  • 2. Then search for keypoint matches between the current frame and the keyframe; with fewer than 15 matches, tracking fails.
  • 3. Next, initialize the current frame's pose to the last frame's pose,
  • 4. and optimize the camera pose by least squares.
  • 5. Finally discard the useless outliers; return true (success) when at least 10 map-point matches remain.
bool Tracking::TrackReferenceKeyFrame()
{
    // Compute Bag of Words vector
    //mCurrentFrame.ComputeBoW() converts the current frame's descriptors into a BoW vector to speed up matching
    mCurrentFrame.ComputeBoW();

    // We perform first an ORB matching with the reference keyframe
    // If enough matches are found we setup a PnP solver
    ORBmatcher matcher(0.7,true);
    vector<MapPoint*> vpMapPointMatches;
    //matcher.SearchByBoW() uses the features' BoW to speed up matching between the current frame and the reference keyframe
    int nmatches = matcher.SearchByBoW(mpReferenceKF,mCurrentFrame,vpMapPointMatches);

    if(nmatches<15)
        return false;

    mCurrentFrame.mvpMapPoints = vpMapPointMatches;
    //mCurrentFrame.SetPose(mLastFrame.mTcw) uses the last frame's pose as the initial value for the current frame's pose
    mCurrentFrame.SetPose(mLastFrame.mTcw);
    
    //PoseOptimization obtains the pose by optimizing the 3D-2D reprojection error
    Optimizer::PoseOptimization(&mCurrentFrame);

    // Discard outliers
    //remove the outlier matches (MapPoints) after the optimization
    int nmatchesMap = 0;
    for(int i =0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            if(mCurrentFrame.mvbOutlier[i])
            {
                MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];

                mCurrentFrame.mvpMapPoints[i]=static_cast<MapPoint*>(NULL);
                mCurrentFrame.mvbOutlier[i]=false;
                pMP->mbTrackInView = false;
                pMP->mnLastFrameSeen = mCurrentFrame.mnId;
                nmatches--;
            }
            else if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)
                nmatchesMap++;
        }
    }

    return nmatchesMap>=10;
}
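
PoseOptimization appears in every tracking path (TrackReferenceKeyFrame, TrackWithMotionModel, Relocalization, TrackLocalMap) but its body is not shown in this post. The following is a heavily condensed sketch of what it does, assuming the g2o types bundled with ORB-SLAM2; the real Optimizer::PoseOptimization additionally uses robust Huber kernels and four optimize/re-classify rounds that mark outliers, all omitted here:

// Condensed sketch of pose-only bundle adjustment in the style of
// Optimizer::PoseOptimization (robust kernels and outlier rounds omitted)
#include "Converter.h"
#include "Frame.h"
#include "MapPoint.h"
#include "Thirdparty/g2o/g2o/core/block_solver.h"
#include "Thirdparty/g2o/g2o/core/optimization_algorithm_levenberg.h"
#include "Thirdparty/g2o/g2o/solvers/linear_solver_dense.h"
#include "Thirdparty/g2o/g2o/types/types_six_dof_expmap.h"

void PoseOptimizationSketch(ORB_SLAM2::Frame *pFrame)
{
    g2o::SparseOptimizer optimizer;
    g2o::BlockSolver_6_3::LinearSolverType *linearSolver =
        new g2o::LinearSolverDense<g2o::BlockSolver_6_3::PoseMatrixType>();
    g2o::BlockSolver_6_3 *solver_ptr = new g2o::BlockSolver_6_3(linearSolver);
    optimizer.setAlgorithm(new g2o::OptimizationAlgorithmLevenberg(solver_ptr));

    // Single vertex: the camera pose, initialized from the current estimate
    g2o::VertexSE3Expmap *vSE3 = new g2o::VertexSE3Expmap();
    vSE3->setEstimate(ORB_SLAM2::Converter::toSE3Quat(pFrame->mTcw));
    vSE3->setId(0);
    optimizer.addVertex(vSE3);

    // One unary edge per matched MapPoint: 2D reprojection error with the 3D point held fixed
    for(int i=0; i<pFrame->N; i++)
    {
        ORB_SLAM2::MapPoint *pMP = pFrame->mvpMapPoints[i];
        if(!pMP)
            continue;

        const cv::KeyPoint &kpUn = pFrame->mvKeysUn[i];
        g2o::EdgeSE3ProjectXYZOnlyPose *e = new g2o::EdgeSE3ProjectXYZOnlyPose();
        e->setVertex(0, dynamic_cast<g2o::OptimizableGraph::Vertex*>(optimizer.vertex(0)));
        e->setMeasurement(Eigen::Vector2d(kpUn.pt.x, kpUn.pt.y));
        // weight by the inverse variance of the keypoint's pyramid level
        e->setInformation(Eigen::Matrix2d::Identity()*pFrame->mvInvLevelSigma2[kpUn.octave]);
        e->fx = pFrame->fx; e->fy = pFrame->fy;
        e->cx = pFrame->cx; e->cy = pFrame->cy;
        cv::Mat Xw = pMP->GetWorldPos();
        e->Xw[0] = Xw.at<float>(0); e->Xw[1] = Xw.at<float>(1); e->Xw[2] = Xw.at<float>(2);
        optimizer.addEdge(e);
    }

    optimizer.initializeOptimization();
    optimizer.optimize(10);

    // write the refined pose back into the frame
    pFrame->SetPose(ORB_SLAM2::Converter::toCvMat(vSE3->estimate()));
}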
bool Tracking::TrackWithMotionModel()

Assume uniform motion: use the last frame's pose and velocity to estimate the current frame's pose. Method: project the last frame's map points into the current frame to establish matches.

  • 1. First predict the current camera pose from the last frame's pose and velocity.

  • 2. Estimate the camera pose PnP-style: project the last frame's map points into a fixed-size window in the current frame; if there are too few matches, double the search window.

  • 3. Then run one round of BA, optimizing the camera pose by least squares.

  • 4. After the pose optimization, discard the useless outliers among the current frame's keypoints and map points; the remaining points feed the next iteration.

bool Tracking::TrackWithMotionModel()
{
    ORBmatcher matcher(0.9,true);
    // Update last frame pose according to its reference keyframe
    // Create "visual odometry" points
    // Step 1: for stereo or RGB-D cameras, generate new MapPoints for the last frame from depth values
    // (tracking matches the current frame against the last frame; projecting the last frame's
    // MapPoints into the current frame narrows the matching range)
    // during tracking, outlier MapPoints are removed; without timely additions the MapPoints would dwindle
    // this function therefore replenishes the last frame's MapPoints for RGB-D and stereo cameras
    UpdateLastFrame();
    // estimate the current pose with the constant-velocity model (the relative motion between
    // these two frames is assumed equal to the previous relative motion)
    mCurrentFrame.SetPose(mVelocity*mLastFrame.mTcw);//current pose = mVelocity*mLastFrame.mTcw
 
    fill(mCurrentFrame.mvpMapPoints.begin(),mCurrentFrame.mvpMapPoints.end(),static_cast<MapPoint*>(NULL));
    // Project points seen in previous frame
    int th;
    if(mSensor!=System::STEREO)//search-window factor 15 for non-stereo
        th=15;
    else
        th=7;
 
    // Step 2: track the last frame's MapPoints under the constant-velocity model
    // projecting the 3D points of the last frame's features narrows the feature-matching range
    int nmatches = matcher.SearchByProjection(mCurrentFrame,mLastFrame,th,mSensor==System::MONOCULAR);
    // If few matches, uses a wider window search
    // if few points were tracked, widen the search radius and try again
    if(nmatches<20)
    {
        fill(mCurrentFrame.mvpMapPoints.begin(),mCurrentFrame.mvpMapPoints.end(),static_cast<MapPoint*>(NULL));
        nmatches = matcher.SearchByProjection(mCurrentFrame,mLastFrame,2*th,mSensor==System::MONOCULAR); // 2*th
    }
 
    if(nmatches<20)//still fewer than 20 matches: fail
        return false;
    // Optimize frame pose with all matches
    // Step 3: optimize the pose
    Optimizer::PoseOptimization(&mCurrentFrame);
    // Discard outliers
    // Step 4: after the pose optimization, remove the outlier mvpMapPoints
    int nmatchesMap = 0;
    for(int i =0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            if(mCurrentFrame.mvbOutlier[i])
            {
                MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];
                mCurrentFrame.mvpMapPoints[i]=static_cast<MapPoint*>(NULL);
                mCurrentFrame.mvbOutlier[i]=false;
                pMP->mbTrackInView = false;
                pMP->mnLastFrameSeen = mCurrentFrame.mnId;
                nmatches--;
            }
            else if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)
                nmatchesMap++;
        }
    }    
 
    if(mbOnlyTracking)//tracking-only mode
    {
        mbVO = nmatchesMap<10;
        return nmatches>20;
    }
    return nmatchesMap>=10;
}
void Tracking::UpdateLastFrame()
  • 1. Update the pose of the last frame.
  • 2. For stereo or RGB-D cameras, generate temporary new MapPoints for the last frame. Note that these MapPoints are not added to the Map and are deleted at the end of tracking; projecting the last frame's MapPoints into the current frame narrows the matching range and speeds up feature matching between the two frames.
void Tracking::UpdateLastFrame()
{
    // Update pose according to reference keyframe
    // Step 1: update the pose of the last frame
    KeyFrame* pRef = mLastFrame.mpReferenceKF;
    cv::Mat Tlr = mlRelativeFramePoses.back();
 
    mLastFrame.SetPose(Tlr*pRef->GetPose()); // Tlr*Trw = Tlw  l:last r:reference w:world
 
    // if the last frame was a keyframe, or the sensor is monocular, return
    if(mnLastKeyFrameId==mLastFrame.mnId || mSensor==System::MONOCULAR)
 
        return;
 
    // Step 2: for stereo or RGB-D cameras, generate temporary new MapPoints for the last frame
    // note that these MapPoints are not added to the Map; they are deleted at the end of tracking
    // projecting the last frame's MapPoints into the current frame narrows the matching
    // range and speeds up feature matching between the two frames
    // Create "visual odometry" MapPoints
    // We sort points according to their measured depth by the stereo/RGB-D sensor
    // Step 2.1: collect the last frame's features that have depth values
 
    vector<pair<float,int> > vDepthIdx;
    vDepthIdx.reserve(mLastFrame.N);
    for(int i=0; i<mLastFrame.N;i++)
    {
        float z = mLastFrame.mvDepth[i];
        if(z>0)//positive depth
        {
            vDepthIdx.push_back(make_pair(z,i));
        }
 
    }
 
    if(vDepthIdx.empty())//no depth values: return
 
        return;
    // Step 2.2: sort by depth, smallest first
    sort(vDepthIdx.begin(),vDepthIdx.end());
 
    // We insert all close points (depth<mThDepth)
    // If less than 100 close points, we insert the 100 closest ones.
    // Step 2.3: wrap the closer points into MapPoints
 
    int nPoints = 0;
    for(size_t j=0; j<vDepthIdx.size();j++)
    {
        int i = vDepthIdx[j].second;
        bool bCreateNew = false;
        MapPoint* pMP = mLastFrame.mvpMapPoints[i];
        if(!pMP)
            bCreateNew = true;
        else if(pMP->Observations()<1)
        {
            bCreateNew = true;
        }
 
        if(bCreateNew)
        {
 
            // these MapPoints are created without the usual attribute setup:
            // a. AddMapPoint,
            // b. AddObservation,
            // c. ComputeDistinctiveDescriptors,
            // d. UpdateNormalAndDepth;
            // they exist solely to improve the tracking success rate of stereo and RGB-D
            cv::Mat x3D = mLastFrame.UnprojectStereo(i);
            MapPoint* pNewMP = new MapPoint(x3D,mpMap,&mLastFrame,i);
            mLastFrame.mvpMapPoints[i]=pNewMP; // add the new MapPoint
 
            // mark it as a temporary MapPoint; all of them are deleted before CreateNewKeyFrame
            mlpTemporalPoints.push_back(pNewMP);
            nPoints++;
        }
        else
        {
            nPoints++;
        }
        if(vDepthIdx[j].first>mThDepth && nPoints>100)
            break;
    }
}
bool Tracking::Relocalization()

Relocalization: among the existing keyframes, find candidates that share enough matches with the current frame, then solve the pose with PnP inside RANSAC iterations.

  1. Compute the current frame's BoW and query the keyframe database for candidate matching keyframes.
  2. Build a PnP solver for each candidate, flag the outliers, and prepare each keyframe's match set against the current frame.
  3. Solve the pose with PnP over several P4P RANSAC iterations, refining with nonlinear least squares, until a camera pose supported by enough inliers is found.
  4. Return success or failure.
bool Tracking::Relocalization()
{
    // Compute Bag of Words Vector
    // Step 1: compute the BoW mapping of the current frame's features
    mCurrentFrame.ComputeBoW();
    // Relocalization is performed when tracking is lost
    // Track Lost: Query KeyFrame Database for keyframe candidates for relocalisation
    // Step 2: find candidate keyframes similar to the current frame
    vector<KeyFrame*> vpCandidateKFs = mpKeyFrameDB->DetectRelocalizationCandidates(&mCurrentFrame);
    if(vpCandidateKFs.empty())//no candidates found: return
        return false;
    const int nKFs = vpCandidateKFs.size();//number of candidate keyframes
    // We perform first an ORB matching with each candidate
    // If enough matches are found we setup a PnP solver
    ORBmatcher matcher(0.75,true);
    vector<PnPsolver*> vpPnPsolvers;
    vpPnPsolvers.resize(nKFs);
    vector<vector<MapPoint*> > vvpMapPointMatches;
    vvpMapPointMatches.resize(nKFs);
    vector<bool> vbDiscarded;
    vbDiscarded.resize(nKFs);
    int nCandidates=0;
    for(int i=0; i<nKFs; i++)
    {
        KeyFrame* pKF = vpCandidateKFs[i];
        if(pKF->isBad())
            vbDiscarded[i] = true;//discard bad candidate keyframes
        else
        {
            // Step 3: match via BoW
            int nmatches = matcher.SearchByBoW(pKF,mCurrentFrame,vvpMapPointMatches[i]);
            if(nmatches<15)//discard candidates with fewer than 15 matches
            {
                vbDiscarded[i] = true;
                continue;
            }
            else//solve with PnP
            {
                // initialize the PnPsolver
                PnPsolver* pSolver = new PnPsolver(mCurrentFrame,vvpMapPointMatches[i]);
                pSolver->SetRansacParameters(0.99,10,300,4,0.5,5.991);
                vpPnPsolvers[i] = pSolver;
                nCandidates++;
            }
        }
    }
    // Alternatively perform some iterations of P4P RANSAC
    // until we find a camera pose supported by enough inliers
    bool bMatch = false;
    ORBmatcher matcher2(0.9,true);
    while(nCandidates>0 && !bMatch)
    {
        for(int i=0; i<nKFs; i++)
        {
            if(vbDiscarded[i])
                continue;
 
            // Perform 5 Ransac Iterations
            vector<bool> vbInliers;
            int nInliers;
            bool bNoMore;
 
            // Step 4: estimate the pose with the EPnP algorithm
            PnPsolver* pSolver = vpPnPsolvers[i];
            cv::Mat Tcw = pSolver->iterate(5,bNoMore,vbInliers,nInliers);
 
            // If Ransac reachs max. iterations discard keyframe
            if(bNoMore)
            {
                vbDiscarded[i]=true;
                nCandidates--;
            }
 
 
 
            // If a Camera Pose is computed, optimize
            if(!Tcw.empty())
            {
                Tcw.copyTo(mCurrentFrame.mTcw);
                set<MapPoint*> sFound;
                const int np = vbInliers.size();//size of the inlier mask
 
                for(int j=0; j<np; j++)
                {
                    if(vbInliers[j])
                    {
                        mCurrentFrame.mvpMapPoints[j]=vvpMapPointMatches[i][j];
                        sFound.insert(vvpMapPointMatches[i][j]);
                    }
                    else
                        mCurrentFrame.mvpMapPoints[j]=NULL;
                }
 
                // Step 5: optimize the pose with PoseOptimization
 
                int nGood = Optimizer::PoseOptimization(&mCurrentFrame);
                if(nGood<10)
                    continue;
                for(int io =0; io<mCurrentFrame.N; io++)
                    if(mCurrentFrame.mvbOutlier[io])
                        mCurrentFrame.mvpMapPoints[io]=static_cast<MapPoint*>(NULL);
 
                // If few inliers, search by projection in a coarse window and optimize again
                // Step 6: if inliers are few, match previously unmatched points by projection, then optimize again
                if(nGood<50)
                {
                   int nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,10,100);
                    if(nadditional+nGood>=50)
                    {
                        nGood = Optimizer::PoseOptimization(&mCurrentFrame);//optimize again
 
                        // If many inliers but still not enough, search by projection again in a narrower window
                        // (the camera pose has already been optimized with many points)
                        if(nGood>30 && nGood<50)
                        {
                            sFound.clear();
                            for(int ip =0; ip<mCurrentFrame.N; ip++)
                                if(mCurrentFrame.mvpMapPoints[ip])
                                    sFound.insert(mCurrentFrame.mvpMapPoints[ip]);
                            nadditional =matcher2.SearchByProjection(mCurrentFrame,vpCandidateKFs[i],sFound,3,64);
                            // Final optimization
                            if(nGood+nadditional>=50)
                            {
                                nGood = Optimizer::PoseOptimization(&mCurrentFrame);
                                for(int io =0; io<mCurrentFrame.N; io++)
                                    if(mCurrentFrame.mvbOutlier[io])
                                       mCurrentFrame.mvpMapPoints[io]=NULL;
                            }
                        }
                    }
                }
 
                // If the pose is supported by enough inliers stop ransacs and continue
                if(nGood>=50)
                {
                    bMatch = true;
                    break;
                }
            }
        }
    }
 
    if(!bMatch)
    {
        return false;
    }
    else
    {
        mnLastRelocFrameId = mCurrentFrame.mnId;
        return true;
    }
}

Tracking step 2

  • 1. If local mapping is active (!mbOnlyTracking), track the local map: bOK = TrackLocalMap(). Otherwise the local map is tracked only when relocalization succeeded and enough features matched (bOK && !mbVO): bOK = TrackLocalMap().
        // If we have an initial estimation of the camera pose and matching. Track the local map.
        if(!mbOnlyTracking)//not tracking-only: keyframes are inserted too, local mapping is working
        {
            if(bOK)
                bOK = TrackLocalMap();//track the local map
        }
        else
        {
            // mbVO true means that there are few matches to MapPoints in the map. We cannot retrieve
            // a local map and therefore we do not perform TrackLocalMap(). Once the system relocalizes
            // the camera we will use the local map again.
            if(bOK && !mbVO) // local mapping inactive: enough features matched and relocalization succeeded
                bOK = TrackLocalMap();
        }
  • 2. Update the velocity mVelocity of the motion model: mVelocity = mCurrentFrame.mTcw*LastTwc, i.e. the relative transform from the last frame to the current frame.
        if(bOK)
            mState = OK;
        else
            mState=LOST;

        // Update drawer
        mpFrameDrawer->Update(this);

        // If tracking were good, check if we insert a keyframe
        if(bOK)
        {
            // Update motion model
            if(!mLastFrame.mTcw.empty())
            {
                // Step 2.3: update mVelocity used by the constant-velocity model in TrackWithMotionModel
                cv::Mat LastTwc = cv::Mat::eye(4,4,CV_32F);
                mLastFrame.GetRotationInverse().copyTo(LastTwc.rowRange(0,3).colRange(0,3));
                mLastFrame.GetCameraCenter().copyTo(LastTwc.rowRange(0,3).col(3));
                mVelocity = mCurrentFrame.mTcw*LastTwc;
            }
            else
                mVelocity = cv::Mat();

            mpMapDrawer->SetCurrentCameraPose(mCurrentFrame.mTcw);
  • 3. Clear the temporary MapPoints, which were created in TrackWithMotionModel's UpdateLastFrame function (stereo and RGB-D only).
            // Clean VO matches
            //Step 2.4: clear the MapPoints temporarily added for the current frame in UpdateLastFrame
            for(int i=0; i<mCurrentFrame.N; i++)
            {
                MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];
                if(pMP)
                    //exclude the MapPoints that UpdateLastFrame added just for tracking
                    if(pMP->Observations()<1)
                    {
                        mCurrentFrame.mvbOutlier[i] = false;
                        mCurrentFrame.mvpMapPoints[i]=static_cast<MapPoint*>(NULL);
                    }
            }

            // Delete temporal MapPoints
            // Step 2.5: delete the temporary MapPoints created in TrackWithMotionModel's UpdateLastFrame (stereo and RGB-D only)
            // step 2.4 only removed them from the current frame; here they are deleted from the MapPoint storage
            // they were created solely to improve frame-to-frame tracking for stereo/RGB-D, and are discarded without ever entering the map
            for(list<MapPoint*>::iterator lit = mlpTemporalPoints.begin(), lend =  mlpTemporalPoints.end(); lit!=lend; lit++)
            {
                MapPoint* pMP = *lit;
                delete pMP;
            }
            // this does more than clear mlpTemporalPoints: delete pMP also destroys the MapPoints themselves
            mlpTemporalPoints.clear();
  • 4. Check whether a new keyframe is needed (NeedNewKeyFrame()) and create it (CreateNewKeyFrame()).
            // Step 2.6: check for and insert a keyframe; for stereo this creates new MapPoints
            if(NeedNewKeyFrame())
                CreateNewKeyFrame();
  • 5. Save the latest frame's data: mLastFrame = Frame(mCurrentFrame);
        if(!mCurrentFrame.mpReferenceKF)
            mCurrentFrame.mpReferenceKF = mpReferenceKF;

        // save the current frame as mLastFrame
        mLastFrame = Frame(mCurrentFrame);
    }

  • 6. Record the pose information, to recover the full trajectory afterwards.
    // Store frame pose information to retrieve the complete camera trajectory afterwards.
    if(!mCurrentFrame.mTcw.empty())
    {
        cv::Mat Tcr = mCurrentFrame.mTcw*mCurrentFrame.mpReferenceKF->GetPoseInverse();
        mlRelativeFramePoses.push_back(Tcr);
        mlpReferences.push_back(mpReferenceKF);
        mlFrameTimes.push_back(mCurrentFrame.mTimeStamp);
        mlbLost.push_back(mState==LOST);
    }
    else
    {
        // This can happen if tracking is lost
        mlRelativeFramePoses.push_back(mlRelativeFramePoses.back());
        mlpReferences.push_back(mlpReferences.back());
        mlFrameTimes.push_back(mlFrameTimes.back());
        mlbLost.push_back(mState==LOST);
    }
bool Tracking::TrackLocalMap()

Projection: find more correspondences among the map points already created.

  • 1. Update the Covisibility Graph and the local keyframes.
  • 2. From the local keyframes, update the local map points, then run the filter function isInFrustum, which:
  • 3. projects the map points into the current frame, discarding those outside the image;
  • 4. discards points whose mean viewing direction n satisfies n·v < cos(60°) for the current viewing direction v;
  • 5. discards map points whose distance to the camera center lies outside the allowed range;
  • 6. computes the image scale factor, ending isInFrustum.
  • 7. Run nonlinear least-squares optimization.
  • 8. Update the map points' statistics.
bool Tracking::TrackLocalMap()
{
    // We have an estimation of the camera pose and some map points tracked in the frame.
    // We retrieve the local map and try to find matches to points in the local map.
    // Update Local KeyFrames and Local Points
    // Step 1: update the local keyframes (mvpLocalKeyFrames) and local map points (mvpLocalMapPoints)
 
    UpdateLocalMap();
    // Step 2: search the local map for MapPoints that match the current frame
    SearchLocalPoints();
    // Optimize Pose
    // pose optimization already ran before this point, in Relocalization, TrackReferenceKeyFrame and TrackWithMotionModel
    // Step 3: optimize the pose once more after updating all the local MapPoints
    Optimizer::PoseOptimization(&mCurrentFrame);
    mnMatchesInliers = 0;
 
 
 
    // Update MapPoints Statistics
    // Step 4: update the observation statistics of the current frame's MapPoints and measure how well local-map tracking worked
    for(int i=0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            // the current frame observes this MapPoint, so increase its found count
            if(!mCurrentFrame.mvbOutlier[i])
            {
                mCurrentFrame.mvpMapPoints[i]->IncreaseFound();
                if(!mbOnlyTracking)
                {
                    // this MapPoint has been observed by other keyframes
                    if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)
                        mnMatchesInliers++;
                }
                else
                    // count the MapPoints tracked by the current frame, to measure tracking quality
                    mnMatchesInliers++;
            }
            else if(mSensor==System::STEREO)
                mCurrentFrame.mvpMapPoints[i] = static_cast<MapPoint*>(NULL);
 
        }
    }
 
    // Decide if the tracking was successful
    // More restrictive if there was a relocalization recently
    // Step 5: decide whether tracking succeeded
    if(mCurrentFrame.mnId<mnLastRelocFrameId+mMaxFrames && mnMatchesInliers<50)
        return false;
 
    if(mnMatchesInliers<30)
        return false;
    else
        return true;
 
}
void Tracking::UpdateLocalMap()
void Tracking::UpdateLocalMap()
{
    // This is for visualization
    // this line would arguably be better placed after the UpdateLocalPoints call
    mpMap->SetReferenceMapPoints(mvpLocalMapPoints);
    // Update
    // update the local keyframes and the local MapPoints
    UpdateLocalKeyFrames();
    UpdateLocalPoints();
}
void Tracking::UpdateLocalKeyFrames()

Iterates over the current frame's MapPoints, collects the keyframes that observe these MapPoints together with their neighboring keyframes, and updates mvpLocalKeyFrames.

/**
 * @brief Update the local keyframes; called by UpdateLocalMap()
 *
 * Iterates over the current frame's MapPoints, collects the keyframes that observe
 * them together with their neighboring keyframes, and updates mvpLocalKeyFrames
 */
void Tracking::UpdateLocalKeyFrames()
{
    // Each map point vote for the keyframes in which it has been observed
    // Step 1: iterate over the current frame's MapPoints and record every keyframe that observes them
    map<KeyFrame*,int> keyframeCounter;
    for(int i=0; i<mCurrentFrame.N; i++)
    {
        if(mCurrentFrame.mvpMapPoints[i])
        {
            MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];
            if(!pMP->isBad())
            {
                // the keyframes that observe the current frame's MapPoints
                const map<KeyFrame*,size_t> observations = pMP->GetObservations();
                for(map<KeyFrame*,size_t>::const_iterator it=observations.begin(), itend=observations.end(); it!=itend; it++)
                    keyframeCounter[it->first]++;
            }
            else
            {
                mCurrentFrame.mvpMapPoints[i]=NULL;
            }
        }
    }
    if(keyframeCounter.empty())
        return;
    int max=0;
    KeyFrame* pKFmax= static_cast<KeyFrame*>(NULL);
    // Step 2: update the local keyframes (mvpLocalKeyFrames); there are three policies for adding them
    // first clear the local keyframes
    mvpLocalKeyFrames.clear();
    mvpLocalKeyFrames.reserve(3*keyframeCounter.size());
    // All keyframes that observe a map point are included in the local map. Also check which keyframe shares most points
    // V-D K1: shares the map points with current frame
    // Policy 1: keyframes that observe the current frame's MapPoints become local keyframes
    for(map<KeyFrame*,int>::const_iterator it=keyframeCounter.begin(), itEnd=keyframeCounter.end(); it!=itEnd; it++)
    {
        KeyFrame* pKF = it->first;
        if(pKF->isBad())
            continue;
        if(it->second>max)
        {
            max=it->second;
            pKFmax=pKF;
        }
        mvpLocalKeyFrames.push_back(it->first);
        // mnTrackReferenceForFrame prevents adding a local keyframe twice
        pKF->mnTrackReferenceForFrame = mCurrentFrame.mnId;
    }
    // Include also some not-already-included keyframes that are neighbors to already-included keyframes
    // V-D K2: neighbors to K1 in the covisibility graph
    // Policy 2: keyframes highly covisible with those from policy 1 become local keyframes
    for(vector<KeyFrame*>::const_iterator itKF=mvpLocalKeyFrames.begin(), itEndKF=mvpLocalKeyFrames.end(); itKF!=itEndKF; itKF++)
    {
        // Limit the number of keyframes
        if(mvpLocalKeyFrames.size()>80)
            break;
        KeyFrame* pKF = *itKF;
        // Policy 2.1: the 10 best covisible keyframes
        const vector<KeyFrame*> vNeighs = pKF->GetBestCovisibilityKeyFrames(10);
        for(vector<KeyFrame*>::const_iterator itNeighKF=vNeighs.begin(), itEndNeighKF=vNeighs.end(); itNeighKF!=itEndNeighKF; itNeighKF++)
        {
            KeyFrame* pNeighKF = *itNeighKF;
            if(!pNeighKF->isBad())
            {
                // mnTrackReferenceForFrame prevents adding a local keyframe twice
                if(pNeighKF->mnTrackReferenceForFrame!=mCurrentFrame.mnId)
                {
                    mvpLocalKeyFrames.push_back(pNeighKF);
                    pNeighKF->mnTrackReferenceForFrame=mCurrentFrame.mnId;
                    break;
                }
            }
        }
        // Policy 2.2: the keyframe's child keyframes
        const set<KeyFrame*> spChilds = pKF->GetChilds();
        for(set<KeyFrame*>::const_iterator sit=spChilds.begin(), send=spChilds.end(); sit!=send; sit++)
        {
            KeyFrame* pChildKF = *sit;
            if(!pChildKF->isBad())
            {
                if(pChildKF->mnTrackReferenceForFrame!=mCurrentFrame.mnId)
                {
                    mvpLocalKeyFrames.push_back(pChildKF);
                    pChildKF->mnTrackReferenceForFrame=mCurrentFrame.mnId;
                    break;
                }
            }
        }
        // Policy 2.3: the keyframe's parent keyframe
        KeyFrame* pParent = pKF->GetParent();
        if(pParent)
        {
           // mnTrackReferenceForFrame prevents adding a local keyframe twice
           if(pParent->mnTrackReferenceForFrame!=mCurrentFrame.mnId)
           {
                mvpLocalKeyFrames.push_back(pParent);
                pParent->mnTrackReferenceForFrame=mCurrentFrame.mnId;
                break;
            }
        }
    }
    // V-D Kref: shares the most map points with current frame
    // Step 3: update the current frame's reference keyframe: the keyframe sharing the most observations becomes the reference
    if(pKFmax)
    {
        mpReferenceKF = pKFmax;
       mCurrentFrame.mpReferenceKF = mpReferenceKF;
    }
}
void Tracking::UpdateLocalPoints()

Collects the MapPoints of the local keyframes mvpLocalKeyFrames and updates mvpLocalMapPoints.

/**
 * @brief Update the local map points; called by UpdateLocalMap()
 * 
 * Collects the MapPoints of the local keyframes mvpLocalKeyFrames and updates mvpLocalMapPoints
 */
void Tracking::UpdateLocalPoints()
{
    // Step 1: clear the local MapPoints
    mvpLocalMapPoints.clear();
    // Step 2: iterate over the local keyframes mvpLocalKeyFrames
    for(vector<KeyFrame*>::const_iterator itKF=mvpLocalKeyFrames.begin(), itEndKF=mvpLocalKeyFrames.end(); itKF!=itEndKF; itKF++)
    {
        KeyFrame* pKF = *itKF;
        const vector<MapPoint*> vpMPs = pKF->GetMapPointMatches();
        // Step 2.1: add the local keyframes' MapPoints to mvpLocalMapPoints
        for(vector<MapPoint*>::const_iterator itMP=vpMPs.begin(), itEndMP=vpMPs.end(); itMP!=itEndMP; itMP++)
        {
           MapPoint* pMP = *itMP;
            if(!pMP)
                continue;
            // mnTrackReferenceForFrame prevents adding a local MapPoint twice
            if(pMP->mnTrackReferenceForFrame==mCurrentFrame.mnId)
                continue;
            if(!pMP->isBad())
            {
                mvpLocalMapPoints.push_back(pMP);
                pMP->mnTrackReferenceForFrame=mCurrentFrame.mnId;
            }
        }
    }
}

void Tracking::SearchLocalPoints()

Search the local map for points that fall within the current frame's field of view, then match those points against the current frame's features by projection.

1. Iterate over the current frame's mvpMapPoints and mark them so they are excluded from the subsequent search.
2. Project all local MapPoints into the current frame, keep those within the field of view, and match them by projection.

void Tracking::SearchLocalPoints()
{
    // Do not search map points already matched
    // Step 1: iterate over the current frame's mvpMapPoints and mark them so they skip the later search
    // the current mvpMapPoints are necessarily within the current frame's field of view
    for(vector<MapPoint*>::iterator vit=mCurrentFrame.mvpMapPoints.begin(), vend=mCurrentFrame.mvpMapPoints.end(); vit!=vend; vit++)
    {
        MapPoint* pMP = *vit;
        if(pMP)
        {
            if(pMP->isBad())
            {
                *vit = static_cast<MapPoint*>(NULL);
            }
            else
            {
                // increase the count of frames that can observe this point
                pMP->IncreaseVisible();
                // mark the point as seen by the current frame
                pMP->mnLastFrameSeen = mCurrentFrame.mnId;
                // mark the point so it is not projected later, since it is already matched
                pMP->mbTrackInView = false;
            }
        }
    }
    int nToMatch=0;
    // Project points in frame and check its visibility
    // Step 2: project all local MapPoints into the current frame, check the field of view, then match by projection
    for(vector<MapPoint*>::iterator vit=mvpLocalMapPoints.begin(), vend=mvpLocalMapPoints.end(); vit!=vend; vit++)
    {
        MapPoint* pMP = *vit;
        // MapPoints already observed by the current frame are not tested again
        if(pMP->mnLastFrameSeen == mCurrentFrame.mnId)
            continue;
        if(pMP->isBad())
            continue;
        // Project (this fills MapPoint variables for matching)
        // Step 2.1: check whether the local MapPoint is within the field of view
        if(mCurrentFrame.isInFrustum(pMP,0.5))
        {
            // increase the observation count: this MapPoint is within the view of some frames
            pMP->IncreaseVisible();
            // only MapPoints within the field of view take part in the later projection matching
            nToMatch++;
        }
    }
    if(nToMatch>0)
    {
        ORBmatcher matcher(0.8);
        int th = 1;
        if(mSensor==System::RGBD)
            th=3;
        // If the camera has been relocalised recently, perform a coarser search
        // if a relocalization happened recently, search more coarsely with a larger threshold
        if(mCurrentFrame.mnId<mnLastRelocFrameId+2)
            th=5;
        // Step 2.2: match the in-view MapPoints against the frame's features by projection
        matcher.SearchByProjection(mCurrentFrame,mvpLocalMapPoints,th);
   }
}
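
SearchLocalPoints delegates the visibility test to Frame::isInFrustum, the filter summarized in the TrackLocalMap bullet list above. A condensed version of that function (the stereo right-image projection of the real source is omitted):

bool Frame::isInFrustum(MapPoint *pMP, float viewingCosLimit)
{
    pMP->mbTrackInView = false;

    // 3D point in camera coordinates
    cv::Mat P = pMP->GetWorldPos();
    const cv::Mat Pc = mRcw*P+mtcw;
    const float PcZ = Pc.at<float>(2);
    if(PcZ<0.0f)// behind the camera
        return false;

    // project and check the image bounds
    const float invz = 1.0f/PcZ;
    const float u = fx*Pc.at<float>(0)*invz+cx;
    const float v = fy*Pc.at<float>(1)*invz+cy;
    if(u<mnMinX || u>mnMaxX || v<mnMinY || v>mnMaxY)
        return false;

    // distance must lie inside the scale-invariance range of the MapPoint
    const float maxDistance = pMP->GetMaxDistanceInvariance();
    const float minDistance = pMP->GetMinDistanceInvariance();
    const cv::Mat PO = P-mOw;
    const float dist = cv::norm(PO);
    if(dist<minDistance || dist>maxDistance)
        return false;

    // viewing angle must be within 60 degrees of the mean viewing direction
    cv::Mat Pn = pMP->GetNormal();
    const float viewCos = PO.dot(Pn)/dist;
    if(viewCos<viewingCosLimit)
        return false;

    // predict the pyramid scale and fill the variables used by SearchByProjection
    const int nPredictedLevel = pMP->PredictScale(dist,this);
    pMP->mbTrackInView = true;
    pMP->mTrackProjX = u;
    pMP->mTrackProjY = v;
    pMP->mnTrackScaleLevel = nPredictedLevel;
    pMP->mTrackViewCos = viewCos;

    return true;
}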
bool Tracking::NeedNewKeyFrame()

Criteria for deciding whether a new keyframe is needed:
(1) more than 20 frames have passed since the last global relocalization;
(2) local mapping is idle, or more than 20 frames have passed since the last keyframe insertion;
(3) the current frame tracks at least 50 points;
(4) the current frame tracks less than 90% of the points tracked by the reference keyframe.

bool Tracking::NeedNewKeyFrame()
{
    // Step 1: if the user selected localization-only mode in the UI, do not insert keyframes
    // since inserting keyframes creates MapPoints, neither the point cloud nor the keyframes grow once localization mode is selected
    if(mbOnlyTracking)//tracking-only mode: never select keyframes
        return false;
    //If Local Mapping is freezed by a Loop Closure do not insert keyframes
    // if local mapping is frozen by loop closing, do not insert keyframes
    if(mpLocalMapper->isStopped() || mpLocalMapper->stopRequested())
        return false;
    const int nKFs = mpMap->KeyFramesInMap();//number of keyframes in the map
 
    // Do not insert keyframes if not enough frames have passed from last relocalisation
    // Step 2: check whether too little time has passed since the last keyframe insertion
    // mCurrentFrame.mnId is the current frame's ID
    // mnLastRelocFrameId is the ID of the last relocalization frame
    // mMaxFrames equals the input frame rate
    // insert a keyframe if there are still few keyframes,
    // or once more than 1s has passed since the last relocalization
    if(mCurrentFrame.mnId<mnLastRelocFrameId+mMaxFrames && nKFs>mMaxFrames)
        return false;
 
    // Tracked MapPoints in the reference keyframe
    // Step 3: get the number of MapPoints tracked by the reference keyframe
    // UpdateLocalKeyFrames sets the keyframe with the highest covisibility as the current frame's reference keyframe
 
    int nMinObs = 3;
    if(nKFs<=2)
        nMinObs=2;
    int nRefMatches = mpReferenceKF->TrackedMapPoints(nMinObs);//number of MapPoints tracked by the reference keyframe
 
 
 
 
    // Local Mapping accept keyframes?
    // Step 4: ask whether the local mapper is busy
    bool bLocalMappingIdle = mpLocalMapper->AcceptKeyFrames();
 
 
 
    // Stereo & RGB-D: Ratio of close "matches to map"/"total matches"
    // "total matches = matches to map + visual odometry matches"
    // Visual odometry matches will become MapPoints if we insert a keyframe.
    // This ratio measures how many MapPoints we could create if we insert a keyframe.
    // Step 5: for stereo or RGB-D cameras, count the total MapPoints that could be added and how many are already tracked in the map
 
    int nMap = 0;//matches to the map
    int nTotal= 0;//total matches
    if(mSensor!=System::MONOCULAR)// stereo or RGB-D
    {
       for(int i =0; i<mCurrentFrame.N; i++)//iterate over all of the current frame's features
        {
            if(mCurrentFrame.mvDepth[i]>0 && mCurrentFrame.mvDepth[i]<mThDepth)//depth within the valid range
            {
                nTotal++;// total number of MapPoints that could be added
                if(mCurrentFrame.mvpMapPoints[i])
                    if(mCurrentFrame.mvpMapPoints[i]->Observations()>0)//the MapPoint is actually observed
                        nMap++;// number of MapPoints observed by keyframes, i.e. tracked in the map
            }
        }
    }
    else
    {
        // There are no visual odometry matches in the monocular case
        nMap=1;
        nTotal=1;
    }
    const float ratioMap = (float)nMap/(float)(std::max(1,nTotal));
    // Step 6: decide whether to insert a keyframe
    // Thresholds
    // inlier threshold: required ratio of inliers relative to the reference keyframe's matches
    float thRefRatio = 0.75f;
    if(nKFs<2)
        thRefRatio = 0.4f;// with only one keyframe in the map, set a very low insertion threshold
    if(mSensor==System::MONOCULAR)
        thRefRatio = 0.9f;
 
    // threshold on the ratio of MapPoints associated with the map
    float thMapRatio = 0.35f;
    if(mnMatchesInliers>300)
        thMapRatio = 0.20f;
 
    // Condition 1a: More than "MaxFrames" have passed from last keyframe insertion
    // no keyframe has been inserted for a long time
    const bool c1a = mCurrentFrame.mnId>=mnLastKeyFrameId+mMaxFrames;
    // Condition 1b: More than "MinFrames" have passed and Local Mapping is idle
    // the local mapper is idle
    const bool c1b = (mCurrentFrame.mnId>=mnLastKeyFrameId+mMinFrames && bLocalMappingIdle);
 
    // Condition 1c: tracking is weak
    // tracking is about to fail; 0.25 and 0.3 are fairly low thresholds
    const bool c1c =  mSensor!=System::MONOCULAR && (mnMatchesInliers<nRefMatches*0.25 || ratioMap<0.3f) ;
    // Condition 2: Few tracked points compared to reference keyframe. Lots of visual odometry compared to map matches.
    // a higher bar than c1c: the overlap with the reference keyframe must not be too high
    const bool c2 = ((mnMatchesInliers<nRefMatches*thRefRatio || ratioMap<thMapRatio) && mnMatchesInliers>15);
 
    if((c1a||c1b||c1c)&&c2)
    {
        // If the mapping accepts keyframes, insert keyframe.
        // Otherwise send a signal to interrupt BA
        if(bLocalMappingIdle)
        {
           return true;
        }
 
        else
        {
            mpLocalMapper->InterruptBA();//interrupt BA
            if(mSensor!=System::MONOCULAR)
            {
                // the queue must not back up with too many keyframes
                // tracking does not insert keyframes directly: it first pushes them into mlNewKeyFrames,
                // and the local mapper then pops them one by one into mspKeyFrames
                if(mpLocalMapper->KeyframesInQueue()<3)//fewer than 3 keyframes in the queue
                    return true;
                else
                    return false;
            }
            else
                return false;
        }
    }
    else
        return false;
}
        // Reset if the camera get lost soon after initialization
        // tracking failed and relocalization could not recover either: the only option is to Reset
        if(mState==LOST)
        {
            if(mpMap->KeyFramesInMap()<=5)
            {
                cout << "Track lost soon after initialisation, reseting..." << endl;
                mpSystem->Reset();
                return;
            }
        }
void Tracking::CreateNewKeyFrame()

Creates a new keyframe and inserts it into the LocalMapper's candidate-keyframe queue; this is how data enters the local mapping thread.

  • 1.将当前帧构造成关键帧
  • 2.将当前关键帧设置为当前帧的参考关键帧
  • 3.对于双目或rgbd摄像头,为当前帧生成新的MapPoints
void Tracking::CreateNewKeyFrame()
{
    if(!mpLocalMapper->SetNotStop(true))
        return;
    // Step 1: construct the current frame into a keyframe
    KeyFrame* pKF = new KeyFrame(mCurrentFrame,mpMap,mpKeyFrameDB);
    // Step 2: set this keyframe as the current frame's reference keyframe
    // UpdateLocalKeyFrames sets the keyframe with the highest covisibility as the current frame's reference keyframe
    mpReferenceKF = pKF;
    mCurrentFrame.mpReferenceKF = pKF;
    // this code has the same function as the corresponding part of UpdateLastFrame
    // Step 3: for stereo or RGB-D cameras, generate new MapPoints for the current frame
    if(mSensor!=System::MONOCULAR)
    {
        // compute mRcw, mtcw, mRwc and mOw from Tcw
        mCurrentFrame.UpdatePoseMatrices();
        // We sort points by the measured depth by the stereo/RGBD sensor.
        // We create all those MapPoints whose depth < mThDepth.
        // If there are less than 100 close points we create the 100 closest.
        // Step 3.1: collect the current frame's features whose depth is below the threshold
        // create new MapPoints with depth < mThDepth
        vector<pair<float,int> > vDepthIdx;
        vDepthIdx.reserve(mCurrentFrame.N);
        for(int i=0; i<mCurrentFrame.N; i++)
        {
           float z = mCurrentFrame.mvDepth[i];
            if(z>0)
            {
                vDepthIdx.push_back(make_pair(z,i));
            }
        }
 
        if(!vDepthIdx.empty())
        {
            // Step 3.2: sort by depth, smallest first
            sort(vDepthIdx.begin(),vDepthIdx.end());
            // Step 3.3: wrap the closer points into MapPoints
            int nPoints = 0;
            for(size_t j=0; j<vDepthIdx.size();j++)
            {
               int i = vDepthIdx[j].second;
                bool bCreateNew = false;
                MapPoint* pMP = mCurrentFrame.mvpMapPoints[i];
                if(!pMP)
                   bCreateNew = true;
                else if(pMP->Observations()<1)
                {
                    bCreateNew = true;
                    mCurrentFrame.mvpMapPoints[i] = static_cast<MapPoint*>(NULL);
                }
                if(bCreateNew)
                {
                    cv::Mat x3D = mCurrentFrame.UnprojectStereo(i);
                    MapPoint* pNewMP = new MapPoint(x3D,pKF,mpMap);
                    // these attribute-setting operations must follow every MapPoint creation
                    pNewMP->AddObservation(pKF,i);
                    pKF->AddMapPoint(pNewMP,i);
                    pNewMP->ComputeDistinctiveDescriptors();
                    pNewMP->UpdateNormalAndDepth();
                    mpMap->AddMapPoint(pNewMP);
                    mCurrentFrame.mvpMapPoints[i]=pNewMP;
                    nPoints++;
                }
                else
                {
                    nPoints++;
                }
                // this controls the density of the point cloud for stereo and RGB-D cameras,
                // but changing these values just to densify the map is unwise,
                // because these MapPoints take part in the whole subsequent SLAM process
                if(vDepthIdx[j].first>mThDepth && nPoints>100)
                    break;
            }
        }
    }
    mpLocalMapper->InsertKeyFrame(pKF);
    mpLocalMapper->SetNotStop(false);
    mnLastKeyFrameId = mCurrentFrame.mnId;
    mpLastKeyFrame = pKF;
}

Reference: ORB-SLAM2从理论到代码实现(六) (ORB-SLAM2 from theory to code implementation, part 6)
