[SLAM 14 Lectures, 2nd Edition] [Textbook Example Code] [Lecture 13: Practice: Designing a SLAM System]

0 Preface

1 Installing Dependencies and the Dataset

1.1 Dependencies

1.2 Dataset

2 Troubleshooting Errors

2.1 Compile Error 1

2.1.1 Error Message

/usr/bin/ld: /usr/local/lib/libgflags.a(gflags.cc.o): relocation R_X86_64_32S against `.rodata‘

2.1.2 Solution

  • The root cause is that gflags is built as a static library by default, while a shared library is needed here. Rebuild gflags as a shared library and reinstall it; if it is already installed, just run through the build once more:
cd build/
cmake .. -DBUILD_SHARED_LIBS=ON
make
sudo make install
  • References:

Compile error: /usr/bin/ld: /usr/local/lib/libgflags.a(gflags.cc.o): relocation R_X86_64_32S against `.rodata'
Notes on gflags, glog and gtest

2.2 Compile Error 2

2.2.1 Error Message

: && /usr/bin/c++ -fPIC -O3 -DNDEBUG   -shared -Wl,-soname,libmyslam.so -o ../lib/libmyslam.so src/CMakeFiles/myslam.dir/frame.cpp.o src/CMakeFiles/myslam.dir/mappoint.cpp.o src/CMakeFiles/myslam.dir/map.cpp.o src/CMakeFiles/myslam.dir/camera.cpp.o src/CMakeFiles/myslam.dir/config.cpp.o src/CMakeFiles/myslam.dir/feature.cpp.o src/CMakeFiles/myslam.dir/frontend.cpp.o src/CMakeFiles/myslam.dir/backend.cpp.o src/CMakeFiles/myslam.dir/viewer.cpp.o src/CMakeFiles/myslam.dir/visual_odometry.cpp.o src/CMakeFiles/myslam.dir/dataset.cpp.o  -Wl,-rpath,/usr/local/lib/x86_64-linux-gnu:/usr/local/lib  /usr/local/lib/x86_64-linux-gnu/libopencv_dnn.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_ml.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_objdetect.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_shape.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_stitching.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_superres.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_videostab.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_viz.so.3.4.1  /usr/local/lib/libpango_glgeometry.so  /usr/local/lib/libpango_plot.so  /usr/local/lib/libpango_python.so  /usr/local/lib/libpango_scene.so  /usr/local/lib/libpango_tools.so  /usr/local/lib/libpango_video.so  -lGL  -lGLU  -lGLEW  -lglut  -lg2o_core  -lg2o_stuff  -lg2o_types_sba  -lg2o_solver_csparse  -lg2o_csparse_extension  /usr/local/lib/x86_64-linux-gnu/libgtest.a  /usr/local/lib/x86_64-linux-gnu/libgtest_main.a  /usr/local/lib/x86_64-linux-gnu/libglog.a  /usr/local/lib/libgflags.so.2.2.2  -lpthread  -lcxsparse  -lfmt  /usr/local/lib/x86_64-linux-gnu/libopencv_calib3d.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_features2d.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_flann.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_highgui.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_photo.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_video.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_videoio.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_imgcodecs.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_imgproc.so.3.4.1  /usr/local/lib/x86_64-linux-gnu/libopencv_core.so.3.4.1  /usr/local/lib/libpango_geometry.so  /usr/local/lib/libtinyobj.so  /usr/local/lib/libpango_display.so  /usr/local/lib/libpango_vars.so  /usr/local/lib/libpango_windowing.so  /usr/local/lib/libpango_opengl.so  -lGLEW  -lOpenGL  -lGLX  -lGLU  /usr/local/lib/libpango_image.so  /usr/local/lib/libpango_packetstream.so  /usr/local/lib/libpango_core.so  -lrt  /usr/local/lib/x86_64-linux-gnu/libgtest.a  -lpthread && :
/usr/bin/ld: //usr/local/lib/x86_64-linux-gnu/libfmt.a(format.cc.o): relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.

2.2.2 Solution

2.2.2.1 Option 1
  • Open the CMakeLists.txt of the fmt source tree and add the -fPIC option; putting it on the second line is enough:
  • This flag is required whenever a static library is later linked into a .so shared library, so it has to be set when the static library is compiled.
add_compile_options(-fPIC)
  • Then rebuild and reinstall fmt:
mkdir build
cd build
cmake ..
make
sudo make install
2.2.2.2 Option 2
  • The error relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC appears because a static library is mixed into the link of the shared library libmyslam.so: .a files are static, .so files are dynamic, and the two cannot be combined this way. Check which library it is in your case; for some people the culprit is a statically installed gflags, while in my case it was the static libfmt.a. The fix is to rebuild the offending library with position-independent code (e.g. cmake .. -DGFLAGS_NAMESPACE=google -DCMAKE_CXX_FLAGS=-fPIC) or install it as a shared library.
  • Concrete steps to fix it:
  • Rebuild and reinstall the fmt package:
git clone  https://ghproxy.com/https://github.com/fmtlib/fmt.git
cd fmt/

mkdir build
cd build
cmake .. -DGFLAGS_NAMESPACE=google -DCMAKE_CXX_FLAGS=-fPIC ..
make -j8
sudo make install

2.3 Runtime Error 3

2.3.1 Error Message

  • Running the executable from the command line fails with:
bupo@bupo-vpc:~/my_study/slam14/slam14_my/cap13/slam_first/bin$ ./run_kitti_stereo 
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0808 23:12:03.265978 11420 visual_odometry.cpp:44] VO is running
Segmentation fault (core dumped)
  • Running it from CLion fails with:
/home/bupo/my_study/slam14/slam14_my/cap13/slam_first/bin/run_kitti_stereo --gtest_filter=* --gtest_color=no
Testing started at 11:16 PM ...
ERROR: unknown command line flag 'gtest_color'
ERROR: unknown command line flag 'gtest_filter'
Process finished with exit code 1

2.3.2 Solution

  • Edit app/run_kitti_stereo.cpp: comment out the assert line, call vo->Init() on its own, then rebuild and run:
//
// Created by gaoxiang on 19-5-4.
//

#include <gflags/gflags.h>
#include "myslam/visual_odometry.h"

DEFINE_string(config_file, "./config/default.yaml", "config file path");

int main(int argc, char **argv) {
    google::ParseCommandLineFlags(&argc, &argv, true);

    myslam::VisualOdometry::Ptr vo(
        new myslam::VisualOdometry(FLAGS_config_file));
    //assert(vo->Init() == true);  // commented out (my_czy)
    vo->Init();
    vo->Run();

    return 0;
}
  • The program must be run from the project root:
bupo@bupo-vpc:~/my_study/slam14/slam14_my/cap13/slam_first$ ./bin/run_kitti_stereo
  • If you run it from ./bin instead:
bupo@bupo-vpc:~/my_study/slam14/slam14_my/cap13/slam_first/bin$ ./run_kitti_stereo 

the following error appears:

bupo@bupo-vpc:~/my_study/slam14/slam14_my/cap13/slam_first/bin$ ./run_kitti_stereo 
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0808 23:26:24.892447 12900 config.cpp:9] parameter file ./config/default.yaml does not exist.
I0808 23:26:24.892498 12900 visual_odometry.cpp:44] VO is running
Segmentation fault (core dumped)

3 Project Code

3.1 Overall Architecture

  1. At build time the core modules (frame, mappoint, map, camera, config, feature, frontend, backend, viewer, visual_odometry, dataset) are compiled into a shared library, lib/libmyslam.so.
  2. The executable app/run_kitti_stereo is then linked against libmyslam.so together with OpenCV, Pangolin, g2o, glog/gflags, fmt and the other third-party libraries, as the link command in section 2.2.1 shows.

3.2 File-by-File Analysis

3.2.1 run_kitti_stereo.cpp

3.2.1.1 The main function
  • Start from the main function.
  • gflags defines the path of the configuration file; a VisualOdometry object is then constructed, initialized, and run.
DEFINE_string(config_file, "./config/default.yaml", "config file path");

int main(int argc, char **argv) {

    // parse the gflags command-line arguments (a single line)
    google::ParseCommandLineFlags(&argc, &argv, true);
    //gflags::ParseCommandLineFlags(&argc, &argv, true);
    myslam::VisualOdometry::Ptr vo(
        new myslam::VisualOdometry(FLAGS_config_file));
    //assert(vo->Init() == true);  // commented out (my_czy)
    vo->Init();
    vo->Run();

    return 0;
}

3.2.2 visual_odometry.cpp

  • This file implements the VisualOdometry class:
  1. Constructor VisualOdometry(): takes the configuration file path as its only input.
  2. Init(): initializes the whole system (see below).
3.2.2.1 Constructor VisualOdometry()
VisualOdometry::VisualOdometry(std::string &config_path)
    : config_file_path_(config_path) {}
3.2.2.2 Init()
  1. Purpose
    Initializes the visual odometry before running: reads the camera intrinsics and extrinsics, constructs the frontend, backend, map and viewer, and links these components to one another.
  2. Declaration
    bool Init();

  3. Implementation
bool VisualOdometry::Init() {
    // read from config file
    if (Config::SetParameterFile(config_file_path_) == false) {
        return false;
    }
    // construct the Dataset class from the dataset under dataset_dir
    dataset_ =
        Dataset::Ptr(new Dataset(Config::Get<std::string>("dataset_dir")));
    // make sure the dataset initializes successfully
    CHECK_EQ(dataset_->Init(), true);

    // create components and links
    frontend_ = Frontend::Ptr(new Frontend);
    backend_ = Backend::Ptr(new Backend);
    map_ = Map::Ptr(new Map);
    viewer_ = Viewer::Ptr(new Viewer);

    frontend_->SetBackend(backend_);
    frontend_->SetMap(map_);
    frontend_->SetViewer(viewer_);
    frontend_->SetCameras(dataset_->GetCamera(0), dataset_->GetCamera(1));

    backend_->SetMap(map_);
    backend_->SetCameras(dataset_->GetCamera(0), dataset_->GetCamera(1));

    viewer_->SetMap(map_);

    return true;
}
3.2.2.3 Run()
  1. Purpose
    Starts processing the data.
  2. Declaration
    void Run();
  3. Implementation
void VisualOdometry::Run() {
    while (1) {
        LOG(INFO) << "VO is running";
        if (Step() == false) {
            break;
        }
    }

    backend_->Stop();  // stop the backend thread
    viewer_->Close();  // close the viewer thread

    LOG(INFO) << "VO exit";
}
3.2.2.4 Step()
  1. Purpose
    Processes one frame of the current dataset.
  2. Declaration
    bool Step();
  3. Implementation
bool VisualOdometry::Step() {
    Frame::Ptr new_frame = dataset_->NextFrame();
    if (new_frame == nullptr) return false;

    auto t1 = std::chrono::steady_clock::now();
    bool success = frontend_->AddFrame(new_frame);
    auto t2 = std::chrono::steady_clock::now();
    auto time_used =
        std::chrono::duration_cast<std::chrono::duration<double>>(t2 - t1);
    LOG(INFO) << "VO cost time: " << time_used.count() << " seconds.";
    return success;
}

3.2.3 config.cpp

3.2.3.1 Constructor Config()
3.2.3.2 SetParameterFile()
  1. Purpose
    Creates the singleton Config object, sets its file_ member from the given path, and makes sure the file can be opened.
  2. Declaration
    // set a new config file
    static bool SetParameterFile(const std::string &filename);
  3. Implementation
bool Config::SetParameterFile(const std::string &filename) {
    if (config_ == nullptr)
        config_ = std::shared_ptr<Config>(new Config);
    config_->file_ = cv::FileStorage(filename.c_str(), cv::FileStorage::READ);
    if (config_->file_.isOpened() == false) {
        LOG(ERROR) << "parameter file " << filename << " does not exist.";
        config_->file_.release();
        return false;
    }
    return true;
}
3.2.3.3 Get()
  1. Purpose
    Accesses a parameter value by key.
  2. Implementation
    // access the parameter values
    template <typename T>
    static T Get(const std::string &key) {
        return T(Config::config_->file_[key]);
    }
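  • As a usage note, the type is given explicitly at the call site; the keys below are the ones read elsewhere in this project (SetParameterFile() must have succeeded first):
    // reading parameters (keys taken from visual_odometry.cpp and frontend.cpp in this article)
    std::string dataset_dir = Config::Get<std::string>("dataset_dir");
    int num_features = Config::Get<int>("num_features");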
3.2.3.4 Destructor ~Config()
  1. Purpose

  2. Declaration

  3. Implementation
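    The article leaves this subsection empty. Since SetParameterFile() opens a cv::FileStorage, the destructor presumably just releases it; a minimal sketch (an assumption, not copied from the repository):
Config::~Config() {
    if (file_.isOpened())
        file_.release();
}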

3.2.4 dataset.cpp

3.2.4.1 Constructor Dataset()
  1. Purpose

  2. Declaration

    Dataset(const std::string& dataset_path);
  3. Implementation
Dataset::Dataset(const std::string& dataset_path)
    : dataset_path_(dataset_path) {}
3.2.4.2 Init()
  1. Purpose
    Initialization: constructs the Camera objects and reads the camera intrinsics, extrinsics and the stereo baseline.
    /*
     * Dataset format: apart from the camera name, each line of calib.txt
     * contains 12 values, denoted projection_data[0..11].
     *
     * projection_data[0~2
     *                 4~6
     *                 8~10]
     * form the camera intrinsic matrix K.
     *
     * projection_data[3
     *                 7
     *                 11]
     * give the translation of this camera relative to the reference frame.
     *
     * Note that the first camera has translation (0, 0, 0), i.e. it coincides
     * with the reference frame, so the first camera can be taken as the world frame.
     *
     * Guess: only a translation is given because the cameras are mounted
     * parallel to each other, with the same orientation as the world frame.
     */
  2. Declaration
    /// initialization; returns whether it succeeded
    bool Init();  // constructs the cameras and reads intrinsics/extrinsics
  3. Implementation
bool Dataset::Init() {  // construct the cameras and read intrinsics/extrinsics
    // read camera intrinsics and extrinsics
    ifstream fin(dataset_path_ + "/calib.txt");
    if (!fin) {
        LOG(ERROR) << "cannot find " << dataset_path_ << "/calib.txt!";
        return false;
    }

    for (int i = 0; i < 4; ++i) {
        char camera_name[3];
        for (int k = 0; k < 3; ++k) {
            fin >> camera_name[k];
        }
        double projection_data[12];
        for (int k = 0; k < 12; ++k) {
            fin >> projection_data[k];
        }
        Mat33 K;
        K << projection_data[0], projection_data[1], projection_data[2],
            projection_data[4], projection_data[5], projection_data[6],
            projection_data[8], projection_data[9], projection_data[10];
        Vec3 t;
        t << projection_data[3], projection_data[7], projection_data[11];
        t = K.inverse() * t;
        K = K * 0.5;  // NextFrame() below resizes the images to half size, so the intrinsics are halved as well
        Camera::Ptr new_camera(new Camera(K(0, 0), K(1, 1), K(0, 2), K(1, 2),
                                          t.norm(), SE3(SO3(), t)));  // construct the Camera object
        cameras_.push_back(new_camera);
        LOG(INFO) << "Camera " << i << " extrinsics: " << t.transpose();
    }
    fin.close();
    current_image_index_ = 0;
    return true;
}
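Why t = K.inverse() * t recovers the extrinsic translation: for the rectified KITTI cameras each calib.txt row is a 3x4 projection matrix whose rotation part is the identity, so (a short derivation matching the code above)

    P = K\,[\,I \mid t\,] = [\,K \mid K\,t\,] \quad\Longrightarrow\quad t = K^{-1}\,P_{:,4}

i.e. the translation is the fourth column of P premultiplied by K^{-1}. The subsequent K = K * 0.5 matches the half-size image resize in NextFrame() below.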
3.2.4.3 NextFrame()
  1. Purpose
    Creates and returns the next frame containing the stereo image pair.
  2. Declaration
    Frame::Ptr NextFrame();
  3. Implementation
Frame::Ptr Dataset::NextFrame() {
    boost::format fmt("%s/image_%d/%06d.png");
    cv::Mat image_left, image_right;
    // read images
    image_left =
        cv::imread((fmt % dataset_path_ % 0 % current_image_index_).str(),
                   cv::IMREAD_GRAYSCALE);
    image_right =
        cv::imread((fmt % dataset_path_ % 1 % current_image_index_).str(),
                   cv::IMREAD_GRAYSCALE);

    if (image_left.data == nullptr || image_right.data == nullptr) {
        LOG(WARNING) << "cannot find images at index " << current_image_index_;
        return nullptr;
    }

    cv::Mat image_left_resized, image_right_resized;
    cv::resize(image_left, image_left_resized, cv::Size(), 0.5, 0.5,
               cv::INTER_NEAREST);  // cv::INTER_NEAREST = nearest-neighbour interpolation
    cv::resize(image_right, image_right_resized, cv::Size(), 0.5, 0.5,
               cv::INTER_NEAREST);

    auto new_frame = Frame::CreateFrame();
    new_frame->left_img_ = image_left_resized;
    new_frame->right_img_ = image_right_resized;
    current_image_index_++;
    return new_frame;
}
3.2.4.4 GetCamera()
  1. Purpose
    Returns the Camera object with the given id.
  2. Declaration
    /// get camera by id
    Camera::Ptr GetCamera(int camera_id) const {
        return cameras_.at(camera_id);
    }

3.2.5 camera.cpp

3.2.5.1 Constructor Camera()
  1. Purpose

  2. Declaration

    Camera();

    Camera(double fx, double fy, double cx, double cy, double baseline,
           const SE3 &pose)
        : fx_(fx), fy_(fy), cx_(cx), cy_(cy), baseline_(baseline), pose_(pose) {
        pose_inv_ = pose_.inverse();
    }
  3. Implementation
Camera::Camera() {
}
3.2.5.2 pose()
  1. Purpose

  2. Declaration

    SE3 pose() const { return pose_; }
  3. Implementation
    Defined inline in the declaration.
3.2.5.3 K()
  1. Purpose
    Returns the intrinsic matrix K.
  2. Declaration
    // return intrinsic matrix
    Mat33 K() const {
        Mat33 k;
        k << fx_, 0, cx_, 0, fy_, cy_, 0, 0, 1;
        return k;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.5.4 world2camera()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.5.5 camera2world()
3.2.5.6 camera2pixel()
3.2.5.7 pixel2camera()
3.2.5.8 pixel2world()
3.2.5.9 world2pixel()
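The article leaves these coordinate-transform subsections empty. Based only on the member names shown above (fx_, fy_, cx_, cy_, pose_, pose_inv_) and the standard pinhole model, a plausible sketch of what they do (an assumption, not copied from the repository) is:

Vec3 Camera::world2camera(const Vec3 &p_w, const SE3 &T_c_w) {
    // world frame -> reference frame of the current frame (T_c_w) -> this camera (extrinsic pose_)
    return pose_ * T_c_w * p_w;
}

Vec3 Camera::camera2world(const Vec3 &p_c, const SE3 &T_c_w) {
    // inverse of world2camera
    return T_c_w.inverse() * pose_inv_ * p_c;
}

Vec2 Camera::camera2pixel(const Vec3 &p_c) {
    // pinhole projection with the intrinsics fx_, fy_, cx_, cy_
    return Vec2(fx_ * p_c(0) / p_c(2) + cx_,
                fy_ * p_c(1) / p_c(2) + cy_);
}

Vec3 Camera::pixel2camera(const Vec2 &p_p, double depth) {
    // back-projection; depth presumably defaults to 1 in the header declaration,
    // which would match the single-argument calls in BuildInitMap()
    return Vec3((p_p(0) - cx_) * depth / fx_,
                (p_p(1) - cy_) * depth / fy_,
                depth);
}

Vec2 Camera::world2pixel(const Vec3 &p_w, const SE3 &T_c_w) {
    return camera2pixel(world2camera(p_w, T_c_w));
}

Vec3 Camera::pixel2world(const Vec2 &p_p, const SE3 &T_c_w, double depth) {
    return camera2world(pixel2camera(p_p, depth), T_c_w);
}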

3.2.6 frontend.cpp

3.2.6.1 Constructor Frontend()
  1. Purpose
    Constructs the frontend and creates the GFTT feature detector.
  2. Declaration
    Frontend();
  3. Implementation
Frontend::Frontend() {
    gftt_ =
        cv::GFTTDetector::create(Config::Get<int>("num_features"), 0.01, 20);
    num_features_init_ = Config::Get<int>("num_features_init");
    num_features_ = Config::Get<int>("num_features");
}
3.2.6.2 AddFrame()
  1. Purpose
    External interface: adds a frame and computes its localization result.
  2. Declaration
    bool AddFrame(Frame::Ptr frame);
  3. Implementation
bool Frontend::AddFrame(myslam::Frame::Ptr frame) {
    current_frame_ = frame;

    switch (status_) {
        case FrontendStatus::INITING:
            StereoInit();  // stereo initialization
            break;
        case FrontendStatus::TRACKING_GOOD:
        case FrontendStatus::TRACKING_BAD:
            Track();  // tracking
            break;
        case FrontendStatus::LOST:
            Reset();  // reset after tracking is lost
            break;
    }

    last_frame_ = current_frame_;
    return true;
}
3.2.6.3 SetMap()
  1. Purpose
    Links the frontend to its map.
  2. Declaration
void SetMap(Map::Ptr map) { map_ = map; }
3.2.6.4 SetBackend()
  1. Purpose
    Links the frontend to its backend.
  2. Declaration
void SetBackend(std::shared_ptr<Backend> backend) { backend_ = backend; }
3.2.6.5 SetViewer()
  1. Purpose
    Links the frontend to its viewer.
  2. Declaration
    void SetViewer(std::shared_ptr<Viewer> viewer) { viewer_ = viewer; }
3.2.6.6 GetStatus()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.6.7 SetCameras()
  1. Purpose
    Sets the frontend's left and right cameras, used to obtain the intrinsics and extrinsics.
  2. Declaration
    void SetCameras(Camera::Ptr left, Camera::Ptr right) {
        camera_left_ = left;
        camera_right_ = right;
    }
  3. Implementation
3.2.6.8 Track()
  1. Purpose
    Tracking in the normal case.
  2. Declaration
 bool Track();
  3. Implementation
bool Frontend::Track() {
    if (last_frame_) {  // if a previous frame exists
        current_frame_->SetPose(relative_motion_ * last_frame_->Pose());  // initial guess: T_cur_w = T_cur_last * T_last_w
    }

    int num_track_last = TrackLastFrame();
    tracking_inliers_ = EstimateCurrentPose();  // estimate the current pose with g2o; returns the number of inliers

    if (tracking_inliers_ > num_features_tracking_) {  // enough inliers: tracking is good
        // tracking good
        status_ = FrontendStatus::TRACKING_GOOD;
    } else if (tracking_inliers_ > num_features_tracking_bad_) {  // still above the minimum number of inliers: tracking is bad but usable
        // tracking bad
        status_ = FrontendStatus::TRACKING_BAD;
    } else {  // otherwise tracking is lost
        // lost
        status_ = FrontendStatus::LOST;
    }

    InsertKeyframe();

    // relative_motion_: relative motion between the current and the last frame,
    // used as the initial guess of the next frame's pose: T_cur_last = T_cur_w * T_w_last
    relative_motion_ = current_frame_->Pose() * last_frame_->Pose().inverse();

    if (viewer_) viewer_->AddCurrentFrame(current_frame_);
    return true;
}
3.2.6.9 Reset()
  1. Purpose
    Reset when tracking is lost.
  2. Declaration
bool Reset();
  3. Implementation
bool Frontend::Reset() {
    LOG(INFO) << "Reset is not implemented. ";
    return true;
}
3.2.6.10 TrackLastFrame()
  1. Purpose
    Tracks the features of the last frame into the current frame.
  2. Declaration
int TrackLastFrame();
  3. Implementation
int Frontend::TrackLastFrame() {
    // use LK flow to estimate points in the right image
    std::vector<cv::Point2f> kps_last, kps_current;
    for (auto &kp : last_frame_->features_left_) {
        if (kp->map_point_.lock()) {
            // use project point
            auto mp = kp->map_point_.lock();
            auto px =
                camera_left_->world2pixel(mp->pos_, current_frame_->Pose());
            kps_last.push_back(kp->position_.pt);  // pixel position of the feature in the last frame
            kps_current.push_back(cv::Point2f(px[0], px[1]));  // predicted pixel position in the current frame
        } else {
            kps_last.push_back(kp->position_.pt);
            kps_current.push_back(kp->position_.pt);
        }
    }

    std::vector<uchar> status;
    Mat error;
    cv::calcOpticalFlowPyrLK(
        last_frame_->left_img_, current_frame_->left_img_, kps_last,
        kps_current, status, error, cv::Size(11, 11), 3,
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30,
                         0.01),
        cv::OPTFLOW_USE_INITIAL_FLOW);  // LK optical flow: track last-frame features into the current frame

    int num_good_pts = 0;  // number of successfully tracked points

    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) {  // tracked successfully
            cv::KeyPoint kp(kps_current[i], 7);  // create a keypoint at the tracked pixel position
            Feature::Ptr feature(new Feature(current_frame_, kp));  // create a new feature
            feature->map_point_ = last_frame_->features_left_[i]->map_point_;  // associate the same map point as the corresponding feature in the last frame
            current_frame_->features_left_.push_back(feature);  // add it to the current frame's left-image features
            num_good_pts++;
        }
    }

    LOG(INFO) << "Find " << num_good_pts << " in the last image.";
    return num_good_pts;
}
3.2.6.11 EstimateCurrentPose()
  1. Purpose
    Estimates the pose of the current frame.
  2. Declaration
int EstimateCurrentPose();
  3. Implementation
int Frontend::EstimateCurrentPose() {
    // setup g2o
    typedef g2o::BlockSolver_6_3 BlockSolverType;
    typedef g2o::LinearSolverDense<BlockSolverType::PoseMatrixType>
        LinearSolverType;
    auto solver = new g2o::OptimizationAlgorithmLevenberg(
        g2o::make_unique<BlockSolverType>(
            g2o::make_unique<LinearSolverType>()));
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm(solver);

    // vertex
    VertexPose *vertex_pose = new VertexPose();  // camera vertex_pose
    vertex_pose->setId(0);
    vertex_pose->setEstimate(current_frame_->Pose());//Tcw
    optimizer.addVertex(vertex_pose);

    // K
    Mat33 K = camera_left_->K();

    // edges
    int index = 1;
    std::vector<EdgeProjectionPoseOnly *> edges;
    std::vector<Feature::Ptr> features;
    for (size_t i = 0; i < current_frame_->features_left_.size(); ++i) {
        auto mp = current_frame_->features_left_[i]->map_point_.lock();
        if (mp) {
            features.push_back(current_frame_->features_left_[i]);
            EdgeProjectionPoseOnly *edge =
                new EdgeProjectionPoseOnly(mp->pos_, K);
            edge->setId(index);
            edge->setVertex(0, vertex_pose);
            edge->setMeasurement(
                toVec2(current_frame_->features_left_[i]->position_.pt));
            edge->setInformation(Eigen::Matrix2d::Identity());
            edge->setRobustKernel(new g2o::RobustKernelHuber);
            edges.push_back(edge);
            optimizer.addEdge(edge);
            index++;
        }
    }

    // estimate the pose and determine the outliers
    const double chi2_th = 5.991;
    int cnt_outlier = 0;
    for (int iteration = 0; iteration < 4; ++iteration) {
        vertex_pose->setEstimate(current_frame_->Pose());
        optimizer.initializeOptimization();
        optimizer.optimize(10);
        cnt_outlier = 0;

        // count the outliers
        for (size_t i = 0; i < edges.size(); ++i) {
            auto e = edges[i];
            if (features[i]->is_outlier_) {
                e->computeError();
            }
            if (e->chi2() > chi2_th) {
                features[i]->is_outlier_ = true;
                e->setLevel(1);
                cnt_outlier++;
            } else {
                features[i]->is_outlier_ = false;
                e->setLevel(0);
            };

            if (iteration == 2) {
                e->setRobustKernel(nullptr);
            }
        }
    }

    LOG(INFO) << "Outlier/Inlier in pose estimating: " << cnt_outlier << "/"
              << features.size() - cnt_outlier;
    // Set pose and outlier
    current_frame_->SetPose(vertex_pose->estimate());

    LOG(INFO) << "Current Pose = \n" << current_frame_->Pose().matrix();

    for (auto &feat : features) {
        if (feat->is_outlier_) {
            feat->map_point_.reset();
            feat->is_outlier_ = false;  // maybe we can still use it in future
        }
    }
    return features.size() - cnt_outlier;
}
3.2.6.12 InsertKeyframe()
  1. Purpose
    Makes the current frame a keyframe and inserts it into the backend.
  2. Declaration
bool InsertKeyframe();
  3. Implementation
bool Frontend::InsertKeyframe() {
    if (tracking_inliers_ >= num_features_needed_for_keyframe_) {
        // still have enough features, don't insert keyframe
        return false;
    }
    // current frame is a new keyframe
    current_frame_->SetKeyFrame();  // mark the current frame as a keyframe and assign a keyframe id
    map_->InsertKeyFrame(current_frame_);  // insert the current frame into the map's keyframe set

    LOG(INFO) << "Set frame " << current_frame_->id_ << " as keyframe "
              << current_frame_->keyframe_id_;

    SetObservationsForKeyFrame();  // register the current frame's left-image features as observations of their map points
    DetectFeatures();  // detect new features

    // track in right image
    FindFeaturesInRight();  // find corresponding features in the right image with LK flow
    // triangulate map points
    TriangulateNewPoints();  // triangulate new map points from the current stereo pair
    // update backend because we have a new keyframe
    backend_->UpdateMap();  // trigger a map update in the backend and start optimization

    if (viewer_) viewer_->UpdateMap();  // refresh the viewer's copy of the map

    return true;
}
3.2.6.13 StereoInit()
  1. Purpose
    Tries to initialize the frontend with the stereo images stored in current_frame_.
  2. Declaration
    bool StereoInit();
  3. Implementation
bool Frontend::StereoInit() {
    int num_features_left = DetectFeatures();
    int num_coor_features = FindFeaturesInRight();
    if (num_coor_features < num_features_init_) {
        return false;
    }

    bool build_map_success = BuildInitMap();  // build the initial map points
    if (build_map_success) {  // if initialization succeeded, switch the frontend status
        status_ = FrontendStatus::TRACKING_GOOD;
        if (viewer_) {  // if a viewer is attached
            viewer_->AddCurrentFrame(current_frame_);  // pass the current frame to the viewer
            viewer_->UpdateMap();  // refresh the viewer with the new map
        }
        return true;
    }
    return false;
}
3.2.6.14 DetectFeatures()
  1. Purpose
    Detects features in the left image of current_frame_; regions that already contain features are masked out.
  2. Declaration
int DetectFeatures();
  3. Implementation
int Frontend::DetectFeatures() {
    cv::Mat mask(current_frame_->left_img_.size(), CV_8UC1, 255);
    for (auto &feat : current_frame_->features_left_) {
        cv::rectangle(mask, feat->position_.pt - cv::Point2f(10, 10),
                      feat->position_.pt + cv::Point2f(10, 10), 0, CV_FILLED);
    }
    /// do not extract features in regions that already contain features
    std::vector<cv::KeyPoint> keypoints;
    gftt_->detect(current_frame_->left_img_, keypoints, mask);
    int cnt_detected = 0;
    for (auto &kp : keypoints) {
        current_frame_->features_left_.push_back(
            Feature::Ptr(new Feature(current_frame_, kp)));
        cnt_detected++;
    }

    LOG(INFO) << "Detect " << cnt_detected << " new features";
    return cnt_detected;
}
3.2.6.15 FindFeaturesInRight()
  1. Purpose
    Finds the corresponding features in the right image of current_frame_.
  2. Declaration
int FindFeaturesInRight();
  3. Implementation
int Frontend::FindFeaturesInRight() {
    // use LK flow to estimate the corresponding points in the right image
    std::vector<cv::Point2f> kps_left, kps_right;
    for (auto &kp : current_frame_->features_left_) {
        kps_left.push_back(kp->position_.pt);
        auto mp = kp->map_point_.lock();
        if (mp) {
            // use the projected point as the initial guess
            auto px =
                camera_right_->world2pixel(mp->pos_, current_frame_->Pose());
            kps_right.push_back(cv::Point2f(px[0], px[1]));
        } else {
            // otherwise use the same pixel position as in the left image
            kps_right.push_back(kp->position_.pt);
        }
    }

    std::vector<uchar> status;
    Mat error;
    cv::calcOpticalFlowPyrLK(
        current_frame_->left_img_, current_frame_->right_img_, kps_left,
        kps_right, status, error, cv::Size(11, 11), 3,
        cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30,0.01),
        cv::OPTFLOW_USE_INITIAL_FLOW);

    int num_good_pts = 0;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) {
            cv::KeyPoint kp(kps_right[i], 7);
            Feature::Ptr feat(new Feature(current_frame_, kp));
            feat->is_on_left_image_ = false;
            current_frame_->features_right_.push_back(feat);
            num_good_pts++;
        } else {
            current_frame_->features_right_.push_back(nullptr);
        }
    }
    LOG(INFO) << "Find " << num_good_pts << " in the right image.";
    return num_good_pts;
}
3.2.6.16 BuildInitMap()
  1. Purpose
    Builds the initial map from the first stereo pair.
  2. Declaration
bool BuildInitMap();
  3. Implementation
bool Frontend::BuildInitMap() {
    std::vector<SE3> poses{camera_left_->pose(), camera_right_->pose()};
    size_t cnt_init_landmarks = 0;
    for (size_t i = 0; i < current_frame_->features_left_.size(); ++i) {
        if (current_frame_->features_right_[i] == nullptr) continue;
        // create map point from triangulation
        std::vector<Vec3> points{
            camera_left_->pixel2camera(
                Vec2(current_frame_->features_left_[i]->position_.pt.x,
                     current_frame_->features_left_[i]->position_.pt.y)),
            camera_right_->pixel2camera(
                Vec2(current_frame_->features_right_[i]->position_.pt.x,
                     current_frame_->features_right_[i]->position_.pt.y))};
        Vec3 pworld = Vec3::Zero();

        if (triangulation(poses, points, pworld) && pworld[2] > 0) {
            auto new_map_point = MapPoint::CreateNewMappoint();  // create a new map point
            new_map_point->SetPos(pworld);  // set its position in the world frame
            new_map_point->AddObservation(current_frame_->features_left_[i]);  // record the observing feature in the left image
            new_map_point->AddObservation(current_frame_->features_right_[i]);  // record the observing feature in the right image
            current_frame_->features_left_[i]->map_point_ = new_map_point;  // link the left feature to the map point
            current_frame_->features_right_[i]->map_point_ = new_map_point;  // link the right feature to the map point
            cnt_init_landmarks++;  // count the initial landmarks
            map_->InsertMapPoint(new_map_point);
        }
    }
    current_frame_->SetKeyFrame();  // mark the current frame as a keyframe and assign a keyframe id
    map_->InsertKeyFrame(current_frame_);
    backend_->UpdateMap();  // run one backend optimization pass

    LOG(INFO) << "Initial map created with " << cnt_init_landmarks
              << " map points";

    return true;
}
3.2.6.17 TriangulateNewPoints()
  1. Purpose
    Triangulates new map points from the left and right images of the current frame.
  2. Declaration
int TriangulateNewPoints();
  3. Implementation
    // triangulate new 2D points of the current frame using its left and right images
int Frontend::TriangulateNewPoints() {
    std::vector<SE3> poses{camera_left_->pose(), camera_right_->pose()};
    SE3 current_pose_Twc = current_frame_->Pose().inverse();
    int cnt_triangulated_pts = 0;
    for (size_t i = 0; i < current_frame_->features_left_.size(); ++i) {
        if (current_frame_->features_left_[i]->map_point_.expired() &&
            current_frame_->features_right_[i] != nullptr) {
            // the left feature has no associated map point and a matching right feature exists: try to triangulate
            std::vector<Vec3> points{
                camera_left_->pixel2camera(
                    Vec2(current_frame_->features_left_[i]->position_.pt.x,
                         current_frame_->features_left_[i]->position_.pt.y)),
                camera_right_->pixel2camera(
                    Vec2(current_frame_->features_right_[i]->position_.pt.x,
                         current_frame_->features_right_[i]->position_.pt.y))};
            Vec3 pworld = Vec3::Zero();

            if (triangulation(poses, points, pworld) && pworld[2] > 0) {
                auto new_map_point = MapPoint::CreateNewMappoint();
                pworld = current_pose_Twc * pworld;
                new_map_point->SetPos(pworld);
                new_map_point->AddObservation(
                    current_frame_->features_left_[i]);
                new_map_point->AddObservation(
                    current_frame_->features_right_[i]);

                current_frame_->features_left_[i]->map_point_ = new_map_point;
                current_frame_->features_right_[i]->map_point_ = new_map_point;
                map_->InsertMapPoint(new_map_point);
                cnt_triangulated_pts++;
            }
        }
    }
    LOG(INFO) << "new landmarks: " << cnt_triangulated_pts;
    return cnt_triangulated_pts;
}
3.2.6.18 SetObservationsForKeyFrame()
  1. Purpose

  2. Declaration

void SetObservationsForKeyFrame();
  3. Implementation
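    The implementation is not listed in the article. Consistent with the comment in InsertKeyframe() above (registering the current frame's left-image features as observations of their map points), a sketch (treat it as an assumption) is:
void Frontend::SetObservationsForKeyFrame() {
    // register every left-image feature of the new keyframe as an observation
    // of the map point it is associated with (if any)
    for (auto &feat : current_frame_->features_left_) {
        auto mp = feat->map_point_.lock();
        if (mp) mp->AddObservation(feat);
    }
}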

3.2.7 backend.cpp

3.2.7.1 Constructor Backend()
  1. Purpose
    The constructor starts the optimization thread and leaves it waiting.
  2. Declaration
    /// start the optimization thread in the constructor and leave it suspended
    Backend();
  3. Implementation
Backend::Backend() {
    backend_running_.store(true);
    backend_thread_ = std::thread(std::bind(&Backend::BackendLoop, this));
}
3.2.7.2 SetCameras()
  1. Purpose
    Sets the backend's left and right cameras, used to obtain the intrinsics and extrinsics.
  2. Declaration
    void SetCameras(Camera::Ptr left, Camera::Ptr right) {
        cam_left_ = left;
        cam_right_ = right;
    }
3.2.7.3 SetMap()
  1. Purpose
    Sets the map used by the backend.
  2. Declaration
void SetMap(std::shared_ptr<Map> map) { map_ = map; }
3.2.7.4 UpdateMap()
  1. Purpose
    Triggers a map update, which starts one round of optimization.
  2. Declaration
    void UpdateMap();
  3. Implementation
void Backend::UpdateMap() {
    std::unique_lock<std::mutex> lock(data_mutex_);
    map_update_.notify_one();
}
3.2.7.5 Stop()
  1. Purpose
    Stops the backend thread.
  2. Declaration
void Stop();
  3. Implementation
void Backend::Stop() {
    backend_running_.store(false);
    map_update_.notify_one();
    backend_thread_.join();
}

3.2.7.6 BackendLoop()
  1. Purpose

  2. Declaration

    /// backend thread
    void BackendLoop();
  3. Implementation
void Backend::BackendLoop() {
    while (backend_running_.load()) {
        std::unique_lock<std::mutex> lock(data_mutex_);
        map_update_.wait(lock);

        /// the backend only optimizes the active keyframes and landmarks
        Map::KeyframesType active_kfs = map_->GetActiveKeyFrames();
        Map::LandmarksType active_landmarks = map_->GetActiveMapPoints();
        Optimize(active_kfs, active_landmarks);
    }
}
3.2.7.7 Optimize()
  1. Purpose
    Optimizes the given keyframes and landmarks.
  2. Declaration
    void Optimize(Map::KeyframesType& keyframes, Map::LandmarksType& landmarks);
  3. Implementation
void Backend::Optimize(Map::KeyframesType &keyframes,
                       Map::LandmarksType &landmarks) {
    // setup g2o
    // build the graph optimization problem: set up g2o first
    // typedef g2o::BlockSolver<g2o::BlockSolverTraits<6, 3>> BlockSolverType;
    typedef g2o::BlockSolver_6_3 BlockSolverType;  // pose blocks have dimension 6, landmark blocks dimension 3

    typedef g2o::LinearSolverCSparse<BlockSolverType::PoseMatrixType>
        LinearSolverType;
    auto solver = new g2o::OptimizationAlgorithmLevenberg(
        g2o::make_unique<BlockSolverType>(
            g2o::make_unique<LinearSolverType>()));
    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm(solver);

    // pose vertices, indexed by keyframe id
    std::map<unsigned long, VertexPose *> vertices;
    unsigned long max_kf_id = 0;
    for (auto &keyframe : keyframes) {
        auto kf = keyframe.second;
        VertexPose *vertex_pose = new VertexPose();  // camera vertex_pose
        vertex_pose->setId(kf->keyframe_id_);
        vertex_pose->setEstimate(kf->Pose());  // pose in T_cw form
        optimizer.addVertex(vertex_pose);
        if (kf->keyframe_id_ > max_kf_id) {
            max_kf_id = kf->keyframe_id_;
        }

        vertices.insert({kf->keyframe_id_, vertex_pose});
    }

    // landmark vertices, indexed by landmark id
    std::map<unsigned long, VertexXYZ *> vertices_landmarks;

    // K and the left/right camera extrinsics
    Mat33 K = cam_left_->K();
    SE3 left_ext = cam_left_->pose();
    SE3 right_ext = cam_right_->pose();

    // edges
    int index = 1;
    double chi2_th = 5.991;  // robust kernel threshold
    std::map<EdgeProjection *, Feature::Ptr> edges_and_features;

    for (auto &landmark : landmarks) {
        if (landmark.second->is_outlier_) continue;
        unsigned long landmark_id = landmark.second->id_;
        auto observations = landmark.second->GetObs();
        for (auto &obs : observations) {
            if (obs.lock() == nullptr) continue;
            auto feat = obs.lock();
            if (feat->is_outlier_ || feat->frame_.lock() == nullptr) continue;

            auto frame = feat->frame_.lock();
            EdgeProjection *edge = nullptr;
            if (feat->is_on_left_image_) {
                edge = new EdgeProjection(K, left_ext);
            } else {
                edge = new EdgeProjection(K, right_ext);
            }

            // if this landmark has not been added to the optimization yet, create a new vertex for it
            if (vertices_landmarks.find(landmark_id) ==  // not found
                vertices_landmarks.end()) {
                VertexXYZ *v = new VertexXYZ;
                v->setEstimate(landmark.second->Pos());
                v->setId(landmark_id + max_kf_id + 1);
                v->setMarginalized(true);
                vertices_landmarks.insert({landmark_id, v});
                optimizer.addVertex(v);
            }

            edge->setId(index);
            edge->setVertex(0, vertices.at(frame->keyframe_id_));    // pose
            edge->setVertex(1, vertices_landmarks.at(landmark_id));  // landmark
            edge->setMeasurement(toVec2(feat->position_.pt));
            edge->setInformation(Mat22::Identity());
            auto rk = new g2o::RobustKernelHuber();  // Huber robust kernel
            rk->setDelta(chi2_th);
            edge->setRobustKernel(rk);
            edges_and_features.insert({edge, feat});

            optimizer.addEdge(edge);

            index++;
        }
    }

    // do optimization and eliminate the outliers
    // run the optimization
    optimizer.initializeOptimization();
    optimizer.optimize(10);

    // adjust the outlier threshold until at least half of the points are within it
    int cnt_outlier = 0, cnt_inlier = 0;
    int iteration = 0;
    while (iteration < 5) {
        cnt_outlier = 0;
        cnt_inlier = 0;
        // determine if we want to adjust the outlier threshold
        for (auto &ef : edges_and_features) {
            if (ef.first->chi2() > chi2_th) {  // chi2 above the threshold: count as an outlier
                cnt_outlier++;
            } else {
                cnt_inlier++;
            }
        }
        double inlier_ratio = cnt_inlier / double(cnt_inlier + cnt_outlier);
        if (inlier_ratio > 0.5) {
            break;
        } else {
            chi2_th *= 2;
            iteration++;
        }
    }

    for (auto &ef : edges_and_features) {
        if (ef.first->chi2() > chi2_th) {
            ef.second->is_outlier_ = true;
            // remove the observation
            ef.second->map_point_.lock()->RemoveObservation(ef.second);
        } else {
            ef.second->is_outlier_ = false;
        }
    }

    LOG(INFO) << "Outlier/Inlier in optimization: " << cnt_outlier << "/"
              << cnt_inlier;

    // set pose and landmark positions
    for (auto &v : vertices) {
        keyframes.at(v.first)->SetPose(v.second->estimate());
    }
    for (auto &v : vertices_landmarks) {
        landmarks.at(v.first)->SetPos(v.second->estimate());
    }
}

3.2.8 map.cpp

3.2.8.1 Constructor Map()
  1. Purpose

  2. Declaration

    Map() {}
3.2.8.2 InsertKeyFrame()
  1. Purpose
    Inserts a keyframe.
  2. Declaration
void InsertKeyFrame(Frame::Ptr frame);
  3. Implementation
void Map::InsertKeyFrame(Frame::Ptr frame) {
    current_frame_ = frame;
    if (keyframes_.find(frame->keyframe_id_) == keyframes_.end()) {
        keyframes_.insert(make_pair(frame->keyframe_id_, frame));
        active_keyframes_.insert(make_pair(frame->keyframe_id_, frame));
    } else {
        keyframes_[frame->keyframe_id_] = frame;
        active_keyframes_[frame->keyframe_id_] = frame;
    }

    if (active_keyframes_.size() > num_active_keyframes_) {
        RemoveOldKeyframe();
    }
}
3.2.8.2 InsertMapPoint()
  1. Purpose
    Inserts a map point (landmark).
  2. Declaration
 void InsertMapPoint(MapPoint::Ptr map_point);
  3. Implementation
void Map::InsertMapPoint(MapPoint::Ptr map_point) {
    //typedef std::unordered_map<unsigned long, MapPoint::Ptr> LandmarksType; landmarks_
    if (landmarks_.find(map_point->id_) == landmarks_.end()) {  // id not yet in the map: insert a new entry
        landmarks_.insert(make_pair(map_point->id_, map_point));
        active_landmarks_.insert(make_pair(map_point->id_, map_point));
    } else {  // otherwise overwrite the existing entry
        landmarks_[map_point->id_] = map_point;
        active_landmarks_[map_point->id_] = map_point;
    }
}
3.2.8.3 GetAllMapPoints()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.8.4 GetAllKeyFrames()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.8.5 GetActiveMapPoints()
  1. Purpose
    Returns the active map points.
  2. Declaration
    /// get the active map points
    LandmarksType GetActiveMapPoints() {
        std::unique_lock<std::mutex> lck(data_mutex_);
        return active_landmarks_;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.8.6 GetActiveKeyFrames()
  1. Purpose
    Returns the active keyframes.
  2. Declaration
    /// get the active keyframes
    KeyframesType GetActiveKeyFrames() {
        std::unique_lock<std::mutex> lck(data_mutex_);
        return active_keyframes_;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.8.7 CleanMap()
  1. Purpose
    Removes map points whose observation count has dropped to zero.
  2. Declaration
void CleanMap();
  3. Implementation
void Map::CleanMap() {
    int cnt_landmark_removed = 0;
    for (auto iter = active_landmarks_.begin();
         iter != active_landmarks_.end();) {
        if (iter->second->observed_times_ == 0) {
            iter = active_landmarks_.erase(iter);
            cnt_landmark_removed++;
        } else {
            ++iter;
        }
    }
    LOG(INFO) << "Removed " << cnt_landmark_removed << " active landmarks";
}
3.2.8.8 RemoveOldKeyframe()
  1. Purpose
    Sets an old keyframe inactive (removes it from the active window).
  2. Declaration
void RemoveOldKeyframe();
  3. Implementation
void Map::RemoveOldKeyframe() {
    if (current_frame_ == nullptr) return;
    // find the active keyframes closest to and farthest from the current frame
    double max_dis = 0, min_dis = 9999;
    double max_kf_id = 0, min_kf_id = 0;
    auto Twc = current_frame_->Pose().inverse();
    for (auto& kf : active_keyframes_) {
        if (kf.second == current_frame_) continue;
        // distance metric: relative pose between the keyframe and the current frame, mapped with log() to its se(3) vector, measured by the vector norm
        auto dis = (kf.second->Pose() * Twc).log().norm();
        if (dis > max_dis) {  // update the farthest keyframe
            max_dis = dis;
            max_kf_id = kf.first;
        }
        if (dis < min_dis) {  // update the closest keyframe
            min_dis = dis;
            min_kf_id = kf.first;
        }
    }

    const double min_dis_th = 0.2;  // threshold for "very close"
    Frame::Ptr frame_to_remove = nullptr;  // the keyframe to remove
    if (min_dis < min_dis_th) {
        // if a very close keyframe exists, remove the closest one first
        frame_to_remove = keyframes_.at(min_kf_id);
    } else {
        // otherwise remove the farthest one
        frame_to_remove = keyframes_.at(max_kf_id);
    }

    LOG(INFO) << "remove keyframe " << frame_to_remove->keyframe_id_;
    // remove the keyframe and its landmark observations
    active_keyframes_.erase(frame_to_remove->keyframe_id_);
    for (auto feat : frame_to_remove->features_left_) {
        auto mp = feat->map_point_.lock();
        if (mp) {
            mp->RemoveObservation(feat);
        }
    }
    for (auto feat : frame_to_remove->features_right_) {
        if (feat == nullptr) continue;
        auto mp = feat->map_point_.lock();
        if (mp) {
            mp->RemoveObservation(feat);
        }
    }

    CleanMap();
}

3.2.9 frame.cpp

3.2.9.1 Constructor Frame()

Here pose is T_cw, i.e. the transform that maps points from the world frame into the camera frame.

  1. Purpose
    Constructs the Frame struct.
  2. Declaration
    Frame() {}

    Frame(long id, double time_stamp, const SE3 &pose, const Mat &left,
          const Mat &right);
  3. Implementation
Frame::Frame(long id, double time_stamp, const SE3 &pose, const Mat &left, const Mat &right)
        : id_(id), time_stamp_(time_stamp), pose_(pose), left_img_(left), right_img_(right) {}
3.2.9.2 Pose()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.9.3 SetPose()
  1. Purpose

  2. Declaration

    void SetPose(const SE3 &pose) {
        std::unique_lock<std::mutex> lck(pose_mutex_);
        pose_ = pose;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.9.4 SetKeyFrame()
  1. Purpose
    Marks the frame as a keyframe and assigns a keyframe id.
  2. Declaration
void SetKeyFrame();
  3. Implementation
void Frame::SetKeyFrame() {
    static long keyframe_factory_id = 0;
    is_keyframe_ = true;
    keyframe_id_ = keyframe_factory_id++;
}
3.2.9.5 CreateFrame()
  1. Purpose

  2. Declaration

  3. Implementation
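    The article leaves this subsection empty. By analogy with MapPoint::CreateNewMappoint() shown in 3.2.11.7 below (a static factory handing out increasing ids), a sketch (an assumption, not copied from the repository) is:
Frame::Ptr Frame::CreateFrame() {
    static long factory_id = 0;       // static factory: frames get increasing ids
    Frame::Ptr new_frame(new Frame);
    new_frame->id_ = factory_id++;
    return new_frame;
}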

3.2.10 feature.cpp

3.2.10.1 Constructor Feature()
  1. Purpose

  2. Declaration

    Feature() {}

    Feature(std::shared_ptr<Frame> frame, const cv::KeyPoint &kp)
        : frame_(frame), position_(kp) {}
  3. Implementation
    Defined inline in the declaration.

3.2.11 mappoint.cpp

3.2.11.1 Constructor MapPoint()
  1. Purpose

  2. Declaration

    MapPoint() {}

    MapPoint(long id, Vec3 position);
  3. Implementation
MapPoint::MapPoint(long id, Vec3 position) : id_(id), pos_(position) {}
3.2.11.2 Pos()
  1. Purpose

  2. Declaration

  3. Implementation

3.2.11.3 SetPos()
  1. Purpose

  2. Declaration

    void SetPos(const Vec3 &pos) {
        std::unique_lock<std::mutex> lck(data_mutex_);
        pos_ = pos;
    };
  3. Implementation
    Defined inline in the declaration.
3.2.11.4 AddObservation()
  1. Purpose
    Adds a feature that observes this map point.

  2. Declaration

    void AddObservation(std::shared_ptr<Feature> feature) {
        std::unique_lock<std::mutex> lck(data_mutex_);
        observations_.push_back(feature);
        observed_times_++;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.11.5 RemoveObservation()
  1. Purpose
    Removes a feature from the observations of this map point.

  2. Declaration

    void RemoveObservation(std::shared_ptr<Feature> feat);
  3. Implementation
void MapPoint::RemoveObservation(std::shared_ptr<Feature> feat) {
    std::unique_lock<std::mutex> lck(data_mutex_);
    for (auto iter = observations_.begin(); iter != observations_.end();
         iter++) {
        if (iter->lock() == feat) {
            observations_.erase(iter);
            feat->map_point_.reset();
            observed_times_--;
            break;
        }
    }
}
3.2.11.6 GetObs()
  1. Purpose
    Returns the list of features observing this map point.
  2. Declaration
    std::list<std::weak_ptr<Feature>> GetObs() {
        std::unique_lock<std::mutex> lck(data_mutex_);
        return observations_;
    }
  3. Implementation
    Defined inline in the declaration.
3.2.11.7 CreateNewMappoint()
  1. Purpose

  2. Declaration

static MapPoint::Ptr CreateNewMappoint();
  3. Implementation
MapPoint::Ptr MapPoint::CreateNewMappoint() {
    static long factory_id = 0;
    MapPoint::Ptr new_mappoint(new MapPoint);
    new_mappoint->id_ = factory_id++;
    return new_mappoint;
}

3.2.12 viewer.cpp

3.2.12.1 Constructor Viewer()
  1. Purpose

  2. Declaration

    Viewer();
  3. Implementation
Viewer::Viewer() {
    viewer_thread_ = std::thread(std::bind(&Viewer::ThreadLoop, this));
}
3.2.12.2 SetMap()
  1. Purpose
    Sets the map used by the viewer.
  2. Declaration
void SetMap(Map::Ptr map) { map_ = map; }
3.2.12.3 Close()
  1. Purpose
    Closes the viewer thread.
  2. Declaration
void Close();
  3. Implementation
void Viewer::Close() {
    viewer_running_ = false;
    viewer_thread_.join();
}
3.2.12.4 AddCurrentFrame()
  1. Purpose
    Passes the current frame to the viewer.
  2. Declaration
    void AddCurrentFrame(Frame::Ptr current_frame);
  3. Implementation
void Viewer::AddCurrentFrame(Frame::Ptr current_frame) {
    std::unique_lock<std::mutex> lck(viewer_data_mutex_);
    current_frame_ = current_frame;
}
3.2.12.5 UpdateMap()
  1. Purpose
    Updates the viewer's copy of the map.
  2. Declaration
    void UpdateMap();
  3. Implementation
void Viewer::UpdateMap() {
    std::unique_lock<std::mutex> lck(viewer_data_mutex_);
    assert(map_ != nullptr);
    active_keyframes_ = map_->GetActiveKeyFrames();
    active_landmarks_ = map_->GetActiveMapPoints();
    map_updated_ = true;
}
3.2.12.6 ThreadLoop()
  1. Purpose

  2. Declaration

    void ThreadLoop();
  3. Implementation
void Viewer::ThreadLoop() {
    pangolin::CreateWindowAndBind("MySLAM", 1024, 768);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    pangolin::OpenGlRenderState vis_camera(
        pangolin::ProjectionMatrix(1024, 768, 400, 400, 512, 384, 0.1, 1000),
        pangolin::ModelViewLookAt(0, -5, -10, 0, 0, 0, 0.0, -1.0, 0.0));

    // Add named OpenGL viewport to window and provide 3D Handler
    pangolin::View& vis_display =
        pangolin::CreateDisplay()
            .SetBounds(0.0, 1.0, 0.0, 1.0, -1024.0f / 768.0f)
            .SetHandler(new pangolin::Handler3D(vis_camera));

    const float blue[3] = {0, 0, 1};
    const float green[3] = {0, 1, 0};

    while (!pangolin::ShouldQuit() && viewer_running_) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        vis_display.Activate(vis_camera);

        std::unique_lock<std::mutex> lock(viewer_data_mutex_);
        if (current_frame_) {
            DrawFrame(current_frame_, green);
            FollowCurrentFrame(vis_camera);

            cv::Mat img = PlotFrameImage();
            cv::imshow("image", img);
            cv::waitKey(1);
        }

        if (map_) {
            DrawMapPoints();
        }

        pangolin::FinishFrame();
        usleep(5000);
    }

    LOG(INFO) << "Stop viewer";
}
3.2.12.7 DrawFrame()
3.2.12.8 DrawMapPoints()
3.2.12.9 FollowCurrentFrame()
3.2.12.10 PlotFrameImage()
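These drawing helpers are not reproduced in the article. As one example, a plausible sketch of PlotFrameImage() (the image shown with cv::imshow in ThreadLoop() above) simply draws the tracked left-image features on the current left image; treat it as an assumption rather than the repository code:

cv::Mat Viewer::PlotFrameImage() {
    cv::Mat img_out;
    cv::cvtColor(current_frame_->left_img_, img_out, cv::COLOR_GRAY2BGR);  // grayscale -> BGR so we can draw in color
    for (auto &feat : current_frame_->features_left_) {
        if (feat->map_point_.lock()) {  // only draw features associated with a map point
            cv::circle(img_out, feat->position_.pt, 2, cv::Scalar(0, 250, 0), 2);
        }
    }
    return img_out;
}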

3.2.13 g2o_types.h

3.2.13.1 Pose vertex: VertexPose
/// pose vertex
class VertexPose : public g2o::BaseVertex<6, SE3> {
   public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    virtual void setToOriginImpl() override { _estimate = SE3(); }

    /// left multiplication on SE3
    virtual void oplusImpl(const double *update) override {
        Vec6 update_eigen;
        update_eigen << update[0], update[1], update[2], update[3], update[4],
            update[5];
        _estimate = SE3::exp(update_eigen) * _estimate;
    }

    virtual bool read(std::istream &in) override { return true; }

    virtual bool write(std::ostream &out) const override { return true; }
};
3.2.13.2 Landmark vertex: VertexXYZ
class VertexXYZ : public g2o::BaseVertex<3, Vec3> {
   public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;
    virtual void setToOriginImpl() override { _estimate = Vec3::Zero(); }

    virtual void oplusImpl(const double *update) override {
        _estimate[0] += update[0];
        _estimate[1] += update[1];
        _estimate[2] += update[2];
    }

    virtual bool read(std::istream &in) override { return true; }

    virtual bool write(std::ostream &out) const override { return true; }
};
3.2.13.3 Pose-only unary edge: EdgeProjectionPoseOnly
class EdgeProjectionPoseOnly : public g2o::BaseUnaryEdge<2, Vec2, VertexPose> {
   public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    EdgeProjectionPoseOnly(const Vec3 &pos, const Mat33 &K)
        : _pos3d(pos), _K(K) {}

    virtual void computeError() override {
        const VertexPose *v = static_cast<VertexPose *>(_vertices[0]);
        SE3 T = v->estimate();
        Vec3 pos_pixel = _K * (T * _pos3d);
        pos_pixel /= pos_pixel[2];
        _error = _measurement - pos_pixel.head<2>();
    }

    virtual void linearizeOplus() override {
        const VertexPose *v = static_cast<VertexPose *>(_vertices[0]);
        SE3 T = v->estimate();
        Vec3 pos_cam = T * _pos3d;
        double fx = _K(0, 0);
        double fy = _K(1, 1);
        double X = pos_cam[0];
        double Y = pos_cam[1];
        double Z = pos_cam[2];
        double Zinv = 1.0 / (Z + 1e-18);
        double Zinv2 = Zinv * Zinv;
        _jacobianOplusXi << -fx * Zinv, 0, fx * X * Zinv2, fx * X * Y * Zinv2,
            -fx - fx * X * X * Zinv2, fx * Y * Zinv, 0, -fy * Zinv,
            fy * Y * Zinv2, fy + fy * Y * Y * Zinv2, -fy * X * Y * Zinv2,
            -fy * X * Zinv;
    }

    virtual bool read(std::istream &in) override { return true; }

    virtual bool write(std::ostream &out) const override { return true; }

   private:
    Vec3 _pos3d;
    Mat33 _K;
};
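Written out, the 2x6 matrix that linearizeOplus() assigns to _jacobianOplusXi is the standard Jacobian of the reprojection error with respect to the left perturbation delta-xi of T (translation components first, then rotation, as Sophus stores the tangent vector), with (X, Y, Z) = T * _pos3d the point in the camera frame:

\frac{\partial e}{\partial \delta\xi} =
-\begin{bmatrix}
\dfrac{f_x}{Z} & 0 & -\dfrac{f_x X}{Z^2} & -\dfrac{f_x X Y}{Z^2} & f_x + \dfrac{f_x X^2}{Z^2} & -\dfrac{f_x Y}{Z} \\
0 & \dfrac{f_y}{Z} & -\dfrac{f_y Y}{Z^2} & -f_y - \dfrac{f_y Y^2}{Z^2} & \dfrac{f_y X Y}{Z^2} & \dfrac{f_y X}{Z}
\end{bmatrix}

which is exactly the element-by-element pattern filled in by the code above.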
3.2.13.4 Binary edge with landmark and pose: EdgeProjection
/// binary edge connecting a landmark and a pose
class EdgeProjection
    : public g2o::BaseBinaryEdge<2, Vec2, VertexPose, VertexXYZ> {
   public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW;

    /// the camera intrinsics and extrinsics are passed in at construction
    EdgeProjection(const Mat33 &K, const SE3 &cam_ext) : _K(K) {
        _cam_ext = cam_ext;
    }

    virtual void computeError() override {
        const VertexPose *v0 = static_cast<VertexPose *>(_vertices[0]);
        const VertexXYZ *v1 = static_cast<VertexXYZ *>(_vertices[1]);
        SE3 T = v0->estimate();
        Vec3 pos_pixel = _K * (_cam_ext * (T * v1->estimate()));
        pos_pixel /= pos_pixel[2];
        _error = _measurement - pos_pixel.head<2>();
    }

    virtual void linearizeOplus() override {
        const VertexPose *v0 = static_cast<VertexPose *>(_vertices[0]);
        const VertexXYZ *v1 = static_cast<VertexXYZ *>(_vertices[1]);
        SE3 T = v0->estimate();
        Vec3 pw = v1->estimate();
        Vec3 pos_cam = _cam_ext * T * pw;
        double fx = _K(0, 0);
        double fy = _K(1, 1);
        double X = pos_cam[0];
        double Y = pos_cam[1];
        double Z = pos_cam[2];
        double Zinv = 1.0 / (Z + 1e-18);
        double Zinv2 = Zinv * Zinv;
        _jacobianOplusXi << -fx * Zinv, 0, fx * X * Zinv2, fx * X * Y * Zinv2,
            -fx - fx * X * X * Zinv2, fx * Y * Zinv, 0, -fy * Zinv,
            fy * Y * Zinv2, fy + fy * Y * Y * Zinv2, -fy * X * Y * Zinv2,
            -fy * X * Zinv;

        _jacobianOplusXj = _jacobianOplusXi.block<2, 3>(0, 0) *
                           _cam_ext.rotationMatrix() * T.rotationMatrix();
    }

    virtual bool read(std::istream &in) override { return true; }

    virtual bool write(std::ostream &out) const override { return true; }

   private:
    Mat33 _K;
    SE3 _cam_ext;
};

3.2.14 algorithm.h

3.2.14.1 triangulation()
/**
 * linear triangulation with SVD
 * @param poses     poses,
 * @param points    points in normalized plane
 * @param pt_world  triangulated point in the world
 * @return true if success
 */
inline bool triangulation(const std::vector<SE3> &poses,
                   const std::vector<Vec3> points, Vec3 &pt_world) {
    MatXX A(2 * poses.size(), 4);
    VecX b(2 * poses.size());
    b.setZero();
    for (size_t i = 0; i < poses.size(); ++i) {
        Mat34 m = poses[i].matrix3x4();
        A.block<1, 4>(2 * i, 0) = points[i][0] * m.row(2) - m.row(0);
        A.block<1, 4>(2 * i + 1, 0) = points[i][1] * m.row(2) - m.row(1);
    }
    auto svd = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV);
    pt_world = (svd.matrixV().col(3) / svd.matrixV()(3, 3)).head<3>();

    if (svd.singularValues()[3] / svd.singularValues()[2] < 1e-2) {
        // the smallest singular value is much smaller than the next one,
        // so the solution is well conditioned: accept it
        return true;
    }
    // otherwise the solution quality is poor: give up
    return false;
}
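The linear system assembled above is the usual DLT triangulation: with P_i the 3x4 matrix of pose i (rows p_i^1, p_i^2, p_i^3) and (u_i, v_i) the observation on the normalized plane, eliminating the unknown depth gives two linear constraints per view on the homogeneous point X:

u_i \, p_i^{3\top} X - p_i^{1\top} X = 0, \qquad
v_i \, p_i^{3\top} X - p_i^{2\top} X = 0

Stacking them gives A X = 0, whose least-squares solution is the right singular vector of A with the smallest singular value (the last column of V, normalized so that its fourth component is 1, as done above). The ratio check on the two smallest singular values tests whether A is effectively of rank 3, i.e. whether the triangulation is well conditioned.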

4 Running the Project

4.1 First set the dataset path

  • Edit config/default.yaml and set the dataset path to wherever you downloaded the KITTI sequence, e.g. dataset_dir: /home/bupo/bag_file/data_odometry_gray/dataset/sequences/05 (a sketch of the whole file is given below).
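  • For reference, the keys read through Config::Get in the code above are dataset_dir, num_features and num_features_init. A sketch of what config/default.yaml looks like (the numeric values here are illustrative assumptions; only dataset_dir needs to be adapted to your own path):
%YAML:1.0
# KITTI odometry sequence used as input
dataset_dir: /home/bupo/bag_file/data_odometry_gray/dataset/sequences/05
# number of GFTT features per frame and the number required for stereo initialization
num_features: 150
num_features_init: 50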

5 Modification: Viewing the Complete Map After the Run

Not implemented yet.
