[Learning SLAM] A Streamlined, ROS-Free Version of the VINS-Mono Code (Shenlan College VIO Course Notes)


Code Framework

Data Publishing Functions

1. IMU Data Publishing

2. Image Data Publishing

2_1. PubImageData (first-frame handling, stream-stability check, frequency control ==> trackerData[0].readImage ==> feature_buf.push(feature_points))

Back-End Optimization Functions

Estimator::processIMU

Estimator::processImage

sfm.construct

Visual-Inertial Joint Initialization

1. Computing gyroscope bias, scale, gravity, and velocities

Initialization of gyroscope bias, scale, gravity, and velocity

Gyroscope bias calibration: solveGyroscopeBias()

Velocity/gravity/scale initialization: LinearAlignment()

Gravity refinement: RefineGravity()

Extrinsic Calibration

CalibrationExRotation(): calibrating the extrinsic rotation matrix

Marginalization in the Back End: the Marginalization Strategy

Updating the Prior Residual


Code Framework

The code explained here is the VIO code provided by the Shenlan College course: a streamlined version of HKUST's VINS-Mono that keeps the main initialization, front-end, and back-end optimization modules, with no dependency on ROS, which makes it very helpful for studying the VINS system in depth. The main parts of the code are explained below; corrections are welcome wherever my understanding is wrong. First, the overall system flowchart:
(Figure: overall system flowchart)

Main Function

The test main function for the EuRoC dataset is run_euroc.cpp. It first creates the back-end thread:

// create the back-end processing thread
std::thread thd_BackEnd(&System::ProcessBackEnd, pSystem);

and then creates the IMU publishing, image publishing, and drawing threads in turn:

// create the publisher and drawing threads
std::thread thd_PubImuData(PubImuData);
std::thread thd_PubImageData(PubImageData);
std::thread thd_Draw(&System::Draw, pSystem);
thd_PubImuData.join(); // block until the publisher threads have finished
thd_PubImageData.join();

Data Publishing Functions

1. IMU Data Publishing

void PubImuData()
{
  string sImu_data_file = sConfig_path + "MH_05_imu0.txt";
  cout << "1 PubImuData start sImu_data_filea: " << sImu_data_file << endl;
  ifstream fsImu;
  fsImu.open(sImu_data_file.c_str());
  if (!fsImu.is_open())
  {
  	cerr << "Failed to open imu file! " << sImu_data_file << endl;
  	return;
  }
  std::string sImu_line;
  double dStampNSec = 0.0;
  Vector3d vAcc;
  Vector3d vGyr;
  while (std::getline(fsImu, sImu_line) && !sImu_line.empty()) // read imu data
  {
  	std::istringstream ssImuData(sImu_line);
  	ssImuData >> dStampNSec >> vGyr.x() >> vGyr.y() >> vGyr.z() >> vAcc.x() >> vAcc.y() >> vAcc.z();
  	// cout << "Imu t: " << fixed << dStampNSec << " gyr: " << vGyr.transpose() << " acc: " << vAcc.transpose() << endl;
  	pSystem->PubImuData(dStampNSec / 1e9, vGyr, vAcc);
  	usleep(5000*nDelayTimes);
  }
  fsImu.close();
}

The code is tested on the MH-05 dataset. Unlike subscribing to ROS messages, the data is read from a file; each line holds a timestamp, the 3-axis angular velocity, and the 3-axis linear acceleration. Inside System::PubImuData the parsed sample is stored into imu_buf:

shared_ptr<IMU_MSG> imu_msg(new IMU_MSG());
imu_msg->header = dStampSec;
imu_msg->linear_acceleration = vAcc;
imu_msg->angular_velocity = vGyr;
// ...
imu_buf.push(imu_msg);

2. Image Data Publishing

void PubImageData()
{
	string sImage_file = sConfig_path + "MH_05_cam0.txt";

	cout << "1 PubImageData start sImage_file: " << sImage_file << endl;

	ifstream fsImage;
	fsImage.open(sImage_file.c_str());
	if (!fsImage.is_open())
	{
		cerr << "Failed to open image file! " << sImage_file << endl;
		return;
	}

	std::string sImage_line;
	double dStampNSec;
	string sImgFileName;
	
	// cv::namedWindow("SOURCE IMAGE", CV_WINDOW_AUTOSIZE);
	while (std::getline(fsImage, sImage_line) && !sImage_line.empty())
	{
		std::istringstream ssImuData(sImage_line);
		ssImuData >> dStampNSec >> sImgFileName;
		// cout << "Image t : " << fixed << dStampNSec << " Name: " << sImgFileName << endl;
		string imagePath = sData_path + "cam0/data/" + sImgFileName;

		Mat img = imread(imagePath.c_str(), 0);
		if (img.empty())
		{
			cerr << "image is empty! path: " << imagePath << endl;
			return;
		}
		pSystem->PubImageData(dStampNSec / 1e9, img);
		// cv::imshow("SOURCE IMAGE", img);
		// cv::waitKey(0);
		usleep(50000*nDelayTimes);
	}
	fsImage.close();
}

This reads the MH_05_cam0.txt file from the config folder, loads the listed images one by one, and publishes them. The key lies in the publishing function:

2_1. PubImageData (first-frame handling, stream-stability check, frequency control ==> trackerData[0].readImage ==> feature_buf.push(feature_points))

void System::PubImageData(double dStampSec, Mat &img)

First, check init_feature: if it is not yet set, set it to 1 and return, skipping the first detected features because they carry no optical-flow velocity yet.

if (!init_feature)
    {
        cout << "1 PubImageData skip the first detected feature, which doesn't contain optical flow speed" << endl;
        init_feature = 1;
        return;
    }

Then check first_image_flag: if it is set, clear it and initialize both the first-image timestamp and the last-image timestamp to the current timestamp.

if (first_image_flag)
    {
        cout << "2 PubImageData first_image_flag" << endl;
        first_image_flag = false;
        first_image_time = dStampSec;
        last_image_time = dStampSec;
        return;
    }

Next, detect an unstable image stream: if the current timestamp minus last_image_time is greater than 1 s, or the current timestamp is smaller than last_image_time, the stream is considered discontinuous; reset first_image_flag to true, set last_image_time to 0, and set the publish count to 1.

if (dStampSec - last_image_time > 1.0 || dStampSec < last_image_time)
    {
        cerr << "3 PubImageData image discontinue! reset the feature tracker!" << endl;
        first_image_flag = true;
        last_image_time = 0;
        pub_count = 1;
        return;
    }
    last_image_time = dStampSec;

Next comes publishing-frequency control, which is needed when reading from files (ORB-SLAM does the same).
If the number of frames published so far divided by the elapsed time is at most FREQ, this frame is published; otherwise it is skipped. Additionally, if the achieved rate is within 0.01·FREQ of FREQ, the frequency bookkeeping is restarted from the current frame, which keeps the rate always at or below FREQ. For example, with FREQ = 10, a frame is published only while pub_count / (dStampSec - first_image_time) is still at most 10 Hz.

if (round(1.0 * pub_count / (dStampSec - first_image_time)) <= FREQ)
    {
        PUB_THIS_FRAME = true;
        // reset the frequency control
        if (abs(1.0 * pub_count / (dStampSec - first_image_time) - FREQ) < 0.01 * FREQ)
        {
            first_image_time = dStampSec;
            pub_count = 0;
        }
    }
    else
    {
        PUB_THIS_FRAME = false;
    }

Next comes

trackerData[0].readImage(img, dStampSec);

This is implemented in the FeatureTracker class and does the following: cv::CLAHE-based image enhancement; if forw_img is empty (first image), the incoming img is copied into prev_img, cur_img, and forw_img (the meaning of the three images is explained below); otherwise the incoming image is copied into forw_img only. Then forw_pts is cleared.
If cur_pts is non-empty, the pixel optical flow between cur_img and forw_img is computed. If the current frame is to be published, it first executes

rejectWithF(); // reject outliers via fundamental-matrix RANSAC
setMask();     // (for fisheye cameras) filter with a mask: count how often each feature has been tracked, prefer long-tracked features, and apply non-maximum suppression around them
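For reference, a simplified sketch of rejectWithF() (the actual VINS code first lifts the points through the camera model onto a virtual pinhole with focal length FOCAL_LENGTH before running the RANSAC; reduceVector compacts a vector by a status mask):

void FeatureTracker::rejectWithF()
{
    if (forw_pts.size() >= 8)
    {
        vector<uchar> status;
        // RANSAC on the fundamental matrix; status marks the inliers
        cv::findFundamentalMat(cur_pts, forw_pts, cv::FM_RANSAC, F_THRESHOLD, 0.99, status);
        // drop outliers from every parallel bookkeeping array
        reduceVector(prev_pts, status);
        reduceVector(cur_pts, status);
        reduceVector(forw_pts, status);
        reduceVector(cur_un_pts, status);
        reduceVector(ids, status);
        reduceVector(track_cnt, status);
    }
}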

Then new features are extracted; the number to extract is the maximum feature count minus the number of features already being tracked:

cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.01, MIN_DIST, mask);
// ...
addPoints(); // append the newly extracted points to forw_pts, ids, and track_cnt

Finally comes the bookkeeping update; note that the two frames used for the optical flow are cur_img and forw_img:

    prev_img = cur_img;
    prev_pts = cur_pts;
    prev_un_pts = cur_un_pts;
    cur_img = forw_img;
    cur_pts = forw_pts;
    undistortedPoints();
    prev_time = cur_time;

That is everything readImage does. (In this three-image bookkeeping, forw_img is the newest incoming frame, cur_img the previously processed frame, and prev_img the last published frame.)


Then the feature ids of the image are updated. If this frame is published, all of its feature points are pushed into feature_buf:

if (PUB_THIS_FRAME)
    {
        pub_count++;
        shared_ptr<IMG_MSG> feature_points(new IMG_MSG());
        feature_points->header = dStampSec;
        vector<set<int>> hash_ids(NUM_OF_CAM);
        for (int i = 0; i < NUM_OF_CAM; i++)
        {
            auto &un_pts = trackerData[i].cur_un_pts;
            auto &cur_pts = trackerData[i].cur_pts;
            auto &ids = trackerData[i].ids;
            auto &pts_velocity = trackerData[i].pts_velocity;
            for (unsigned int j = 0; j < ids.size(); j++)
            {
                if (trackerData[i].track_cnt[j] > 1)
                {
                    int p_id = ids[j];
                    hash_ids[i].insert(p_id);
                    double x = un_pts[j].x;
                    double y = un_pts[j].y;
                    double z = 1;
                    feature_points->points.push_back(Vector3d(x, y, z));
                    feature_points->id_of_point.push_back(p_id * NUM_OF_CAM + i);
                    feature_points->u_of_point.push_back(cur_pts[j].x);
                    feature_points->v_of_point.push_back(cur_pts[j].y);
                    feature_points->velocity_x_of_point.push_back(pts_velocity[j].x);
                    feature_points->velocity_y_of_point.push_back(pts_velocity[j].y);
                }
            }
            // skip the first image, since there is no optical-flow speed on the first image
            if (!init_pub)
            {
                cout << "4 PubImage init_pub skip the first image!" << endl;
                init_pub = 1;
            }
            else
            {
                m_buf.lock();
                feature_buf.push(feature_points);
                // cout << "5 PubImage t : " << fixed << feature_points->header
                //     << " feature_buf size: " << feature_buf.size() << endl;
                m_buf.unlock();
                con.notify_one();
            }
        }
    }

    cv::Mat show_img;
	cv::cvtColor(img, show_img, CV_GRAY2RGB);
	if (SHOW_TRACK)
	{
		for (unsigned int j = 0; j < trackerData[0].cur_pts.size(); j++)
        {
			double len = min(1.0, 1.0 * trackerData[0].track_cnt[j] / WINDOW_SIZE);
			cv::circle(show_img, trackerData[0].cur_pts[j], 2, cv::Scalar(255 * (1 - len), 0, 255 * len), 2);
		}

        cv::namedWindow("IMAGE", CV_WINDOW_AUTOSIZE);
		cv::imshow("IMAGE", show_img);
        cv::waitKey(1);
	}

Note that when the feature points are displayed, the feature color is correlated with how long each feature has been tracked.
The two threads above read and publish the IMU data and the image data.


Back-End Optimization Functions

The back-end optimization function is introduced below:

System::ProcessBackEnd();

First, the measurements, image features plus IMU data, are fetched. The buffers are accessed under a mutex with a condition variable [unique_lock<mutex> lk; con.wait(lk, [&] { return ...; });]:

vector<pair<vector<ImuConstPtr>, ImgConstPtr>> measurements;
        
        unique_lock<mutex> lk(m_buf);
        con.wait(lk, [&] {
            return (measurements = getMeasurements()).size() != 0;
        });
        if( measurements.size() > 1){
        cout << "1 getMeasurements size: " << measurements.size() 
            << " imu sizes: " << measurements[0].first.size()
            << " feature_buf size: " <<  feature_buf.size()
            << " imu_buf size: " << imu_buf.size() << endl;
        }
        lk.unlock();
        m_estimator.lock();
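The helper getMeasurements() is what pairs each image with its IMU samples. A sketch of its logic (reconstructed from the VINS-Mono source; the td time-offset terms may be absent in the simplified course code):

vector<pair<vector<ImuConstPtr>, ImgConstPtr>> System::getMeasurements()
{
    vector<pair<vector<ImuConstPtr>, ImgConstPtr>> measurements;
    while (true)
    {
        if (imu_buf.empty() || feature_buf.empty())
            return measurements;
        // wait until the IMU data extends past the oldest image
        if (imu_buf.back()->header <= feature_buf.front()->header + estimator.td)
            return measurements;
        // drop images older than the first IMU sample
        if (imu_buf.front()->header >= feature_buf.front()->header + estimator.td)
        {
            feature_buf.pop();
            continue;
        }
        ImgConstPtr img_msg = feature_buf.front();
        feature_buf.pop();
        vector<ImuConstPtr> IMUs;
        // collect all IMU samples up to the image time...
        while (imu_buf.front()->header < img_msg->header + estimator.td)
        {
            IMUs.emplace_back(imu_buf.front());
            imu_buf.pop();
        }
        // ...plus one sample just past it, for the interpolation below
        IMUs.emplace_back(imu_buf.front());
        measurements.emplace_back(IMUs, img_msg);
    }
    return measurements;
}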

Then, for each measurement: if the IMU timestamp t is smaller than the image timestamp, set current_time to t, read the angular velocity and linear acceleration directly, and execute

estimator.processIMU(dt, Vector3d(dx, dy, dz), Vector3d(rx, ry, rz));

If t is greater than the image timestamp img_t, first compute the interval from current_time to img_t and from img_t to t, set current_time to img_t, and feed a linearly interpolated IMU sample to the subsequent processing. Note that the smaller the gap from img_t to t, the larger the weight w2 given to the measurement taken at t. The code:


double dt_1 = img_t - current_time;
double dt_2 = t - img_t;
current_time = img_t;
assert(dt_1 >= 0);
assert(dt_2 >= 0);
assert(dt_1 + dt_2 > 0);
double w1 = dt_2 / (dt_1 + dt_2);
double w2 = dt_1 / (dt_1 + dt_2);
dx = w1 * dx + w2 * imu_msg->linear_acceleration.x();
dy = w1 * dy + w2 * imu_msg->linear_acceleration.y();
dz = w1 * dz + w2 * imu_msg->linear_acceleration.z();
rx = w1 * rx + w2 * imu_msg->angular_velocity.x();
ry = w1 * ry + w2 * imu_msg->angular_velocity.y();
rz = w1 * rz + w2 * imu_msg->angular_velocity.z();
estimator.processIMU(dt_1, Vector3d(dx, dy, dz), Vector3d(rx, ry, rz));

Then, for the image data, read the space-point coordinates, pixel coordinates, and point velocities of every feature, and execute

estimator.processImage(image, img_msg->header);


Estimator::processIMU

The IMU-processing function is introduced below. It computes the preintegration, and propagates the window states, with the midpoint rule; the corresponding formulas follow.
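The formulas, reconstructed here to match the propagation code below ($\delta q(\cdot)$ turns a small rotation vector into a quaternion; after the third line, $R_j$ denotes the already-updated rotation):

$$
\begin{aligned}
\hat a_0 &= R_j\,(a_k - b_a) - g, \qquad \hat\omega = \tfrac{1}{2}(\omega_k + \omega_{k+1}) - b_g, \\
R_j &\leftarrow R_j\,\delta q(\hat\omega\,\delta t), \qquad \hat a_1 = R_j\,(a_{k+1} - b_a) - g, \qquad \hat a = \tfrac{1}{2}(\hat a_0 + \hat a_1), \\
P_j &\leftarrow P_j + V_j\,\delta t + \tfrac{1}{2}\,\hat a\,\delta t^2, \qquad V_j \leftarrow V_j + \hat a\,\delta t
\end{aligned}
$$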

void Estimator::processIMU(double dt, const Vector3d &linear_acceleration, const Vector3d &angular_velocity)
{
    if (!first_imu)
    {
        first_imu = true;
        acc_0 = linear_acceleration;
        gyr_0 = angular_velocity;
    }

    if (!pre_integrations[frame_count])
    {
        pre_integrations[frame_count] = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};
    }
    if (frame_count != 0)
    {
        pre_integrations[frame_count]->push_back(dt, linear_acceleration, angular_velocity);
        //if(solver_flag != NON_LINEAR)
        tmp_pre_integration->push_back(dt, linear_acceleration, angular_velocity);

        dt_buf[frame_count].push_back(dt);
        linear_acceleration_buf[frame_count].push_back(linear_acceleration);
        angular_velocity_buf[frame_count].push_back(angular_velocity);

        int j = frame_count;
        Vector3d un_acc_0 = Rs[j] * (acc_0 - Bas[j]) - g;
        Vector3d un_gyr = 0.5 * (gyr_0 + angular_velocity) - Bgs[j];
        Rs[j] *= Utility::deltaQ(un_gyr * dt).toRotationMatrix();
        Vector3d un_acc_1 = Rs[j] * (linear_acceleration - Bas[j]) - g;
        Vector3d un_acc = 0.5 * (un_acc_0 + un_acc_1);
        Ps[j] += dt * Vs[j] + 0.5 * dt * dt * un_acc;
        Vs[j] += dt * un_acc;
    }
    acc_0 = linear_acceleration;
    gyr_0 = angular_velocity;
}

Estimator::processImage

The image-processing function is introduced below. Once initialization has succeeded, each call runs one nonlinear optimization; before that, only the sliding-window operation is performed. With solver_flag == NON_LINEAR, the nonlinear optimization runs via solveOdometry(); the function then checks whether the system has failed and resets the estimator if so, and slides the window with slideWindow(). The code:

void Estimator::processImage(const map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> &image, double header)
{
    //ROS_DEBUG("new image coming ------------------------------------------");
    // cout << "Adding feature points: " << image.size()<<endl;
    if (f_manager.addFeatureCheckParallax(frame_count, image, td))
        marginalization_flag = MARGIN_OLD;
    else
        marginalization_flag = MARGIN_SECOND_NEW;

    //ROS_DEBUG("this frame is--------------------%s", marginalization_flag ? "reject" : "accept");
    //ROS_DEBUG("%s", marginalization_flag ? "Non-keyframe" : "Keyframe");
    //ROS_DEBUG("Solving %d", frame_count);
    // cout << "number of feature: " << f_manager.getFeatureCount()<<endl;
    Headers[frame_count] = header;

    ImageFrame imageframe(image, header);
    imageframe.pre_integration = tmp_pre_integration;
    all_image_frame.insert(make_pair(header, imageframe));
    tmp_pre_integration = new IntegrationBase{acc_0, gyr_0, Bas[frame_count], Bgs[frame_count]};

    if (ESTIMATE_EXTRINSIC == 2)
    {
        cout << "calibrating extrinsic param, rotation movement is needed" << endl;
        if (frame_count != 0)
        {
            vector<pair<Vector3d, Vector3d>> corres = f_manager.getCorresponding(frame_count - 1, frame_count);
            Matrix3d calib_ric;
            if (initial_ex_rotation.CalibrationExRotation(corres, pre_integrations[frame_count]->delta_q, calib_ric))
            {
                // ROS_WARN("initial extrinsic rotation calib success");
                // ROS_WARN_STREAM("initial extrinsic rotation: " << endl
                                                            //    << calib_ric);
                ric[0] = calib_ric;
                RIC[0] = calib_ric;
                ESTIMATE_EXTRINSIC = 1;
            }
        }
    }

    if (solver_flag == INITIAL)
    {
        if (frame_count == WINDOW_SIZE)
        {
            bool result = false;
            if (ESTIMATE_EXTRINSIC != 2 && (header - initial_timestamp) > 0.1)
            {
                // cout << "1 initialStructure" << endl;
                result = initialStructure();
                initial_timestamp = header;
            }
            if (result)
            {
                solver_flag = NON_LINEAR;
                solveOdometry();
                slideWindow();
                f_manager.removeFailures();
                cout << "Initialization finish!" << endl;
                last_R = Rs[WINDOW_SIZE];
                last_P = Ps[WINDOW_SIZE];
                last_R0 = Rs[0];
                last_P0 = Ps[0];
            }
            else
                slideWindow();
        }
        else
            frame_count++;
    }
    else
    {
        TicToc t_solve;
        solveOdometry();
        //ROS_DEBUG("solver costs: %fms", t_solve.toc());

        if (failureDetection())
        {
            // ROS_WARN("failure detection!");
            failure_occur = 1;
            clearState();
            setParameter();
            // ROS_WARN("system reboot!");
            return;
        }

        TicToc t_margin;
        slideWindow();
        f_manager.removeFailures();
        //ROS_DEBUG("marginalization costs: %fms", t_margin.toc());
        // prepare output of VINS
        key_poses.clear();
        for (int i = 0; i <= WINDOW_SIZE; i++)
            key_poses.push_back(Ps[i]);

        last_R = Rs[WINDOW_SIZE];
        last_P = Ps[WINDOW_SIZE];
        last_R0 = Rs[0];
        last_P0 = Ps[0];
    }
}

This contains the initialization logic. First check whether the solver is still in the INITIAL state; if so, check whether frame_count has reached the preset window size, and then whether to attempt initialization. The (joint) initialization function is

result = initialStructure();

Inside, the observability of the IMU is checked first: the standard deviation of the linear acceleration determines whether the IMU is sufficiently excited for initialization [insufficient excitation would call for a static initialization]. Note that in this code the return false on failure is commented out, so initialization proceeds regardless:

{
        map<double, ImageFrame>::iterator frame_it;
        Vector3d sum_g;
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            sum_g += tmp_g;
        }
        Vector3d aver_g;
        aver_g = sum_g * 1.0 / ((int)all_image_frame.size() - 1);
        double var = 0;
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            var += (tmp_g - aver_g).transpose() * (tmp_g - aver_g);
            //cout << "frame g " << tmp_g.transpose() << endl;
        }
        var = sqrt(var / ((int)all_image_frame.size() - 1));
        //ROS_WARN("IMU variation %f!", var);
        if (var < 0.25)
        {
            // ROS_INFO("IMU excitation not enouth!");
            //return false;
        }
    }

Then the global SfM is run. First, all features in f_manager are copied into the vector<SFMFeature> sfm_f, as below. SFMFeature holds the per-feature information:

struct SFMFeature
{
    bool state;   // whether the feature has been triangulated
    int id;
    vector<pair<int,Vector2d>> observation; // IDs of all frames observing this feature, with the image coordinates
    double position[3]; // 3D position
    double depth;       // depth
};

    Quaterniond Q[frame_count + 1];
    Vector3d T[frame_count + 1];
    map<int, Vector3d> sfm_tracked_points;
    vector<SFMFeature> sfm_f;
    for (auto &it_per_id : f_manager.feature)
    {
        int imu_j = it_per_id.start_frame - 1;
        SFMFeature tmp_feature;
        tmp_feature.state = false;
        tmp_feature.id = it_per_id.feature_id;
        for (auto &it_per_frame : it_per_id.feature_per_frame)
        {
            imu_j++;
            Vector3d pts_j = it_per_frame.point;
            tmp_feature.observation.push_back(make_pair(imu_j, Eigen::Vector2d{pts_j.x(), pts_j.y()}));
        }
        sfm_f.push_back(tmp_feature);
    }

Then relativePose is called to check whether some image frame has produced enough translation relative to the current frame [insufficient motion would call for static initialization], so that a relative pose can be computed. For each frame in the window, getCorresponding fetches the feature correspondences:

vector<pair<Vector3d, Vector3d>> FeatureManager::getCorresponding(int frame_count_l, int frame_count_r)
{
    vector<pair<Vector3d, Vector3d>> corres;
    for (auto &it : feature)
    {
        if (it.start_frame <= frame_count_l && it.endFrame() >= frame_count_r)
        {
            Vector3d a = Vector3d::Zero(), b = Vector3d::Zero();
            int idx_l = frame_count_l - it.start_frame;
            int idx_r = frame_count_r - it.start_frame;

            a = it.feature_per_frame[idx_l].point;

            b = it.feature_per_frame[idx_r].point;
            
            corres.push_back(make_pair(a, b));
        }
    }
    return corres;
}

For every tracked feature whose frame span covers [i, WINDOW_SIZE], its point in frame i is assigned to a and its point in frame WINDOW_SIZE to b; the pair is stored as a correspondence and the list is returned.

Next, if more than 20 correspondences are found, the average point displacement is computed; if it exceeds 30/460 and a relative rotation and translation can be solved, the frame index is kept and true is returned, meaning the relative pose can be computed. The frame l found this way is the first frame, scanning from the start of the sliding window, whose average parallax with the current frame is large enough; it serves as the reference frame for the global SfM below, and the resulting R, t is the coordinate transform from the current frame to frame l. A sketch of the function follows.
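A sketch of relativePose(), reconstructed from the VINS-Mono source (solveRelativeRT recovers R, t from the essential matrix of the correspondences; 460 acts as a virtual focal length, so the threshold corresponds to roughly 30 pixels of parallax):

bool Estimator::relativePose(Matrix3d &relative_R, Vector3d &relative_T, int &l)
{
    // scan the window for the first frame with enough parallax w.r.t. the newest frame
    for (int i = 0; i < WINDOW_SIZE; i++)
    {
        vector<pair<Vector3d, Vector3d>> corres = f_manager.getCorresponding(i, WINDOW_SIZE);
        if (corres.size() > 20)
        {
            double sum_parallax = 0;
            for (auto &c : corres)
                sum_parallax += (Vector2d(c.first(0), c.first(1)) -
                                 Vector2d(c.second(0), c.second(1))).norm();
            double average_parallax = sum_parallax / int(corres.size());
            if (average_parallax * 460 > 30 &&
                m_estimator.solveRelativeRT(corres, relative_R, relative_T))
            {
                l = i; // frame i becomes the reference frame l
                return true;
            }
        }
    }
    return false;
}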
Then sfm.construct() is called for the 3D reconstruction, with the call site below. If the SfM fails, the marginalization flag is set to MARGIN_OLD and false is returned, so initialization fails. Solving the SfM problem for every frame in the window yields each frame's rotation quaternion Q and translation T relative to the reference frame, plus the feature positions sfm_tracked_points.

Matrix3d relative_R;
    Vector3d relative_T;
    int l;
    if (!relativePose(relative_R, relative_T, l))
    {
        cout << "Not enough features or parallax; Move device around" << endl;
        return false;
    }
    GlobalSFM sfm;
    if (!sfm.construct(frame_count + 1, Q, T, l,
                       relative_R, relative_T,
                       sfm_f, sfm_tracked_points))
    {
        cout << "global SFM failed!" << endl;
        marginalization_flag = MARGIN_OLD;
        return false;
    }

Then, for all image frames, including those not in the sliding window, an initial R, T estimate is provided and refined with solvePnP:

 map<double, ImageFrame>::iterator frame_it;
    map<int, Vector3d>::iterator it;
    frame_it = all_image_frame.begin( );
    for (int i = 0; frame_it != all_image_frame.end( ); frame_it++)
    {
        cv::Mat r, rvec, t, D, tmp_r;
        if((frame_it->first) == Headers[i].stamp.toSec())
        {
            frame_it->second.is_key_frame = true;
            frame_it->second.R = Q[i].toRotationMatrix() * RIC[0].transpose();
            frame_it->second.T = T[i];
            i++;
            continue;
        }
        if((frame_it->first) > Headers[i].stamp.toSec())
            i++;
 
        // note: Q and T here are the image-frame poses, not the coordinate transforms used by PnP; the two are inverses of each other
        Matrix3d R_inital = (Q[i].inverse()).toRotationMatrix();
        Vector3d P_inital = - R_inital * T[i];
        cv::eigen2cv(R_inital, tmp_r);
        // Rodrigues formula: convert the rotation matrix into a rotation vector
        cv::Rodrigues(tmp_r, rvec);
        cv::eigen2cv(P_inital, t);

        frame_it->second.is_key_frame = false;
        // collect the per-feature 3D points and image coordinates needed by PnP
        vector<cv::Point3f> pts_3_vector;
        vector<cv::Point2f> pts_2_vector;
        for (auto &id_pts : frame_it->second.points)
        {
            int feature_id = id_pts.first;
            for (auto &i_p : id_pts.second)
            {
                it = sfm_tracked_points.find(feature_id);
                if(it != sfm_tracked_points.end())
                {
                    Vector3d world_pts = it->second;
                    cv::Point3f pts_3(world_pts(0), world_pts(1), world_pts(2));
                    pts_3_vector.push_back(pts_3);
                    Vector2d img_pts = i_p.second.head<2>();
                    cv::Point2f pts_2(img_pts(0), img_pts(1));
                    pts_2_vector.push_back(pts_2);
                }
            }
        }
        // require at least 6 points (more than 5)
        cv::Mat K = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);     
        if(pts_3_vector.size() < 6)
        {
            cout << "pts_3_vector size " << pts_3_vector.size() << endl;
            ROS_DEBUG("Not enough points for solve pnp !");
            return false;
        }
        /** 
         *bool cv::solvePnP(    solve the PnP problem
         *   InputArray  objectPoints,   3D coordinates of the feature points
         *   InputArray  imagePoints,    corresponding image coordinates
         *   InputArray  cameraMatrix,   camera intrinsic matrix
         *   InputArray  distCoeffs,     input vector of distortion coefficients
         *   OutputArray     rvec,       output rotation vector
         *   OutputArray     tvec,       output translation vector
         *   bool    useExtrinsicGuess = false, if true, use the provided values as the initial estimate
         *   int     flags = SOLVEPNP_ITERATIVE  use LM optimization
         *)   
         */
        if (! cv::solvePnP(pts_3_vector, pts_2_vector, K, D, rvec, t, 1))
        {
            ROS_DEBUG("solve pnp fail!");
            return false;
        }
        cv::Rodrigues(rvec, r);
        MatrixXd R_pnp,tmp_R_pnp;
        cv::cv2eigen(r, tmp_R_pnp);
        // likewise convert the coordinate transform back into a frame pose, then into the IMU frame
        R_pnp = tmp_R_pnp.transpose();
        MatrixXd T_pnp;
        cv::cv2eigen(t, T_pnp);
        T_pnp = R_pnp * (-T_pnp);
        frame_it->second.R = R_pnp * RIC[0].transpose();
        frame_it->second.T = T_pnp;
    }

Then the visual-inertial joint initialization is performed:

if (visualInitialAlign())
        return true;
    else
    {
        cout << "misalign visual structure with IMU" << endl;
        return false;
    }

sfm.construct

The function bool sfm.construct() is described below.
frame_num = frame_count + 1 = 11, and frame_num - 1 denotes the current frame.
1. Frame l is taken as the reference coordinate system; from relative_R, relative_T (the transform from the current frame to frame l), the current frame's pose in the reference frame is obtained. From here on, Pose[i] denotes the transform [R|T] from frame l to frame i.

	q[l].w() = 1;
	q[l].x() = 0;
	q[l].y() = 0;
	q[l].z() = 0;
	T[l].setZero();
	q[frame_num - 1] = q[l] * Quaterniond(relative_R);
	T[frame_num - 1] = relative_T;

2. First triangulate the landmarks between frame l (the reference) and frame frame_num-1 (the current frame).
3. Solve PnP for the transform R_initial, P_initial from the reference frame to each frame starting at l+1, save it in Pose, and triangulate each such frame with the current frame:

for (int i = l; i < frame_num - 1 ; i++)
	{
		if (i > l)
		{
			Matrix3d R_initial = c_Rotation[i - 1];
			Vector3d P_initial = c_Translation[i - 1];
			if(!solveFrameByPnP(R_initial, P_initial, i, sfm_f))
				return false;
			c_Rotation[i] = R_initial;
			c_Translation[i] = P_initial;
			c_Quat[i] = c_Rotation[i];
			Pose[i].block<3, 3>(0, 0) = c_Rotation[i];
			Pose[i].block<3, 1>(0, 3) = c_Translation[i];
		}
		triangulateTwoFrames(i, Pose[i], frame_num - 1, Pose[frame_num - 1], sfm_f);
	}

4. Triangulate frame l against every frame from l+1 to frame_num-2 again:

for (int i = l + 1; i < frame_num - 1; i++)
		triangulateTwoFrames(l, Pose[l], i, Pose[i], sfm_f);

5. Solve PnP for the transform from the reference frame to every frame from l-1 down to 0, and triangulate:

for (int i = l - 1; i >= 0; i--)
	{
		Matrix3d R_initial = c_Rotation[i + 1];
		Vector3d P_initial = c_Translation[i + 1];
		if(!solveFrameByPnP(R_initial, P_initial, i, sfm_f))
			return false;
		c_Rotation[i] = R_initial;
		c_Translation[i] = P_initial;
		c_Quat[i] = c_Rotation[i];
		Pose[i].block<3, 3>(0, 0) = c_Rotation[i];
		Pose[i].block<3, 1>(0, 3) = c_Translation[i];
		triangulateTwoFrames(i, Pose[i], l, Pose[l], sfm_f);
	}

6. Triangulate the remaining unrecovered feature points. After this, the poses of all frames in the sliding window and the 3D positions of the features are available:

for (int j = 0; j < feature_num; j++)
	{
		if (sfm_f[j].state == true)
			continue;
		if ((int)sfm_f[j].observation.size() >= 2)
		{
			Vector2d point0, point1;
			int frame_0 = sfm_f[j].observation[0].first;
			point0 = sfm_f[j].observation[0].second;
			int frame_1 = sfm_f[j].observation.back().first;
			point1 = sfm_f[j].observation.back().second;
			Vector3d point_3d;
			triangulatePoint(Pose[frame_0], Pose[frame_1], point0, point1, point_3d);
			sfm_f[j].state = true;
			sfm_f[j].position[0] = point_3d(0);
			sfm_f[j].position[1] = point_3d(1);
			sfm_f[j].position[2] = point_3d(2);
		}		
	}
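The triangulation above bottoms out in triangulatePoint(), a linear (DLT) triangulation, essentially as in the VINS-Mono source:

void GlobalSFM::triangulatePoint(Eigen::Matrix<double, 3, 4> &Pose0, Eigen::Matrix<double, 3, 4> &Pose1,
                                 Vector2d &point0, Vector2d &point1, Vector3d &point_3d)
{
    // stack the two projective constraints x × (P X) = 0 into a 4x4 design matrix
    Matrix4d design_matrix = Matrix4d::Zero();
    design_matrix.row(0) = point0[0] * Pose0.row(2) - Pose0.row(0);
    design_matrix.row(1) = point0[1] * Pose0.row(2) - Pose0.row(1);
    design_matrix.row(2) = point1[0] * Pose1.row(2) - Pose1.row(0);
    design_matrix.row(3) = point1[1] * Pose1.row(2) - Pose1.row(1);
    // the homogeneous solution is the right singular vector of the smallest singular value
    Vector4d triangulated_point =
        design_matrix.jacobiSvd(Eigen::ComputeFullV).matrixV().rightCols<1>();
    point_3d(0) = triangulated_point(0) / triangulated_point(3);
    point_3d(1) = triangulated_point(1) / triangulated_point(3);
    point_3d(2) = triangulated_point(2) / triangulated_point(3);
}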

7. Run a global bundle adjustment with ceres (the code is omitted in the post; a sketch follows).
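A sketch of the omitted BA, following the structure of VINS-Mono's initial_sfm.cpp (ReprojectionError3D is the reprojection cost functor defined there; the gauge is fixed by freezing frame l's rotation and the translations of frames l and frame_num-1):

ceres::Problem problem;
ceres::LocalParameterization* local_parameterization = new ceres::QuaternionParameterization();
for (int i = 0; i < frame_num; i++)
{
    problem.AddParameterBlock(c_rotation[i], 4, local_parameterization);
    problem.AddParameterBlock(c_translation[i], 3);
    // fix the gauge freedoms of the reconstruction
    if (i == l)
        problem.SetParameterBlockConstant(c_rotation[i]);
    if (i == l || i == frame_num - 1)
        problem.SetParameterBlockConstant(c_translation[i]);
}
for (int i = 0; i < feature_num; i++)
{
    if (!sfm_f[i].state)
        continue;
    for (auto &obs : sfm_f[i].observation)
    {
        // one reprojection residual per observation of the triangulated point
        ceres::CostFunction* cost_function =
            ReprojectionError3D::Create(obs.second.x(), obs.second.y());
        problem.AddResidualBlock(cost_function, NULL,
                                 c_rotation[obs.first], c_translation[obs.first],
                                 sfm_f[i].position);
    }
}
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);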

8. What we obtained are the transforms from frame l's coordinate system to each frame; convert them into each frame's pose expressed in frame l's coordinate system:

for (int i = 0; i < frame_num; i++)
	{
		q[i].w() = c_rotation[i][0]; 
		q[i].x() = c_rotation[i][1]; 
		q[i].y() = c_rotation[i][2]; 
		q[i].z() = c_rotation[i][3]; 
		q[i] = q[i].inverse();
	}
	for (int i = 0; i < frame_num; i++)
	{
		T[i] = -1 * (q[i] * Vector3d(c_translation[i][0], c_translation[i][1], c_translation[i][2]));
	}
	for (int i = 0; i < (int)sfm_f.size(); i++)
	{
		if(sfm_f[i].state)
			sfm_tracked_points[sfm_f[i].id] = Vector3d(sfm_f[i].position[0], sfm_f[i].position[1], sfm_f[i].position[2]);
	}

The content above largely follows this blog: https://www.twblogs.net/a/5ca3fe9cbd9eee5b1a06f4ef/zh-cn
Next, bool Estimator::visualInitialAlign(), the visual-inertial joint initialization, is covered below.

Visual-Inertial Joint Initialization

The corresponding function is

visualInitialAlign()

It runs after the SfM reconstruction over the frames of the initial window is complete, i.e. after the initial pose of every frame in the reference frame has been computed. The function calibrates the gyroscope bias (the accelerometer bias is not handled) and computes the velocities V[0:n], the gravity g, and the scale s.
After Bgs is updated, the IMU measurements must be repropagated; once the scale s and the direction of gravity g are known, the Ps, Rs, Vs of all image frames must be updated into the world frame.

1. Computing gyroscope bias, scale, gravity, and velocities:

bool result = VisualIMUAlignment(all_image_frame, Bgs, g, x);
    if(!result)
    {
        ROS_DEBUG("solve g failed!");
        return false;
    }

2. Fetch the poses Ps, Rs of all image frames and mark them all as keyframes:

for (int i = 0; i <= frame_count; i++)
    {
        Matrix3d Ri = all_image_frame[Headers[i].stamp.toSec()].R;
        Vector3d Pi = all_image_frame[Headers[i].stamp.toSec()].T;
        Ps[i] = Pi;
        Rs[i] = Ri;
        all_image_frame[Headers[i].stamp.toSec()].is_key_frame = true;
    }

3. Recompute the depth of all feature points:

  // set the depth of every feature to -1
    VectorXd dep = f_manager.getDepthVector();
    for (int i = 0; i < dep.size(); i++)
        dep[i] = -1;
    f_manager.clearDepth(dep);
    // re-triangulate the features
    Vector3d TIC_TMP[NUM_OF_CAM];
    for(int i = 0; i < NUM_OF_CAM; i++)
        TIC_TMP[i].setZero();
    ric[0] = RIC[0];
    f_manager.setRic(ric);
    f_manager.triangulate(Ps, &(TIC_TMP[0]), &(RIC[0]));

4. The gyroscope bias Bgs has changed, so recompute the preintegrations:

for (int i = 0; i <= WINDOW_SIZE; i++)
    {
        pre_integrations[i]->repropagate(Vector3d::Zero(), Bgs[i]);
    }

5. Scale Ps, Vs, and the depths by s and express them relative to the 0-th frame.
The paper takes the first camera frame c0 as the base coordinate system; the IMU pose b_k is obtained from the camera pose c_k by (reconstructed here to match the code below):

$$p^{c_0}_{b_k} = s\,\bar p^{\,c_0}_{c_k} - R^{c_0}_{b_k}\, p^b_c$$

Everything so far is expressed relative to frame l; converting to the first IMU frame b0 as the base gives:

$$p^{b_0}_{b_k} = \big(s\,\bar p^{\,c_0}_{c_k} - R^{c_0}_{b_k}\, p^b_c\big) - \big(s\,\bar p^{\,c_0}_{c_0} - R^{c_0}_{b_0}\, p^b_c\big)$$

for (int i = frame_count; i >= 0; i--)
       Ps[i] = s * Ps[i] - Rs[i] * TIC[0] - (s * Ps[0] - Rs[0] * TIC[0]);    
   int kv = -1;
   map<double, ImageFrame>::iterator frame_i;
   for (frame_i = all_image_frame.begin(); frame_i != all_image_frame.end(); frame_i++)
   {
       if(frame_i->second.is_key_frame)
       {
           kv++;
           Vs[kv] = frame_i->second.R * x.segment<3>(kv * 3);
       }
   }
   for (auto &it_per_id : f_manager.feature)
   {
       it_per_id.used_num = it_per_id.feature_per_frame.size();
       if (!(it_per_id.used_num >= 2 && it_per_id.start_frame < WINDOW_SIZE - 2))
           continue;
       it_per_id.estimated_depth *= s;
   }

6. Rotate gravity onto the z-axis to obtain rot_diff, the rotation between the world frame and the camera frame c0:

Matrix3d R0 = Utility::g2R(g);
   double yaw = Utility::R2ypr(R0 * Rs[0]).x();
   R0 = Utility::ypr2R(Eigen::Vector3d{-yaw, 0, 0}) * R0;
   g = R0 * g;
   Matrix3d rot_diff = R0;

7. Rotate all variables from the reference frame c0 into the world frame w:

  for (int i = 0; i <= frame_count; i++)
    {
        Ps[i] = rot_diff * Ps[i];
        Rs[i] = rot_diff * Rs[i];
        Vs[i] = rot_diff * Vs[i];
    }

Initialization of gyroscope bias, scale, gravity, and velocity


VisualIMUAlignment() calls two functions: one calibrates the gyroscope bias, the other estimates the scale, gravity, and velocities:

bool VisualIMUAlignment(map<double, ImageFrame> &all_image_frame, Vector3d* Bgs, Vector3d &g, VectorXd &x)
{
    solveGyroscopeBias(all_image_frame, Bgs);
    if(LinearAlignment(all_image_frame, g, x))
        return true;
    else 
        return false;    
}

Gyroscope bias calibration: solveGyroscopeBias()


The rotation between consecutive frames computed by the visual SfM is made consistent with the preintegrated IMU rotation; linearizing in the bias and stacking the constraints over all frame pairs gives a normal equation that is then solved for the bias increment δbg with an LDLT decomposition.
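For reference, the objective being solved (as in the VINS-Mono paper, matching the tmp_A / tmp_b comments in the code below):

$$\min_{\delta b_g}\ \sum_{k}\ \Big\| J^{\gamma}_{b_g}\,\delta b_g \;-\; 2\,\Big\lfloor \big(\hat\gamma^{b_k}_{b_{k+1}}\big)^{-1} \otimes \big(q^{c_0}_{b_k}\big)^{-1} \otimes q^{c_0}_{b_{k+1}} \Big\rfloor_{xyz} \Big\|^2$$

The code: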

void solveGyroscopeBias(map<double, ImageFrame> &all_image_frame, Vector3d* Bgs)
{
    Matrix3d A;
    Vector3d b;
    Vector3d delta_bg;
    A.setZero();
    b.setZero();
    map<double, ImageFrame>::iterator frame_i;
    map<double, ImageFrame>::iterator frame_j;
    for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end(); frame_i++)
    {
        frame_j = next(frame_i);
        MatrixXd tmp_A(3, 3);
        tmp_A.setZero();
        VectorXd tmp_b(3);
        tmp_b.setZero();
        
        // R_ij = (R^c0_bk)^-1 * (R^c0_bk+1); as quaternions: q_ij = (q^c0_bk)^-1 * (q^c0_bk+1)
        Eigen::Quaterniond q_ij(frame_i->second.R.transpose() * frame_j->second.R);
        //tmp_A = J_j_bw
        tmp_A = frame_j->second.pre_integration->jacobian.template block<3, 3>(O_R, O_BG);
        //tmp_b = 2 * ((r^bk_bk+1)^-1 * (q^c0_bk)^-1 * (q^c0_bk+1))_vec
        //      = 2 * ((r^bk_bk+1)^-1 * q_ij)_vec
        tmp_b = 2 * (frame_j->second.pre_integration->delta_q.inverse() * q_ij).vec();
        //tmp_A * delta_bg = tmp_b
        A += tmp_A.transpose() * tmp_A;
        b += tmp_A.transpose() * tmp_b;
    }
    // solve with LDLT
    delta_bg = A.ldlt().solve(b);
    ROS_WARN_STREAM("gyroscope bias initial calibration " << delta_bg.transpose());

    for (int i = 0; i <= WINDOW_SIZE; i++)
        Bgs[i] += delta_bg;

    for (frame_i = all_image_frame.begin(); next(frame_i) != all_image_frame.end( ); frame_i++)
    {
        frame_j = next(frame_i);
        frame_j->second.pre_integration->repropagate(Vector3d::Zero(), Bgs[0]);
    }
}

Velocity/gravity/scale initialization: LinearAlignment()

The variables optimized by this function are the per-frame velocities, the gravity vector, and the scale. The preintegration measurement model gives two linear constraints per consecutive frame pair, one from the position preintegration α and, similarly, one from the velocity preintegration β; stacked together they form a linear system, as written below.
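The state and the two constraints, as given in the VINS-Mono paper (the camera-IMU lever arm $p^b_c$ is dropped here for brevity; the actual LinearAlignment keeps it through TIC[0]):

$$\mathcal{X}_I = \big[\, v^{b_0}_{b_0},\; v^{b_1}_{b_1},\; \dots,\; v^{b_n}_{b_n},\; g^{c_0},\; s \,\big]^T$$

$$\hat\alpha^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\Big( s\,(\bar p^{\,c_0}_{c_{k+1}} - \bar p^{\,c_0}_{c_k}) + \tfrac{1}{2}\, g^{c_0}\,\Delta t_k^2 - R^{c_0}_{b_k} v^{b_k}_{b_k}\,\Delta t_k \Big)$$

$$\hat\beta^{b_k}_{b_{k+1}} = R^{b_k}_{c_0}\Big( R^{c_0}_{b_{k+1}} v^{b_{k+1}}_{b_{k+1}} + g^{c_0}\,\Delta t_k - R^{c_0}_{b_k} v^{b_k}_{b_k} \Big)$$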

bool LinearAlignment(map<double, ImageFrame> &all_image_frame, Vector3d &g, VectorXd &x)

Finally the scale s and gravity g are read out of the solution:

double s = x(n_state - 1) / 100.0;
    g = x.segment<3>(n_state - 4);
    // if gravity deviates too much from the reference magnitude, or the scale is negative, the solve failed
    if(fabs(g.norm() - G.norm()) > 1.0 || s < 0)
    {
        return false;
    }

Gravity refinement: RefineGravity()


The corresponding function is

void RefineGravity(map<double, ImageFrame> &all_image_frame, Vector3d &g, VectorXd &x)

Since the magnitude of gravity is known, g is refined only in its 2D tangent space: $g \approx \|G\|\,\hat{g} + \mathbf{b}\,w$. In the function, MatrixXd lxly(3, 2) = TangentBasis(g0) corresponds to the tangent basis $\mathbf{b}$, and VectorXd dg = x.segment<2>(n_state - 3) corresponds to the 2D perturbation $w$. The code for finding the basis $\mathbf{b}$ (Algorithm 1 in the paper) is as follows:

MatrixXd TangentBasis(Vector3d &g0)
{
    Vector3d b, c;
    Vector3d a = g0.normalized();
    Vector3d tmp(0, 0, 1);
    if(a == tmp)
        tmp << 1, 0, 0;
    b = (tmp - a * (a.transpose() * tmp)).normalized();
    c = a.cross(b);
    MatrixXd bc(3, 2);
    bc.block<3, 1>(0, 0) = b;
    bc.block<3, 1>(0, 1) = c;
    return bc;
}
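RefineGravity() then alternates a few times between solving the alignment system with g parameterized in the tangent plane and renormalizing; schematically (my summary of the loop in the source):

$$ g_0 \leftarrow \frac{g}{\|g\|}\,\|G\|, \qquad \mathbf{b} = \mathrm{TangentBasis}(g_0), \qquad g_0 \leftarrow \frac{g_0 + \mathbf{b}\,w}{\|g_0 + \mathbf{b}\,w\|}\,\|G\| $$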

Extrinsic Calibration

Extrinsic calibration in VINS means the online calibration and refinement of the transform from the camera coordinate system to the IMU coordinate system.

In the yaml configuration file, the parameter estimate_extrinsic reflects the status of the extrinsics:
1. 0 means the extrinsics are accurate and need no further optimization;
2. 1 means the extrinsics are only an estimate and will be refined later as an initial value in the nonlinear optimization;
3. 2 means the extrinsics are unknown and must be calibrated, entered from the following code in processImage() in estimator.cpp, which mainly calibrates the extrinsic rotation matrix.

if(ESTIMATE_EXTRINSIC == 2) // calibrate if no extrinsics are given
    {
        ROS_INFO("calibrating extrinsic param, rotation movement is needed");
        if (frame_count != 0)
        {
            // get the normalized feature correspondences between the current frame and the previous frame
            vector<pair<Vector3d, Vector3d>> corres = f_manager.getCorresponding(frame_count - 1, frame_count);
            Matrix3d calib_ric;
            // calibrate the rotation matrix from camera to IMU
            if (initial_ex_rotation.CalibrationExRotation(corres, pre_integrations[frame_count]->delta_q, calib_ric))
            {
                ROS_WARN("initial extrinsic rotation calib success");
                ROS_WARN_STREAM("initial extrinsic rotation: " << endl << calib_ric);
                ric[0] = calib_ric;
                RIC[0] = calib_ric;
                ESTIMATE_EXTRINSIC = 1;
            }
        }
    }

CalibrationExRotation(): calibrating the extrinsic rotation matrix

The purpose of this function is to calibrate the extrinsic rotation matrix. Its internal steps are described below.
1. solveRelativeR(corres) computes the rotation between the camera frames of adjacent frames from epipolar geometry; corres holds the normalized coordinates of the matched feature pairs between the current frame and the previous frame.
1.1 Store the 2D coordinates of the correspondences into ll and rr:

vector<cv::Point2f> ll, rr;
        for (int i = 0; i < int(corres.size()); i++)
        {
            ll.push_back(cv::Point2f(corres[i].first(0), corres[i].first(1)));
            rr.push_back(cv::Point2f(corres[i].second(0), corres[i].second(1)));
        }

1.2 Solve the essential matrix of the two frames from epipolar geometry (the inputs are normalized camera coordinates, so findFundamentalMat effectively returns the essential matrix):

cv::Mat E = cv::findFundamentalMat(ll, rr);

1.3 Decompose the essential matrix by SVD into the four candidate (R, t) solutions:

cv::Mat_<double> R1, R2, t1, t2;
decomposeE(E, R1, R2, t1, t2);

1.4 Validate each candidate by triangulation and keep the one that yields positive depths:

double ratio1 = max(testTriangulation(ll, rr, R1, t1), testTriangulation(ll, rr, R1, t2));
        double ratio2 = max(testTriangulation(ll, rr, R2, t1), testTriangulation(ll, rr, R2, t2));
        cv::Mat_<double> ans_R_cv = ratio1 > ratio2 ? R1 : R2;

1.5 The R obtained maps the previous frame to the current frame; transposing it gives the current frame's attitude relative to the previous frame:

Matrix3d ans_R_eigen;
for (int i = 0; i < 3; i++)
   for (int j = 0; j < 3; j++)
       ans_R_eigen(j, i) = ans_R_cv(i, j);
return ans_R_eigen;

2. Collect the inter-frame rotations of the camera and of the IMU into vectors:

    frame_count++;
    Rc.push_back(solveRelativeR(corres));           // inter-frame camera rotation, from epipolar geometry
    Rimu.push_back(delta_q_imu.toRotationMatrix()); // inter-frame IMU rotation, from IMU preintegration
    Rc_g.push_back(ric.inverse() * delta_q_imu * ric); // inter-frame IMU rotation mapped into the camera frame via the current ric estimate

3. Solve the camera-to-IMU extrinsic rotation from the hand-eye constraint (reconstructed here in quaternion form):

$$q^{b_k}_{b_{k+1}} \otimes q^b_c = q^b_c \otimes q^{c_k}_{c_{k+1}} \;\;\Rightarrow\;\; \big(\,[q^{b_k}_{b_{k+1}}]_L - [q^{c_k}_{c_{k+1}}]_R\,\big)\, q^b_c = 0$$

Stacking this equality over multiple frame pairs builds an overdetermined system A x = 0; an SVD of A gives the estimate of x as the right singular vector of the smallest singular value. The corresponding code:

Eigen::MatrixXd A(frame_count * 4, 4);
    A.setZero();
    int sum_ok = 0;
    for (int i = 1; i <= frame_count; i++)
    {
        Quaterniond r1(Rc[i]);
        Quaterniond r2(Rc_g[i]);
        double angular_distance = 180 / M_PI * r1.angularDistance(r2);
        ROS_DEBUG(
            "%d %f", i, angular_distance);
        // weighting with a Huber-style kernel
        double huber = angular_distance > 5.0 ? 5.0 / angular_distance : 1.0;
        ++sum_ok;
        Matrix4d L, R;
        // L and R are the left- and right-quaternion-multiplication matrices
        double w = Quaterniond(Rc[i]).w();
        Vector3d q = Quaterniond(Rc[i]).vec();
        L.block<3, 3>(0, 0) = w * Matrix3d::Identity() + Utility::skewSymmetric(q);
        L.block<3, 1>(0, 3) = q;
        L.block<1, 3>(3, 0) = -q.transpose();
        L(3, 3) = w;

        Quaterniond R_ij(Rimu[i]);
        w = R_ij.w();
        q = R_ij.vec();
        R.block<3, 3>(0, 0) = w * Matrix3d::Identity() - Utility::skewSymmetric(q);
        R.block<3, 1>(0, 3) = q;
        R.block<1, 3>(3, 0) = -q.transpose();
        R(3, 3) = w;

        A.block<4, 4>((i - 1) * 4, 0) = huber * (L - R);
    }

    // the right singular vector of the smallest singular value is the rotation quaternion
    JacobiSVD<MatrixXd> svd(A, ComputeFullU | ComputeFullV);
    Matrix<double, 4, 1> x = svd.matrixV().col(3);
    Quaterniond estimated_R(x);
    ric = estimated_R.toRotationMatrix().inverse();

4. Calibration is accepted only after at least WINDOW_SIZE iterations have accumulated and the second-smallest singular value of A is greater than 0.25, which indicates sufficient rotational excitation:

   Vector3d ric_cov;
    ric_cov = svd.singularValues().tail<3>();
    if (frame_count >= WINDOW_SIZE && ric_cov(1) > 0.25)
    {
        calib_ric_result = ric;
        return true;
    }
    else
        return false;

After this initial calibration of the extrinsic rotation, both the rotation matrix and the translation vector continue to be refined later on.

 



Marginalization in the Back End: the Marginalization Strategy

First, see the illustrated example of the marginalization strategy.
VINS chooses between two marginalization strategies depending on whether the second-newest frame is a keyframe: by comparing the parallax between the second-newest and the third-newest frame, it decides whether to marginalize the second-newest frame or the oldest frame. In the code: 1. when the second-newest frame is a keyframe, MARGIN_OLD: the oldest frame is marginalized out together with the landmarks it observes and the associated IMU data, which are converted into prior information added to the overall objective.

  • 1. Marginalizing the oldest frame

The corresponding function is

void Estimator::MargOldFrame()

The function first builds the problem: it adds the extrinsic vertex, the IMU-related vertices, and the visual vertices and constraint edges; sets the prior information; then marks the vertices to marginalize and performs the marginalization:

std::vector<std::shared_ptr<backend::Vertex>> marg_vertex;
marg_vertex.push_back(vertexCams_vec[0]); // index 0 is the oldest frame
marg_vertex.push_back(vertexVB_vec[0]);
problem.Marginalize(marg_vertex, pose_dim);

  • 2. Marginalizing the second-newest frame

In VINS, when the second-newest frame is not a keyframe, MARGIN_SECOND_NEW is used.
The corresponding function is

void Estimator::MargNewFrame()

The function first builds the problem by adding the extrinsic vertex and the IMU-related vertices, sets the prior information, then marks the vertices to marginalize and marginalizes them. Note that no visual constraint edges are added here (?):

std::vector<std::shared_ptr<backend::Vertex>> marg_vertex;
marg_vertex.push_back(vertexCams_vec[WINDOW_SIZE - 1]); // note the index: the second-newest frame
marg_vertex.push_back(vertexVB_vec[WINDOW_SIZE - 1]);
problem.Marginalize(marg_vertex, pose_dim);

  • 3. The marginalization function

First, the theory behind marginalization.
With VINS's sliding-window strategy, the constraints between the frames sliding out of the window and the frames remaining inside it are preserved by marginalization as a prior error term in the subsequent nonlinear optimization, so that no constraint information is lost.
The tightly coupled VIO minimizes two kinds of errors, IMU errors and visual errors, iterating with LM, GN, or Dogleg to solve for the parameter increments that give the maximum-likelihood estimate.
The core of each iteration is solving the normal equation $H\delta x = b$.
With a sliding window, the parameters of frames that slide out of the window are no longer optimized, yet they still constrain the data inside the window; simply dropping those frames would lose constraint information, so they are wrapped into prior information that enters the big nonlinear problem as one of its error terms. This step is marginalization. Let $\mathbf{x}_1$ be the states to marginalize and $\mathbf{x}_2$ the states to keep; the normal equation above becomes (via the Schur complement):

$$\begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} \begin{bmatrix} \delta x_1 \\ \delta x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \;\Rightarrow\; \big(H_{22} - H_{21} H_{11}^{-1} H_{12}\big)\,\delta x_2 = b_2 - H_{21} H_{11}^{-1} b_1$$

This way no constraint information is lost, and $\mathbf{x}_1$ never needs to be solved for.
The equation above is the post-marginalization normal equation; from it we must recover the concrete form of the prior error $e_p$ that the marginalization encapsulates.
As the iterations proceed, the states keep being updated. At marginalization time, however, the values of the marginalized states are fixed, so the Jacobians used in the Schur complement are the ones obtained from the Taylor expansion at those fixed values (the linearization point x), and they must all stay fixed. In VINS the parameter values x0 at the linearization point are saved as keep_block_data. This is FEJ (First Estimate Jacobian); for more, see the post on sliding-window optimization, marginalization, the Schur complement, FEJ, and the fill-in problem. But since $b = J^T e$, b changes as the states are updated, so b must be updated along with them.
The update of the prior (to first order):

$$b' = b + \frac{\partial b}{\partial x}\,\delta x = b - H\,\delta x$$

Recovering $J$ from a decomposition of $H$ works as follows (an eigendecomposition, matching the code further below):

$$H = V \Lambda V^{\top}, \qquad J = \Lambda^{1/2} V^{\top} \;\Rightarrow\; H = J^{\top} J, \qquad e_p = -\big(J^{\top}\big)^{-1} b = -\Lambda^{-1/2} V^{\top} b$$

For the overall procedure, see the marginalization reference.
The code is described below.

bool Problem::Marginalize(const std::vector<std::shared_ptr<Vertex> > margVertexs, int pose_dim) 

First, find the edges to marginalize:

SetOrdering();
/// find the edges to marg; margVertexs[0] is the frame, whose edges contain the pre-integration
std::vector<shared_ptr<Edge>> marg_edges = GetConnectedEdges(margVertexs[0]);

For the landmarks associated with marg_edges, reassign their ordering ids starting at pose_dim + marg_landmark_size, where pose_dim is the total dimension after all pose vertices have been added:

std::unordered_map<int, shared_ptr<Vertex>> margLandmark;
int marg_landmark_size = 0;
//    std::cout << "\n marg edge 1st id: "<< marg_edges.front()->Id() << " end id: "<<marg_edges.back()->Id()<<std::endl;
    for (size_t i = 0; i < marg_edges.size(); ++i) {
//        std::cout << "marg edge id: "<< marg_edges[i]->Id() <<std::endl;
        auto verticies = marg_edges[i]->Verticies();
        for (auto iter : verticies) {
            if (IsLandmarkVertex(iter) && margLandmark.find(iter->Id()) == margLandmark.end()) {
                iter->SetOrderingId(pose_dim + marg_landmark_size);
                margLandmark.insert(make_pair(iter->Id(), iter));
                marg_landmark_size += iter->LocalDimension();
            }
        }
    }

Construct the H matrix and the b vector:

 int cols = pose_dim + marg_landmark_size;
    /// build the error matrix H = H_marg + H_pp_prior
    MatXX H_marg(MatXX::Zero(cols, cols));
    VecX b_marg(VecX::Zero(cols));

Next, for every edge to be marginalized, compute its residual and Jacobians, then accumulate $\mathbf{J}_i^T \mathbf{W} \mathbf{J}_j$ over all pairs of vertices to build the $H$ matrix, and build $b$ likewise:

int ii = 0;
    for (auto edge: marg_edges) {
        edge->ComputeResidual();
        edge->ComputeJacobians();
        auto jacobians = edge->Jacobians();
        auto verticies = edge->Verticies();
        ii++;

        assert(jacobians.size() == verticies.size());
        for (size_t i = 0; i < verticies.size(); ++i) {
            auto v_i = verticies[i];
            auto jacobian_i = jacobians[i];
            ulong index_i = v_i->OrderingId();
            ulong dim_i = v_i->LocalDimension();

            double drho;
            MatXX robustInfo(edge->Information().rows(),edge->Information().cols());
            edge->RobustInfo(drho,robustInfo);

            for (size_t j = i; j < verticies.size(); ++j) {
                auto v_j = verticies[j];
                auto jacobian_j = jacobians[j];
                ulong index_j = v_j->OrderingId();
                ulong dim_j = v_j->LocalDimension();

                MatXX hessian = jacobian_i.transpose() * robustInfo * jacobian_j;

                assert(hessian.rows() == v_i->LocalDimension() && hessian.cols() == v_j->LocalDimension());
                // accumulate all the information-matrix blocks
                H_marg.block(index_i, index_j, dim_i, dim_j) += hessian;
                if (j != i) {
                    // symmetric lower-triangular block
                    H_marg.block(index_j, index_i, dim_j, dim_i) += hessian.transpose();
                }
            }
            b_marg.segment(index_i, dim_i) -= drho * jacobian_i.transpose() * edge->Information() * edge->Residual();
        }

    }

Then the blocks of the partitioned system are extracted, in the form

$$\begin{bmatrix} H_{pp} & H_{pm} \\ H_{mp} & H_{mm} \end{bmatrix} \begin{bmatrix} \delta x_p \\ \delta x_m \end{bmatrix} = \begin{bmatrix} b_{pp} \\ b_{mm} \end{bmatrix}$$

  int marg_size = marg_landmark_size;
        MatXX Hmm = H_marg.block(reserve_size, reserve_size, marg_size, marg_size);
        MatXX Hpm = H_marg.block(0, reserve_size, reserve_size, marg_size);
        MatXX Hmp = H_marg.block(reserve_size, 0, marg_size, reserve_size);
        VecX bpp = b_marg.segment(0, reserve_size);
        VecX bmm = b_marg.segment(reserve_size, marg_size);

Then comes the Schur complement:

// Hmm is block-diagonal, so it can be inverted by inverting each diagonal block; for inverse-depth landmarks the blocks are 1-D, so the inverse is just the reciprocal, which speeds this up
        MatXX Hmm_inv(MatXX::Zero(marg_size, marg_size));
        // TODO:: use openMP
        for (auto iter: margLandmark) {
            int idx = iter.second->OrderingId() - reserve_size;
            int size = iter.second->LocalDimension();
            Hmm_inv.block(idx, idx, size, size) = Hmm.block(idx, idx, size, size).inverse();
        }

        MatXX tempH = Hpm * Hmm_inv;
        MatXX Hpp = H_marg.block(0, 0, reserve_size, reserve_size) - tempH * Hmp;
        bpp = bpp - tempH * bmm;
        H_marg = Hpp;
        b_marg = bpp;

The above marginalizes the associated visual landmarks; next, the frame and its speed/bias states are marginalized.
The vertices to be marginalized must first be moved to the bottom-right corner of H_marg:

/// marg frame and speedbias
    int marg_dim = 0;

    // move the larger indices first
    for (int k = margVertexs.size() -1 ; k >= 0; --k)
    {

        int idx = margVertexs[k]->OrderingId();
        int dim = margVertexs[k]->LocalDimension();
//        std::cout << k << " "<<idx << std::endl;
        marg_dim += dim;
        // move the marg pose block to the bottom-right of Hmm
        // move row block i to the bottom of the matrix
        Eigen::MatrixXd temp_rows = H_marg.block(idx, 0, dim, reserve_size);
        Eigen::MatrixXd temp_botRows = H_marg.block(idx + dim, 0, reserve_size - idx - dim, reserve_size);
        H_marg.block(idx, 0, reserve_size - idx - dim, reserve_size) = temp_botRows;
        H_marg.block(reserve_size - dim, 0, dim, reserve_size) = temp_rows;

        // move column block i to the far right of the matrix
        Eigen::MatrixXd temp_cols = H_marg.block(0, idx, reserve_size, dim);
        Eigen::MatrixXd temp_rightCols = H_marg.block(0, idx + dim, reserve_size, reserve_size - idx - dim);
        H_marg.block(0, idx, reserve_size, reserve_size - idx - dim) = temp_rightCols;
        H_marg.block(0, reserve_size - dim, reserve_size, dim) = temp_cols;

        Eigen::VectorXd temp_b = b_marg.segment(idx, dim);
        Eigen::VectorXd temp_btail = b_marg.segment(idx + dim, reserve_size - idx - dim);
        b_marg.segment(idx, reserve_size - idx - dim) = temp_btail;
        b_marg.segment(reserve_size - dim, dim) = temp_b;
    }

Then the Schur complement is applied once more; the result is the prior:

int m2 = marg_dim;
    int n2 = reserve_size - marg_dim;   // marg pose
    Eigen::MatrixXd Amm = 0.5 * (H_marg.block(n2, n2, m2, m2) + H_marg.block(n2, n2, m2, m2).transpose());

    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);
    Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd(
            (saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() *
                              saes.eigenvectors().transpose();

    Eigen::VectorXd bmm2 = b_marg.segment(n2, m2);
    Eigen::MatrixXd Arm = H_marg.block(0, n2, n2, m2);
    Eigen::MatrixXd Amr = H_marg.block(n2, 0, m2, n2);
    Eigen::MatrixXd Arr = H_marg.block(0, 0, n2, n2);
    Eigen::VectorXd brr = b_marg.segment(0, n2);
    Eigen::MatrixXd tempB = Arm * Amm_inv;
    H_prior_ = Arr - tempB * Amr;
    b_prior_ = brr - tempB * bmm2;

Then, in order to use the prior in a ceres-style optimization, H and b must be converted into the following Jacobian/residual form:

Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(H_prior_);
    Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
    Eigen::VectorXd S_inv = Eigen::VectorXd(
            (saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));

    Eigen::VectorXd S_sqrt = S.cwiseSqrt();
    Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
    Jt_prior_inv_ = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
    err_prior_ = -Jt_prior_inv_ * b_prior_;

    MatXX J = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
    H_prior_ = J.transpose() * J;
    MatXX tmp_h = MatXX( (H_prior_.array().abs() > 1e-9).select(H_prior_.array(),0) );
    H_prior_ = tmp_h;

Finally, remove the marginalized vertices and edges:

// remove vertex and remove edge
    for (size_t k = 0; k < margVertexs.size(); ++k) {
        RemoveVertex(margVertexs[k]);
    }

    for (auto landmarkVertex: margLandmark) {
        RemoveVertex(landmarkVertex.second);
    }

After marginalization, the remaining variables must be maintained, mainly by updating their indices (shifting parameter addresses):

TicToc t_whole_marginalization;
    if (marginalization_flag == MARGIN_OLD)
    {
        vector2double();

        MargOldFrame();
        // addresses of the parameters that remain in the prior
        std::unordered_map<long, double *> addr_shift; 
        for (int i = 1; i <= WINDOW_SIZE; i++)
        {
            addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
            addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
        }
        for (int i = 0; i < NUM_OF_CAM; i++)
            addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
        if (ESTIMATE_TD)
        {
            addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
        }
    }
    else
    {
        if (Hprior_.rows() > 0)
        {

            vector2double();

            MargNewFrame();

            std::unordered_map<long, double *> addr_shift;
            for (int i = 0; i <= WINDOW_SIZE; i++)
            {
                if (i == WINDOW_SIZE - 1)
                    continue;
                else if (i == WINDOW_SIZE)
                {
                    addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
                    addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
                }
                else
                {
                    addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i];
                    addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i];
                }
            }
            for (int i = 0; i < NUM_OF_CAM; i++)
                addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];
            if (ESTIMATE_TD)
            {
                addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];
            }
        }
    }

At this point the function

void Estimator::backendOptimization()

has been covered. Next comes the sliding-window operation. Note that the $\mathbf{H}, \mathbf{b}$ produced by marginalization serve as the prior matrix for the subsequent optimization; that prior plus the new measurement information forms the new information matrix.

Updating the Prior Residual

The prior residual is updated inside problem.Solve(); the code:

void Problem::UpdateStates() {

    // update vertex
    for (auto vertex: verticies_) {
        vertex.second->BackUpParameters();    // back up the previous estimate

        ulong idx = vertex.second->OrderingId();
        ulong dim = vertex.second->LocalDimension();
        VecX delta = delta_x_.segment(idx, dim);
        vertex.second->Plus(delta);
    }

    // update prior
    if (err_prior_.rows() > 0) {
        // BACK UP b_prior_
        b_prior_backup_ = b_prior_;
        err_prior_backup_ = err_prior_;

        /// update with first order Taylor, b' = b + \frac{\delta b}{\delta x} * \delta x
        /// \delta x = Computes the linearized deviation from the references (linearization points)
        b_prior_ -= H_prior_ * delta_x_.head(ordering_poses_);       // update the error_prior
        err_prior_ = -Jt_prior_inv_ * b_prior_.head(ordering_poses_ - 15);

//        std::cout << "                : "<< b_prior_.norm()<<" " <<err_prior_.norm()<< std::endl;
//        std::cout << "     delta_x_ ex: "<< delta_x_.head(6).norm() << std::endl;
    }

}

Although the prior information matrix stays fixed, the variables keep being optimized as the iterations proceed, so the prior residual must change along with them; otherwise the solver may break down.
The change of the prior residual is approximated to first order, matching the code above:

$$b_p \leftarrow b_p - H_p\,\delta x, \qquad e_p = -\big(J_p^{\top}\big)^{-1} b_p$$

Reference blogs:

https://www.cnblogs.com/feifanrensheng/p/10532918.html
https://blog.csdn.net/u012871872/article/details/78128087
https://blog.csdn.net/weixin_41394379/article/details/89975386
