VINS-FUSION Source Code Framework and C++ Notes

  VINS-FUSION is the open-source visual-inertial SLAM system from the HKUST Aerial Robotics Group. It is called SLAM here rather than VIO because, unlike a pure VIO, it has loop closure and map reuse, making it a complete optimization-based SLAM system. The algorithm itself and its mathematical theory are covered in the links below and are not repeated here; this article focuses on how the code implements it.
VINS-FUSION github
Camera and IMU coordinate frames in VINS
VINS-Mono paper study and code walkthrough: an excellent blog with derivations and annotated code.
Detailed VINS-Mono explanation: covers the overall pipeline and formula derivations.
Annotated forks by others (which I have not read myself; compare them against each other):
VINS-Mono-Learning
VINS-Mono中文注释
VINS-Mono-code-annotation
My own annotated code (VIO part only):
VINS-FUSION-Annotation

VINS-FUSION Program Architecture

  Like mainstream SLAM systems, VINS-FUSION splits into a front end and a back end. The front end extracts visual information (monocular or stereo feature points) and tracks it; the back end takes the front end's feature tracks, fuses external information such as IMU and GPS, and estimates the current state in real time with an optimization algorithm. Multithreading is optional at runtime; this article assumes the multithreaded mode. At runtime there are subscriber callbacks for the various sensors, the visual front-end function sync_process(), and the back-end optimization processMeasurements(). The source of these two functions is explained in turn below.

Front End

  The front end's job is to extract and track visual features. Unlike MSCKF, whose front-end pipeline can be read from a single function, the VINS-FUSION front end is buried under layers of nested functions; it is wrapped in **sync_process()**.
   First, the subscriber sub_img receives an image message and its callback only stores it in the buffer img_buf; no image processing happens inside the callback. The front-end flow is:

Front-end flow (input: image; output: the packaged feature set featureFrame):
1. If stereo, pair the left and right images; otherwise take the monocular image.
2. featureTracker.trackImage(), which internally:
   - if the previous frame has feature points, tracks them with cv::calcOpticalFlowPyrLK() and validates the tracks;
   - extracts new feature points with cv::goodFeaturesToTrack();
   - if stereo, matches left to right with cv::calcOpticalFlowPyrLK();
   - packages the results via featureFrame[feature_id].emplace_back().

  The front-end output is a map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame object: the first int is the feature id, the second int is the camera_id (indicating which of the two cameras observed the feature), and the matrix holds that feature's information in this frame (pixel coordinates, normalized coordinates, and velocity).
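To make the nested container concrete, here is a minimal sketch (not the VINS code; `packageFeatures`, `Observation`, and the synthetic inputs are my own names) of how such a structure is filled, with a plain std::array standing in for Eigen::Matrix<double, 7, 1>:

```cpp
#include <array>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// One observation: [x, y, z, u, v, vx, vy] (normalized coords, pixel coords, velocity)
using Observation = std::array<double, 7>;
// feature_id -> list of (camera_id, observation), mirroring the shape of
// map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame
using FeatureFrame = std::map<int, std::vector<std::pair<int, Observation>>>;

FeatureFrame packageFeatures(const std::vector<int>& ids,
                             const std::vector<Observation>& left,
                             const std::vector<Observation>& right,
                             const std::vector<bool>& stereo_ok)
{
    FeatureFrame frame;
    for (std::size_t i = 0; i < ids.size(); ++i)
    {
        frame[ids[i]].emplace_back(0, left[i]);      // camera 0: always observed
        if (i < stereo_ok.size() && stereo_ok[i])
            frame[ids[i]].emplace_back(1, right[i]); // camera 1: only if the stereo match succeeded
    }
    return frame;
}
```

A feature seen by both cameras thus contributes two entries under one feature id, which is exactly why the per-feature value is a vector of (camera_id, observation) pairs.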

The front end is fairly simple, and a comparison makes it clearer still. Its similarities and differences with msckf-vio:

  1. Both are feature-point methods. VINS extracts corners with cv::goodFeaturesToTrack (Shi-Tomasi scoring by default; the OpenCV function also supports Harris), while msckf-vio extracts FAST corners.
  2. Both use OpenCV's built-in LK tracker to match features between consecutive frames and between the stereo pair.
  3. Outlier rejection differs: msckf-vio uses a geometric check (RANSAC with the fundamental matrix), while VINS validates tracks with forward-and-backward LK optical flow (i.e. swapping the tracked and tracking images).
  4. msckf-vio keeps only stereo-matched points, while VINS also keeps points seen by a single camera; the paper argues that monocular points still constrain the rotation in the state.
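The forward-and-backward validation of item 3 is easy to sketch without OpenCV: after tracking prev→cur, track cur→prev again and keep only points whose round trip lands close to where they started. This is an illustration of the logic only (`forwardBackwardCheck` and the threshold are my own, not VINS identifiers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Forward-backward consistency check: a point tracked prev->cur and then
// cur->prev must land near its starting position, or the track is rejected.
std::vector<bool> forwardBackwardCheck(const std::vector<Pt>& prev_pts,
                                       const std::vector<Pt>& back_pts,
                                       const std::vector<bool>& fwd_status,
                                       const std::vector<bool>& bwd_status,
                                       float max_dist = 0.5f)
{
    std::vector<bool> good(prev_pts.size(), false);
    for (std::size_t i = 0; i < prev_pts.size(); ++i)
    {
        float dx = prev_pts[i].x - back_pts[i].x;
        float dy = prev_pts[i].y - back_pts[i].y;
        // keep the track only if both directions succeeded and the round trip is short
        good[i] = fwd_status[i] && bwd_status[i] &&
                  std::sqrt(dx * dx + dy * dy) <= max_dist;
    }
    return good;
}
```

In the real front end the two tracking passes are both cv::calcOpticalFlowPyrLK calls; only the distance test above is extra.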

Function Breakdown

Each function module is explained below:
  cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3): tracks features between two frames with the pyramidal LK optical-flow algorithm. The two frames can be consecutive in time or a simultaneous stereo pair, and the method is far faster than descriptor-based matching. The arguments are: <image to track from, image to track into, known feature points in the first image, tracked points (output), tracking status (output), error (output), search window size per pyramid level, number of pyramid levels (here 3, i.e. 3+1 levels)>.
There are further optional parameters; see the calcOpticalFlowPyrLK page on the OpenCV site for details.

  cv::goodFeaturesToTrack(cur_img, n_pts, MAX_CNT - cur_pts.size(), 0.01, MIN_DIST, mask)
extracts corners; the arguments are <image, output corner array, number to extract, quality level, minimum distance between corners, mask>. See the goodFeaturesToTrack page on the OpenCV site, and blog posts on the LK algorithm, for more.

Back End

  One strength of VINS-FUSION is that the back end integrates the camera-IMU system's initialization, calibrating the camera-IMU extrinsics, the scale factor, and so on; this section therefore covers initialization and optimization separately. The back end can be configured to run right after the front end or in its own thread; it is wrapped in a single function, Estimator::processMeasurements(). Since different sensor setups involve different technical details, the relevant cases are discussed individually.

Initialization

Monocular VIO Initialization

  Initialization uses a loosely coupled visual-IMU scheme: SfM first solves for the poses of all frames in the sliding window (10 frames) and the positions of all 3D landmarks, then aligns them with the IMU pre-integration terms to solve for the gravity direction, the scale factor, the gyroscope bias, and each frame's initial velocity. In other words, the IMU's initial state is initialized against the visual estimate. The initialization flow: (initialization flowchart)

Function Breakdown

Unlike msckf-vio, this code base nests functions heavily. Each one is explained below (for some, the parameter lists are too long to reproduce; match them against the source):

getIMUInterval(prevTime, curTime, accVector, gyrVector): extracts the IMU data between two frames;

initFirstIMUPose(accVector): static initialization; uses the different representations of the gravity vector in the world frame and the IMU frame to obtain the initial IMU pose Rs[0];

processIMU(): IMU pre-integration produces the pre-integration terms pre_integrations[i] and tmp_pre_integration between two image frames, and midpoint integration produces the global IMU state Rs[i], Ps[i], Vs[i] at each image time. Note: these are only computed when frame_count != 0; when frame_count == 0 the pre-integration is zero and the global state Rs[0] is the initial state, so no midpoint integration is needed;

CalibrationExRotation(): takes WINDOW_SIZE (set to 10) states, i.e. WINDOW_SIZE images. The fundamental matrix is computed from feature matches between adjacent frames and the inter-frame rotation is recovered from it; IMU pre-integration gives the rotation between the corresponding IMU frames; matching the two yields the camera-IMU extrinsic rotation. Note that the rotation from visual matching and the one from IMU integration are mutual inverses, so the source transposes (i.e. inverts) the visual inter-frame rotations. This step requires shaking the camera so its pose changes noticeably.

sfm.construct(): first scans the image window from front to back (older frames come first) for a frame L that shares enough co-visible points, with sufficient parallax, with the current (newest) frame, and takes it as the reference; then triangulation plus PnP produce initial poses and 3D point coordinates; finally a full BA refines both. Note: the reference is not the first frame, the translations are up to scale, and the rotations are expressed in the IMU frame.

solveGyroscopeBias(all_image_frame, Bgs): initializes the gyroscope bias using the inter-frame IMU transforms from SfM and those from IMU pre-integration; taking vision as the reference, the gyroscope bias can be solved. The bias is constant within each frame interval but differs between intervals.

LinearAlignment(all_image_frame, g, x): minimizes the residual between the velocity and position increments from IMU pre-integration and the increments predicted from the quantities being estimated, solving for g in the first frame, the scale, and each frame's initial IMU velocity; g is refined at the end.

pre_integrations[i]->repropagate(Vector3d::Zero(), Bgs[i]): re-runs pre-integration with the newly estimated gyroscope bias, same procedure as before; note the accelerometer bias is set to zero here.

optimization(): joint visual-inertial optimization plus marginalization.

updateLatestStates(): sets the newest frame's information as the latest state.

slideWindow(): removes the information of the marginalized frame.
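The midpoint (mid-value) integration inside processIMU() can be sketched per axis, ignoring rotation and gravity. This is an illustrative reduction (`midpointIntegrate` and `State` are my own names, not VINS identifiers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1-D midpoint integration of position/velocity from accelerometer samples,
// with attitude held at identity and gravity already removed -- a sketch of
// what processIMU does per axis between two image timestamps.
struct State { double p = 0.0, v = 0.0; };

State midpointIntegrate(const std::vector<double>& acc, double dt)
{
    State s;
    for (std::size_t i = 1; i < acc.size(); ++i)
    {
        double a_mid = 0.5 * (acc[i - 1] + acc[i]); // average of consecutive samples
        s.p += s.v * dt + 0.5 * a_mid * dt * dt;    // position update with the midpoint acceleration
        s.v += a_mid * dt;                          // velocity update
    }
    return s;
}
```

For constant acceleration the midpoint rule is exact: 11 samples of a = 2 m/s² at dt = 0.1 s give v = aT = 2 m/s and p = aT²/2 = 1 m over T = 1 s.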

Optimization

The back-end optimization flow after initialization (essentially a sub-graph of the initialization flow):
(back-end optimization flowchart)
  The code as a whole is not hard to follow; the difficult part, and the core of the entire code base, is the optimization code. It covers the SfM in initialization and the back-end optimization(), so that code is pulled out here for extra discussion.

I. sfm

//full BA
	ceres::Problem problem;
	ceres::LocalParameterization* local_parameterization = new ceres::QuaternionParameterization(); // note this parameterization
	//cout << " begin full BA " << endl;
    //step 1: add the parameters to optimize: camera poses
	for (int i = 0; i < frame_num; i++)
	{
		//double array for ceres
		c_translation[i][0] = c_Translation[i].x();
		c_translation[i][1] = c_Translation[i].y();
		c_translation[i][2] = c_Translation[i].z();
		c_rotation[i][0] = c_Quat[i].w();
		c_rotation[i][1] = c_Quat[i].x();
		c_rotation[i][2] = c_Quat[i].y();
		c_rotation[i][3] = c_Quat[i].z();
        problem.AddParameterBlock(c_rotation[i], 4, local_parameterization); //add the frame's initialized rotation
        problem.AddParameterBlock(c_translation[i], 3); //add the frame's initialized translation
		if (i == l)
		{
            problem.SetParameterBlockConstant(c_rotation[i]); //frame l is the reference frame; its rotation is held fixed
		}
		if (i == l || i == frame_num - 1)
		{
            problem.SetParameterBlockConstant(c_translation[i]); //the translations of frame l and the newest (current) frame are held fixed, which pins down the scale and gauge of the reconstruction
		}
	}

    //add residual terms: reprojection error    
	for (int i = 0; i < feature_num; i++)
	{
		if (sfm_f[i].state != true)
			continue;
		for (int j = 0; j < int(sfm_f[i].observation.size()); j++)
		{
            int l = sfm_f[i].observation[j].first; //index of the frame observing this feature
            //cost_function: reprojection error residual
			ceres::CostFunction* cost_function = ReprojectionError3D::Create(
												sfm_f[i].observation[j].second.x(),
												sfm_f[i].observation[j].second.y());

    		problem.AddResidualBlock(cost_function, NULL, c_rotation[l], c_translation[l], 
    								sfm_f[i].position);	 
		}

    }   //cost functions added
	ceres::Solver::Options options;
	options.linear_solver_type = ceres::DENSE_SCHUR;
	//options.minimizer_progress_to_stdout = true;
	options.max_solver_time_in_seconds = 0.2;
	ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);  //solve

The residual functor is defined as:

struct ReprojectionError3D
{
  ReprojectionError3D(double observed_u, double observed_v)
  	:observed_u(observed_u), observed_v(observed_v)
  	{}

  template <typename T>
  bool operator()(const T* const camera_R, const T* const camera_T, const T* point, T* residuals) const
  {
  	T p[3];
  	ceres::QuaternionRotatePoint(camera_R, point, p);
  	p[0] += camera_T[0]; p[1] += camera_T[1]; p[2] += camera_T[2];
  	T xp = p[0] / p[2];
  	T yp = p[1] / p[2];
  	residuals[0] = xp - T(observed_u);
  	residuals[1] = yp - T(observed_v);
  	return true;
  }

  static ceres::CostFunction* Create(const double observed_x,
                                     const double observed_y) 
  {
    return (new ceres::AutoDiffCostFunction<
            ReprojectionError3D, 2, 4, 3, 3>(
            	new ReprojectionError3D(observed_x,observed_y)));
  }

  double observed_u;
  double observed_v;
};

As can be seen, the SfM uses automatic differentiation. This is the standard ceres workflow for BA: first add the variables to optimize, then define and add the residual terms (the reprojection errors), and finally configure the solver and solve. See the official Google Ceres tutorial for this part of the API.

II. optimization()

The core code is broken down as follows:

1. Add the camera state parameters
//add the camera state parameters and the camera-IMU extrinsic parameters
 for (int i = 0; i < frame_count + 1; i++)
  {
      ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization(); //pose local parameterization
      problem.AddParameterBlock(para_Pose[i], SIZE_POSE, local_parameterization); //para_Pose: IMU position and orientation quaternion if IMU is used, otherwise the camera pose
      if(USE_IMU)
          problem.AddParameterBlock(para_SpeedBias[i], SIZE_SPEEDBIAS); //para_SpeedBias: IMU velocity, gyroscope and accelerometer biases
  }
  if(!USE_IMU)
      problem.SetParameterBlockConstant(para_Pose[0]);
//add the camera-IMU extrinsic parameters
 for (int i = 0; i < NUM_OF_CAM; i++) //NUM_OF_CAM: number of cameras
  {
      ceres::LocalParameterization *local_parameterization = new PoseLocalParameterization(); //pose local parameterization
      problem.AddParameterBlock(para_Ex_Pose[i], SIZE_POSE, local_parameterization);  //para_Ex_Pose: camera-IMU extrinsics
      if ((ESTIMATE_EXTRINSIC && frame_count == WINDOW_SIZE && Vs[0].norm() > 0.2) || openExEstimation)   //optimize the extrinsics only under these conditions
      {
          //ROS_INFO("estimate extinsic param");
          openExEstimation = 1;
      }
      else
      {
          //ROS_INFO("fix extinsic param");
          problem.SetParameterBlockConstant(para_Ex_Pose[i]);
      }
  }

Note the definition of the "pose local parameterization" here; the ceres documentation covers LocalParameterization in detail. This object defines the parameter's generalized addition and its Jacobian. Without it, ceres uses plain vector addition (1+1=2) and the corresponding Jacobian. This parameter block contains a rotation quaternion, whose "addition" is not plain addition, so it must be redefined. The code:

class PoseLocalParameterization : public ceres::LocalParameterization
{
  virtual bool Plus(const double *x, const double *delta, double *x_plus_delta) const;
  virtual bool ComputeJacobian(const double *x, double *jacobian) const;
  virtual int GlobalSize() const { return 7; };
  virtual int LocalSize() const { return 6; };
};

bool PoseLocalParameterization::Plus(const double *x, const double *delta, double *x_plus_delta) const
{
  Eigen::Map<const Eigen::Vector3d> _p(x);
  Eigen::Map<const Eigen::Quaterniond> _q(x + 3);
  Eigen::Map<const Eigen::Vector3d> dp(delta);
  Eigen::Quaterniond dq = Utility::deltaQ(Eigen::Map<const Eigen::Vector3d>(delta + 3));
  Eigen::Map<Eigen::Vector3d> p(x_plus_delta);
  Eigen::Map<Eigen::Quaterniond> q(x_plus_delta + 3);
  p = _p + dp;
  q = (_q * dq).normalized();  //四元数乘法
  return true;
}
bool PoseLocalParameterization::ComputeJacobian(const double *x, double *jacobian) const
{
  Eigen::Map<Eigen::Matrix<double, 7, 6, Eigen::RowMajor>> j(jacobian);
  j.topRows<6>().setIdentity();
  j.bottomRows<1>().setZero();
  return true;
}
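The Plus() above boils down to "quaternion times small-angle delta, then normalize". A minimal stdlib-only sketch of that operation (my own `Quat`, `quatMul`, and `boxPlus`, standing in for Eigen and Utility::deltaQ):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal quaternion (w, x, y, z) to illustrate the generalized "plus" in
// PoseLocalParameterization: q_new = normalize(q * deltaQ(delta_theta)).
using Quat = std::array<double, 4>;

Quat quatMul(const Quat& a, const Quat& b)
{
    return { a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
             a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
             a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
             a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0] };
}

Quat boxPlus(const Quat& q, const std::array<double, 3>& delta_theta)
{
    // deltaQ: first-order quaternion from a small rotation vector (half angle)
    Quat dq = {1.0, 0.5 * delta_theta[0], 0.5 * delta_theta[1], 0.5 * delta_theta[2]};
    Quat r = quatMul(q, dq);
    double n = std::sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2] + r[3]*r[3]);
    for (double& c : r) c /= n; // renormalize, as Plus() does
    return r;
}
```

A zero delta leaves the quaternion unchanged, and the result always stays on the unit sphere, which is exactly why the local (tangent) dimension is 6 while the global dimension is 7.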
2. Add the marginalization prior residual (detailed later)
    if (last_marginalization_info && last_marginalization_info->valid)
    {
        // construct new marginlization_factor
        MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
        problem.AddResidualBlock(marginalization_factor, NULL,
                                 last_marginalization_parameter_blocks);
    }
3. The IMU pre-integration residual: source code and derivation
 if(USE_IMU) //add the pre-integration residuals
    {
        for (int i = 0; i < frame_count; i++)
        {
            int j = i + 1;
            if (pre_integrations[j]->sum_dt > 10.0)   //skip pre-integrations spanning too long an interval
                continue;
            IMUFactor* imu_factor = new IMUFactor(pre_integrations[j]);
            problem.AddResidualBlock(imu_factor, NULL, para_Pose[i], para_SpeedBias[i], para_Pose[j], para_SpeedBias[j]);
        }
    }

The IMUFactor that builds this residual is:

class IMUFactor : public ceres::SizedCostFunction<15, 7, 9, 7, 9>
{
  public:
    IMUFactor() = delete;
    IMUFactor(IntegrationBase* _pre_integration):pre_integration(_pre_integration)
    {
    }
    virtual bool Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
    {
       //the parameters being estimated: the poses, velocities, and gyro/accelerometer biases of adjacent frames i and j in the world frame
        Eigen::Vector3d Pi(parameters[0][0], parameters[0][1], parameters[0][2]);
        Eigen::Quaterniond Qi(parameters[0][6], parameters[0][3], parameters[0][4], parameters[0][5]);
        Eigen::Vector3d Vi(parameters[1][0], parameters[1][1], parameters[1][2]);
        Eigen::Vector3d Bai(parameters[1][3], parameters[1][4], parameters[1][5]);
        Eigen::Vector3d Bgi(parameters[1][6], parameters[1][7], parameters[1][8]);
        Eigen::Vector3d Pj(parameters[2][0], parameters[2][1], parameters[2][2]);
        Eigen::Quaterniond Qj(parameters[2][6], parameters[2][3], parameters[2][4], parameters[2][5]);
        Eigen::Vector3d Vj(parameters[3][0], parameters[3][1], parameters[3][2]);
        Eigen::Vector3d Baj(parameters[3][3], parameters[3][4], parameters[3][5]);
        Eigen::Vector3d Bgj(parameters[3][6], parameters[3][7], parameters[3][8]);

#if 0
        if ((Bai - pre_integration->linearized_ba).norm() > 0.10 ||
            (Bgi - pre_integration->linearized_bg).norm() > 0.01)
        {
            pre_integration->repropagate(Bai, Bgi);
        }
#endif
        //compute the residual
        Eigen::Map<Eigen::Matrix<double, 15, 1>> residual(residuals);
        residual = pre_integration->evaluate(Pi, Qi, Vi, Bai, Bgi,
                                            Pj, Qj, Vj, Baj, Bgj);
       //whiten the residual with the square-root information matrix
        Eigen::Matrix<double, 15, 15> sqrt_info = Eigen::LLT<Eigen::Matrix<double, 15, 15>>(pre_integration->covariance.inverse()).matrixL().transpose();
        residual = sqrt_info * residual;
        // compute the Jacobians
        if (jacobians)
        {
            double sum_dt = pre_integration->sum_dt;
            Eigen::Matrix3d dp_dba = pre_integration->jacobian.template block<3, 3>(O_P, O_BA);
            Eigen::Matrix3d dp_dbg = pre_integration->jacobian.template block<3, 3>(O_P, O_BG);
            Eigen::Matrix3d dq_dbg = pre_integration->jacobian.template block<3, 3>(O_R, O_BG);
            Eigen::Matrix3d dv_dba = pre_integration->jacobian.template block<3, 3>(O_V, O_BA);
            Eigen::Matrix3d dv_dbg = pre_integration->jacobian.template block<3, 3>(O_V, O_BG);
            if (pre_integration->jacobian.maxCoeff() > 1e8 || pre_integration->jacobian.minCoeff() < -1e8)
            {
                ROS_WARN("numerical unstable in preintegration");
            }
            if (jacobians[0])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);
                jacobian_pose_i.setZero();
                jacobian_pose_i.block<3, 3>(O_P, O_P) = -Qi.inverse().toRotationMatrix();
                jacobian_pose_i.block<3, 3>(O_P, O_R) = Utility::skewSymmetric(Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt));

#if 0
            jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Qj.inverse() * Qi).toRotationMatrix();
#else
                Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
                jacobian_pose_i.block<3, 3>(O_R, O_R) = -(Utility::Qleft(Qj.inverse() * Qi) * Utility::Qright(corrected_delta_q)).bottomRightCorner<3, 3>();
#endif
                jacobian_pose_i.block<3, 3>(O_V, O_R) = Utility::skewSymmetric(Qi.inverse() * (G * sum_dt + Vj - Vi));
                jacobian_pose_i = sqrt_info * jacobian_pose_i;
                if (jacobian_pose_i.maxCoeff() > 1e8 || jacobian_pose_i.minCoeff() < -1e8)
                {
                    ROS_WARN("numerical unstable in preintegration");
                }
            }
            if (jacobians[1])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_i(jacobians[1]);
                jacobian_speedbias_i.setZero();
                jacobian_speedbias_i.block<3, 3>(O_P, O_V - O_V) = -Qi.inverse().toRotationMatrix() * sum_dt;
                jacobian_speedbias_i.block<3, 3>(O_P, O_BA - O_V) = -dp_dba;
                jacobian_speedbias_i.block<3, 3>(O_P, O_BG - O_V) = -dp_dbg;

#if 0
            jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -dq_dbg;
#else               
            jacobian_speedbias_i.block<3, 3>(O_R, O_BG - O_V) = -Utility::Qleft(Qj.inverse() * Qi * pre_integration->delta_q).bottomRightCorner<3, 3>() * dq_dbg;
#endif

                jacobian_speedbias_i.block<3, 3>(O_V, O_V - O_V) = -Qi.inverse().toRotationMatrix();
                jacobian_speedbias_i.block<3, 3>(O_V, O_BA - O_V) = -dv_dba;
                jacobian_speedbias_i.block<3, 3>(O_V, O_BG - O_V) = -dv_dbg;
                jacobian_speedbias_i.block<3, 3>(O_BA, O_BA - O_V) = -Eigen::Matrix3d::Identity();
                jacobian_speedbias_i.block<3, 3>(O_BG, O_BG - O_V) = -Eigen::Matrix3d::Identity();
                jacobian_speedbias_i = sqrt_info * jacobian_speedbias_i;
            }
            if (jacobians[2])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[2]);
                jacobian_pose_j.setZero();
                jacobian_pose_j.block<3, 3>(O_P, O_P) = Qi.inverse().toRotationMatrix();
#if 0
            jacobian_pose_j.block<3, 3>(O_R, O_R) = Eigen::Matrix3d::Identity();
#else               
             Eigen::Quaterniond corrected_delta_q = pre_integration->delta_q * Utility::deltaQ(dq_dbg * (Bgi - pre_integration->linearized_bg));
             jacobian_pose_j.block<3, 3>(O_R, O_R) = Utility::Qleft(corrected_delta_q.inverse() * Qi.inverse() * Qj).bottomRightCorner<3, 3>();
#endif
                jacobian_pose_j = sqrt_info * jacobian_pose_j;
            }
            if (jacobians[3])
            {
                Eigen::Map<Eigen::Matrix<double, 15, 9, Eigen::RowMajor>> jacobian_speedbias_j(jacobians[3]);
                jacobian_speedbias_j.setZero();
                jacobian_speedbias_j.block<3, 3>(O_V, O_V - O_V) = Qi.inverse().toRotationMatrix();
                jacobian_speedbias_j.block<3, 3>(O_BA, O_BA - O_V) = Eigen::Matrix3d::Identity();
                jacobian_speedbias_j.block<3, 3>(O_BG, O_BG - O_V) = Eigen::Matrix3d::Identity();
                jacobian_speedbias_j = sqrt_info * jacobian_speedbias_j;
            }
        }

        return true;
    }
    IntegrationBase* pre_integration;
};

The residual computation pre_integration->evaluate():

Eigen::Matrix<double, 15, 1> evaluate(const Eigen::Vector3d &Pi, const Eigen::Quaterniond &Qi, const Eigen::Vector3d &Vi, const Eigen::Vector3d &Bai, const Eigen::Vector3d &Bgi,
                                          const Eigen::Vector3d &Pj, const Eigen::Quaterniond &Qj, const Eigen::Vector3d &Vj, const Eigen::Vector3d &Baj, const Eigen::Vector3d &Bgj)
    {
        Eigen::Matrix<double, 15, 1> residuals;
        //correct the pre-integration terms with the bias updates
        Eigen::Matrix3d dp_dba = jacobian.block<3, 3>(O_P, O_BA);
        Eigen::Matrix3d dp_dbg = jacobian.block<3, 3>(O_P, O_BG);

        Eigen::Matrix3d dq_dbg = jacobian.block<3, 3>(O_R, O_BG);

        Eigen::Matrix3d dv_dba = jacobian.block<3, 3>(O_V, O_BA);
        Eigen::Matrix3d dv_dbg = jacobian.block<3, 3>(O_V, O_BG);

        Eigen::Vector3d dba = Bai - linearized_ba;
        Eigen::Vector3d dbg = Bgi - linearized_bg;

        Eigen::Quaterniond corrected_delta_q = delta_q * Utility::deltaQ(dq_dbg * dbg);
        Eigen::Vector3d corrected_delta_v = delta_v + dv_dba * dba + dv_dbg * dbg;
        Eigen::Vector3d corrected_delta_p = delta_p + dp_dba * dba + dp_dbg * dbg;
        //compute the residuals
        residuals.block<3, 1>(O_P, 0) = Qi.inverse() * (0.5 * G * sum_dt * sum_dt + Pj - Pi - Vi * sum_dt) - corrected_delta_p;
        residuals.block<3, 1>(O_R, 0) = 2 * (corrected_delta_q.inverse() * (Qi.inverse() * Qj)).vec();
        residuals.block<3, 1>(O_V, 0) = Qi.inverse() * (G * sum_dt + Vj - Vi) - corrected_delta_v;
        residuals.block<3, 1>(O_BA, 0) = Baj - Bai;
        residuals.block<3, 1>(O_BG, 0) = Bgj - Bgi;
        return residuals;
    }
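The line `sqrt_info = LLT(covariance.inverse()).matrixL().transpose()` in IMUFactor deserves a note: multiplying a residual r by Lᵀ (where info = L Lᵀ) makes ‖Lᵀr‖² = rᵀ Σ⁻¹ r, so every residual contributes in units of its uncertainty. A hand-rolled 2x2 version (my own `Mat2`, `sqrtInfo`, `whiten`; Eigen does the same with a 15x15 matrix):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Mat2 { double a, b, c, d; }; // row-major [[a, b], [c, d]]

// sqrt_info = LLT(cov^{-1}).matrixL().transpose() for a 2x2 covariance.
Mat2 sqrtInfo(const Mat2& cov)
{
    double det = cov.a * cov.d - cov.b * cov.c;
    Mat2 info { cov.d / det, -cov.b / det, -cov.c / det, cov.a / det }; // cov^{-1}
    // Cholesky: info = L * L^T with L lower triangular
    double l11 = std::sqrt(info.a);
    double l21 = info.c / l11;
    double l22 = std::sqrt(info.d - l21 * l21);
    return { l11, l21, 0.0, l22 }; // return L^T (upper triangular)
}

// whitened residual = sqrt_info * r, so that ||whitened||^2 = r^T cov^{-1} r
std::array<double, 2> whiten(const Mat2& st, std::array<double, 2> r)
{
    return { st.a * r[0] + st.b * r[1], st.c * r[0] + st.d * r[1] };
}
```

For cov = diag(4, 9), the factor is diag(1/2, 1/3), so the residual (2, 3) whitens to (1, 1): both components now carry equal statistical weight.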

The mathematical background here is extensive; see the derivation from the ShenLan (深蓝学院) VIO course, captured in the slides below (with thanks to the three instructors, Dr. Gao, Dr. He, and Dr. Cui):
(figures: the pre-integration error formulas and their derivatives with respect to the biases)

4. The visual reprojection residual: source code and derivation

The code:

for (auto &it_per_id : f_manager.feature) //iterate over every feature
    {
        it_per_id.used_num = it_per_id.feature_per_frame.size(); //only features observed in at least 4 frames build visual residuals
         if (it_per_id.used_num < 4)
            continue; 
        ++feature_index;
        int imu_i = it_per_id.start_frame, imu_j = imu_i - 1;        
        Vector3d pts_i = it_per_id.feature_per_frame[0].point; //the feature's position on the normalized plane of its first observing frame
        for (auto &it_per_frame : it_per_id.feature_per_frame)
        {
            imu_j++;
            if (imu_i != imu_j) //add the visual reprojection information and the IMU states
            {
                Vector3d pts_j = it_per_frame.point;
                ProjectionTwoFrameOneCamFactor *f_td = new ProjectionTwoFrameOneCamFactor(pts_i, pts_j, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocity,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                problem.AddResidualBlock(f_td, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index], para_Td[0]);
            }
            if(STEREO && it_per_frame.is_stereo)
            {                
                Vector3d pts_j_right = it_per_frame.pointRight;
                if(imu_i != imu_j)
                {
                    ProjectionTwoFrameTwoCamFactor *f = new ProjectionTwoFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                    problem.AddResidualBlock(f, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]);
                }
                else
                {
                    ProjectionOneFrameTwoCamFactor *f = new ProjectionOneFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,                                                                 it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                    problem.AddResidualBlock(f, loss_function, para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]);
                }              
            }
            f_m_cnt++;
        }
    }

In form this is the usual ceres AddResidualBlock call; the residual block ProjectionTwoFrameOneCamFactor is defined as:

class ProjectionTwoFrameOneCamFactor : public ceres::SizedCostFunction<2, 7, 7, 7, 1, 1>
{
  public:
    ProjectionTwoFrameOneCamFactor(const Eigen::Vector3d &_pts_i, const Eigen::Vector3d &_pts_j,
    				   const Eigen::Vector2d &_velocity_i, const Eigen::Vector2d &_velocity_j,
    				   const double _td_i, const double _td_j);
    virtual bool Evaluate(double const *const *parameters, double *residuals, double **jacobians) const;
    void check(double **parameters);

    Eigen::Vector3d pts_i, pts_j;
    Eigen::Vector3d velocity_i, velocity_j;
    double td_i, td_j;
    Eigen::Matrix<double, 2, 3> tangent_base;
    static Eigen::Matrix2d sqrt_info;
    static double sum_t;
};


bool ProjectionTwoFrameOneCamFactor::Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
{
    TicToc tic_toc;
    Eigen::Vector3d Pi(parameters[0][0], parameters[0][1], parameters[0][2]);
    Eigen::Quaterniond Qi(parameters[0][6], parameters[0][3], parameters[0][4], parameters[0][5]);

    Eigen::Vector3d Pj(parameters[1][0], parameters[1][1], parameters[1][2]);
    Eigen::Quaterniond Qj(parameters[1][6], parameters[1][3], parameters[1][4], parameters[1][5]);

    Eigen::Vector3d tic(parameters[2][0], parameters[2][1], parameters[2][2]);
    Eigen::Quaterniond qic(parameters[2][6], parameters[2][3], parameters[2][4], parameters[2][5]);

    double inv_dep_i = parameters[3][0];  //inverse depth

    double td = parameters[4][0];

    Eigen::Vector3d pts_i_td, pts_j_td;
    pts_i_td = pts_i - (td - td_i) * velocity_i;
    pts_j_td = pts_j - (td - td_j) * velocity_j;
    Eigen::Vector3d pts_camera_i = pts_i_td / inv_dep_i;
    Eigen::Vector3d pts_imu_i = qic * pts_camera_i + tic;  //transform into the IMU frame of frame i
    Eigen::Vector3d pts_w = Qi * pts_imu_i + Pi;           //transform into the world frame
    Eigen::Vector3d pts_imu_j = Qj.inverse() * (pts_w - Pj);  //transform into the IMU frame of frame j
    Eigen::Vector3d pts_camera_j = qic.inverse() * (pts_imu_j - tic);  //transform into the camera frame of frame j
    Eigen::Map<Eigen::Vector2d> residual(residuals);

#ifdef UNIT_SPHERE_ERROR 
    residual =  tangent_base * (pts_camera_j.normalized() - pts_j_td.normalized());  //residual on the tangent plane of the unit sphere
#else
    double dep_j = pts_camera_j.z();
    residual = (pts_camera_j / dep_j).head<2>() - pts_j_td.head<2>();
#endif
    residual = sqrt_info * residual;

    if (jacobians)
    {
        Eigen::Matrix3d Ri = Qi.toRotationMatrix();
        Eigen::Matrix3d Rj = Qj.toRotationMatrix();
        Eigen::Matrix3d ric = qic.toRotationMatrix();
        Eigen::Matrix<double, 2, 3> reduce(2, 3);
#ifdef UNIT_SPHERE_ERROR
        double norm = pts_camera_j.norm();
        Eigen::Matrix3d norm_jaco;
        double x1, x2, x3;
        x1 = pts_camera_j(0);
        x2 = pts_camera_j(1);
        x3 = pts_camera_j(2);
        norm_jaco << 1.0 / norm - x1 * x1 / pow(norm, 3), - x1 * x2 / pow(norm, 3),            - x1 * x3 / pow(norm, 3),
                     - x1 * x2 / pow(norm, 3),            1.0 / norm - x2 * x2 / pow(norm, 3), - x2 * x3 / pow(norm, 3),
                     - x1 * x3 / pow(norm, 3),            - x2 * x3 / pow(norm, 3),            1.0 / norm - x3 * x3 / pow(norm, 3);
        reduce = tangent_base * norm_jaco;
#else
        reduce << 1. / dep_j, 0, -pts_camera_j(0) / (dep_j * dep_j),
            0, 1. / dep_j, -pts_camera_j(1) / (dep_j * dep_j);
#endif
        reduce = sqrt_info * reduce;

        if (jacobians[0])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_i(jacobians[0]);

            Eigen::Matrix<double, 3, 6> jaco_i;
            jaco_i.leftCols<3>() = ric.transpose() * Rj.transpose();
            jaco_i.rightCols<3>() = ric.transpose() * Rj.transpose() * Ri * -Utility::skewSymmetric(pts_imu_i);

            jacobian_pose_i.leftCols<6>() = reduce * jaco_i;
            jacobian_pose_i.rightCols<1>().setZero();
        }
        if (jacobians[1])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_pose_j(jacobians[1]);
            Eigen::Matrix<double, 3, 6> jaco_j;
            jaco_j.leftCols<3>() = ric.transpose() * -Rj.transpose();
            jaco_j.rightCols<3>() = ric.transpose() * Utility::skewSymmetric(pts_imu_j);
            jacobian_pose_j.leftCols<6>() = reduce * jaco_j;
            jacobian_pose_j.rightCols<1>().setZero();
        }
        if (jacobians[2])
        {
            Eigen::Map<Eigen::Matrix<double, 2, 7, Eigen::RowMajor>> jacobian_ex_pose(jacobians[2]);
            Eigen::Matrix<double, 3, 6> jaco_ex;
            jaco_ex.leftCols<3>() = ric.transpose() * (Rj.transpose() * Ri - Eigen::Matrix3d::Identity());
            Eigen::Matrix3d tmp_r = ric.transpose() * Rj.transpose() * Ri * ric;
            jaco_ex.rightCols<3>() = -tmp_r * Utility::skewSymmetric(pts_camera_i) + Utility::skewSymmetric(tmp_r * pts_camera_i) +
                                     Utility::skewSymmetric(ric.transpose() * (Rj.transpose() * (Ri * tic + Pi - Pj) - tic));
            jacobian_ex_pose.leftCols<6>() = reduce * jaco_ex;
            jacobian_ex_pose.rightCols<1>().setZero();
        }
        if (jacobians[3])
        {
            Eigen::Map<Eigen::Vector2d> jacobian_feature(jacobians[3]);
            jacobian_feature = reduce * ric.transpose() * Rj.transpose() * Ri * ric * pts_i_td * -1.0 / (inv_dep_i * inv_dep_i);
        }
        if (jacobians[4])
        {
            Eigen::Map<Eigen::Vector2d> jacobian_td(jacobians[4]);
            jacobian_td = reduce * ric.transpose() * Rj.transpose() * Ri * ric * velocity_i / inv_dep_i * -1.0  +
                          sqrt_info * velocity_j.head(2);
        }
    }
    sum_t += tic_toc.toc();

    return true;
}
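The residual computation above, stripped of Eigen and of the rotations (all set to identity), reduces to: shift both observations by the time offset, back-project through the inverse depth, translate into the other frame, project, and subtract. A stdlib-only sketch under those simplifying assumptions (`reprojResidual`, `Vec3`, and `t_ij` are my own names):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Reprojection residual on the normalized image plane with the time-offset
// compensation pts - (td - td_frame) * velocity, for the special case of
// identity rotations and pure translation t_ij between frames i and j.
std::array<double, 2> reprojResidual(Vec3 pts_i, Vec3 pts_j,
                                     const std::array<double, 2>& vel_i,
                                     const std::array<double, 2>& vel_j,
                                     double td, double td_i, double td_j,
                                     double inv_dep_i, const Vec3& t_ij)
{
    // shift both normalized observations to the common time offset td
    pts_i[0] -= (td - td_i) * vel_i[0]; pts_i[1] -= (td - td_i) * vel_i[1];
    pts_j[0] -= (td - td_j) * vel_j[0]; pts_j[1] -= (td - td_j) * vel_j[1];
    // back-project in frame i (depth = 1/inv_dep_i), then move into frame j
    Vec3 p { pts_i[0] / inv_dep_i - t_ij[0],
             pts_i[1] / inv_dep_i - t_ij[1],
             pts_i[2] / inv_dep_i - t_ij[2] };
    // project onto frame j's normalized plane and compare with the observation
    return { p[0] / p[2] - pts_j[0], p[1] / p[2] - pts_j[1] };
}
```

With inverse depth 0.5 (depth 2), a point at the optical axis of frame i, and frame j shifted 1 m along x, the point reprojects to (-0.5, 0); observing it there gives a zero residual. The full factor inserts qic/tic and Qi/Qj into the same chain.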

The mathematical derivation of the visual residual is given in the following slides:
(figures: visual residual derivation, slides 1-8)

5. Solve
    ceres::Solver::Options options;

    options.linear_solver_type = ceres::DENSE_SCHUR;  //solve via the Schur complement
    //options.num_threads = 2;
    options.trust_region_strategy_type = ceres::DOGLEG; //use the dogleg method
    options.max_num_iterations = NUM_ITERATIONS;  //maximum number of iterations
    //options.use_explicit_schur_complement = true;
    //options.minimizer_progress_to_stdout = true;
    //options.use_nonmonotonic_steps = true;
    if (marginalization_flag == MARGIN_OLD)
        options.max_solver_time_in_seconds = SOLVER_TIME * 4.0 / 5.0;  //when the oldest frame must be marginalized, give the solver less time so marginalization has time to run
    else
        options.max_solver_time_in_seconds = SOLVER_TIME;
    TicToc t_solver;
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

III. Summary

To summarize, the general steps for solving an optimization problem with ceres:

  1. Declare the problem: ceres::Problem problem;
  2. If some parameters are non-default (fixed values, non-default addition, etc.), add them with problem.AddParameterBlock(); if the "addition" is non-trivial (quaternions compose by multiplication, not plain addition), also define a ceres::LocalParameterization to specify the generalized "plus" and its perturbation Jacobian.
  3. Add residual blocks with problem.AddResidualBlock(); the residual must be defined, and, unless automatic differentiation is used, the Jacobian as well.
  4. Configure the solver options and solve.
  5. Mind the speed-up techniques; see the ceres documentation for details.
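The loop ceres runs internally follows the same pattern regardless of problem size. To make the mechanics concrete without the library, here is a one-parameter Gauss-Newton fit of y = exp(a·x) (my own sketch, not ceres code): residual rᵢ = exp(a·xᵢ) − yᵢ, Jacobian Jᵢ = xᵢ·exp(a·xᵢ), step = −(JᵀJ)⁻¹Jᵀr.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Gauss-Newton for the scalar curve fit y = exp(a * x): at each iteration,
// linearize the residuals, solve the (here scalar) normal equations, update a.
double fitExp(const std::vector<double>& xs, const std::vector<double>& ys,
              double a, int iters = 20)
{
    for (int it = 0; it < iters; ++it)
    {
        double JtJ = 0.0, Jtr = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i)
        {
            double e = std::exp(a * xs[i]);
            double r = e - ys[i];      // residual
            double J = xs[i] * e;      // d(residual)/d(a)
            JtJ += J * J;
            Jtr += J * r;
        }
        a -= Jtr / JtJ; // normal-equation solve, scalar case
    }
    return a;
}
```

Steps 1-4 of the summary map directly onto this loop: the parameter block is `a`, the residual blocks are the per-point terms, and the solve is the normal-equation update; ceres adds sparsity, trust regions, and automatic differentiation on top.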

IV. The Marginalization Prior and a Code Walkthrough

In VINS, if the current frame has clear parallax relative to the previous frame, the second-newest frame is made a keyframe and the oldest frame is marginalized out, leaving prior measurement information for the next optimization. If the parallax is small, the second-newest frame is dropped and its information discarded outright, except that its pre-integration is carried over into the next pre-integration; no prior residual is introduced.
VINS marginalizes with the Schur complement (see references on marginalization and the Schur complement in SLAM). The basic idea: take the features first observed in the frame being marginalized, together with the pre-integration between the two oldest frames; build the linearized system as in a normal optimization step; apply the Schur complement to obtain the reduced system H x = b; recover from H and b the residual and Jacobian that ceres needs; and add them to the next optimization. Let us see how the VINS source does this.
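The elimination step itself is compact. A scalar sketch (my own `schurMarginalize`; the real code does the same with matrix blocks and a matrix inverse in place of the division):

```cpp
#include <cassert>
#include <cmath>

// Scalar Schur complement: marginalize x_m out of
//   [ Hmm Hmr ] [x_m]   [b_m]
//   [ Hrm Hrr ] [x_r] = [b_r]
// leaving a reduced prior on x_r alone.
struct Prior { double H, b; };

Prior schurMarginalize(double Hmm, double Hmr, double Hrm, double Hrr,
                       double bm, double br)
{
    double Hrr_new = Hrr - Hrm * (1.0 / Hmm) * Hmr; // H' = Hrr - Hrm Hmm^{-1} Hmr
    double br_new  = br  - Hrm * (1.0 / Hmm) * bm;  // b' = br - Hrm Hmm^{-1} bm
    return { Hrr_new, br_new };
}
```

Solving the reduced system gives the same x_r as solving the full one (e.g. for H = [[4,2],[2,3]], b = [4,5], both give x_r = 1.5), which is exactly why the discarded states still constrain the survivors as a prior.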

  Step 1: add the prior information of the states kept after the last marginalization
       if (last_marginalization_info && last_marginalization_info->valid)
       {
           vector<int> drop_set;
           for (int i = 0; i < static_cast<int>(last_marginalization_parameter_blocks.size()); i++)
           {  //last_marginalization_parameter_blocks holds the addresses of all states in the window; each state splits into two blocks, para_Pose and para_SpeedBias, so its size is 2*WINDOW_SIZE
               if (last_marginalization_parameter_blocks[i] == para_Pose[0] ||
                   last_marginalization_parameter_blocks[i] == para_SpeedBias[0])
                   drop_set.push_back(i);  //select the oldest frame's state blocks -- the ones to be marginalized
           }
           // construct new marginlization_factor
           MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
           ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(marginalization_factor, NULL,
                                                                          last_marginalization_parameter_blocks,
                                                                          drop_set);
           marginalization_info->addResidualBlockInfo(residual_block_info);
       }

This part is hard to follow. By analogy with the prior-adding code later, last_marginalization_info plays the role of the measurement in a residual (like the IMU pre-integration in the IMU residual), and last_marginalization_parameter_blocks are the state parameters needed to compute the residual and Jacobian (in the IMU residual these would be the states of the two frames around the pre-integration). Let us look at the definitions involved:
**last_marginalization_parameter_blocks:** trace where it comes from:

 std::unordered_map<long, double *> addr_shift;
        for (int i = 1; i <= WINDOW_SIZE; i++)
        {
            //reinterpret_cast converts a pointer into an integer, etc.
            addr_shift[reinterpret_cast<long>(para_Pose[i])] = para_Pose[i - 1];
            if(USE_IMU)
                addr_shift[reinterpret_cast<long>(para_SpeedBias[i])] = para_SpeedBias[i - 1];
        }
        for (int i = 0; i < NUM_OF_CAM; i++)
            addr_shift[reinterpret_cast<long>(para_Ex_Pose[i])] = para_Ex_Pose[i];

        addr_shift[reinterpret_cast<long>(para_Td[0])] = para_Td[0];

        vector<double *> parameter_blocks = marginalization_info->getParameterBlocks(addr_shift);  //extract the addresses of the WINDOW_SIZE camera pose states to keep
last_marginalization_parameter_blocks = parameter_blocks;  //the state parameters kept after marginalization: camera poses, IMU states, Td, and the camera-IMU extrinsics

std::vector<double *> MarginalizationInfo::getParameterBlocks(std::unordered_map<long, double *> &addr_shift)
{
    std::vector<double *> keep_block_addr;
    keep_block_size.clear();
    keep_block_idx.clear();
    keep_block_data.clear();

    for (const auto &it : parameter_block_idx)
    {
        if (it.second >= m)
        {
            keep_block_size.push_back(parameter_block_size[it.first]);
            keep_block_idx.push_back(parameter_block_idx[it.first]);
            keep_block_data.push_back(parameter_block_data[it.first]);
            keep_block_addr.push_back(addr_shift[it.first]);  //addresses of the state parameters kept after marginalization
        }
    }
    sum_block_size = std::accumulate(std::begin(keep_block_size), std::end(keep_block_size), 0);

    return keep_block_addr;
}

So last_marginalization_parameter_blocks holds the addresses of the state variables kept after the last marginalization. How to read this? addr_shift holds the addresses the kept states will occupy after this window slide: the poses of every frame except the oldest, the camera extrinsics, and the camera-IMU time offset (here in the monocular+IMU case).
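The addr_shift trick is just an unordered_map keyed by the old block's address (cast to an integer, as the VINS source does), mapping to the slot the same logical state occupies after the slide. A compact illustration (my own `remap` helper):

```cpp
#include <cassert>
#include <unordered_map>

// Look up where a state block moves to after the window slide; states with no
// entry (the marginalized oldest frame) are simply dropped.
double* remap(const std::unordered_map<long, double*>& addr_shift, double* old_addr)
{
    auto it = addr_shift.find(reinterpret_cast<long>(old_addr));
    return it == addr_shift.end() ? nullptr : it->second;
}
```

This mirrors the loop in the source: frame i's pose address maps to frame i-1's slot, while extrinsics and Td map to themselves.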

last_marginalization_info: this is the previous marginalization's information; the whole marginalization code must be understood to see where it comes from, so we trace it through the code next.

MarginalizationFactor::MarginalizationFactor(MarginalizationInfo* _marginalization_info):marginalization_info(_marginalization_info)
{
    int cnt = 0;
    for (auto it : marginalization_info->keep_block_size) //
    {
        mutable_parameter_block_sizes()->push_back(it);
        cnt += it;
    }
    //printf("residual size: %d, %d\n", cnt, n);
    set_num_residuals(marginalization_info->n); //the residual dimension n is the size of the state parameters kept after the last marginalization
};

///
bool MarginalizationFactor::Evaluate(double const *const *parameters, double *residuals, double **jacobians) const
{
    int n = marginalization_info->n;
    int m = marginalization_info->m;
    Eigen::VectorXd dx(n);
    for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
    {
        int size = marginalization_info->keep_block_size[i];
        int idx = marginalization_info->keep_block_idx[i] - m;
        Eigen::VectorXd x = Eigen::Map<const Eigen::VectorXd>(parameters[i], size);  // parameters holds all current state variables
        Eigen::VectorXd x0 = Eigen::Map<const Eigen::VectorXd>(marginalization_info->keep_block_data[i], size);
        if (size != 7)  // speed and bias blocks: the residual is a plain difference
            dx.segment(idx, size) = x - x0;
        else  // pose blocks: the first three entries (position) subtract directly, but the quaternion rotation cannot
        {
            dx.segment<3>(idx + 0) = x.head<3>() - x0.head<3>();
            dx.segment<3>(idx + 3) = 2.0 * Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
            if (!((Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).w() >= 0))
            {
                dx.segment<3>(idx + 3) = 2.0 * -Utility::positify(Eigen::Quaterniond(x0(6), x0(3), x0(4), x0(5)).inverse() * Eigen::Quaterniond(x(6), x(3), x(4), x(5))).vec();
            }
        }
    }
    Eigen::Map<Eigen::VectorXd>(residuals, n) = marginalization_info->linearized_residuals + marginalization_info->linearized_jacobians * dx;
    // linearized_residuals and linearized_jacobians were computed during the marginalization step that followed the previous optimization
    if (jacobians)
    {

        for (int i = 0; i < static_cast<int>(marginalization_info->keep_block_size.size()); i++)
        {
            if (jacobians[i])
            {
                int size = marginalization_info->keep_block_size[i], local_size = marginalization_info->localSize(size);
                int idx = marginalization_info->keep_block_idx[i] - m;
                Eigen::Map<Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>> jacobian(jacobians[i], n, size);
                jacobian.setZero();
                jacobian.leftCols(local_size) = marginalization_info->linearized_jacobians.middleCols(idx, local_size);
            }
        }
    }
    return true;
}

The code above shows that the residual computed by MarginalizationFactor is a residual on the state variables themselves, and its Jacobian is the derivative of that residual with respect to those states (it is not re-derived here; it is the Jacobian handed down by the Schur complement). Putting it together: the state parameters P kept by the previous marginalization (the camera-IMU extrinsics, the time offset Td, and the retained camera poses T_cameras) are carried into the current optimization, and their previous values serve as prior information. This prior keeps those parameters from changing drastically between optimizations, in effect smoothing them. Both cases, marginalizing the second-newest frame and marginalizing the oldest frame, inject such a prior.

  2. Step 2: the marginalized oldest frame together with its pre-integration leaves prior information on the second-oldest frame:
        if(USE_IMU)
        {
            if (pre_integrations[1]->sum_dt < 10.0)
            {
                IMUFactor* imu_factor = new IMUFactor(pre_integrations[1]);
                ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(imu_factor, NULL,
                                                                           vector<double *>{para_Pose[0], para_SpeedBias[0], para_Pose[1], para_SpeedBias[1]},
                                                                           vector<int>{0, 1});
                marginalization_info->addResidualBlockInfo(residual_block_info);  // this does not enlarge the set of states being marginalized: para_Pose[0] and para_SpeedBias[0] were already added in the previous step
            }
        }

The residual here is defined exactly as in the optimization. Intuition: the prior is characterized by a Jacobian J and a residual b. b is the difference between the two oldest poses P_old1, P_old2 (at the current estimate) and the pre-integration; J is the derivative of b with respect to P_old1 and P_old2. b has dimension 15 (position 3 + attitude 3 + velocity 3 + IMU biases 6; the attitude is stored as a quaternion, but its residual is 3-dimensional), and J is a 15×32 matrix, computed in the code in two halves for frames i and j.
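The 3-dimensional rotation residual mentioned above can be reproduced in a few lines of plain C++ (a sketch without Eigen; quaternions stored as {w, x, y, z}): the error quaternion is dq = q0⁻¹ ⊗ q, the residual is 2·vec(dq), and the sign is flipped when w < 0, mirroring Utility::positify.

```cpp
#include <array>
#include <cmath>

// Quaternion stored as {w, x, y, z}.
using Quat = std::array<double, 4>;

// Hamilton product a * b.
Quat qmul(const Quat &a, const Quat &b)
{
    return { a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
             a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
             a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
             a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0] };
}

// 3-D rotation residual: 2 * vec(q0^{-1} * q), sign chosen so w >= 0.
std::array<double, 3> rotationResidual(const Quat &q0, const Quat &q)
{
    Quat q0_inv = { q0[0], -q0[1], -q0[2], -q0[3] };  // unit-quaternion inverse
    Quat dq = qmul(q0_inv, q);
    double s = (dq[0] >= 0) ? 2.0 : -2.0;             // the "positify" sign flip
    return { s * dq[1], s * dq[2], s * dq[3] };
}
```

For a small rotation of angle θ, the residual magnitude is 2·sin(θ/2) ≈ θ, which is why a quaternion attitude contributes only 3 residual dimensions.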

  3. Step 3: let F_old be the oldest frame to be marginalized, and P_old a feature point first observed in F_old that is also observed in other frames F_other. Marginalizing F_old and P_old leaves prior information on F_other; this prior is computed exactly like the visual residual and Jacobian in the optimization, only at the optimized values.
// marginalize the feature points whose first observing frame is being removed; this mirrors adding visual observations to the ceres::Problem above
        {
            int feature_index = -1;
            for (auto &it_per_id : f_manager.feature)
            {
                it_per_id.used_num = it_per_id.feature_per_frame.size();
                if (it_per_id.used_num < 4)
                    continue;

                ++feature_index;

                int imu_i = it_per_id.start_frame, imu_j = imu_i - 1;
                if (imu_i != 0)  // only marginalize features whose first observing frame is frame 0
                    continue;

                Vector3d pts_i = it_per_id.feature_per_frame[0].point;  // position on the normalized plane of the first observing frame

                for (auto &it_per_frame : it_per_id.feature_per_frame)
                {
                    imu_j++;
                    if(imu_i != imu_j)
                    {
                        Vector3d pts_j = it_per_frame.point;
                        ProjectionTwoFrameOneCamFactor *f_td = new ProjectionTwoFrameOneCamFactor(pts_i, pts_j, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocity,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                        ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f_td, loss_function,
                                                                                        vector<double *>{para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index], para_Td[0]},
                                                                                        vector<int>{0, 3});
                        marginalization_info->addResidualBlockInfo(residual_block_info);
                    }
                    if(STEREO && it_per_frame.is_stereo)
                    {
                        Vector3d pts_j_right = it_per_frame.pointRight;
                        if(imu_i != imu_j)
                        {
                            ProjectionTwoFrameTwoCamFactor *f = new ProjectionTwoFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                            ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f, loss_function,
                                                                                           vector<double *>{para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]},
                                                                                           vector<int>{0, 4});
                            marginalization_info->addResidualBlockInfo(residual_block_info);
                        }
                        else
                        {
                            ProjectionOneFrameTwoCamFactor *f = new ProjectionOneFrameTwoCamFactor(pts_i, pts_j_right, it_per_id.feature_per_frame[0].velocity, it_per_frame.velocityRight,
                                                                          it_per_id.feature_per_frame[0].cur_td, it_per_frame.cur_td);
                            ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(f, loss_function,
                                                                                           vector<double *>{para_Ex_Pose[0], para_Ex_Pose[1], para_Feature[feature_index], para_Td[0]},
                                                                                           vector<int>{2});
                            marginalization_info->addResidualBlockInfo(residual_block_info);
                        }
                    }
                }
            }
        }
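Note the vector<int>{...} passed to each ResidualBlockInfo above: it is the drop set, listing the indices (within that block's parameter list) of the blocks to be marginalized, e.g. {0, 3} marks para_Pose[imu_i] and para_Feature[feature_index]. A small sketch of the convention, using a hypothetical helper rather than the VINS code:

```cpp
#include <set>
#include <vector>

// Collect the parameter blocks a residual block marks for marginalization.
// drop_set holds indices into that block's own parameter list, e.g. for
// {pose_i, pose_j, extrinsic, feature, td} the drop set {0, 3} marks the
// oldest pose and the feature point.
std::set<const double *> collectDropped(const std::vector<const double *> &blocks,
                                        const std::vector<int> &drop_set)
{
    std::set<const double *> dropped;
    for (int idx : drop_set)
        dropped.insert(blocks[idx]);
    return dropped;
}
```

addResidualBlockInfo uses this information to assign the dropped blocks indices below m in the Hessian, which is what the Schur complement in step 5 relies on.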
  4. Step 4: pre-marginalization, marginalization_info->preMarginalize(): call each residual block's Evaluate() to compute its residual and Jacobian
void MarginalizationInfo::preMarginalize()
{
    for (auto it : factors)
    {   // factors holds the IMU pre-integration blocks (imu_factor), the visual reprojection blocks (e.g. ProjectionTwoFrameOneCamFactor) and the previous marginalization_factor
        it->Evaluate();  // compute this residual block's residual and Jacobian

        std::vector<int> block_sizes = it->cost_function->parameter_block_sizes();  // e.g. imu_factor involves the two neighbouring camera poses
        for (int i = 0; i < static_cast<int>(block_sizes.size()); i++)
        {
            long addr = reinterpret_cast<long>(it->parameter_blocks[i]);
            int size = block_sizes[i];
            if (parameter_block_data.find(addr) == parameter_block_data.end())
            {   // if parameter_block_data does not yet contain it->parameter_blocks[i], deep-copy that parameter block into it
                double *data = new double[size];
                memcpy(data, it->parameter_blocks[i], sizeof(double) * size);
                parameter_block_data[addr] = data;  // addr is the address, data is a copy of the values stored there
            }
        }
            }
        }
    }
}
  5. Step 5: marginalization, marginalization_info->marginalize(): build the Hx = b system used in the optimization and factor it to recover the corresponding Jacobian and residual.
void MarginalizationInfo::marginalize()
{
    int pos = 0;
    for (auto &it : parameter_block_idx)  // states to be marginalized: one pose plus a set of feature points
    {
        it.second = pos;
        pos += localSize(parameter_block_size[it.first]);
    }

    m = pos;
    for (const auto &it : parameter_block_size)  // states to keep
    {
        if (parameter_block_idx.find(it.first) == parameter_block_idx.end())  // i.e. states not in the marginalized set
        {
            parameter_block_idx[it.first] = pos;
            pos += localSize(it.second);
        }
    }

    n = pos - m;
    if(m == 0)
    {
        valid = false;
        printf("unstable tracking...\n");
        return;
    }
    TicToc t_summing;
    Eigen::MatrixXd A(pos, pos);
    Eigen::VectorXd b(pos);
    A.setZero();
    b.setZero();
    // multi-threaded construction of A and b: the least-squares problem reduces to Ax = b, with A = J^T * J and b = -J^T * e
    TicToc t_thread_summing;
    pthread_t tids[NUM_THREADS];
    ThreadsStruct threadsstruct[NUM_THREADS];
    int i = 0;
    for (auto it : factors)
    {
        threadsstruct[i].sub_factors.push_back(it);
        i++;
        i = i % NUM_THREADS;
    }
    for (int i = 0; i < NUM_THREADS; i++)
    {
        TicToc zero_matrix;
        threadsstruct[i].A = Eigen::MatrixXd::Zero(pos,pos);
        threadsstruct[i].b = Eigen::VectorXd::Zero(pos);
        threadsstruct[i].parameter_block_size = parameter_block_size;
        threadsstruct[i].parameter_block_idx = parameter_block_idx;
        int ret = pthread_create( &tids[i], NULL, ThreadsConstructA ,(void*)&(threadsstruct[i]));
        if (ret != 0)
        {
            ROS_WARN("pthread_create error");
            ROS_BREAK();
        }
    }
    for( int i = NUM_THREADS - 1; i >= 0; i--)  
    {
        pthread_join( tids[i], NULL ); 
        A += threadsstruct[i].A;
        b += threadsstruct[i].b;
    }

    //TODO
    Eigen::MatrixXd Amm = 0.5 * (A.block(0, 0, m, m) + A.block(0, 0, m, m).transpose());
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);

    Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd((saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() * saes.eigenvectors().transpose();
    Eigen::VectorXd bmm = b.segment(0, m);
    Eigen::MatrixXd Amr = A.block(0, m, m, n);
    Eigen::MatrixXd Arm = A.block(m, 0, n, m);
    Eigen::MatrixXd Arr = A.block(m, m, n, n);  // the n-by-n sub-block starting at (m, m)
    Eigen::VectorXd brr = b.segment(m, n);
    A = Arr - Arm * Amm_inv * Amr;
    b = brr - Arm * Amm_inv * bmm;
    // factor A and b to obtain the Jacobian and residual that ceres needs for the least-squares problem
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(A);
    Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
    Eigen::VectorXd S_inv = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));
    Eigen::VectorXd S_sqrt = S.cwiseSqrt();
    Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
    linearized_jacobians = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
    linearized_residuals = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose() * b;
}
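To make the block elimination above concrete, here is a scalar-block miniature (m = n = 1, toy numbers, not the VINS code): the kept block's system is A* = Arr − Arm·Amm⁻¹·Amr, b* = brr − Arm·Amm⁻¹·bmm, and in the scalar case the square-root recovery reduces to J = √A*, r = b*/√A*, which indeed satisfies JᵀJ = A* and Jᵀr = b*.

```cpp
#include <cmath>

// The recovered prior: linearized Jacobian J and residual r.
struct Prior { double J, r; };

// Schur complement for a 2x2 block system with scalar blocks:
//   A = [Amm Amr; Arm Arr], b = [bmm; brr]; marginalize the first block.
Prior schurScalar(double Amm, double Amr, double Arm, double Arr,
                  double bmm, double brr)
{
    double Astar = Arr - Arm * (1.0 / Amm) * Amr;  // reduced information matrix
    double bstar = brr - Arm * (1.0 / Amm) * bmm;  // reduced right-hand side
    double J = std::sqrt(Astar);                   // plays the role of linearized_jacobians
    double r = bstar / J;                          // plays the role of linearized_residuals
    return { J, r };
}
```

In the real code the square root is taken through the eigendecomposition A* = V·S·Vᵀ, giving J = S^{1/2}·Vᵀ and r = S^{-1/2}·Vᵀ·b*, with near-zero eigenvalues clamped to keep the factorization stable.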
  6. Step 6: add this prior information to the next optimization round
if (last_marginalization_info && last_marginalization_info->valid)
    {
        // construct new marginlization_factor
        MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
        problem.AddResidualBlock(marginalization_factor, NULL,
                                 last_marginalization_parameter_blocks);  // the states kept from the previous marginalization
    }

A careful reader will notice that this prior is defined consistently with the residual blocks above: each of those defines one sub-residual block, while the prior is all of them merged into a single block.

Schur complement derivation

Thanks again to the VIO course lecturer at 深蓝学院 (Shenlan Academy); the derivation slides are pasted directly here:
(figures: eight marginalization derivation slides)

In VINS the sliding-window algorithm works as follows. Because of how ceres solves the least-squares problem, the code folds the information matrix into the Jacobian and residual, so no separate computation of the information matrix appears:
(figures: four sliding-window slides)

5. Timing Analysis

Stage                            Average time (ms)
Front end                        18.8589
Initialization                   1116.17
Back-end optimization setup      17.6578
marginalization()                19.1007
Back-end ceres solve             24.2203
Whole optimization()             45.8803
Whole back end                   47.725

Notes:
Values are rounded, so the total may come out slightly smaller than the sum of the parts.
Test platform: Dell G7-7588; Intel i7-8750H @ 2.2 GHz; Ubuntu 16.04.
Dataset: MH_01 from the EuRoC ROS bags recommended on the vins-fusion GitHub page, nearly 3700 image frames in total.
The camera intrinsics and camera-IMU extrinsics were known, so initialization did not estimate them. The back end ran 1831 times while the front end processed 3682 frames, showing that when the frame rate exceeds the back-end processing speed, image information is dropped.

6. C++ Notes

This codebase makes heavy use of C++ pointers and references, so reading it is also a good opportunity to practice pointer and reference programming.
Published while still a work in progress; this post will be updated over time, and corrections are welcome.
