How the Prior Factor Is Built in VINS-Mono
Program flow
The nonlinear optimization is driven from Estimator::processImage(), which checks the parallax between the new frame and the other frames in the sliding window to decide the marginalization strategy:
/* Check the new frame's parallax to decide the marginalization type */
if (f_manager.addFeatureCheckParallax(frame_count, image, td))
marginalization_flag = MARGIN_OLD;
else
marginalization_flag = MARGIN_SECOND_NEW;
After the camera-IMU extrinsic estimation and the initialization have completed, solver_flag = NON_LINEAR marks that the VINS system has entered the nonlinear-optimization stage:
if(ESTIMATE_EXTRINSIC == 2)
{
......initial_ex_rotation.CalibrationExRotation(); // estimate the extrinsic rotation
}
if (solver_flag == INITIAL)
{
if (frame_count == WINDOW_SIZE)
{
.... // VINS initialization
}
else
frame_count++;
}
else
{
.....
solveOdometry();
.....
}
Execution then enters Estimator::solveOdometry(), which calls Estimator::optimization(). Note that triangulation is performed once more here: the camera poses in the window have just been updated, so the landmark depths are re-triangulated against the refined poses before optimizing:
void Estimator::solveOdometry()
{
if (frame_count < WINDOW_SIZE)
return;
if (solver_flag == NON_LINEAR)
{
TicToc t_tri;
f_manager.triangulate(Ps, tic, ric);
ROS_DEBUG("triangulation costs %f", t_tri.toc());
optimization();
}
}
Next, inside Estimator::optimization(), the parameter blocks to be optimized are added to the solver:
for (int i = 0; i < WINDOW_SIZE + 1; i++)
{
.....
problem.AddParameterBlock(para_Pose[i], SIZE_POSE, local_parameterization); // pose of frame i: p, q
problem.AddParameterBlock(para_SpeedBias[i], SIZE_SPEEDBIAS); // velocity and IMU biases: v, ba, bg
}
for (int i = 0; i < NUM_OF_CAM; i++)
{
.....
problem.AddParameterBlock(para_Ex_Pose[i], SIZE_POSE, local_parameterization); // camera-IMU extrinsics
....
}
if (ESTIMATE_TD)
{
problem.AddParameterBlock(para_Td[0], 1); // camera-IMU time offset
}
Then the residual blocks, i.e. the so-called factors, are added to the solver:
if (last_marginalization_info) // marginalization info exists == prior factor
{
........
problem.AddResidualBlock(marginalization_factor, NULL,
                         last_marginalization_parameter_blocks); // adding last_marginalization_parameter_blocks as a residual block
// means these blocks are also optimized by Ceres, feeding the dx term of the prior factor's residual
}
for (int i = 0; i < WINDOW_SIZE; i++) // IMU factors
{
.....
problem.AddResidualBlock(imu_factor, NULL, para_Pose[i], para_SpeedBias[i], para_Pose[j], para_SpeedBias[j]);
}
for (auto &it_per_id : f_manager.feature)
{
....
for (auto &it_per_frame : it_per_id.feature_per_frame)
{
if (ESTIMATE_TD) // projection factor with time offset
{
.......
problem.AddResidualBlock(f_td, loss_function, para_Pose[imu_i],
......
}
else // visual (projection) factor
{
.....
problem.AddResidualBlock(f, loss_function, para_Pose[imu_i], para_Pose[imu_j], para_Ex_Pose[0], para_Feature[feature_index]);
}
.....
}
}
After all factors are added, ceres::Solve(options, &problem, &summary) is called, which updates the parameter blocks being optimized.
Once the optimization finishes, the last step is marginalization. Taking marginalization_flag == MARGIN_OLD as the example, we focus on how the prior factor is constructed.
First:
if (last_marginalization_info)
{
vector<int> drop_set;
for (int i = 0; i < static_cast<int>(last_marginalization_parameter_blocks.size()); i++)
{
if (last_marginalization_parameter_blocks[i] == para_Pose[0] ||
last_marginalization_parameter_blocks[i] == para_SpeedBias[0])
drop_set.push_back(i);
}
// construct the new marginalization_factor
MarginalizationFactor *marginalization_factor = new MarginalizationFactor(last_marginalization_info);
ResidualBlockInfo *residual_block_info = new ResidualBlockInfo(marginalization_factor, NULL,
last_marginalization_parameter_blocks,
drop_set);
marginalization_info->addResidualBlockInfo(residual_block_info);
}
Here the oldest frame's pose, velocity, and biases are located inside last_marginalization_parameter_blocks; their positions are recorded in drop_set, which is passed into the residual-block info. After that, the IMU constraint between the oldest and second-oldest frames, as well as the landmarks observed by the oldest frame, are also packed into residual blocks.
Before the marginalization itself, marginalization_info->preMarginalize() is executed to evaluate the residuals and Jacobian matrices of all these to-be-marginalized residual blocks.
Finally, MarginalizationInfo::marginalize() builds the prior from all the factor blocks to be marginalized:
Eigen::MatrixXd Amm = 0.5 * (A.block(0, 0, m, m) + A.block(0, 0, m, m).transpose());
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes(Amm);
Eigen::MatrixXd Amm_inv = saes.eigenvectors() * Eigen::VectorXd((saes.eigenvalues().array() > eps).select(saes.eigenvalues().array().inverse(), 0)).asDiagonal() * saes.eigenvectors().transpose();
Eigen::VectorXd bmm = b.segment(0, m);
Eigen::MatrixXd Amr = A.block(0, m, m, n);
Eigen::MatrixXd Arm = A.block(m, 0, n, m);
Eigen::MatrixXd Arr = A.block(m, m, n, n);
Eigen::VectorXd brr = b.segment(m, n);
A = Arr - Arm * Amm_inv * Amr;
b = brr - Arm * Amm_inv * bmm;
Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> saes2(A);
/*class Eigen::Select< ConditionMatrixType, ThenMatrixType, ElseMatrixType >*/
Eigen::VectorXd S = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array(), 0));
Eigen::VectorXd S_inv = Eigen::VectorXd((saes2.eigenvalues().array() > eps).select(saes2.eigenvalues().array().inverse(), 0));
Eigen::VectorXd S_sqrt = S.cwiseSqrt();
Eigen::VectorXd S_inv_sqrt = S_inv.cwiseSqrt();
linearized_jacobians = S_sqrt.asDiagonal() * saes2.eigenvectors().transpose();
linearized_residuals = S_inv_sqrt.asDiagonal() * saes2.eigenvectors().transpose() * b;
Here the Schur-complemented matrix A is eigendecomposed as A = V S V^T (with V = saes2.eigenvectors() and S the thresholded eigenvalues), and the element-wise square roots S_sqrt and S_inv_sqrt are formed, giving

J = \sqrt{S}\, V^T

r = \sqrt{S^{-1}}\, V^T b

so that J^T J = V S V^T = A and J^T r = V \sqrt{S} \sqrt{S^{-1}} V^T b = b. This is how the Jacobian and residual of the marginalization prior are recovered from the information matrix, in a form Ceres can consume as an ordinary cost function.
As for the underlying math: because the information matrix H is symmetric positive semi-definite, its eigendecomposition plays the role of an SVD, with the eigenvalue matrix S supplying the "singular values" that license the square-root factorization above; see this blog post for a derivation: https://blog.csdn.net/weixin_41394379/article/details/89975386?utm_medium=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-8.nonecase&depth_1-utm_source=distribute.pc_relevant.none-task-blog-BlogCommendFromMachineLearnPai2-8.nonecase