2021SC@SDUSC
visual_feature: A First Look
The main task of this node is to generate visual feature points and hand them to the next visual node, which computes the visual odometry.
0. Team Progress and Plans
The semester is almost half over, so it is a good time to summarize our code-analysis work so far. For LVI-SAM we started from the ideas behind the code; after studying it and observing the node graph of an actual run, we split the analysis work up by node.
a. Progress
Our team first analyzed the lidar module, one of the two major modules of LVI-SAM. So far we have finished three of its nodes: /lvi_sam_imuPreintegration (IMU preintegration), /lvi_sam_featureExtraction (lidar feature extraction), and /lvi_sam_imageProjection (depth-image generation). Starting from the code and reading it closely, we kept filling in the related background, from ROS syntax to the mathematics the code relies on, and learned a great deal along the way.
b. Plans
Next, the team will analyze /lvi_sam_mapOptmization (the factor-graph optimization node), the last node of the lidar module. After that we will tie the lidar nodes together and compare them with LIO-SAM to see what LVI-SAM improves. Finally, we will attempt the visual module, though we cannot guarantee finishing it by the end of the semester. While one teammate continues with /lvi_sam_mapOptmization, the rest of us will start on the visual module instead of waiting for that analysis to finish.
1. Inputs and Outputs
We now begin the analysis of the lvi_sam_visual_feature node, starting with its inputs and outputs.
a. Input
The inputs are the raw image stream and the current point cloud that the preceding imageProjection node has transformed into the world frame.
ros::Subscriber sub_img = n.subscribe(IMAGE_TOPIC, 5, img_callback);
ros::Subscriber sub_lidar = n.subscribe(POINT_CLOUD_TOPIC, 5, lidar_callback);
b. Output
The outputs are feature points annotated with depth, an image with the tracked features drawn on it, and a restart signal.
pub_feature = n.advertise<sensor_msgs::PointCloud>(PROJECT_NAME + "/vins/feature/feature", 5);
pub_match = n.advertise<sensor_msgs::Image> (PROJECT_NAME + "/vins/feature/feature_img", 5);
pub_restart = n.advertise<std_msgs::Bool> (PROJECT_NAME + "/vins/feature/restart", 5);
2. The main Function
We start the analysis from the main function. First, the ROS node is initialized. Although it is initialized under the name vins, as an earlier post explained, this name is overridden by the launch file. A node handle is then created and the logging level is set to Warn. Finally, readParameters() loads the parameters this node needs; the parameters themselves are defined in another file in the same folder, Parameters.cpp.
ros::init(argc, argv, "vins");
ros::NodeHandle n;
ROS_INFO("\033[1;32m----> Visual Feature Tracker Started.\033[0m");
ros::console::set_logger_level(ROSCONSOLE_DEFAULT_NAME, ros::console::levels::Warn);
readParameters(n);
a. readParameters
void readParameters(ros::NodeHandle &n)
{
    std::string config_file;
    n.getParam("vins_config_file", config_file);
    cv::FileStorage fsSettings(config_file, cv::FileStorage::READ);
    if (!fsSettings.isOpened())
    {
        std::cerr << "ERROR: Wrong path to settings" << std::endl;
    }

    // project name
    fsSettings["project_name"] >> PROJECT_NAME;
    std::string pkg_path = ros::package::getPath(PROJECT_NAME);

    // sensor topics
    fsSettings["image_topic"] >> IMAGE_TOPIC;
    fsSettings["imu_topic"] >> IMU_TOPIC;
    fsSettings["point_cloud_topic"] >> POINT_CLOUD_TOPIC;

    // lidar configurations
    fsSettings["use_lidar"] >> USE_LIDAR;
    fsSettings["lidar_skip"] >> LIDAR_SKIP;

    // feature and image settings
    MAX_CNT = fsSettings["max_cnt"];
    MIN_DIST = fsSettings["min_dist"];
    ROW = fsSettings["image_height"];
    COL = fsSettings["image_width"];
    FREQ = fsSettings["freq"];
    F_THRESHOLD = fsSettings["F_threshold"];
    SHOW_TRACK = fsSettings["show_track"];
    EQUALIZE = fsSettings["equalize"];

    L_C_TX = fsSettings["lidar_to_cam_tx"];
    L_C_TY = fsSettings["lidar_to_cam_ty"];
    L_C_TZ = fsSettings["lidar_to_cam_tz"];
    L_C_RX = fsSettings["lidar_to_cam_rx"];
    L_C_RY = fsSettings["lidar_to_cam_ry"];
    L_C_RZ = fsSettings["lidar_to_cam_rz"];

    // fisheye mask
    FISHEYE = fsSettings["fisheye"];
    if (FISHEYE == 1)
    {
        std::string mask_name;
        fsSettings["fisheye_mask"] >> mask_name;
        FISHEYE_MASK = pkg_path + mask_name;
    }

    // camera config
    CAM_NAMES.push_back(config_file);

    WINDOW_SIZE = 20;
    STEREO_TRACK = false;
    FOCAL_LENGTH = 460;
    PUB_THIS_FRAME = false;

    if (FREQ == 0)
        FREQ = 100;

    fsSettings.release();
    usleep(100);
}
The function first fetches the parameter "vins_config_file" from the parameter server. Since there is a getParam, something must have set the parameter; a global search for "vins_config_file" in VS Code finds it being set in module_sam.launch under the launch folder.
<!-- VINS config -->
<param name="vins_config_file" type="string" value="$(find lvi_sam)/config/params_camera.yaml" />
This parameter holds the path of params_camera.yaml. Once config_file contains that path, cv::FileStorage reads the various parameters out of the YAML file. Browsing params_camera.yaml shows that it covers topic names, lidar-related switches, the lidar-to-camera extrinsics, the camera model, the camera-IMU calibration, and more. The file is well commented, and it is used by more nodes than just this one. Its contents are shown below.
%YAML:1.0

# Project
project_name: "lvi_sam"

#common parameters
imu_topic: "/imu_raw"
image_topic: "/camera/image_raw"
point_cloud_topic: "lvi_sam/lidar/deskew/cloud_deskewed"

# Lidar Params
use_lidar: 1                     # whether to use depth info from lidar or not
lidar_skip: 3                    # skip this amount of scans
align_camera_lidar_estimation: 1 # align camera and lidar estimation for visualization

# lidar to camera extrinsic
lidar_to_cam_tx: 0.05
lidar_to_cam_ty: -0.07
lidar_to_cam_tz: -0.07
lidar_to_cam_rx: 0.0
lidar_to_cam_ry: 0.0
lidar_to_cam_rz: -0.04

# camera model
model_type: MEI
camera_name: camera

# Mono camera config
image_width: 720
image_height: 540
mirror_parameters:
   xi: 1.9926618269451453
distortion_parameters:
   k1: -0.0399258932468764
   k2: 0.15160828121223818
   p1: 0.00017756967825777937
   p2: -0.0011531239076798612
projection_parameters:
   gamma1: 669.8940458885896
   gamma2: 669.1450614220616
   u0: 377.9459252967363
   v0: 279.63655686698144
fisheye_mask: "/config/fisheye_mask_720x540.jpg"

#imu parameters   The more accurate parameters you provide, the better performance
acc_n: 0.02    # accelerometer measurement noise standard deviation.
gyr_n: 0.01    # gyroscope measurement noise standard deviation.
acc_w: 0.002   # accelerometer bias random walk noise standard deviation.
gyr_w: 4.0e-5  # gyroscope bias random walk noise standard deviation.
g_norm: 9.805

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0  # 0  Have an accurate extrinsic parameter. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                       # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                       # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate it. Do some rotation movement at beginning.

#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 0,  0, -1,
          -1,  0,  0,
           0,  1,  0]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.006422381632411965, 0.019939800449065116, 0.03364235163589248]

#feature tracker parameters
max_cnt: 150      # max feature number in feature tracking
min_dist: 20      # min distance between two features
freq: 20          # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0  # RANSAC threshold (pixel)
show_track: 1     # publish tracking image as topic
equalize: 1       # if the image is too dark or too bright, turn on equalization to find enough features
fisheye: 1        # if using a fisheye camera, turn this on; a circle mask will be loaded to remove noisy points near the edge

#optimization parameters
max_solver_time: 0.035  # max solver iteration time (ms), to guarantee real time
max_num_iterations: 10  # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#unsynchronization parameters
estimate_td: 0  # online estimate time offset between camera and imu
td: 0           # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#rolling shutter parameters
rolling_shutter: 0     # 0: global shutter camera, 1: rolling shutter camera
rolling_shutter_tr: 0  # unit: s. rolling shutter readout time per frame (from data sheet)

#loop closure parameters
loop_closure: 1  # start loop closure
skip_time: 0.0
skip_dist: 0.0
debug_image: 0   # save raw image in loop detector for visualization purposes; set 0 to disable
match_image_scale: 0.5
vocabulary_file: "/config/brief_k10L6.bin"
brief_pattern_file: "/config/brief_pattern.yml"
With the first few lines understood, the rest of readParameters() simply assigns the values from the parameter file to the corresponding variables, i.e., it moves the parameters from disk into memory.
b. The FeatureTracker Class
In the next two lines of main, the camera parameters are read through the FeatureTracker class. Let us take a look at FeatureTracker.
// read camera params
for (int i = 0; i < NUM_OF_CAM; i++)
    trackerData[i].readIntrinsicParameter(CAM_NAMES[i]);
trackerData itself is declared at file scope near the top of the file.
// feature tracker variables
FeatureTracker trackerData[NUM_OF_CAM];
Below is the class declaration from the header; we will explore its implementation in upcoming posts.
class FeatureTracker
{
public:
    FeatureTracker();

    void readImage(const cv::Mat &_img, double _cur_time);
    void setMask();
    void addPoints();
    bool updateID(unsigned int i);
    void readIntrinsicParameter(const string &calib_file);
    void showUndistortion(const string &name);
    void rejectWithF();
    void undistortedPoints();

    cv::Mat mask;
    cv::Mat fisheye_mask;
    cv::Mat prev_img, cur_img, forw_img;
    vector<cv::Point2f> n_pts;
    vector<cv::Point2f> prev_pts, cur_pts, forw_pts;
    vector<cv::Point2f> prev_un_pts, cur_un_pts;
    vector<cv::Point2f> pts_velocity;
    vector<int> ids;
    vector<int> track_cnt;
    map<int, cv::Point2f> cur_un_pts_map;
    map<int, cv::Point2f> prev_un_pts_map;
    camodocal::CameraPtr m_camera;
    double cur_time;
    double prev_time;

    static int n_id;
};
c. Remaining Parts
Continuing with main, the next part loads the fisheye mask used to remove features near the image boundary.
if (FISHEYE)
{
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        trackerData[i].fisheye_mask = cv::imread(FISHEYE_MASK, 0);
        if (!trackerData[i].fisheye_mask.data)
        {
            ROS_ERROR("load fisheye mask fail");
            ROS_BREAK();
        }
        else
            ROS_INFO("load mask success");
    }
}
Then a DepthRegister object is constructed; this class is declared and defined in the same header as FeatureTracker.
class DepthRegister
{
public:
    ros::NodeHandle n;
    // publisher for visualization
    ros::Publisher pub_depth_feature;
    ros::Publisher pub_depth_image;
    ros::Publisher pub_depth_cloud;

    tf::TransformListener listener;
    tf::StampedTransform transform;

    const int num_bins = 360;
    vector<vector<PointType>> pointsArray;

    DepthRegister(ros::NodeHandle n_in);

    sensor_msgs::ChannelFloat32 get_depth(const ros::Time& stamp_cur, const cv::Mat& imageCur,
                                          const pcl::PointCloud<PointType>::Ptr& depthCloud,
                                          const camodocal::CameraPtr& camera_model,
                                          const vector<geometry_msgs::Point32>& features_2d);

    void getColor(float p, float np, float& r, float& g, float& b);
};
The details of this class will likewise be covered in a later post. Once all the preparation is done, the node subscribes to the relevant topics and advertises its publishers. The callbacks invoked when messages arrive will also be discussed in later posts.
// subscribe to the raw image and the lidar cloud in the world frame
ros::Subscriber sub_img = n.subscribe(IMAGE_TOPIC, 5, img_callback);
ros::Subscriber sub_lidar = n.subscribe(POINT_CLOUD_TOPIC, 5, lidar_callback);
if (!USE_LIDAR)
    sub_lidar.shutdown();

// messages for the vins estimator
pub_feature = n.advertise<sensor_msgs::PointCloud>(PROJECT_NAME + "/vins/feature/feature", 5);
pub_match   = n.advertise<sensor_msgs::Image>     (PROJECT_NAME + "/vins/feature/feature_img", 5);
pub_restart = n.advertise<std_msgs::Bool>         (PROJECT_NAME + "/vins/feature/restart", 5);
The rest of the main function uses ROS's multithreading mechanism: a spinner with two threads services the callbacks for incoming image and lidar messages.
ros::MultiThreadedSpinner spinner(2);
spinner.spin();