11-[LVI-SAM] A Brief Analysis of img_callback in visual_feature

2021SC@SDUSC


In the previous blog we analyzed lidar_callback, one of the callback functions in LVI-SAM. From that post we know that lidar_callback subscribes to the lidar messages, processes them to obtain depth information, and places the resulting depth-annotated point cloud into shared memory for the other callback of visual_feature to use. That other callback is the subject of this blog.

From blog #9 we already know what img_callback does: it tracks feature points in each newly arrived image, which is the main task of this node. Below is an overview of this callback; later blogs will examine each part of it in detail.

0. Initialization and validity checks

First, the callback checks whether this is the first frame. If it is, the state is initialized, the timestamp of the first image frame is recorded, and the function returns.

Next, it checks whether the camera data stream is stable; if not, a restart message is published so the system can restart. The stream is considered unstable when the current image's timestamp is much later than the previous one (more than 1 s apart) or older than the previous one. Before publishing restart, the tracker state is reset.

double cur_img_time = img_msg->header.stamp.toSec();

// handle the first image frame
if(first_image_flag)
{
    first_image_flag = false;
    first_image_time = cur_img_time;
    last_image_time = cur_img_time;
    return;
}
// detect an unstable camera stream
if (cur_img_time - last_image_time > 1.0 || cur_img_time < last_image_time)
{
    ROS_WARN("image discontinue! reset the feature tracker!");
    // reset the tracker state
    first_image_flag = true; 
    last_image_time = 0;
    pub_count = 1;
    std_msgs::Bool restart_flag;
    restart_flag.data = true;
    pub_restart.publish(restart_flag);
    return;
}

1. Data preparation

In this preparation step, we control the publishing frequency and convert the image encoding.

a. Controlling the publishing frequency

We need to limit how often feature points are published: not every incoming image frame triggers a publication. Images arrive at a high rate, and publishing at that rate would put too much load on ROS's communication layer, so the publishing frequency is capped.

if (round(1.0 * pub_count / (cur_img_time - first_image_time)) <= FREQ)
{
    PUB_THIS_FRAME = true;
    // reset the frequency control
    if (abs(1.0 * pub_count / (cur_img_time - first_image_time) - FREQ) < 0.01 * FREQ)
    {
        first_image_time = cur_img_time;
        pub_count = 0;
    }
}
else
{
    PUB_THIS_FRAME = false;
}

A few of the variables involved deserve explanation.

pub_count: # number of frames published so far in the current window
FREQ: 20 # target frequency for publishing the optical-flow tracking results; the author sets it to 20 Hz in the config file
PUB_THIS_FRAME: # flag marking whether the current frame's feature points should be published

So the whole control block reads as follows: if the number of published frames divided by the time elapsed since the reference ("first") image is at most FREQ (20 Hz by default, adjustable), the current frame is allowed to be published. In addition, once the measured publishing rate gets within 1% of FREQ, the frequency window is reset: the current frame's timestamp becomes the new first_image_time and pub_count is set back to 0.

I suspect this is done to avoid large fluctuations. For example, if no images arrive for a relatively long time (perhaps for network reasons) and then a burst of images suddenly comes in, publishing them all could momentarily exceed 20 Hz even though the average over the whole history stays below 20 Hz. Periodically resetting the reference time keeps the rate measured over a recent window, which is a neat way to avoid such swings.
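
To see how this gate behaves in practice, here is a minimal standalone sketch (not from LVI-SAM; the timestamps, loop and printout are made up for illustration) that replays the same logic on an artificial stream with a 0.5 s gap followed by a burst at roughly 100 Hz, printing which frames would be let through:

#include <cmath>
#include <cstdio>

int main()
{
    const double FREQ = 20.0;       // target publish rate in Hz, as in the config file
    double first_image_time = 0.0;  // reference time of the current frequency window
    int pub_count = 0;              // frames published since the window started

    // hypothetical stream: nothing for 0.5 s, then frames every 10 ms
    for (double t = 0.5; t < 0.7; t += 0.01)
    {
        bool pub_this_frame = std::round(1.0 * pub_count / (t - first_image_time)) <= FREQ;
        if (pub_this_frame)
        {
            // same reset rule as the callback: once the measured rate is within 1% of FREQ,
            // restart the measurement window at the current frame
            if (std::fabs(1.0 * pub_count / (t - first_image_time) - FREQ) < 0.01 * FREQ)
            {
                first_image_time = t;
                pub_count = 0;
            }
            ++pub_count;  // corresponds to pub_count++ later in the callback
        }
        std::printf("t = %.2f  publish = %d\n", t, pub_this_frame ? 1 : 0);
    }
    return 0;
}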

b. Image format conversion and reading the image

The sensor_msgs::Image data in img_msg is read, converted to MONO8 format and received into cv::Mat show_img. Here 8UC1 means an 8-bit, single-channel grayscale image, and mono8 is simply an 8UC1 image with a defined grayscale meaning. The point of the conversion is to get the picture from the ROS representation into the OpenCV one, which makes the subsequent processing convenient.

A quick note on cv_bridge: its toCvCopy function converts a ROS image message into an OpenCV image. ROS passes images around in its own sensor_msgs/Image message format, but many users want to work with the images in OpenCV. CvBridge is a ROS library that provides the interface between ROS and OpenCV; it can be found in the cv_bridge package of the vision_opencv stack.

cv_bridge::CvImageConstPtr ptr;
if (img_msg->encoding == "8UC1")
{
    sensor_msgs::Image img;
    img.header = img_msg->header;
    img.height = img_msg->height;
    img.width = img_msg->width;
    img.is_bigendian = img_msg->is_bigendian;
    img.step = img_msg->step;
    img.data = img_msg->data;
    img.encoding = "mono8";
    ptr = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::MONO8);
}
else
    ptr = cv_bridge::toCvCopy(img_msg, sensor_msgs::image_encodings::MONO8);

2. Feature extraction and optical-flow tracking on the newest frame (core)

The most central statement of img_callback() appears here: readImage(). This function performs the feature processing and the optical-flow tracking, and internally it calls essentially every function in feature_tracker.cpp. Before calling it, the code first distinguishes between the monocular and stereo cases.

For a monocular camera, readImage() is called directly; that is the core part, which we will analyze in a later blog. For a stereo camera, the second camera's image is only given adaptive histogram equalization here. EQUALIZE is set to 1 when the light is too bright or too dark, and enables the histogram equalization.

The two lines at the beginning are also worth a brief note; see the comments in the code below.

cv::Mat show_img = ptr->image;  // img_msg / img are sensor_msgs-format data; cv_bridge is the bridge that converts them into cv::Mat for the image processing that follows
TicToc t_r; // timing helper class
for (int i = 0; i < NUM_OF_CAM; i++)
{
    ROS_DEBUG("processing camera %d", i);
    if (i != 1 || !STEREO_TRACK)
        trackerData[i].readImage(ptr->image.rowRange(ROW * i, ROW * (i + 1)), cur_img_time);
    else
    {
        if (EQUALIZE)
        {
            cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
            clahe->apply(ptr->image.rowRange(ROW * i, ROW * (i + 1)), trackerData[i].cur_img);
        }
        else
            trackerData[i].cur_img = ptr->image.rowRange(ROW * i, ROW * (i + 1));
    }

    #if SHOW_UNDISTORTION
    trackerData[i].showUndistortion("undistrotion_" + std::to_string(i));
    #endif
}

3. Updating the global ids of newly added feature points

If completed (set by updateID()) is true, not all ids have been updated yet and the loop continues; once it is false, all ids have been updated and the loop breaks. Note that n_id is a static member, so it keeps accumulating across frames.

for (unsigned int i = 0;; i++)
{
    bool completed = false;
    for (int j = 0; j < NUM_OF_CAM; j++)
        if (j != 1 || !STEREO_TRACK)
            completed |= trackerData[j].updateID(i);
    if (!completed)
        break;
}
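
For context, updateID() in feature_tracker.cpp does roughly the following (a sketch based on the VINS-Mono front end that this node reuses; treat the exact body as an approximation): a feature whose id is still -1 is a newly detected point and receives the next value of the static counter n_id, and the function returns false once i runs past the last tracked feature, which is what keeps completed false and ends the outer loop above.

// sketch of FeatureTracker::updateID (based on the VINS-Mono front end)
bool FeatureTracker::updateID(unsigned int i)
{
    if (i < ids.size())
    {
        if (ids[i] == -1)      // newly detected feature without a global id yet
            ids[i] = n_id++;   // n_id is static, so ids keep growing across frames
        return true;
    }
    else
        return false;          // i is past the last feature: the outer loop can break
}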

4. Undistorting, packaging and publishing the feature points to pub_feature

The detailed steps are explained in the code comments below.

The feature id, the undistorted 3D point on the normalized plane (x, y, z = 1), the 2D pixel point (u, v) and the feature velocity (vx, vy) are packed into feature_points, an instance of sensor_msgs::PointCloudPtr, and published to pub_feature.

// the outermost if (PUB_THIS_FRAME) statement is omitted here; see the full code in the appendix
pub_count++; // one more published frame
// channels used to package the message
sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
sensor_msgs::ChannelFloat32 id_of_point;
sensor_msgs::ChannelFloat32 u_of_point; // pixel coordinate u
sensor_msgs::ChannelFloat32 v_of_point; // pixel coordinate v
sensor_msgs::ChannelFloat32 velocity_x_of_point; // velocity x
sensor_msgs::ChannelFloat32 velocity_y_of_point; // velocity y

feature_points->header.stamp = img_msg->header.stamp;
feature_points->header.frame_id = "vins_body";

vector<set<int>> hash_ids(NUM_OF_CAM); // one id set per camera
for (int i = 0; i < NUM_OF_CAM; i++) // loop over cameras
{
    auto &un_pts = trackerData[i].cur_un_pts; 
    auto &cur_pts = trackerData[i].cur_pts;
    auto &ids = trackerData[i].ids;
    auto &pts_velocity = trackerData[i].pts_velocity;
    for (unsigned int j = 0; j < ids.size(); j++) // loop over feature points
    {
        if (trackerData[i].track_cnt[j] > 1) // only publish features tracked more than once
        {
            int p_id = ids[j];
            hash_ids[i].insert(p_id);
            geometry_msgs::Point32 p;
            p.x = un_pts[j].x;
            p.y = un_pts[j].y;
            p.z = 1;

            feature_points->points.push_back(p);
            id_of_point.values.push_back(p_id * NUM_OF_CAM + i);
            u_of_point.values.push_back(cur_pts[j].x);
            v_of_point.values.push_back(cur_pts[j].y);
            velocity_x_of_point.values.push_back(pts_velocity[j].x);
            velocity_y_of_point.values.push_back(pts_velocity[j].y);
        }
    }
}

// attach the channels and prepare to publish
feature_points->channels.push_back(id_of_point);
feature_points->channels.push_back(u_of_point);
feature_points->channels.push_back(v_of_point);
feature_points->channels.push_back(velocity_x_of_point);
feature_points->channels.push_back(velocity_y_of_point);

// get the depth cloud from shared memory
pcl::PointCloud<PointType>::Ptr depth_cloud_temp(new pcl::PointCloud<PointType>());
mtx_lidar.lock();
*depth_cloud_temp = *depthCloud;
mtx_lidar.unlock();

// query a depth value for each feature
sensor_msgs::ChannelFloat32 depth_of_points = depthRegister->get_depth(img_msg->header.stamp, show_img, depth_cloud_temp, trackerData[0].m_camera, feature_points->points);
feature_points->channels.push_back(depth_of_points);

// skip the first frame, since it has no optical-flow velocity yet
if (!init_pub)
{
    init_pub = 1;
}
else
    pub_feature.publish(feature_points);
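
Before diving into get_depth, it may help to see how a downstream node (the VINS estimator, in this system) would read this message back. The channels are indexed in the order they were pushed above: id, u, v, velocity_x, velocity_y and, after get_depth, depth. Below is a hedged sketch of such a consumer callback (the callback name is made up; the unpacking simply mirrors the packing code above):

#include <sensor_msgs/PointCloud.h>

extern const int NUM_OF_CAM;  // same constant as in the publisher

void feature_callback(const sensor_msgs::PointCloudConstPtr &feature_msg)
{
    // channels[0..5] follow the publishing order: id, u, v, vx, vy, depth
    for (size_t j = 0; j < feature_msg->points.size(); ++j)
    {
        int v_id       = (int)feature_msg->channels[0].values[j];
        int feature_id = v_id / NUM_OF_CAM;   // inverse of p_id * NUM_OF_CAM + i
        int camera_id  = v_id % NUM_OF_CAM;

        double x  = feature_msg->points[j].x; // undistorted, normalized plane (z = 1)
        double y  = feature_msg->points[j].y;
        double u  = feature_msg->channels[1].values[j];  // pixel coordinates
        double v  = feature_msg->channels[2].values[j];
        double vx = feature_msg->channels[3].values[j];  // feature velocity
        double vy = feature_msg->channels[4].values[j];
        double depth = feature_msg->channels[5].values[j]; // lidar depth, -1 if unavailable

        // ... hand (feature_id, camera_id, x, y, u, v, vx, vy, depth) to the estimator
    }
}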

The get_depth function (core)

  1. Initialize the depth channel that will be returned

    sensor_msgs::ChannelFloat32 depth_of_point;
    depth_of_point.name = "depth";
    depth_of_point.values.resize(features_2d.size(), -1);
    
  2. Check whether the depth cloud is available

    if (depthCloud->size() == 0)
        return depth_of_point;
    
  3. Look up the tf transform at the current image time

    try{
        listener.waitForTransform("vins_world", "vins_body_ros", stamp_cur, ros::Duration(0.01));
        listener.lookupTransform("vins_world", "vins_body_ros", stamp_cur, transform);
    } 
    catch (tf::TransformException ex){
        // ROS_ERROR("image no tf");
        return depth_of_point;
    }
    
    double xCur, yCur, zCur, rollCur, pitchCur, yawCur;
    xCur = transform.getOrigin().x();
    yCur = transform.getOrigin().y();
    zCur = transform.getOrigin().z();
    tf::Matrix3x3 m(transform.getRotation());
    m.getRPY(rollCur, pitchCur, yawCur);
    Eigen::Affine3f transNow = pcl::getTransformation(xCur, yCur, zCur, rollCur, pitchCur, yawCur);
    
  4. Transform the depth cloud from the world frame into the current sensor frame

    pcl::PointCloud<PointType>::Ptr depth_cloud_local(new pcl::PointCloud<PointType>());
    pcl::transformPointCloud(*depthCloud, *depth_cloud_local, transNow.inverse());
    
  5. Project the undistorted, normalized (z = 1) 2D features onto the unit sphere

    First each 2D feature is normalized onto the unit sphere and then converted to the ROS axis convention: the camera axes (x right, y down, z forward) are remapped so that p.x = z, p.y = -x, p.z = -y.

    pcl::PointCloud<PointType>::Ptr features_3d_sphere(new pcl::PointCloud<PointType>());
    for (int i = 0; i < (int)features_2d.size(); ++i)
    {
        // normalize the 2D feature onto the unit sphere
        Eigen::Vector3f feature_cur(features_2d[i].x, features_2d[i].y, features_2d[i].z); // z always equal to 1
        feature_cur.normalize(); 
        // convert to the ROS axis convention
        PointType p;
        p.x =  feature_cur(2);
        p.y = -feature_cur(0);
        p.z = -feature_cur(1);
        p.intensity = -1; // intensity will later be used to store the depth
        features_3d_sphere->push_back(p);
    }
    
  6. Project the depth cloud onto a range image and filter points that fall into the same bin

    First, points outside the camera's field of view are discarded; then each remaining point's row and column index in the range image is computed. (For example, a point straight ahead of the lidar, p = (10, 0, 0), gives row_angle = 90° and col_angle = 90°, i.e. the centre of the range image.) The computed indices may still fall outside the image bounds and are filtered again; finally, within each bin only the closest point is kept.

    float bin_res = 180.0 / (float)num_bins; // can only cover the (-90 ~ 90) area in front of the lidar
    cv::Mat rangeImage = cv::Mat(num_bins, num_bins, CV_32F, cv::Scalar::all(FLT_MAX));
    
    for (int i = 0; i < (int)depth_cloud_local->size(); ++i)
    {
        PointType p = depth_cloud_local->points[i];
        // filter out points that are not in the camera's view
        if (p.x < 0 || abs(p.y / p.x) > 10 || abs(p.z / p.x) > 10)
            continue;
        // find the row index in the range image
        float row_angle = atan2(p.z, sqrt(p.x * p.x + p.y * p.y)) * 180.0 / M_PI + 90.0; // degrees, bottom -> up, 0 -> 360
        int row_id = round(row_angle / bin_res);
        // find the column index in the range image
        float col_angle = atan2(p.x, p.y) * 180.0 / M_PI; // degrees, left -> right, 0 -> 360
        int col_id = round(col_angle / bin_res);
        // the indices may exceed the image bounds
        if (row_id < 0 || row_id >= num_bins || col_id < 0 || col_id >= num_bins)
            continue;
        // keep only the closest point in each bin
        float dist = pointDistance(p);
        if (dist < rangeImage.at<float>(row_id, col_id))
        {
            rangeImage.at<float>(row_id, col_id) = dist;
            pointsArray[row_id][col_id] = p;
        }
    }
    
  7. Extract the downsampled depth cloud from the range image

    pcl::PointCloud<PointType>::Ptr depth_cloud_local_filter2(new pcl::PointCloud<PointType>());
    for (int i = 0; i < num_bins; ++i)
    {
        for (int j = 0; j < num_bins; ++j)
        {
            if (rangeImage.at<float>(i, j) != FLT_MAX)
                depth_cloud_local_filter2->push_back(pointsArray[i][j]);
        }
    }
    *depth_cloud_local = *depth_cloud_local_filter2;
    publishCloud(&pub_depth_cloud, depth_cloud_local, stamp_cur, "vins_body_ros");
    
  8. Project the depth cloud onto the unit sphere

    pcl::PointCloud<PointType>::Ptr depth_cloud_unit_sphere(new pcl::PointCloud<PointType>());
    for (int i = 0; i < (int)depth_cloud_local->size(); ++i)
    {
        PointType p = depth_cloud_local->points[i];
        float range = pointDistance(p);
        p.x /= range;
        p.y /= range;
        p.z /= range;
        p.intensity = range;
        depth_cloud_unit_sphere->push_back(p);
    }
    if (depth_cloud_unit_sphere->size() < 10)
        return depth_of_point;
    
  9. Build a kd-tree from the spherical depth cloud

    pcl::KdTreeFLANN<PointType>::Ptr kdtree(new pcl::KdTreeFLANN<PointType>());
    kdtree->setInputCloud(depth_cloud_unit_sphere);
    

    A k-d tree is a binary tree whose leaf nodes are k-dimensional points. Every non-leaf node can be viewed as a hyperplane that splits the space into two half-spaces: the left subtree holds the points on one side of the hyperplane and the right subtree the points on the other side. The splitting hyperplane is chosen as follows: each node is associated with one of the k dimensions, perpendicular to the hyperplane. For example, if a node splits along the x axis, all points with x smaller than the split value go into the left subtree and all points with larger x into the right subtree; the hyperplane is then determined by that x value, with the unit vector along the x axis as its normal.
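
    As a toy illustration of the PCL kd-tree interface used here (independent of LVI-SAM; the points are arbitrary), the following builds a tree over a few points and asks for the 3 nearest neighbours of one query point, which is exactly the call pattern used in step 10 below:

    #include <cstdio>
    #include <vector>
    #include <pcl/point_types.h>
    #include <pcl/kdtree/kdtree_flann.h>

    int main()
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>());
        cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 0.0f));
        cloud->push_back(pcl::PointXYZ(0.0f, 1.0f, 0.0f));
        cloud->push_back(pcl::PointXYZ(0.0f, 0.0f, 1.0f));
        cloud->push_back(pcl::PointXYZ(1.0f, 1.0f, 0.0f));

        pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
        kdtree.setInputCloud(cloud);

        pcl::PointXYZ query(0.9f, 0.1f, 0.0f);
        std::vector<int> idx;
        std::vector<float> sq_dist;
        kdtree.nearestKSearch(query, 3, idx, sq_dist); // indices into cloud + squared distances

        for (size_t i = 0; i < idx.size(); ++i)
            std::printf("neighbour %zu: index %d, squared distance %.3f\n", i, idx[i], sq_dist[i]);
        return 0;
    }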

  10. Use the kd-tree to find each feature's depth and return the result
    For each feature direction V on the unit sphere, the three nearest lidar directions are looked up in the kd-tree; multiplying them by their stored ranges (intensity) recovers the 3D points A, B and C, which define a plane with normal N = (A - B) × (B - C). The feature's depth s is where the ray s·V intersects that plane, i.e. s = (N · A) / (N · V) (see the linked stackexchange answer). Further details are in the code comments below.

    vector<int> pointSearchInd;
    vector<float> pointSearchSqDis;
    float dist_sq_threshold = pow(sin(bin_res / 180.0 * M_PI) * 5.0, 2);
    for (int i = 0; i < (int)features_3d_sphere->size(); ++i)
    {
        kdtree->nearestKSearch(features_3d_sphere->points[i], 3, pointSearchInd, pointSearchSqDis);
        if (pointSearchInd.size() == 3 && pointSearchSqDis[2] < dist_sq_threshold)
        {
            float r1 = depth_cloud_unit_sphere->points[pointSearchInd[0]].intensity;
            Eigen::Vector3f A(depth_cloud_unit_sphere->points[pointSearchInd[0]].x * r1,
                              depth_cloud_unit_sphere->points[pointSearchInd[0]].y * r1,
                              depth_cloud_unit_sphere->points[pointSearchInd[0]].z * r1);
    
            float r2 = depth_cloud_unit_sphere->points[pointSearchInd[1]].intensity;
            Eigen::Vector3f B(depth_cloud_unit_sphere->points[pointSearchInd[1]].x * r2,
                              depth_cloud_unit_sphere->points[pointSearchInd[1]].y * r2,
                              depth_cloud_unit_sphere->points[pointSearchInd[1]].z * r2);
    
            float r3 = depth_cloud_unit_sphere->points[pointSearchInd[2]].intensity;
            Eigen::Vector3f C(depth_cloud_unit_sphere->points[pointSearchInd[2]].x * r3,
                              depth_cloud_unit_sphere->points[pointSearchInd[2]].y * r3,
                              depth_cloud_unit_sphere->points[pointSearchInd[2]].z * r3);
    
            // https://math.stackexchange.com/questions/100439/determine-where-a-vector-will-intersect-a-plane
            Eigen::Vector3f V(features_3d_sphere->points[i].x,
                              features_3d_sphere->points[i].y,
                              features_3d_sphere->points[i].z);
    
            Eigen::Vector3f N = (A - B).cross(B - C);
            float s = (N(0) * A(0) + N(1) * A(1) + N(2) * A(2)) 
                / (N(0) * V(0) + N(1) * V(1) + N(2) * V(2));
    
            float min_depth = min(r1, min(r2, r3));
            float max_depth = max(r1, max(r2, r3));
            if (max_depth - min_depth > 2 || s <= 0.5)
            {
                continue;
            } else if (s - max_depth > 0) {
                s = max_depth;
            } else if (s - min_depth < 0) {
                s = min_depth;
            }
            // if a valid depth was found, convert the feature into Cartesian space
            features_3d_sphere->points[i].x *= s;
            features_3d_sphere->points[i].y *= s;
            features_3d_sphere->points[i].z *= s;
            // the depth obtained here is measured along the unit-sphere ray; the VINS estimator needs the feature depth normalized by z (lidar x = camera z)
            features_3d_sphere->points[i].intensity = features_3d_sphere->points[i].x;
        }
    }
    
    // visualize the features in Cartesian 3D space (including features without depth, which keep the default value of 1)
    publishCloud(&pub_depth_feature, features_3d_sphere, stamp_cur, "vins_body_ros");
    
    // update the depth values to be returned
    for (int i = 0; i < (int)features_3d_sphere->size(); ++i)
    {
        if (features_3d_sphere->points[i].intensity > 3.0)
            depth_of_point.values[i] = features_3d_sphere->points[i].intensity;
    }
    
    // project the depth points onto the image for visualization (this is slow!)
    if (pub_depth_image.getNumSubscribers() != 0)
    {
        vector<cv::Point2f> points_2d;
        vector<float> points_distance;
    
        for (int i = 0; i < (int)depth_cloud_local->size(); ++i)
        {
            // transform the point from 3D to 2D
            Eigen::Vector3d p_3d(-depth_cloud_local->points[i].y,
                                 -depth_cloud_local->points[i].z,
                                 depth_cloud_local->points[i].x);
            Eigen::Vector2d p_2d;
            camera_model->spaceToPlane(p_3d, p_2d);
    
            points_2d.push_back(cv::Point2f(p_2d(0), p_2d(1)));
            points_distance.push_back(pointDistance(depth_cloud_local->points[i]));
        }
    
        cv::Mat showImage, circleImage;
        cv::cvtColor(imageCur, showImage, cv::COLOR_GRAY2RGB);
        circleImage = showImage.clone();
        for (int i = 0; i < (int)points_2d.size(); ++i)
        {
            float r, g, b;
            getColor(points_distance[i], 50.0, r, g, b);
            cv::circle(circleImage, points_2d[i], 0, cv::Scalar(r, g, b), 5);
        }
        cv::addWeighted(showImage, 1.0, circleImage, 0.7, 0, showImage); // blend camera image and circle image
    
        cv_bridge::CvImage bridge;
        bridge.image = showImage;
        bridge.encoding = "rgb8";
        sensor_msgs::Image::Ptr imageShowPointer = bridge.toImageMsg();
        imageShowPointer->header.stamp = stamp_cur;
        pub_depth_image.publish(imageShowPointer);
    }
    
    return depth_of_point;
    

5. Packaging and publishing to pub_match

The image is converted to RGB8 with cv_bridge::cvtColor and kept in the ptr instance; after the tracked feature points are drawn onto it, it is published to pub_match.

// publish the features drawn on the image
if (pub_match.getNumSubscribers() != 0)
{
    ptr = cv_bridge::cvtColor(ptr, sensor_msgs::image_encodings::RGB8);
    //cv::Mat stereo_img(ROW * NUM_OF_CAM, COL, CV_8UC3);
    cv::Mat stereo_img = ptr->image;

    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        cv::Mat tmp_img = stereo_img.rowRange(i * ROW, (i + 1) * ROW);
        cv::cvtColor(show_img, tmp_img, CV_GRAY2RGB); // convert the grayscale show_img to RGB into tmp_img

        // show the tracking status; the colour (set via cv::Scalar) encodes how long each feature has been tracked
        for (unsigned int j = 0; j < trackerData[i].cur_pts.size(); j++)
        {
            if (SHOW_TRACK)
            {
                // normalize the track count of this feature
                double len = std::min(1.0, 1.0 * trackerData[i].track_cnt[j] / WINDOW_SIZE);
                cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(255 * (1 - len), 255 * len, 0), 4);
            } else {
                // colour according to whether lidar depth is available
                if(j < depth_of_points.values.size())
                {
                    if (depth_of_points.values[j] > 0)
                        cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(0, 255, 0), 4);
                    else
                        cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(0, 0, 255), 4);
                }
            }
        }
    }

    pub_match.publish(ptr->toImageMsg());
}

Appendix: full code

void img_callback(const sensor_msgs::ImageConstPtr &img_msg)
{
    double cur_img_time = img_msg->header.stamp.toSec();

    if(first_image_flag)
    {
        first_image_flag = false;
        first_image_time = cur_img_time;
        last_image_time = cur_img_time;
        return;
    }
    // detect unstable camera stream
    if (cur_img_time - last_image_time > 1.0 || cur_img_time < last_image_time)
    {
        ROS_WARN("image discontinue! reset the feature tracker!");
        first_image_flag = true; 
        last_image_time = 0;
        pub_count = 1;
        std_msgs::Bool restart_flag;
        restart_flag.data = true;
        pub_restart.publish(restart_flag);
        return;
    }
    last_image_time = cur_img_time;
    // frequency control
    if (round(1.0 * pub_count / (cur_img_time - first_image_time)) <= FREQ)
    {
        PUB_THIS_FRAME = true;
        // reset the frequency control
        if (abs(1.0 * pub_count / (cur_img_time - first_image_time) - FREQ) < 0.01 * FREQ)
        {
            first_image_time = cur_img_time;
            pub_count = 0;
        }
    }
    else
    {
        PUB_THIS_FRAME = false;
    }

    cv_bridge::CvImageConstPtr ptr;
    if (img_msg->encoding == "8UC1")
    {
        sensor_msgs::Image img;
        img.header = img_msg->header;
        img.height = img_msg->height;
        img.width = img_msg->width;
        img.is_bigendian = img_msg->is_bigendian;
        img.step = img_msg->step;
        img.data = img_msg->data;
        img.encoding = "mono8";
        ptr = cv_bridge::toCvCopy(img, sensor_msgs::image_encodings::MONO8);
    }
    else
        ptr = cv_bridge::toCvCopy(img_msg, sensor_msgs::image_encodings::MONO8);

    cv::Mat show_img = ptr->image;
    TicToc t_r;
    for (int i = 0; i < NUM_OF_CAM; i++)
    {
        ROS_DEBUG("processing camera %d", i);
        if (i != 1 || !STEREO_TRACK)
            trackerData[i].readImage(ptr->image.rowRange(ROW * i, ROW * (i + 1)), cur_img_time);
        else
        {
            if (EQUALIZE)
            {
                cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
                clahe->apply(ptr->image.rowRange(ROW * i, ROW * (i + 1)), trackerData[i].cur_img);
            }
            else
                trackerData[i].cur_img = ptr->image.rowRange(ROW * i, ROW * (i + 1));
        }

        #if SHOW_UNDISTORTION
            trackerData[i].showUndistortion("undistrotion_" + std::to_string(i));
        #endif
    }

    for (unsigned int i = 0;; i++)
    {
        bool completed = false;
        for (int j = 0; j < NUM_OF_CAM; j++)
            if (j != 1 || !STEREO_TRACK)
                completed |= trackerData[j].updateID(i);
        if (!completed)
            break;
    }

   if (PUB_THIS_FRAME)
   {
        pub_count++;
        sensor_msgs::PointCloudPtr feature_points(new sensor_msgs::PointCloud);
        sensor_msgs::ChannelFloat32 id_of_point;
        sensor_msgs::ChannelFloat32 u_of_point;
        sensor_msgs::ChannelFloat32 v_of_point;
        sensor_msgs::ChannelFloat32 velocity_x_of_point;
        sensor_msgs::ChannelFloat32 velocity_y_of_point;

        feature_points->header.stamp = img_msg->header.stamp;
        feature_points->header.frame_id = "vins_body";

        vector<set<int>> hash_ids(NUM_OF_CAM);
        for (int i = 0; i < NUM_OF_CAM; i++)
        {
            auto &un_pts = trackerData[i].cur_un_pts;
            auto &cur_pts = trackerData[i].cur_pts;
            auto &ids = trackerData[i].ids;
            auto &pts_velocity = trackerData[i].pts_velocity;
            for (unsigned int j = 0; j < ids.size(); j++)
            {
                if (trackerData[i].track_cnt[j] > 1)
                {
                    int p_id = ids[j];
                    hash_ids[i].insert(p_id);
                    geometry_msgs::Point32 p;
                    p.x = un_pts[j].x;
                    p.y = un_pts[j].y;
                    p.z = 1;

                    feature_points->points.push_back(p);
                    id_of_point.values.push_back(p_id * NUM_OF_CAM + i);
                    u_of_point.values.push_back(cur_pts[j].x);
                    v_of_point.values.push_back(cur_pts[j].y);
                    velocity_x_of_point.values.push_back(pts_velocity[j].x);
                    velocity_y_of_point.values.push_back(pts_velocity[j].y);
                }
            }
        }

        feature_points->channels.push_back(id_of_point);
        feature_points->channels.push_back(u_of_point);
        feature_points->channels.push_back(v_of_point);
        feature_points->channels.push_back(velocity_x_of_point);
        feature_points->channels.push_back(velocity_y_of_point);

        // get feature depth from lidar point cloud
        pcl::PointCloud<PointType>::Ptr depth_cloud_temp(new pcl::PointCloud<PointType>());
        mtx_lidar.lock();
        *depth_cloud_temp = *depthCloud;
        mtx_lidar.unlock();

        sensor_msgs::ChannelFloat32 depth_of_points = depthRegister->get_depth(img_msg->header.stamp, show_img, depth_cloud_temp, trackerData[0].m_camera, feature_points->points);
        feature_points->channels.push_back(depth_of_points);
        
        // skip the first image, since there is no optical-flow velocity on the first image
        if (!init_pub)
        {
            init_pub = 1;
        }
        else
            pub_feature.publish(feature_points);

        // publish features in image
        if (pub_match.getNumSubscribers() != 0)
        {
            ptr = cv_bridge::cvtColor(ptr, sensor_msgs::image_encodings::RGB8);
            //cv::Mat stereo_img(ROW * NUM_OF_CAM, COL, CV_8UC3);
            cv::Mat stereo_img = ptr->image;

            for (int i = 0; i < NUM_OF_CAM; i++)
            {
                cv::Mat tmp_img = stereo_img.rowRange(i * ROW, (i + 1) * ROW);
                cv::cvtColor(show_img, tmp_img, CV_GRAY2RGB);

                for (unsigned int j = 0; j < trackerData[i].cur_pts.size(); j++)
                {
                    if (SHOW_TRACK)
                    {
                        // track count
                        double len = std::min(1.0, 1.0 * trackerData[i].track_cnt[j] / WINDOW_SIZE);
                        cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(255 * (1 - len), 255 * len, 0), 4);
                    } else {
                        // depth 
                        if(j < depth_of_points.values.size())
                        {
                            if (depth_of_points.values[j] > 0)
                                cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(0, 255, 0), 4);
                            else
                                cv::circle(tmp_img, trackerData[i].cur_pts[j], 4, cv::Scalar(0, 0, 255), 4);
                        }
                    }
                }
            }

            pub_match.publish(ptr->toImageMsg());
        }
    }
}

