ROS Experiment Notes: Implementing Visual Odometry with a Stereo Camera


For this project, we need to implement visual odometry based on a stereo camera. It includes feature detection, feature tracking, 3D point generation, and PnP-based pose estimation. The camera model is given (I will write a separate post later about camera model calibration, which I feel is very important).

First, take a look at the demo video:

Stereo-camera visual odometry (demo video)

 

Table of Contents

Introduction

A compilation bug

Writing estimator.cpp

feature detection

feature tracking

3D point generation

PnP based pose estimation


 

Introduction

A compilation bug

When you first build and run, you may get the following error:

/usr/bin/ld: cannot find -ldw

Running the following command should fix it:

sudo apt-get install libdw-dev

 

When compiling applications on Linux, you will often see errors of the form:

/usr/bin/ld: cannot find -lxxx, which means the linker could not find the corresponding library file during compilation. Here, -lxxx refers to linking against the library file libxxx.so.

Most of these libraries can be installed directly via apt-get:

apt-get install libxxx-dev

Reference: https://blog.csdn.net/qq_15304853/article/details/79172422

Writing estimator.cpp

#include "estimator.h"
#include "eigen3/Eigen/Geometry"
#include "opencv2/core/eigen.hpp"

// This is the main implementation file.
// The file "config/realsense_1/realsense_n3_unsync.yaml" also contains parameters that can be used to tune the VO performance.
// The data is recorded in "bag/realsense_1.bag".
// The whole node is launched via "launch/stereo_vo_bag.launch":
// roslaunch stereo_vo stereo_vo_bag.launch


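// Draw the given 2D feature points as small circles on a copy of the image and display it in a named window.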
void drawImage(const cv::Mat& img, const vector<cv::Point2f>& pts, string name) {
  auto draw = img.clone();
  for (unsigned int i = 0; i < pts.size(); i++) {
    cv::circle(draw, pts[i], 2, cv::Scalar(0, 255, 0), -1, 8);
  }
  cv::imshow(name, draw);
  cv::waitKey(1);
}


Estimator::Estimator() {
  ROS_INFO("Estimator init begins.");
  prev_frame.frame_time = ros::Time(0.0);
  prev_frame.w_t_c = Eigen::Vector3d(0, 0, 0);
  prev_frame.w_R_c = Eigen::Matrix3d::Identity();
  fail_cnt = 0;
  init_finish = false;  // initialization not finished yet
}

void Estimator::reset() {
  ROS_ERROR("Lost, reset!");
  key_frame = prev_frame;
  fail_cnt = 0;
  init_finish = false;  // after a reset, mark the estimator as uninitialized again
}

void Estimator::setParameter() {
  for (int i = 0; i < 2; i++) {
    tic[i] = TIC[i];
    ric[i] = RIC[i];
    cout << " extrinsic cam " << i << endl << ric[i] << endl << tic[i].transpose() << endl;
  }

  prev_frame.frame_time = ros::Time(0.0);
  prev_frame.w_t_c = tic[0];
  prev_frame.w_R_c = ric[0];
  key_frame = prev_frame;

  readIntrinsicParameter(CAM_NAMES);

  // transform between left and right camera
  Matrix4d Tl, Tr;
  Tl.setIdentity();
  Tl.block(0, 0, 3, 3) = ric[0];
  Tl.block(0, 3, 3, 1) = tic[0];
  Tr.setIdentity();
  Tr.block(0, 0, 3, 3) = ric[1];
  Tr.block(0, 3, 3, 1) = tic[1];
  Tlr = Tl.inverse() * Tr;
}


void Estimator::readIntrinsicParameter(const vector<string>& calib_file) {
  for (size_t i = 0; i < calib_file.size(); i++) {
    ROS_INFO("reading parameter of camera %s", calib_file[i].c_str());
    camodocal::CameraPtr camera =
        camodocal::CameraFactory::instance()->generateCameraFromYamlFile(calib_file[i]);
    m_camera.push_back(camera);  // the left camera is added first, then the right camera
  }
}

// This is the entry function.
// The whole pipeline is carried out here (the other functions can be read later).
bool Estimator::inputImage(ros::Time time_stamp, const cv::Mat& _img, const cv::Mat& _img1) {
  // Inputs: the timestamp and two images; _img is the left image, _img1 is the right image.

  if(fail_cnt > 20){
    reset();
  }
  std::cout << "receive new image===========================" << std::endl;

  Estimator::frame cur_frame;
  // The frame struct contains: the timestamp, the left image, the 2D features of the left image,
  // the 3D feature points of the current frame, and the current camera pose (R and t) in the world frame.

  cur_frame.frame_time = time_stamp;
  cur_frame.img = _img;  // left image of the current frame; the right image is only used to estimate the 3D feature points

  // cv::imshow("img", _img);
  // cv::waitKey(1);

  vector<cv::Point2f> left_pts_2d, right_pts_2d;
  vector<cv::Point3f> key_pts_3d;  // 3D feature points of the key frame

  // c_R_k and c_t_k transform points from the key frame into the current frame,
  // i.e. they describe the pose change relative to the key frame.
  c_R_k.setIdentity();
  c_t_k.setZero();

  if (init_finish) {  // only if already initialized, i.e. a key frame exists
    // To do: match features between the key frame and the current left image
    std::vector<cv::Point2f> flowed_cur_2d;     // 2D features in the current frame
    std::vector<cv::Point3f> coores_3d_points;  // 3D positions of the corresponding key-frame points
    trackFeatureBetweenFrames(key_frame, cur_frame.img, coores_3d_points, flowed_cur_2d);
    // Given the key frame and the current image, this yields the 2D features of the current
    // frame together with the 3D positions of the corresponding key-frame feature points.

    // To do: undistort the points of the left image and compute relative motion with the key frame.
    std::vector<cv::Point2f> undistorted_points;
    undistorted_points = undistortedPts(flowed_cur_2d, m_camera[0]);

    // Estimate the relative transform between the key frame and the current frame.
    estimateTBetweenFrames(coores_3d_points, undistorted_points, c_R_k, c_t_k);
    // Compose with the key-frame pose to obtain the current pose in the world frame:
    cur_frame.w_R_c = key_frame.w_R_c * c_R_k.transpose();
    cur_frame.w_t_c = key_frame.w_R_c * (-(c_R_k.transpose() * c_t_k)) + key_frame.w_t_c;
  }
  else {
    cur_frame.w_R_c = c_R_k;
    cur_frame.w_t_c = c_t_k;
  }


  // To do: extract new features for the current frame.
  // Detect features in the left image and track them into the right image.
  std::vector<cv::Point2f> left_features, right_features;
  trackFeatureLeftRight(_img, _img1, left_features, right_features);


  // To do: compute the camera pose of the current frame.


  // To do: undistort the 2d points of the current frame and generate the corresponding 3d points.
  // Lift the tracked points to undistorted (normalized) coordinates using the camera models.
  std::vector<cv::Point2f> undistorted_points_left, undistorted_points_right;
  undistorted_points_left = undistortedPts(left_features, m_camera[0]);
  undistorted_points_right = undistortedPts(right_features, m_camera[1]);

  // Triangulate the stereo matches to obtain the 3D coordinates of the feature points.
  // Note: generate3dPoints also prunes left_features so that it stays aligned with cur_frame.xyz.
  generate3dPoints(undistorted_points_left, undistorted_points_right, cur_frame.xyz, left_features);
  cur_frame.uv = left_features;  // store the pruned pixel coordinates; forgetting this step is a really nasty pitfall

  // Optional flow-back refinement (currently disabled): track the current features back to the
  // key frame, re-estimate the relative pose, and average it with the forward estimate.
  // if(init_finish && FLOW_BACK){
  //   std::vector<cv::Point2f> flow_back_2d;
  //   std::vector<cv::Point3f> new_3d_points;
  //   trackFeatureBetweenFrames(cur_frame, key_frame.img, new_3d_points, flow_back_2d);

  //   std::vector<cv::Point2f> undistorted_points;
  //   undistorted_points = undistortedPts(flow_back_2d, m_camera[0]);
  //   Eigen::Matrix3d R_12;
  //   Eigen::Vector3d t_12;
  //   estimateTBetweenFrames(new_3d_points, undistorted_points, R_12, t_12);

  //   Eigen::Matrix3d rotation_add = R_12 * c_R_k.transpose();
  //   Eigen::AngleAxisd angleaxis_add(rotation_add);
  //   Eigen::AngleAxisd angleaxis_mean(0.5 * angleaxis_add.angle(), angleaxis_add.axis());
  //   Eigen::Matrix3d rotation_mean = angleaxis_mean.toRotationMatrix();
  //   Eigen::Vector3d translation_mean = 0.5 * (t_12 -(rotation_mean * c_t_k));

  //   cur_frame.w_t_c = key_frame.w_R_c * translation_mean + key_frame.w_t_c;
  //   cur_frame.w_R_c =  key_frame.w_R_c * rotation_mean;
  // }

  key_pts_3d = key_frame.xyz;  // 3D points of the current key frame
  // Change the key frame when any of the following conditions holds:
  //   - the translation relative to the key frame (norm) exceeds a threshold,
  //   - the rotation relative to the key frame exceeds a threshold,
  //   - the key frame has fewer feature points than a threshold,
  //   - the estimator is not initialized yet (setting the first key frame is the initialization).
  if(c_t_k.norm() > TRANSLATION_THRESHOLD || acos(Quaterniond(c_R_k).w()) * 2.0 > ROTATION_THRESHOLD || key_pts_3d.size() < FEATURE_THRESHOLD || !init_finish){
    key_frame = cur_frame;  // promote the current frame to the new key frame (also done at initialization)
    ROS_INFO("Change key frame to current frame.");
  }


  prev_frame = cur_frame;  // the current frame now becomes the previous frame

  // The states of the current frame are updated in the updateLatestStates function.
  // The position and the orientation are transformed into the world frame (the IMU frame at initialization).
  updateLatestStates(cur_frame);

  init_finish = true;

  return true;
}

//***********************Feature detection***************************//
// Feature extraction using OpenCV's goodFeaturesToTrack (Shi-Tomasi corners).
void Estimator::extractNewFeatures(const cv::Mat& img, vector<cv::Point2f>& uv) {

  // To do: extract the new 2d features of img and store them in uv.
  // Arguments: input image, output corners, maximum number of corners, quality level,
  // minimum distance between corners, mask (points where mask == 0 are ignored), block size.
  cv::goodFeaturesToTrack(img, uv, MAX_CNT, 0.03, MIN_DIST, cv::noArray(), 5);

}

//***********************Feature Tracking***************************//
// It finds features in the right camera image that correspond to those in the left camera image,
// i.e. feature tracking from the left image to the right image.
bool Estimator::trackFeatureLeftRight(const cv::Mat& _img, const cv::Mat& _img1,
                                         vector<cv::Point2f>& left_pts, vector<cv::Point2f>& right_pts) {

  // To do: track features left to right frame and obtain corresponding 2D points.
  vector<cv::Point2f> left_features, right_features;
  // Extract new features in the left image.
  extractNewFeatures(_img, left_features);
  // Then track them into the right image with LK optical flow.
  vector<unsigned char> status;  // per-point status: 1 if the flow for the feature was found, 0 otherwise
  vector<float> error;           // per-point tracking error; undefined when the flow was not found (check status)
  cv::calcOpticalFlowPyrLK(_img, _img1, left_features, right_features, status, error);

  // Keep only the successfully tracked feature pairs.
  left_pts.clear();
  right_pts.clear();
  for (uint32_t i = 0; i < left_features.size(); i++){
    if (status[i] == 1){
      left_pts.push_back(left_features[i]);
      right_pts.push_back(right_features[i]);
    }
  }

  if (int(left_pts.size()) < MIN_CNT) return false;

  return true;
}

// It finds features in the current left image corresponding to those in the key frame,
// i.e. feature tracking between the key frame and the current (left) frame, using LK optical flow.
bool Estimator::trackFeatureBetweenFrames(const Estimator::frame& keyframe, const cv::Mat& cur_img,
                                             vector<cv::Point3f>& key_pts_3d,
                                             vector<cv::Point2f>& cur_pts_2d) {
  // The following vectors are outputs, so clear them first just in case.
  key_pts_3d.clear();
  cur_pts_2d.clear();
  // Print the number of features in the key frame.
  std::cout << "the number of feature points in the key frame is: " << keyframe.uv.size() << std::endl;

  // To do: track features between the key frame and the current frame to obtain corresponding 2D, 3D points.
  vector<cv::Point2f> key_pts_2d = keyframe.uv;  // 2D feature points of the key frame

  vector<cv::Point2f> new_features;  // computed new positions of the key-frame features in the current image
  vector<unsigned char> status;      // per-point status: 1 if the flow for the feature was found, 0 otherwise
  vector<float> error;               // per-point tracking error; undefined when the flow was not found (check status)
  // Track the key-frame features into the current frame with LK optical flow;
  // cv::Size(15, 15) is the search window size at each pyramid level.
  cv::calcOpticalFlowPyrLK(keyframe.img, cur_img, key_pts_2d, new_features, status, error, cv::Size(15, 15));

  // Keep only the successfully tracked points:
  // the 2D positions in the current frame and the corresponding 3D points of the key frame.
  for (uint32_t i = 0; i < key_pts_2d.size(); i++){
    if (status[i] == 1){
      key_pts_3d.push_back(keyframe.xyz[i]);
      cur_pts_2d.push_back(new_features[i]);
    }
  }

  if(SHOW_FEATURE){
    drawImage(prev_frame.img, key_pts_2d, "key frame");
    drawImage(cur_img, cur_pts_2d, "current frame");
  }

  // Fail if too few features survive the tracking.
  if (int(cur_pts_2d.size()) < MIN_CNT) {  // MIN_CNT is set in config/realsense_1/realsense_n3_unsync.yaml
    return false;
  }

  return true;
}


//***********************3D Point Generation***************************//
// Given the matched feature pairs from trackFeatureLeftRight, the 3D positions of the features can be computed.
// This function takes the feature points of the left and right image and, for each undistorted pair,
// computes the 3D position expressed in the left camera frame.
void Estimator::generate3dPoints(const vector<cv::Point2f>& left_pts,
                                 const vector<cv::Point2f>& right_pts, 
                                 vector<cv::Point3f>& cur_pts_3d,
                                 vector<cv::Point2f>& cur_pts_2d) {

  Eigen::Matrix<double, 3, 4> P1, P2;

  P1 << 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0;
  P2.block(0,0,3,3) = (Tlr.block(0,0,3,3).transpose());
  P2.block(0,3,3,1) = -P2.block(0,0,3,3) * Tlr.block(0,3,3,1);

  vector<uchar> status;

  for (unsigned int i = 0; i < left_pts.size(); ++i) {
    Vector2d pl(left_pts[i].x, left_pts[i].y);
    Vector2d pr(right_pts[i].x, right_pts[i].y);
    Vector3d pt3;
    triangulatePoint(P1, P2, pl, pr, pt3);

    if (pt3[2] > 0) {
      cur_pts_3d.push_back(cv::Point3f(pt3[0], pt3[1], pt3[2]));
      status.push_back(1);
    } else {
      status.push_back(0);
    }
  }

  reduceVector<cv::Point2f>(cur_pts_2d, status);
}


//***********************PnP-based Relative Pose Estimation***************************//
// Given the 3D feature points of the key frame and the corresponding 2D feature points of the current frame,
// estimateTBetweenFrames computes the relative transform between the key frame and the current frame.
// This can be done with solvePnP or solvePnPRansac in OpenCV. Note that the relative transform is expressed in the camera's frame.
bool Estimator::estimateTBetweenFrames(vector<cv::Point3f>& key_pts_3d,
                                          vector<cv::Point2f>& cur_pts_2d, Matrix3d& R, Vector3d& t) {

  // To do: calculate relative pose between the key frame and the current frame using the matched 2d-3d points

  // PnP needs at least 4 correspondences, so bail out early otherwise.
  if(int(cur_pts_2d.size()) < 4){
    std::cout << "the number of feature points for pose estimation is not enough" << std::endl;
    return false;
  }

  // Print the number of correspondences.
  std::cout << "3D points: " << key_pts_3d.size() << ", 2D points: " << cur_pts_2d.size() << std::endl;

  // Solve the PnP problem with solvePnPRansac.
  // The 3D points are expressed in the key-frame camera frame, and the 2D points are already
  // undistorted and normalized, so the camera matrix is the identity and the distortion is zero.
  cv::Mat pseudo_camera = cv::Mat::eye(3, 3, CV_32F);  // 3x3 identity matrix as the camera matrix
  vector<float> distCoeffs = {0, 0, 0, 0};             // no distortion for normalized points
  cv::Mat rotation;  // output rotation vector (axis-angle)
  cv::Mat tvec;      // output translation vector
  // useExtrinsicGuess = false: do not start the iterative solver from a provided initial guess.
  // iterationsCount = 200: initial number of RANSAC iterations (adapted internally), so a large value is fine.
  // reprojectionError = 0.01: RANSAC inlier threshold, here in normalized image coordinates.
  cv::solvePnPRansac(key_pts_3d, cur_pts_2d, pseudo_camera, distCoeffs, rotation, tvec, false, 200, 0.01);

  // Convert the OpenCV outputs into Eigen's Matrix3d R and Vector3d t.
  Vector3d rotation_eigen;
  cv2eigen(rotation, rotation_eigen);    // rotation vector (axis-angle) as an Eigen vector
  double theta = rotation_eigen.norm();  // rotation angle
  Vector3d axis = rotation_eigen / theta;
  R = AngleAxisd(theta, axis);
  cv2eigen(tvec, t);

  return true;
}


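// Returns true if the point lies inside the image, at least BORDER_SIZE pixels away from the border.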
bool Estimator::inBorder(const cv::Point2f& pt, const int& row, const int& col) {
  const int BORDER_SIZE = 1;
  int img_x = cvRound(pt.x);
  int img_y = cvRound(pt.y);
  return BORDER_SIZE <= img_x && img_x < col - BORDER_SIZE && BORDER_SIZE <= img_y &&
      img_y < row - BORDER_SIZE;
}


double Estimator::distance(cv::Point2f pt1, cv::Point2f pt2) {
  double dx = pt1.x - pt2.x;
  double dy = pt1.y - pt2.y;
  return sqrt(dx * dx + dy * dy);
}


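// Compacts the vector in place, keeping only the entries whose status flag is non-zero.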
template <typename Derived>
void Estimator::reduceVector(vector<Derived>& v, vector<uchar> status) {
  int j = 0;
  for (int i = 0; i < int(v.size()); i++)
    if (status[i]) v[j++] = v[i];
  v.resize(j);
}


void Estimator::updateLatestStates(frame &latest_frame) {
  
  // To do: update the latest_time, latest_pointcloud, latest_P, latest_Q, latest_rel_P and latest_rel_Q.
  // latest_P and latest_Q should be the pose of the body (IMU) in the world frame.
  // latest_rel_P and latest_rel_Q should be the relative pose of the current body frame relative to the body frame of the key frame.
  // latest_pointcloud should be in the current camera frame.

  // latest_time=latest_frame.frame_time;
  Eigen::Matrix3d rotation_init;
  rotation_init << 0, 0, 1, -1, 0, 0, 0, -1, 0;
  latest_Q =  latest_frame.w_R_c;
  latest_P =  rotation_init * latest_frame.w_t_c;
  latest_pointcloud = latest_frame.xyz;  // 3D positions of the feature points (in the current camera frame)
  // estimateTBetweenFrames(key_frame.xyz,latest_frame.uv,c_R_k,c_t_k);
  // latest_rel_P=rotation_init * c_t_k;
  // latest_rel_Q=c_R_k;
  
  // cout << latest_P << endl;
  // cout << latest_Q.toRotationMatrix() << endl;
}


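// Linear (DLT) triangulation: each normalized observation contributes two linear constraints on the
// homogeneous 3D point; the solution is the last right singular vector of the stacked 4x4 system,
// dehomogenized by its fourth component.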
void Estimator::triangulatePoint(Eigen::Matrix<double, 3, 4>& Pose0, Eigen::Matrix<double, 3, 4>& Pose1,
                                 Eigen::Vector2d& point0, Eigen::Vector2d& point1,
                                 Eigen::Vector3d& point_3d) {
  Eigen::Matrix4d design_matrix = Eigen::Matrix4d::Zero();
  design_matrix.row(0) = point0[0] * Pose0.row(2) - Pose0.row(0);
  design_matrix.row(1) = point0[1] * Pose0.row(2) - Pose0.row(1);
  design_matrix.row(2) = point1[0] * Pose1.row(2) - Pose1.row(0);
  design_matrix.row(3) = point1[1] * Pose1.row(2) - Pose1.row(1);
  Eigen::Vector4d triangulated_point;
  triangulated_point = design_matrix.jacobiSvd(Eigen::ComputeFullV).matrixV().rightCols<1>();
  point_3d(0) = triangulated_point(0) / triangulated_point(3);
  point_3d(1) = triangulated_point(1) / triangulated_point(3);
  point_3d(2) = triangulated_point(2) / triangulated_point(3);
}


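// Projects the key-frame 3D point with (R, t) into the normalized image plane of the current frame
// and returns the distance to the observed 2D point.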
double Estimator::reprojectionError(Matrix3d &R, Vector3d &t, cv::Point3f &key_pts_3d, cv::Point2f &cur_pts_2d){
    Vector3d pt1(key_pts_3d.x, key_pts_3d.y, key_pts_3d.z);
    Vector3d pt2 = R * pt1 + t;
    pt2 = pt2 / pt2[2];
    return sqrt(pow(pt2[0] - cur_pts_2d.x, 2) + pow(pt2[1] - cur_pts_2d.y, 2));
}

// Use the camera model to lift pixel coordinates to undistorted (normalized) image coordinates.
vector<cv::Point2f> Estimator::undistortedPts(vector<cv::Point2f>& pts, camodocal::CameraPtr cam) {
  vector<cv::Point2f> un_pts;
  for (unsigned int i = 0; i < pts.size(); i++) {
    Eigen::Vector2d a(pts[i].x, pts[i].y);
    Eigen::Vector3d b;
    cam->liftProjective(a, b);
    un_pts.push_back(cv::Point2f(b.x() / b.z(), b.y() / b.z()));
  }
  return un_pts;
}

The result is shown in the figure below.

The main modules are introduced below.

 

feature detection

	void cv::goodFeaturesToTrack(
		cv::InputArray image, // input image (CV_8UC1 or CV_32FC1)
		cv::OutputArray corners, // output vector of corners
		int maxCorners, // maximum number of corners to return
		double qualityLevel, // quality level (a positive value below 1.0, typically 0.01-0.1)
		double minDistance, // minimum distance; corners closer than this are discarded
		cv::InputArray mask = noArray(), // points where mask == 0 are ignored
		int blockSize = 3, // size of the neighborhood used for the corner computation
		bool useHarrisDetector = false, // false = Shi-Tomasi metric
		double k = 0.04 // free parameter of the Harris detector
	);
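For illustration, here is a minimal sketch (not the project's code) of calling goodFeaturesToTrack with a mask so that new corners are not detected too close to features that are already being tracked. The helper name detectNewCorners, the mask radius, and the quality level are assumptions for this example only:

#include <opencv2/imgproc.hpp>
#include <vector>

// Hypothetical helper: detect up to max_cnt Shi-Tomasi corners in a grayscale image,
// ignoring a circular region around every point that is already tracked.
std::vector<cv::Point2f> detectNewCorners(const cv::Mat& gray,
                                          const std::vector<cv::Point2f>& tracked,
                                          int max_cnt, double min_dist) {
  cv::Mat mask(gray.size(), CV_8UC1, cv::Scalar(255));
  for (const auto& p : tracked)
    cv::circle(mask, p, static_cast<int>(min_dist), cv::Scalar(0), -1);  // mask == 0 is ignored

  std::vector<cv::Point2f> corners;
  cv::goodFeaturesToTrack(gray, corners, max_cnt, /*qualityLevel=*/0.03, min_dist, mask);
  return corners;
}

Masking out existing features this way keeps the detections spread over the image, which generally helps the later PnP step.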

 

feature tracking

calcOpticalFlowPyrLK()
void cv::calcOpticalFlowPyrLK 	( 	InputArray  	prevImg,
		InputArray  	nextImg,
		InputArray  	prevPts,
		InputOutputArray  	nextPts,
		OutputArray  	status,
		OutputArray  	err,
		Size  	winSize = Size(21, 21),
		int  	maxLevel = 3,
		TermCriteria  	criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01),
		int  	flags = 0,
		double  	minEigThreshold = 1e-4 
	)

/****************************Parameters:****************************************

    prevImg : first 8-bit input image, or a pyramid constructed by buildOpticalFlowPyramid
    nextImg : second input image or pyramid of the same size and type as prevImg
    prevPts : vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers
    nextPts : output vector of 2D points (with single-precision floating-point coordinates) containing the computed new positions of the input features in the second image; when the OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as the input
    status : output status vector (unsigned char); each element is set to 1 if the flow for the corresponding feature was found, otherwise it is set to 0
    err : output vector of errors; each element is set to the error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow was not found, the error is undefined (use the status parameter to find such cases)
    winSize : size of the search window at each pyramid level
    maxLevel : 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level); if set to 1, two levels are used, and so on; if pyramids are passed as input, the algorithm uses as many levels as the pyramids have, but no more than maxLevel
    criteria : parameter specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount, or when the search window moves by less than criteria.epsilon)
    flags : operation flags:
        OPTFLOW_USE_INITIAL_FLOW uses the initial estimates stored in nextPts; if the flag is not set, prevPts is copied to nextPts and is considered the initial estimate
        OPTFLOW_LK_GET_MIN_EIGENVALS uses the minimum eigenvalue as the error measure (see the minEigThreshold description); if the flag is not set, the L1 distance between patches around the original and the moved point, divided by the number of pixels in the window, is used as the error measure
    minEigThreshold : the algorithm calculates the minimum eigenvalue of the 2x2 normal matrix of the optical flow equations, divided by the number of pixels in the window; if this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed, which allows bad points to be removed and gives a performance boost
*/
// This function implements the sparse iterative version of the Lucas-Kanade optical flow in pyramids.
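The status flags alone can still let bad matches through. A common refinement, and the idea behind the commented-out FLOW_BACK block in inputImage, is a forward-backward consistency check. The sketch below is only an illustration: the helper name trackWithBackCheck and the 0.5-pixel threshold are assumptions, not part of the project code:

#include <opencv2/video/tracking.hpp>
#include <cmath>
#include <vector>

// Hypothetical helper: track pts0 from img0 to img1, then track the result back to img0,
// and keep a pair only if the point returns close to where it started.
void trackWithBackCheck(const cv::Mat& img0, const cv::Mat& img1,
                        std::vector<cv::Point2f>& pts0, std::vector<cv::Point2f>& pts1,
                        float max_back_error = 0.5f) {
  if (pts0.empty()) return;

  std::vector<unsigned char> status_fwd, status_bwd;
  std::vector<float> err;
  cv::calcOpticalFlowPyrLK(img0, img1, pts0, pts1, status_fwd, err, cv::Size(15, 15));

  std::vector<cv::Point2f> pts0_back;
  cv::calcOpticalFlowPyrLK(img1, img0, pts1, pts0_back, status_bwd, err, cv::Size(15, 15));

  std::vector<cv::Point2f> good0, good1;
  for (size_t i = 0; i < pts0.size(); ++i) {
    float dx = pts0[i].x - pts0_back[i].x;
    float dy = pts0[i].y - pts0_back[i].y;
    if (status_fwd[i] && status_bwd[i] && std::sqrt(dx * dx + dy * dy) < max_back_error) {
      good0.push_back(pts0[i]);
      good1.push_back(pts1[i]);
    }
  }
  pts0.swap(good0);
  pts1.swap(good1);
}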

 

3D point generation

Given the matched feature pairs in the left and right image from the function trackFeatureLeftRight, the 3D feature points are able to be computed. The function generate3dPoints is already provided, which computes the 3D position represented in the left camera frame for each undistorted pair of feature points.
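For reference, the provided triangulatePoint solves the standard linear (DLT) triangulation system. With normalized observations $(u, v)$ in the left image and $(u', v')$ in the right image, and projection matrices $P_1 = [I \mid 0]$ and $P_2$ built from Tlr, the homogeneous point $X$ satisfies

$$
A X = 0, \qquad
A = \begin{bmatrix}
u\,P_1^{(3)} - P_1^{(1)} \\
v\,P_1^{(3)} - P_1^{(2)} \\
u'\,P_2^{(3)} - P_2^{(1)} \\
v'\,P_2^{(3)} - P_2^{(2)}
\end{bmatrix},
$$

where $P^{(i)}$ denotes the $i$-th row of the projection matrix. $X$ is taken as the right singular vector of $A$ associated with the smallest singular value (the last column of $V$ in the SVD) and then dehomogenized by its fourth component, which is exactly what triangulatePoint does; generate3dPoints then keeps only points with positive depth ($z > 0$).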

 

PnP based pose estimation

About the solvePnPRansac pose estimation algorithm, see:

https://blog.csdn.net/xuelangwin/article/details/80847337

/*  Annotated notes (from the referenced blog):
*   Function: solve the PnP problem in a RANSAC scheme
*
*   Parameters:
*   [in]    _opoints                object points in the world coordinate frame; float or double
*   [in]    _ipoints                corresponding image points on the camera image plane; float or double
*   [in]    _cameraMatrix           camera intrinsic matrix
*   [in]    _distCoeffs             camera distortion coefficients
*   [out]   _rvec                   output rotation vector (axis-angle)
*   [out]   _tvec                   output translation vector
*   [in]    useExtrinsicGuess       if the iterative PnP solver is used, the initial value can be a provided guess (true) or the closed-form solution (false)
*   [in]    iterationsCount         number of RANSAC iterations; this is only an initial value and is reduced internally according to the estimated outlier probability, so a large value can be given at the start
*   [in]    reprojectionError       RANSAC distance threshold separating inliers from outliers; it can be derived from the estimated inlier probability and the per-point standard deviation (assuming Gaussian errors)
*   [in]    confidence              related to the number of samples (iterations); it is the probability that at least one sample of s points drawn from the n points consists only of inliers
*   [out]   _inliers                indices of the inliers, returned in matrix form
*   [in]    flags                   method used to compute the model on the minimal subset:
*                                                 SOLVEPNP_ITERATIVE (EPnP on the minimal set, followed by an iterative refinement on the selected inliers);
*                                                 SOLVEPNP_P3P (P3P only on the minimal set, then EPnP on the selected inliers)
*                                                 SOLVEPNP_AP3P (AP3P only on the minimal set, then EPnP on the selected inliers)
*                                                 SOLVEPNP_EPNP (EPnP both on the minimal set and on the selected inliers)
*    Return value:
*         returns true on success, false on failure
*/
bool solvePnPRansac(InputArray _opoints, InputArray _ipoints,
	InputArray _cameraMatrix, InputArray _distCoeffs,
	OutputArray _rvec, OutputArray _tvec, bool useExtrinsicGuess,
	int iterationsCount, float reprojectionError, double confidence,
	OutputArray _inliers, int flags)
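Because the 2D points fed to the solver are already undistorted and normalized, the camera matrix can simply be the identity, as in estimateTBetweenFrames above. As a small sketch under those assumptions (the helper name solveRelativePose and the thresholds are illustrative, not the project's exact code), the returned rotation vector can also be converted to an Eigen rotation matrix with cv::Rodrigues instead of the manual axis-angle construction:

#include <Eigen/Dense>
#include <opencv2/core/eigen.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Hypothetical helper: RANSAC PnP on normalized (undistorted) 2D points, returning Eigen types.
bool solveRelativePose(const std::vector<cv::Point3f>& pts_3d,
                       const std::vector<cv::Point2f>& pts_2d_normalized,
                       Eigen::Matrix3d& R, Eigen::Vector3d& t) {
  if (pts_2d_normalized.size() < 4) return false;  // PnP needs at least 4 correspondences

  cv::Mat K = cv::Mat::eye(3, 3, CV_64F);          // identity intrinsics for normalized points
  cv::Mat rvec, tvec, R_cv;
  bool ok = cv::solvePnPRansac(pts_3d, pts_2d_normalized, K, cv::noArray(),
                               rvec, tvec, false, 200, 0.01);
  if (!ok) return false;

  cv::Rodrigues(rvec, R_cv);                       // rotation vector -> 3x3 rotation matrix
  cv::cv2eigen(R_cv, R);
  cv::cv2eigen(tvec, t);
  return true;
}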

 

 
