Analyzing the OpenCV Sample: real_time_tracking (6)

We continue with Step 2 of the program:

    // -- Step 2: Find out the 2D/3D correspondences

    vector<Point3f> list_points3d_model_match; // container for the model 3D coordinates found in the scene
    vector<Point2f> list_points2d_scene_match; // container for the model 2D coordinates found in the scene

    for(unsigned int match_index = 0; match_index < good_matches.size(); ++match_index)
    {
      Point3f point3d_model = list_points3d_model[ good_matches[match_index].trainIdx ];  // 3D point from model
      Point2f point2d_scene = keypoints_scene[ good_matches[match_index].queryIdx ].pt; // 2D point from the scene
      list_points3d_model_match.push_back(point3d_model);         // add 3D point
      list_points2d_scene_match.push_back(point2d_scene);         // add 2D point
    }

    // Draw outliers
    draw2DPoints(frame_vis, list_points2d_scene_match, red);

    Mat inliers_idx;
    vector<Point2f> list_points2d_inliers;

    if(good_matches.size() > 0) // None matches, then RANSAC crashes
    {

      // -- Step 3: Estimate the pose using RANSAC approach
    
    }

The most confusing part of this code is the pair of assignments inside the for loop, but if we go through it patiently, step by step, it becomes clear:

(1) The array list_points3d_model[] holds the 3D coordinates of the keypoints registered on the box model; good_matches[match_index].trainIdx is the index of the matched point in that 3D-coordinate (train) array.

(2) The array keypoints_scene[] holds the keypoints detected in the current scene frame; good_matches[match_index].queryIdx is the index of the matched keypoint in that scene (query) array, and its .pt member gives the pixel coordinates.

(3) Note that the elements of the vector good_matches are of type DMatch, OpenCV's built-in match class; its two relevant data members are trainIdx and queryIdx (see the sketch below).
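To make the queryIdx/trainIdx convention concrete, here is a minimal sketch of how such matches could be produced. It is not the sample's own matcher (the sample wraps matching in its RobustMatcher class with a ratio test), and the function name matchSceneToModel and the brute-force Hamming matcher are just assumptions for the illustration; the only point is which descriptor set each index of a DMatch refers to.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> matchSceneToModel(const cv::Mat &descriptors_scene,  // query set
                                          const cv::Mat &descriptors_model)  // train set
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);   // Hamming norm, e.g. for ORB descriptors
    std::vector<cv::DMatch> matches;
    // First argument = query descriptors, second = train descriptors, so for every
    // resulting DMatch: queryIdx indexes keypoints_scene, trainIdx indexes the model.
    matcher.match(descriptors_scene, descriptors_model, matches);
    return matches;
}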

Actually, I was puzzled at first by the comment "Draw outliers": aren't these already matched points? The reason is that all matched points are drawn here in red; later the RANSAC inliers are drawn over them in blue, so the points left red in the visualization are effectively the outliers.

The draw2DPoints() helper itself is worth a quick look:

// Draw only the 2D points
void draw2DPoints(cv::Mat image, std::vector<cv::Point2f> &list_points, cv::Scalar color)
{
  for( size_t i = 0; i < list_points.size(); i++)
  {
    cv::Point2f point_2d = list_points[i];
    // Draw Selected points
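    // radius and lineType are drawing constants defined elsewhere in the sample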
    cv::circle(image, point_2d, radius, color, -1, lineType );
  }
}

Moving on to Step 3, the position and orientation (pose) of the box are estimated with the RANSAC approach:

    Mat inliers_idx;
    vector<Point2f> list_points2d_inliers;

    if(good_matches.size() > 0) // None matches, then RANSAC crashes
    {

      // -- Step 3: Estimate the pose using RANSAC approach
      pnp_detection.estimatePoseRANSAC( list_points3d_model_match, list_points2d_scene_match,
                                        pnpMethod, inliers_idx,
                                        iterationsCount, reprojectionError, confidence );

     //..............
    }

First, let's get acquainted with the estimatePoseRANSAC() function and its arguments (a sketch of its internals follows the list below):

list_points3d_model_match and list_points2d_scene_match are the matched 3D model points and 2D scene points;

pnpMethod = SOLVEPNP_ITERATIVE means the iterative PnP algorithm is used here;

iterationsCount is the maximum number of RANSAC iterations;

reprojectionError is the maximum reprojection error allowed for a correspondence to be counted as an inlier;

confidence is the confidence level of the RANSAC estimate;

inliers_idx receives the indices of the inlier correspondences.
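Under the hood, the sample's PnPProblem::estimatePoseRANSAC() is essentially a wrapper around cv::solvePnPRansac(). The following sketch is written as a free function so that it stands alone; in the sample the equivalent logic lives in the PnPProblem class, which stores the intrinsic matrix and the recovered R and t as members, so treat this body as an approximation rather than the sample's verbatim code.

#include <opencv2/calib3d.hpp>
#include <vector>

// Simplified sketch of what a solvePnPRansac wrapper like the sample's
// PnPProblem::estimatePoseRANSAC() boils down to; A_matrix is the camera
// intrinsic matrix, R_matrix/t_matrix receive the recovered pose.
void estimatePoseRANSAC( const std::vector<cv::Point3f> &list_points3d,
                         const std::vector<cv::Point2f> &list_points2d,
                         const cv::Mat &A_matrix, int flags, cv::Mat &inliers,
                         int iterationsCount, float reprojectionError, double confidence,
                         cv::Mat &R_matrix, cv::Mat &t_matrix )
{
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64FC1);  // assume no lens distortion
    cv::Mat rvec = cv::Mat::zeros(3, 1, CV_64FC1);         // rotation as a Rodrigues vector
    cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64FC1);         // translation vector

    // RANSAC: repeatedly solve PnP on random minimal subsets and keep the pose
    // whose reprojection error stays below reprojectionError for the most points;
    // the indices of those points are returned in inliers.
    cv::solvePnPRansac( list_points3d, list_points2d, A_matrix, distCoeffs,
                        rvec, tvec, false /*useExtrinsicGuess*/,
                        iterationsCount, reprojectionError, confidence,
                        inliers, flags );

    cv::Rodrigues(rvec, R_matrix);  // convert the rotation vector to a 3x3 matrix
    t_matrix = tvec;
}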

Step 4 collects and draws the inliers, which is relatively simple:

      // -- Step 4: Catch the inliers keypoints to draw
      for(int inliers_index = 0; inliers_index < inliers_idx.rows; ++inliers_index)
      {
        int n = inliers_idx.at<int>(inliers_index);         // i-inlier
        Point2f point2d = list_points2d_scene_match[n]; // i-inlier point 2D
        list_points2d_inliers.push_back(point2d);           // add i-inlier to list
      }

      // Draw inliers points 2D
      draw2DPoints(frame_vis, list_points2d_inliers, blue);

Step 5 treats the R and t obtained from the PnP algorithm as the measurement and updates the Kalman filter with it:

      // -- Step 5: Kalman Filter
      good_measurement = false;
      // GOOD MEASUREMENT
      if( inliers_idx.rows >= minInliersKalman )
      {

        // Get the measured translation
        Mat translation_measured(3, 1, CV_64F);
        translation_measured = pnp_detection.get_t_matrix();

        // Get the measured rotation
        Mat rotation_measured(3, 3, CV_64F);
        rotation_measured = pnp_detection.get_R_matrix();

        // fill the measurements vector
        fillMeasurements(measurements, translation_measured, rotation_measured);

        good_measurement = true;

      }

      // Instantiate estimated translation and rotation
      Mat translation_estimated(3, 1, CV_64F);
      Mat rotation_estimated(3, 3, CV_64F);

      // update the Kalman filter with good measurements
      updateKalmanFilter( KF, measurements,
                          translation_estimated, rotation_estimated);

This code performs the following steps (a sketch of the two helper functions follows the list):

(1) Decide whether the current measurement is reliable (based on the number of inliers);

(2) If it is, collect the measurement, i.e. the rotation matrix and translation vector obtained from PnP, into the measurements vector;

(3) Update the Kalman filter with the measurements vector. Note that updateKalmanFilter() is called on every frame; when there is no good measurement, the previous contents of measurements are simply reused.
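For reference, here is a sketch of what fillMeasurements() and updateKalmanFilter() do, assuming the sample's 18-dimensional state (position, velocity, acceleration, Euler angles, angular velocity, angular acceleration) and 6-dimensional measurement (translation plus Euler angles). rot2euler() and euler2rot() are the sample's own conversion helpers (declared here only), and the exact bodies should be read as an approximation of the sample's code.

#include <opencv2/video/tracking.hpp>  // cv::KalmanFilter

// Assumed helpers from the sample: rotation matrix <-> Euler angles.
cv::Mat rot2euler(const cv::Mat &rotationMatrix);
cv::Mat euler2rot(const cv::Mat &euler);

// Pack the PnP result (t and R) into the 6x1 measurement vector: x, y, z, roll, pitch, yaw.
void fillMeasurements( cv::Mat &measurements,
                       const cv::Mat &translation_measured, const cv::Mat &rotation_measured )
{
    cv::Mat measured_eulers = rot2euler(rotation_measured);
    measurements.at<double>(0) = translation_measured.at<double>(0);  // x
    measurements.at<double>(1) = translation_measured.at<double>(1);  // y
    measurements.at<double>(2) = translation_measured.at<double>(2);  // z
    measurements.at<double>(3) = measured_eulers.at<double>(0);       // roll
    measurements.at<double>(4) = measured_eulers.at<double>(1);       // pitch
    measurements.at<double>(5) = measured_eulers.at<double>(2);       // yaw
}

// Predict, correct with the measurement, and unpack the filtered pose.
void updateKalmanFilter( cv::KalmanFilter &KF, cv::Mat &measurement,
                         cv::Mat &translation_estimated, cv::Mat &rotation_estimated )
{
    KF.predict();                                 // time update (uses the motion model)
    cv::Mat estimated = KF.correct(measurement);  // measurement update

    // Assumed state layout: 0-2 position, 3-8 its derivatives, 9-11 Euler angles, ...
    translation_estimated.at<double>(0) = estimated.at<double>(0);
    translation_estimated.at<double>(1) = estimated.at<double>(1);
    translation_estimated.at<double>(2) = estimated.at<double>(2);

    cv::Mat eulers_estimated(3, 1, CV_64F);
    eulers_estimated.at<double>(0) = estimated.at<double>(9);
    eulers_estimated.at<double>(1) = estimated.at<double>(10);
    eulers_estimated.at<double>(2) = estimated.at<double>(11);

    rotation_estimated = euler2rot(eulers_estimated);  // back to a 3x3 rotation matrix
}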

Step 6 uses the filtered rotation matrix and translation vector to build the camera pose (projection) matrix:

pnp_detection_est.set_P_matrix(rotation_estimated, translation_estimated);
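What set_P_matrix() produces is essentially the 3x4 extrinsic matrix [R | t]; together with the intrinsic matrix A it projects 3D model points into the image (x ~ A [R | t] X), which is how the box is re-drawn with the filtered pose. Below is a small sketch of that composition; the function name composePoseMatrix is only for illustration, as the sample does this inside PnPProblem::set_P_matrix().

#include <opencv2/core.hpp>

// Stack a 3x3 rotation R and a 3x1 translation t (both CV_64F) into [R | t].
cv::Mat composePoseMatrix(const cv::Mat &R, const cv::Mat &t)
{
    cv::Mat Rt = cv::Mat::zeros(3, 4, CV_64FC1);
    cv::Mat R_block = Rt(cv::Rect(0, 0, 3, 3));  // left 3x3 block: rotation
    R.copyTo(R_block);
    cv::Mat t_block = Rt(cv::Rect(3, 0, 1, 3));  // right 3x1 column: translation
    t.copyTo(t_block);
    return Rt;  // a homogeneous 3D point X projects as x ~ A * Rt * X
}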

For each frame, roughly these six steps are executed.

It is quite interesting how the sample combines the PnP problem with Kalman prediction,

and it also demonstrates a good way to organize this kind of code into classes.





