14 Lectures on Visual SLAM — Chapter 7, Exercise 10 (Part 2)

Reposted from: "Implementing ICP optimization in Ceres (optimizing the pose only)", luo870604851's blog on CSDN: https://blog.csdn.net/luo870604851/article/details/82356394

10. *Implement the PnP and ICP optimizations in Ceres.

Building on the original author's code, I added a CMakeLists.txt. In the original post, rotation.h and the main code were separate files, which caused errors when building with cmake on Ubuntu, so I merged the two into a single cpp file. I also applied the original author's approach to PnP solving. This should make it easy for everyone to build and run the code on Ubuntu. I have not written detailed code comments; if someone finishes annotating the code, please message me and share it — I would be very grateful.

10icp1.cpp

#include <iostream>
#include <opencv2/core/core.hpp>
#include <ceres/ceres.h>
#include <chrono>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <Eigen/SVD>
#include <algorithm>
#include <cmath>
#include <limits>
#include "sophus/se3.hpp"


using namespace std;
using namespace Eigen;
using namespace cv;

#ifndef ROTATION_H
#define ROTATION_H

// math functions needed for rotation conversion. 
 
// dot and cross products
 
template<typename T> 
inline T DotProduct(const T x[3], const T y[3]) {
  return (x[0] * y[0] + x[1] * y[1] + x[2] * y[2]);
}
 
template<typename T>
inline void CrossProduct(const T x[3], const T y[3], T result[3]){
  result[0] = x[1] * y[2] - x[2] * y[1];
  result[1] = x[2] * y[0] - x[0] * y[2];
  result[2] = x[0] * y[1] - x[1] * y[0];
}
 
 
//
 
 
// Converts from an angle-axis representation to a quaternion:
template<typename T>
inline void AngleAxisToQuaternion(const T* angle_axis, T* quaternion){
  const T& a0 = angle_axis[0];
  const T& a1 = angle_axis[1];
  const T& a2 = angle_axis[2];
  const T theta_squared = a0 * a0 + a1 * a1 + a2 * a2;
  
  
  if(theta_squared > T(std::numeric_limits<double>::epsilon()) ){
    const T theta = sqrt(theta_squared);
    const T half_theta = theta * T(0.5);
    const T k = sin(half_theta)/theta;
    quaternion[0] = cos(half_theta);
    quaternion[1] = a0 * k;
    quaternion[2] = a1 * k;
    quaternion[3] = a2 * k;
  }
  else{ // in case theta_squared is zero
    const T k(0.5);
    quaternion[0] = T(1.0);
    quaternion[1] = a0 * k;
    quaternion[2] = a1 * k;
    quaternion[3] = a2 * k;
  }
}
 
 
template<typename T>
inline void QuaternionToAngleAxis(const T* quaternion, T* angle_axis){
  const T& q1 = quaternion[1];
  const T& q2 = quaternion[2];
  const T& q3 = quaternion[3];
  const T sin_squared_theta = q1 * q1 + q2 * q2 + q3 * q3;
  
  // For quaternions representing non-zero rotation, the conversion
  // is numerically stable
  if(sin_squared_theta > T(std::numeric_limits<double>::epsilon()) ){
    const T sin_theta = sqrt(sin_squared_theta);
    const T& cos_theta = quaternion[0];
    
    // If cos_theta is negative, theta is greater than pi/2, which
    // means that the angle of the angle_axis vector, which is 2 * theta,
    // would be greater than pi...
    
    const T two_theta = T(2.0) * ((cos_theta < 0.0)
				  ? atan2(-sin_theta, -cos_theta)
				  : atan2(sin_theta, cos_theta));
    const T k = two_theta / sin_theta;
    
    angle_axis[0] = q1 * k;
    angle_axis[1] = q2 * k;
    angle_axis[2] = q3 * k;
  }
  else{
    // For zero rotation, sqrt() will produce NaN in the derivative since
    // the argument is zero. By approximating with a Taylor series,
    // and truncating at one term, the value and first derivatives will be
    // computed correctly when Jets are used.
    const T k(2.0);
    angle_axis[0] = q1 * k;
    angle_axis[1] = q2 * k;
    angle_axis[2] = q3 * k;
  }
  
}
 
 
template<typename T>
inline void AngleAxisRotatePoint(const T angle_axis[3], const T pt[3], T result[3]) {
  const T theta2 = DotProduct(angle_axis, angle_axis);
  if (theta2 > T(std::numeric_limits<double>::epsilon())) {
    // Away from zero, use the Rodrigues formula
    //
    //   result = pt costheta +
    //            (w x pt) * sintheta +
    //            w (w . pt) (1 - costheta)
    //
    // We want to be careful to only evaluate the square root if the
    // norm of the angle_axis vector is greater than zero. Otherwise
    // we get a division by zero.
    //
    const T theta = sqrt(theta2);
    const T costheta = cos(theta);
    const T sintheta = sin(theta);
    const T theta_inverse = 1.0 / theta;
 
    const T w[3] = { angle_axis[0] * theta_inverse,
                     angle_axis[1] * theta_inverse,
                     angle_axis[2] * theta_inverse };
 
    // Explicitly inlined evaluation of the cross product for
    // performance reasons.
    /*const T w_cross_pt[3] = { w[1] * pt[2] - w[2] * pt[1],
                              w[2] * pt[0] - w[0] * pt[2],
                              w[0] * pt[1] - w[1] * pt[0] };*/
    T w_cross_pt[3];
    CrossProduct(w, pt, w_cross_pt);                          
 
 
    const T tmp = DotProduct(w, pt) * (T(1.0) - costheta);
    //    (w[0] * pt[0] + w[1] * pt[1] + w[2] * pt[2]) * (T(1.0) - costheta);
 
    result[0] = pt[0] * costheta + w_cross_pt[0] * sintheta + w[0] * tmp;
    result[1] = pt[1] * costheta + w_cross_pt[1] * sintheta + w[1] * tmp;
    result[2] = pt[2] * costheta + w_cross_pt[2] * sintheta + w[2] * tmp;
  } else {
    // Near zero, the first order Taylor approximation of the rotation
    // matrix R corresponding to a vector w and angle theta is
    //
    //   R = I + hat(w) * sin(theta)
    //
    // But sintheta ~ theta and theta * w = angle_axis, which gives us
    //
    //  R = I + hat(w)
    //
    // and actually performing multiplication with the point pt, gives us
    // R * pt = pt + w x pt.
    //
    // Switching to the Taylor expansion near zero provides meaningful
    // derivatives when evaluated using Jets.
    //
    // Explicitly inlined evaluation of the cross product for
    // performance reasons.
    /*const T w_cross_pt[3] = { angle_axis[1] * pt[2] - angle_axis[2] * pt[1],
                              angle_axis[2] * pt[0] - angle_axis[0] * pt[2],
                              angle_axis[0] * pt[1] - angle_axis[1] * pt[0] };*/
    T w_cross_pt[3];
    CrossProduct(angle_axis, pt, w_cross_pt); 
 
    result[0] = pt[0] + w_cross_pt[0];
    result[1] = pt[1] + w_cross_pt[1];
    result[2] = pt[2] + w_cross_pt[2];
  }
}
 
#endif // ROTATION_H

void find_feature_matches (
    const Mat& img_1, const Mat& img_2,
    std::vector<KeyPoint>& keypoints_1,
    std::vector<KeyPoint>& keypoints_2,
    std::vector< DMatch >& matches );

// convert pixel coordinates to normalized camera coordinates
Point2d pixel2cam ( const Point2d& p, const Mat& K );


void pose_estimation_3d3d (
    const vector<Point3f>& pts1,
    const vector<Point3f>& pts2,
    Mat& R, Mat& t
);


struct cost_function_define
{
  cost_function_define(Point3f p1, Point3f p2) : _p1(p1), _p2(p2) {}
  // Residual: transform the point from frame 1, project both it and the
  // measured point from frame 2 into pixel coordinates, and take the
  // difference — i.e. this minimizes a reprojection error rather than the
  // classic 3D point-to-point ICP error.
  template<typename T>
  bool operator()(const T* const cere_r, const T* const cere_t, T* residual) const
  {
    T p_1[3];
    T p_2[3];
    p_1[0] = T(_p1.x);
    p_1[1] = T(_p1.y);
    p_1[2] = T(_p1.z);
    AngleAxisRotatePoint(cere_r, p_1, p_2);   // p_2 = R * p_1
    p_2[0] = p_2[0] + cere_t[0];              // p_2 = R * p_1 + t
    p_2[1] = p_2[1] + cere_t[1];
    p_2[2] = p_2[2] + cere_t[2];
    const T x = p_2[0] / p_2[2];              // normalized camera coordinates
    const T y = p_2[1] / p_2[2];
    const T u = x * 520.9 + 325.1;            // pixel coordinates via fx, cx
    const T v = y * 521.0 + 249.7;            // and fy, cy
    T p_3[3];
    p_3[0] = T(_p2.x);
    p_3[1] = T(_p2.y);
    p_3[2] = T(_p2.z);
    const T x1 = p_3[0] / p_3[2];
    const T y1 = p_3[1] / p_3[2];
    const T u1 = x1 * 520.9 + 325.1;
    const T v1 = y1 * 521.0 + 249.7;
    residual[0] = u - u1;
    residual[1] = v - v1;
    return true;
  }
  Point3f _p1, _p2;
};




int main ( int argc, char** argv )
{
    if ( argc != 5 )
    {
        cout<<"usage: pose_estimation_3d3d img1 img2 depth1 depth2"<<endl;
        return 1;
    }
    //-- load the images
    Mat img_1 = imread ( argv[1], CV_LOAD_IMAGE_COLOR );
    Mat img_2 = imread ( argv[2], CV_LOAD_IMAGE_COLOR );

    vector<KeyPoint> keypoints_1, keypoints_2;
    vector<DMatch> matches;
    find_feature_matches ( img_1, img_2, keypoints_1, keypoints_2, matches );
    cout<<"一共找到了"<<matches.size() <<"组匹配点"<<endl;

    // build the 3D points
    Mat depth1 = imread ( argv[3], CV_LOAD_IMAGE_UNCHANGED );       // the depth maps are 16-bit unsigned, single-channel images
    Mat depth2 = imread ( argv[4], CV_LOAD_IMAGE_UNCHANGED );
    Mat K = ( Mat_<double> ( 3,3 ) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1 );
    vector<Point3f> pts1, pts2;

    for ( DMatch m:matches )
    {
        ushort d1 = depth1.ptr<unsigned short> ( int ( keypoints_1[m.queryIdx].pt.y ) ) [ int ( keypoints_1[m.queryIdx].pt.x ) ];
        ushort d2 = depth2.ptr<unsigned short> ( int ( keypoints_2[m.trainIdx].pt.y ) ) [ int ( keypoints_2[m.trainIdx].pt.x ) ];
        if ( d1==0 || d2==0 )   // bad depth
            continue;
        Point2d p1 = pixel2cam ( keypoints_1[m.queryIdx].pt, K );
        Point2d p2 = pixel2cam ( keypoints_2[m.trainIdx].pt, K );
        float dd1 = float ( d1 ) /1000.0;
        float dd2 = float ( d2 ) /1000.0;
        pts1.push_back ( Point3f ( p1.x*dd1, p1.y*dd1, dd1 ) );
        pts2.push_back ( Point3f ( p2.x*dd2, p2.y*dd2, dd2 ) );
    }

    cout<<"3d-3d pairs: "<<pts1.size() <<endl;
    Mat R, t;
    pose_estimation_3d3d ( pts1, pts2, R, t );
    cout<<"ICP via SVD results: "<<endl;
    cout<<"R = "<<R<<endl;
    cout<<"t = "<<t<<endl;
    cout<<"R_inv = "<<R.t() <<endl;
    cout<<"t_inv = "<<-R.t() *t<<endl;

  for ( int i=0; i<5; i++ )
    {
        cout<<"p1 = "<<pts1[i]<<endl;
        cout<<"p2 = "<<pts2[i]<<endl;
        cout<<"(R*p2+t) = "<< 
            R * (Mat_<double>(3,1)<<pts2[i].x, pts2[i].y, pts2[i].z) + t
            <<endl;
        cout<<endl;
    }

    cout<<"----------------------------------"<<endl;
    cout<<"calling bundle adjustment"<<endl;

    
     // initial guess: zero rotation (angle-axis) and the translation from the SVD result
     // (note: the SVD gives p1 = R*p2 + t, while the Ceres problem below estimates
     // the transform taking p1 to p2, so this is only a rough starting point)
     double cere_rot[3], cere_tranf[3];
     cere_rot[0] = 0;
     cere_rot[1] = 0;
     cere_rot[2] = 0;
     cere_tranf[0] = t.at<double>(0,0);
     cere_tranf[1] = t.at<double>(1,0);
     cere_tranf[2] = t.at<double>(2,0);

  //  bundleAdjustment( pts1, pts2, R, t );
    ceres::Problem problem;
  for(int i=0;i<pts1.size();i++)
  {
    // residual dimension 2; two parameter blocks of dimension 3 (rotation, translation)
    ceres::CostFunction* costfunction=new ceres::AutoDiffCostFunction<cost_function_define,2,3,3>(new cost_function_define(pts1[i],pts2[i]));
    problem.AddResidualBlock(costfunction,NULL,cere_rot,cere_tranf); // note: cere_rot must not be of type cv::Mat
  }

  
  ceres::Solver::Options option;
  option.linear_solver_type=ceres::DENSE_SCHUR;
  // print the iteration progress to stdout
  option.minimizer_progress_to_stdout=true;
  ceres::Solver::Summary summary;
  // solve
  ceres::Solve(option,&problem,&summary);
  // print the optimization summary
  cout<<summary.BriefReport()<<endl;

    // verify p2 = R*p1 + t (the Ceres problem estimates the transform from frame 1 to frame 2)

  cout<<"-----------optional after---------------"<<endl;
  
  Mat cam_3d = ( Mat_<double> ( 3,1 )<<cere_rot[0],cere_rot[1],cere_rot[2]);
  Mat cam_9d;
  cv::Rodrigues ( cam_3d, cam_9d ); // convert the angle-axis vector to a rotation matrix with the Rodrigues formula

  cout<<"cam_9d:"<<endl<<cam_9d<<endl;
  cout<<"cam_t:"<<cere_tranf[0]<<"  "<<cere_tranf[1]<<"  "<<cere_tranf[2]<<endl;
  Mat tranf_3d = ( Mat_<double> ( 3,1 )<<cere_tranf[0],cere_tranf[1],cere_tranf[2]);


  for ( int i=0; i<5; i++ )
    {
        cout<<"p1 = "<<pts1[i]<<endl;
        cout<<"p2 = "<<pts2[i]<<endl;
        cout<<"(R*p1+t) = "<< 
            cam_9d * (Mat_<double>(3,1)<<pts1[i].x, pts1[i].y, pts1[i].z) + tranf_3d
            <<endl;
        cout<<endl;
    }
  
    
    return 0;
}

void find_feature_matches ( const Mat& img_1, const Mat& img_2,
                            std::vector<KeyPoint>& keypoints_1,
                            std::vector<KeyPoint>& keypoints_2,
                            std::vector< DMatch >& matches )
{
    //-- initialization
    Mat descriptors_1, descriptors_2;
    // used in OpenCV3 
    Ptr<FeatureDetector> detector = ORB::create();
    Ptr<DescriptorExtractor> descriptor = ORB::create();
    // use this if you are in OpenCV2 
    // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
    // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
    Ptr<DescriptorMatcher> matcher  = DescriptorMatcher::create("BruteForce-Hamming");
    //-- Step 1: detect Oriented FAST corner locations
    detector->detect ( img_1,keypoints_1 );
    detector->detect ( img_2,keypoints_2 );

    //-- Step 2: compute BRIEF descriptors from the corner locations
    descriptor->compute ( img_1, keypoints_1, descriptors_1 );
    descriptor->compute ( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: match the BRIEF descriptors between the two images, using the Hamming distance
    vector<DMatch> match;
   // BFMatcher matcher ( NORM_HAMMING );
    matcher->match ( descriptors_1, descriptors_2, match );

    //-- Step 4: filter the matched pairs
    double min_dist=10000, max_dist=0;

    // find the minimum and maximum distances over all matches, i.e. the distances between the most similar and least similar pairs
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = match[i].distance;
        if ( dist < min_dist ) min_dist = dist;
        if ( dist > max_dist ) max_dist = dist;
    }

    printf ( "-- Max dist : %f \n", max_dist );
    printf ( "-- Min dist : %f \n", min_dist );

    // when the descriptor distance is more than twice the minimum distance, treat the match as wrong; since the minimum distance can be very small, use an empirical floor of 30
    for ( int i = 0; i < descriptors_1.rows; i++ )
    {
        if ( match[i].distance <= max ( 2*min_dist, 30.0 ) )
        {
            matches.push_back ( match[i] );
        }
    }
}

Point2d pixel2cam ( const Point2d& p, const Mat& K )
{
    return Point2d
           (
               ( p.x - K.at<double> ( 0,2 ) ) / K.at<double> ( 0,0 ),
               ( p.y - K.at<double> ( 1,2 ) ) / K.at<double> ( 1,1 )
           );
}

void pose_estimation_3d3d (
    const vector<Point3f>& pts1,
    const vector<Point3f>& pts2,
    Mat& R, Mat& t
)
{
    Point3f p1, p2;     // center of mass
    int N = pts1.size();
    for ( int i=0; i<N; i++ )
    {
        p1 += pts1[i];
        p2 += pts2[i];
    }
    p1 = Point3f( Vec3f(p1) /  N);
    p2 = Point3f( Vec3f(p2) / N);
    vector<Point3f>     q1 ( N ), q2 ( N ); // remove the center
    for ( int i=0; i<N; i++ )
    {
        q1[i] = pts1[i] - p1;
        q2[i] = pts2[i] - p2;
    }

    // compute q1*q2^T
    Eigen::Matrix3d W = Eigen::Matrix3d::Zero();
    for ( int i=0; i<N; i++ )
    {
        W += Eigen::Vector3d ( q1[i].x, q1[i].y, q1[i].z ) * Eigen::Vector3d ( q2[i].x, q2[i].y, q2[i].z ).transpose();
    }
    cout<<"W="<<W<<endl;

    // SVD on W
    Eigen::JacobiSVD<Eigen::Matrix3d> svd ( W, Eigen::ComputeFullU|Eigen::ComputeFullV );
    Eigen::Matrix3d U = svd.matrixU();
    Eigen::Matrix3d V = svd.matrixV();
    cout<<"U="<<U<<endl;
    cout<<"V="<<V<<endl;

    Eigen::Matrix3d R_ = U* ( V.transpose() );
    if ( R_.determinant() < 0 )   // guard against a reflection: ensure det(R) = +1
        R_ = -R_;
    Eigen::Vector3d t_ = Eigen::Vector3d ( p1.x, p1.y, p1.z ) - R_ * Eigen::Vector3d ( p2.x, p2.y, p2.z );

    // convert to cv::Mat
    R = ( Mat_<double> ( 3,3 ) <<
          R_ ( 0,0 ), R_ ( 0,1 ), R_ ( 0,2 ),
          R_ ( 1,0 ), R_ ( 1,1 ), R_ ( 1,2 ),
          R_ ( 2,0 ), R_ ( 2,1 ), R_ ( 2,2 )
        );
    t = ( Mat_<double> ( 3,1 ) << t_ ( 0,0 ), t_ ( 1,0 ), t_ ( 2,0 ) );
}

CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(ch7_2)

set(CMAKE_BUILD_TYPE "Release")
add_definitions("-DENABLE_SSE")
#set(CMAKE_CXX_FLAGS "-std=c++11 -O2 ${SSE_FLAGS} -msse4")
set(CMAKE_CXX_FLAGS "-std=c++14 -O2 ${SSE_FLAGS} -msse4") 

list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake)

find_package(OpenCV 3 REQUIRED)
find_package(G2O REQUIRED)
find_package(Sophus REQUIRED)
find_package(Ceres REQUIRED)

include_directories(
        ${OpenCV_INCLUDE_DIRS}
        ${G2O_INCLUDE_DIRS}
        ${Sophus_INCLUDE_DIRS}
        ${CERES_INCLUDE_DIRS}
        "/usr/local/include/eigen3/"
)
        
add_executable(10pnp1 10pnp1.cpp)
target_link_libraries(10pnp1
        g2o_core g2o_stuff
        ${OpenCV_LIBS} ${CERES_LIBRARIES} fmt)
        
add_executable(10icp1 10icp1.cpp)
target_link_libraries(10icp1
        g2o_core g2o_stuff
        ${OpenCV_LIBS} ${CERES_LIBRARIES} fmt)

Execution results:

./10icp1 1.png 2.png 1_depth.png 2_depth.png 

-- Max dist : 94.000000 
-- Min dist : 4.000000 
found 79 matched point pairs in total
3d-3d pairs: 72
W= 271.775  -25.487  63.6929
-54.0082  96.3269 -144.436
 98.6846 -144.995  240.551
U=  0.558087  -0.829399 -0.0252034
 -0.428009  -0.313755   0.847565
  0.710878   0.462228   0.530093
V=  0.617887  -0.784771 -0.0484805
 -0.399894  -0.366747   0.839989
  0.676979   0.499631   0.540434
ICP via SVD results: 
R = [0.9969452356349386, 0.0598334680297723, -0.05020112795872303;
 -0.05932606950498385, 0.9981719688174682, 0.01153855034871776;
 0.0507997502148167, -0.008525067189780355, 0.9986724731399795]
t = [0.7207992665571041;
 -0.3333924223878955;
 -0.1504889019731461]
R_inv = [0.9969452356349386, -0.05932606950498385, 0.0507997502148167;
 0.0598334680297723, 0.9981719688174682, -0.008525067189780355;
 -0.05020112795872303, 0.01153855034871776, 0.9986724731399795]
t_inv = [-0.7307314580359445;
 0.2883721227716854;
 0.1903209253782324]
p1 = [-0.187062, -4.15408, 13.724]
p2 = [-0.0557393, -3.73382, 13.826]
(R*p2+t) = [-0.2522577364161404;
 -3.897544243811159;
 13.68615641608679]

p1 = [-1.21849, -0.588597, 7.924]
p2 = [-1.48606, -0.478307, 8.279]
(R*p2+t) = [-1.204950942925912;
 -0.6271353782194089;
 8.046107116892889]

p1 = [-3.13876, 0.800932, 6.698]
p2 = [-3.54823, 0.795163, 7.106]
(R*p2+t) = [-3.12573901212394;
 0.7528119377320688;
 6.759049858892065]

p1 = [-1.61722, 0.524364, 7.133]
p2 = [-1.9954, 0.602349, 7.419]
(R*p2+t) = [-1.604903085936805;
 0.4718387339466705;
 7.152161629899285]

p1 = [-3.13611, 0.50727, 6.558]
p2 = [-3.54854, 0.50108, 6.999]
(R*p2+t) = [-3.138279851397398;
 0.4580510316850339;
 6.65468298422584]

----------------------------------
calling bundle adjustment
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  2.028716e+05    0.00e+00    2.64e+06   0.00e+00   0.00e+00  1.00e+04        0    4.51e-05    1.87e-04
   1  8.030585e+02    2.02e+05    1.40e+05   1.34e+00   9.97e-01  3.00e+04        1    8.68e-05    2.94e-04
   2  1.474886e+02    6.56e+02    2.57e+02   1.31e-01   1.00e+00  9.00e+04        1    4.51e-05    3.47e-04
   3  1.474823e+02    6.32e-03    1.04e-01   8.21e-04   1.00e+00  2.70e+05        1    4.39e-05    3.97e-04
Ceres Solver Report: Iterations: 4, Initial cost: 2.028716e+05, Final cost: 1.474823e+02, Termination: CONVERGENCE
-----------after optimization---------------
cam_9d:
[0.9979033018242032, -0.05090521379356253, 0.03997073200461912;
 0.049806825147126, 0.9983659841089123, 0.02801146092239384;
 -0.04133134860026962, -0.0259619140834198, 0.9988081390537464]
cam_t:-0.63524  -0.0415229  0.301918
p1 = [-0.187062, -4.15408, 13.724]
p2 = [-0.0557393, -3.73382, 13.826]
(R*p1+t) = [-0.06188644477170713;
 -3.813701172132307;
 14.12513974193004]

p1 = [-1.21849, -0.588597, 7.924]
p2 = [-1.48606, -0.478307, 8.279]
(R*p1+t) = [-1.504485891696794;
 -0.4678840063604472;
 8.282116008489821]

p1 = [-3.13876, 0.800932, 6.698]
p2 = [-3.54823, 0.795163, 7.106]
(R*p1+t) = [-3.540469400381376;
 0.7893890293101794;
 7.100869992025547]

p1 = [-1.61722, 0.524364, 7.133]
p2 = [-1.9954, 0.602349, 7.419]
(R*p1+t) = [-1.990645496013816;
 0.6012421905123817;
 7.479644052702291]

p1 = [-3.13611, 0.50727, 6.558]
p2 = [-3.54854, 0.50108, 6.999]
(R*p1+t) = [-3.528465233005454;
 0.4924174084735043;
 6.968551213550437]

10pnp1.cpp

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <ceres/ceres.h>
#include <ceres/rotation.h>
#include <Eigen/Core>
#include <Eigen/Geometry>
#include "sophus/se3.hpp"
#include <chrono>
#include <algorithm>
#include <cmath>
#include <limits>

using namespace std;
using namespace Eigen;
using namespace cv;



void find_feature_matches(
  const Mat &img_1, const Mat &img_2,
  std::vector<KeyPoint> &keypoints_1,
  std::vector<KeyPoint> &keypoints_2,
  std::vector<DMatch> &matches);

// convert pixel coordinates to normalized camera coordinates
Point2d pixel2cam(const Point2d &p, const Mat &K);

//Ref:http://www.ceres-solver.org/nnls_tutorial.html#bundle-adjustment
struct PnPReprojectionError {
  PnPReprojectionError(Point2f pts_2d, Point3f pts_3d)
      : _pts_2d(pts_2d), _pts_3d(pts_3d) {}

  template <typename T>
  bool operator()(const T* const rotation_vector,
                  const T* const translation_vector,
                  T* residuals) const {
		    
    T p_transformed[3], p_origin[3];
    p_origin[0]=T(_pts_3d.x);
    p_origin[1]=T(_pts_3d.y);
    p_origin[2]=T(_pts_3d.z);
    ceres::AngleAxisRotatePoint(rotation_vector, p_origin, p_transformed);
    
    // add the translation after the rotation
    p_transformed[0] += translation_vector[0]; 
    p_transformed[1] += translation_vector[1]; 
    p_transformed[2] += translation_vector[2];

    // normalize to the image plane
    T xp = p_transformed[0] / p_transformed[2];
    T yp = p_transformed[1] / p_transformed[2];

    
    double fx=520.9, fy=521.0, cx=325.1, cy=249.7;
    // Compute final projected point position.
    T predicted_x = fx * xp + cx;
    T predicted_y = fy * yp + cy;

    // The error is the difference between the predicted and observed position.
    residuals[0] = T(_pts_2d.x) - predicted_x;
    residuals[1] = T(_pts_2d.y) - predicted_y;
    return true;
  }

   // 2, 3, 3: the residual (residuals) has dimension 2;
   // the parameter blocks (rotation_vector, translation_vector) each have dimension 3
   static ceres::CostFunction* Create(const Point2f _pts_2d,
                                      const Point3f _pts_3d) {
     return (new ceres::AutoDiffCostFunction<PnPReprojectionError, 2, 3, 3>(
                 new PnPReprojectionError(_pts_2d, _pts_3d)));
   }

  Point2f _pts_2d;
  Point3f _pts_3d;
};

// Use the Sophus library to simplify the computation, and supply an analytic Jacobian instead of automatic differentiation
class PnPSE3ReprojectionError : public ceres::SizedCostFunction<2, 6> {
public:
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW

    PnPSE3ReprojectionError(Eigen::Vector2d pts_2d, Eigen::Vector3d pts_3d) :
            _pts_2d(pts_2d), _pts_3d(pts_3d) {}

    virtual ~PnPSE3ReprojectionError() {}

    virtual bool Evaluate(
      double const* const* parameters, double *residuals, double **jacobians) const {

        Eigen::Map<const Eigen::Matrix<double,6,1>> se3(*parameters);	

        Sophus::SE3d T = Sophus::SE3d::exp(se3);

        Eigen::Vector3d Pc = T * _pts_3d;

        Eigen::Matrix3d K;
        double fx = 520.9, fy = 521.0, cx = 325.1, cy = 249.7;
        K << fx, 0,  cx,
             0,  fy, cy,
             0,  0,  1;

        Eigen::Vector2d error =  _pts_2d - (K * Pc).hnormalized();

        residuals[0] = error[0];
        residuals[1] = error[1];

        if(jacobians != NULL) {
            if(jacobians[0] != NULL) {
                Eigen::Map<Eigen::Matrix<double, 2, 6, Eigen::RowMajor>> J(jacobians[0]);
	      
                double x = Pc[0];
                double y = Pc[1];
                double z = Pc[2];

                double x2 = x*x;
                double y2 = y*y;
                double z2 = z*z;
		
                // see Eq. (7.46) on page 187 of the book for the derivation of this Jacobian
                J(0,0) = -fx/z;
                J(0,1) =  0;
                J(0,2) =  fx*x/z2;
                J(0,3) =  fx*x*y/z2;
                J(0,4) = -fx-fx*x2/z2;
                J(0,5) =  fx*y/z;
                J(1,0) =  0;
                J(1,1) = -fy/z;
                J(1,2) =  fy*y/z2;
                J(1,3) =  fy+fy*y2/z2;
                J(1,4) = -fy*x*y/z2;
                J(1,5) = -fy*x/z;
            }
        }

        return true;
    }

private:
    const Eigen::Vector2d _pts_2d;
    const Eigen::Vector3d _pts_3d;
};


int main(int argc, char **argv){
  if (argc != 5) {
    cout << "usage: pose_estimation_3d2d img1 img2 depth1 depth2" << endl;
    return 1;
  }
  //-- load the images
  Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
  Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);
  assert(img_1.data && img_2.data && "Can not load images!");

  vector<KeyPoint> keypoints_1, keypoints_2;
  vector<DMatch> matches;
  find_feature_matches(img_1, img_2, keypoints_1, keypoints_2, matches);
  cout << "一共找到了" << matches.size() << "组匹配点" << endl;

  // build the 3D points
  Mat d1 = imread(argv[3], CV_LOAD_IMAGE_UNCHANGED);       // the depth map is a 16-bit unsigned, single-channel image
  Mat K = (Mat_<double>(3, 3) << 520.9, 0, 325.1, 0, 521.0, 249.7, 0, 0, 1);
  vector<Point3f> pts_3d;
  vector<Point2f> pts_2d;
  vector<Vector3d> pts_3d_eigen;
  vector<Vector2d> pts_2d_eigen;
  
  for (DMatch m:matches) {
    ushort d = d1.ptr<unsigned short>(int(keypoints_1[m.queryIdx].pt.y))[int(keypoints_1[m.queryIdx].pt.x)];
    if (d == 0)   // bad depth
      continue;
    float dd = d / 5000.0;
    Point2d p1 = pixel2cam(keypoints_1[m.queryIdx].pt, K);
    pts_3d.push_back(Point3f(p1.x * dd, p1.y * dd, dd)); // 3D point observed by the first camera
    pts_2d.push_back(keypoints_2[m.trainIdx].pt);        // projection of the feature point in the second camera
    pts_3d_eigen.push_back(Vector3d(p1.x * dd, p1.y * dd, dd));
    pts_2d_eigen.push_back(Vector2d(keypoints_2[m.trainIdx].pt.x, keypoints_2[m.trainIdx].pt.y));
  }

  cout << "3d-2d pairs: " << pts_3d.size() << endl;

  chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
  Mat r, t;
  solvePnP(pts_3d, pts_2d, K, Mat(), r, t, false); // call OpenCV's PnP solver; EPnP, DLS and other methods can be selected
  Mat R;
  cv::Rodrigues(r, R); // r is in angle-axis form; convert it to a matrix with the Rodrigues formula
  chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
  chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "solve pnp in opencv cost time: " << time_used.count() << " seconds." << endl;

  cout << "R=" << endl << R << endl;
  cout << "t=" << endl << t << endl;
  cout << endl;
  // solve PnP with Ceres, using automatic differentiation
  cout << "Ceres solution (automatic differentiation):" << endl;
  double r_ceres[3]={0,0,0};
  double t_ceres[3]={0,0,0};
  
  ceres::Problem problem;
  for (size_t i = 0; i < pts_2d.size(); ++i) {
    ceres::CostFunction* cost_function =
	PnPReprojectionError::Create(pts_2d[i],pts_3d[i]);
    problem.AddResidualBlock(cost_function, 
			     nullptr /* squared loss */,
			     r_ceres,
			     t_ceres);
  }
  
  t1 = chrono::steady_clock::now();
  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_SCHUR;
  options.minimizer_progress_to_stdout = true;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  std::cout << summary.BriefReport() << "\n";
  
  Mat r_ceres_cv=(Mat_<double>(3, 1) <<r_ceres[0], r_ceres[1], r_ceres[2]);
  Mat t_ceres_cv=(Mat_<double>(3, 1) <<t_ceres[0], t_ceres[1], t_ceres[2]);
  cv::Rodrigues(r_ceres_cv, R);
  cout << "R=" << endl << R << endl;
  cout << "t=" << endl << t_ceres_cv << endl;
  
  t2 = chrono::steady_clock::now();
  time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "solve pnp in ceres cost time: " << time_used.count() << " seconds." << endl<< endl;
  
  // solve PnP with Ceres, using the analytic Jacobian
  cout << "Ceres solution (analytic Jacobian):" << endl;
  
  Sophus::Vector6d se3;
  se3<<0,0,0,0,0,0;// 初始化非常重要
  
  ceres::Problem problem_;
  for(int i=0; i<pts_3d_eigen.size(); ++i) {
      ceres::CostFunction *cost_function;
      cost_function = new PnPSE3ReprojectionError(pts_2d_eigen[i], pts_3d_eigen[i]);
      problem_.AddResidualBlock(cost_function, NULL, se3.data());
      
  }
  
  t1 = chrono::steady_clock::now();
  ceres::Solver::Options options_;
  options_.dynamic_sparsity = true;
  options_.max_num_iterations = 100;
  options_.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
  options_.minimizer_type = ceres::TRUST_REGION;
  options_.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
  options_.trust_region_strategy_type = ceres::DOGLEG;
  options_.minimizer_progress_to_stdout = true;
  options_.dogleg_type = ceres::SUBSPACE_DOGLEG;

  ceres::Solver::Summary summary_;
  ceres::Solve(options_, &problem_, &summary_);
  std::cout << summary_.BriefReport() << "\n";
  
  std::cout << "estimated pose: \n" << Sophus::SE3d::exp(se3).matrix() << std::endl;
  t2 = chrono::steady_clock::now();
  time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "solve pnp in ceres cost time: " << time_used.count() << " seconds." << endl;
  
  return 0;
}

void find_feature_matches(const Mat &img_1, const Mat &img_2,
                          std::vector<KeyPoint> &keypoints_1,
                          std::vector<KeyPoint> &keypoints_2,
                          std::vector<DMatch> &matches) {
  //-- initialization
  Mat descriptors_1, descriptors_2;
  // used in OpenCV3
  Ptr<FeatureDetector> detector = ORB::create();
  Ptr<DescriptorExtractor> descriptor = ORB::create();
  // use this if you are in OpenCV2
  // Ptr<FeatureDetector> detector = FeatureDetector::create ( "ORB" );
  // Ptr<DescriptorExtractor> descriptor = DescriptorExtractor::create ( "ORB" );
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
  //-- Step 1: detect Oriented FAST corner locations
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);

  //-- Step 3: match the BRIEF descriptors between the two images, using the Hamming distance
  vector<DMatch> match;
  // BFMatcher matcher ( NORM_HAMMING );
  matcher->match(descriptors_1, descriptors_2, match);

  //-- Step 4: filter the matched pairs
  double min_dist = 10000, max_dist = 0;

  // find the minimum and maximum distances over all matches, i.e. the distances between the most similar and least similar pairs
  for (int i = 0; i < descriptors_1.rows; i++) {
    double dist = match[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
  }

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // when the descriptor distance is more than twice the minimum distance, treat the match as wrong; since the minimum distance can be very small, use an empirical floor of 30
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (match[i].distance <= max(2 * min_dist, 30.0)) {
      matches.push_back(match[i]);
    }
  }
}

Point2d pixel2cam(const Point2d &p, const Mat &K) {
  return Point2d
    (
      (p.x - K.at<double>(0, 2)) / K.at<double>(0, 0),
      (p.y - K.at<double>(1, 2)) / K.at<double>(1, 1)
    );
}

CMakeLists.txt

Same as above.

Execution results:

./10pnp1 1.png 2.png 1_depth.png 2_depth.png 
-- Max dist : 94.000000 
-- Min dist : 4.000000 
found 79 matched point pairs in total
3d-2d pairs: 75
solve pnp in opencv cost time: 0.000401286 seconds.
R=
[0.9979059095501266, -0.05091940089110201, 0.03988747043653948;
 0.04981866254253315, 0.9983623157438158, 0.02812094175376485;
 -0.04125404886078184, -0.0260749135288436, 0.998808391202765]
t=
[-0.1267821389557959;
 -0.008439496817520764;
 0.06034935748888207]

Ceres solution (automatic differentiation):
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  2.025888e+04    0.00e+00    6.83e+05   0.00e+00   0.00e+00  1.00e+04        0    4.41e-05    1.86e-04
   1  1.650410e+02    2.01e+04    1.47e+04   1.51e-01   9.99e-01  3.00e+04        1    9.20e-05    2.98e-04
   2  1.498819e+02    1.52e+01    4.80e+01   7.30e-03   1.00e+00  9.00e+04        1    4.70e-05    3.52e-04
Ceres Solver Report: Iterations: 3, Initial cost: 2.025888e+04, Final cost: 1.498819e+02, Termination: CONVERGENCE
R=
[0.9979061714739172, -0.05092444873781777, 0.03987447121929216;
 0.04982431246493198, 0.9983622113118902, 0.02811463874620321;
 -0.04124088774099816, -0.02606905340019193, 0.9988090876804998]
t=
[-0.1267625259978171;
 -0.008424980436889882;
 0.06034877257668213]
solve pnp in ceres cost time: 0.000438856 seconds.

Ceres solution (analytic Jacobian):
iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  2.025888e+04    0.00e+00    6.83e+05   0.00e+00   0.00e+00  1.00e+04        0    8.70e-05    1.09e-04
   1  2.052736e+02    2.01e+04    3.79e+04   1.52e-01   9.97e-01  1.00e+04        1    1.74e-04    2.92e-04
   2  1.500357e+02    5.52e+01    2.38e+03   8.62e-03   9.97e-01  1.00e+04        1    9.20e-05    3.91e-04
   3  1.498820e+02    1.54e-01    7.50e+01   2.41e-04   9.99e-01  1.00e+04        1    8.89e-05    4.86e-04
   4  1.498819e+02    1.63e-04    4.05e+00   8.96e-06   9.97e-01  1.00e+04        1    8.70e-05    5.79e-04
Ceres Solver Report: Iterations: 5, Initial cost: 2.025888e+04, Final cost: 1.498819e+02, Termination: CONVERGENCE
estimated pose: 
   0.997906  -0.0509194   0.0398875   -0.126782
  0.0498187    0.998362   0.0281209 -0.00843933
  -0.041254  -0.0260749    0.998808   0.0603495
          0           0           0           1
solve pnp in ceres cost time: 0.000701 seconds.

Copyright notice: This is an original post by the blogger, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original post: https://blog.csdn.net/luo870604851/article/details/82356394
