[PCL Self-Study: Recognition 1] 3D Object Recognition Based on Correspondence Grouping


1. Getting to Know the pcl_recognition Module

  This section explains how to perform 3D object recognition based on the pcl_recognition module. This module contains algorithms for object recognition: finding the instances of a given model that appear in a scene cloud, together with their poses. An explanation of all the classes and methods contained in the module can be found at [PCL documentation: the recognition module].
  Concretely, it shows how to use Correspondence Grouping algorithms to cluster the set of point-to-point correspondences obtained after the 3D descriptor matching stage into the model instances present in the current scene. For each cluster, representing a possible model instance in the scene, the correspondence grouping algorithm also outputs the transformation matrix giving the 6DOF pose estimate of that model in the current scene.
This tutorial covers two clustering methods:

Hough (default)
A clustering algorithm based on a 3D Hough voting scheme:
F. Tombari and L. Di Stefano: "Object recognition in 3D scenes with occlusions and clutter by Hough voting", 4th Pacific-Rim Symposium on Image and Video Technology, 2010.
GC
A geometric-consistency clustering algorithm that enforces simple geometric constraints between pairs of correspondences:
H. Chen and B. Bhanu: "3D free-form object recognition in range images using local surface patches", Pattern Recognition Letters, vol. 28, no. 10, pp. 1252-1262, 2007.

2. Example Code and Analysis for Correspondence-Grouping-Based Recognition

  Before starting, download the PCD datasets used in this chapter (milk.pcd and milk_cartoon_all_small_clorox.pcd) from GitHub and put the files in a test folder you have created.
  Then copy the code below into your editor and save it as correspondence_grouping.cpp (or download the source file).
  Read the comments in the code carefully to understand the whole program.

#include <pcl/io/pcd_io.h>								// PCD file input/output
#include <pcl/point_cloud.h>							// point cloud types
#include <pcl/correspondence.h>							// correspondences between points, descriptors, etc.
#include <pcl/features/normal_3d_omp.h>					// OMP-accelerated normal estimation
#include <pcl/features/shot_omp.h>						// SHOT (Signature of Histograms of OrienTations) descriptor estimation for clouds with points and normals
#include <pcl/features/board.h>							// BOARD local reference frame estimation
#include <pcl/filters/uniform_sampling.h>				// uniform sampling
#include <pcl/recognition/cg/hough_3d.h>				// 3D Hough voting correspondence grouping
#include <pcl/recognition/cg/geometric_consistency.h>	// geometric-consistency correspondence grouping
#include <pcl/visualization/pcl_visualizer.h>			// visualization
#include <pcl/kdtree/kdtree_flann.h>					// FLANN kd-tree nearest-neighbour search
#include <pcl/kdtree/impl/kdtree_flann.hpp>
#include <pcl/common/transforms.h>						// point cloud transformations
#include <pcl/console/parse.h>							// command-line parsing

typedef pcl::PointXYZRGBA PointType; 					// point type: XYZRGBA
typedef pcl::Normal NormalType;							// normal type
typedef pcl::ReferenceFrame RFType;						// local reference frame type
typedef pcl::SHOT352 DescriptorType;					// descriptor type: 352-dimensional SHOT

std::string model_filename_;							// model filename, i.e. milk.pcd
std::string scene_filename_;							// scene filename, i.e. milk_cartoon_all_small_clorox.pcd

// Algorithm parameters and their defaults
bool show_keypoints_ (false);							// do not show keypoints
bool show_correspondences_ (false);						// do not show correspondence lines
bool use_cloud_resolution_ (false);						// do not scale radii by the cloud resolution
bool use_hough_ (true);									// use Hough voting
float model_ss_ (0.01f);								// model uniform sampling radius (default 0.01)
float scene_ss_ (0.03f);								// scene uniform sampling radius (default 0.03)
float rf_rad_ (0.015f);									// reference frame radius (default 0.015)
float descr_rad_ (0.02f);								// descriptor radius (default 0.02)
float cg_size_ (0.01f);									// cluster (Hough bin) size (default 0.01)
float cg_thresh_ (5.0f);								// clustering threshold (default 5)
void
 showHelp (char *filename)
 {
   std::cout << std::endl;
   std::cout << "***************************************************************************" << std::endl;
   std::cout << "*                                                                         *" << std::endl;
   std::cout << "*             Correspondence Grouping Tutorial - Usage Guide              *" << std::endl;
   std::cout << "*                                                                         *" << std::endl;
   std::cout << "***************************************************************************" << std::endl << std::endl;
   std::cout << "Usage: " << filename << " model_filename.pcd scene_filename.pcd [Options]" << std::endl << std::endl;
   std::cout << "Options:" << std::endl;
   std::cout << "     -h:                     Show this help." << std::endl;
   std::cout << "     -k:                     Show used keypoints." << std::endl;
   std::cout << "     -c:                     Show used correspondences." << std::endl;
   std::cout << "     -r:                     Compute the model cloud resolution and multiply" << std::endl;
   std::cout << "                             each radius given by that value." << std::endl;
   std::cout << "     --algorithm (Hough|GC): Clustering algorithm used (default Hough)." << std::endl;
   std::cout << "     --model_ss val:         Model uniform sampling radius (default 0.01)" << std::endl;
   std::cout << "     --scene_ss val:         Scene uniform sampling radius (default 0.03)" << std::endl;
   std::cout << "     --rf_rad val:           Reference frame radius (default 0.015)" << std::endl;
   std::cout << "     --descr_rad val:        Descriptor radius (default 0.02)" << std::endl;
   std::cout << "     --cg_size val:          Cluster size (default 0.01)" << std::endl;
   std::cout << "     --cg_thresh val:        Clustering threshold (default 5)" << std::endl << std::endl;
 }
 
 void
 parseCommandLine (int argc, char *argv[]) // parse command-line arguments
 {
   //Show help
   if (pcl::console::find_switch (argc, argv, "-h"))
   {
     showHelp (argv[0]);
     exit (0);
   }
 
   //Model & scene filenames
   std::vector<int> filenames;
   filenames = pcl::console::parse_file_extension_argument (argc, argv, ".pcd");
   if (filenames.size () != 2)
   {
     std::cout << "Filenames missing.\n";
     showHelp (argv[0]);
     exit (-1);
   }
 
   model_filename_ = argv[filenames[0]];
   scene_filename_ = argv[filenames[1]];
 
   //-k shows the keypoints used to compute the correspondences, overlaid in the PCL visualizer (the blue points in the figures at the end).
   //-c draws a line connecting each clustered model-scene correspondence pair (the green lines at the end).
   //-r estimates the spatial resolution of the model cloud and then treats the radius parameters as if they were given in units of that resolution; this gives some resolution invariance when running this example with the same command line on different clouds.
   if (pcl::console::find_switch (argc, argv, "-k"))
   {
    show_keypoints_ = true;
   }
   if (pcl::console::find_switch (argc, argv, "-c"))
   {
     show_correspondences_ = true;
   }
   if (pcl::console::find_switch (argc, argv, "-r"))
   {
     use_cloud_resolution_ = true;
   }
 
   std::string used_algorithm;
   
// The clustering algorithm can be chosen with the command-line switch --algorithm (Hough|GC)
   if (pcl::console::parse_argument (argc, argv, "--algorithm", used_algorithm) != -1)
  {
  // Hough clustering; see the paper "Object recognition in 3D scenes with occlusions and clutter by Hough voting."
    if (used_algorithm.compare ("Hough") == 0)
    {
      use_hough_ = true;
    }
  // GC clustering: a geometric-consistency algorithm enforcing simple geometric constraints between correspondences; see the paper "3D free-form object recognition in range images using local surface patches."
    else if (used_algorithm.compare ("GC") == 0)
    {
      use_hough_ = false;
    }
    else
    {
      std::cout << "Wrong algorithm name.\n";
      showHelp (argv[0]);
      exit (-1);
    }
  }

  // Parse the remaining common parameters
  pcl::console::parse_argument (argc, argv, "--model_ss", model_ss_);
  pcl::console::parse_argument (argc, argv, "--scene_ss", scene_ss_);
  pcl::console::parse_argument (argc, argv, "--rf_rad", rf_rad_);
  pcl::console::parse_argument (argc, argv, "--descr_rad", descr_rad_);
  pcl::console::parse_argument (argc, argv, "--cg_size", cg_size_);
  pcl::console::parse_argument (argc, argv, "--cg_thresh", cg_thresh_);
}

// Compute the cloud resolution:
// the average distance between each point and its nearest neighbour.
double
computeCloudResolution (const pcl::PointCloud<PointType>::ConstPtr &cloud)
{
  double res = 0.0;
  int n_points = 0;
  int nres;
  std::vector<int> indices (2); // neighbour indices
  std::vector<float> sqr_distances (2); // squared neighbour distances
  pcl::search::KdTree<PointType> tree;
  tree.setInputCloud (cloud);

  for (std::size_t i = 0; i < cloud->size (); ++i)
  {
    if (! std::isfinite ((*cloud)[i].x))
    {
      continue;
    }
    // Take the second neighbour, since the first one is the point itself.
    nres = tree.nearestKSearch (i, 2, indices, sqr_distances);
    if (nres == 2)
    {
      res += sqrt (sqr_distances[1]);
      ++n_points;
    }
  }
  if (n_points != 0)
  {
    res /= n_points;
  }
  return res;
}

// Main program
int
main (int argc, char *argv[])
{
  parseCommandLine (argc, argv);

  pcl::PointCloud<PointType>::Ptr model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr model_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<NormalType>::Ptr model_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<NormalType>::Ptr scene_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<DescriptorType>::Ptr model_descriptors (new pcl::PointCloud<DescriptorType> ());
  pcl::PointCloud<DescriptorType>::Ptr scene_descriptors (new pcl::PointCloud<DescriptorType> ());

  //
  //  Load the model cloud
  //
  if (pcl::io::loadPCDFile (model_filename_, *model) < 0)
  {
    std::cout << "Error loading model cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }
  // Load the scene cloud
  if (pcl::io::loadPCDFile (scene_filename_, *scene) < 0)
  {
    std::cout << "Error loading scene cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }

  //
  //  Set up resolution invariance. Only if the resolution-invariance flag was enabled on the command line does the
  //  program adjust the radii used in the following sections, multiplying them by the estimated model cloud resolution.
  //
  if (use_cloud_resolution_)
  {
    float resolution = static_cast<float> (computeCloudResolution (model));
    if (resolution != 0.0f)
    {
      model_ss_   *= resolution; // scale all radius parameters by the resolution
      scene_ss_   *= resolution;
      rf_rad_     *= resolution;
      descr_rad_  *= resolution;
      cg_size_    *= resolution;
    }

    std::cout << "Model resolution:       " << resolution << std::endl;
    std::cout << "Model sampling size:    " << model_ss_ << std::endl;
    std::cout << "Scene sampling size:    " << scene_ss_ << std::endl;
    std::cout << "LRF support radius:     " << rf_rad_ << std::endl;
    std::cout << "SHOT descriptor radius: " << descr_rad_ << std::endl;
    std::cout << "Clustering bin size:    " << cg_size_ << std::endl << std::endl;
  }

  //
  //  Compute normals
  //
  pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
  norm_est.setKSearch (10);
  norm_est.setInputCloud (model);
  norm_est.compute (*model_normals);

  norm_est.setInputCloud (scene);
  norm_est.compute (*scene_normals);

  //
  //  Downsample the clouds to extract keypoints: a small set of points that will each be associated with a 3D
  //  descriptor, used to perform keypoint matching and determine point-to-point correspondences. The radius used by
  //  UniformSampling is either the one set with the command-line switches or the default.
  //

  pcl::UniformSampling<PointType> uniform_sampling; // uniform sampling
  uniform_sampling.setInputCloud (model);
  uniform_sampling.setRadiusSearch (model_ss_); // downsampling radius
  uniform_sampling.filter (*model_keypoints);
  std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;

  uniform_sampling.setInputCloud (scene);
  uniform_sampling.setRadiusSearch (scene_ss_);
  uniform_sampling.filter (*scene_keypoints);
  std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;


  //
  //  Compute the SHOT descriptor for each keypoint
  //
  pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
  descr_est.setRadiusSearch (descr_rad_);

  descr_est.setInputCloud (model_keypoints);
  descr_est.setInputNormals (model_normals);
  descr_est.setSearchSurface (model);
  descr_est.compute (*model_descriptors);

  descr_est.setInputCloud (scene_keypoints);
  descr_est.setInputNormals (scene_normals);
  descr_est.setSearchSurface (scene);
  descr_est.compute (*scene_descriptors);

  //
  //  Find model-scene correspondences with a KdTree.
  //  A 3D descriptor (computed with SHOTEstimationOMP) is associated with every model and scene keypoint.
  //  We now need to determine point-to-point correspondences between model descriptors and scene descriptors.
  //  The program uses a KdTreeFLANN whose input cloud is the cloud of model descriptors. For each descriptor
  //  associated with a scene keypoint, it efficiently finds the most similar model descriptor by Euclidean
  //  distance, and keeps the pair only if the two descriptors are similar enough, i.e. their squared distance
  //  is below a threshold, set here to 0.25.
  //
  pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());

  pcl::KdTreeFLANN<DescriptorType> match_search;
  match_search.setInputCloud (model_descriptors);

  // For each scene keypoint descriptor, find the nearest neighbour in the model keypoint descriptor cloud and add it to the correspondence vector.
  for (std::size_t i = 0; i < scene_descriptors->size (); ++i)
  {
    std::vector<int> neigh_indices (1);
    std::vector<float> neigh_sqr_dists (1);
    if (!std::isfinite (scene_descriptors->at (i).descriptor[0])) //skipping NaNs
    {
      continue;
    }
    int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
    // Add a match only when the squared descriptor distance is below 0.25 (SHOT descriptor distances are designed to lie between 0 and 1)
    if(found_neighs == 1 && neigh_sqr_dists[0] < 0.25f) 
    {
      pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
      model_scene_corrs->push_back (corr);
    }
  }
  // Report how many correspondences were found
  std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;

  //
  //  Actual clustering
  //  Cluster the previously found correspondences between keypoints. The default algorithm is Hough3DGrouping,
  //  which is based on a Hough voting process. Note that this algorithm needs a local reference frame (LRF)
  //  associated with the keypoint clouds passed as arguments! Here we explicitly compute the LRF sets with the
  //  BOARDLocalReferenceFrameEstimation estimator before invoking the clustering algorithm.
  //
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;
  std::vector<pcl::Correspondences> clustered_corrs;

  //  Use Hough3D
  if (use_hough_)
  {
    //
    //  Compute the (keypoint) reference frames, needed only for Hough
    //
    pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
    pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());

    pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
    rf_est.setFindHoles (true);
    rf_est.setRadiusSearch (rf_rad_);

    rf_est.setInputCloud (model_keypoints);
    rf_est.setInputNormals (model_normals);
    rf_est.setSearchSurface (model);
    rf_est.compute (*model_rf);

    rf_est.setInputCloud (scene_keypoints);
    rf_est.setInputNormals (scene_normals);
    rf_est.setSearchSurface (scene);
    rf_est.compute (*scene_rf);

    //  Clustering
    pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
    clusterer.setHoughBinSize (cg_size_);
    clusterer.setHoughThreshold (cg_thresh_);
    clusterer.setUseInterpolation (true);
    clusterer.setUseDistanceWeight (false);

    clusterer.setInputCloud (model_keypoints);
    clusterer.setInputRf (model_rf);
    clusterer.setSceneCloud (scene_keypoints);
    clusterer.setSceneRf (scene_rf);
    clusterer.setModelSceneCorrespondences (model_scene_corrs);

    //clusterer.cluster (clustered_corrs);
    clusterer.recognize (rototranslations, clustered_corrs);
  }
  else // Instead of Hough3DGrouping,
  // the GeometricConsistencyGrouping algorithm can be chosen with the command-line switch described earlier. In this case no LRF computation is needed: simply create an instance of the algorithm class, set the parameters, and call the recognize method.
  {
    pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;
    gc_clusterer.setGCSize (cg_size_);
    gc_clusterer.setGCThreshold (cg_thresh_);

    gc_clusterer.setInputCloud (model_keypoints);
    gc_clusterer.setSceneCloud (scene_keypoints);
    gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);

    //gc_clusterer.cluster (clustered_corrs);
    gc_clusterer.recognize (rototranslations, clustered_corrs);
  }

  //
  //  Output results
  //
  std::cout << "Model instances found: " << rototranslations.size () << std::endl;
  for (std::size_t i = 0; i < rototranslations.size (); ++i)
  {
    std::cout << "\n    Instance " << i + 1 << ":" << std::endl;
    std::cout << "        Correspondences belonging to this instance: " << clustered_corrs[i].size () << std::endl;

    // Print the rotation matrix and translation vector of each recognized instance
    Eigen::Matrix3f rotation = rototranslations[i].block<3,3>(0, 0);
    Eigen::Vector3f translation = rototranslations[i].block<3,1>(0, 3);

    printf ("\n");
    printf ("            | %6.3f %6.3f %6.3f | \n", rotation (0,0), rotation (0,1), rotation (0,2));
    printf ("        R = | %6.3f %6.3f %6.3f | \n", rotation (1,0), rotation (1,1), rotation (1,2));
    printf ("            | %6.3f %6.3f %6.3f | \n", rotation (2,0), rotation (2,1), rotation (2,2));
    printf ("\n");
    printf ("        t = < %0.3f, %0.3f, %0.3f >\n", translation (0), translation (1), translation (2));
  }

  //
  //  Visualization
  //  Finally, the program shows the scene cloud in a PCLVisualizer window, with the recognized model instances
  //  overlaid in red. If the command-line switches -k and -c were used, it also shows a separate rendering of the
  //  model cloud. With keypoint visualization enabled, keypoints appear as blue dots; with correspondence
  //  visualization enabled, green lines connect the corresponding points produced by the clustering process.
  //
  pcl::visualization::PCLVisualizer viewer ("Correspondence Grouping");
  viewer.addPointCloud (scene, "scene_cloud");

  pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());

  if (show_correspondences_ || show_keypoints_)
  {
    //  We are translating the model so that it doesn't end in the middle of the scene representation
    pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));
    pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));

    pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, 255, 255, 128);
    viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
  }

  if (show_keypoints_)
  {
    pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_keypoints_color_handler (scene_keypoints, 0, 0, 255);
    viewer.addPointCloud (scene_keypoints, scene_keypoints_color_handler, "scene_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "scene_keypoints");

    pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_keypoints_color_handler (off_scene_model_keypoints, 0, 0, 255);
    viewer.addPointCloud (off_scene_model_keypoints, off_scene_model_keypoints_color_handler, "off_scene_model_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "off_scene_model_keypoints");
  }

  for (std::size_t i = 0; i < rototranslations.size (); ++i)
  {
    pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
    pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);

    std::stringstream ss_cloud;
    ss_cloud << "instance" << i;

    pcl::visualization::PointCloudColorHandlerCustom<PointType> rotated_model_color_handler (rotated_model, 255, 0, 0);
    viewer.addPointCloud (rotated_model, rotated_model_color_handler, ss_cloud.str ());

    if (show_correspondences_)
    {
      for (std::size_t j = 0; j < clustered_corrs[i].size (); ++j)
      {
        std::stringstream ss_line;
        ss_line << "correspondence_line" << i << "_" << j;
        PointType& model_point = off_scene_model_keypoints->at (clustered_corrs[i][j].index_query);
        PointType& scene_point = scene_keypoints->at (clustered_corrs[i][j].index_match);

        //  Draw a line for each pair of clustered correspondences found between the model and the scene
        viewer.addLine<PointType, PointType> (model_point, scene_point, 0, 255, 0, ss_line.str ());
      }
    }
  }

  while (!viewer.wasStopped ())
  {
    viewer.spinOnce ();
  }

  return (0);
}

Note: it is not strictly necessary to compute the LRFs explicitly before invoking the clustering algorithm. If the clouds fed to the clustering algorithm have no associated set of LRFs, Hough3DGrouping computes them automatically before performing clustering. In particular, this happens when the recognize (or cluster) method is called without setting the LRFs: in that case the LRF radius has to be specified as an additional parameter of the clustering algorithm (via the setLocalRfSearchRadius method).

  Program output:

[figure: recognition result]
  With -k, the blue points are the computed keypoints:

[figure: keypoints shown in blue]
  With -c, the green lines connect the clustered pairs of corresponding keypoints:

[figure: correspondence lines shown in green]
  With both -k and -c:

[figure: keypoints and correspondence lines]

3. Hypothesis Verification for 3D Object Recognition and Code Analysis

  This section explains how to achieve 3D object recognition in cluttered and heavily occluded scenes by verifying model hypotheses. After descriptor matching, the program runs one of the correspondence grouping algorithms available in PCL to cluster the set of point-to-point correspondences and determine instances of candidate objects in the scene. On these hypotheses, a global hypothesis verification algorithm is then applied to reduce the number of false positives.
  More details on the global hypothesis verification method can be found in:
A. Aldoma, F. Tombari, L. Di Stefano, M. Vincze, "A global hypothesis verification method for 3D object recognition", ECCV 2012.
  For more information on 3D object recognition in clutter and on standard feature-based recognition pipelines, it is recommended to first read: A. Aldoma, Z.C. Marton, F. Tombari, W. Wohlkinger, C. Potthast, B. Zeisl, R.B. Rusu, S. Gedikli, M. Vincze, "Point Cloud Library: Three-Dimensional Object Recognition and 6 DOF Pose Estimation", IEEE Robotics and Automation Magazine, 2012.
  Before starting, download the example PCD clouds (milk.pcd and milk_cartoon_all_small_clorox.pcd) from GitHub and put the files in the source folder.
  The code follows; note the comments:

 /*
  * Software License Agreement (BSD License)
  *
  *  Point Cloud Library (PCL) - www.pointclouds.org
  *  Copyright (c) 2014-, Open Perception, Inc.
  *
  *  All rights reserved.
  *
  *  Redistribution and use in source and binary forms, with or without
  *  modification, are permitted provided that the following conditions are met:
  *
  *   * Redistributions of source code must retain the above copyright
  *     notice, this list of conditions and the following disclaimer.
  *   * Redistributions in binary form must reproduce the above copyright
  *     notice, this list of conditions and the following disclaimer in the
  *     documentation and/or other materials provided with the distribution.
  *   * Neither the name of the copyright holder(s) nor the names of its
  *     contributors may be used to endorse or promote products derived from
  *     this software without specific prior written permission.
  *
  *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
  *  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
  *  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
  *  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  *  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  *  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
  *  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
  *  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
  *  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
  *  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  *  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  *
  */
 
 #include <pcl/io/pcd_io.h>
 #include <pcl/point_cloud.h>
 #include <pcl/correspondence.h>
 #include <pcl/features/normal_3d_omp.h>
 #include <pcl/features/shot_omp.h>
 #include <pcl/features/board.h>
 #include <pcl/filters/uniform_sampling.h>
 #include <pcl/recognition/cg/hough_3d.h>
 #include <pcl/recognition/cg/geometric_consistency.h>
 #include <pcl/recognition/hv/hv_go.h>
 #include <pcl/registration/icp.h>
 #include <pcl/visualization/pcl_visualizer.h>
 #include <pcl/kdtree/kdtree_flann.h>
 #include <pcl/kdtree/impl/kdtree_flann.hpp>
 #include <pcl/common/transforms.h> 
 #include <pcl/console/parse.h>
 
 typedef pcl::PointXYZRGBA PointType;
 typedef pcl::Normal NormalType;
 typedef pcl::ReferenceFrame RFType;
 typedef pcl::SHOT352 DescriptorType;
 
 struct CloudStyle  // rendering style of a cloud (used for visualization)
 {
     double r;
     double g;
     double b;
     double size;
 
     CloudStyle (double r,
                 double g,
                 double b,
                 double size) :
         r (r),
         g (g),
         b (b),
         size (size)
     {
     }
 };
 
 CloudStyle style_white (255.0, 255.0, 255.0, 4.0);
 CloudStyle style_red (255.0, 0.0, 0.0, 3.0);
 CloudStyle style_green (0.0, 255.0, 0.0, 5.0);
 CloudStyle style_cyan (93.0, 200.0, 217.0, 4.0);
 CloudStyle style_violet (255.0, 0.0, 255.0, 8.0);
 
 std::string model_filename_;  	// model cloud filename
 std::string scene_filename_;	// scene cloud filename
 
 // Algorithm parameters
 bool show_keypoints_ (false);	// show keypoints
 bool use_hough_ (true);		// use Hough (true) or GC (false) clustering
 float model_ss_ (0.02f);		// model downsampling radius
 float scene_ss_ (0.02f);		// scene downsampling radius
 float rf_rad_ (0.015f);		// reference frame radius
 float descr_rad_ (0.02f);		// descriptor radius
 float cg_size_ (0.01f);		// cluster size
 float cg_thresh_ (5.0f);		// clustering threshold
 int icp_max_iter_ (5);			// maximum number of ICP iterations
 float icp_corr_distance_ (0.005f); // ICP correspondence distance threshold
 // Hypothesis verification parameters (used to verify whether a detected cloud really is the target object);
 // their exact meanings are given in the paper "A global hypothesis verification method for 3D object recognition"
 // (a walkthrough of that paper may follow in a later post)
 float hv_resolution_ (0.005f);
 float hv_occupancy_grid_resolution_ (0.01f);
 float hv_clutter_reg_ (5.0f);		// clutter regularizer
 float hv_inlier_th_ (0.005f);		// inlier threshold
 float hv_occlusion_th_ (0.01f);	// occlusion threshold
 float hv_rad_clutter_ (0.03f);		// clutter radius
 float hv_regularizer_ (3.0f);		// regularizer value
 float hv_rad_normals_ (0.05);		// normals radius
 bool hv_detect_clutter_ (true);	// enable clutter detection

/**
 * Print the help message
 */
void
showHelp (char *filename)
{
  std::cout << std::endl;
  std::cout << "***************************************************************************" << std::endl;
  std::cout << "*                                                                         *" << std::endl;
  std::cout << "*          Global Hypotheses Verification Tutorial - Usage Guide         *" << std::endl;
  std::cout << "*                                                                         *" << std::endl;
  std::cout << "***************************************************************************" << std::endl << std::endl;
  std::cout << "Usage: " << filename << " model_filename.pcd scene_filename.pcd [Options]" << std::endl << std::endl;
  std::cout << "Options:" << std::endl;
  std::cout << "     -h:                          Show this help." << std::endl;
  std::cout << "     -k:                          Show keypoints." << std::endl;
  std::cout << "     --algorithm (Hough|GC):      Clustering algorithm used (default Hough)." << std::endl;
  std::cout << "     --model_ss val:              Model uniform sampling radius (default " << model_ss_ << ")" << std::endl;
  std::cout << "     --scene_ss val:              Scene uniform sampling radius (default " << scene_ss_ << ")" << std::endl;
  std::cout << "     --rf_rad val:                Reference frame radius (default " << rf_rad_ << ")" << std::endl;
  std::cout << "     --descr_rad val:             Descriptor radius (default " << descr_rad_ << ")" << std::endl;
  std::cout << "     --cg_size val:               Cluster size (default " << cg_size_ << ")" << std::endl;
  std::cout << "     --cg_thresh val:             Clustering threshold (default " << cg_thresh_ << ")" << std::endl << std::endl;
  std::cout << "     --icp_max_iter val:          ICP max iterations number (default " << icp_max_iter_ << ")" << std::endl;
  std::cout << "     --icp_corr_distance val:     ICP correspondence distance (default " << icp_corr_distance_ << ")" << std::endl << std::endl;
  std::cout << "     --hv_clutter_reg val:        Clutter Regularizer (default " << hv_clutter_reg_ << ")" << std::endl;
  std::cout << "     --hv_inlier_th val:          Inlier threshold (default " << hv_inlier_th_ << ")" << std::endl;
  std::cout << "     --hv_occlusion_th val:       Occlusion threshold (default " << hv_occlusion_th_ << ")" << std::endl;
  std::cout << "     --hv_rad_clutter val:        Clutter radius (default " << hv_rad_clutter_ << ")" << std::endl;
  std::cout << "     --hv_regularizer val:        Regularizer value (default " << hv_regularizer_ << ")" << std::endl;
  std::cout << "     --hv_rad_normals val:        Normals radius (default " << hv_rad_normals_ << ")" << std::endl;
  std::cout << "     --hv_detect_clutter val:     TRUE if clutter detect enabled (default " << hv_detect_clutter_ << ")" << std::endl << std::endl;
}

/**
 * Parse the command line
 * @param argc
 * @param argv
 */
void
parseCommandLine (int argc,char *argv[])
{
  // Show help
  if (pcl::console::find_switch (argc, argv, "-h"))
  {
    showHelp (argv[0]);
    exit (0);
  }

  //Model & scene filenames
  std::vector<int> filenames;
  filenames = pcl::console::parse_file_extension_argument (argc, argv, ".pcd");
  if (filenames.size () != 2)
  {
    std::cout << "Filenames missing.\n";    
    showHelp (argv[0]);
    exit (-1);
  }

  model_filename_ = argv[filenames[0]];
  scene_filename_ = argv[filenames[1]];

  if (pcl::console::find_switch (argc, argv, "-k"))
  {
    show_keypoints_ = true;
  }

// Choose which clustering method to use: Hough or GC
  std::string used_algorithm;
  if (pcl::console::parse_argument (argc, argv, "--algorithm", used_algorithm) != -1)
  {
    if (used_algorithm.compare ("Hough") == 0)
    {
      use_hough_ = true;
    }
    else if (used_algorithm.compare ("GC") == 0)
    {
      use_hough_ = false;
    }
    else
    {
      std::cout << "Wrong algorithm name.\n";
      showHelp (argv[0]);
      exit (-1);
    }
  }

  //  User-set algorithm parameters
  pcl::console::parse_argument (argc, argv, "--model_ss", model_ss_);
  pcl::console::parse_argument (argc, argv, "--scene_ss", scene_ss_);
  pcl::console::parse_argument (argc, argv, "--rf_rad", rf_rad_);
  pcl::console::parse_argument (argc, argv, "--descr_rad", descr_rad_);
  pcl::console::parse_argument (argc, argv, "--cg_size", cg_size_);
  pcl::console::parse_argument (argc, argv, "--cg_thresh", cg_thresh_);
  pcl::console::parse_argument (argc, argv, "--icp_max_iter", icp_max_iter_);
  pcl::console::parse_argument (argc, argv, "--icp_corr_distance", icp_corr_distance_);
  pcl::console::parse_argument (argc, argv, "--hv_clutter_reg", hv_clutter_reg_);
  pcl::console::parse_argument (argc, argv, "--hv_inlier_th", hv_inlier_th_);
  pcl::console::parse_argument (argc, argv, "--hv_occlusion_th", hv_occlusion_th_);
  pcl::console::parse_argument (argc, argv, "--hv_rad_clutter", hv_rad_clutter_);
  pcl::console::parse_argument (argc, argv, "--hv_regularizer", hv_regularizer_);
  pcl::console::parse_argument (argc, argv, "--hv_rad_normals", hv_rad_normals_);
  pcl::console::parse_argument (argc, argv, "--hv_detect_clutter", hv_detect_clutter_);
}

// Program entry point
int
main (int argc,
      char *argv[])
{
  parseCommandLine (argc, argv);
  // Create point cloud pointers
  pcl::PointCloud<PointType>::Ptr model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr model_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_keypoints (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<NormalType>::Ptr model_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<NormalType>::Ptr scene_normals (new pcl::PointCloud<NormalType> ());
  pcl::PointCloud<DescriptorType>::Ptr model_descriptors (new pcl::PointCloud<DescriptorType> ());
  pcl::PointCloud<DescriptorType>::Ptr scene_descriptors (new pcl::PointCloud<DescriptorType> ());

  /**
   * Load the cloud files
   */
  if (pcl::io::loadPCDFile (model_filename_, *model) < 0)
  {
    std::cout << "Error loading model cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }
  if (pcl::io::loadPCDFile (scene_filename_, *scene) < 0)
  {
    std::cout << "Error loading scene cloud." << std::endl;
    showHelp (argv[0]);
    return (-1);
  }

  /**
   * Compute normals
   */
  pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
  norm_est.setKSearch (10);
  norm_est.setInputCloud (model);
  norm_est.compute (*model_normals);

  norm_est.setInputCloud (scene);
  norm_est.compute (*scene_normals);

  /**
   *  Downsample the clouds to extract keypoints
   */
  pcl::UniformSampling<PointType> uniform_sampling;
  uniform_sampling.setInputCloud (model);
  uniform_sampling.setRadiusSearch (model_ss_);
  uniform_sampling.filter (*model_keypoints);
  std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;

  uniform_sampling.setInputCloud (scene);
  uniform_sampling.setRadiusSearch (scene_ss_);
  uniform_sampling.filter (*scene_keypoints);
  std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;

  /**
   *  Compute the SHOT descriptors of the keypoints
   */
  pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
  descr_est.setRadiusSearch (descr_rad_);

  descr_est.setInputCloud (model_keypoints);
  descr_est.setInputNormals (model_normals);
  descr_est.setSearchSurface (model);
  descr_est.compute (*model_descriptors);

  descr_est.setInputCloud (scene_keypoints);
  descr_est.setInputNormals (scene_normals);
  descr_est.setSearchSurface (scene);
  descr_est.compute (*scene_descriptors);

  /**
   *  Find model-scene correspondences with a KdTree
   */
  pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());
  pcl::KdTreeFLANN<DescriptorType> match_search;
  match_search.setInputCloud (model_descriptors); // put the model descriptors into the kd-tree
  std::vector<int> model_good_keypoints_indices; // indices of matched model keypoints
  std::vector<int> scene_good_keypoints_indices; // indices of matched scene keypoints

  for (std::size_t i = 0; i < scene_descriptors->size (); ++i)
  {
    std::vector<int> neigh_indices (1);
    std::vector<float> neigh_sqr_dists (1);
    if (!std::isfinite (scene_descriptors->at (i).descriptor[0]))  // skip NaNs
    {
      continue;
    }
    int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
   if (found_neighs == 1 && neigh_sqr_dists[0] < 0.25f)
   {
      pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
      model_scene_corrs->push_back (corr);
      model_good_keypoints_indices.push_back (corr.index_query);
      scene_good_keypoints_indices.push_back (corr.index_match);
    }
  }
  pcl::PointCloud<PointType>::Ptr model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr scene_good_kp (new pcl::PointCloud<PointType> ());
  pcl::copyPointCloud (*model_keypoints, model_good_keypoints_indices, *model_good_kp);
  pcl::copyPointCloud (*scene_keypoints, scene_good_keypoints_indices, *scene_good_kp);

  std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;

  /**
   *  Cluster the keypoint correspondences
   */
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;
  std::vector < pcl::Correspondences > clustered_corrs;

  if (use_hough_) // Hough clustering and its parameters
  {
    pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
    pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());

    pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
    rf_est.setFindHoles (true);
    rf_est.setRadiusSearch (rf_rad_);

    rf_est.setInputCloud (model_keypoints);
    rf_est.setInputNormals (model_normals);
    rf_est.setSearchSurface (model);
    rf_est.compute (*model_rf);

    rf_est.setInputCloud (scene_keypoints);
    rf_est.setInputNormals (scene_normals);
    rf_est.setSearchSurface (scene);
    rf_est.compute (*scene_rf);

    //  Start clustering
    pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
    clusterer.setHoughBinSize (cg_size_);
    clusterer.setHoughThreshold (cg_thresh_);
    clusterer.setUseInterpolation (true);
    clusterer.setUseDistanceWeight (false);

    clusterer.setInputCloud (model_keypoints);
    clusterer.setInputRf (model_rf);
    clusterer.setSceneCloud (scene_keypoints);
    clusterer.setSceneRf (scene_rf);
    clusterer.setModelSceneCorrespondences (model_scene_corrs);

    clusterer.recognize (rototranslations, clustered_corrs);
  }
  else // otherwise cluster with the geometric consistency (GC) method and set its parameters
  {
    pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;
    gc_clusterer.setGCSize (cg_size_);
    gc_clusterer.setGCThreshold (cg_thresh_);

    gc_clusterer.setInputCloud (model_keypoints);
    gc_clusterer.setSceneCloud (scene_keypoints);
    gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);

    gc_clusterer.recognize (rototranslations, clustered_corrs);
  }

  /**
   * Stop if no model instances were found
   */
  if (rototranslations.empty ())
  {
    std::cout << "*** No instances found! ***" << std::endl;
    return (0);
  }
  else
  {
    std::cout << "Recognized Instances: " << rototranslations.size () << std::endl << std::endl;
  }

  /**
   * Generate a point cloud for each recognized instance
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> instances;

  for (std::size_t i = 0; i < rototranslations.size (); ++i)
  {
    pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
    pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);
    instances.push_back (rotated_model);
  }

  /**
   * Refine the pose of each instance with ICP
   */
  std::vector<pcl::PointCloud<PointType>::ConstPtr> registered_instances;
  if (true)
  {
    std::cout << "--- ICP ---------" << std::endl;

    for (std::size_t i = 0; i < rototranslations.size (); ++i)
    {
      pcl::IterativeClosestPoint<PointType, PointType> icp;
      icp.setMaximumIterations (icp_max_iter_);
      icp.setMaxCorrespondenceDistance (icp_corr_distance_);
      icp.setInputTarget (scene);
      icp.setInputSource (instances[i]);
      pcl::PointCloud<PointType>::Ptr registered (new pcl::PointCloud<PointType>);
      icp.align (*registered);
      registered_instances.push_back (registered);
      std::cout << "Instance " << i << " ";
      if (icp.hasConverged ())
      {
        std::cout << "Aligned!" << std::endl;
      }
      else
      {
        std::cout << "Not Aligned!" << std::endl;
      }
    }

    std::cout << "-----------------" << std::endl << std::endl;
  }

  /**
   * Hypothesis verification
   */
  std::cout << "--- Hypotheses Verification ---" << std::endl;
  std::vector<bool> hypotheses_mask;  // Mask Vector to identify positive hypotheses

  pcl::GlobalHypothesesVerification<PointType, PointType> GoHv;

  GoHv.setSceneCloud (scene);  // Scene Cloud
  GoHv.addModels (registered_instances, true);  //Models to verify
  GoHv.setResolution (hv_resolution_);
  GoHv.setResolutionOccupancyGrid (hv_occupancy_grid_resolution_);
  GoHv.setInlierThreshold (hv_inlier_th_);
  GoHv.setOcclusionThreshold (hv_occlusion_th_);
  GoHv.setRegularizer (hv_regularizer_);
  GoHv.setRadiusClutter (hv_rad_clutter_);
  GoHv.setClutterRegularizer (hv_clutter_reg_);
  GoHv.setDetectClutter (hv_detect_clutter_);
  GoHv.setRadiusNormals (hv_rad_normals_);

  GoHv.verify ();
  GoHv.getMask (hypotheses_mask);  // i-element TRUE if hvModels[i] verifies hypotheses

  for (std::size_t i = 0; i < hypotheses_mask.size (); i++)
  {
    if (hypotheses_mask[i])
    {
      std::cout << "Instance " << i << " is GOOD! <---" << std::endl;
    }
    else
    {
      std::cout << "Instance " << i << " is bad!" << std::endl;
    }
  }
  std::cout << "-------------------------------" << std::endl;

  /**
   * Visualization
   */
  pcl::visualization::PCLVisualizer viewer ("Hypotheses Verification");
  viewer.addPointCloud (scene, "scene_cloud");

  pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
  pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());

  pcl::PointCloud<PointType>::Ptr off_model_good_kp (new pcl::PointCloud<PointType> ());
  pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));
  pcl::transformPointCloud (*model_good_kp, *off_model_good_kp, Eigen::Vector3f (-1, 0, 0), Eigen::Quaternionf (1, 0, 0, 0));

  if (show_keypoints_)
  {
    CloudStyle modelStyle = style_white;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, modelStyle.r, modelStyle.g, modelStyle.b);
    viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, modelStyle.size, "off_scene_model");
  }

  if (show_keypoints_)
  {
    CloudStyle goodKeypointStyle = style_violet;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> model_good_keypoints_color_handler (off_model_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (off_model_good_kp, model_good_keypoints_color_handler, "model_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "model_good_keypoints");

    pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_good_keypoints_color_handler (scene_good_kp, goodKeypointStyle.r, goodKeypointStyle.g,
                                                                                                    goodKeypointStyle.b);
    viewer.addPointCloud (scene_good_kp, scene_good_keypoints_color_handler, "scene_good_keypoints");
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, goodKeypointStyle.size, "scene_good_keypoints");
  }

  for (std::size_t i = 0; i < instances.size (); ++i)
  {
    std::stringstream ss_instance;
    ss_instance << "instance_" << i;

    CloudStyle clusterStyle = style_red;
    pcl::visualization::PointCloudColorHandlerCustom<PointType> instance_color_handler (instances[i], clusterStyle.r, clusterStyle.g, clusterStyle.b);
    viewer.addPointCloud (instances[i], instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, clusterStyle.size, ss_instance.str ());

    CloudStyle registeredStyles = hypotheses_mask[i] ? style_green : style_cyan;
    ss_instance << "_registered";  // no std::endl here: the stream is used as a cloud ID, and a newline would become part of it
    pcl::visualization::PointCloudColorHandlerCustom<PointType> registered_instance_color_handler (registered_instances[i], registeredStyles.r,
                                                                                                   registeredStyles.g, registeredStyles.b);
    viewer.addPointCloud (registered_instances[i], registered_instance_color_handler, ss_instance.str ());
    viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, registeredStyles.size, ss_instance.str ());
  }
  while (!viewer.wasStopped ())
  {
    viewer.spinOnce ();
  }

  return (0);
}

Results:

  The original scene point cloud:
[figure: original scene]
  A successful recognition with suitable parameters:
[figure: recognized instance in the scene]
  More false positives can be provoked by running the Hough voting grouping algorithm with a larger bin size; in the image below, the red and blue clouds are false-positive instances.
[figure: result with false positives]


[About the author]
  斯坦福的兔子 ("the Stanford Bunny"), male, holds a master's degree in Mechanical Engineering from Tianjin University. Since graduating he has worked on optical 3D imaging and point cloud processing. Because the 3D processing library used at his job is an internal company library with no general applicability, he is teaching himself the open-source PCL library and the related mathematics, and shares the self-study process here.
The author is still learning and makes no claim to authority; if you spot problems, please leave a comment so everyone can discuss them.
Job referrals from senior colleagues are welcome via private message.
