GPD Source Code Analysis

Code Flow

Contents

Code Flow

GPD Usage: Inputs and Outputs

GPD usage example

GPD algorithm framework

Constructor: GraspDetector(const std::string &config_filename)

Point cloud preprocessing

Grasp detection


Code processing flow

  1. Sampling
  2. Generate grasp candidates with generateGraspCandidateSets
  3. Filter candidates by the robot workspace: std::vector<std::unique_ptr<candidate::HandSet>> hand_set_list_filtered = filterGraspsWorkspace(hand_set_list, workspace_grasps_);
  4. Turn the resulting 3D grasp poses into 2D image data: image_generator_->createImages(cloud, hand_set_list_filtered, images, hands);
  5. Classify with the trained deep network: classifier_->classifyImages(images);
  6. Select the num_selected highest-scoring grasps: hands = selectGrasps(hands);
  7. Cluster: clusters = clustering_->findClusters(hands);
  8. Sort by score again and output the result (see the pipeline sketch after this list)
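A minimal sketch of how these steps fit together inside GraspDetector::detectGrasps, assuming the member names that appear in the constructor later in this article (a simplified outline, not the verbatim GPD source):

std::vector<std::unique_ptr<candidate::Hand>> GraspDetector::detectGrasps(
    const util::Cloud &cloud) {
  // 2. Generate grasp candidate sets around the sampled points.
  std::vector<std::unique_ptr<candidate::HandSet>> hand_set_list =
      candidates_generator_->generateGraspCandidateSets(cloud);

  // 3. Keep only candidates that lie inside the robot workspace.
  std::vector<std::unique_ptr<candidate::HandSet>> hand_set_list_filtered =
      filterGraspsWorkspace(hand_set_list, workspace_grasps_);

  // 4. Encode each remaining candidate as a multi-channel image.
  std::vector<std::unique_ptr<cv::Mat>> images;
  std::vector<std::unique_ptr<candidate::Hand>> hands;
  image_generator_->createImages(cloud, hand_set_list_filtered, images, hands);

  // 5. Score the images with the trained network and attach the scores.
  std::vector<float> scores = classifier_->classifyImages(images);
  for (size_t i = 0; i < hands.size(); i++) {
    hands[i]->setScore(scores[i]);
  }

  // 6.-8. Keep the num_selected best grasps, optionally cluster them, and
  // return the result sorted by score.
  std::vector<std::unique_ptr<candidate::Hand>> selected = selectGrasps(hands);
  if (cluster_grasps_) {
    selected = clustering_->findClusters(selected);
  }
  return selected;
}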
     

GPD的使用-输入输出

gpd使用示例

The code below shows the standard way to use the GPD algorithm. Note that the input is a point cloud, cloud_camera_.

Input: std::unique_ptr<gpd::util::Cloud> cloud_camera_

Output: std::vector<std::unique_ptr<gpd::candidate::Hand>> grasps; // detected grasp poses

GPD object: std::unique_ptr<gpd::GraspDetector> grasp_detector_; // used to run the GPD algorithm

Processing flow: preprocessPointCloud → detectGrasps. It consists of two main parts, point cloud preprocessing and grasp pose detection; the implementation of each is examined in the sections below.

void GraspDetection::sampleGrasps()
{  
  // cloud_camera_ and grasp_detector_ are class members in the original
  // source; cloud is the input point cloud received from the sensor callback.
  std::unique_ptr<gpd::util::Cloud> cloud_camera_;
  // TODO: set alpha channel to 1
  // GPD requires XYZRGBA
  PointCloudRGBA::Ptr grasp_cloud(new PointCloudRGBA);
  pcl::copyPointCloud(*cloud.get(), *grasp_cloud.get());
 
  // Construct the cloud camera
  Eigen::Matrix3Xd camera_view_point(3, 1);
  camera_view_point << view_point_.at(0), view_point_.at(1), view_point_.at(2);
  cloud_camera_.reset(new gpd::util::Cloud(grasp_cloud, 0, camera_view_point));
 
  std::vector<std::unique_ptr<gpd::candidate::Hand>> grasps;  // detect grasp poses
  grasp_detector_->preprocessPointCloud(*cloud_camera_);      // preprocess the point cloud
  grasps = grasp_detector_->detectGrasps(*cloud_camera_);     // detect grasps in the point cloud
 
  // Use grasps with score > 0
  std::vector<unsigned int> grasp_id;
  for (unsigned int i = 0; i < grasps.size(); i++)
  {
    if (grasps.at(i)->getScore() > 0.0)
    {
      grasp_id.push_back(i);
    }
  }
 
  if (grasp_id.empty())
  {
    ROS_ERROR_NAMED(LOGNAME, "No grasp candidates found with a positive cost");
    result_.grasp_state = "failed";
    server_->setAborted(result_);
    return;
  }
 
  for (auto id : grasp_id)
  {
    // transform grasp from camera optical link into frame_id (panda_link0)
    const Eigen::Isometry3d transform_opt_grasp =
        Eigen::Translation3d(grasps.at(id)->getPosition()) * Eigen::Quaterniond(grasps.at(id)->getOrientation());
 
    const Eigen::Isometry3d transform_base_grasp = trans_base_cam_ * transform_cam_opt_ * transform_opt_grasp;
    const Eigen::Vector3d trans = transform_base_grasp.translation();
    const Eigen::Quaterniond rot(transform_base_grasp.rotation());
 
    // convert back to PoseStamped
    geometry_msgs::PoseStamped grasp_pose;
    grasp_pose.header.frame_id = frame_id_;
    grasp_pose.pose.position.x = trans.x();
    grasp_pose.pose.position.y = trans.y();
    grasp_pose.pose.position.z = trans.z();
 
    grasp_pose.pose.orientation.w = rot.w();
    grasp_pose.pose.orientation.x = rot.x();
    grasp_pose.pose.orientation.y = rot.y();
    grasp_pose.pose.orientation.z = rot.z();
 
    feedback_.grasp_candidates.emplace_back(grasp_pose);
 
    // Grasp is selected based on cost not score
    // Invert score to represent grasp with lowest cost
    feedback_.costs.emplace_back(static_cast<double>(1.0 / grasps.at(id)->getScore()));
  }
 
  server_->publishFeedback(feedback_);
  result_.grasp_state = "success";
  server_->setSucceeded(result_);
}

GPD algorithm framework

Constructor: GraspDetector(const std::string &config_filename)

The constructor builds the detector by reading a configuration file. The full constructor code is listed at the end of this section; the configuration file and the meaning of each of its sections are described below.

Robot Hand Geometry

Configures the robot hand geometry: finger width, hand outer diameter, hand depth, hand height, and so on. Fill these in to match the actual gripper.

Grasp Descriptor (cm)

Configures the width, depth, and height of the cube-shaped volume inside the robot hand, the image size (width and height; the image is square), and the number of image channels.

Preprocessing of point cloud

Covers point cloud preprocessing: whether to voxelize/downsample the cloud and at what voxel size, whether to remove noise with statistical outlier removal, the position of the camera that captured the cloud, whether to draw samples only from points that do not belong to the table plane, and the workspace bounds.

Grasp candidate generation

Configures the number of samples drawn from the point cloud, the number of CPU threads, the radius for the neighborhood search, the number of hand orientations to evaluate, and the axes about which the point neighborhood is rotated.

Filtering of candidates (mm)

Configures the minimum and maximum gripper aperture (opening width) used to filter candidates.

Filtering of candidates based on their approach direction

Whether to filter by approach direction, the reference direction to compare against, and the angle threshold; a sketch of the check follows.
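A minimal sketch of the check this filter performs (an assumption based on the parameters above, not verbatim GPD code): a candidate is kept only if the angle between its approach vector and the configured direction stays below thresh_rad.

#include <cmath>
#include <Eigen/Core>

// Returns true if the grasp approach vector is within thresh_rad of the
// configured reference direction.
bool approachAccepted(const Eigen::Vector3d &approach,
                      const Eigen::Vector3d &direction, double thresh_rad) {
  double angle = std::acos(direction.normalized().dot(approach.normalized()));
  return angle < thresh_rad;
}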

Clustering of grasps

Sets the minimum number of inliers per cluster; 0 turns clustering off.

Grasp selection

How many grasp results to return: candidates are sorted by score and the top num_selected are kept, as in the sketch below.
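A minimal sketch of that selection step (an assumption, not verbatim GPD code):

#include <algorithm>
#include <memory>
#include <vector>

#include <gpd/candidate/hand.h>

// Sort candidates by descending score and keep the top num_selected.
void selectTopGrasps(std::vector<std::unique_ptr<gpd::candidate::Hand>> &hands,
                     size_t num_selected) {
  std::sort(hands.begin(), hands.end(), [](const auto &a, const auto &b) {
    return a->getScore() > b->getScore();
  });
  if (hands.size() > num_selected) {
    hands.resize(num_selected);
  }
}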

Visualization

Configures whether each stage of the pipeline is visualized.

# Path to config file for robot hand geometry
hand_geometry_filename = 0
 
# Path to config file for volume and image geometry
image_geometry_filename = 0
 
# ==== Robot Hand Geometry ====
#   finger_width: the width of the finger
#   outer_diameter: the diameter of the robot hand (= maximum aperture + 2 * finger width)
#   hand_depth: the finger length (measured from hand base to finger tip)
#   hand_height: the height of the hand
#   init_bite: the minimum amount of the object to be covered by the hand
# Panda (Visual approximation)
finger_width = 0.01
hand_outer_diameter = 0.12
hand_depth = 0.1
hand_height = 0.05
init_bite = 0.01
 
# ==== Grasp Descriptor ==== (cm)
#   volume_width: the width of the cube inside the robot hand
#   volume_depth: the depth of the cube inside the robot hand
#   volume_height: the height of the cube inside the robot hand
#   image_size: the size of the image (width and height; image is square)
#   image_num_channels: the number of image channels
volume_width = 0.10
volume_depth = 0.06
volume_height = 0.02
image_size = 60
image_num_channels = 15
 
# Path to directory that contains neural network parameters
weights_file = /home/song/gpd/models/lenet/15channels/params/
 
# Preprocessing of point cloud
#   voxelize: if the cloud gets voxelized/downsampled
#   remove_outliers: if statistical outliers are removed from the cloud (used to remove noise)
#   workspace: the workspace of the robot (dimensions of a cube centered at origin of point cloud)
#   camera_position: the position of the camera from which the cloud was taken
#   sample_above_plane: only draws samples which do not belong to the table plane
voxelize = 1
voxel_size = 0.003
remove_outliers = 1
workspace = -1.0 1.0 -1.0 1.0 -1.0 1.0
# workspace = -2.0 2.0 -2.0 2.0 -2.0 2.0
camera_position = 0 0 0
# camera_position = 0.15 0 0.04
sample_above_plane = 1
 
# Grasp candidate generation
#   num_samples: the number of samples to be drawn from the point cloud
#   num_threads: the number of CPU threads to be used
#   nn_radius: the radius for the neighborhood search
#   num_orientations: the number of robot hand orientations to evaluate
#   rotation_axes: the axes about which the point neighborhood gets rotated
num_samples = 50
num_threads = 8
nn_radius = 0.01
num_orientations = 8
num_finger_placements = 10
hand_axes = 2
deepen_hand = 1
 
# Filtering of candidates (mm)
#   min_aperture: the minimum gripper width
#   max_aperture: the maximum gripper width
#   workspace_grasps: dimensions of a cube centered at origin of point cloud; should be smaller than <workspace>
min_aperture = 0.0
max_aperture = 0.85
workspace_grasps = -1 1 -1 1 -1 1
# workspace_grasps = -2 2 -2 2 -2 2
 
 
 
# Filtering of candidates based on their approach direction
#   filter_approach_direction: turn filtering on/off
#   direction: the direction to compare against
#   angle_thresh: angle in radians above which grasps are filtered
filter_approach_direction = 1
direction = 0 0 1
thresh_rad = 2.0
 
# Clustering of grasps
#   min_inliers: minimum number of inliers per cluster; set to 0 to turn off clustering
min_inliers = 1
 
# Grasp selection
#   num_selected: number of selected grasps (sorted by score)
num_selected = 100
 
# Visualization
#   plot_normals: plot the surface normals
#   plot_samples: plot the samples
#   plot_candidates: plot the grasp candidates
#   plot_filtered_candidates: plot the grasp candidates which remain after filtering
#   plot_valid_grasps: plot the candidates that are identified as valid grasps
#   plot_clustered_grasps: plot the grasps that remain after clustering
#   plot_selected_grasps: plot the selected grasps (final output)
plot_normals = 0
plot_samples = 0
plot_candidates = 0
plot_filtered_candidates = 0
plot_valid_grasps = 0
plot_clustered_grasps = 0
plot_selected_grasps = 0
GraspDetector::GraspDetector(const std::string &config_filename) {
  Eigen::initParallel();
 
  // Read parameters from configuration file.
  util::ConfigFile config_file(config_filename);
  config_file.ExtractKeys();
 
  // Read hand geometry parameters.
  std::string hand_geometry_filename =
      config_file.getValueOfKeyAsString("hand_geometry_filename", "");
  if (hand_geometry_filename == "0") {
    hand_geometry_filename = config_filename;
  }
  candidate::HandGeometry hand_geom(hand_geometry_filename);
  std::cout << hand_geom;
 
  // Read plotting parameters.
  plot_normals_ = config_file.getValueOfKey<bool>("plot_normals", false);
  plot_samples_ = config_file.getValueOfKey<bool>("plot_samples", true);
  plot_candidates_ = config_file.getValueOfKey<bool>("plot_candidates", false);
  plot_filtered_candidates_ =
      config_file.getValueOfKey<bool>("plot_filtered_candidates", false);
  plot_valid_grasps_ =
      config_file.getValueOfKey<bool>("plot_valid_grasps", false);
  plot_clustered_grasps_ =
      config_file.getValueOfKey<bool>("plot_clustered_grasps", false);
  plot_selected_grasps_ =
      config_file.getValueOfKey<bool>("plot_selected_grasps", false);
  printf("============ PLOTTING ========================\n");
  printf("plot_normals: %s\n", plot_normals_ ? "true" : "false");
  printf("plot_samples %s\n", plot_samples_ ? "true" : "false");
  printf("plot_candidates: %s\n", plot_candidates_ ? "true" : "false");
  printf("plot_filtered_candidates: %s\n",
         plot_filtered_candidates_ ? "true" : "false");
  printf("plot_valid_grasps: %s\n", plot_valid_grasps_ ? "true" : "false");
  printf("plot_clustered_grasps: %s\n",
         plot_clustered_grasps_ ? "true" : "false");
  printf("plot_selected_grasps: %s\n",
         plot_selected_grasps_ ? "true" : "false");
  printf("==============================================\n");
 
  // Create object to generate grasp candidates.
  candidate::CandidatesGenerator::Parameters generator_params;
  generator_params.num_samples_ =
      config_file.getValueOfKey<int>("num_samples", 1000);
  generator_params.num_threads_ =
      config_file.getValueOfKey<int>("num_threads", 1);
  generator_params.remove_statistical_outliers_ =
      config_file.getValueOfKey<bool>("remove_outliers", false);
  generator_params.sample_above_plane_ =
      config_file.getValueOfKey<bool>("sample_above_plane", false);
  generator_params.voxelize_ =
      config_file.getValueOfKey<bool>("voxelize", true);
  generator_params.voxel_size_ =
      config_file.getValueOfKey<double>("voxel_size", 0.003);
  generator_params.normals_radius_ =
      config_file.getValueOfKey<double>("normals_radius", 0.03);
  generator_params.refine_normals_k_ =
      config_file.getValueOfKey<int>("refine_normals_k", 0);
  generator_params.workspace_ =
      config_file.getValueOfKeyAsStdVectorDouble("workspace", "-1 1 -1 1 -1 1");
 
  candidate::HandSearch::Parameters hand_search_params;
  hand_search_params.hand_geometry_ = hand_geom;
  hand_search_params.nn_radius_frames_ =
      config_file.getValueOfKey<double>("nn_radius", 0.01);
  hand_search_params.num_samples_ =
      config_file.getValueOfKey<int>("num_samples", 1000);
  hand_search_params.num_threads_ =
      config_file.getValueOfKey<int>("num_threads", 1);
  hand_search_params.num_orientations_ =
      config_file.getValueOfKey<int>("num_orientations", 8);
  hand_search_params.num_finger_placements_ =
      config_file.getValueOfKey<int>("num_finger_placements", 10);
  hand_search_params.deepen_hand_ =
      config_file.getValueOfKey<bool>("deepen_hand", true);
  hand_search_params.hand_axes_ =
      config_file.getValueOfKeyAsStdVectorInt("hand_axes", "2");
  hand_search_params.friction_coeff_ =
      config_file.getValueOfKey<double>("friction_coeff", 20.0);
  hand_search_params.min_viable_ =
      config_file.getValueOfKey<int>("min_viable", 6);
  candidates_generator_ = std::make_unique<candidate::CandidatesGenerator>(
      generator_params, hand_search_params);
 
  printf("============ CLOUD PREPROCESSING =============\n");
  printf("voxelize: %s\n", generator_params.voxelize_ ? "true" : "false");
  printf("voxel_size: %.3f\n", generator_params.voxel_size_);
  printf("remove_outliers: %s\n",
         generator_params.remove_statistical_outliers_ ? "true" : "false");
  printStdVector(generator_params.workspace_, "workspace");
  printf("sample_above_plane: %s\n",
         generator_params.sample_above_plane_ ? "true" : "false");
  printf("normals_radius: %.3f\n", generator_params.normals_radius_);
  printf("refine_normals_k: %d\n", generator_params.refine_normals_k_);
  printf("==============================================\n");
 
  printf("============ CANDIDATE GENERATION ============\n");
  printf("num_samples: %d\n", hand_search_params.num_samples_);
  printf("num_threads: %d\n", hand_search_params.num_threads_);
  printf("nn_radius: %3.2f\n", hand_search_params.nn_radius_frames_);
  printStdVector(hand_search_params.hand_axes_, "hand axes");
  printf("num_orientations: %d\n", hand_search_params.num_orientations_);
  printf("num_finger_placements: %d\n",
         hand_search_params.num_finger_placements_);
  printf("deepen_hand: %s\n",
         hand_search_params.deepen_hand_ ? "true" : "false");
  printf("friction_coeff: %3.2f\n", hand_search_params.friction_coeff_);
  printf("min_viable: %d\n", hand_search_params.min_viable_);
  printf("==============================================\n");
 
  // TODO: Set the camera position.
  //  Eigen::Matrix3Xd view_points(3,1);
  //  view_points << camera_position[0], camera_position[1], camera_position[2];
 
  // Read grasp image parameters.
  std::string image_geometry_filename =
      config_file.getValueOfKeyAsString("image_geometry_filename", "");
  if (image_geometry_filename == "0") {
    image_geometry_filename = config_filename;
  }
  descriptor::ImageGeometry image_geom(image_geometry_filename);
  std::cout << image_geom;
 
  // Read classification parameters and create classifier.
  std::string model_file = config_file.getValueOfKeyAsString("model_file", "");
  std::string weights_file =
      config_file.getValueOfKeyAsString("weights_file", "");
  if (!model_file.empty() || !weights_file.empty()) {
    int device = config_file.getValueOfKey<int>("device", 0);
    int batch_size = config_file.getValueOfKey<int>("batch_size", 1);
    classifier_ = net::Classifier::create(
        model_file, weights_file, static_cast<net::Classifier::Device>(device),
        batch_size);
    min_score_ = config_file.getValueOfKey<int>("min_score", 0);
    printf("============ CLASSIFIER ======================\n");
    printf("model_file: %s\n", model_file.c_str());
    printf("weights_file: %s\n", weights_file.c_str());
    printf("batch_size: %d\n", batch_size);
    printf("==============================================\n");
  }
 
  // Read additional grasp image creation parameters.
  bool remove_plane = config_file.getValueOfKey<bool>(
      "remove_plane_before_image_calculation", false);
 
  // Create object to create grasp images from grasp candidates (used for
  // classification).
  image_generator_ = std::make_unique<descriptor::ImageGenerator>(
      image_geom, hand_search_params.num_threads_,
      hand_search_params.num_orientations_, false, remove_plane);
 
  // Read grasp filtering parameters based on robot workspace and gripper width.
  workspace_grasps_ = config_file.getValueOfKeyAsStdVectorDouble(
      "workspace_grasps", "-1 1 -1 1 -1 1");
  min_aperture_ = config_file.getValueOfKey<double>("min_aperture", 0.0);
  max_aperture_ = config_file.getValueOfKey<double>("max_aperture", 0.085);
  printf("============ CANDIDATE FILTERING =============\n");
  printStdVector(workspace_grasps_, "candidate_workspace");
  printf("min_aperture: %3.4f\n", min_aperture_);
  printf("max_aperture: %3.4f\n", max_aperture_);
  printf("==============================================\n");
 
  // Read grasp filtering parameters based on approach direction.
  filter_approach_direction_ =
      config_file.getValueOfKey<bool>("filter_approach_direction", false);
  std::vector<double> approach =
      config_file.getValueOfKeyAsStdVectorDouble("direction", "1 0 0");
  direction_ << approach[0], approach[1], approach[2];
  thresh_rad_ = config_file.getValueOfKey<double>("thresh_rad", 2.3);
 
  // Read clustering parameters.
  int min_inliers = config_file.getValueOfKey<int>("min_inliers", 1);
  clustering_ = std::make_unique<Clustering>(min_inliers);
  cluster_grasps_ = min_inliers > 0 ? true : false;
  printf("============ CLUSTERING ======================\n");
  printf("min_inliers: %d\n", min_inliers);
  printf("==============================================\n\n");
 
  // Read grasp selection parameters.
  num_selected_ = config_file.getValueOfKey<int>("num_selected", 100);
 
  // Create plotter.
  plotter_ = std::make_unique<util::Plot>(hand_search_params.hand_axes_.size(),
                                          hand_search_params.num_orientations_);
}
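With the constructor in place, a minimal standalone use of the detector looks roughly like the following. This is a sketch modeled on GPD's detect_grasps tool; the file paths are placeholders to replace with your own.

#include <cstdio>

#include <Eigen/Core>
#include <gpd/grasp_detector.h>
#include <gpd/util/cloud.h>

int main() {
  // Placeholder paths; point them at your config file and point cloud.
  gpd::GraspDetector detector("/path/to/gpd/cfg/eigen_params.cfg");

  // The view point should match camera_position in the config file.
  Eigen::Matrix3Xd view_points(3, 1);
  view_points << 0.0, 0.0, 0.0;
  gpd::util::Cloud cloud("/path/to/object.pcd", view_points);

  detector.preprocessPointCloud(cloud);
  auto grasps = detector.detectGrasps(cloud);
  printf("Detected %zu grasps.\n", grasps.size());
  return 0;
}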

Point cloud preprocessing

Point cloud preprocessing mainly consists of removing invalid (NaN) points, workspace filtering, voxel downsampling, computing and refining surface normals, keeping only samples above the table plane, and subsampling down to num_samples:

void GraspDetector::preprocessPointCloud(util::Cloud &cloud) {
  candidates_generator_->preprocessPointCloud(cloud);
}
void CandidatesGenerator::preprocessPointCloud(util::Cloud &cloud) {
  printf("Processing cloud with %zu points.\n",
         cloud.getCloudOriginal()->size());
 
  cloud.removeNans();
 
  cloud.filterWorkspace(params_.workspace_);
 
  if (params_.voxelize_) {
    cloud.voxelizeCloud(params_.voxel_size_);
  }
 
  cloud.calculateNormals(params_.num_threads_, params_.normals_radius_);
 
  if (params_.refine_normals_k_ > 0) {
    cloud.refineNormals(params_.refine_normals_k_);
  }
 
  if (params_.sample_above_plane_) {
    cloud.sampleAbovePlane();
  }
 
  cloud.subsample(params_.num_samples_);
}  
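For reference, the voxel downsampling and statistical outlier removal mentioned above map directly onto standard PCL filters. The sketch below shows the equivalent PCL calls; it illustrates the technique and is not GPD's own Cloud implementation:

#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

using PointCloudRGBA = pcl::PointCloud<pcl::PointXYZRGBA>;

// Downsample with a voxel grid, then drop points whose mean distance to
// their neighbors is an outlier (i.e., likely sensor noise).
void preprocessSketch(const PointCloudRGBA::Ptr &in, PointCloudRGBA::Ptr &out,
                      float voxel_size) {
  PointCloudRGBA::Ptr voxelized(new PointCloudRGBA);
  pcl::VoxelGrid<pcl::PointXYZRGBA> voxel_filter;
  voxel_filter.setInputCloud(in);
  voxel_filter.setLeafSize(voxel_size, voxel_size, voxel_size);
  voxel_filter.filter(*voxelized);

  pcl::StatisticalOutlierRemoval<pcl::PointXYZRGBA> sor;
  sor.setInputCloud(voxelized);
  sor.setMeanK(50);             // neighbors used for the mean-distance estimate
  sor.setStddevMulThresh(1.0);  // threshold in standard deviations
  sor.filter(*out);
}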
