Understanding Kalibr's Corner Extraction from the Source Code

I am currently rewriting Kalibr's corner-extraction code, with the goal of making it extract corners from an arbitrary calibration target. So I wanted to study its extraction principle carefully and figure out why the target's rows and columns have to be greater than 3.
To build this code (inside the docker environment), run the following from the root of the catkin workspace:

catkin build -DCMAKE_BUILD_TYPE=Release -j4

**************************** let's start ****************************
Reading the source code, we find that the core logic lives here:
catkin_ws/src/kalibr/aslam_cv/aslam_cameras/src/GridDetector.cpp

bool GridDetector::findTargetNoTransformation(const cv::Mat & image, const aslam::Time & stamp,
    GridCalibrationTargetObservation & outObservation) const {
  bool success = false;

  // Extract the calibration target corner points
  Eigen::MatrixXd cornerPoints;
  std::vector<bool> validCorners;
  success = _target->computeObservation(image, cornerPoints, validCorners);

  // Set the image, target, and timestamp regardless of success.
  outObservation.setTarget(_target);
  outObservation.setImage(image);
  outObservation.setTime(stamp);

  // Set the observed corners in the observation
  for (int i = 0; i < cornerPoints.rows(); i++) {
    if (validCorners[i])
      outObservation.updateImagePoint(i, cornerPoints.row(i).transpose());
  }

  return success;
}
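To see how this entry point is driven, here is a hedged sketch (the detector and target are assumed to be constructed elsewhere; the observation constructor taking a target is the one declared in GridCalibrationTargetObservation.hpp):

// Hedged sketch of driving the entry point above; assumes a GridDetector
// and its target were built elsewhere (e.g. from a camera geometry and an
// Aprilgrid target), and that GridCalibrationTargetObservation can be
// constructed from a target, as declared in its header.
#include <aslam/cameras/GridDetector.hpp>

bool detectOnce(const aslam::cameras::GridDetector &detector,
                const aslam::cameras::GridCalibrationTargetBase::Ptr &target,
                const cv::Mat &image, const aslam::Time &stamp) {
  aslam::cameras::GridCalibrationTargetObservation obs(target);
  // On failure the observation still holds image/target/stamp,
  // but not enough valid corners were found.
  return detector.findTargetNoTransformation(image, stamp, obs);
}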

From this we can infer that the actual corner extraction is delegated to _target. Looking at the corresponding header file and its private members, we can see that _target is of type GridCalibrationTargetBase::Ptr.

 private:
  /// the camera geometry
  boost::shared_ptr<CameraGeometryBase> _geometry;

  /// \brief the calibration target
  GridCalibrationTargetBase::Ptr _target;

  /// \brief detector options
  GridDetectorOptions _options;

As mentioned above, the line that extracts the key corner points is:

 success = _target->computeObservation(image, cornerPoints, validCorners);

computeObservation is found in catkin_ws/src/kalibr/aslam_cv/aslam_cameras/include/aslam/cameras/GridCalibrationTargetBase.hpp, and it turns out to be a virtual function:

  virtual bool computeObservation(const cv::Mat & /*image*/,
                                  Eigen::MatrixXd & /*outImagePoints*/,
                                  std::vector<bool> & /*outCornerObserved*/) const
  {
    SM_ASSERT_TRUE(Exception, true, "you need to implement this method for each target!");
    return false;
  };

This tells us that each kind of board uses its own corner-computation method. For the AprilTag grid, computeObservation is declared in catkin_ws/src/kalibr/aslam_cv/aslam_cameras_april/include/aslam/cameras/GridCalibrationTargetAprilgrid.hpp:

 /// \brief extract the calibration target points from an image and write to an observation
  bool computeObservation(const cv::Mat & image,
                          Eigen::MatrixXd & outImagePoints,
                          std::vector<bool> &outCornerObserved) const;
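Since my goal is to support arbitrary boards, this virtual interface is the natural extension point. Below is a minimal sketch, not kalibr code, of how a hypothetical new target type could plug in (it assumes the kalibr headers are on the include path and that GridCalibrationTargetBase exposes a (rows, cols) constructor, as in its header):

// Minimal sketch of a hypothetical custom target (not part of kalibr).
#include <aslam/cameras/GridCalibrationTargetBase.hpp>

namespace aslam {
namespace cameras {

class MyCustomTarget : public GridCalibrationTargetBase {
 public:
  MyCustomTarget(size_t rows, size_t cols)
      : GridCalibrationTargetBase(rows, cols) {
    // A real target would also fill in the 3D grid coordinates here.
  }

  bool computeObservation(const cv::Mat &image,
                          Eigen::MatrixXd &outImagePoints,
                          std::vector<bool> &outCornerObserved) const override {
    // size() == rows * cols corners; mark everything unobserved by default.
    outImagePoints.resize(size(), 2);
    outCornerObserved.assign(size(), false);
    // ... run a custom detector on `image` and fill in the found corners ...
    (void)image;
    return false;  // return true once enough corners were found
  }
};

}  // namespace cameras
}  // namespace aslam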

Now let's look at what the concrete implementation in the .cpp actually does:

/// \brief extract the calibration target points from an image and write to an observation
bool GridCalibrationTargetAprilgrid::computeObservation(
    const cv::Mat & image, Eigen::MatrixXd & outImagePoints,
    std::vector<bool> &outCornerObserved) const {

  bool success = true;

  // detect the tags
  std::vector<AprilTags::TagDetection> detections = _tagDetector->extractTags(image);

  /* handle the case in which a tag is identified but not all tag
   * corners are in the image (all data bits in image but border
   * outside). tagCorners should still be okay as apriltag-lib
   * extrapolates them, only the subpix refinement will fail
   */

  //min. distance [px] of tag corners from image border (tag is not used if violated)
  std::vector<AprilTags::TagDetection>::iterator iter = detections.begin();
  for (iter = detections.begin(); iter != detections.end();) {
    // check all four corners for violation
    bool remove = false;

    for (int j = 0; j < 4; j++) {
      remove |= iter->p[j].first < _options.minBorderDistance;
      remove |= iter->p[j].first > (float) (image.cols) - _options.minBorderDistance;  //width
      remove |= iter->p[j].second < _options.minBorderDistance;
      remove |= iter->p[j].second > (float) (image.rows) - _options.minBorderDistance;  //height
    }

    //also remove tags that are flagged as bad
    if (iter->good != 1)
      remove |= true;

    //also remove if the tag ID is out-of-range for this grid (faulty detection)
    if (iter->id >= (int) size() / 4)
      remove |= true;

    // delete flagged tags
    if (remove) {
      SM_DEBUG_STREAM("Tag with ID " << iter->id << " is only partially in image (corners outside) and will be removed from the TargetObservation.\n");

      // delete the tag and advance in list
      iter = detections.erase(iter);
    } else {
      //advance in list
      ++iter;
    }
  }

  //did we find enough tags?
  if (detections.size() < _options.minTagsForValidObs) {
    success = false;

    //immediate exit if we dont need to show video for debugging...
    //if video is shown, exit after drawing video...
    if (!_options.showExtractionVideo)
      return success;
  }

  //sort detections by tagId
  std::sort(detections.begin(), detections.end(),
            AprilTags::TagDetection::sortByIdCompare);

  // check for duplicate tagIds (--> if found: wild Apriltags in image not belonging to calibration target)
  // (only if we have more than 1 tag...)
  if (detections.size() > 1) {
    for (unsigned i = 0; i < detections.size() - 1; i++)
      if (detections[i].id == detections[i + 1].id) {
        //show the duplicate tags in the image
        cv::destroyAllWindows();
        cv::namedWindow("Wild Apriltag detected. Hide them!");
        cv::startWindowThread();

        cv::Mat imageCopy = image.clone();
        cv::cvtColor(imageCopy, imageCopy, cv::COLOR_GRAY2RGB);

        //mark all duplicate tags in image
        for (int j = 0; j < detections.size() - 1; j++) {
          if (detections[j].id == detections[j + 1].id) {
            detections[j].draw(imageCopy);
            detections[j + 1].draw(imageCopy);
          }
        }

        cv::putText(imageCopy, "Duplicate Apriltags detected. Hide them.",
                    cv::Point(50, 50), cv::FONT_HERSHEY_SIMPLEX, 0.8,
                    CV_RGB(255,0,0), 2, 8, false);
        cv::putText(imageCopy, "Press enter to exit...", cv::Point(50, 80),
                    cv::FONT_HERSHEY_SIMPLEX, 0.8, CV_RGB(255,0,0), 2, 8, false);
        cv::imshow("Duplicate Apriltags detected. Hide them", imageCopy);  // OpenCV call

        // and exit
        SM_FATAL_STREAM("\n[ERROR]: Found apriltag not belonging to calibration board. Check the image for the tag and hide it.\n");

        cv::waitKey();
        exit(0);
      }
  }

  // convert corners to cv::Mat (4 consecutive corners form one tag)
  /// point ordering here
  ///          11-----10  15-----14
  ///          | TAG 2 |  | TAG 3 |
  ///          8-------9  12-----13
  ///          3-------2  7-------6
  ///    y     | TAG 0 |  | TAG 1 |
  ///   ^      0-------1  4-------5
  ///   |-->x
  cv::Mat tagCorners(4 * detections.size(), 2, CV_32F);

  for (unsigned i = 0; i < detections.size(); i++) {
    for (unsigned j = 0; j < 4; j++) {
      tagCorners.at<float>(4 * i + j, 0) = detections[i].p[j].first;
      tagCorners.at<float>(4 * i + j, 1) = detections[i].p[j].second;
    }
  }

  //store a copy of the corner list before subpix refinement
  cv::Mat tagCornersRaw = tagCorners.clone();

  //optional subpixel refinement on all tag corners (four corners each tag)
  if (_options.doSubpixRefinement && success)
    cv::cornerSubPix(
        image, tagCorners, cv::Size(2, 2), cv::Size(-1, -1),
        cv::TermCriteria(cv::TermCriteria::Type::EPS + cv::TermCriteria::Type::MAX_ITER, 30, 0.1));

  if (_options.showExtractionVideo) {
    //image with refined (blue) and raw corners (red)
    cv::Mat imageCopy1 = image.clone();
    cv::cvtColor(imageCopy1, imageCopy1, cv::COLOR_GRAY2RGB);
    for (unsigned i = 0; i < detections.size(); i++)
      for (unsigned j = 0; j < 4; j++) {
        //raw apriltag corners
        //cv::circle(imageCopy1, cv::Point2f(detections[i].p[j].first, detections[i].p[j].second), 2, CV_RGB(255,0,0), 1);

        //subpixel refined corners
        cv::circle(
            imageCopy1,
            cv::Point2f(tagCorners.at<float>(4 * i + j, 0),
                        tagCorners.at<float>(4 * i + j, 1)),
            3, CV_RGB(0,0,255), 1);

        if (!success)
          cv::putText(imageCopy1, "Detection failed! (frame not used)",
                      cv::Point(50, 50), cv::FONT_HERSHEY_SIMPLEX, 0.8,
                      CV_RGB(255,0,0), 3, 8, false);
      }

    cv::imshow("Aprilgrid: Tag corners", imageCopy1);  // OpenCV call
    cv::waitKey(1);

    /* copy image for modification */
    cv::Mat imageCopy2 = image.clone();
    cv::cvtColor(imageCopy2, imageCopy2, cv::COLOR_GRAY2RGB);
    /* highlight detected tags in image */
    for (unsigned i = 0; i < detections.size(); i++) {
      detections[i].draw(imageCopy2);

      if (!success)
        cv::putText(imageCopy2, "Detection failed! (frame not used)",
                    cv::Point(50, 50), cv::FONT_HERSHEY_SIMPLEX, 0.8,
                    CV_RGB(255,0,0), 3, 8, false);
    }

    cv::imshow("Aprilgrid: Tag detection", imageCopy2);  // OpenCV call
    cv::waitKey(1);

    //if success is false exit here (delayed exit if _options.showExtractionVideo=true for debugging)
    if (!success)
      return success;
  }

  //insert the observed points into the correct location of the grid point array
  /// point ordering
  ///          12-----13  14-----15
  ///          | TAG 2 |  | TAG 3 |
  ///          8-------9  10-----11
  ///          4-------5  6-------7
  ///    y     | TAG 0 |  | TAG 1 |
  ///   ^      0-------1  2-------3
  ///   |-->x

  outCornerObserved.resize(size(), false);
  outImagePoints.resize(size(), 2);

  for (unsigned int i = 0; i < detections.size(); i++) {
    // get the tag id
    unsigned int tagId = detections[i].id;

    // calculate the grid idx for all four tag corners given the tagId and cols
    unsigned int baseId = (int) (tagId / (_cols / 2)) * _cols * 2
        + (tagId % (_cols / 2)) * 2;
    unsigned int pIdx[] = { baseId, baseId + 1, baseId + (unsigned int) _cols
        + 1, baseId + (unsigned int) _cols };

    // add four points per tag
    for (int j = 0; j < 4; j++) {
      //refined corners
      double corner_x = tagCorners.row(4 * i + j).at<float>(0);
      double corner_y = tagCorners.row(4 * i + j).at<float>(1);

      //raw corners
      double cornerRaw_x = tagCornersRaw.row(4 * i + j).at<float>(0);
      double cornerRaw_y = tagCornersRaw.row(4 * i + j).at<float>(1);

      //only add point if the displacement in the subpixel refinement is below a given threshold
      double subpix_displacement_squarred = (corner_x - cornerRaw_x)
          * (corner_x - cornerRaw_x)
          + (corner_y - cornerRaw_y) * (corner_y - cornerRaw_y);

      //add all points, but only set active if the point has not moved to far in the subpix refinement
      outImagePoints.row(pIdx[j]) = Eigen::Matrix<double, 1, 2>(corner_x,
                                                                corner_y);

      if (subpix_displacement_squarred <= _options.maxSubpixDisplacement2) {
        outCornerObserved[pIdx[j]] = true;
      } else {
        SM_DEBUG_STREAM("Subpix refinement failed for point: " << pIdx[j] << " with displacement: " << sqrt(subpix_displacement_squarred) << "(point removed) \n");
        outCornerObserved[pIdx[j]] = false;
      }
    }
  }

  //succesful observation
  return success;
}




(So long… I don't want to read it all T - T)
Breaking it down, the function works in the following steps:

  1. Detect all tags;
  2. Remove tags flagged as bad, as well as tags that fall outside the grid (to be honest I don't fully understand what "outside the grid" means here; leave a comment if you know);
  3. Remove duplicate tag detections;
  4. Convert the corners to a cv::Mat;
  5. Run subpixel refinement on the remaining points (see the standalone sketch right after this list);
  6. Finally output the points (I'll skip the details of how the data is stored in between).
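As promised, here is a standalone illustration of step 5. It uses the same cv::cornerSubPix call and termination criteria as the kalibr snippet above, but on a synthetic corner I made up, so it builds against plain OpenCV:

// Standalone illustration of the subpixel refinement step, using the same
// cv::cornerSubPix parameters as kalibr. The test image and the initial
// corner guess are made up for this demo.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
  // Synthetic saddle point: two dark quadrants meet two bright ones.
  cv::Mat image(100, 100, CV_8UC1, cv::Scalar(255));
  image(cv::Rect(0, 0, 50, 50)).setTo(0);
  image(cv::Rect(50, 50, 50, 50)).setTo(0);

  // Start from a deliberately offset guess, as a raw detector would.
  std::vector<cv::Point2f> corners = {cv::Point2f(48.f, 51.f)};

  cv::cornerSubPix(
      image, corners, cv::Size(2, 2), cv::Size(-1, -1),
      cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 30, 0.1));

  // The refined corner should land near the true saddle at (49.5, 49.5).
  std::printf("refined corner: (%.2f, %.2f)\n", corners[0].x, corners[0].y);
  return 0;
}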

The questions I mainly care about:
1. What is the AprilTag ID used for?
2. Why doesn't Kalibr support AprilTag boards smaller than 3×3?
3. Out of the pile of extracted points, how do we know which point belongs to which ID?

To answer these, let's look at what the detection step outputs, i.e., what this line does and what kind of data structure it produces:

  // detect the tags
  std::vector<AprilTags::TagDetection> detections = _tagDetector->extractTags(image);

Clearly, detections is a vector; the question is what kind of corner data it holds. After reading the hpp file, let's follow these clues to find where extractTags lives.

  // create a detector instance
  AprilTags::TagCodes _tagCodes;
  boost::shared_ptr<AprilTags::TagDetector> _tagDetector;

_tagDetector's header is catkin_ws/src/kalibr/aslam_offline_calibration/ethz_apriltag2/include/apriltags/TagDetector.h. Opening its .cpp to investigate, we can see that the vector is filled with objects called thisTagDetection:

      // Otherwise, keep the new one if it either has strictly *lower* error, or greater perimeter.
      if ( thisTagDetection.hammingDistance < otherTagDetection.hammingDistance ||
           thisTagDetection.observedPerimeter > otherTagDetection.observedPerimeter )
        goodDetections[odidx] = thisTagDetection;
    }

    if ( newFeature )
      goodDetections.push_back(thisTagDetection);

  }

thisTagDetection is a struct defined in catkin_ws/src/kalibr/aslam_offline_calibration/ethz_apriltag2/include/apriltags/TagDetection.h. One question: does it store the tag's corner points? The answer is yes:

  //! Position (in fractional pixel coordinates) of the detection.
  /*  The points travel counter-clockwise around the target, always
   *  starting from the same corner of the tag.
   */
  std::pair<float,float> p[4];
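So each TagDetection carries its tag ID together with the four corner coordinates. A hypothetical standalone loop to inspect them (the tag family and the test image name are my own choices; the classes come from the ethz_apriltag2 headers above):

// Hypothetical inspection of the detector output; assumes the
// ethz_apriltag2 headers and the 36h11 tag family used by kalibr.
#include <apriltags/TagDetector.h>
#include <apriltags/Tag36h11.h>
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
  cv::Mat image = cv::imread("board.png", cv::IMREAD_GRAYSCALE);  // made-up file
  if (image.empty())
    return 1;

  AprilTags::TagDetector detector(AprilTags::tagCodes36h11);
  std::vector<AprilTags::TagDetection> detections = detector.extractTags(image);

  for (const AprilTags::TagDetection &d : detections)
    for (int j = 0; j < 4; ++j)  // counter-clockwise corner order
      std::printf("tag %d corner %d: (%.1f, %.1f)\n",
                  d.id, j, d.p[j].first, d.p[j].second);
  return 0;
}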

So we can answer the questions above: detections holds, for every detected tag ID, the coordinates of the tag's four corners, stored counter-clockwise starting from a fixed corner of the tag (there is an ASCII diagram of the ordering in the code).
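This also fully answers question 3: the mapping from a tag ID to the four indices in the grid point array is the baseId formula we saw inside computeObservation. A tiny standalone check (the geometry here, two tags per row, i.e. _cols = 4, is a made-up example):

// Standalone check of the tagId -> grid index mapping, using the same
// formula as GridCalibrationTargetAprilgrid::computeObservation.
#include <cstdio>

int main() {
  const unsigned cols = 4;  // a board with 2 tags per row has 2*2 = 4 corner columns
  for (unsigned tagId = 0; tagId < 4; ++tagId) {
    unsigned baseId = (tagId / (cols / 2)) * cols * 2 + (tagId % (cols / 2)) * 2;
    unsigned pIdx[] = {baseId, baseId + 1, baseId + cols + 1, baseId + cols};
    std::printf("tag %u -> corners %u %u %u %u\n",
                tagId, pIdx[0], pIdx[1], pIdx[2], pIdx[3]);
  }
  return 0;
}

The output (tag 0 → 0 1 5 4, tag 1 → 2 3 7 6, tag 2 → 8 9 13 12, …) matches the ASCII diagram in the source.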
Now for the next question: why can't a 2×2 board be recognized, and where exactly does it fail?

Let's go back to catkin_ws/src/kalibr/aslam_cv/aslam_cameras_april/src/GridCalibrationTargetAprilgrid.cpp and look at what this function does once it has the corners:

bool GridCalibrationTargetAprilgrid::computeObservation(
    const cv::Mat & image, Eigen::MatrixXd & outImagePoints,
    std::vector<bool> &outCornerObserved) const {

For each tag, it does the following:

  std::vector<AprilTags::TagDetection>::iterator iter = detections.begin();
  for (iter = detections.begin(); iter != detections.end();) {
    // check all four corners for violation
    bool remove = false;

    for (int j = 0; j < 4; j++) {
      remove |= iter->p[j].first < _options.minBorderDistance;
      remove |= iter->p[j].first > (float) (image.cols) - _options.minBorderDistance;  //width
      remove |= iter->p[j].second < _options.minBorderDistance;
      remove |= iter->p[j].second > (float) (image.rows) - _options.minBorderDistance;  //height
    }

    //also remove tags that are flagged as bad
    if (iter->good != 1)
      remove |= true;

    //also remove if the tag ID is out-of-range for this grid (faulty detection)
    if (iter->id >= (int) size() / 4)
      remove |= true;

    // delete flagged tags
    if (remove) {
      SM_DEBUG_STREAM("Tag with ID " << iter->id << " is only partially in image (corners outside) and will be removed from the TargetObservation.\n");

      // delete the tag and advance in list
      iter = detections.erase(iter);
    } else {
      //advance in list
      ++iter;
    }
  }
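To make the border check concrete: with the default minBorderDistance of 4.0 shown below, in a 640×480 image a tag is dropped as soon as any of its four corners falls at x < 4, x > 636, y < 4, or y > 476. It is also dropped if the detection is flagged bad, or if its ID does not fit the configured grid.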

Let's take a look at what the _options for the AprilTag grid look like; they can be found in catkin_ws/src/kalibr/aslam_cv/aslam_cameras_april/include/aslam/cameras/GridCalibrationTargetAprilgrid.hpp:

  struct AprilgridOptions {
    AprilgridOptions() :
      doSubpixRefinement(true),
      maxSubpixDisplacement2(1.5),
      showExtractionVideo(false),
      minTagsForValidObs(4),
      minBorderDistance(4.0),
      blackTagBorder(2) {};

So now we can see where the 2×2 limit comes from: minTagsForValidObs defaults to 4. A 2×2 board has exactly four tags, so every single tag must survive the filtering in every frame; if the threshold is not met, computeObservation returns false. I tried lowering this value to different numbers such as 1 or 2, meaning the pipeline should keep processing corners as long as at least that many valid tags remain, but in practice it still failed. My guess is that boards of 3×3 tags and above are required because individual AprilTag detections can fail and get removed outright by the filtering above; fewer tags means less tolerance for such failures, to the point where every tag may be removed and the frame reports no corners at all. A 3×3 board is presumably a comfortable minimum number of tags.
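For anyone who wants to experiment with these thresholds, here is a sketch of passing custom options (assuming the constructor signature from GridCalibrationTargetAprilgrid.hpp — tagRows, tagCols, tagSize in meters, tagSpacing as a fraction of tagSize, options; the 2×2 board and the numbers are made up):

// Sketch of overriding the default AprilgridOptions discussed above.
#include <aslam/cameras/GridCalibrationTargetAprilgrid.hpp>

int main() {
  aslam::cameras::GridCalibrationTargetAprilgrid::AprilgridOptions options;
  options.minTagsForValidObs = 2;        // default 4 = every tag of a 2x2 board
  options.maxSubpixDisplacement2 = 4.0;  // default 1.5 px^2, i.e. ~1.22 px
  options.showExtractionVideo = true;    // watch which tags get removed

  aslam::cameras::GridCalibrationTargetAprilgrid target(
      2, 2, 0.08, 0.3, options);  // hypothetical 2x2 board with 8 cm tags
  (void)target;
  return 0;
}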

A few open questions remain:
1. Exactly which conditions cause a tag to be removed?
2. How should the board be rotated and moved so that as few tags as possible get removed?
3. Could the removal conditions be relaxed to make 2×2 calibration work, with an acceptable loss of accuracy?
