Keypoint Descriptors and the KeyPoint Object

corners: small image patches that contain a lot of local information and can therefore be quickly re-identified in another image

keypoints: an extension of the corner concept; the information in the patch is encoded so that the point becomes more distinctive, at least in principle unique

descriptors: the result of further processing the keypoints. They usually have lower dimensionality, so the corresponding image patch can be identified more quickly in another, different image

The KeyPoint Object

To describe a keypoint, OpenCV defines the keypoint class as follows:

```c++
class cv::KeyPoint {
public:
	cv::Point2f pt; // coordinates of the keypoint
	float size; // diameter of the meaningful keypoint neighborhood
	float angle; // computed orientation of the keypoint (-1 if none)
	float response; // response for which the keypoints was selected
	int octave; // octave (pyramid layer) keypoint was extracted from
	int class_id; // object id, can be used to cluster keypoints by object
	cv::KeyPoint(
		cv::Point2f _pt,
		float _size,
		float _angle = -1,
		float _response = 0,
		int _octave = 0,
		int _class_id = -1
	);
	cv::KeyPoint(
		float x,
		float y,
		float _size,
		float _angle = -1,
		float _response = 0,
		int _octave = 0,
		int _class_id = -1
	);
	...
};
```


Member descriptions:

pt: the location of the keypoint
size: the extent (diameter) of the meaningful neighborhood around the keypoint
angle: the computed orientation of the keypoint (-1 if it does not apply)
response: the strength of the detector response for which the keypoint was selected; it can sometimes be interpreted as the probability that the feature actually exists
octave: the pyramid layer (octave) at which the keypoint was found; a matching keypoint is normally expected to be found at the same octave
class_id: identifies which object the keypoint comes from; it can be used to cluster keypoints by object
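As a quick illustration, the sketch below (with made-up coordinates and values) constructs a cv::KeyPoint directly and prints its fields; in practice these objects are normally filled in by a detector rather than by hand.

```c++
#include <opencv2/core.hpp>
#include <iostream>

int main() {
	// Hypothetical keypoint: located at (100, 50), with a meaningful
	// neighborhood of diameter 31 pixels, oriented at 45 degrees,
	// a detector response of 0.8, found in octave 0, no object id.
	cv::KeyPoint kp(cv::Point2f(100.f, 50.f), 31.f, 45.f, 0.8f, 0, -1);

	std::cout << "pt:       " << kp.pt       << "\n"
	          << "size:     " << kp.size     << "\n"
	          << "angle:    " << kp.angle    << "\n"
	          << "response: " << kp.response << "\n"
	          << "octave:   " << kp.octave   << "\n"
	          << "class_id: " << kp.class_id << std::endl;
	return 0;
}
```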
To find keypoints and compute descriptors, OpenCV defines the following abstract class:

```c++
class cv::Feature2D : public cv::Algorithm {
public:
	virtual void detect(
		cv::InputArray image, // Image on which to detect
		vector< cv::KeyPoint >& keypoints, // Array of found keypoints
		cv::InputArray mask = cv::noArray()
	) const;
	virtual void detect(
		cv::InputArrayOfArrays images, // Images on which to detect
		vector<vector< cv::KeyPoint > >& keypoints, // keypoints for each image
		cv::InputArrayOfArrays masks = cv::noArray()
	) const;
	virtual void compute(
		cv::InputArray image, // Image where keypoints are located
		std::vector<cv::KeyPoint>& keypoints, // input/output vector of keypoints
		cv::OutputArray descriptors); // computed descriptors, M x N matrix,
									  // where M is the number of keypoints
									  // and N is the descriptor size
	virtual void compute(
		cv::InputArrayOfArrays image, // Images where keypoints are located
		std::vector<std::vector<cv::KeyPoint> >& keypoints, //I/O vec of keypnts
		cv::OutputArrayOfArrays descriptors); // computed descriptors,
											  // vector of (Mi x N) matrices, where
											  // Mi is the number of keypoints in
											  // the i-th image and N is the
											  // descriptor size
	virtual void detectAndCompute(
		cv::InputArray image, // Image on which to detect
		cv::InputArray mask, // Optional region of interest mask
		std::vector<cv::KeyPoint>& keypoints, // found or provided keypoints
		cv::OutputArray descriptors, // computed descriptors
		bool useProvidedKeypoints = false); // if true,
											// the provided keypoints are used,
											// otherwise they are detected
	virtual int descriptorSize() const; // size of each descriptor in elements
	virtual int descriptorType() const; // type of descriptor elements
	virtual int defaultNorm() const; // the recommended norm to be used
									 // for comparing descriptors.
									 // Usually, it's NORM_HAMMING for
									 // binary descriptors and NORM_L2
									 // for all others.
	virtual void read(const cv::FileNode&);
	virtual void write(cv::FileStorage&) const;
	...
};
```


Method descriptions:

detect: computes the keypoints
compute: computes the descriptors for given keypoints
detectAndCompute: different keypoint-detection algorithms often produce different results on the same image, and during computation most of them build a special internal image representation that is expensive to compute; running detection and description as two separate steps would do this work twice. If descriptors are needed, it is therefore usually recommended to call detectAndCompute directly
descriptorSize: returns the length of the descriptor vector (in elements)
descriptorType: the type of the descriptor elements
defaultNorm: the norm recommended for comparing descriptors, i.e. how two descriptors should be compared; for binary (0/1) descriptors NORM_HAMMING can be used, while for SIFT and SURF NORM_L2 or NORM_L1 are appropriate.
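For example, the two-step pipeline (detect, then compute) and the query methods can be exercised as in the following sketch. This is a minimal illustration assuming OpenCV 3.x or later, where algorithms are created through factory functions such as cv::ORB::create(); the image path taken from argv is a placeholder.

```c++
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
	cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
	if (img.empty()) return -1;

	// ORB implements both detect() and compute(), so the steps can be split.
	cv::Ptr<cv::ORB> orb = cv::ORB::create(500);  // keep at most 500 keypoints

	std::vector<cv::KeyPoint> keypoints;
	orb->detect(img, keypoints);                  // step 1: find the keypoints

	cv::Mat descriptors;                          // M x N, M = number of keypoints
	orb->compute(img, keypoints, descriptors);    // step 2: describe them

	std::cout << "keypoints:        " << keypoints.size()      << "\n"
	          << "descriptorSize(): " << orb->descriptorSize() << "\n"      // 32 bytes for ORB
	          << "descriptorType(): " << orb->descriptorType() << "\n"      // CV_8U
	          << "defaultNorm():    " << orb->defaultNorm()    << std::endl; // NORM_HAMMING
	return 0;
}
```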
A concrete implementation may implement only one or a few of these methods:

cv::Feature2D::detect(): FAST (finds FAST keypoints only, because FAST is purely a keypoint detector)
cv::Feature2D::compute(): FREAK (computes FREAK descriptors for keypoints that are already known; FREAK is purely a descriptor extractor)
cv::Feature2D::detectAndCompute(): SIFT, SURF, ORB, BRISK (all four can both find keypoints and generate the corresponding descriptors); the detection and computation steps are invoked implicitly inside this call
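Putting it together, the sketch below (again a minimal example assuming OpenCV 3.x or later; ORB is used because it lives in the main features2d module, whereas SIFT/SURF/FREAK may require extra modules depending on the build) calls detectAndCompute() once per image and then compares the descriptors with a brute-force matcher configured with the algorithm's defaultNorm().

```c++
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
	cv::Mat img1 = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
	cv::Mat img2 = cv::imread(argv[2], cv::IMREAD_GRAYSCALE);
	if (img1.empty() || img2.empty()) return -1;

	cv::Ptr<cv::Feature2D> orb = cv::ORB::create();

	// One call per image: detection and description share the internal
	// image representation (the scale pyramid), so nothing is computed twice.
	std::vector<cv::KeyPoint> kp1, kp2;
	cv::Mat desc1, desc2;
	orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
	orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

	// Compare descriptors with the norm the algorithm recommends
	// (NORM_HAMMING for ORB's binary descriptors).
	cv::BFMatcher matcher(orb->defaultNorm(), /*crossCheck=*/true);
	std::vector<cv::DMatch> matches;
	matcher.match(desc1, desc2, matches);

	std::cout << "matches: " << matches.size() << std::endl;
	return 0;
}
```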

