FLANN Feature Matching

FLANN (Fast Library for Approximate Nearest Neighbors) is a library for fast approximate nearest-neighbor search; in OpenCV it is used to match feature descriptors quickly and efficiently.

Feature matching records the keypoints (KeyPoint) of the target image and of the image to be matched, builds descriptors from these keypoint sets, then compares and filters the descriptors to obtain a set of matched point correspondences. The size of this set can also serve as a rough measure of how well the two images match.

Feature matching differs from template matching: because it compares the correlation between sets of keypoint descriptors rather than raw pixel patches, rotation has little effect on the result, although severe distortion or noise can still degrade the match.
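The example below constructs FlannBasedMatcher with its default parameters. As a minimal sketch (the helper name and the explicit tree/check values here are illustrative assumptions, not taken from this article), the FLANN index and search parameters can also be set by hand:

#include <opencv2/opencv.hpp>
using namespace cv;

// Match two float (CV_32F) descriptor matrices with explicit FLANN parameters.
// 5 KD-trees and 50 leaf checks are commonly used illustrative values, not from the article.
std::vector<DMatch> flannMatch(const Mat& descObj, const Mat& descScene)
{
	FlannBasedMatcher matcher(makePtr<flann::KDTreeIndexParams>(5),
	                          makePtr<flann::SearchParams>(50));
	std::vector<DMatch> matches;
	matcher.match(descObj, descScene, matches);   // one nearest neighbour per query descriptor
	return matches;
}

Note that the KD-tree index expects floating-point descriptors (SURF/SIFT); binary descriptors such as ORB would need flann::LshIndexParams instead, or a brute-force matcher with Hamming distance.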

 

Code example:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>

using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;

int main(int argc, char** argv) 
{
	Mat img1 = imread("D:/cv400/data/box.png", IMREAD_GRAYSCALE);
	Mat img2 = imread("D:/cv400/data/box_in_scene.png", IMREAD_GRAYSCALE);
	if (img1.empty() || img2.empty()) 
	{
		cout << "Load image error..." << endl;
		return -1;
	}
	imshow("object image", img1);
	imshow("object in scene", img2);

	// SURF feature extraction (SURF lives in the opencv_contrib xfeatures2d module)
	double t1 = (double)getTickCount();
	int minHessian = 400;
	Ptr<SURF> detector = SURF::create(minHessian);
	vector<KeyPoint> keypoints_obj;
	vector<KeyPoint> keypoints_scene;
	Mat descriptor_obj, descriptor_scene;
	detector->detectAndCompute(img1, Mat(), keypoints_obj, descriptor_obj);
	detector->detectAndCompute(img2, Mat(), keypoints_scene, descriptor_scene);

	// matching
	FlannBasedMatcher matcher;
	vector<DMatch> matches;
	matcher.match(descriptor_obj, descriptor_scene, matches);
	double t2 = (double)getTickCount();
	double t = (t2 - t1) / getTickFrequency();
	cout << "spend time : " << t << "s" << endl;
	
	// find the minimum distance among all matches
	double minDist = 1000;
	for (int i = 0; i < descriptor_obj.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < minDist) 
			minDist = dist;	
	}
	cout<<"min distance : "<< minDist<<endl;

	// keep only the "good" matches whose distance is sufficiently small
	vector<DMatch> goodMatches;
	for (int i = 0; i < descriptor_obj.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < max(3 * minDist, 0.02)) 
			goodMatches.push_back(matches[i]);	
	}

	Mat matchesImg;
	drawMatches(img1, keypoints_obj, img2, keypoints_scene, goodMatches, matchesImg, Scalar::all(-1),
		         Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
	imshow("Flann Matching Result", matchesImg);

	waitKey(0);
	return 0;
}

Run screenshot:

Elapsed time: about 175 ms
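The listing above filters matches with the threshold max(3 * minDist, 0.02). A common alternative, shown here only as a hedged sketch (the helper name and the 0.7 ratio are assumptions, not from the article), is Lowe's ratio test on the two nearest neighbours returned by knnMatch:

#include <opencv2/opencv.hpp>
using namespace cv;

// Lowe's ratio test: keep a match only when its best distance is clearly smaller than the second best.
std::vector<DMatch> ratioFilter(const FlannBasedMatcher& matcher,
                                const Mat& descObj, const Mat& descScene,
                                float ratioThresh = 0.7f)          // assumed, tunable threshold
{
	std::vector<std::vector<DMatch>> knnMatches;
	matcher.knnMatch(descObj, descScene, knnMatches, 2);           // two nearest neighbours per descriptor
	std::vector<DMatch> good;
	for (const auto& m : knnMatches)
	{
		if (m.size() == 2 && m[0].distance < ratioThresh * m[1].distance)
			good.push_back(m[0]);
	}
	return good;
}

Calling goodMatches = ratioFilter(matcher, descriptor_obj, descriptor_scene); in place of the two loops above would give a comparable, often tighter, set of good matches.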

 

Next, building on the previous program, we locate the target object in the scene and draw its outline. Two functions are needed: findHomography, which estimates the perspective transform (homography) between the matched object points and their counterparts in the scene, using RANSAC to reject outliers, and perspectiveTransform, which maps the corner points of the object image into the scene with that transform:

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <iostream>

using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;

int main(int argc, char** argv) 
{
	Mat img1 = imread("D:/cv400/data/box.png", IMREAD_GRAYSCALE);
	Mat img2 = imread("D:/cv400/data/box_in_scene.png", IMREAD_GRAYSCALE);
	if (img1.empty() || img2.empty()) 
	{
		cout << "Load image error..." << endl;
		return -1;
	}
	imshow("object image", img1);
	imshow("object in scene", img2);

	// SURF feature extraction (SURF lives in the opencv_contrib xfeatures2d module)
	double t1 = (double)getTickCount();
	int minHessian = 400;
	Ptr<SURF> detector = SURF::create(minHessian);
	vector<KeyPoint> keypoints_obj;
	vector<KeyPoint> keypoints_scene;
	Mat descriptor_obj, descriptor_scene;
	detector->detectAndCompute(img1, Mat(), keypoints_obj, descriptor_obj);
	detector->detectAndCompute(img2, Mat(), keypoints_scene, descriptor_scene);

	// matching
	FlannBasedMatcher matcher;
	vector<DMatch> matches;
	matcher.match(descriptor_obj, descriptor_scene, matches);
	double t2 = (double)getTickCount();
	double t = (t2 - t1) / getTickFrequency();
	cout << "spend time : " << t << "s" << endl;
	
	// find the minimum distance among all matches
	double minDist = 1000;
	for (int i = 0; i < descriptor_obj.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < minDist) 
			minDist = dist;	
	}
	cout<<"min distance : "<< minDist<<endl;

	// keep only the "good" matches whose distance is sufficiently small
	vector<DMatch> goodMatches;
	for (int i = 0; i < descriptor_obj.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < max(3 * minDist, 0.02)) 
			goodMatches.push_back(matches[i]);	
	}

	
	// collect the coordinates of the matched keypoints
	vector<Point2f> obj;         // keypoint locations in the object image
	vector<Point2f> objInScene;  // corresponding keypoint locations in the scene
	for (size_t t = 0; t < goodMatches.size(); t++) 
	{
		obj.push_back(keypoints_obj[goodMatches[t].queryIdx].pt);
		objInScene.push_back(keypoints_scene[goodMatches[t].trainIdx].pt);
	}
	Mat imgBH = findHomography(obj, objInScene, RANSAC);

	// map the object image's four corners into the scene
	vector<Point2f> obj_corners(4);
	vector<Point2f> scene_corners(4);
	obj_corners[0] = Point(0, 0);
	obj_corners[1] = Point(img1.cols, 0);
	obj_corners[2] = Point(img1.cols, img1.rows);
	obj_corners[3] = Point(0, img1.rows);
	perspectiveTransform(obj_corners, scene_corners, imgBH);

	// draw lines between the four projected corners
	Mat dst;
	cvtColor(img2, dst, COLOR_GRAY2BGR);
	line(dst, scene_corners[0], scene_corners[1], Scalar(0, 0, 255), 2, 8, 0);
	line(dst, scene_corners[1], scene_corners[2], Scalar(0, 0, 255), 2, 8, 0);
	line(dst, scene_corners[2], scene_corners[3], Scalar(0, 0, 255), 2, 8, 0);
	line(dst, scene_corners[3], scene_corners[0], Scalar(0, 0, 255), 2, 8, 0);

	imshow("find object in sence", dst);
	waitKey(0);
	return 0;
}

Run screenshot:
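One caveat the listing glosses over: findHomography needs at least four point pairs, and when goodMatches is very small the RANSAC estimate can be meaningless and the drawn quadrilateral arbitrary. A hedged sketch of a guard (the helper name, the inlier bound of 10 and the 3.0 px reprojection threshold are assumptions, not values from the article):

#include <opencv2/opencv.hpp>
using namespace cv;

// Returns an empty Mat when a trustworthy homography cannot be estimated.
Mat safeHomography(const std::vector<Point2f>& obj, const std::vector<Point2f>& objInScene,
                   int minInliers = 10)                 // arbitrary sanity bound
{
	if (obj.size() < 4)                                 // findHomography needs at least four pairs
		return Mat();
	Mat inlierMask;
	Mat H = findHomography(obj, objInScene, RANSAC, 3.0, inlierMask);   // 3.0 px reprojection threshold
	if (H.empty() || countNonZero(inlierMask) < minInliers)
		return Mat();
	return H;
}

In the program above one could call Mat imgBH = safeHomography(obj, objInScene); and skip the drawing step when the returned matrix is empty.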
