There are already plenty of explanations of the SIFT algorithm, so below are just my own test code and results.
Note the two constructor parameters of BFMatcher (quoted from the OpenCV documentation):
normType – One of NORM_L1, NORM_L2, NORM_HAMMING, NORM_HAMMING2. L1 and L2 norms are preferable choices for SIFT and SURF descriptors, NORM_HAMMING should be used with ORB, BRISK and BRIEF, NORM_HAMMING2 should be used with ORB when WTA_K==3 or 4 (see ORB::ORB constructor description).
crossCheck – If it is false, this will be the default BFMatcher behaviour when it finds the k nearest neighbors for each query descriptor. If crossCheck==true, then the knnMatch() method with k=1 will only return pairs (i,j) such that for i-th query descriptor the j-th descriptor in the matcher's collection is the nearest and vice versa, i.e. the BFMatcher will only return consistent pairs. Such technique usually produces best results with minimal number of outliers when there are enough matches. This is an alternative to the ratio test used by D. Lowe in the SIFT paper.
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT lives in the non-free module in OpenCV 2.x
using namespace cv;
using namespace std;

// read images as grayscale and scale them down
string filepath1 = "filepath1";
Mat org_src1 = imread(filepath1, IMREAD_GRAYSCALE); // second argument is a flag, not a Mat type
Mat src1;
resize(org_src1, src1, Size(org_src1.cols / 8, org_src1.rows / 8));
imshow("src1", src1);
string filepath2 = "filepath2";
Mat org_src2 = imread(filepath2, IMREAD_GRAYSCALE);
Mat src2;
resize(org_src2, src2, Size(org_src2.cols / 8, org_src2.rows / 8));
imshow("src2", src2);
// detect sift keypoints
std::vector<KeyPoint> points1, points2;
SiftFeatureDetector detector;
detector.detect(src1, points1);
detector.detect(src2, points2);
// computing descriptors
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(src1, points1, descriptors1);
extractor.compute(src2, points2, descriptors2);
// matching descriptors
BFMatcher matcher(NORM_L2);
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(src1, points1, src2, points2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);
Image 1
Image 2
Matching result
Original image
Rotated image
Rotated 180°