Python + OpenCV: Feature Matching
Basics of Brute-Force Matcher
The brute-force matcher is simple: it takes the descriptor of one feature in the first set, matches it against all features in the second set using some distance calculation, and returns the closest one.
For the BF matcher, we first create the BFMatcher object using cv.BFMatcher(). It takes two optional params.
The first is normType, which specifies the distance measurement to be used. By default it is cv.NORM_L2, which is good for SIFT, SURF, etc. (cv.NORM_L1 is also available).
For binary-string-based descriptors like ORB, BRIEF, and BRISK, cv.NORM_HAMMING should be used, which uses Hamming distance as the measurement. If ORB is using WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.
The second param is a boolean, crossCheck, which is false by default. If it is true, the matcher returns only those matches (i, j) such that the i-th descriptor in set A has the j-th descriptor in set B as its best match and vice versa.
That is, the two features in the two sets should match each other. This provides consistent results and is a good alternative to the ratio test proposed by D. Lowe in the SIFT paper.
Once the matcher is created, two important methods are BFMatcher.match() and BFMatcher.knnMatch(). The first returns the single best match for each descriptor.
The second returns the k best matches, where k is specified by the user. This is useful when we need to do additional work on the candidates, such as a ratio test.
Just as we used cv.drawKeypoints() to draw keypoints, cv.drawMatches() helps us draw the matches.
It stacks the two images horizontally and draws lines from the first image to the second showing the best matches.
There is also cv.drawMatchesKnn, which draws all k best matches. If k = 2, it will draw two match lines for each keypoint, so we have to pass a mask if we want to draw only some of them.
Let's see one example for each of SIFT and ORB (the two use different distance measures).
Brute-Force Matching with ORB Descriptors
####################################################################################################
# Image feature matching (Feature Matching)
import cv2 as lmc_cv  # the lmc_cv alias used throughout this module is OpenCV
from matplotlib import pyplot


def lmc_cv_image_feature_matching():
    """
    Image feature matching with ORB descriptors and a brute-force matcher.
    """
    # Read the images
    image1 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure01.jpg', lmc_cv.IMREAD_GRAYSCALE)
    image2 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure02.jpg', lmc_cv.IMREAD_GRAYSCALE)
    # image1 = lmc_cv.imread('D:/99-Research/Python/Image/pcb_rotated_01.png', lmc_cv.IMREAD_GRAYSCALE)
    # image2 = lmc_cv.imread('D:/99-Research/Python/Image/pcb_rotated_02.png', lmc_cv.IMREAD_GRAYSCALE)
    # Initiate the ORB detector
    orb = lmc_cv.ORB_create()
    # Find the keypoints and descriptors with ORB
    keypoints1, descriptors1 = orb.detectAndCompute(image1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(image2, None)
    # Create the BFMatcher object (Hamming distance for binary ORB descriptors)
    bf_matcher = lmc_cv.BFMatcher(lmc_cv.NORM_HAMMING, crossCheck=True)
    # Match descriptors
    matches = bf_matcher.match(descriptors1, descriptors2)
    # Sort them in the order of their distance
    matches = sorted(matches, key=lambda x: x.distance)
    # Draw the first 50 matches
    result_image = lmc_cv.drawMatches(image1, keypoints1, image2, keypoints2, matches[:50], None,
                                      flags=lmc_cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    # Show the image (blocks until the figure window is closed)
    pyplot.figure('Feature Matching')
    pyplot.subplot(1, 1, 1)
    pyplot.imshow(result_image, 'gray')
    pyplot.title('Feature Matching')
    pyplot.xticks([])
    pyplot.yticks([])
    pyplot.show()
    # Close the windows when the user presses 'q'
    if ord("q") == (lmc_cv.waitKey(0) & 0xFF):
        # Destroy the windows
        pyplot.close('all')
    return
What is this Matcher Object?
The result of matches = bf_matcher.match(descriptors1, descriptors2) is a list of DMatch objects. A DMatch object has the following attributes:
- DMatch.distance - Distance between the descriptors. The smaller it is, the better the match.
- DMatch.trainIdx - Index of the descriptor in the train descriptors.
- DMatch.queryIdx - Index of the descriptor in the query descriptors.
- DMatch.imgIdx - Index of the train image.
Brute-Force Matching with SIFT Descriptors and Ratio Test
####################################################################################################
# Image feature matching (Feature Matching with SIFT)
def lmc_cv_image_feature_matching_sift():
    """
    Image feature matching with SIFT descriptors and a ratio test.
    """
    # Read the images
    image1 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure01.jpg', lmc_cv.IMREAD_GRAYSCALE)
    image2 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure02.jpg', lmc_cv.IMREAD_GRAYSCALE)
    # image1 = lmc_cv.imread('D:/99-Research/Python/Image/pcb_rotated_01.png', lmc_cv.IMREAD_GRAYSCALE)
    # image2 = lmc_cv.imread('D:/99-Research/Python/Image/pcb_rotated_02.png', lmc_cv.IMREAD_GRAYSCALE)
    # Initiate the SIFT detector
    sift = lmc_cv.SIFT_create()
    # Find the keypoints and descriptors with SIFT
    keypoints1, descriptors1 = sift.detectAndCompute(image1, None)
    keypoints2, descriptors2 = sift.detectAndCompute(image2, None)
    # BFMatcher with default params (NORM_L2, crossCheck=False)
    bf_matcher = lmc_cv.BFMatcher()
    matches = bf_matcher.knnMatch(descriptors1, descriptors2, k=2)
    # Apply the ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.55 * n.distance:
            good.append([m])
    # cv.drawMatchesKnn expects a list of lists as matches
    result_image = lmc_cv.drawMatchesKnn(image1, keypoints1, image2, keypoints2, good, None,
                                         flags=lmc_cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
    # Show the image (blocks until the figure window is closed)
    pyplot.figure('Feature Matching with SIFT')
    pyplot.subplot(1, 1, 1)
    pyplot.imshow(result_image, 'gray')
    pyplot.title('Feature Matching with SIFT')
    pyplot.xticks([])
    pyplot.yticks([])
    pyplot.show()
    # Close the windows when the user presses 'q'
    if ord("q") == (lmc_cv.waitKey(0) & 0xFF):
        # Destroy the windows
        pyplot.close('all')
    return
FLANN based Matcher
####################################################################################################
# Image feature matching (Feature Matching with FLANN)
def lmc_cv_image_feature_matching_flann():
    """
    Image feature matching (Feature Matching with FLANN).
    """
    # Read the images
    image1 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure01.jpg', lmc_cv.IMREAD_GRAYSCALE)
    image2 = lmc_cv.imread('D:/99-Research/Python/Image/Brochure02.jpg', lmc_cv.IMREAD_GRAYSCALE)
    # Initiate the SIFT detector
    sift = lmc_cv.SIFT_create()
    # Find the keypoints and descriptors with SIFT
    keypoints1, descriptors1 = sift.detectAndCompute(image1, None)
    keypoints2, descriptors2 = sift.detectAndCompute(image2, None)
    # FLANN parameters (k-d tree index, suitable for float descriptors like SIFT)
    flann_index_kdtree = 1
    index_params = dict(algorithm=flann_index_kdtree, trees=5)
    search_params = dict(checks=50)  # or pass an empty dictionary
    flann = lmc_cv.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(descriptors1, descriptors2, k=2)
    # We need to draw only the good matches, so create a mask
    matches_mask = [[0, 0] for i in range(len(matches))]
    # Ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.7 * n.distance:
            matches_mask[i] = [1, 0]
    draw_params = dict(matchColor=(0, 255, 0),
                       singlePointColor=(255, 0, 0),
                       matchesMask=matches_mask,
                       flags=lmc_cv.DrawMatchesFlags_DEFAULT)
    # cv.drawMatchesKnn expects a list of lists as matches
    result_image = lmc_cv.drawMatchesKnn(image1, keypoints1, image2, keypoints2, matches, None, **draw_params)
    # Show the image (blocks until the figure window is closed)
    pyplot.figure('Feature Matching with FLANN')
    pyplot.subplot(1, 1, 1)
    pyplot.imshow(result_image, 'gray')
    pyplot.title('Feature Matching with FLANN')
    pyplot.xticks([])
    pyplot.yticks([])
    pyplot.show()
    # Close the windows when the user presses 'q'
    if ord("q") == (lmc_cv.waitKey(0) & 0xFF):
        # Destroy the windows
        pyplot.close('all')
    return