Image Stitching Experiment
Let the image height be h and let the shared (overlapping) region have width wx.
The width of the stitched image is then w = wA + wB - wx.
We can therefore create a blank canvas of height h and twice the image width, shift the left image right by wx, and paste the right image on the right-hand side; the right image then exactly covers the shared region of the left image, completing the stitch. The left side of the finished image has a blank strip of width wx, which corresponds to the detected overlap between the two images and can be removed if desired. An example is shown below.
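The width arithmetic can be checked with made-up sizes (the numbers below are hypothetical, not taken from the article's images):

```python
# Hypothetical example sizes, not from the article's images.
wA, wB = 600, 600   # widths of the left and right images
wx = 150            # width of the shared (overlapping) region

# Width of the stitched result: the overlap is counted only once.
w = wA + wB - wx
print(w)  # 1050
```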
The steps to achieve this are as follows:
1. Detect the key feature points of both images with the SIFT feature detector;
2. Build a matcher and match the keypoints with a fast nearest-neighbour search (the text suggests a FLANN-based FlannBasedMatcher; the code below uses BFMatcher with knnMatch);
3. Filter the good feature points out of all matched keypoints (based on distance);
4. Derive the homography (perspective transformation) matrix from the matched descriptor indices of the query and train images;
5. This gives the projective mapping from the left image onto the right image;
6. Warp the left image into the corresponding position with a perspective transform;
7. Copy the right image to its position to complete the stitch.
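Step 3, the distance-based filtering, is Lowe's ratio test: a match is kept only when its nearest neighbour is clearly closer than the second-nearest. It can be tried on toy numbers independently of any image files (the distances below are made up):

```python
# Toy descriptor distances: for each query descriptor, the distances to
# its two nearest neighbours in the other image (made-up numbers).
raw_matches = [
    (0.20, 0.60),  # clearly distinctive -> keep
    (0.45, 0.50),  # ambiguous           -> reject
    (0.10, 0.80),  # clearly distinctive -> keep
]

ratio = 0.75  # recommended threshold, same value as in the code below
good = [i for i, (d1, d2) in enumerate(raw_matches) if d1 < ratio * d2]
print(good)  # [0, 2]
```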
Image 1:
Image 2:
The code is as follows:
```python
import cv2
import numpy as np

def cv_show(name, image):
    cv2.imshow(name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

def detectAndCompute(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # cv2.SIFT_create() on OpenCV >= 4.4; older builds use cv2.xfeatures2d.SIFT_create()
    sift = cv2.SIFT_create()
    (kps, features) = sift.detectAndCompute(gray, None)
    # The KeyPoint objects must be converted to plain (x, y) coordinates before use
    kps = np.float32([kp.pt for kp in kps])
    return (kps, features)

def matchKeyPoints(kpsA, kpsB, featuresA, featuresB, ratio=0.75, reprojThresh=4.0):
    # ratio is the recommended threshold for the nearest-neighbour ratio test
    # reprojThresh is the recommended RANSAC reprojection threshold
    matcher = cv2.BFMatcher()
    rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
    matches = []
    for m in rawMatches:
        if len(m) == 2 and m[0].distance < ratio * m[1].distance:
            matches.append((m[0].queryIdx, m[0].trainIdx))
    ptsA = np.float32([kpsA[i] for (i, _) in matches])  # np.float32 converts the list
    ptsB = np.float32([kpsB[j] for (_, j) in matches])
    (M, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, reprojThresh)
    # Not every match has a consistent solution; the inlier flags are kept in status
    return (M, matches, status)

def stitch(imgA, imgB, M):
    result = cv2.warpPerspective(imgA, M, (imgA.shape[1] + imgB.shape[1], imgA.shape[0]))
    result[0:imgB.shape[0], 0:imgB.shape[1]] = imgB
    cv_show('result', result)

def drawMatches(imgA, imgB, kpsA, kpsB, matches, status):
    (hA, wA) = imgA.shape[0:2]
    (hB, wB) = imgB.shape[0:2]
    drawImg = np.zeros((max(hA, hB), wA + wB, 3), 'uint8')
    drawImg[0:hB, 0:wB] = imgB
    drawImg[0:hA, wB:] = imgA
    for ((queryIdx, trainIdx), s) in zip(matches, status):
        if s == 1:
            # Note: convert the float32 coordinates to int pixel positions
            pt1 = (int(kpsB[trainIdx][0]), int(kpsB[trainIdx][1]))
            pt2 = (int(kpsA[queryIdx][0]) + wB, int(kpsA[queryIdx][1]))
            cv2.line(drawImg, pt1, pt2, (0, 0, 255))
    cv_show("drawImg", drawImg)

# Read the images
imageA = cv2.imread('23.jpg')
cv_show("imageA", imageA)
imageB = cv2.imread('24.jpg')
cv_show("imageB", imageB)
# Compute SIFT keypoints and descriptors
(kpsA, featuresA) = detectAndCompute(imageA)
(kpsB, featuresB) = detectAndCompute(imageB)
# Estimate a homography from nearest-neighbour matches plus RANSAC
(M, matches, status) = matchKeyPoints(kpsA, kpsB, featuresA, featuresB)
# Draw the matching result
drawMatches(imageA, imageB, kpsA, kpsB, matches, status)
# Stitch
stitch(imageA, imageB, M)
```
Result:
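As noted above, the oversized canvas leaves a blank strip that can be removed if desired. A minimal cleanup sketch that drops fully-black border columns (the helper name `crop_blank_columns` is mine, not from the article):

```python
import numpy as np

def crop_blank_columns(result):
    # Keep only the span of columns that contain any non-zero pixel
    # (`result` is assumed to be a BGR image from the stitcher above).
    mask = result.any(axis=(0, 2))   # True where a column has content
    cols = np.flatnonzero(mask)
    return result[:, cols[0]:cols[-1] + 1]

# Toy 1x6x3 "image": two blank columns on each side, content in the middle.
img = np.zeros((1, 6, 3), np.uint8)
img[0, 2:4] = 255
print(crop_blank_columns(img).shape)  # (1, 2, 3)
```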