Image Stitching with Python and OpenCV


The latest official OpenCV documentation describes the C++ version of the Stitcher class, but the Python documentation has not been updated to match. This post gives a brief introduction to using the class from Python.

Since the official docs do not yet cover the Python Stitcher class, the only reference is the sample code on GitHub. The stitching sample looks like this:

from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse
import sys

modes = (cv.Stitcher_PANORAMA, cv.Stitcher_SCANS)

parser = argparse.ArgumentParser(description='Stitching sample.')
parser.add_argument('--mode',
    type = int, choices = modes, default = cv.Stitcher_PANORAMA,
    help = 'Determines configuration of stitcher. The default is `PANORAMA` (%d), '
         'mode suitable for creating photo panoramas. Option `SCANS` (%d) is suitable '
         'for stitching materials under affine transformation, such as scans.' % modes)
parser.add_argument('--output', default = 'result.jpg',
    help = 'Resulting image. The default is `result.jpg`.')
parser.add_argument('img', nargs='+', help = 'input images')
args = parser.parse_args()

# read input images
imgs = []
for img_name in args.img:
    img = cv.imread(img_name)
    if img is None:
        print("can't read image " + img_name)
        sys.exit(-1)
    imgs.append(img)

stitcher = cv.Stitcher.create(args.mode)
status, pano = stitcher.stitch(imgs)

if status != cv.Stitcher_OK:
    print("Can't stitch images, error code = %d" % status)
    sys.exit(-1)

cv.imwrite(args.output, pano)
print("stitching completed successfully. %s saved!" % args.output)

That is quite a lot of code. If you just want something that works out of the box, the short snippet below is enough:

import cv2

if __name__ == "__main__":
    img1 = cv2.imread('1.jpg')
    img2 = cv2.imread('2.jpg')
    stitcher = cv2.createStitcher(False)
    # On OpenCV 4.x use: stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch((img1, img2))
    if status != cv2.Stitcher_OK:
        print("Can't stitch images, error code = %d" % status)
    else:
        cv2.imshow('pano', pano)
        cv2.waitKey(0)
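Because the factory function was renamed between releases (cv2.createStitcher in OpenCV 3.x, cv2.Stitcher.create from 3.4/4.x on), a small compatibility helper can hide the difference. This is only a sketch; the helper name `create_stitcher` is my own:

```python
import cv2

def create_stitcher(mode=None):
    """Create a Stitcher object across OpenCV versions (helper name is mine, not OpenCV's)."""
    if mode is None:
        mode = getattr(cv2, 'Stitcher_PANORAMA', 0)   # PANORAMA mode
    if hasattr(cv2, 'Stitcher') and hasattr(cv2.Stitcher, 'create'):
        return cv2.Stitcher.create(mode)   # OpenCV 3.4+ / 4.x API
    return cv2.createStitcher(False)       # older OpenCV 3.x API (argument is try_use_gpu)
```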

The result is shown in the figures of the original post: the two original input images, followed by the stitched panorama.

Image stitching, blending, and black-border removal can also be implemented manually with the following steps:

1. Read the two images to be stitched and convert them to grayscale.

```python
import cv2
import numpy as np

img1 = cv2.imread('image1.jpg')
img2 = cv2.imread('image2.jpg')
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
```

2. Match keypoints between the two images using SIFT.

```python
sift = cv2.SIFT_create()  # on older contrib builds: cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append(m)
```

3. Collect the matched keypoint coordinates and estimate the transformation matrix. The points from the second image are used as the source, so that M maps img2 into img1's coordinate frame.

```python
src_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
```

4. Warp the second image onto the plane of the first and create a new canvas to hold the stitched result.

```python
h, w = gray1.shape
result = cv2.warpPerspective(img2, M, (w + img2.shape[1], h))
result[0:h, 0:w] = img1
```

5. Combine the images, using a mask of the valid panorama area so that pixels outside both images stay black and do not produce ragged edges.

```python
mask = cv2.warpPerspective(np.full(gray2.shape, 255, dtype=np.uint8), M,
                           (w + img2.shape[1], h))   # footprint of the warped img2
mask[0:h, 0:w] = 255                                 # plus the area covered by img1
result_masked = cv2.bitwise_and(result, result, mask=mask)
```

6. Crop away the black borders.

```python
gray = cv2.cvtColor(result_masked, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
result_cropped = result_masked[y:y+h, x:x+w]
```

7. Save the stitched, blended, and cropped image.

```python
cv2.imwrite('result.jpg', result_cropped)
```

The complete code:

```python
import cv2
import numpy as np

img1 = cv2.imread('image1.jpg')
img2 = cv2.imread('image2.jpg')
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# detect and match SIFT features
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# homography mapping img2 into img1's frame
src_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# warp img2 and paste img1 onto the canvas
h, w = gray1.shape
result = cv2.warpPerspective(img2, M, (w + img2.shape[1], h))
result[0:h, 0:w] = img1

# mask of the valid panorama area (img1 rectangle + warped img2 footprint)
mask = cv2.warpPerspective(np.full(gray2.shape, 255, dtype=np.uint8), M,
                           (w + img2.shape[1], h))
mask[0:h, 0:w] = 255
result_masked = cv2.bitwise_and(result, result, mask=mask)

# crop the black borders around the panorama
gray = cv2.cvtColor(result_masked, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
result_cropped = result_masked[y:y+h, x:x+w]

cv2.imwrite('result.jpg', result_cropped)
```

Note: in some cases the input images may need to be resized or cropped beforehand to get a better stitching, blending, and border-removal result.
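Step 5 above only masks out invalid pixels; it does not actually blend the overlap, so a visible seam can remain. A simple distance-transform feathering could be sketched as follows. This is my own illustration rather than part of the steps above; the function name `feather_blend` and the choice of distance-transform weighting are assumptions.

```python
import cv2
import numpy as np

def feather_blend(img_a, img_b, mask_a, mask_b):
    """Weighted average of two aligned, panorama-sized images.

    Each pixel is weighted by its distance to the border of the image it comes
    from, so the transition across the overlap is gradual instead of a hard seam.
    (Illustrative helper, not from the original post.)
    """
    dist_a = cv2.distanceTransform((mask_a > 0).astype(np.uint8), cv2.DIST_L2, 3)
    dist_b = cv2.distanceTransform((mask_b > 0).astype(np.uint8), cv2.DIST_L2, 3)
    total = dist_a + dist_b
    total[total == 0] = 1.0               # avoid division by zero outside both images
    w_a = (dist_a / total)[..., None]     # per-pixel weights, broadcast over channels
    w_b = (dist_b / total)[..., None]
    blended = img_a.astype(np.float32) * w_a + img_b.astype(np.float32) * w_b
    return blended.astype(np.uint8)

# Usage with the variables from the step-by-step code above (sketch):
# canvas1 = np.zeros_like(result); canvas1[0:h, 0:w] = img1
# mask1 = np.zeros(result.shape[:2], np.uint8); mask1[0:h, 0:w] = 255
# warped2 = cv2.warpPerspective(img2, M, (w + img2.shape[1], h))
# mask2 = cv2.warpPerspective(np.full(gray2.shape, 255, np.uint8), M, (w + img2.shape[1], h))
# pano = feather_blend(canvas1, warped2, mask1, mask2)
```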
