OpenCV Learning Diary 5
2017-05-27 10:44:35 1000sprites
Category column: Computer Vision
Copyright notice: This is an original post by the author, distributed under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/shengshengwang/article/details/72779289
1. solvePnP, cvPOSIT (deprecated), solvePnPRansac [1][2]
Explanation: Given a set of 3D object points, the corresponding 2D image points, and the camera intrinsics, these functions compute the 3D pose of the object. Both solvePnP and cvPOSIT output a rotation and a translation vector, but solvePnP gives an exact solution while cvPOSIT gives an approximation: solvePnP calls cvFindExtrinsicCameraParams2 and solves for the unknown extrinsics from the known intrinsics, whereas cvPOSIT approximates the perspective projection model with an affine (scaled orthographic) projection and iterates toward an estimate (the algorithm may fail to converge when the depth variation of the object is large relative to its distance from the camera).
The function prototypes of solvePnP and solvePnPRansac are as follows:
(1)cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
(2)cv2.solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, iterationsCount[, reprojectionError[, minInliersCount[, inliers[, flags]]]]]]]]) → rvec, tvec, inliers
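Below is a minimal sketch of a solvePnP call; the 3D points, pixel coordinates, and intrinsic parameters are made-up values used only for illustration:

import cv2
import numpy as np

# four corners of a hypothetical 10x10 planar square in object coordinates
object_points = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]], dtype=np.float32)
# their (made-up) projections in the image, in pixels
image_points = np.array([[320, 240], [400, 242], [398, 320], [318, 318]], dtype=np.float32)

# assumed pinhole intrinsics: focal length 800 px, principal point (320, 240)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

retval, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # rvec is a Rodrigues rotation vector; convert it to a 3x3 matrix
print(R)
print(tvec)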
2. Epipolar Geometry
Explanation:
In a binocular stereo vision system, two cameras image the same physical point in space from different viewpoints, producing one projection in each of the two images. Stereo matching means that, given the projection in one image, we find the corresponding point in the other image. The epipolar constraint is a commonly used matching constraint: it is a point-to-line constraint that narrows the search for a corresponding point from the whole image down to a single line.
import cv2
import numpy as np
from matplotlib import pyplot as plt

img1 = cv2.imread('myleft.jpg', 0)   # queryimage, left image
img2 = cv2.imread('myright.jpg', 0)  # trainimage, right image

# cv2.SIFT() is the OpenCV 2.4 name; in OpenCV 3.x with contrib it is
# cv2.xfeatures2d.SIFT_create(), and in OpenCV 4.4+ it is cv2.SIFT_create()
sift = cv2.SIFT()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

good = []
pts1 = []
pts2 = []

# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.8*n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)

pts1 = np.float32(pts1)
pts2 = np.float32(pts2)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)

# we select only inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]

def drawlines(img1, img2, lines, pts1, pts2):
    ''' img1 - image on which we draw the epilines for the points in img2
        lines - corresponding epilines '''
    r, c = img1.shape
    img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
    img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2BGR)
    for r, pt1, pt2 in zip(lines, pts1, pts2):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        # intersect the epiline r = (a, b, c) with the image borders x=0 and x=width
        x0, y0 = map(int, [0, -r[2]/r[1]])
        x1, y1 = map(int, [c, -(r[2]+r[0]*c)/r[1]])
        # drawing functions modify the image in place in every OpenCV version
        cv2.line(img1, (x0, y0), (x1, y1), color, 1)
        cv2.circle(img1, tuple(map(int, pt1)), 5, color, -1)
        cv2.circle(img2, tuple(map(int, pt2)), 5, color, -1)
    return img1, img2

# find epilines corresponding to points in right image (second image) and
# draw them on the left image
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
lines1 = lines1.reshape(-1, 3)
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)

# find epilines corresponding to points in left image (first image) and
# draw them on the right image
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
lines2 = lines2.reshape(-1, 3)
img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)

plt.subplot(121), plt.imshow(img5)
plt.subplot(122), plt.imshow(img3)
plt.show()
The output shows each image with the epipolar lines of the other image's points drawn on it (figure omitted).
Note: the function prototypes of findFundamentalMat and computeCorrespondEpilines are as follows:
(1)cv2.findFundamentalMat(points1, points2[, method[, param1[, param2[, mask]]]]) → retval, mask
(2)cv2.computeCorrespondEpilines(points, whichImage, F[, lines]) → lines
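As a quick numerical sanity check on the script above: every inlier pair should satisfy the epipolar constraint x2^T * F * x1 ≈ 0 (OpenCV's convention for findFundamentalMat), and the lines returned by computeCorrespondEpilines are these same epilines F*x1 (or F^T*x2 for whichImage=2), normalized so that a^2 + b^2 = 1. A minimal sketch that reuses F, pts1, and pts2 from the code above:

import numpy as np

# homogeneous coordinates of the inlier correspondences from the script above
x1 = np.hstack([pts1, np.ones((len(pts1), 1), dtype=np.float32)])  # left image
x2 = np.hstack([pts2, np.ones((len(pts2), 1), dtype=np.float32)])  # right image

# epipolar constraint: x2^T * F * x1 should be close to 0 for every inlier pair
lines2 = np.dot(x1, F.T)                        # un-normalized epilines l2 = F * x1
residuals = np.abs(np.sum(x2 * lines2, axis=1))
print(residuals.max())                          # small values mean F is consistent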
3. Depth Map from Stereo Images
Explanation: If we have two images of the same scene, we can recover depth information from them (the accompanying illustration is omitted).
The process of building a depth map from a stereo pair is as follows:
import cv2
from matplotlib import pyplot as plt

imgL = cv2.imread('tsukuba_l.png', 0)
imgR = cv2.imread('tsukuba_r.png', 0)

# block-matching stereo correspondence; the Python binding in OpenCV 3+ is
# cv2.StereoBM_create (the docs list the factory as cv2.createStereoBM)
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)

plt.imshow(disparity, 'gray')
plt.show()
The output shows the original image on the left and the disparity (depth) map on the right (figure omitted). The noise in the result can be reduced by tuning numDisparities and blockSize.
The documented prototype is cv2.createStereoBM([numDisparities[, blockSize]]) → retval [4].
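As a rough illustration of that tuning (the parameter values below are illustrative guesses, not settings tuned for this image pair), note that StereoBM's compute() returns a 16-bit fixed-point map in which disparities are scaled by 16, so dividing by 16 gives the true disparity values before display:

import cv2
import numpy as np
from matplotlib import pyplot as plt

imgL = cv2.imread('tsukuba_l.png', 0)
imgR = cv2.imread('tsukuba_r.png', 0)

# numDisparities must be a multiple of 16; a larger blockSize gives a smoother
# but less detailed map (illustrative values only)
stereo = cv2.StereoBM_create(numDisparities=32, blockSize=21)

# compute() returns disparity * 16 as a 16-bit integer image
disparity = stereo.compute(imgL, imgR).astype(np.float32) / 16.0
plt.imshow(disparity, 'gray')
plt.show()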
4. The BRIEF Descriptor Explained [5]
Explanation: BRIEF (Binary Robust Independent Elementary Features) is a fast method for computing and matching feature-point descriptors. It does not provide a way to detect features by itself; the original paper recommends the CenSurE feature detector. BRIEF is neither rotation invariant nor scale invariant, and it is sensitive to noise. Example:
import cv2

img = cv2.imread('simple.jpg', 0)

# initiate STAR detector (OpenCV 2.4 factory API)
star = cv2.FeatureDetector_create("STAR")

# initiate BRIEF extractor
brief = cv2.DescriptorExtractor_create("BRIEF")

# find the keypoints with STAR
kp = star.detect(img, None)

# compute the descriptors with BRIEF
kp, des = brief.compute(img, kp)

print(brief.getInt('bytes'))  # descriptor length in bytes
print(des.shape)
Note: in OpenCV the CenSurE detector is called the STAR detector.
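The cv2.FeatureDetector_create / cv2.DescriptorExtractor_create factories above are the OpenCV 2.4 API. A minimal sketch of the same pipeline against OpenCV 3+ with the contrib modules (opencv-contrib-python), where STAR and BRIEF live in cv2.xfeatures2d, might look like this:

import cv2

img = cv2.imread('simple.jpg', 0)

# STAR (CenSurE) detector and BRIEF extractor from the contrib xfeatures2d module
star = cv2.xfeatures2d.StarDetector_create()
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

kp = star.detect(img, None)
kp, des = brief.compute(img, kp)

print(brief.descriptorSize())  # descriptor length in bytes (32 by default)
print(des.shape)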
5. opencv-4.0.0 and Windows 7: ImportError: ERROR: recursion is detected during loading of "cv2" binary extensions. Check OpenCV installation
Explanation:
(1) Rename D:\opencv-4.0.0\build\python\cv2\python-3.6\cv2.cp36-win_amd64.pyd to cv2.pyd and copy it to the D:\Anaconda3\Lib\site-packages directory.
(2) Copy the .dll files from the D:\opencv-4.0.0\build\x64\vc15\bin directory to the D:\Anaconda3\Library\bin directory.
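After copying the files, a quick way to confirm that the binding loads correctly (the paths above are this machine's local layout) is to import cv2 from a fresh interpreter:

import cv2

# if the .pyd and its DLL dependencies are in place, this prints the version, e.g. 4.0.0
print(cv2.__version__)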
References:
[1] Camera Calibration and 3D Reconstruction: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
[2] 3D pose: on solvePnP and cvPOSIT: http://blog.csdn.net/abc20002929/article/details/8520063
[3] Epipolar constraint: http://blog.csdn.net/tianwaifeimao/article/details/19544861
[4] createStereoBM: http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#cv2.createStereoBM
[5] Feature engineering: BRIEF: http://dnntool.com/2017/03/27/brief/