I've been learning eye-to-hand calibration recently. At first I kept putting off writing my own program, hoping to find something ready-made online, but nothing turned up. In the end I bit the bullet, studied the matrix-transformation theory, and tried writing it in Python.
The approach: after capturing a set of photos of the calibration board (Figure 1),
use OpenCV's findChessboardCorners() to detect the chessboard corners (Figure 2),
which gives the real-world target coordinates (3D) and the corner coordinates in the image (2D); then OpenCV's solvePnP() recovers the rotation and translation from the calibration board to the camera.
During the actual coding I ran into a series of problems that stumped me for a long time.
The first problem:
rvec, tvec = cv2.solvePnP(np.array(object_points), np.array(image_points), cv2.UMat(camera_matrix), cv2.UMat(dist_coeffs))
It threw the following error:
Exception raised: error
OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\solvepnp.cpp:825: error: (-215:Assertion failed) ( (npoints >= 4) || (npoints == 3 && flags == SOLVEPNP_ITERATIVE && useExtrinsicGuess) || (npoints >= 3 && flags == SOLVEPNP_SQPNP) ) && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function 'cv::solvePnPGeneric'
File "C:\Users\z\Desktop\py\eye2hand\board2camera.py", line 42, in get_calibration_parameters
rvec, tvec = cv2.solvePnP(np.array(object_points), np.array(image_points), cv2.UMat(camera_matrix), cv2.UMat(dist_coeffs))
File "C:\Users\z\Desktop\py\eye2hand\board2camera.py", line 62, in <module>
rvec, pnp_t = get_calibration_parameters(object_points, image_points, camera_matrix, dist_coeffs)
cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\solvepnp.cpp:825: error: (-215:Assertion failed) ( (npoints >= 4) || (npoints == 3 && flags == SOLVEPNP_ITERATIVE && useExtrinsicGuess) || (npoints >= 3 && flags == SOLVEPNP_SQPNP) ) && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function 'cv::solvePnPGeneric'
Troubleshooting:
After ruling out image-path problems, I found the cause was a mismatch in the input data.
object_points must be a two-dimensional array in which each row is the coordinate of one object point. With my 9x6 chessboard each image contributes 54 corners, so after combining the images object_points should be an N x 3 array, where N is the total number of detected corners: 54 × 14.
Likewise, image_points must be a two-dimensional array in which each row is one image-point coordinate. Since I tested with 14 images, image_points should be an N x 2 array with the same N = 54 × 14.
As Figure 3 shows, neither of my arrays was two-dimensional, so the input requirement wasn't met.
Solution:
Reshape both arrays into two-dimensional form.
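A minimal numpy-only sketch of the shapes solvePnP expects; the corner array here is a random stand-in for real detections from findChessboardCorners (which returns shape (N, 1, 2)):

```python
import numpy as np

# Build the 54 object points for one 9x6 board (unit square size).
board_cols, board_rows = 9, 6
objp = np.zeros((board_cols * board_rows, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)

# findChessboardCorners returns corners shaped (54, 1, 2);
# drop the middle axis to get the (N, 2) layout solvePnP expects.
corners = np.random.rand(54, 1, 2).astype(np.float32)  # stand-in for detections
imgp = corners.reshape(-1, 2)

print(objp.shape, imgp.shape)  # (54, 3) (54, 2)
```

Both arrays should also be float32 (or float64); integer arrays trip the same assertion.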
Running the program again produced a second error:
Exception raised: ValueError
too many values to unpack (expected 2)
File "C:\Users\z\Desktop\py\eye2hand\board2camera.py", line 42, in get_calibration_parameters
rvec, tvec = cv2.solvePnP(np.array(object_points), np.array(image_points), cv2.UMat(camera_matrix), cv2.UMat(dist_coeffs))
File "C:\Users\z\Desktop\py\eye2hand\board2camera.py", line 62, in <module>
rvec, pnp_t = get_calibration_parameters(object_points, image_points, camera_matrix, dist_coeffs)
ValueError: too many values to unpack (expected 2)
It turns out solvePnP returns more than two values — a success flag in addition to rvec and tvec — while I was unpacking only two, so the extra value gets caught with '_':
_, rvec, tvec= cv2.solvePnP(np.array(object_points), np.array(image_points), cv2.UMat(camera_matrix), cv2.UMat(dist_coeffs))
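The error itself is just Python tuple unpacking; a stand-in tuple reproduces it without OpenCV:

```python
# cv2.solvePnP returns three values: (retval, rvec, tvec).
result = (True, "rvec", "tvec")  # stand-in for solvePnP's return tuple
try:
    rvec, tvec = result          # two names for three values -> ValueError
except ValueError as e:
    print(e)                     # too many values to unpack (expected 2)
ok, rvec, tvec = result          # correct: the flag can also go into '_'
```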
With that, the problem was solved.
Rotation Matrix:
[[ 0.99426456 -0.00119183 -0.10694183]
[-0.00344484 0.99906216 -0.04316168]
[ 0.10689298 0.04328253 0.993328 ]]
Translation Vector:
[[ -9.73908016]
[-11.06567888]
[ 59.21845568]]
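The 3×3 matrix above comes from cv2.Rodrigues, which expands the 3-element rotation vector returned by solvePnP into a rotation matrix. A numpy sketch of the same formula (not the OpenCV call itself) shows the conversion:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula),
    the same conversion cv2.Rodrigues performs."""
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)          # rotation angle = vector magnitude
    if theta < 1e-12:
        return np.eye(3)                  # zero rotation
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0, -k[2], k[1]],       # skew-symmetric cross-product matrix
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = rodrigues([0.1, -0.2, 0.3])
# A valid rotation matrix satisfies R.T @ R = I and det(R) = 1.
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```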
Finally, here is the full program:
import cv2
import numpy as np
import os

# Extract chessboard corners from every calibration image in a folder
def extract_chessboard_corners(folder_path):
    # Collect the calibration images from the folder
    calibration_images = [file for file in os.listdir(folder_path) if file.startswith('ColorMap')]
    # Chessboard inner-corner grid; square size is 25 mm
    board_size = (9, 6)
    # Lists holding the 3D object points and 2D image points
    object_points = []
    image_points = []
    # Read each calibration image and extract its corners
    for i, image_file in enumerate(calibration_images):
        # Read the color image
        image_file = 'ColorMap' + str(i) + '.png'
        color_image_path = os.path.join(folder_path, image_file)
        color_image_path = os.path.normpath(color_image_path)  # normalize the path
        color_image = cv2.imread(color_image_path)
        # Convert to grayscale
        gray_image = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
        # Detect the chessboard corners
        ret, corners = cv2.findChessboardCorners(gray_image, board_size, None)
        # If corners were found, append them to the lists
        if ret:
            object_points.append(np.zeros((54, 3), np.float32))
            object_points[-1][:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
            image_points.append(corners.reshape(-1, 2))
            # Per-image pose (result not used further here)
            _, rvec, tvec = cv2.solvePnP(object_points[-1], image_points[-1], camera_matrix, dist_coeffs)
        else:
            print("No corners found")
    return object_points, image_points

# Solve PnP for the combined point set
def get_calibration_parameters(object_points, image_points, camera_matrix, dist_coeffs):
    _, rvec, pnp_t = cv2.solvePnP(np.array(object_points), np.array(image_points), cv2.UMat(camera_matrix), cv2.UMat(dist_coeffs))
    return rvec, pnp_t

# Folder containing the calibration images
folder_path = './data'
# Camera intrinsic matrix
camera_matrix = np.array([[2426.171811863809, 0, 977.4637153834994], [0, 2426.671349554621, 613.0850541358279], [0, 0, 1]])
# Camera distortion coefficients
dist_coeffs = np.array([-0.02665168168678513, 0.08966277495184476, -0.0003435021135962532, -0.0003253252064104588, 0])
# Extract the chessboard corners
temp_object, temp_image = extract_chessboard_corners(folder_path)
# Concatenate the per-image 3D arrays into one 2-D array
object_points = np.concatenate(temp_object)
# Squeeze out any redundant size-1 axis, then concatenate into a 2-D array
ttemp_image = np.squeeze(temp_image)
image_points = np.concatenate(ttemp_image)
# Solve for the calibration parameters
rvec, pnp_t = get_calibration_parameters(object_points, image_points, camera_matrix, dist_coeffs)
# Convert rvec and tvec from cv2.UMat back to NumPy arrays
rvec = rvec.get()
pnp_R, _ = cv2.Rodrigues(rvec)
pnp_t = pnp_t.get()
# Print the results
print("Object Points:")
print(np.array(object_points))
print("Image Points:")
print(np.array(image_points))
print("Rotation Matrix:")
print(pnp_R)
print("Translation Vector:")
print(pnp_t)
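A quick way to sanity-check a recovered pose is to reproject the 3D points and compare against the detected corners. cv2.projectPoints does this with distortion included; the numpy sketch below (synthetic data, distortion ignored) shows the bare pinhole version of that check:

```python
import numpy as np

def reproject(obj_pts, R, t, K):
    """Project Nx3 board-frame points to pixels with pose (R, t) and intrinsics K."""
    cam = obj_pts @ R.T + t.ravel()   # board frame -> camera frame
    uv = cam @ K.T                    # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

# Synthetic example: identity pose, points 10 units in front of the camera.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
obj = np.array([[0.0, 0, 10], [1, 0, 10], [0, 1, 10]])
px = reproject(obj, np.eye(3), np.zeros(3), K)
print(px[0])  # the board origin projects to the principal point (320, 240)
```

With real data, the mean distance between the reprojected points and image_points (the reprojection error) should be small — typically well under a pixel for a good calibration.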