For the conversion formulas among the world, camera, image, and pixel coordinate systems, refer to the article "世界坐标系、相机坐标系、图像坐标系、像素坐标系之间的转换" (Conversions among the world, camera, image, and pixel coordinate systems).
That author explains the image-coordinate-to-world-coordinate formula fairly clearly, but gives no further explanation of how to obtain the value of Zc.
My own project involves converting image coordinates to world coordinates, so I am writing this up here for reference; if my understanding is wrong anywhere, corrections are welcome. The premise throughout is that the origin of the world coordinate system lies on the ground plane.
The above formulas can be written compactly as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T \right)$$

Solving for the world coordinates:

$$\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = Z_c \, R^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} - R^{-1} T$$

Here R, K, and T are all known. To simplify the description, rewrite the terms of this equation as

$$Mat1 = R^{-1} K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}, \qquad Mat2 = R^{-1} T,$$

so that $[X_w, Y_w, Z_w]^T = Z_c \cdot Mat1 - Mat2$.

Looking only at the third row on both sides of this world-coordinate equation:

$$Z_w = Z_c \cdot Mat1(2,0) - Mat2(2,0) \quad\Rightarrow\quad Z_c = \frac{Z_w + Mat2(2,0)}{Mat1(2,0)}$$

Here $Mat2(2,0)$ denotes the third entry of the 3×1 matrix. If the target object lies on the ground, then $Z_w = 0$; if the object sits at some height, then $Z_w$ equals the actual object height. Once $Z_c$ has been computed, the world coordinates follow directly from the pixel coordinates.
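To make the algebra concrete, here is a minimal NumPy sketch that checks the derivation on toy values. The intrinsics K, rotation R, and translation T below are made up purely for illustration: the sketch projects a known ground point forward, then recovers Zc and the world coordinates from the pixel coordinates using the formula above.

import numpy as np

# Toy parameters, chosen only for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera axes aligned with world axes
T = np.array([[0.0], [0.0], [5.0]])   # camera offset from the world origin

Pw = np.array([[1.0], [2.0], [0.0]])  # a point on the ground, so Zw = 0

# Forward projection: Zc * [u, v, 1]^T = K (R Pw + T)
Pc = R @ Pw + T
Zc_true = Pc[2, 0]
uv = (K @ Pc) / Zc_true

# Recover Zc from the pixel coordinates alone, using the derivation above
Mat1 = np.linalg.inv(R) @ np.linalg.inv(K) @ uv
Mat2 = np.linalg.inv(R) @ T
Zc = (0.0 + Mat2[2, 0]) / Mat1[2, 0]  # Zw = 0 for a ground point

print(Zc_true, Zc)       # both print 5.0
print(Zc * Mat1 - Mat2)  # recovers [1, 2, 0]^T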
Finally, here is the 2D-to-3D conversion code, to help make this concrete:
import numpy as np
import cv2

def pts2Dto3D(point2D, rVec, tVec, cameraMat, height):
    """
    Convert given 2D image points back to real-world 3D points.
    point2D  : An array of 2D pixel points
    rVec     : Rotation vector (e.g. from cv2.solvePnP)
    tVec     : Translation vector (e.g. from cv2.solvePnP)
    cameraMat: Camera matrix used in solvePnP
    height   : Height (Zw) of the points in real-world 3D space
    Return   : Output array of 3D points, shape (n, 1, 3)
    """
    point3D = []
    point2D = (np.array(point2D, dtype='float32')).reshape(-1, 2)
    numPts = point2D.shape[0]
    # Append a column of ones: homogeneous pixel coordinates [u, v, 1]
    point2D_op = np.hstack((point2D, np.ones((numPts, 1))))
    rMat = cv2.Rodrigues(rVec)[0]
    rMat_inv = np.linalg.inv(rMat)
    kMat_inv = np.linalg.inv(cameraMat)
    for point in range(numPts):
        uvPoint = point2D_op[point, :].reshape(3, 1)
        tempMat = np.matmul(rMat_inv, kMat_inv)
        tempMat1 = np.matmul(tempMat, uvPoint)    # Mat1 = R^-1 K^-1 [u, v, 1]^T
        tempMat2 = np.matmul(rMat_inv, tVec)      # Mat2 = R^-1 T
        s = (height + tempMat2[2]) / tempMat1[2]  # s = Zc from the derivation above
        p = tempMat1 * s - tempMat2               # [Xw, Yw, Zw]^T = Zc*Mat1 - Mat2
        point3D.append(p)
    point3D = (np.array(point3D, dtype='float32')).reshape([-1, 1, 3])
    return point3D
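A hypothetical call, assuming rVec, tVec, and cameraMat have come from an earlier cv2.solvePnP calibration (the numbers below are stand-ins):

cameraMat = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
rVec = np.zeros((3, 1))                 # stand-in: no rotation
tVec = np.array([[0.0], [0.0], [5.0]])  # stand-in translation

pixels = [[480.0, 560.0], [320.0, 240.0]]
pts = pts2Dto3D(pixels, rVec, tVec, cameraMat, height=0)  # points on the ground
print(pts.shape)  # (2, 1, 3)

One design note: since rMat is a rotation matrix, rMat.T is its inverse and is cheaper and numerically safer than np.linalg.inv(rMat); the second implementation below uses exactly that trick.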
Alternatively, here is a vectorized Python version that processes all points at once and works in an ego-vehicle coordinate frame:
import numpy as np

def imageview2ego(self, image_view_points, camera_intrinsic, ego2camera_matrix, height):
    '''
    Class method (hence `self`) mapping pixel points to the ego frame.
    :param image_view_points: np.ndarray 3*n, homogeneous pixels stacked as columns [[u...], [v...], [1...]]
    :param camera_intrinsic: np.ndarray 3*3
    :param ego2camera_matrix: np.ndarray 4*4, ego -> camera
    :param height: height of the points in the ego frame, default = 0
    :return: np.ndarray 3*n, columns [x_ego, y_ego, height]^T
    '''
    camera_intrinsic_inv = np.linalg.inv(camera_intrinsic)
    # For a rotation matrix, the transpose is the inverse
    R_inv = ego2camera_matrix[:3, :3].T
    T = ego2camera_matrix[:3, 3]
    mat1 = np.dot(np.dot(R_inv, camera_intrinsic_inv), image_view_points)  # Mat1, 3*n
    mat2 = np.dot(R_inv, T)                                                # Mat2, 3-vector
    Zc = (height + mat2[2]) / mat1[2]  # one Zc per column (per point)
    points_ego = Zc * mat1 - np.expand_dims(mat2, 1)
    return points_ego
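A sketch of how this method might be called; `converter` stands in for whatever class owns imageview2ego, and the intrinsics and ego2camera matrix below are made-up values:

camera_intrinsic = np.array([[800.0, 0.0, 320.0],
                             [0.0, 800.0, 240.0],
                             [0.0, 0.0, 1.0]])
ego2camera_matrix = np.eye(4)
ego2camera_matrix[:3, 3] = [0.0, 0.0, 5.0]  # stand-in ego -> camera translation

# Pixels stacked as homogeneous columns, shape 3*n as the docstring requires
image_view_points = np.array([[480.0, 320.0],
                              [560.0, 240.0],
                              [1.0, 1.0]])

points_ego = converter.imageview2ego(
    image_view_points, camera_intrinsic, ego2camera_matrix, height=0)
print(points_ego)  # each column is [x_ego, y_ego, height]^T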