Hand-eye calibration determines the relative pose between a robot arm and a camera. This article performs the calibration with OpenCV's calibrateHandEye() function.
The calibrateHandEye() function
# Each input is a list of per-frame transformation matrices -> [array1, array2, ...]
R_cam2gripper, T_cam2gripper = calibrateHandEye(R_gripper2base, T_gripper2base, R_target2camera, T_target2camera)
Note: the end-effector pose read from the robot arm is Ts_end_to_base.
Eye-in-hand
Here the camera is mounted on the robot's end effector, and calibration must recover the transform from the camera frame to the end-effector frame.
For the eye-in-hand case, the inputs and outputs of calibrateHandEye() are:
"""
R_cam2gripper, T_cam2gripper = calibrateHandEye(R_gripper2base, T_gripper2base, R_target2camera, T_target2camera)
R_gripper2base : R_end_to_base
T_gripper2base : T_end_to_base
R_target2camera : R_board_to_camera
T_target2camera : T_board_to_camera
R_cam2gripper : R_camera_to_end
T_cam2gripper : T_camera_to_end
"""
Eye-to-hand
Here the camera is fixed at some position in the world, and calibration must recover the transform from the camera frame to the robot base frame.
For the eye-to-hand case, the gripper-to-base arguments must be fed the inverted robot poses (base to end effector), so the inputs and outputs of calibrateHandEye() become:
"""
R_cam2gripper, T_cam2gripper = calibrateHandEye(R_gripper2base, T_gripper2base, R_target2camera, T_target2camera)
R_gripper2base : R_base_to_end
T_gripper2base : T_base_to_end
R_target2camera : R_board_to_camera
T_target2camera : T_board_to_camera
R_cam2gripper : R_camera_to_base
T_cam2gripper : T_camera_to_base
"""
Code implementation (eye-to-hand example)
marker_detect.py
Detect the calibration board's feature points in pixel coordinates, generate the corresponding 3D board coordinates, and compute Ts_board_to_camera.
import numpy as np
import time
import cv2
import cv2.aruco as aruco
# Camera intrinsics
mtx = np.array([
    [916.28, 0, 650.7],
    [0, 916.088, 352.941],
    [0, 0, 1],
])
# Distortion coefficients (zero here; substitute your camera's calibrated values)
dist = np.array([0, 0, 0, 0, 0])
# Convert Euler angles (radians, XYZ order) to a rotation matrix
def eulerAnglesToRotationMatrix(theta):
    # Rotation about the x-axis
    R_x = np.array([[1, 0, 0],
                    [0, np.cos(theta[0]), -np.sin(theta[0])],
                    [0, np.sin(theta[0]), np.cos(theta[0])]])
    # Rotation about the y-axis
    R_y = np.array([[np.cos(theta[1]), 0, np.sin(theta[1])],
                    [0, 1, 0],
                    [-np.sin(theta[1]), 0, np.cos(theta[1])]])
    # Rotation about the z-axis
    R_z = np.array([[np.cos(theta[2]), -np.sin(theta[2]), 0],
                    [np.sin(theta[2]), np.cos(theta[2]), 0],
                    [0, 0, 1]])
    # Combined rotation matrix R = R_z * R_y * R_x
    R = np.dot(R_z, R_y.dot(R_x))
    return R
# Detect the calibration-board feature points; input: image
# Sort the detected corners by marker id
def aruco_detect(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    aruco_dict = aruco.Dictionary_get(