Kinect Azure + Open3D: getting the camera intrinsics, depth2pointcloud

Reference: https://blog.csdn.net/weixin_42532587/article/details/112258490

The RGB and depth images are captured with the Azure Kinect SDK.

Material on reading the intrinsics can be found at https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/calibration. After setting up the Kinect environment variables, build calibration/main.cpp (or load the CMakeLists.txt in Visual Studio 2017 and build); this produces calibration_info.exe. With an Azure Kinect connected, run it and redirect its output:

```
calibration_info.exe > out.txt
```
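If you would rather stay in Python, the same intrinsics can be read through the third-party pyk4a bindings. A minimal sketch, assuming pyk4a is installed and that its `Calibration.get_camera_matrix` / `get_distortion_coefficients` helpers exist in your installed version:

```python
from pyk4a import PyK4A, CalibrationType

# Open the device with its default configuration and start the cameras
k4a = PyK4A()
k4a.start()

# 3x3 camera matrix and distortion coefficients of the depth camera
# (use CalibrationType.COLOR for the color camera instead)
K = k4a.calibration.get_camera_matrix(CalibrationType.DEPTH)
dist = k4a.calibration.get_distortion_coefficients(CalibrationType.DEPTH)
print(K)
print(dist)

k4a.stop()
```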

Below are some Azure Kinect intrinsics for reference.

Intrinsics at 640×576, mode K4A_DEPTH_MODE_NFOV_UNBINNED: depth is captured at 640x576, and passive IR is also captured at 640x576.

```
Found 1 connected devices:

===== Device 0: {DeviceID:XXXXXX} =====
resolution width: 640
resolution height: 576
principal point x: 324.021362
principal point y: 318.569580
focal length x: 504.390198
focal length y: 504.617188
radial distortion coefficients:
k1: 0.224424
k2: -0.066181
k3: -0.000697
k4: 0.565078
k5: -0.062617
k6: -0.011959
center of distortion in Z=1 plane, x: 0.000000
center of distortion in Z=1 plane, y: 0.000000
tangential distortion coefficient x: 0.000028
tangential distortion coefficient y: 0.000063
metric radius: 0.000000
```
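These numbers drop straight into an Open3D pinhole intrinsic for later back-projection. A minimal sketch using the NFOV_UNBINNED values above (note that Open3D's pinhole model carries no distortion terms, so the k1..k6/p1/p2 coefficients are not represented; undistort beforehand or let the SDK's transformation functions handle it):

```python
import open3d as o3d

# Depth-camera intrinsics for K4A_DEPTH_MODE_NFOV_UNBINNED, from the dump above
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    640, 576,                # width, height
    504.390198, 504.617188,  # fx, fy
    324.021362, 318.569580)  # cx, cy
print(intrinsic.intrinsic_matrix)
```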

Intrinsics for K4A_DEPTH_MODE_WFOV_UNBINNED: depth is captured at 1024x1024, and passive IR is also captured at 1024x1024.

```
Found 1 connected devices:

===== Device 0: 000576395012 =====
resolution width: 1024
resolution height: 1024
principal point x: 516.021362
principal point y: 498.569580
focal length x: 504.390198
focal length y: 504.617188
radial distortion coefficients:
k1: 0.224424
k2: -0.066181
k3: -0.000697
k4: 0.565078
k5: -0.062617
k6: -0.011959
center of distortion in Z=1 plane, x: 0.000000
center of distortion in Z=1 plane, y: 0.000000
tangential distortion coefficient x: 0.000028
tangential distortion coefficient y: 0.000063
metric radius: 0.000000
```

Color camera parameters from calibration.color_camera_calibration (1080p):

```
Found 1 connected devices:

===== Device 0: 000576395012 =====
resolution width: 1920
resolution height: 1080
principal point x: 959.184265
principal point y: 545.431702
focal length x: 913.492004
focal length y: 913.715515
radial distortion coefficients:
k1: 0.521646
k2: -2.740919
k3: 1.595894
k4: 0.401455
k5: -2.563089
k6: 1.521002
center of distortion in Z=1 plane, x: 0.000000
center of distortion in Z=1 plane, y: 0.000000
tangential distortion coefficient x: 0.000542
tangential distortion coefficient y: 0.000067
metric radius: 0.000000
```
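With the color intrinsics above, Open3D's built-in constructors give the depth2pointcloud conversion in a few lines. A minimal sketch, assuming the depth image has already been transformed into the color camera geometry (e.g. with the SDK's k4a_transformation_depth_image_to_color_camera) and saved as a 16-bit PNG in millimeters; the file names are placeholders:

```python
import open3d as o3d

# 1080p color-camera intrinsics from the dump above
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    1920, 1080, 913.492004, 913.715515, 959.184265, 545.431702)

# Placeholder file names; the depth image must be registered to the color camera
color = o3d.io.read_image("color.png")
depth = o3d.io.read_image("depth_to_color.png")

# depth_scale=1000 interprets raw depth values as millimeters; truncate beyond 3 m
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```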

https://blog.csdn.net/zxxxiazai/article/details/108168120 (a dedicated calibration is probably still needed)

https://blog.csdn.net/z504727099/article/details/119084764 (running ORB-SLAM2 with the Azure Kinect)

### Answer 1:

Below is example code for stereo camera calibration, done with OpenCV:

```python
import numpy as np
import cv2

# Camera intrinsic matrix (placeholder values; use your own camera's fx, fy, cx, cy)
fx, fy = 913.492004, 913.715515
cx, cy = 959.184265, 545.431702
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0, 0, 1]])

# Chessboard pattern size (inner corners)
pattern_size = (6, 9)

# Prepare the object points
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

# Arrays holding the object points and image points
obj_points = []    # 3D points in the world frame
img_points_l = []  # 2D points in the left image plane
img_points_r = []  # 2D points in the right image plane

# Read the left and right camera images
img_left = cv2.imread('left.png')
img_right = cv2.imread('right.png')

# Find the chessboard corners
found_left, corners_left = cv2.findChessboardCorners(img_left, pattern_size, None)
found_right, corners_right = cv2.findChessboardCorners(img_right, pattern_size, None)

if found_left and found_right:
    # Refine the corners on both image planes to sub-pixel accuracy
    gray_left = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners_left = cv2.cornerSubPix(gray_left, corners_left, (11, 11), (-1, -1), criteria)
    corners_right = cv2.cornerSubPix(gray_right, corners_right, (11, 11), (-1, -1), criteria)

    # Draw and display the detected corners
    cv2.drawChessboardCorners(img_left, pattern_size, corners_left, found_left)
    cv2.drawChessboardCorners(img_right, pattern_size, corners_right, found_right)
    cv2.imshow('Left Image', img_left)
    cv2.imshow('Right Image', img_right)
    cv2.waitKey(0)

    # Store the object points and image points
    obj_points.append(objp)
    img_points_l.append(corners_left)
    img_points_r.append(corners_right)

    # Stereo calibration
    rms, K_left, D_left, K_right, D_right, R, T, E, F = cv2.stereoCalibrate(
        obj_points, img_points_l, img_points_r,
        K, None, K, None, gray_left.shape[::-1],
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Print the calibration results
    print('RMS:', rms)
    print('K_left:', K_left)
    print('D_left:', D_left)
    print('K_right:', K_right)
    print('D_right:', D_right)
    print('R:', R)
    print('T:', T)
    print('E:', E)
    print('F:', F)

    # Save the calibration results
    np.savez('stereo_calib.npz', K_left=K_left, D_left=D_left,
             K_right=K_right, D_right=D_right, R=R, T=T, E=E, F=F)
else:
    print('Chessboard not found.')
```

In the code above, we first define the camera intrinsic matrix `K` and the chessboard pattern size `pattern_size`. We then prepare the object points `objp`, which hold the coordinates of every inner corner of the chessboard. Next, we read the left and right camera images, locate the chessboard corners in both with OpenCV's `cv2.findChessboardCorners()`, and refine them to more precise positions with `cv2.cornerSubPix()`.

The object points and image points are then stored in arrays and passed to `cv2.stereoCalibrate()` to run the stereo calibration. Finally, the calibration results are printed and saved to a file.

Note that this is only a simple stereo-calibration example; a real application will likely need a more thorough procedure (many image pairs from different viewpoints) and has to account for more error sources.
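As a follow-up, the saved R/T pair is usually consumed by `cv2.stereoRectify` to build rectification maps. A minimal continuation sketch, assuming the `stereo_calib.npz` file produced above and a placeholder image size:

```python
import cv2
import numpy as np

calib = np.load('stereo_calib.npz')
image_size = (1920, 1080)  # placeholder: use the size of your calibration images

# Rectification transforms, projection matrices, and disparity-to-depth mapping Q
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    calib['K_left'], calib['D_left'], calib['K_right'], calib['D_right'],
    image_size, calib['R'], calib['T'])

# Precompute undistort + rectify maps, then remap each incoming frame
map1x, map1y = cv2.initUndistortRectifyMap(
    calib['K_left'], calib['D_left'], R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(
    calib['K_right'], calib['D_right'], R2, P2, image_size, cv2.CV_32FC1)

left_rect = cv2.remap(cv2.imread('left.png'), map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(cv2.imread('right.png'), map2x, map2y, cv2.INTER_LINEAR)
```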
### Answer 2:

An Open3D depth-to-point-cloud conversion involves roughly the following steps:

1. Import Open3D and the other required libraries.
2. Read the color and depth images and convert them to arrays.
3. Define the camera intrinsics (fx, fy, cx, cy).
4. Back-project the pixels: turn each pixel coordinate plus its depth value into a point in camera coordinates.
5. Transform the camera-frame points into the world frame (with an identity extrinsic the two frames coincide).
6. Add the points and colors to a `PointCloud` object.
7. Downsample the cloud (optional).

Here is an example:

```python
import open3d as o3d
import numpy as np

# Intrinsics from your calibration (here: the NFOV_UNBINNED depth values above)
fx, fy = 504.390198, 504.617188
cx, cy = 324.021362, 318.569580

# Read the color and depth images (depth: 16-bit PNG in millimeters,
# registered to the same camera as the color image)
color = np.asarray(o3d.io.read_image("color_image.png"))
depth = np.asarray(o3d.io.read_image("depth_image.png")).astype(np.float64) / 1000.0

height, width = depth.shape

# Back-project every pixel into camera coordinates: X = z * K^-1 [u, v, 1]^T
u, v = np.meshgrid(np.arange(width), np.arange(height))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# The camera frame is taken as the world frame here (identity extrinsic)
valid = points[:, 2] > 0  # drop pixels without a depth reading
pointcloud = o3d.geometry.PointCloud()
pointcloud.points = o3d.utility.Vector3dVector(points[valid])
pointcloud.colors = o3d.utility.Vector3dVector(color.reshape(-1, 3)[valid] / 255.0)

# Voxel downsampling
downsampled = pointcloud.voxel_down_sample(voxel_size=0.01)
```

This is only a simple example; the exact implementation depends on the camera model and the depth-image format used. Adjust the code to your actual setup.
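A brief usage note: the resulting cloud can be written out and inspected with Open3D's standard I/O and visualization helpers ("cloud.ply" is a placeholder file name):

```python
import open3d as o3d

# Continues from the sketch above: "downsampled" is the voxel-downsampled cloud
o3d.io.write_point_cloud("cloud.ply", downsampled)

# Reload and view it later
pcd = o3d.io.read_point_cloud("cloud.ply")
o3d.visualization.draw_geometries([pcd])
```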