ZED depth estimation

Depth map: each pixel value is an estimate of the distance from the camera to the object surface.

The conversion equation to get real depth from disparity is:
Z = fB/d
where Z = distance along the camera Z axis
f = focal length (in pixels); use the fx given by the ZED calibration
B = baseline (in mm)
d = disparity (in pixels)
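
As a quick sanity check of the formula, here is a minimal NumPy sketch; fx and the disparity values are made-up example numbers, and 120 mm is the baseline of the original ZED:

    import numpy as np

    fx = 700.0           # focal length in pixels (example value; take the real fx from the ZED calibration)
    baseline_mm = 120.0  # baseline of the original ZED

    disparity = np.array([[35.0, 70.0],
                          [17.5,  0.0]])   # disparity in pixels; 0 means no stereo match

    # Z = f * B / d, guarding against division by zero
    depth_mm = np.where(disparity > 0,
                        fx * baseline_mm / np.maximum(disparity, 1e-6),
                        np.inf)
    print(depth_mm)      # e.g. 700 * 120 / 35 = 2400 mm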

ZED intrinsic parameters for both the left and right sensors at each resolution:
fx and fy are the focal length in pixels.
cx and cy are the optical center coordinates in pixels.
k1 and k2 are distortion parameters.
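
These values can also be read at runtime. The sketch below assumes the pyzed Python API with the SDK 3.x attribute layout; the path to calibration_parameters has moved between SDK versions, so check the documentation for the version you actually have:

    import pyzed.sl as sl

    zed = sl.Camera()
    init = sl.InitParameters()
    init.camera_resolution = sl.RESOLUTION.HD720   # intrinsics differ per resolution

    if zed.open(init) != sl.ERROR_CODE.SUCCESS:
        raise RuntimeError("could not open the ZED camera")

    # SDK 3.x layout (assumption): camera_configuration.calibration_parameters.left_cam
    left = zed.get_camera_information().camera_configuration.calibration_parameters.left_cam
    print("fx, fy:", left.fx, left.fy)
    print("cx, cy:", left.cx, left.cy)
    print("distortion:", left.disto)               # k1, k2, ... coefficients
    zed.close()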

Obtaining the focal length in pixels:

  1. Calibration:
    OpenCV camera calibration or the Matlab camera calibration toolbox (Bouguet):
    take 10-20 images of a checkerboard.
    The intrinsic parameters give you the true optical center in pixels and the focal length in pixels (a minimal OpenCV sketch is given below, after method 2).

  2. Compute it from a formula

focal_length_in_pixels = focal_length_in_mm * image_width_in_pixels / sensor_width_mm

(image_width_in_pixels / sensor_width_mm is the number of pixels per world unit in the x and y directions respectively, i.e. the size of the individual imager elements; this is the Sx used below.)
In other words:

fx = F * Sx
fy = F * Sy

Sx and Sy cannot be measured directly via any camera calibration process, and neither is the physical focal length F directly measurable.
Only the combinations fx = F * Sx and fy = F * Sy can be derived without dismantling the camera.
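
A small numeric illustration of the formula, using made-up lens and sensor numbers (not the ZED's actual specifications):

    focal_length_mm = 2.8       # physical focal length F (hypothetical)
    sensor_width_mm = 4.8       # active sensor width (hypothetical)
    image_width_px = 1280       # e.g. HD720 image width

    s_x = image_width_px / sensor_width_mm   # Sx: pixels per mm on the sensor, ~266.7
    fx = focal_length_mm * s_x               # fx = F * Sx, ~746.7 pixels
    print(s_x, fx)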

This method is only approximate (because of mechanical inaccuracies), but it is usually close enough for accurate pose estimation.
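
For method 1 (checkerboard calibration), here is a minimal OpenCV sketch. The board pattern, square size, and image file names are assumptions; adapt them to the board and images you actually use:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)        # inner corners per row/column of the printed board (assumption)
    square_mm = 25.0        # printed square size in mm (assumption)

    # 3D corner coordinates of the board in its own plane (Z = 0)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_left_*.png"):   # the 10-20 checkerboard images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]            # (width, height)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("fx =", K[0, 0], "fy =", K[1, 1], "cx =", K[0, 2], "cy =", K[1, 2])
    print("distortion:", dist.ravel())           # k1, k2, p1, p2, k3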

Another method:

If you know the horizontal field of view, say in degrees:

focal_pixel = (image_width_in_pixels * 0.5) / tan(FOV * 0.5 * PI/180)
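
For example, with a 1280-pixel-wide image and an assumed horizontal FOV of 90 degrees (use the value reported for your own camera):

    import math

    image_width_px = 1280
    hfov_deg = 90.0      # assumed horizontal field of view

    focal_pixel = (image_width_px * 0.5) / math.tan(hfov_deg * 0.5 * math.pi / 180.0)
    print(focal_pixel)   # 640.0 px for these numbers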

About the field of view (FOV):
FOV = 2 * arctan(Y1 / f)
where Y1 is half the sensor dimension and f is the focal length (both in the same units, e.g. mm)
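
The same relation the other way round, with hypothetical sensor and lens numbers:

    import math

    y1_mm = 2.4       # Y1: half of a hypothetical 4.8 mm sensor width
    focal_mm = 2.8    # f: hypothetical focal length

    fov_deg = 2.0 * math.degrees(math.atan(y1_mm / focal_mm))
    print(fov_deg)    # about 81.2 degrees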

There is a concept called "the rule of 16", which says that the usable, actual sensor diagonal for a 1" tube is 16 mm.


Pinhole camera model:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
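
The pinhole model in that reference projects a 3D camera-frame point to pixel coordinates as u = fx*X/Z + cx, v = fy*Y/Z + cy. A tiny NumPy sketch with assumed example intrinsics:

    import numpy as np

    fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0    # assumed example intrinsics
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    P = np.array([0.5, -0.2, 2.0])   # 3D point in camera coordinates
    uvw = K @ P                      # homogeneous image coordinates
    u, v = uvw[:2] / uvw[2]
    print(u, v)                      # u = 700*0.5/2 + 640 = 815, v = 700*(-0.2)/2 + 360 = 290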
