Depth map: each pixel value is an estimate of the distance from the camera to the surface of the object.
The conversion equation to get real depth from disparity is:
Z = fB/d
where Z = distance along the camera Z axis
f = focal length (in pixels); use the fx value reported by the ZED
B = baseline (in mm)
d = disparity (in pixels)
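The conversion above can be sketched in Python; the numbers in the example (fx, baseline, disparity) are assumed values for illustration, not actual ZED calibration output:

```python
def depth_from_disparity(f_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Z = f * B / d. Z comes out in the same unit as the baseline (mm here)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_mm / disparity_px

# Assumed example: fx = 700 px, baseline = 120 mm, disparity = 42 px
z_mm = depth_from_disparity(700.0, 120.0, 42.0)  # 2000 mm, i.e. 2 m
```

Note that a larger disparity means a closer object, so Z falls as d grows.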
ZED intrinsic parameters for both left and right sensors at each resolution:
fx and fy are the focal length in pixels.
cx and cy are the optical center coordinates in pixels.
k1 and k2 are distortion parameters.
Obtaining the focal length in pixels:
-
Calibration:
OpenCV camera calibration or Matlab camera calibration toolbox (Bouguet):
take 10-20 images of a checkerboard.
The intrinsic parameters give you the true center of the lens in pixels and the focal length in pixels.
-
Compute it from a formula:
focal_length_in_pixels = focal_length_in_mm * image_width_in_pixels / sensor_width_in_mm
(image_width_in_pixels / sensor_width_in_mm is the number of pixels per world unit in the x direction, i.e. it reflects the size of the individual imager elements; this is Sx below, with Sy defined analogously for y.)
That is:
fx = F * Sx
fy = F * Sy
Sx and Sy cannot be measured directly via any camera calibration process, and neither is the physical focal length F directly measurable.
Only the combinations fx = F * Sx and fy = F * Sy can be derived without dismantling the camera.
This method is close enough for accurate pose estimation (despite mechanical inaccuracies).
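The formula above can be sketched as follows; the lens and sensor numbers in the example are made up for illustration:

```python
def focal_length_px(focal_mm: float, image_width_px: int, sensor_width_mm: float) -> float:
    # Sx = pixels per mm on the sensor; fx = F * Sx
    sx = image_width_px / sensor_width_mm
    return focal_mm * sx

# Assumed example: a 4 mm lens, a 1280 px wide image, a 4.8 mm wide sensor
fx = focal_length_px(4.0, 1280, 4.8)  # about 1066.67 px
```

Only the product F * Sx is recoverable this way, matching the point above that F and Sx cannot be measured separately.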
Alternative method:
if you know the horizontal field of view, say in degrees,
focal_pixel = (image_width_in_pixels * 0.5) / tan(FOV * 0.5 * PI/180)
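A minimal Python version of this FOV-based formula, where `FOV * 0.5 * PI/180` is just the half-angle converted to radians; the 1280 px width and 90° FOV in the example are assumed values:

```python
import math

def focal_px_from_hfov(image_width_px: int, hfov_deg: float) -> float:
    # focal_pixel = (width / 2) / tan(FOV / 2), with the FOV converted to radians
    return (image_width_px * 0.5) / math.tan(math.radians(hfov_deg * 0.5))

# Assumed example: 1280 px wide image with a 90 degree horizontal FOV
fx = focal_px_from_hfov(1280, 90.0)  # 640.0 px, since tan(45 deg) = 1
```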
About the Field of View:
FOV = 2 * arctan(Y1 / f)
where Y1 is half the sensor length along the corresponding axis
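This relation is the inverse of the FOV-to-focal-pixel formula above, and can be sketched as follows; the 12 mm values in the example are assumed for illustration:

```python
import math

def fov_deg(half_sensor_mm: float, focal_mm: float) -> float:
    # FOV = 2 * arctan(Y1 / f), returned in degrees
    return math.degrees(2.0 * math.atan(half_sensor_mm / focal_mm))

# Assumed example: half-sensor length 12 mm, focal length 12 mm
fov = fov_deg(12.0, 12.0)  # 90.0 degrees, since arctan(1) = 45 deg
```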
There’s a concept called “the rule of 16”, which says that the usable, actual sensor diagonal for a 1" tube is 16 mm.
Pinhole camera model: