Introduction
My experiments needed the UAV's localization information. GPS or motion capture are the usual, stable choices, but most indoor environments have neither, so the drone has to localize itself. Camera-based algorithms such as VINS and ORB-SLAM3 all need the camera's intrinsic and extrinsic parameters as input.
In theory, cameras of the same model should share identical parameters, but they can drift during manufacturing or use, which makes calibration important.
In earlier experiments I used intrinsics shared online, whose rotation matrices contained only 0s and 1s. That is imprecise, and the experimental results suffered accordingly. To improve accuracy, I learned to calibrate the camera myself with kalibr.
Here are the intrinsic and extrinsic parameters of my D435i.
D435i camera parameters
Intrinsics
Since I run VINS-Fusion for self-localization, I'll share its configuration files.
D435i_left.yaml
%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
# distortion_parameters:
#    k1: -0.1
#    k2: 0.01
#    p1: 5e-5
#    p2: -1e-4
# projection_parameters:
#    fx: 388.81756591796875
#    fy: 388.81756591796875
#    cx: 319.6447448730469
#    cy: 237.4071502685547
distortion_parameters:
   k1: 0.007632535269979
   k2: -0.013282213867321
   p1: 0.001613519137183
   p2: 0.001256099806541
projection_parameters:
   fx: 381.354472717965
   fy: 382.74974724457
   cx: 323.382002464865
   cy: 242.163396985489
D435i_right.yaml
%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
# distortion_parameters:
#    k1: -0.1
#    k2: 0.01
#    p1: 5e-5
#    p2: -1e-4
# projection_parameters:
#    fx: 388.81756591796875
#    fy: 388.81756591796875
#    cx: 319.6447448730469
#    cy: 237.4071502685547
distortion_parameters:
   k1: 0.003777452996663
   k2: -0.005942226689154
   p1: 0.001688457291355
   p2: 0.000840626243983
projection_parameters:
   fx: 381.439076348885
   fy: 382.781558524495
   cx: 323.799585452161
   cy: 242.708950600827
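To make the numbers above concrete, here is a minimal pure-Python sketch (not part of VINS-Fusion) of the pinhole projection with the radial-tangential (plumb_bob) distortion that these k1/k2/p1/p2 parameterize, using the left-camera values:

```python
# Pinhole + radial-tangential (plumb_bob) projection,
# using the left-camera intrinsics calibrated above.
fx, fy = 381.354472717965, 382.74974724457
cx, cy = 323.382002464865, 242.163396985489
k1, k2 = 0.007632535269979, -0.013282213867321
p1, p2 = 0.001613519137183, 0.001256099806541

def project(X, Y, Z):
    """Project a 3D point in the camera frame to pixel coordinates."""
    x, y = X / Z, Y / Z                  # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2  # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy

# A point on the optical axis lands exactly on the principal point:
print(project(0.0, 0.0, 1.0))  # (323.382002464865, 242.163396985489)
```

This is only a sanity check; VINS-Fusion evaluates the same model internally through camodocal.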
Extrinsics
The IMU used here is the D435i's built-in IMU.
D435i_realsense_stereo_imu_config.yaml
%YAML:1.0
---
#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;
imu: 1
num_of_cam: 2
imu_topic: "/camera/imu"
image0_topic: "/camera/infra1/image_rect_raw"
image1_topic: "/camera/infra2/image_rect_raw"
output_path: "~/output/"
cam0_calib: "D435i_left.yaml"
cam1_calib: "D435i_right.yaml"
image_width: 640
image_height: 480
# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1   # 0  Have accurate extrinsic parameters; trust the following imu^R_cam, imu^T_cam, don't change them.
                        # 1  Have an initial guess of the extrinsic parameters; the estimator will optimize around it.
# body_T_cam0: !!opencv-matrix
#    rows: 4
#    cols: 4
#    dt: d
#    data: [ 0.9999202097864208,  0.0020216727466406, -0.0124694386395178, -0.0060393592650275,
#           -0.0021078333909270,  0.9999739698836603, -0.0069004777740324, -0.0188520327459142,
#            0.0124551635507247,  0.0069262106825675,  0.9998984430963509,  0.0189713118846234,
#            0, 0, 0, 1 ]
# body_T_cam1: !!opencv-matrix
#    rows: 4
#    cols: 4
#    dt: d
#    data: [ 0.9999447194362803,  0.0020474649944443, -0.0103133873482650,  0.0419288393042368,
#           -0.0020872338059752,  0.9999904229336581, -0.0038467513570324, -0.0189071108664730,
#            0.0103054124875243,  0.0038680651571764,  0.9999394164375169,  0.0196587335326899,
#            0, 0, 0, 1 ]
body_T_cam0: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1, 0, 0, -0.0040393592650275,
           0, 1, 0,  0,
           0, 0, 1,  0,
           0, 0, 0,  1 ]
body_T_cam1: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1, 0, 0,  0.0419288393042368,
           0, 1, 0,  0,
           0, 0, 1,  0,
           0, 0, 0,  1 ]
#Multiple thread support
multiple_thread: 1
#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 30 # min distance between two features
freq: 10                # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
flow_back: 1 # perform forward and backward optical flow to improve feature tracking accuracy
#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 0.04 # accelerometer measurement noise standard deviation. #0.2 0.04
gyr_n: 0.004 # gyroscope measurement noise standard deviation. #0.05 0.004
acc_w: 0.002            # accelerometer bias random walk noise standard deviation.  #0.002
gyr_w: 4.0e-5           # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.805 # gravity magnitude
#unsynchronization parameters
estimate_td: 1 # online estimate time offset between camera and imu
td: -0.01157588930057003    # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#loop closure parameters
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/dji/output/pose_graph/" # save and load path
save_image: 1           # save images in the pose graph for visualization purposes; disable by setting 0
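As a quick sanity check on the body_T_cam0 / body_T_cam1 extrinsics in the config above (a pure-Python sketch, not part of VINS-Fusion), composing inv(body_T_cam0) with body_T_cam1 gives the cam0-to-cam1 transform, i.e. the stereo baseline the optimizer starts from:

```python
# Check the stereo baseline implied by the two body_T_cam extrinsics.
# With identity rotations, cam0 -> cam1 reduces to the translation difference.

def invert(T):
    """Invert a 4x4 rigid transform [R t; 0 1]: inverse is [R^T, -R^T t]."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]        # transpose
    ti = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return [Rt[0] + [ti[0]], Rt[1] + [ti[1]], Rt[2] + [ti[2]], [0, 0, 0, 1]]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

body_T_cam0 = [[1, 0, 0, -0.0040393592650275],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]
body_T_cam1 = [[1, 0, 0, 0.0419288393042368],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]

cam0_T_cam1 = matmul(invert(body_T_cam0), body_T_cam1)
baseline = cam0_T_cam1[0][3]       # x-offset between the two infrared cameras
print(round(baseline * 1000, 2), "mm")  # 45.97 mm
```

The roughly 46 mm baseline is a plausible order of magnitude for the D435i's stereo pair, which is a useful sniff test before trusting hand-edited extrinsics.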
That's all for this share, bye~~~