I. Development Environment
- Ubuntu 20.04
- OpenCV 4
- Python 3
II. Inuitive Camera Setup
1. Installing the SDK
1) Uninstalling a previous version
a. Tarball
If the SDK was previously installed by extracting a tarball, uninstall it with:
sudo ./uninstall.sh
The contents of uninstall.sh:
#!/bin/bash
NUM=$1
systemctl stop inuservice@1.service
systemctl stop inuservice@2.service
systemctl stop inuservice@3.service
sudo rm -rf /etc/systemd/system/inuservice1.service
sudo rm -rf /etc/systemd/system/inuservice3.service
sudo rm -rf /etc/systemd/system/multi-user.target.wants/inuservice@3.service
sudo rm -rf /etc/systemd/system/multi-user.target.wants/inuservice@1.service
sudo rm -rf /etc/systemd/system/multi-user.target.wants/inuservice@2.service
sudo rm -rf /etc/systemd/system/inuservice2.service
sudo rm -rf /lib/systemd/system/inuservice@.service
sudo rm -rf /sys/fs/cgroup/devices/system.slice/system-inuservice.slice
sudo rm -rf /sys/fs/cgroup/systemd/system.slice/system-inuservice.slice
sudo rm -rf /sys/fs/cgroup/systemd/system.slice/system-inuservice.slice/inuservice@2.service
sudo rm -rf /sys/fs/cgroup/systemd/system.slice/system-inuservice.slice/inuservice@1.service
sudo rm -rf /sys/fs/cgroup/systemd/system.slice/system-inuservice.slice/inuservice@3.service
dpkg -r inudev
rm -rf /opt/Inuitive
b. .deb package
If it was previously installed from a .deb package, uninstall it with:
sudo dpkg -r inudev
2) Installation
a. Installing from the tarball
x86 platform:
sudo ./inu_installX86.sh
If that fails, use:
sudo bash inu_installX86.sh
The contents of inu_installX86.sh:
#!/bin/bash
NUM=$1
#By default tar extracts with --same-owner: files keep the owner recorded in the archive.
#Passing --no-same-owner (i.e. tar --no-same-owner -zxvf xxxx.tar.gz) makes the user running tar the owner of the extracted files.
tar --no-same-owner -zxvf InuSDK421_U20_130.tar.gz -C /opt/
#sleep 8s
echo "please wait, configuring ..."
if [ "$NUM" = "2" ]
then
sudo bash inu_config.sh 2
elif [ "$NUM" = "3" ]
then
sudo bash inu_config.sh 3
else
sudo bash inu_config.sh
fi
The contents of inu_config.sh:
#!/bin/bash
NUM=$1
# e.g. ./inu_config.sh 1  -> start one service
# e.g. ./inu_config.sh 2  -> start two services
# e.g. ./inu_config.sh    -> default: start one service
if [ "$NUM" = "2" ]
then
echo "2 service"
systemctl stop inuservice@1.service
systemctl stop inuservice@2.service
cp /opt/Inuitive/InuDev/bin/inuservice@.service /lib/systemd/system
systemctl daemon-reload
# multiple modules
systemctl start inuservice@{1..2}.service
systemctl reload-or-restart inuservice@{1..2}.service
systemctl enable inuservice@{1..2}.service
elif [ "$NUM" = "3" ]
then
echo "3 service"
systemctl stop inuservice@1.service
systemctl stop inuservice@2.service
systemctl stop inuservice@3.service
cp /opt/Inuitive/InuDev/bin/inuservice@.service /lib/systemd/system
systemctl daemon-reload
# multiple modules
systemctl start inuservice@{1..3}.service
systemctl reload-or-restart inuservice@{1..3}.service
systemctl enable inuservice@{1..3}.service
else
echo "1 service"
systemctl stop inuservice.service
cp /opt/Inuitive/InuDev/bin/inuservice.service /lib/systemd/system
systemctl daemon-reload
# single module
systemctl start inuservice.service
systemctl reload-or-restart inuservice.service
systemctl enable inuservice.service
fi
b. Installing from a .deb package
sudo dpkg -i inudev_xxxxxx.deb
2. Building the ROS workspace
tar -xzvf catkin_wsxxxx.tar.gz
cd ~/catkin_ws/
catkin_make -DCMAKE_VERBOSE_MAKEFILE=OFF -DCMAKE_BUILD_TYPE=Release -DCATKIN_WHITELIST_PACKAGES="inudev_ros_nodelet"
III. Inuitive Camera Calibration
1. Starting the camera
cd inuchip_ws
source devel/setup.bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/Inuitive/InuDev/lib
roslaunch inudev_ros_nodelet inudev_ros_nodelet.launch
2. Opening visualization
Either rqt_image_view or RViz works; rqt_image_view shows a slightly larger image.
rqt_image_view
# alternatively: rviz
List the available topics with rostopic list:
- Color image: /camera/color/image_raw
- Fisheye: /sensor_msgs/Image/Fisheye
- Infrared: /sensor_msgs/Image/Video/left/image and /sensor_msgs/Image/Video/right/image
- IMU: /imu/sensor_msgs/Imu
To visualize the IMU, open RViz (type rviz in a terminal), click Add in the lower-left corner, and add rviz_imu_plugin; a coordinate frame should appear. If the plugin is missing, install the IMU tools package (sudo apt-get install ros-noetic-imu-tools).
After checking Enable acceleration you will see a red cone; move the camera and the tip of the cone follows.
3. Recording a bag
1) Throttling topic rates
rosrun topic_tools throttle messages /camera/color/image_raw 4.0 /color
rosrun topic_tools throttle messages /sensor_msgs/Image/Fisheye 4.0 /fish
rosrun topic_tools throttle messages /imu/sensor_msgs/Imu 200.0 /imu
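topic_tools throttle works by dropping messages so that the outgoing rate never exceeds the requested value. The idea can be sketched in Python (an illustration of the logic only, not the actual topic_tools implementation):

```python
# Minimal sketch of rate throttling: forward a message only if at least
# 1/max_rate seconds have passed since the last forwarded one.
class Throttle:
    def __init__(self, max_rate_hz):
        self.period = 1.0 / max_rate_hz
        self.last = None  # timestamp of the last forwarded message

    def accept(self, stamp):
        """Return True if a message stamped `stamp` (seconds) should pass."""
        if self.last is None or stamp - self.last >= self.period:
            self.last = stamp
            return True
        return False

# A 30 Hz stream throttled to 4 Hz keeps about 4 messages per second.
t = Throttle(4.0)
kept = sum(t.accept(i / 30.0) for i in range(30))  # one second of 30 Hz input
```

This is why the /color and /fish topics above end up at roughly 4 Hz regardless of the camera's native frame rate.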
2) Recording the bag
rosbag record -O [bagname].bag [bag_topic]: append topic names to record exactly the topics you want.
rosbag record -O inuchip.bag /color /fish /imu
Recording requirements:
- Make sure the whole checkerboard is covered by the image;
- Translate sufficiently along the x, y, and z axes, three passes each;
- Rotate sufficiently about the x, y, and z axes, three times each.
https://github.com/ethz-asl/kalibr/wiki
4. Calibration
1) Camera calibration
Put the bag into the kalibr workspace and calibrate with kalibr:
source devel/setup.bash
rosrun kalibr kalibr_calibrate_cameras \
--target april.yaml \
--bag inuchip.bag \
--models pinhole-radtan \
--topics /color \
--bag-from-to 3 202 \
--show-extraction \
--approx-sync 0.04
Note: there must be a space before each trailing \.
Parameter explanations:
- --target april.yaml is the calibration-target configuration file. If you use a checkerboard, note that targetCols and targetRows are the numbers of inner corners, not squares.
#april.yaml
target_type: 'aprilgrid' #gridtype
tagCols: 6 #number of apriltags
tagRows: 6 #number of apriltags
tagSize: 0.088 #size of apriltag, edge to edge [m]
tagSpacing: 0.3 # gap between tags as a fraction of tagSize
codeOffset: 0 #code offset for the first tag in the aprilboard
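In kalibr's aprilgrid convention, tagSpacing is the gap between adjacent tags divided by tagSize, and for a checkerboard target the column/row counts are inner corners, i.e. one less than the number of squares per side. A small sanity check of the numbers above (the 8x9 checkerboard is a hypothetical example):

```python
# Physical dimensions implied by the april.yaml values above.
tagCols, tagRows = 6, 6
tagSize = 0.088      # tag edge length [m]
tagSpacing = 0.3     # gap / tagSize (kalibr aprilgrid convention)

gap = tagSize * tagSpacing                       # gap between tags [m], about 0.0264
span = tagCols * tagSize + (tagCols - 1) * gap   # width spanned by the tag array [m], about 0.66

# Checkerboard note: targetCols/targetRows are inner corners.
squares_x, squares_y = 8, 9                      # hypothetical board
targetCols, targetRows = squares_x - 1, squares_y - 1   # 7 and 8
```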
- --bag inuchip.bag is the recorded data bag.
- --models pinhole-radtan specifies the camera model and distortion model (see https://github.com/ethz-asl/kalibr/wiki/supported-models and choose as needed; the distortion model used by VINS is radtan, and the fisheye camera model is omni). These correspond to the topics: every topic needs a matching camera model.
- --topics /color lists the image topics of the cameras being calibrated, one per camera.
- --bag-from-to 3 202 processes only seconds 3 to 202 of the bag (limits the duration).
- --show-extraction displays the feature-extraction process. These parameters can be adjusted as needed.
Use rosbag info to inspect a recorded bag, e.g. rosbag info res.bag.
2) IMU calibration
See the CSDN blog post "IMU calibration on Ubuntu 20.04" by AAAAAJoker.
3) Joint camera-IMU calibration
Put the bag into the kalibr workspace and calibrate with kalibr:
source devel/setup.bash
rosrun kalibr kalibr_calibrate_imu_camera \
--target april.yaml \
--cam cam.yaml \
--imu imu.yaml \
--bag inuchip.bag
The calibration results include the transformation matrix, the distortion parameters, and the intrinsics.
IV. Running VINS-Fusion Live on the Inuitive Camera
1. Writing the yaml file
Write the yaml file using euroc_mono_imu_config.yaml under PATH/src/VINS-Fusion/config/euroc as a template.
The main fields to modify are imu_topic, image0_topic, output_path, body_T_cam0, and pose_graph_save_path.
%YAML:1.0
#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;
imu: 1
num_of_cam: 1
imu_topic: "/imu/sensor_msgs/Imu"
image0_topic: "/camera/color/image_raw"
output_path: "/home/alex/work/VINS/catkin_VinsFusion_ws/output/"
cam0_calib: "cam0_pinhole.yaml"
image_width: 800
image_height: 600
# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1 # 0 Have an accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
# 1 Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
body_T_cam0: !!opencv-matrix
rows: 4
cols: 4
dt: d
data: [-0.32188201267804395, 0.9260851270909544, -0.19687129627057998, -0.00020087757645535857,
-0.6179128847195002, -0.047936873030362814, 0.7847838703118792, -0.00012099418629572827,
0.7173392759440634, 0.37425712229027486, 0.5876699495491028, -2.8659704908285424e-05,
0, 0, 0, 1]
#Multiple thread support
multiple_thread: 2
#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 30 # min distance between two features
freq: 10 # frequence (Hz) of publish tracking result. At least 10Hz for good estimation. If set 0, the frequence will be same as raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
flow_back: 1 # perform forward and backward optical flow to improve feature tracking accuracy
#optimization parameters
max_solver_time: 0.04 # max solver iteration time (s), to guarantee real time
max_num_iterations: 8 # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
#imu parameters The more accurate parameters you provide, the better performance
#acc_n: 0.014329894548292574 # accelerometer measurement noise standard deviation.
#gyr_n: 0.001296995167210268 # gyroscope measurement noise standard deviation.
#acc_w: 0.0007700623770766193 # accelerometer bias random walk noise standard deviation.
#gyr_w: 3.867635618850541e-05 # gyroscope bias random walk noise standard deviation.
#g_norm: 9.81007 # gravity magnitude
#2023.8.28
#Using the default VINS parameters
#imu parameters The more accurate parameters you provide, the better performance
acc_n: 0.1 # accelerometer measurement noise standard deviation.
gyr_n: 0.01 # gyroscope measurement noise standard deviation.
acc_w: 0.001 # accelerometer bias random walk noise standard deviation.
gyr_w: 0.0001 # gyroscope bias random walk noise standard deviation.
g_norm: 9.81007 # gravity magnitude
#2023.8.28
#unsynchronization parameters
estimate_td: 1 # online estimate time offset between camera and imu
td: 0.0 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#loop closure parameters
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/alex/work/VINS/catkin_VinsFusion_ws/output/pose_graph/" # save and load path
save_image: 1 # save image in pose graph for visualization purpose; you can disable this by setting 0
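Since estimate_extrinsic: 1 only refines around the supplied initial guess, it is worth verifying that the rotation block of body_T_cam0 is a proper rotation (R·Rᵀ close to identity, det R close to 1). A quick check in plain Python on the matrix above:

```python
# Sanity-check the 3x3 rotation block of body_T_cam0 from the yaml above.
T = [[-0.32188201267804395,  0.9260851270909544,   -0.19687129627057998, -0.00020087757645535857],
     [-0.6179128847195002,  -0.047936873030362814,  0.7847838703118792,  -0.00012099418629572827],
     [ 0.7173392759440634,   0.37425712229027486,   0.5876699495491028,  -2.8659704908285424e-05],
     [ 0, 0, 0, 1]]

R = [row[:3] for row in T[:3]]          # rotation block
Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # transpose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

RRt = matmul(R, Rt)
# Maximum deviation of R * R^T from the identity matrix.
err = max(abs(RRt[i][j] - (1.0 if i == j else 0.0)) for i in range(3) for j in range(3))

# Determinant should be +1 (a reflection would give -1).
det = (R[0][0] * (R[1][1] * R[2][2] - R[1][2] * R[2][1])
     - R[0][1] * (R[1][0] * R[2][2] - R[1][2] * R[2][0])
     + R[0][2] * (R[1][0] * R[2][1] - R[1][1] * R[2][0]))
```

For this matrix both checks pass to roughly 1e-5, so the guess is at least geometrically valid.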
2. Starting the camera
In the camera workspace:
source devel/setup.bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/Inuitive/InuDev/lib
roslaunch inudev_ros_nodelet inudev_ros_nodelet.launch
3. Running VINS
In the VINS workspace:
1) Starting RViz
source devel/setup.bash
roslaunch vins vins_rviz.launch
2) Running the VINS node
source devel/setup.bash
rosrun vins vins_node /home/alex/work/VINS/catkin_VinsFusion_ws/src/VINS-Fusion/config/inuchip/inuchip_imu.yaml
3) Starting the loop-closure node
If loop closure is needed, start the loop-closure node:
rosrun loop_fusion loop_fusion_node /home/alex/work/VINS/catkin_VinsFusion_ws/src/VINS-Fusion/config/inuchip/inuchip_imu.yaml
V. Running ORB-SLAM2 Live on the Inuitive Camera
1. Building the Mono node
1) Adding the path to the environment
In a terminal, run:
gedit ~/.bashrc
Then add export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/ORB_SLAM2/Examples/ROS to the file; note that PATH must be replaced with your own path.
2) Compiling
chmod +x build_ros.sh
./build_ros.sh
- Error: the rosdep view is empty: call 'sudo rosdep init' and 'rosdep update'
Solution:
(1) rosdep update
If this reports Command 'rosdep' not found, but can be installed with: sudo apt install python3-rosdep2, install python3-rosdep2 as prompted:
(2)sudo apt install python3-rosdep2
(3)rosdep update
(4)sudo apt install ros-noetic-desktop-full
Once done, run the commands above again:
chmod +x build_ros.sh
./build_ros.sh
- If you are using OpenCV 4, change find_package(OpenCV 3.0 QUIET) in PATH/ORB_SLAM2/Examples/ROS/ORB_SLAM2/CMakeLists.txt to find_package(OpenCV).
2. Writing the yaml file
Go to /home/alex/work/ORB/ORB_SLAM2/Examples/Monocular, copy TUM3.yaml, rename it inuchip.yaml, and modify it according to the earlier calibration results:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 753.2901306904256
Camera.fy: 760.3634710184783
Camera.cx: 420.7085598335765
Camera.cy: 297.26316492471716
Camera.k1: 0.08196923874785361
Camera.k2: -0.4670882026026694
Camera.p1: -0.0008884144534117069
Camera.p2: -0.0020323630010551713
# Camera frames per second
Camera.fps: 30.0
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
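The Camera.* entries above describe a standard OpenCV pinhole model with radtan (plumb-bob) distortion. As an illustration of how those numbers map a 3D point in the camera frame to a pixel (a sketch of the standard model using the calibrated values, not ORB-SLAM2 code):

```python
# Project a camera-frame 3D point with the pinhole + radtan model,
# using the intrinsics and distortion coefficients from inuchip.yaml above.
fx, fy = 753.2901306904256, 760.3634710184783
cx, cy = 420.7085598335765, 297.26316492471716
k1, k2 = 0.08196923874785361, -0.4670882026026694
p1, p2 = -0.0008884144534117069, -0.0020323630010551713

def project(X, Y, Z):
    x, y = X / Z, Y / Z                       # normalized image coordinates
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)  # + tangential
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy         # pixel coordinates (u, v)

# A point on the optical axis projects exactly to the principal point (cx, cy).
u, v = project(0.0, 0.0, 1.0)
```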
3. Starting the camera
In the camera workspace:
source devel/setup.bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/Inuitive/InuDev/lib
roslaunch inudev_ros_nodelet inudev_ros_nodelet.launch
4. Running ORB-SLAM2
rosrun ORB_SLAM2 Mono /home/alex/work/ORB/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/alex/work/ORB/ORB_SLAM2/Examples/Monocular/inuchip.yaml