[Study Notes] cam_lidar_calibration: Joint LiDAR-Camera Calibration

Deep-learning projects often require building your own datasets, and when fusing data from multiple sensors such as LiDAR and cameras, sensor calibration is unavoidable. I recently worked through joint LiDAR-camera calibration, and this post records the process.

Source code: https://github.com/acfr/cam_lidar_calibration ((ITSC 2021) Optimising the selection of samples for robust lidar camera calibration. This package estimates the calibration parameters from camera to lidar frame.)

If GitHub is unreachable, try the mirror: https://hub.nuaa.cf/acfr/cam_lidar_calibration

Paper: https://arxiv.org/abs/2103.12287

1. Environment Setup

OS: Ubuntu 20.04

ROS: Noetic

LiDAR: VLP-16

Camera: Hikvision

1.1 Create the workspace and clone the source

Since GitHub access can be unreliable, every GitHub URL below is replaced with the mirror hub.nuaa.cf. If you want the official repository, simply substitute the GitHub URL back in.

# Create the workspace
(base) hzt@hzt-ubuntu20:~$ mkdir -p calib_ws/src && cd calib_ws
(base) hzt@hzt-ubuntu20:~/calib_ws$ cd src
(base) hzt@hzt-ubuntu20:~/calib_ws/src$ catkin_init_workspace


# Clone the repository
(base) hzt@hzt-ubuntu20:~/calib_ws/src$ git clone https://hub.nuaa.cf/acfr/cam_lidar_calibration


# Build
(base) hzt@hzt-ubuntu20:~/calib_ws/src$ cd ..
(base) hzt@hzt-ubuntu20:~/calib_ws$ catkin_make

1.2 Common build errors

1.2.1 empy not found (multiple Python environments)

# Error during catkin_make
(base) hzt@hzt-ubuntu20:~/calib_ws$ catkin_make
Base path: /home/hzt/calib_ws
Source space: /home/hzt/calib_ws/src
Build space: /home/hzt/calib_ws/build
Devel space: /home/hzt/calib_ws/devel
Install space: /home/hzt/calib_ws/install
####
#### Running command: "cmake /home/hzt/calib_ws/src -DCATKIN_DEVEL_PREFIX=/home/hzt/calib_ws/devel -DCMAKE_INSTALL_PREFIX=/home/hzt/calib_ws/install -G Unix Makefiles" in "/home/hzt/calib_ws/build"
####
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Using CATKIN_DEVEL_PREFIX: /home/hzt/calib_ws/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/noetic
-- This workspace overlays: /opt/ros/noetic
-- Found PythonInterp: /home/hzt/anaconda3/bin/python3 (found suitable version "3.10.9", minimum required is "3") 
-- Using PYTHON_EXECUTABLE: /home/hzt/anaconda3/bin/python3
-- Using Debian Python package layout
-- Could NOT find PY_em (missing: PY_EM) 
CMake Error at /opt/ros/noetic/share/catkin/cmake/empy.cmake:30 (message):
  Unable to find either executable 'empy' or Python module 'em'...  try
  installing the package 'python3-empy'
Call Stack (most recent call first):
  /opt/ros/noetic/share/catkin/cmake/all.cmake:164 (include)
  /opt/ros/noetic/share/catkin/cmake/catkinConfig.cmake:20 (include)
  CMakeLists.txt:58 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/hzt/calib_ws/build/CMakeFiles/CMakeOutput.log".
Invoking "cmake" failed

Solution:

# This error occurs because multiple Python environments coexist (the root cause is that
# Anaconda's Python shadows the system one), so declare which Python to use when building:

(base) hzt@hzt-ubuntu20:~/calib_ws$ catkin_make -DPYTHON_EXECUTABLE=/usr/bin/python3
1.2.2 opencv/cv.hpp not found

/home/hzt/calib_ws/src/cam_lidar_calibration/include/cam_lidar_calibration/optimiser.h:10:10: fatal error: opencv/cv.hpp: No such file or directory
   10 | #include <opencv/cv.hpp>
      |          ^~~~~~~~~~~~~~~
compilation terminated.

Locate the offending line from the error message and change the included header to:

#include <opencv2/opencv.hpp>

1.2.3 CV_REDUCE_SUM was not declared

/home/hzt/calib_ws/src/cam_lidar_calibration/src/optimiser.cpp:427:48: error: ‘CV_REDUCE_SUM’ was not declared in this scope
  427 |         cv::reduce(trans_diff, summed_diff, 1, CV_REDUCE_SUM, CV_64F);

Add the following include to optimiser.h (it pulls the old OpenCV C-API constants back in; alternatively, replace CV_REDUCE_SUM with the C++ enum cv::REDUCE_SUM in optimiser.cpp):

#include <opencv2/core/core_c.h>

1.2.4 Qt path conflict between Anaconda and ROS

/usr/bin/ld: /home/hzt/anaconda3/lib/libQt5Core.so.5.15.2: undefined reference to `std::__exception_ptr::exception_ptr::_M_release()@CXXABI_1.3.13'
/usr/bin/ld: /home/hzt/anaconda3/lib/libQt5Widgets.so.5.15.2: undefined reference to `std::__throw_bad_array_new_length()@GLIBCXX_3.4.29'
/usr/bin/ld: /home/hzt/anaconda3/lib/libQt5Core.so.5.15.2: undefined reference to `std::__exception_ptr::exception_ptr::_M_addref()@CXXABI_1.3.13'
collect2: error: ld returned 1 exit status

In cam_lidar_calib_ws/src/cam_lidar_calibration/CMakeLists.txt, find:

find_package(Qt5 REQUIRED Core Widgets)

Before that line, add the path to Qt5Config.cmake:

SET(CMAKE_PREFIX_PATH "/usr/lib/x86_64-linux-gnu/cmake")

After making the change, it is best to delete the build and devel folders and rerun the build command.

1.3 Run the quickstart example

Open a terminal, enter the workspace, and source the environment:

(base) hzt@hzt-ubuntu20:~$ cd calib_ws/
(base) hzt@hzt-ubuntu20:~/calib_ws$ source devel/setup.bash 

Run the example calibration:

roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=true

The terminal shows a VOQ score being computed for each of the 50 sample poses; the results are saved to src/cam_lidar_calibration/data/vlp.

Visualise the calibration results:

roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true

The visualised result then appears in an rviz window.

That completes downloading the source and setting up the environment.

2. Actual Calibration

Before calibrating your own sensors, first check the LiDAR and camera topics. This package requires a LiDAR point-cloud topic whose points carry XYZIR fields (x, y, z, intensity, ring), a camera image topic, and a topic publishing the camera intrinsics (camera_info).

2.1 Prepare the checkerboard

The repository's README gives a download link for the calibration target, but a board you prepare yourself also works: a plain checkerboard is fine, and the larger the better.

2.2 Record rosbags

Run your LiDAR and camera, and confirm that the published topics include the LiDAR point cloud, the camera image, and camera_info.

If the camera already publishes camera_info, open rviz and adjust the distance to the board (for a 16-beam VLP-16, my experience is to keep it around 2 m). Tilt the board to about 45 degrees so that as many laser rings as possible hit it, keep everything still, and record a bag containing the topics above (seven or eight seconds of recording is enough for me):

Change the checkerboard pose (distance, height, pitch, lateral position), keep it static after each change, and record about nine bags in total (more is fine).

If the camera does not publish camera_info, you have to publish it yourself. First look at what the message contains: echoing the topic shows a header plus the matrices D, K, R and P, where D holds the distortion coefficients, K the camera intrinsics, R the rectification matrix (which can simply be set to the identity for a monocular camera), and P the projection matrix after rectification.

Once you understand what camera_info contains, you can publish it manually. Create a new file cam_info.cpp under cam_lidar_calib_ws/src/cam_lidar_calibration/src/ with the following code:


#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <vector>

// Build a CameraInfo message from hard-coded intrinsics.
sensor_msgs::CameraInfo getCameraInfo(void) {
    sensor_msgs::CameraInfo cam;

    // D: plumb_bob distortion coefficients (k1, k2, t1, t2, k3).
    std::vector<double> D{-1.2634357e-01, 1.7202073e-01, 0.0, 0.0, 0.0};
    // K: 3x3 intrinsic matrix, row-major.
    boost::array<double, 9> K = {
        1.0690539e+03, 0.0000000e+00, 6.3718590e+02,
        0.0000000e+00, 1.0688654e+03, 4.8939118e+02,
        0.0000000e+00, 0.0000000e+00, 1.0000000e+00};
    // P: 3x4 projection matrix; [K | 0] since R is the identity.
    boost::array<double, 12> P = {
        1.0690539e+03, 0.0000000e+00, 6.3718590e+02, 0.0,
        0.0000000e+00, 1.0688654e+03, 4.8939118e+02, 0.0,
        0.0000000e+00, 0.0000000e+00, 1.0000000e+00, 0.0};
    // R: identity rectification matrix (monocular camera).
    boost::array<double, 9> R = {1, 0, 0, 0, 1, 0, 0, 0, 1};

    cam.height = 1200;
    cam.width = 1920;
    cam.distortion_model = "plumb_bob";
    cam.D = D;
    cam.K = K;
    cam.P = P;
    cam.R = R;
    cam.binning_x = 0;
    cam.binning_y = 0;
    cam.header.frame_id = "camera";  // frame_id is the camera's frame name
    cam.header.stamp = ros::Time::now();
    cam.header.stamp.nsec = 0;
    return cam;
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "camera_info");  // initialise a node named camera_info
    ros::NodeHandle n;
    ros::Publisher pub = n.advertise<sensor_msgs::CameraInfo>("/camera/camera_info", 1000);
    sensor_msgs::CameraInfo camera_info_dyn;
    ros::Rate rate(1);  // publish at 1 Hz

    while (ros::ok())
    {
        camera_info_dyn = getCameraInfo();
        pub.publish(camera_info_dyn);
        rate.sleep();
    }
    return 0;
}



Add the build target for it in CMakeLists.txt:

add_executable(cam_info src/cam_info.cpp)
target_link_libraries(cam_info
        ${catkin_LIBRARIES}
        ${OpenCV_LIBS}
)

Rebuild, then run the node:

(base) hzt@hzt-ubuntu20:~/cam_lidar_calib_ws$ rosrun cam_lidar_calibration cam_info

You can then echo the topic to check the published info.

2.3 Start calibrating

Open src/cam_lidar_calibration/cfg/params.yaml and edit the parameters, mainly the topic names and the checkerboard parameters.
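For reference, a sketch of the kind of fields that typically need editing. The key names here are from my recollection of the shipped params.yaml and may differ between versions, and every value is a placeholder for my setup; keep the keys exactly as they appear in the file from the repository:

```yaml
# Placeholder values - substitute your own topics and board measurements
camera_topic: "/camera/image_color"
camera_info: "/camera/camera_info"
lidar_topic: "/velodyne_points"

chessboard:
  pattern_size:       # number of *inner* corners of the checkerboard
    height: 7
    width: 5
  square_length: 95   # side length of one square, mm
  board_dimension:    # outer size of the backing board, mm
    width: 610
    height: 850
  translation_error:  # offset of the checkerboard centre from the board centre, mm
    x: 0
    y: 0
```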

Open a terminal and launch the calibration program:

hzt@hzt-ubuntu20:~/cam_lidar_calib_ws$ roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=false

(Optional) Only if your camera does not publish camera_info, start the manual camera_info publisher node:

(base) hzt@hzt-ubuntu20:~/cam_lidar_calib_ws$ rosrun cam_lidar_calibration cam_info

Play back a recorded bag (replace 0701.bag with your own bag's name):

rosbag play 0701.bag -r 0.1

At this point both the image and the LiDAR point cloud may be invisible in the rviz window that opens. Don't panic: the rviz topics are probably just set incorrectly.

Click Panels, tick Displays, change the Fixed Frame to your point-cloud frame, and change the image topic to your own.

Use the rqt_reconfigure tool to crop the x/y/z bounds so that the point cloud shown in rviz contains little more than the checkerboard region.

Click capture to take one sample; a successful board-plane fit looks like this:

Load the next rosbag, adjust the x/y/z bounds again so the cloud keeps mostly the checkerboard, and click capture once more. Repeat until you have captured at least 3 poses (more is better); if a capture turns out badly, you can discard it.

Once you have enough samples, click the optimize button and the optimisation runs in the background; the results are saved under cam_lidar_calib_ws/src/cam_lidar_calibration/data.

2.4 Verify the calibration

(py3.8) hzt@hzt-ubuntu20:~/cam_lidar_calib_ws$ roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true

where

csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv"

should be replaced with the path to your own saved calibration result. When run, the terminal prints the mean reprojection error of the calibration, and the result looks like this:

At this point calibration is complete, and the terminal prints the calibration parameters:

Calibration params (roll,pitch,yaw,x,y,z): -1.5809,0.0041,-1.4960,0.0706,0.0141,-0.1139
