MIAS-LCEC Toolbox User Guide

Project Navigation

  • Project Website: MIAS Group
  • Article: Online, Target-Free LiDAR-Camera Extrinsic Calibration via Cross-Modal Mask Matching
  • Toolbox Code: ZWhuang666/MIAS-LCEC (github.com)
  • Toolbox Demo: bilibili video
  • Dataset Download: [Google Drive] [BaiDuYun]

Program Details

1. Start MIAS-LCEC Toolbox

The program consists of two independent packages:

| No. | Package Name | Function |
|---|---|---|
| 1 | zvision | A C++ package; the main UI of the project. |
| 2 | c3m | A Python package; it receives commands and data from the zvision package, performs the segmentation and cross-modal mask matching functions, and sends the matched point pairs back to the zvision package for the subsequent PnP solution. |

The zvision package and the c3m package communicate with each other through the ROS 2 topic publish/subscribe mechanism. Problems will occur if you run the zvision package and the c3m package on two computers within the same local network, because two publisher nodes and two subscriber nodes with identical names will then exist on the network. If you need to run zvision and c3m on two or more computers at the same time, disconnect the computers from the network.

To start MIAS-LCEC Toolbox, you only need to run:

cd ~/MIAS-LCEC/
sh mias_lcec.sh

2. UI Introduction

2.1 Control Panel

This panel includes checkboxes, input boxes, and buttons for general cloud and picture control.

| Controls | Function |
|---|---|
| Render mode checkboxes and input boxes | Choose the cloud rendering mode and set the rendering parameters. |
| "load CloudPoints" button | Load a cloud file and display it; then use the left/right/middle mouse buttons (and middle press) to rotate and zoom the cloud. |
| "Rosbag2 Read" button | Read the PCD and image data in a rosbag2 file. If the "Folder" checkbox is selected, it handles all the bag files in a folder; if the "color" checkbox is selected, all image files will be in RGB mode. |
| "take picture" button | Take a cloud picture with the cloud camera and show it in the cloud image window. |
| "load image" button | Load an image from a file and show it in the cloud image window. |
| "ReadJson" button | Read a JSON file containing the picture rendering conditions and the intrinsic/extrinsic parameters, then configure the cloud camera window and set the intrinsic/extrinsic parameters of the cloud camera. |
| "SaveJson" button | Save the intrinsic/extrinsic parameters of the cloud camera and the picture rendering conditions to a JSON file. |
| "Remove All" button | Remove all loaded pictures and clouds. |

2.2 Image Matching Panel

| Controls | Function |
|---|---|
| Load picture & render controls | Load a picture and render the picture by the cloud, or render the cloud by the picture. |
| Auto-calibration controls | Perform automatic calibration and generate a calibration report. |
| Manual calibration controls | Pick points from the cloud and the image, then complete the manual calibration. |

2.3 Cloud Camera Panel

| Controls | Function |
|---|---|
| Camera coordinate controls | Adjust the x, y, z position and the rotation angles around each axis for the cloud camera. |
| Picture rendering condition controls | Adjust the parameters for rendering pictures generated by the cloud camera. |
| Point filter controls | Adjust the filter parameters used by the cloud camera to filter points. |
| Speed-up condition controls | Adjust the parameters for speeding up the cloud-camera projection process. |

2.4 Cloud Camera Parameters

The input boxes in this panel display the distortion coefficients (distCoeffs), intrinsic parameters, and extrinsic parameters of the cloud camera.

3. Browse LiDAR Point Clouds and Camera Images

In MIAS-LCEC Toolbox, it is convenient to browse point clouds and camera images.

  • Load and browse point clouds: Click the [load CloudPoints] button in the control panel to load pcd/ply files and browse them in the toolbox interface. The viewer supports 3D perspective transformations through simple mouse operations, enabling observation of point clouds from various viewpoints.

  • Load and browse images: Click the [LoadCameraSrc] button in the image matching panel to open images (bmp/png/jpg) in the internal [cloud image window]. You can drag the internal window within the interface to view images wherever you want, and the window can be resized.

4. Automatic Calibration

4.1 Single Calibration

| Step | Operation |
|---|---|
| 1 | Deselect the "batch" checkbox and the "jsonbatch" checkbox. |
| 2 | Click the "calibration" button. |
| 3 | Select a cloud file according to the prompt. |
| 4 | Select an image file according to the prompt. |
| 5 | Select a JSON file containing the intrinsic, config, and extrinsic parameters according to the prompt. |
| 6 | The result file can be found in the "cal" directory, located in the same directory as the cloud file. |

4.2 Batch Calibration

When you have a group of clouds and images that share the same intrinsic and extrinsic parameters, the batch calibration function is a better choice.

4.2.1 File Naming Rules

To ensure that each cloud and its matching image are recognized correctly, the file names shall comply with the following rules:

  • The cloud file and its matching image file shall be placed in the same folder.
  • The essential (base) name of the cloud and the image shall be the same.
  • The PCD name shall be: essential name + "_merge.pcd".
  • The image name shall be: essential name + "_1.png".
  • Example: test_merge.pcd and test_1.png will be regarded as a pair and calibrated together.
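The pairing rule above can be sketched as a small helper; this is an illustrative example, not toolbox code (the function name `pair_calibration_files` is ours):

```python
from pathlib import Path

def pair_calibration_files(folder):
    """Pair each '<name>_merge.pcd' cloud with its '<name>_1.png' image.

    Returns a list of (cloud_path, image_path) tuples for every essential
    name that has both files present in the same folder.
    """
    folder = Path(folder)
    pairs = []
    for cloud in sorted(folder.glob("*_merge.pcd")):
        essential = cloud.name[: -len("_merge.pcd")]  # strip the suffix
        image = folder / f"{essential}_1.png"
        if image.exists():  # only complete pairs are calibrated
            pairs.append((cloud, image))
    return pairs
```

A cloud without a matching image (or vice versa) is simply skipped, which mirrors how a batch run can only calibrate complete pairs.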
4.2.2 Operation Steps
| Step | Operation |
|---|---|
| 1 | Select the "batch" checkbox and deselect the "jsonbatch" checkbox. |
| 2 | Click the "calibration" button. |
| 3 | Select a JSON file containing the intrinsic, config, and extrinsic parameters according to the prompt. |
| 4 | Select a folder containing the cloud and image files. |
| 5 | The result file can be found in the "intelibatch" directory, located in the same directory as the cloud files. |

4.3 JsonBatch Calibration

When dealing with multiple point clouds and images with varying intrinsic and extrinsic parameters, you can organize the files by grouping those with the same intrinsic and extrinsic parameters into separate folders. Then, by editing a batch testing JSON file, you can instruct the program to automatically batch calibrate all the files.

For the batch-testing JSON file, please refer to the example file "batchtestingexample.json":

{ 
    "targetFolder":"DatasetsReleaseTest",
    "batchType":1,
    "itrCounts":6,
    "testFileFolder":"/media/D/zvoutput/DatasetsRelease/TF70Data",
    "trueJsonFolder":"/media/D/zvoutput/DatasetsRelease/TF70TrueJson",
    "RemarksA":"this is the example Json file of batch Test for TF70 Dataset",
    "RemarksB":""
}
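Before launching a long batch run, it can help to sanity-check the batch-testing JSON. The sketch below loads and validates such a file; the required-key list is inferred from "batchtestingexample.json" above, not from the toolbox source:

```python
import json

# Keys inferred from batchtestingexample.json (assumption, not toolbox spec).
REQUIRED_KEYS = {"targetFolder", "batchType", "itrCounts",
                 "testFileFolder", "trueJsonFolder"}

def load_batch_config(path):
    """Load a batch-testing JSON file and check the expected fields exist."""
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"batch-testing json is missing keys: {sorted(missing)}")
    return cfg
```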
4.3.1 Operation Steps
| Step | Operation |
|---|---|
| 1 | Deselect the "batch" checkbox and select the "jsonbatch" checkbox. |
| 2 | Click the "calibration" button. |
| 3 | Select the batch-testing JSON file according to the prompt. |
| 4 | The result files can be found in the directory assigned in the batch-testing JSON file. |

5. Manual Calibration

5.1 Manual Calibration

| Step | Operation |
|---|---|
| 1 | Press the "load cloudPoints" button in the control panel to load a cloud. |
| 2 | Press the "Read Json" button in the control panel to load a JSON file containing the intrinsic parameters, distortion coefficients, and picRenderChoice settings. |
| 3 | Press the "load CameraSrc" button to load an image; make sure the "undistort" checkbox is selected. |
| 4 | Press the "Picking" button, then press the "set pick" button in the cloud image window to set the undistorted picture as the picking image. Use a right-click to pick points in the cloud, and a long left-click (about 0.2 s) to pick points in the picture. |
| 5 | Press "Manual Calibration"; an evaluation table will appear in the cloud image window, and a test report will be generated in the "man" folder at the same location as the cloud file. |
| 6 | Use "Render Camera by Cloud" and "Render Cloud by Camera" to further observe the fusion effect. |

5.2 Rendering Function

The rendering function is designed to visualize the fusion of the cloud and the camera image. It can be used to further adjust the manual calibration results and obtain a better extrinsic calibration.

| Step | Operation |
|---|---|
| 1 | Press the "load cloudPoints" button to load a cloud. |
| 2 | Press the "Read Json" button to load a JSON file containing the intrinsic parameters, distortion coefficients, and picRenderChoice settings. |
| 3 | Press the "load CameraSrc" button to load an image; make sure the "undistort" checkbox is selected. |
| 4 | Press "Render Camera by Cloud"; a rendered picture will appear in the cloud image window. |
| 5 | Press "Render Cloud by Camera"; the cloud will be rendered by the image. |

6. Calibration Configuration

The calibration config is critical for getting a correct calibration result.

6.1 Objects in the Config Json File

In this program, all the configs are integrated into a single JSON file.

  • Object "picRenderChoice": sets the parameters for how the cloud camera takes pictures of the cloud.
  • Object "cloudCamera.Intrinsic": sets the intrinsic parameters of the cloud camera, including the K matrix and the DistCoeffs vector.
  • Object "cloudCamera.Extrinsic": sets the true extrinsic parameters (R and t vectors) of the real camera. These extrinsic parameters do not affect the calibration result; they are only used to evaluate it.
  • Object "cloudCamera.EulerR": the Euler angles corresponding to R, calculated by the program from the R and t vectors.
  • Object "cloudCamera.WorldT": describes the point translation, calculated by the program as -(R*)^(-1) t*.
  • Object "EulerType": default value is 5, meaning "ZYX".

The program automatically recalculates EulerR and WorldT whenever a JSON file is read or written, or when the JSON file is used for calibration, so the calibration remains correct even if EulerR and WorldT are entered incorrectly by hand.
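The derived quantities above can be reproduced with plain math. The sketch below shows one common ZYX Euler decomposition and the WorldT formula; the exact angle convention used by the toolbox is an assumption on our part, and for a rotation matrix R the inverse in -(R)^(-1) t reduces to the transpose:

```python
import math

def euler_zyx_from_R(R):
    """Euler angles (yaw, pitch, roll) for the "ZYX" convention (EulerType 5).

    Illustrative sketch: the common ZYX (yaw-pitch-roll) decomposition of a
    3x3 rotation matrix given as nested lists.
    """
    yaw = math.atan2(R[1][0], R[0][0])
    pitch = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll

def world_t(R, t):
    """WorldT = -(R)^(-1) t; for a rotation matrix, R^(-1) = R^T."""
    return [-(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2])
            for i in range(3)]
```

With R the identity and t = [1, 2, 3], WorldT is simply [-1, -2, -3], which matches the intuition that WorldT is the camera origin expressed in the world frame.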

6.2 Definitions of [picRenderChoice]

The parameters in picRenderChoice define how the virtual camera takes cloud pictures. They are very important for calibration; please read this section carefully before starting calibration.

| No. | Parameter | Definition | Default Value |
|---|---|---|---|
| 1 | fourNeighbor | Whether the four neighboring points (up/down/left/right) are also rendered in the projected image. | false |
| 2 | eightNeighbor | Whether the eight neighboring points (including diagonals) are also rendered in the projected image. | true |
| 3 | baseGray | The base gray value used when calculating the gray value of the projected points. | 0 |
| 4 | contrast | The contrast of the projected points. | 2 |
| 5 | lowIntensity | The minimum intensity of the points to be projected. | 0 |
| 6 | zMin | Only cloud points with a Z value greater than zMin will be projected. | -30 |
| 7 | zMax | Only cloud points with a Z value less than zMax will be projected. | 30 |
| 8 | xMin | Only cloud points with an X value greater than xMin will be projected. | -40 |
| 9 | xMax | Only cloud points with an X value less than xMax will be projected. | 800 |
| 10 | picWidth | The width of the virtual picture; **it must equal the real camera picture width**. | 1200 |
| 11 | picHeight | The height of the virtual picture; **it must equal the real camera picture height**. | 800 |
| 12 | samplingStep | The sampling interval over the cloud points. | 1 |
| 13 | dMin | Only cloud points with a Zc value greater than dMin will be projected. | -10 |
| 14 | dMax | Only cloud points with a Zc value less than dMax will be projected. | 800 |
| 15 | color | Whether the virtual camera takes a color image or a gray image. | false |
| 16 | OMP | Whether the OpenMP function is used in the virtual camera. | true |
| 17 | OMPthreads | The number of threads used by the OpenMP function. | 3 |
| 18 | selfRotation | Defines the virtual camera rotation method. | true |
| 19 | saveornot | Whether the picture will be saved (only effective in manual mode). | false |
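The range filters in the table can be read as a simple point-selection rule. The sketch below applies them to a point list; it is an illustrative approximation (the point tuple layout and the function name are ours, not the toolbox's):

```python
# Illustrative sketch: apply the picRenderChoice range filters before
# projection. Each point is (x, y, z, zc, intensity); zc is the depth of
# the point in the virtual camera frame.
def filter_points(points, prc):
    kept = []
    for x, y, z, zc, intensity in points:
        if (prc["xMin"] < x < prc["xMax"]
                and prc["zMin"] < z < prc["zMax"]
                and prc["dMin"] < zc < prc["dMax"]
                and intensity >= prc["lowIntensity"]):
            kept.append((x, y, z, zc, intensity))
    # samplingStep keeps every n-th surviving point
    return kept[::prc.get("samplingStep", 1)]
```

Tightening zMin/zMax and dMin/dMax discards far or irrelevant geometry, which both speeds up projection and removes clutter from the rendered cloud picture.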

7. How to Read the Calibration Report

The calibration report is written in JSON format and includes the following objects:

  • trueCamera: records the intrinsic and extrinsic parameters set by the calibration config.
  • calCamera: records the calibrated intrinsic and extrinsic parameters output by the calibration.
  • Evaluation: records all the evaluation indicators.
  • cloudCamera: records the intrinsic and extrinsic parameters of the cloud camera used to take cloud pictures during the calibration.
  • matePRC: records the picRenderChoice settings used in the calibration; it is identical to the input config.

Main evaluation indicators:

  • Er: the error of the calCamera EulerR against the trueCamera EulerR.
  • Et: the error of the calCamera WorldT against the trueCamera WorldT.
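For batch runs it can be convenient to pull Er and Et out of many report files programmatically. This is a sketch: the nesting of "Er"/"Et" under the "Evaluation" object is an assumption based on the object list above, so adjust the keys to match your actual reports:

```python
import json

def read_report_errors(path):
    """Return (Er, Et) from a calibration report JSON file.

    Assumes the report stores the two indicators inside the "Evaluation"
    object; verify the key layout against a real report before relying
    on this.
    """
    with open(path, "r", encoding="utf-8") as f:
        report = json.load(f)
    evaluation = report["Evaluation"]
    return evaluation["Er"], evaluation["Et"]
```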