
Limo

Lidar-Monocular Visual Odometry. This library is designed to be an open platform for visual odometry algorithm development. We focus explicitly on simple integration of the following key methodologies:

Keyframe selection

Landmark selection

Prior estimation

Depth integration from different sensors.

Scale integration by groundplane constraint.
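As a rough illustration of depth integration, one can project LIDAR points into the image and pick the depth of the point that lands closest to a tracked feature. This is only a sketch: the intrinsics, points, and pixel threshold below are made-up values, not limo's actual calibration or matching logic.

```python
import numpy as np

# Assumed pinhole intrinsics (KITTI-like values, for illustration only).
K = np.array([[718.0, 0.0, 607.0],
              [0.0, 718.0, 185.0],
              [0.0, 0.0, 1.0]])

def project_points(points_cam, K):
    """Project 3D points (N, 3), given in the camera frame, onto the image plane."""
    z = points_cam[:, 2]
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / z[:, None], z

def depth_for_feature(feature_uv, points_cam, K, max_px_dist=2.0):
    """Assign the depth of the nearest projected LIDAR point to a feature,
    or None if no point projects close enough (assumed 2 px threshold)."""
    uv, z = project_points(points_cam, K)
    dist = np.linalg.norm(uv - feature_uv, axis=1)
    i = np.argmin(dist)
    return float(z[i]) if dist[i] <= max_px_dist else None

# Two made-up LIDAR points already transformed into the camera frame.
points = np.array([[1.0, 0.5, 10.0], [-2.0, 0.2, 15.0]])
print(depth_for_feature(np.array([678.8, 220.9]), points, K))  # feature hit by first point
```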

The core library keyframe_bundle_adjustment is a backend that facilitates swapping these modules, so that such algorithms can be developed easily.

It is meant as an add-on module that does temporal inference of the optimization graph in order to smooth the result.

To do this online, a windowed approach is used.

Keyframes are instances in time that are used for bundle adjustment; one keyframe may have several cameras (and therefore images) associated with it.

Keyframe selection tries to reduce the amount of redundant information while extending the time span covered by the optimization window, in order to reduce drift.
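A minimal sketch of such a sliding window, assuming a fixed window size (limo's actual window is configurable and additionally retains selected older keyframes to cover a longer time span):

```python
from collections import deque

class KeyframeWindow:
    """Toy sliding window: optimization would run over at most `size`
    keyframes; the oldest is dropped when a new one is pushed."""

    def __init__(self, size=3):  # assumed size, not limo's default
        self.frames = deque(maxlen=size)

    def push(self, keyframe):
        self.frames.append(keyframe)

    def active(self):
        """Keyframes currently inside the optimization window."""
        return list(self.frames)

w = KeyframeWindow(size=3)
for t in range(5):          # push keyframes for timestamps 0..4
    w.push(t)
print(w.active())           # only the three most recent remain
```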

Methodologies for Keyframe selection:

Difference in time

Difference in motion
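The two criteria above can be sketched as a simple predicate. The thresholds are assumed values for illustration, not limo's defaults:

```python
import math

MIN_TIME_DIFF = 0.3  # seconds since last keyframe (assumed threshold)
MIN_MOTION = 0.5     # meters travelled since last keyframe (assumed threshold)

def is_keyframe(t, pose_xy, last_t, last_xy):
    """Select a frame as keyframe if enough time has passed OR the
    estimated motion since the last keyframe is large enough."""
    dt = t - last_t
    dist = math.dist(pose_xy, last_xy)
    return dt >= MIN_TIME_DIFF or dist >= MIN_MOTION

print(is_keyframe(0.1, (0.6, 0.0), 0.0, (0.0, 0.0)))  # True: moved 0.6 m
print(is_keyframe(0.1, (0.1, 0.0), 0.0, (0.0, 0.0)))  # False: too soon, too little motion
```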

We use this library for combining Lidar with monocular vision.

Limo2 on KITTI combines LIDAR with monocular visual odometry, supported by a groundplane constraint.

We have now switched from ROS Kinetic to ROS Melodic.

Details

This work was accepted at IROS 2018. See https://arxiv.org/pdf/1807.07524.pdf .

If you refer to this work please cite:

@inproceedings{graeter2018limo,
  title={LIMO: Lidar-Monocular Visual Odometry},
  author={Graeter, Johannes and Wilczynski, Alexander and Lauer, Martin},
  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={7872--7879},
  year={2018},
  organization={IEEE}
}

Please note that Limo2 differs from the publication. We improved the speed slightly and added groundplane reconstruction for pure monocular visual odometry, as well as a combination of scale from LIDAR and the groundplane (best performing on KITTI). For information on Limo2, please see my dissertation https://books.google.de/books?hl=en&lr=&id=cZW8DwAAQBAJ&oi .
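To give an intuition for how a groundplane constraint fixes monocular scale, and how it could be combined with a LIDAR-derived scale estimate: this is a hedged sketch only, where the camera height and the weighting are assumed values, and limo's actual fusion happens inside the optimization problem rather than as a weighted average.

```python
CAMERA_HEIGHT = 1.65  # true camera height above ground in meters (assumed, KITTI-like)

def scale_from_groundplane(est_plane_dist):
    """Monocular reconstruction is only defined up to scale. If the true
    camera height above the ground is known, the ratio between it and the
    (unscaled) estimated plane distance fixes the global scale."""
    return CAMERA_HEIGHT / est_plane_dist

def fuse_scales(s_lidar, s_ground, w_lidar=0.7):
    """Naive weighted combination of the two scale estimates; the weight
    is an assumed value for illustration."""
    return w_lidar * s_lidar + (1.0 - w_lidar) * s_ground

s_ground = scale_from_groundplane(0.5)   # plane reconstructed 0.5 units below camera
s = fuse_scales(3.2, s_ground)           # combine with a made-up LIDAR scale
print(round(s, 2))
```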

Installation

Docker

To facilitate development I created a standalone Dockerfile.

# This is where you put the rosbags; it will be available at /limo_data in the container

mkdir $HOME/limo_data

cd limo/docker

docker-compose build limo

You can run the docker and go to the entrypoint with

docker-compose run limo bash

Go to step Run in this tutorial and use tmux for terminals.

You can invoke a jupyter notebook with a python interface for limo with

docker-compose up limo

and open the suggested link from the run output in a browser.

Semantic segmentation

The monocular variant expects semantic segmentation of the images. You can produce this, for example, with my fork of NVIDIA's semantic segmentation:

Clone my fork

git clone https://github.com/johannes-graeter/semantic-segmentation

Download best_kitti.pth as described in the README.md from NVIDIA and put it in the semantic-segmentation folder

I installed via their docker, for which you must be registered and logged in at https://ngc.nvidia.com/

Build the container with

docker-compose build semantic-segmentation

Run the segmentation with

docker-compose run semantic-segmentation

Note that without a GPU this will take some time. With the Nvidia Quadro P2000 on my laptop it took around 6 seconds per image.

Requirements

In any case:

ceres:

you will need sudo make install to install the headers.

tested with libsuitesparse-dev from standard repos.

png++:

sudo apt-get install libpng++-dev

install ros:

you will need to install ros-perception (for pcl).

don't forget to source your ~/.bashrc afterwards.

install catkin_tools:

sudo apt-get install python-catkin-tools

install opencv_apps:

sudo apt-get install ros-melodic-opencv-apps

install git:

sudo apt-get install git

Build

initialize a catkin workspace:

cd ${your_catkin_workspace}

catkin init

clone limo into src of workspace:

mkdir ${your_catkin_workspace}/src

cd ${your_catkin_workspace}/src

git clone https://github.com/johannes-graeter/limo.git

clone dependencies and build repos

cd ${your_catkin_workspace}/src/limo

bash install_repos.sh

unittests:

cd ${your_catkin_workspace}/src/limo

catkin run_tests --profile limo_release

Run

get test data: Sequence 04 or Sequence 01. These are bag files generated from the KITTI sequences with added semantic labels.

in different terminals (for example with tmux)

roscore

rosbag play 04.bag -r 0.1 --pause --clock

source ${your_catkin_workspace}/devel_limo_release/setup.sh

roslaunch demo_keyframe_bundle_adjustment_meta kitti_standalone.launch

unpause rosbag (hit space in terminal)

rviz -d ${your_catkin_workspace}/src/demo_keyframe_bundle_adjustment_meta/res/default.rviz

watch limo trace the trajectory in rviz :)

Before submitting an issue, please have a look at the section Known issues.

Known issues

The unit test of LandmarkSelector.voxel fails with libpcl version 1.7.2 or older (only 4 landmarks are selected). Since it passes with pcl 1.8.1, which is the default for ROS Melodic, this is ignored. It should reduce the performance of the software only by a very small amount.
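For context, here is a toy stand-in for voxel-based landmark selection (keep at most one landmark per voxel, so selected landmarks are spread evenly in space). The voxel size is an assumed value, and limo's actual LandmarkSelector relies on pcl rather than code like this:

```python
def voxel_select(landmarks, voxel=0.5):
    """Keep the first landmark falling into each voxel of edge length
    `voxel` (meters); a rough sketch of VoxelGrid-style downsampling."""
    grid = {}
    for p in landmarks:
        key = tuple(int(c // voxel) for c in p)
        grid.setdefault(key, p)  # first landmark wins within each voxel
    return list(grid.values())

pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.0, 1.0, 1.0)]
print(len(voxel_select(pts)))  # 2: the first two points share a voxel
```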

