Ubuntu 16.04 + Kinect 2.0 + ORB-SLAM2

Installing the Kinect2 drivers
The Kinect2 drivers consist of libfreenect2 and iai_kinect2. The latter is the driver required when working with ROS, and it depends on the former.
Installing libfreenect2
Reference: https://github.com/OpenKinect/libfreenect2
Download libfreenect2

git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2

Download upgrade deb files
cd depends
./download_debs_trusty.sh

Install build tools
sudo apt-get install build-essential cmake pkg-config

Install libusb. The version must be >= 1.0.20.
sudo dpkg -i debs/libusb*deb

Install TurboJPEG
sudo apt-get install libturbojpeg libjpeg-turbo8-dev

Install OpenGL
sudo dpkg -i debs/libglfw3*deb; sudo apt-get install -f

Build (if you have run cd depends previously, cd .. back to the libfreenect2 root directory first.)
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
make
make install

Set up udev rules for device access:
sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/

Then replug the Kinect and run the test program:
./bin/Protonect
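
Protonect also accepts an optional pipeline argument that selects the depth-processing backend; per the libfreenect2 README the common choices are gl, cl, and cpu, each available only if the corresponding support was compiled in:

./bin/Protonect gl    # OpenGL depth processing
./bin/Protonect cl    # OpenCL depth processing (needs an OpenCL runtime)
./bin/Protonect cpu   # CPU-only fallback, slow but has no extra dependencies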

Installing iai_kinect2
Reference: https://github.com/code-iai/iai_kinect2
Install ROS (the iai_kinect2 README targets Indigo; on Ubuntu 16.04 the matching release is Kinetic)
Set up your ROS environment
Install libfreenect2
Enable C++11 by using cmake .. -DENABLE_CXX11=ON instead of plain cmake ..
If you are compiling libfreenect2 with CUDA, use cmake .. -DENABLE_CXX11=ON -DCUDA_PROPAGATE_HOST_FLAGS=off.
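
Putting these notes together with the install prefix used earlier, the libfreenect2 configure step for use with iai_kinect2 might look like the sketch below (the ~/libfreenect2 path is an assumption; adjust it to wherever you cloned the repository):

cd ~/libfreenect2/build        # assumed clone location
cmake .. -DENABLE_CXX11=ON -DCMAKE_INSTALL_PREFIX=$HOME/freenect2
make
make install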
Clone this repository into your catkin workspace, install the dependencies and build it:

cd ~/catkin_ws/src/
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"
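
If catkin_make cannot locate libfreenect2 because it was installed under the non-standard prefix $HOME/freenect2, the iai_kinect2 README suggests pointing CMake at the installed package config directly; a sketch, assuming that prefix:

cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release" -Dfreenect2_DIR=$HOME/freenect2/lib/cmake/freenect2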

Connect your sensor and run kinect2_bridge:
roslaunch kinect2_bridge kinect2_bridge.launch
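
kinect2_bridge.launch exposes several launch arguments; the names below (publish_tf, depth_method, reg_method) are assumptions based on the iai_kinect2 documentation, so verify them against kinect2_bridge/launch/kinect2_bridge.launch before relying on them:

# Publish TF frames (useful for SLAM) and pick the processing backends explicitly.
roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true depth_method:=opengl reg_method:=cpu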

Calibrate your sensor using kinect2_calibration (see the kinect2_calibration README for further details).
Add the calibration files to the kinect2_bridge/data/ folder (see the kinect2_bridge README for further details).
Restart kinect2_bridge and view the results using:

rosrun kinect2_viewer kinect2_viewer kinect2 sd cloud
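
Here "sd" selects the native depth-camera resolution (512x424); kinect2_bridge also publishes "qhd" (960x540) and "hd" (1920x1080) streams, which is why the launch file below subscribes to /kinect2/qhd/... topics. A few variants, assuming the viewer's image/cloud/both modes documented in iai_kinect2:

rostopic list | grep /kinect2/                           # confirm the published namespaces
rosrun kinect2_viewer kinect2_viewer kinect2 hd image    # color image only, full HD
rosrun kinect2_viewer kinect2_viewer kinect2 qhd both    # image + point cloud, quarter HD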

Running RGB-D SLAM with the Kinect2
Through the two steps above we have installed the Kinect2 drivers and RGB-D SLAM.
Note: rgbdslam.launch and openni+rgbdslam.launch only work with the Kinect1.
To use the Kinect2, the launch file has to be modified; see https://answers.ros.org/question/230412/solved-slam-with-kinect2/

<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/kinect2/qhd/image_color_rect"/>  
<param name="config/camera_info_topic"             value="/kinect2/qhd/camera_info"/>

<param name="config/topic_image_depth"             value="/kinect2/qhd/image_depth_rect"/>

<param name="config/topic_points"                  value=""/> <!--if empty, poincloud will be reconstructed from image and depth -->

<!-- These are the default values of some important parameters -->
<param name="config/feature_extractor_type"        value="SIFTGPU"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="SIFTGPU"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->

<param name="config/optimizer_skip_step"           value="15"/><!-- optimize only every n-th frame -->
<param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/backend_solver"                value="csparse"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->

<param name="config/pose_relative_to"              value="first"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->

<param name="config/maximum_depth"           value="2"/>
<param name="config/subscriber_queue_size"         value="20"/>

<param name="config/min_sampled_candidates"        value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="20"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="140"/>

<param name="config/g2o_transformation_refinement"           value="1"/>
<param name="config/icp_method"           value="gicp"/>  <!-- icp, gicp ... -->

<!--
<param name="config/max_rotation_degree"           value="20"/>
<param name="config/max_translation_meter"           value="0.5"/>

<param name="config/min_matches"           value="30"/>   

<param name="config/min_translation_meter"           value="0.05"/>
<param name="config/min_rotation_degree"           value="3"/>
<param name="config/g2o_transformation_refinement"           value="2"/>
<param name="config/min_rotation_degree"           value="10"/>

<param name="config/matcher_type"         value="SIFTGPU"/>
 -->
</node>
</launch>
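
Save this XML as rgbdslam_kinect2.launch inside the rgbdslam package's launch directory so that the roslaunch command below can find it; the path here is an assumption based on a catkin workspace with rgbdslam_v2 checked out into ~/catkin_ws/src:

# Hypothetical location; adjust to wherever the rgbdslam package actually lives.
cp rgbdslam_kinect2.launch ~/catkin_ws/src/rgbdslam_v2/launch/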

Open a terminal and run:
roslaunch kinect2_bridge kinect2_bridge.launch

Open another terminal and run:
roslaunch rgbdslam rgbdslam_kinect2.launch

Note: after opening each terminal, first run source path-to-catkin_ws/devel/setup.bash
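
To avoid repeating this in every new terminal, the source line can be appended to ~/.bashrc (a common convenience; the ~/catkin_ws path matches the workspace used above):

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc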
