T265 + ROS + OpenCV 4.5.3


Installing OpenCV 4 with CUDA

OpenCV provides the basic APIs used throughout the image-processing pipeline, so the first step is building OpenCV 4. To take advantage of CUDA acceleration on the Jetson platform, a CUDA-enabled build of OpenCV must be installed (the build that ships with JetPack is not CUDA-enabled).

Update the package lists and install the build tools

sudo apt update
sudo apt install -y build-essential cmake git libgtk2.0-dev pkg-config  libswscale-dev libtbb2 libtbb-dev
sudo apt install -y python-dev python3-dev python-numpy python3-numpy
sudo apt install -y curl

Install the video and image dependencies

sudo apt install -y  libjpeg-dev libpng-dev libtiff-dev
sudo apt install -y libavcodec-dev libavformat-dev
sudo apt install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt install -y libv4l-dev v4l-utils qv4l2 v4l2ucp libdc1394-22-dev

Download the OpenCV and opencv_contrib sources

curl -L https://github.com/opencv/opencv/archive/4.5.3.zip -o opencv-4.5.3.zip
curl -L https://github.com/opencv/opencv_contrib/archive/4.5.3.zip -o opencv_contrib-4.5.3.zip

Unpack the archives and create a build directory

unzip opencv-4.5.3.zip 
unzip opencv_contrib-4.5.3.zip 
cd opencv-4.5.3/
mkdir build
cd build/

Configure the project with CMake

A basic configuration. Note that with the directory layout above, opencv_contrib-4.5.3 sits next to opencv-4.5.3, so the extra-modules path is two levels up from build/:

cmake   -D WITH_CUDA=ON \
        -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.3/modules \
        -D WITH_GSTREAMER=ON \
        -D WITH_LIBV4L=ON \
        -D BUILD_opencv_python2=ON \
        -D BUILD_opencv_python3=ON \
        -D BUILD_TESTS=OFF \
        -D BUILD_PERF_TESTS=OFF \
        -D BUILD_EXAMPLES=OFF \
        -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local ..

Configure with full CUDA support

Alternatively, configure with CUDA, cuDNN and the contrib modules enabled. CUDA_ARCH_BIN/CUDA_ARCH_PTX=7.2 targets the Xavier; use 5.3 on a Nano or 6.2 on a TX2:

cmake -D CMAKE_BUILD_TYPE=Release \
      -D CMAKE_INSTALL_PREFIX=/usr \
      -D OPENCV_ENABLE_NONFREE=1 \
      -D BUILD_opencv_python2=1 \
      -D BUILD_opencv_python3=1 \
      -D WITH_FFMPEG=1 \
      -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
      -D CUDA_ARCH_BIN=7.2 \
      -D CUDA_ARCH_PTX=7.2 \
      -D WITH_CUDA=1 \
      -D WITH_CUDNN=1 \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 \
      -D OPENCV_GENERATE_PKGCONFIG=1 \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.5.3/modules \
      -D BUILD_EXAMPLES=OFF ..

Build and install

make -j4
sudo make install

To confirm that CUDA support made it into the build, run python3 -c "import cv2; print(cv2.getBuildInformation())" and check that the NVIDIA CUDA line reads YES.

Installing the Intel® RealSense™ SDK

Install the prebuilt binary packages with apt.

1: Register the server's public key (using the apt-key command from Intel's librealsense installation instructions).

2: Add the repository

sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u

3: Install the packages

sudo apt-get update
sudo apt-get install apt-utils -y

sudo apt-get install librealsense2-utils librealsense2-dev -y

4: Plug in the T265 and run realsense-viewer in a terminal to test it

realsense-viewer

If it launches successfully, the viewer displays the T265 streams.

Installation via script
An install script is available at https://github.com/jetsonhacks/installRealSenseSDK.git

Clone the repository first, then adjust the script as needed:

git clone https://github.com/jetsonhacks/installRealSenseSDK.git

Run the script

./buildLibrealsense.sh 
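Regarding the adjustment: the jetsonhacks script pins the librealsense release it builds in a variable near the top of buildLibrealsense.sh. The variable name and default below are assumptions taken from one revision of the script; check your checkout before editing:

```shell
# Assumed variable from buildLibrealsense.sh (name and default may differ
# in your revision of the script): pins the librealsense release to build.
LIBREALSENSE_VERSION=v2.49.0
echo "Building librealsense ${LIBREALSENSE_VERSION}"
```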

Installing realsense-ros
Install the dependency

sudo apt-get install ros-melodic-ddynamic-reconfigure

Create a workspace

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/

Clone the sources and check out the latest 2.x release tag

git clone https://github.com/IntelRealSense/realsense-ros.git
cd realsense-ros/
git checkout `git tag | sort -V | grep -P "^2\.\d+\.\d+" | tail -1`
cd ..
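The checkout line above selects the newest ROS1-compatible 2.x.y release tag. A minimal sketch of how that pipeline behaves, run on a synthetic (made-up) tag list with GNU sort and grep:

```shell
# Made-up tag list standing in for `git tag` output
tags='2.2.24
2.3.0
2.3.2
3.0.0-beta
some-feature-tag'

# sort -V orders by version number; grep keeps only 2.x.y release tags;
# tail -1 takes the highest one
latest=$(printf '%s\n' "$tags" | sort -V | grep -E '^2\.[0-9]+\.[0-9]+' | tail -1)
echo "$latest"   # → 2.3.2
```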

Build the package

catkin_init_workspace
cd ..
catkin_make clean

Troubleshooting
cv_bridge
If catkin_make fails in cv_bridge, point its CMake configuration at the OpenCV 4 with CUDA build installed above instead of the stock OpenCV.

Edit the file /opt/ros/melodic/share/cv_bridge/cmake/cv_bridgeConfig.cmake
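The exact edit depends on your JetPack image, but the usual change is to swap the stock OpenCV paths in cv_bridgeConfig.cmake for the OpenCV 4 install paths. A sketch of the edited lines (the paths and library version are assumptions; check where make install actually placed your headers and libraries):

```cmake
# cv_bridgeConfig.cmake -- illustrative edit, paths are assumptions.
# Point the include dirs at the OpenCV 4 headers instead of OpenCV 3:
set(_include_dirs "include;/usr/include;/usr/include/opencv4")
# Link against the OpenCV 4 shared libraries installed above:
set(libraries "cv_bridge;/usr/lib/libopencv_core.so.4.5.3;/usr/lib/libopencv_imgproc.so.4.5.3;/usr/lib/libopencv_imgcodecs.so.4.5.3")
```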
ddynamic_reconfigure
If the build complains that ddynamic_reconfigure is missing, install it:

sudo apt-get install ros-melodic-ddynamic-reconfigure

Continue the build

catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
catkin_make install

Add the environment variable

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc

Launch the test demo and the T265 pose data appears:

roslaunch realsense2_camera demo_t265.launch

To view the image streams as well, enable the image output in the rs_t265.launch file, then restart the demo to see the images:
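In the 2.x realsense-ros releases the fisheye streams are switched by launch arguments; the argument names below are taken from rs_t265.launch and may differ in your checkout:

```xml
<!-- rs_t265.launch: enable the two fisheye image streams -->
<arg name="enable_fisheye1" default="true"/>
<arg name="enable_fisheye2" default="true"/>
```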

roslaunch realsense2_camera demo_t265.launch

Reading the T265 with OpenCV

#include <iostream>
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    rs2::config cfg;

    // Enable the left and right fisheye image streams
    cfg.enable_stream(RS2_STREAM_FISHEYE, 1, RS2_FORMAT_Y8);
    cfg.enable_stream(RS2_STREAM_FISHEYE, 2, RS2_FORMAT_Y8);

    // Enable the 6DOF pose stream (position, velocity, acceleration,
    // orientation and angular rates from the on-board IMU fusion)
    cfg.enable_stream(RS2_STREAM_POSE, RS2_FORMAT_6DOF);

    rs2::pipeline pipe;
    pipe.start(cfg);

    while (true)
    {
        rs2::frameset data = pipe.wait_for_frames();

        // Get a frame from the pose stream and print the pose data
        auto f = data.first_or_default(RS2_STREAM_POSE);
        auto pose = f.as<rs2::pose_frame>().get_pose_data();

        cout << "px: " << pose.translation.x << "  py: " << pose.translation.y
             << "  pz: " << pose.translation.z
             << "  vx: " << pose.velocity.x << "  vy: " << pose.velocity.y
             << "  vz: " << pose.velocity.z << endl;
        cout << "ax: " << pose.acceleration.x << "  ay: " << pose.acceleration.y
             << "  az: " << pose.acceleration.z
             << "  gx: " << pose.angular_velocity.x << "  gy: " << pose.angular_velocity.y
             << "  gz: " << pose.angular_velocity.z << endl;

        rs2::frame image_left  = data.get_fisheye_frame(1);
        rs2::frame image_right = data.get_fisheye_frame(2);

        if (!image_left || !image_right)
            break;

        // Wrap the raw Y8 buffers in cv::Mat headers without copying;
        // the T265 fisheye sensors output 848x800 grayscale images
        cv::Mat cv_image_left(cv::Size(848, 800), CV_8U,
                              (void*)image_left.get_data(), cv::Mat::AUTO_STEP);
        cv::Mat cv_image_right(cv::Size(848, 800), CV_8U,
                               (void*)image_right.get_data(), cv::Mat::AUTO_STEP);

        cv::imshow("left", cv_image_left);
        cv::imshow("right", cv_image_right);
        cv::waitKey(1);
    }

    return 0;
}

Compile the program against librealsense2 and OpenCV (for example g++ t265.cpp -o t265 $(pkg-config --cflags --libs opencv4) -lrealsense2); running it prints the T265 pose data and displays both fisheye images.
