Monocular semi-direct visual odometry (SVO) in practice with the xtion pro live

For downloading and building SVO, see the tutorial: https://github.com/uzh-rpg/rpg_svo/wiki
After running the dataset provided by the authors, I wanted to see how SVO performs with a camera I had on hand.
To use your own camera, you first have to calibrate it to obtain the camera parameters. Here I use the
ATAN camera model and calibrate the RGB camera of the xtion_pro_live.
(Camera calibration: https://github.com/uzh-rpg/rpg_svo/wiki/Camera-Calibration)

To calibrate it, we first need to download PTAM and use PTAM's cameracalibrator.
Before building the ptam package, install its dependencies:

sudo apt-get install freeglut3-dev
sudo apt-get install libblas-dev
sudo apt-get install liblapacke-dev

After a successful build, edit cameracalibrator.launch to remap the input topic:

<launch>
   <node name="cameracalibrator" pkg="ptam" type="cameracalibrator" clear_params="true" output="screen">
        <remap from="image" to="/mono_image" />
        <remap from="pose" to="pose"/>
    <rosparam file="$(find ptam)/PtamFixParams.yaml"/>
   </node>
</launch>

You may wonder where the input image /mono_image comes from.
PTAM's cameracalibrator expects a single-channel grayscale image, whereas the /camera/rgb/image_raw topic published by the xtion_pro_live carries 3-channel RGB images, so a conversion is needed: /camera/rgb/image_raw -> /mono_image. In other words, /mono_image is the converted single-channel grayscale image.
How do we do the conversion? Don't worry, here is a conversion node:

    // Node that converts an RGB image topic to a grayscale image topic
    #include <ros/ros.h>
    #include <opencv2/opencv.hpp>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/image_encodings.h>

    ros::Publisher image_pub;

    void chatterCallback(const sensor_msgs::ImageConstPtr& msg)
    {
        // Copy the ROS image into an OpenCV image
        cv_bridge::CvImagePtr cv_ptr =
            cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
        cv::Mat image_gray;
        cv::cvtColor(cv_ptr->image, image_gray, CV_BGR2GRAY);  // convert to grayscale

        // Wrap the grayscale image back into a ROS message and publish it
        cv_bridge::CvImage cvi;
        cvi.header   = msg->header;
        cvi.encoding = sensor_msgs::image_encodings::MONO8;
        cvi.image    = image_gray;

        sensor_msgs::Image ros_img;
        cvi.toImageMsg(ros_img);
        image_pub.publish(ros_img);
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "mono_image_converter");
        ros::NodeHandle n;
        image_pub = n.advertise<sensor_msgs::Image>("/mono_image", 1);
        ros::Subscriber sub = n.subscribe("/camera/rgb/image_raw", 1, chatterCallback);
        ros::spin();
        return 0;
    }