Installing the Kinect2 Driver under ROS and Simple Applications

https://blog.csdn.net/sunbibei/article/details/51594824

Simple Applications

Saving Images

The goal here is simple: save the Kinect2's RGB images. As described earlier, running rosrun kinect2_viewer kinect2_viewer sd cloud displays the output. That command simply runs the kinect2_viewer node from the kinect2_viewer package with the two arguments sd and cloud. If you go into the package directory you can read the node's source; its main function is excerpted below:

int main(int argc, char **argv) {
  ... ... // some code omitted
  topicColor = "/" + ns + topicColor;
  topicDepth = "/" + ns + topicDepth;
  OUT_INFO("topic color: " FG_CYAN << topicColor << NO_COLOR);
  OUT_INFO("topic depth: " FG_CYAN << topicDepth << NO_COLOR);
  // Receiver is a class, also defined in this file; useExact(true), useCompressed(false)
  Receiver receiver(topicColor, topicDepth, useExact, useCompressed);

  OUT_INFO("starting receiver...");
  // if the argument "cloud" is given at runtime, mode = Receiver::CLOUD
  // if the argument "image" is given at runtime, mode = Receiver::IMAGE
  // if the argument "both" is given at runtime, mode = Receiver::BOTH
  receiver.run(mode);

  ros::shutdown();
  return 0;
}

The Receiver class itself is not very complicated. The main display loop lives in imageViewer() or cloudViewer(), and which one is called depends on the argument given. The mapping is as follows:

switch(mode) {
case CLOUD:
  cloudViewer();
  break;
case IMAGE:
  imageViewer();
  break;
case BOTH:
  imageViewerThread = std::thread(&Receiver::imageViewer, this);
  cloudViewer();
  break;
}

With the BOTH option, two windows are opened for display. Both functions already implement image saving. The relevant code is excerpted below; since the two implementations are similar, only the imageViewer() version is shown:

int key = cv::waitKey(1);
switch(key & 0xFF) {
case 27:
case 'q':
  running = false;
  break;
case ' ':
case 's':
  if(mode == IMAGE) {
    createCloud(depth, color, cloud);
    saveCloudAndImages(cloud, color, depth, depthDisp);
  } else {
    save = true;
  }
  break;
}

The key function, saveCloudAndImages(), is implemented as follows:

void saveCloudAndImages(
    const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr cloud,
    const cv::Mat &color, const cv::Mat &depth, const cv::Mat &depthColored) {

  oss.str("");
  oss << "./" << std::setfill('0') << std::setw(4) << frame;
  // all files are saved in the current working directory
  const std::string baseName = oss.str();
  const std::string cloudName = baseName + "_cloud.pcd";
  const std::string colorName = baseName + "_color.jpg";
  const std::string depthName = baseName + "_depth.png";
  const std::string depthColoredName = baseName + "_depth_colored.png";

  OUT_INFO("saving cloud: " << cloudName);
  // writer is a pcl::PCDWriter member variable of this class
  writer.writeBinary(cloudName, *cloud);
  OUT_INFO("saving color: " << colorName);
  cv::imwrite(colorName, color, params);
  OUT_INFO("saving depth: " << depthName);
  cv::imwrite(depthName, depth, params);
  OUT_INFO("saving depth: " << depthColoredName);
  cv::imwrite(depthColoredName, depthColored, params);
  OUT_INFO("saving complete!");
  ++frame;
}

As the code shows, to save images you only need to focus the display window and press the s key or the space bar; the saved files end up in the current directory.

If you have more specific needs, building on this source code gets you most of the way there. Below is a small example.

Saving an Image Sequence

Suppose we want to save a sequence of images and only control when saving starts and stops: for example, press B (Begin) to start saving and E (End) to stop.

This can be done with a few small changes to the source. First, checks for the B and E keys have to be added to the switch(key & 0xFF) block excerpted above; for continuous saving, a simple flag is all that is needed.

Start by copying viewer.cpp to a new file named save_seq.cpp. In save_seq.cpp, add a new member variable bool save_seq to the Receiver class right below the bool save member, and initialize it with save_seq(false) in the constructor's initializer list.
Then locate the void imageViewer() function and modify its switch(key & 0xFF) block as follows:

int key = cv::waitKey(1);
switch(key & 0xFF) {
case 27:
case 'q':
  running = false;
  break;
case 'b': save_seq = true; save = true; break;
case 'e': save_seq = false; save = false; break;
case ' ':
case 's':
  if (save_seq) break;
  if(mode == IMAGE) {
    createCloud(depth, color, cloud);
    saveCloudAndImages(cloud, color, depth, depthDisp);
  } else {
    save = true;
  }
  break;
}
if (save_seq) {
  createCloud(depth, color, cloud);
  saveCloudAndImages(cloud, color, depth, depthDisp);
}

Next, locate the void cloudViewer() function and modify its if(save) block as follows:

if(save || save_seq) {
  save = false;
  cv::Mat depthDisp;
  dispDepth(depth, depthDisp, 12000.0f);
  saveCloudAndImages(cloud, color, depth, depthDisp);
}

Then locate the void keyboardEvent(const pcl::visualization::KeyboardEvent &event, void *) function and modify it as follows:

if(event.keyUp()) {
  switch(event.getKeyCode()) {
  case 27:
  case 'q':
    running = false;
    break;
  case ' ':
  case 's':
    save = true;
    break;
  case 'b':
    save_seq = true;
    break;
  case 'e':
    save_seq = false;
    break;
  }
}

Finally, add the following at the end of CMakeLists.txt:

add_executable(save_seq src/save_seq.cpp)
target_link_libraries(save_seq
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
  ${PCL_LIBRARIES}
  ${kinect2_bridge_LIBRARIES}
)

Go back to the catkin workspace root and run catkin_make. If compilation and linking succeed, you can try it out. Of course, kinect2_bridge has to be started first. Run the following commands in order:

roslaunch kinect2_bridge kinect2_bridge.launch
rosrun kinect2_viewer save_seq hd cloud
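
If the viewer window stays empty, it is worth first checking that the bridge is actually publishing its topics, e.g. with the standard ROS tool:

rostopic list | grep kinect2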

Getting Coordinates by Clicking on the Point Cloud

The basic idea: as the mouse moves over the point cloud view, display in real time the 3D coordinates of the point under the cursor; when the mouse is clicked, grab that coordinate and publish it.

To do this we first need a mouse callback that reacts whenever the mouse state changes. The mouse changes can be reduced to three cases:

mouse movement
left button click
right button click

Since this only involves a callback and a small feature, the implementation is simple. Three global variables hold the mouse state (the code requires C++11 support; if you would rather avoid that, change the type to volatile int), together with one ordinary global function:

std::atomic_int mouseX;
std::atomic_int mouseY;
std::atomic_int mouseBtnType;

void onMouse(int event, int x, int y, int flags, void* ustc) {
    // std::cout << "onMouse: " << x << " " << y << std::endl;
    mouseX  = x;
    mouseY  = y;
    mouseBtnType = event;
}

During initialization, add two ros::Publisher objects, one for the topic that left-button clicks are published to and one for right-button clicks, as shown below:

ros::Publisher leftBtnPointPub =
    nh.advertise<geometry_msgs::PointStamped>("/kinect2/click_point/left", 1);
ros::Publisher rightBtnPointPub =
    nh.advertise<geometry_msgs::PointStamped>("/kinect2/click_point/right", 1);

The message type is geometry_msgs/PointStamped, a standard ROS message; the header #include <geometry_msgs/PointStamped.h> must be included at the top of the source file. It is defined as follows:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
geometry_msgs/Point point
  float64 x
  float64 y
  float64 z
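
As a quick sketch of how these clicks can be consumed on the other end (the node name click_listener and the callback below are made up for illustration; only the topic name comes from the publishers above), a minimal subscriber might look like this:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

// print every point received on the left-button topic
void leftClickCallback(const geometry_msgs::PointStamped::ConstPtr &msg) {
  ROS_INFO("left click in frame %s: (%f, %f, %f)",
           msg->header.frame_id.c_str(),
           msg->point.x, msg->point.y, msg->point.z);
}

int main(int argc, char **argv) {
  ros::init(argc, argv, "click_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/kinect2/click_point/left", 1, leftClickCallback);
  ros::spin();
  return 0;
}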

Then rewrite the cloudViewer() function as follows:

void cloudViewer() {
  cv::Mat color, depth;
  std::chrono::time_point<std::chrono::high_resolution_clock> start, now;
  double fps = 0;
  size_t frameCount = 0;
  std::ostringstream oss;
  std::ostringstream ossXYZ; // new string stream for the coordinate text
  const cv::Point pos(5, 15);
  const cv::Scalar colorText = CV_RGB(255, 0, 0);
  const double sizeText = 0.5;
  const int lineText = 1;
  const int font = cv::FONT_HERSHEY_SIMPLEX;
  // read the current mouse position from the global variables
  int img_x = mouseX;
  int img_y = mouseY;
  geometry_msgs::PointStamped ptMsg;
  ptMsg.header.frame_id = "kinect_link";

  lock.lock();
  color = this->color;
  depth = this->depth;
  updateCloud = false;
  lock.unlock();

  const std::string window_name = "color viewer";
  cv::namedWindow(window_name);
  // register the mouse callback; the third argument uses the C++11 keyword nullptr, replace it with NULL if C++11 is unavailable
  cv::setMouseCallback(window_name, onMouse, nullptr);
  createCloud(depth, color, cloud);

  // initialize the FPS timer before entering the display loop
  start = std::chrono::high_resolution_clock::now();
  for(; running && ros::ok();) {
    if(updateCloud) {
      lock.lock();
      color = this->color;
      depth = this->depth;
      updateCloud = false;
      lock.unlock();

      createCloud(depth, color, cloud);
      img_x = mouseX;
      img_y = mouseY;
      const pcl::PointXYZRGBA& pt = cloud->points[img_y * depth.cols + img_x];
      ptMsg.point.x = pt.x;
      ptMsg.point.y = pt.y;
      ptMsg.point.z = pt.z;

      // publish the 3D coordinate to a different topic depending on whether the left or right mouse button was clicked
      switch (mouseBtnType) {
      case cv::EVENT_LBUTTONUP:
          ptMsg.header.stamp = ros::Time::now();
          leftBtnPointPub.publish(ptMsg);
          ros::spinOnce();
          break;
      case cv::EVENT_RBUTTONUP:
          ptMsg.header.stamp = ros::Time::now();
          rightBtnPointPub.publish(ptMsg);
          ros::spinOnce();
          break;
      default:
          break;
      }
      mouseBtnType = cv::EVENT_MOUSEMOVE;

      ++frameCount;
      now = std::chrono::high_resolution_clock::now();
      double elapsed =
          std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() / 1000.0;
      if(elapsed >= 1.0) {
        fps = frameCount / elapsed;
        oss.str("");
        oss << "fps: " << fps << " ( " << elapsed / frameCount * 1000.0 << " ms)";
        start = now;
        frameCount = 0;
      }

      cv::putText(color, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);
      ossXYZ.str("");
      ossXYZ << "( " << ptMsg.point.x << ", " << ptMsg.point.y
                                  << ", " << ptMsg.point.z << " )";
      cv::putText(color, ossXYZ.str(), cv::Point(img_x, img_y), font, 1, colorText, 3, CV_AA);
      // cv::circle(color, cv::Point(mouseX, mouseY), 5, cv::Scalar(0, 0, 255), -1);
      cv::imshow(window_name, color);
      // cv::imshow(window_name, depth);
      cv::waitKey(1);
    }

  }
  cv::destroyAllWindows();
  cv::waitKey(100);
}
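
One caveat about the lookup above: cloud->points[img_y * depth.cols + img_x] assumes the mouse coordinates always lie inside the image, which is not guaranteed before the first mouse event or when the cursor leaves the window. A minimal safeguard (a sketch, not part of the original code; it needs <algorithm>) is to clamp the coordinates before indexing:

// clamp the mouse position to the image bounds before indexing the organized cloud
img_x = std::max(0, std::min(img_x, depth.cols - 1));
img_y = std::max(0, std::min(img_y, depth.rows - 1));
const pcl::PointXYZRGBA& pt = cloud->points[img_y * depth.cols + img_x];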

Finally, the main function needs only minor changes. It is excerpted in full below; the redundant argument parsing has been removed so that usage is fixed and simple.

int main(int argc, char **argv) {
#if EXTENDED_OUTPUT
  ROSCONSOLE_AUTOINIT;
  if(!getenv("ROSCONSOLE_FORMAT"))
  {
    ros::console::g_formatter.tokens_.clear();
    ros::console::g_formatter.init("[${severity}] ${message}");
  }
#endif

  ros::init(argc, argv, "kinect2_viewer", ros::init_options::AnonymousName);

  if(!ros::ok()) {
    return 0;
  }

  std::string ns = K2_DEFAULT_NS;
  std::string topicColor = K2_TOPIC_HD K2_TOPIC_IMAGE_COLOR K2_TOPIC_IMAGE_RECT;
  std::string topicDepth = K2_TOPIC_HD K2_TOPIC_IMAGE_DEPTH K2_TOPIC_IMAGE_RECT;
  bool useExact = true;
  bool useCompressed = false;
  Receiver::Mode mode = Receiver::CLOUD;

  topicColor = "/" + ns + topicColor;
  topicDepth = "/" + ns + topicDepth;
  OUT_INFO("topic color: " FG_CYAN << topicColor << NO_COLOR);
  OUT_INFO("topic depth: " FG_CYAN << topicDepth << NO_COLOR);

  Receiver receiver(topicColor, topicDepth, useExact, useCompressed);

  OUT_INFO("starting receiver...");
  OUT_INFO("Please click mouse in color viewer...");
  receiver.run(mode);

  ros::shutdown();
  return 0;
}

Add the following two blocks to CMakeLists.txt, then build:

add_executable(click_rgb src/click_rgb.cpp)
target_link_libraries(click_rgb
  ${catkin_LIBRARIES}
  ${OpenCV_LIBRARIES}
  ${PCL_LIBRARIES}
  ${kinect2_bridge_LIBRARIES}
)

install(TARGETS click_rgb
  RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)

To run it, type rosrun kinect2_viewer click_rgb. In the resulting window, the red coordinate label follows the mouse position (the cursor itself does not show up in screenshots).
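
To confirm that clicks are actually being published, you can watch one of the topics in another terminal:

rostopic echo /kinect2/click_point/left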
