Converting Depth Images to Laser Scans / Point Clouds in ROS

Depth-image-to-laser conversion is implemented in the ROS package depthimage_to_laserscan; this post walks through its computation. Converting a point cloud to laser data follows the same idea: simply project the points within a given height range.

1. Depth Image to Laser Scan: Principle

The principle is illustrated in Figure (1). For any given depth image point $m(u, v, z)$, the conversion to a laser reading proceeds in three steps:

a. Transform the depth image point $m(u, v, z)$ into the point $M(x, y, z)$ in the depth camera's coordinate frame.

b. Compute the angle $\angle AOC$ between the lines $AO$ and $CO$:

$$\theta = \arctan(x / z)$$

c. Map the angle $\angle AOC$ into the corresponding laser bin. Given the laser's minimum and maximum angles $[\alpha, \beta]$, with the beam subdivided into $N$ parts, the scan can be stored in an array $laser[N]$. The index $n$ into which point $M$ falls is:

$$n = N(\theta - \alpha) / (\beta - \alpha)$$

The value of $laser[n]$ is the distance $r$ from $C$ (the projection of $M$ onto the $xOz$ plane) to the camera's optical center $O$:

$$laser[n] = r = |OC| = \sqrt{x^2 + z^2}$$
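As a quick worked example (all numbers made up): suppose $x = 0.28\,\mathrm{m}$, $z = 2.0\,\mathrm{m}$, $[\alpha, \beta] = [-0.5, 0.5]\,\mathrm{rad}$, and $N = 640$. Then

$$\theta = \arctan(0.28 / 2.0) \approx 0.1391\ \mathrm{rad}, \qquad n = \lfloor 640 \cdot (0.1391 + 0.5) / 1.0 \rfloor = 409, \qquad r = \sqrt{0.28^2 + 2.0^2} \approx 2.02\ \mathrm{m},$$

so this pixel contributes a range of about 2.02 m to bin 409 of the scan.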

Figure (1)

This is implemented in the depthimage_to_laserscan package; its core conversion routine is:
template<typename T>
void convert(const sensor_msgs::ImageConstPtr& depth_msg, const image_geometry::PinholeCameraModel& cam_model,
             const sensor_msgs::LaserScanPtr& scan_msg, const int& scan_height) const {
  // Use correct principal point from calibration
  float center_x = cam_model.cx();
  float center_y = cam_model.cy();

  // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
  double unit_scaling = depthimage_to_laserscan::DepthTraits<T>::toMeters( T(1) );
  float constant_x = unit_scaling / cam_model.fx();
  float constant_y = unit_scaling / cam_model.fy();

  const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
  int row_step = depth_msg->step / sizeof(T);

  int offset = (int)(cam_model.cy() - scan_height/2);
  depth_row += offset*row_step; // Offset to center of image

  for (int v = offset; v < offset + scan_height_; v++, depth_row += row_step) {
    for (int u = 0; u < (int)depth_msg->width; u++) // Loop over each pixel in row
    {
      T depth = depth_row[u];

      double r = depth; // Assign to pass through NaNs and Infs
      double th = -atan2((double)(u - center_x) * constant_x, unit_scaling); // Atan2(x, z), but depth divides out
      int index = (th - scan_msg->angle_min) / scan_msg->angle_increment;

      if (depthimage_to_laserscan::DepthTraits<T>::valid(depth)) { // Not NaN or Inf
        // Calculate in XYZ
        double x = (u - center_x) * depth * constant_x;
        double z = depthimage_to_laserscan::DepthTraits<T>::toMeters(depth);

        // Calculate actual distance
        r = sqrt(pow(x, 2.0) + pow(z, 2.0));
      }

      // Determine if this point should be used.
      if (use_point(r, scan_msg->ranges[index], scan_msg->range_min, scan_msg->range_max)) {
        scan_msg->ranges[index] = r;
      }
    }
  }
}

image_geometry::PinholeCameraModel cam_model_; ///< image_geometry helper class for managing sensor_msgs/CameraInfo messages.

float scan_time_;   ///< Stores the time between scans.
float range_min_;   ///< Stores the current minimum range to use.
float range_max_;   ///< Stores the current maximum range to use.
int scan_height_;   ///< Number of pixel rows to use when producing a laserscan from an area.
std::string output_frame_id_; ///< Output frame_id for each laserscan. This is likely NOT the camera's frame_id.
};
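For intuition, the same geometry can be exercised outside ROS. The following minimal sketch (all intrinsics and scan parameters are made-up values, not taken from the package) maps one depth pixel to a scan bin following steps a–c above; note that the package code negates the angle (th = -atan2(...)) to match the scan's angle convention, which this toy example omits:

#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

int main() {
  // Hypothetical pinhole intrinsics (pixels) and scan configuration.
  const double fx = 570.3, cx = 319.5;    // f/dx and principal point u0
  const double alpha = -0.5, beta = 0.5;  // [angle_min, angle_max] in radians
  const int N = 640;                      // number of laser bins
  std::vector<double> laser(N, std::numeric_limits<double>::infinity());

  // One depth pixel: column u, with depth z in meters.
  const int u = 400;
  const double z = 2.0;

  const double x = (u - cx) * z / fx;         // step a: back-project to camera X
  const double theta = std::atan2(x, z);      // step b: angle AOC
  const int n = static_cast<int>(N * (theta - alpha) / (beta - alpha));  // step c: bin index
  const double r = std::sqrt(x * x + z * z);  // range |OC|

  if (n >= 0 && n < N && r < laser[n])
    laser[n] = r;  // keep the nearer return per bin (the package's use_point() makes a similar choice)
  std::printf("theta = %.4f rad, bin = %d, r = %.3f m\n", theta, n, r);
  return 0;
}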



2. Depth Image to Point Cloud: Principle

First, consider how world coordinates map to the image: a world point $M(X_w, Y_w, Z_w)$ projects to an image point $m(u, v)$ as follows.

For the detailed derivation, refer to material on camera calibration; it is not repeated here. Formally:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

where $(u, v)$ is a point in image coordinates, $(u_0, v_0)$ is the image center, $(x_w, y_w, z_w)$ is the 3D point in world coordinates, $z_c$ is the z value in camera coordinates (i.e. the distance from the target to the camera), and $R$ and $T$ are the 3x3 rotation matrix and 3x1 translation vector of the extrinsic matrix.
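A note on notation: $f/dx$ and $f/dy$ are the focal length expressed in pixel units along the two image axes, commonly written $f_x$ and $f_y$; these are what image_geometry's PinholeCameraModel returns from fx() and fy() in the code above:

$$f_x = f / dx, \qquad f_y = f / dy.$$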

Setting the extrinsic matrix: since the world origin coincides with the camera origin, there is no rotation or translation, so

$$R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$

Note that because the camera frame and the world frame share the same origin, an object has the same depth in both frames, i.e. $z_c = z_w$. The formula therefore simplifies to:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_c \\ 1 \end{bmatrix}$$

From this transformation, the mapping from an image point $[u, v]^T$ (with known depth) to the world point $[x_w, y_w, z_w]^T$ follows:

$$\begin{aligned} x_w &= z_c\,(u - u_0)\,dx / f \\ y_w &= z_c\,(v - v_0)\,dy / f \\ z_w &= z_c \end{aligned}$$
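As a quick numeric check (made-up intrinsics): for a pixel at $u = 400$, $v = 250$ with depth $z_c = 2.0\,\mathrm{m}$, and $f/dx = f/dy = 525$, $u_0 = 319.5$, $v_0 = 239.5$:

$$x_w = 2.0 \cdot (400 - 319.5)/525 \approx 0.307\ \mathrm{m}, \qquad y_w = 2.0 \cdot (250 - 239.5)/525 = 0.04\ \mathrm{m}, \qquad z_w = 2.0\ \mathrm{m}.$$

This is exactly what the nodelet below computes per pixel, with constant_x = unit_scaling / fx playing the role of $dx/f$.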


For the code, see https://github.com/ros-perception/image_pipeline/blob/indigo/depth_image_proc/src/nodelets/point_cloud_xyzrgb.cpp:

template<typename T>
void PointCloudXyzrgbNodelet::convert(const sensor_msgs::ImageConstPtr& depth_msg,
                                      const sensor_msgs::ImageConstPtr& rgb_msg,
                                      const PointCloud::Ptr& cloud_msg,
                                      int red_offset, int green_offset, int blue_offset, int color_step)
{
  // Use correct principal point from calibration
  float center_x = model_.cx();
  float center_y = model_.cy();

  // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
  double unit_scaling = DepthTraits<T>::toMeters( T(1) );
  float constant_x = unit_scaling / model_.fx();
  float constant_y = unit_scaling / model_.fy();
  float bad_point = std::numeric_limits<float>::quiet_NaN ();
  
  const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
  int row_step = depth_msg->step / sizeof(T);
  const uint8_t* rgb = &rgb_msg->data[0];
  int rgb_skip = rgb_msg->step - rgb_msg->width * color_step;

  sensor_msgs::PointCloud2Iterator<float> iter_x(*cloud_msg, "x");
  sensor_msgs::PointCloud2Iterator<float> iter_y(*cloud_msg, "y");
  sensor_msgs::PointCloud2Iterator<float> iter_z(*cloud_msg, "z");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_r(*cloud_msg, "r");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_g(*cloud_msg, "g");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_b(*cloud_msg, "b");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_a(*cloud_msg, "a");

  for (int v = 0; v < int(cloud_msg->height); ++v, depth_row += row_step, rgb += rgb_skip)
  {
    for (int u = 0; u < int(cloud_msg->width); ++u, rgb += color_step, ++iter_x, ++iter_y, ++iter_z, ++iter_a, ++iter_r, ++iter_g, ++iter_b)
    {
      T depth = depth_row[u];

      // Check for invalid measurements
      if (!DepthTraits<T>::valid(depth))
      {
        *iter_x = *iter_y = *iter_z = bad_point;
      }
      else
      {
        // Fill in XYZ
        *iter_x = (u - center_x) * depth * constant_x;
        *iter_y = (v - center_y) * depth * constant_y;
        *iter_z = DepthTraits<T>::toMeters(depth);
      }

      // Fill in color
      *iter_a = 255;
      *iter_r = rgb[red_offset];
      *iter_g = rgb[green_offset];
      *iter_b = rgb[blue_offset];
    }
  }
}

} // namespace depth_image_proc
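Finally, here is a minimal, ROS-free sketch of the same inner loop (hypothetical intrinsics; a uint16_t buffer holding depths in millimeters, as common RGB-D drivers publish, so a unit scaling of 0.001 converts to meters; 0 marks an invalid pixel):

#include <cmath>
#include <cstdint>
#include <cstdio>

int main() {
  // Hypothetical intrinsics; fx = f/dx, fy = f/dy in pixels.
  const float fx = 525.0f, fy = 525.0f, cx = 319.5f, cy = 239.5f;
  const float unit_scaling = 0.001f;  // uint16_t depths in mm -> meters

  // A tiny 2x2 "depth image" in millimeters (0 marks an invalid pixel).
  const uint16_t depth[2][2] = {{2000, 0}, {1500, 1750}};

  for (int v = 0; v < 2; ++v) {
    for (int u = 0; u < 2; ++u) {
      const uint16_t d = depth[v][u];
      if (d == 0) { std::printf("(%d,%d): invalid\n", u, v); continue; }
      // Back-project: x_w = z_c (u - u0) / fx, y_w = z_c (v - v0) / fy, z_w = z_c.
      const float z = d * unit_scaling;
      const float x = (u - cx) * z / fx;
      const float y = (v - cy) * z / fy;
      std::printf("(%d,%d) -> (%.3f, %.3f, %.3f)\n", u, v, x, y, z);
    }
  }
  return 0;
}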


