Simple 3D Reconstruction with a Kinect + code (aipiano, CSDN)
1. The pinhole camera model
First, let us introduce the pinhole camera model. Consider a point $P = (X, Y, Z)^T$ in space and its corresponding point $p = (x, y)^T$ in the image. To simplify the model, the image center is assumed to lie on the camera's optical axis (the offset is handled in section 3). There are two coordinate systems: the camera coordinate system $(X, Y, Z)$ and the image coordinate system $(x, y)$. The points $P$ and $p$ are arbitrary, but they are linked by the projection mapping described below.

Mathematically, the geometry reduces to the following. Let $f$ be the focal length, i.e. the distance between the optical center and the image plane. By the intercept theorem (similar triangles),

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}.$$

Writing the two formulas above in vector form:

$$\begin{pmatrix} x \\ y \end{pmatrix} = \frac{f}{Z}\begin{pmatrix} X \\ Y \end{pmatrix}.$$
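To make the two formulas concrete, here is a minimal C++ sketch of this ideal projection (not part of the original post; the function name and the sample numbers are made up for illustration):

```cpp
#include <cstdio>

// Ideal pinhole projection: x = f * X / Z, y = f * Y / Z,
// with the focal length f and the image coordinates in the same metric unit.
void projectPinhole(double X, double Y, double Z, double f, double& x, double& y)
{
    x = f * X / Z;
    y = f * Y / Z;
}

int main()
{
    double x, y;
    // Example: a point 0.5 m to the right and 2 m in front of the camera, f = 0.01 m.
    projectPinhole(0.5, 0.0, 2.0, 0.01, x, y);
    std::printf("x = %f m, y = %f m\n", x, y);   // x = 0.0025, y = 0
    return 0;
}
```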
2. Homogeneous coordinates
In homogeneous coordinates the projection becomes a linear map. A homogeneous image point is only defined up to a non-zero scale factor $s \in \mathbb{R} \setminus \{0\}$: multiplying all components by such an $s$ describes the same image point.
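The formula images of this section did not survive; as an assumed reconstruction, the standard homogeneous form of the ideal projection above (with $s = Z$) is

$$ s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad s \in \mathbb{R} \setminus \{0\}. $$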
3. Principal point offset

In practice the origin of the image coordinate system is not the point where the optical axis meets the image plane, but (typically) a corner of the image. The image coordinates are therefore shifted by the principal point $(\hat{c}_x, \hat{c}_y)$.
4. Pixel units
The focal length $f$ and the principal point coordinates $\hat{c}_x$, $\hat{c}_y$ above are given in metric units. To work in pixel units they are scaled by the factors $k_x$ and $k_y$ (pixels per unit length along the $x$- and $y$-axis of the image), which yields the intrinsic parameters in pixels: $f_x = k_x f$, $f_y = k_y f$, $c_x = k_x \hat{c}_x$, $c_y = k_y \hat{c}_y$.
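The derivation images of this section are lost as well; the standard camera (intrinsic) matrix assembled from these quantities, given here as an assumed reconstruction, is

$$ K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}, \qquad s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}, $$

with $(u, v)$ the pixel coordinates of the image point.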
5. Transformation

So far the point $P$ was given in the camera's own coordinate system. In general it is given in another coordinate system (for example the world frame, or the frame of a second camera), so a rigid-body transformation consisting of a rotation $R$ and a translation $t$, the extrinsic parameters, is applied first.
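The original formula images are missing here too; a minimal sketch of this transformation, with the conventions used above, is

$$ \begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = R \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix} + t, \qquad R \in \mathbb{R}^{3 \times 3} \text{ (rotation)}, \; t \in \mathbb{R}^{3} \text{ (translation)}. $$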
6. Summary
The complete model

$$ s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = K \begin{pmatrix} R & t \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} $$

describes the relation between a three-dimensional point in space captured by a camera and its equivalent, two-dimensional point on the image plane. The parameters are called:

- $K$ (built from $f_x$, $f_y$, $c_x$, $c_y$): intrinsic parameters or intrinsics
- $R$, $t$: extrinsic parameters or extrinsics
It is important to understand that our model (except for the transformation) has been derived for one general camera. In the context of the Kinect you therefore have separate intrinsics for the depth camera and the RGB camera! In addition, we used the transformation to combine the coordinate systems of the depth camera and the RGB camera, which implies that we only have one transformation matrix.

The application of our model/formula in the context of the Kinect is explained in the class description (see the links below); a short, hypothetical sketch of this depth-to-RGB mapping follows.
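The sketch below (not from the original post) shows how the model can be applied to the Kinect: a depth pixel is back-projected with the depth-camera intrinsics, moved into the RGB-camera frame with the single transformation $[R \mid t]$, and projected with the RGB intrinsics to look up its color. All intrinsic values, the rotation and the translation are placeholders, not calibrated Kinect parameters:

```cpp
#include <cstdio>

struct Intrinsics { double fx, fy, cx, cy; };
struct Vec3 { double x, y, z; };

// Back-project pixel (u, v) with depth z (in meters) into 3D camera coordinates.
Vec3 deproject(const Intrinsics& K, double u, double v, double z)
{
    return { (u - K.cx) * z / K.fx, (v - K.cy) * z / K.fy, z };
}

// Project a 3D point given in camera coordinates onto the image plane (in pixels).
void project(const Intrinsics& K, const Vec3& P, double& u, double& v)
{
    u = K.fx * P.x / P.z + K.cx;
    v = K.fy * P.y / P.z + K.cy;
}

int main()
{
    // Placeholder intrinsics and transformation; real cameras need calibrated values.
    Intrinsics Kd{570.0, 570.0, 320.0, 240.0};   // depth camera (assumed)
    Intrinsics Kc{520.0, 520.0, 320.0, 240.0};   // RGB camera (assumed)
    double R[3][3] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};  // assumed rotation (identity)
    double t[3]    = {0.025, 0.0, 0.0};                   // assumed baseline of 2.5 cm

    // A depth pixel at (u, v) = (400, 200) with a measured depth of 1.5 m.
    Vec3 Pd = deproject(Kd, 400.0, 200.0, 1.5);

    // Transform into the RGB-camera frame: Pc = R * Pd + t.
    Vec3 Pc{
        R[0][0]*Pd.x + R[0][1]*Pd.y + R[0][2]*Pd.z + t[0],
        R[1][0]*Pd.x + R[1][1]*Pd.y + R[1][2]*Pd.z + t[1],
        R[2][0]*Pd.x + R[2][1]*Pd.y + R[2][2]*Pd.z + t[2]
    };

    // Project into the RGB image to find the pixel whose color belongs to this depth pixel.
    double u, v;
    project(Kc, Pc, u, v);
    std::printf("depth pixel (400, 200) maps to RGB pixel (%.1f, %.1f)\n", u, v);
    return 0;
}
```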
CODE:
// C++ standard library
#include <iostream>
#include <string>
using namespace std;

// OpenCV
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// PCL
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>

// Point cloud type definitions
typedef pcl::PointXYZRGBA PointT;
typedef pcl::PointCloud<PointT> PointCloud;

// Camera intrinsics of the depth camera (in pixels) and the depth scaling factor
// (raw depth values are in millimeters, so dividing by 1000 yields meters)
const double camera_factor = 1000;
const double camera_cx = 325.5;
const double camera_cy = 253.5;
const double camera_fx = 518.0;
const double camera_fy = 519.0;

// Main function
int main( int argc, char** argv )
{
    // Read ./data/rgb.png and ./data/depth.png and convert them into a point cloud

    // Image matrices
    cv::Mat rgb, depth;
    // Read the images with cv::imread()
    // API: http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=imread#cv2.imread
    rgb = cv::imread( "./data/rgb.png" );
    // rgb is an 8UC3 color image
    // depth is a 16UC1 single-channel image; note flags = -1, which loads the raw data unmodified
    depth = cv::imread( "./data/depth.png", -1 );

    // Point cloud variable
    // Create an empty point cloud through a smart pointer, which frees itself automatically.
    PointCloud::Ptr cloud ( new PointCloud );
    // Traverse the depth image
    for (int m = 0; m < depth.rows; m++)
        for (int n = 0; n < depth.cols; n++)
        {
            // Get the value at (m, n) in the depth image
            ushort d = depth.ptr<ushort>(m)[n];
            // d may be missing (zero); if so, skip this point
            if (d == 0)
                continue;
            // d has a value, so add a point to the point cloud
            PointT p;

            // Compute the 3D coordinates of this point by inverting the pinhole projection:
            // X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, with n as u (column) and m as v (row)
            p.z = double(d) / camera_factor;
            p.x = (n - camera_cx) * p.z / camera_fx;
            p.y = (m - camera_cy) * p.z / camera_fy;

            // Get its color from the rgb image
            // rgb is a three-channel BGR image, so the channels are read in this order
            p.b = rgb.ptr<uchar>(m)[n*3];
            p.g = rgb.ptr<uchar>(m)[n*3+1];
            p.r = rgb.ptr<uchar>(m)[n*3+2];

            // Add p to the point cloud
            cloud->points.push_back( p );
        }

    // Set the point cloud metadata and save it
    cloud->height = 1;
    cloud->width = cloud->points.size();
    cout<<"point cloud size = "<<cloud->points.size()<<endl;
    cloud->is_dense = false;
    pcl::io::savePCDFile( "./data/pointcloud.pcd", *cloud );

    // Clear the data and exit
    cloud->points.clear();
    cout<<"Point cloud saved."<<endl;
    return 0;
}
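The post stops after saving the PCD file. As an optional, assumed follow-up (it needs PCL's visualization module, which the original code does not use), the result can be inspected with pcl::visualization::CloudViewer:

```cpp
// A small sketch for checking the result: load the saved PCD file and display it.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/cloud_viewer.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    if (pcl::io::loadPCDFile("./data/pointcloud.pcd", *cloud) < 0)
        return 1;                              // file could not be read

    pcl::visualization::CloudViewer viewer("pointcloud");
    viewer.showCloud(cloud);
    while (!viewer.wasStopped()) {}            // keep the window open until it is closed
    return 0;
}
```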
References:
- R. Hartley and A. Zisserman: Multiple View Geometry. Slides, CVPR Tutorial, 1999. http://users.cecs.anu.edu.au/~hartley/Papers/CVPR99-tutorial/tutorial.pdf
- A. Kläser: Kamerakalibrierung und Stereo Vision. Written report, MIBI-Seminar, FH Bonn-Rhein-Sieg, 2005. http://www2.inf.fh-bonn-rhein-sieg.de/mi/lv/smibi/ss05/stud/klaeser/klaeser_ausarbeitung.pdf
- Wikipedia: Pinhole camera model. http://en.wikipedia.org/wiki/Pinhole_camera_model
- http://www.comp.nus.edu.sg/~cs4243/lecture/camera.pdf
- http://pille.iwr.uni-heidelberg.de/~kinect01/doc/reconstruction.html
- http://pille.iwr.uni-heidelberg.de/~kinect01/doc/classdescription.html#kinectcloud-section