Kinect 3D Reconstruction: Point Cloud Fusion

This section describes how to fuse the point clouds.

Loading the files and filtering the point clouds comes first; those steps are not covered in detail here.

The fusion process is illustrated in the figure below:

 

First, frame 0 is treated as the Target and frames 1 and 2 as Sources. Frames 1 and 2 are each rotated by 60 degrees (in opposite directions, about the common centroid of the three frames) and then registered against frame 0. During registration the cloud of frame 0 stays fixed; only the clouds of frames 1 and 2 are transformed. The registration uses ICP with normals (IterativeClosestPointWithNormals). The code below handles the first three frames; the last three frames are processed in exactly the same way, with frame 3 as the Target and frames 4 and 5 as Sources.

Eigen::Vector3f totalMass1 = Eigen::Vector3f::Zero();// centroid of the first three clouds combined
int totalNumberOfPoints1 = 0;// number of points in the first three clouds
for (int i = 0; i < 3; ++i)
{
	totalNumberOfPoints1 += data[i].cloudWithNormal->size();
	totalMass1 += data[i].mass * data[i].cloudWithNormal->size();
}
totalMass1 /= totalNumberOfPoints1;
 
Eigen::Vector3f upVector(0, 1.0, 0);
Eigen::Matrix4f rotationMatrix = rot_mat(totalMass1, upVector, M_PI / 3);
pcl::transformPointCloudWithNormals(*data[1].cloudWithNormal, *data[1].cloudWithNormal, rotationMatrix);
UpdatePCDMass(data[1]);
 
rotationMatrix = rot_mat(totalMass1, upVector, -M_PI / 3);
pcl::transformPointCloudWithNormals(*data[2].cloudWithNormal, *data[2].cloudWithNormal, rotationMatrix);
UpdatePCDMass(data[2]);
 
pcl::IterativeClosestPointWithNormals<pcl::PointNormal, pcl::PointNormal> icpWithNormals;
icpWithNormals.setMaxCorrespondenceDistance(0.5);
icpWithNormals.setMaximumIterations(100);
icpWithNormals.setTransformationEpsilon(1e-10);
icpWithNormals.setEuclideanFitnessEpsilon(0.01);
 
icpWithNormals.setInputCloud(data[1].cloudWithNormal);
icpWithNormals.setInputTarget(data[0].cloudWithNormal);
icpWithNormals.align(*data[1].cloudWithNormal);
 
icpWithNormals.setInputCloud(data[2].cloudWithNormal);
icpWithNormals.setInputTarget(data[0].cloudWithNormal);
icpWithNormals.align(*data[2].cloudWithNormal);
 
pcl::PointCloud<pcl::PointNormal>::Ptr Front(new pcl::PointCloud<pcl::PointNormal>);
 
*Front += *data[0].cloudWithNormal;
*Front += *data[1].cloudWithNormal;
*Front += *data[2].cloudWithNormal;
UpdatePCDMass(data[1]);
UpdatePCDMass(data[2]);
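
A side note on rot_mat (its definition appears in the complete code at the end): it builds the rotation about an axis through a point from the closed-form matrix. The same transform can also be composed from Eigen primitives; below is a minimal sketch, where buildRotationAboutPoint is my own helper name and not part of the original code:

#include <Eigen/Dense>

// Rotation by 'angle' about 'axis', with the axis passing through 'point':
// bring the point to the origin, rotate, then move it back.
Eigen::Matrix4f buildRotationAboutPoint(const Eigen::Vector3f& point,
                                        const Eigen::Vector3f& axis,
                                        float angle)
{
	Eigen::Affine3f t = Eigen::Affine3f::Identity();
	t.translate(point);                                     // outermost: move back to the original position
	t.rotate(Eigen::AngleAxisf(angle, axis.normalized()));  // rotate about the (normalized) axis
	t.translate(-point);                                    // innermost: shift the rotation point to the origin
	return t.matrix();
}

// Usage (should produce the same matrix as rot_mat(totalMass1, upVector, M_PI / 3)):
// Eigen::Matrix4f rotationMatrix = buildRotationAboutPoint(totalMass1, upVector, M_PI / 3);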

 

After the above processing we obtain two clouds, Front and Back, representing the front and the back of the scene. The next step is to merge them.

However, registering them directly is very difficult: the corresponding points that can be found between them are few and unreliable. Instead, we register the box underneath the cow between the front and back views.

We treat the Front cloud as the Target and the Back cloud as the Source. First, rotate the Back cloud by 180 degrees about its centroid:

Eigen::Vector3f mass = Eigen::Vector3f::Zero();
for (int i = 0; i < Back->points.size(); ++i){
	mass(0) += Back->points[i].x;
	mass(1) += Back->points[i].y;
	mass(2) += Back->points[i].z;
}
mass /= Back->points.size();
rotationMatrix = rot_mat(mass, upVector, M_PI);
pcl::transformPointCloudWithNormals(*Back, *Back, rotationMatrix);
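
For reference, the centroid loop above could also be replaced by PCL's pcl::compute3DCentroid; a minimal sketch, assuming the pcl/common/centroid.h header is available:

#include <pcl/common/centroid.h>  // pcl::compute3DCentroid

// Centroid of the Back cloud, equivalent to the manual accumulation above.
Eigen::Vector4f centroid;                   // homogeneous centroid (x, y, z, 1)
pcl::compute3DCentroid(*Back, centroid);
Eigen::Vector3f mass = centroid.head<3>();  // same value as the loop computes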

Next we use the AABB (Axis-Aligned Bounding Box) of each cloud for an initial alignment, moving the Back cloud toward the Front cloud:

AABB BackAABB = computerAABB(Back);
AABB FrontAABB = computerAABB(Front);
Eigen::Vector3f diff = FrontAABB.center - BackAABB.center;
Eigen::Matrix4f translationMatrix = Eigen::Matrix4f::Identity();
translationMatrix(0, 3) = diff(0);
translationMatrix(1, 3) = diff(1);
translationMatrix(2, 3) = FrontAABB.max(2) - BackAABB.min(2) - (FrontAABB.max(2) - FrontAABB.min(2))*0.8;
pcl::transformPointCloudWithNormals(*Back, *Back, translationMatrix);

In the second-to-last line above, the value should in theory be diff(2). With that assignment, however, the front/back registration comes out offset and not very accurate; the reason for the value actually used is explained further below.
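
A small aside before moving on: computerAABB (defined in the complete code below) scans the points by hand; PCL's pcl::getMinMax3D could compute the same bounds. A minimal sketch, assuming the pcl/common/common.h header:

#include <pcl/common/common.h>  // pcl::getMinMax3D

// Axis-aligned bounds of the Back cloud, equivalent to computerAABB(Back).
pcl::PointNormal minPt, maxPt;
pcl::getMinMax3D(*Back, minPt, maxPt);
Eigen::Vector3f backMin = minPt.getVector3fMap();
Eigen::Vector3f backMax = maxPt.getVector3fMap();
Eigen::Vector3f backCenter = 0.5f * (backMin + backMax);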

Next comes the front/back registration itself. The strategy is to register the box. Whether a point belongs to the box can be decided using the yMax2 value obtained in the previous section (stored as yMax in the code below): points whose Y value is below this threshold belong to the box. After the initial AABB alignment, the left and right faces of the box are registered first. The idea is simple: in each iteration, extract from the Front and Back clouds those box points whose normals face left or right. Define the left- and right-facing reference normals as plane_left(-1, 0, 0) and plane_right(1, 0, 0), and set a threshold: a point is kept if the angle between its normal and either reference normal is less than 10 degrees. Registration is then again driven by the normals; the code is as follows:

const float yMax = -0.218759;
 
Eigen::Vector3f plane_left(-1, 0, 0);
Eigen::Vector3f plane_right(1, 0, 0);
float cos_angle = cos(M_PI * 10 / 180);// cosine of the 10-degree angle threshold
 
cout << "左右面配准" << endl;
int iteration = 100;
for (int iter = 0; iter < iteration; ++iter)
{
	pcl::IndicesPtr source_indices(new std::vector<int>());
	for (int i = 0; i < Back->points.size(); ++i){
		if (Back->points[i].y>yMax)// Y above yMax means the point is not part of the box (same check below)
			continue;
		Eigen::Vector3f n = Back->points[i].getNormalVector3fMap();
		n.normalize();
		if (n.transpose() * plane_left > cos_angle){
			source_indices->push_back(i);
			continue;
		}
		if (n.transpose() * plane_right > cos_angle){
			source_indices->push_back(i);
		}
	}
 
	pcl::IndicesPtr target_indices(new std::vector<int>());
	for (int i = 0; i < Front->points.size(); ++i){
		if (Front->points[i].y>yMax)
			continue;
		Eigen::Vector3f n = Front->points[i].getNormalVector3fMap();
		n.normalize();
		if (n.transpose() * plane_left > cos_angle){
			target_indices->push_back(i);
			continue;
		}
		if (n.transpose() * plane_right > cos_angle){
			target_indices->push_back(i);
		}
	}
 
	pcl::registration::CorrespondenceEstimationNormalShooting<pcl::PointNormal, pcl::PointNormal, pcl::PointNormal> correst;
	correst.setInputCloud(Back);
	correst.setSourceNormals(Back);
	correst.setInputTarget(Front);
	correst.setIndicesSource(source_indices);
	correst.setIndicesTarget(target_indices);
	correst.setKSearch(15);
	pcl::CorrespondencesPtr all_correspondences(new pcl::Correspondences);
	correst.determineReciprocalCorrespondences(*all_correspondences);
 
	pcl::registration::CorrespondenceRejectorSurfaceNormal rejector;
	rejector.initializeDataContainer<pcl::PointNormal, pcl::PointNormal>();
	rejector.setInputSource<pcl::PointNormal>(Back);
	rejector.setInputTarget<pcl::PointNormal>(Front);
	rejector.setInputNormals<pcl::PointNormal, pcl::PointNormal>(Back);
	rejector.setTargetNormals<pcl::PointNormal, pcl::PointNormal>(Front);
	rejector.setInputCorrespondences(all_correspondences);
	rejector.setThreshold(M_PI * 10 / 180);
	pcl::CorrespondencesPtr correspondences_after_rejector(new pcl::Correspondences);
	rejector.getCorrespondences(*correspondences_after_rejector);
 
	Eigen::Matrix4f transformation;
 
	pcl::registration::TransformationEstimationLM<pcl::PointNormal, pcl::PointNormal> trans_est_lm;
	trans_est_lm.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
	pcl::transformPointCloudWithNormals(*Back, *Back, transformation);
	cout << "Iteration: " << iter << endl;
	if (transformation.isIdentity())
		break;
}
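
One note on the stopping test: transformation.isIdentity() uses Eigen's default precision, so the loop stops once the estimated update is numerically the identity. If you want explicit control over that tolerance, the check could be factored out, for example as follows (hasConverged and the epsilon value are my own, for illustration):

#include <Eigen/Dense>

// True when the incremental update is effectively the identity,
// with an explicit tolerance instead of Eigen's default precision.
bool hasConverged(const Eigen::Matrix4f& transformation, float eps = 1e-6f)
{
	return (transformation - Eigen::Matrix4f::Identity()).norm() < eps;
}

// In the loop above one could then write:
//     if (hasConverged(transformation)) break;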

After the left/right registration, the top faces of the front and back box clouds are usually still not aligned, so we register the top face as well; the method is much the same as for the left/right faces:

cout << "上面配准" << endl;
for (int iter = 0; iter < iteration; ++iter)
{
	pcl::IndicesPtr source_indices(new std::vector<int>());
	for (int i = 0; i < Back->points.size(); ++i){
		if (Back->points[i].y>yMax)
			continue;
		Eigen::Vector3f n = Back->points[i].getNormalVector3fMap();
		n.normalize();
		if (n.transpose() * upVector > cos_angle){
			source_indices->push_back(i);
			continue;
		}
	}
 
	pcl::IndicesPtr target_indices(new std::vector<int>());
	for (int i = 0; i < Front->points.size(); ++i){
		if (Front->points[i].y>yMax)
			continue;
		Eigen::Vector3f n = Front->points[i].getNormalVector3fMap();
		n.normalize();
		if (n.transpose() * upVector > cos_angle){
			target_indices->push_back(i);
			continue;
		}
	}
 
	pcl::registration::CorrespondenceEstimationNormalShooting<pcl::PointNormal, pcl::PointNormal, pcl::PointNormal> correst;
	correst.setInputCloud(Back);
	correst.setSourceNormals(Back);
	correst.setInputTarget(Front);
	correst.setIndicesSource(source_indices);
	correst.setIndicesTarget(target_indices);
	correst.setKSearch(15);
	pcl::CorrespondencesPtr all_correspondences(new pcl::Correspondences);
	correst.determineReciprocalCorrespondences(*all_correspondences);
 
	pcl::registration::CorrespondenceRejectorSurfaceNormal rejector;
	rejector.initializeDataContainer<pcl::PointNormal, pcl::PointNormal>();
	rejector.setInputSource<pcl::PointNormal>(Back);
	rejector.setInputTarget<pcl::PointNormal>(Front);
	rejector.setInputNormals<pcl::PointNormal, pcl::PointNormal>(Back);
	rejector.setTargetNormals<pcl::PointNormal, pcl::PointNormal>(Front);
	rejector.setInputCorrespondences(all_correspondences);
	rejector.setThreshold(M_PI * 10 / 180);
	pcl::CorrespondencesPtr correspondences_after_rejector(new pcl::Correspondences);
	rejector.getCorrespondences(*correspondences_after_rejector);
 
	Eigen::Matrix4f transformation;
 
	pcl::registration::TransformationEstimationSVD<pcl::PointNormal, pcl::PointNormal> trans_est_svd;
	trans_est_svd.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
	pcl::transformPointCloudWithNormals(*Back, *Back, transformation);
	cout << "Iteration: " << iter << endl;
	if (transformation.isIdentity())
		break;
}

This completes the registration. Now let us return to the line of code that was left unexplained earlier:

translationMatrix(2, 3) = FrontAABB.max(2) - BackAABB.min(2) - (FrontAABB.max(2) - FrontAABB.min(2))*0.8;

In theory the value should be diff(2), but in practice that turned out to be inaccurate. The original idea was as follows:

After the AABB alignment the result was expected to look like the figure above. Experiments showed, however, that when the two clouds were aligned exactly, the reconstructed model came out inaccurate. After several trials it proved better not to align them exactly but to offset them slightly, like this:

 

How much offset is needed? There is no precise way to estimate it, so the working assumption is that after registration the overlapping part should be roughly 80%-90% of the original depth (a sketch is shown below):

 

 

That is the explanation for this line of code. The overlap ratio is roughly 80%-90%; adjust it according to your own experimental results.
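
To make that ratio easy to tune, the Z offset can be written with the overlap fraction as a parameter. Below is a minimal sketch of the same computation, using the AABB struct from the complete code below; initialZOffset and overlapRatio are my own names, not from the original code:

// Z offset for the initial AABB alignment; overlapRatio is the 0.8 factor
// discussed above (values around 0.8-0.9 worked in these experiments).
float initialZOffset(const AABB& front, const AABB& back, float overlapRatio)
{
	float frontDepth = front.max(2) - front.min(2);
	return front.max(2) - back.min(2) - frontDepth * overlapRatio;
}

// Usage, equivalent to the line discussed above:
//     translationMatrix(2, 3) = initialZOffset(FrontAABB, BackAABB, 0.8f);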

Below are the registered point cloud and the 3D model reconstructed from it:

 

 

For the surface reconstruction I did not use the reconstruction methods built into PCL. Instead, the point cloud is saved to a PLY file and reconstructed with the Poisson reconstruction tool from this site (http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version8.0/). Download the EXE from the site and invoke it from the command line; the site provides usage examples, and mine is given below:

The point cloud is first saved as cow.ply. For a reasonably complete 3D point cloud, I run the following command (put the PLY file in the same folder as the EXE and run it from CMD) to perform Poisson reconstruction on the cow cloud:

PoissonRecon.x64.exe --in cow.ply --out cow.unscreened.ply --depth 10 --pointWeight 0
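
If you would rather launch the reconstruction from the program instead of typing the command by hand, the same command line can be passed to std::system; a minimal sketch, assuming PoissonRecon.x64.exe and cow.ply sit in the working directory:

#include <cstdlib>   // std::system
#include <iostream>

// Run the external Poisson reconstruction with the same arguments as above.
int runPoissonRecon()
{
	int ret = std::system("PoissonRecon.x64.exe --in cow.ply --out cow.unscreened.ply --depth 10 --pointWeight 0");
	if (ret != 0)
		std::cerr << "PoissonRecon failed with exit code " << ret << std::endl;
	return ret;
}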

Looking at the reconstructed model, the details are still lacking, for example around the cow's head and tail. There are two reasons. First, only one frame was captured per viewpoint; recovering more detail would require several frames per viewpoint, or more viewpoints. Second, the depth data from the Kinect is very noisy, so applying a denoising step to the point clouds would give better results; I did not do much denoising here. Feel free to experiment with your own ideas.
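
As a starting point for the denoising mentioned above, PCL's StatisticalOutlierRemoval (its header is already included in the complete code below) could be applied to each frame before registration. A minimal sketch with illustrative parameter values that would need tuning for real Kinect data:

#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

// Remove statistical outliers from a cloud with normals.
void denoise(pcl::PointCloud<pcl::PointNormal>::Ptr cloud)
{
	pcl::StatisticalOutlierRemoval<pcl::PointNormal> sor;
	sor.setInputCloud(cloud);
	sor.setMeanK(50);             // neighbors used to estimate the mean distance
	sor.setStddevMulThresh(1.0);  // points farther than 1 std dev from the mean are removed
	sor.filter(*cloud);
}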

Finally, here is the complete code:

#define vtkRenderingCore_AUTOINIT 4(vtkInteractionStyle,vtkRenderingFreeType,vtkRenderingFreeTypeOpenGL,vtkRenderingOpenGL) 
#define vtkRenderingVolume_AUTOINIT 1(vtkRenderingVolumeOpenGL)
#include <string>
#include <kinect.h>
#include <iostream>
#include <opencv2/core/core.hpp>  
#include <opencv2/highgui/highgui.hpp>  
#include <pcl/io/pcd_io.h> // PCD file input/output
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h> // PCL point type definitions
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/filters/radius_outlier_removal.h>  
#include <pcl/filters/conditional_removal.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/registration/icp.h> // ICP (iterative closest point) registration
#include <pcl/registration/icp_nl.h>
#include <pcl/registration/correspondence_rejection_surface_normal.h>
#include <pcl/registration/correspondence_rejection_one_to_one.h>
#include <pcl/registration/correspondence_estimation_normal_shooting.h>
#include <pcl/console/parse.h>  // PCL console parsing
#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>  
// kd-tree
#include <pcl/kdtree/kdtree_flann.h>
// feature extraction (normals)
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/normal_3d.h>
// surface reconstruction
#include <pcl/surface/gp3.h>
#include <pcl/surface/poisson.h>
#include <boost/thread/thread.hpp>
#include <Eigen/Dense>
 
 
using namespace std;
 
const float yMax = -0.218759;
 
string num2str(int i)
{
	stringstream ss;
	ss << i;
	return ss.str();
}
 
// Returns the rotation matrix for rotating by angle t about the axis 'vector' passing through 'point'
Eigen::Matrix4f rot_mat(const Eigen::Vector3f& point, const Eigen::Vector3f& vector, const float t)
{
	float u = vector(0);
	float v = vector(1);
	float w = vector(2);
	float a = point(0);
	float b = point(1);
	float c = point(2);
 
	Eigen::Matrix4f matrix;
	matrix << u*u + (v*v + w*w)*cos(t), u*v*(1 - cos(t)) - w*sin(t), u*w*(1 - cos(t)) + v*sin(t), (a*(v*v + w*w) - u*(b*v + c*w))*(1 - cos(t)) + (b*w - c*v)*sin(t),
		u*v*(1 - cos(t)) + w*sin(t), v*v + (u*u + w*w)*cos(t), v*w*(1 - cos(t)) - u*sin(t), (b*(u*u + w*w) - v*(a*u + c*w))*(1 - cos(t)) + (c*u - a*w)*sin(t),
		u*w*(1 - cos(t)) - v*sin(t), v*w*(1 - cos(t)) + u*sin(t), w*w + (u*u + v*v)*cos(t), (c*(u*u + v*v) - w*(a*u + b*v))*(1 - cos(t)) + (a*v - b*u)*sin(t),
		0, 0, 0, 1;
	return matrix;
}
 
// struct used to hold one input point cloud
struct PCD
{
	std::string f_name; // file name
	pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormal;// cloud with estimated normals
	Eigen::Vector3f mass;// centroid of the cloud
	// constructor
	PCD() : cloudWithNormal(new pcl::PointCloud<pcl::PointNormal>), mass(Eigen::Vector3f::Zero()){}; // initialize members
};
 
void UpdatePCDMass(PCD& m)
{
	m.mass.setZero(); // Vector3f::Zero() is a static factory and would not reset the member
	for (int j = 0; j < m.cloudWithNormal->size(); ++j)
	{
		m.mass(0) += m.cloudWithNormal->points[j].x;
		m.mass(1) += m.cloudWithNormal->points[j].y;
		m.mass(2) += m.cloudWithNormal->points[j].z;
	}
	m.mass /= m.cloudWithNormal->size();
}
 
struct AABB
{
	Eigen::Vector3f center;
	Eigen::Vector3f min;
	Eigen::Vector3f max;
};
 
AABB computerAABB(pcl::PointCloud<pcl::PointNormal>::Ptr p)
{
	AABB aabb;
	aabb.min(0) = +10000;
	aabb.min(1) = +10000;
	aabb.min(2) = +10000;
	aabb.max(0) = -10000;
	aabb.max(1) = -10000;
	aabb.max(2) = -10000;
	for (int i = 0; i < p->size(); ++i)
	{
		if (p->points[i].x < aabb.min(0))
			aabb.min(0) = p->points[i].x;
		if (p->points[i].y < aabb.min(1))
			aabb.min(1) = p->points[i].y;
		if (p->points[i].z < aabb.min(2))
			aabb.min(2) = p->points[i].z;
 
		if (p->points[i].x > aabb.max(0))
			aabb.max(0) = p->points[i].x;
		if (p->points[i].y > aabb.max(1))
			aabb.max(1) = p->points[i].y;
		if (p->points[i].z > aabb.max(2))
			aabb.max(2) = p->points[i].z;
	}
	aabb.center = 0.5f*(aabb.max + aabb.min);
	return aabb;
}
 
int main()
{
	const int numberOfViews = 6;// number of point clouds (views)
	std::vector<PCD, Eigen::aligned_allocator<PCD> > data; // point cloud data
	std::string prefix("cow");
	std::string extension("_withNormal.pcd"); // file name suffix
	// read the PCD files by iterating over the file names
	for (int i = 0; i < numberOfViews; i++) // loop over all views
	{
		std::string fname = prefix + num2str(i) + extension;
		// read the cloud and store it in data
		PCD m;
		m.f_name = fname;
		if (pcl::io::loadPCDFile(fname, *m.cloudWithNormal) == -1) // returns -1 if the file cannot be read
		{
			cout << "Couldn't read file " + fname + "." << endl; // report the error and abort
			return (-1);
		}
		for (int j = 0; j < m.cloudWithNormal->size(); ++j)
		{
			m.mass(0) += m.cloudWithNormal->points[j].x;
			m.mass(1) += m.cloudWithNormal->points[j].y;
			m.mass(2) += m.cloudWithNormal->points[j].z;
		}
		m.mass /= m.cloudWithNormal->size();
		data.push_back(m);
	}
 
	//----------------------- remove outliers ------------------------
	pcl::RadiusOutlierRemoval<pcl::PointNormal> outrem;
	outrem.setRadiusSearch(0.01);
	outrem.setMinNeighborsInRadius(15);
	for (int i = 0; i < numberOfViews; ++i)
	{
		outrem.setInputCloud(data[i].cloudWithNormal);
		outrem.filter(*data[i].cloudWithNormal);
		UpdatePCDMass(data[i]);
	}
	//--------------------------------------------------------------
 
	Eigen::Vector3f totalMass1 = Eigen::Vector3f::Zero();// centroid of the first three clouds combined
	int totalNumberOfPoints1 = 0;// number of points in the first three clouds
	for (int i = 0; i < 3; ++i)
	{
		totalNumberOfPoints1 += data[i].cloudWithNormal->size();
		totalMass1 += data[i].mass * data[i].cloudWithNormal->size();
	}
	totalMass1 /= totalNumberOfPoints1;
 
	Eigen::Vector3f totalMass2 = Eigen::Vector3f::Zero();// centroid of the last three clouds combined
	int totalNumberOfPoints2 = 0;// number of points in the last three clouds
	for (int i = 3; i < 6; ++i)
	{
		totalNumberOfPoints2 += data[i].cloudWithNormal->size();
		totalMass2 += data[i].mass * data[i].cloudWithNormal->size();
	}
	totalMass2 /= totalNumberOfPoints2;
 
	Eigen::Vector3f upVector(0, 1.0, 0);
	Eigen::Matrix4f rotationMatrix = rot_mat(totalMass1, upVector, M_PI / 3);
	pcl::transformPointCloudWithNormals(*data[1].cloudWithNormal, *data[1].cloudWithNormal, rotationMatrix);
	UpdatePCDMass(data[1]);
 
	rotationMatrix = rot_mat(totalMass1, upVector, -M_PI / 3);
	pcl::transformPointCloudWithNormals(*data[2].cloudWithNormal, *data[2].cloudWithNormal, rotationMatrix);
	UpdatePCDMass(data[2]);
 
	rotationMatrix = rot_mat(totalMass2, upVector, M_PI / 3);
	pcl::transformPointCloudWithNormals(*data[4].cloudWithNormal, *data[4].cloudWithNormal, rotationMatrix);
	UpdatePCDMass(data[4]);
 
	rotationMatrix = rot_mat(totalMass2, upVector, -M_PI / 3);
	pcl::transformPointCloudWithNormals(*data[5].cloudWithNormal, *data[5].cloudWithNormal, rotationMatrix);
	UpdatePCDMass(data[5]);
 
 
	pcl::IterativeClosestPointWithNormals<pcl::PointNormal, pcl::PointNormal> icpWithNormals;
	icpWithNormals.setMaxCorrespondenceDistance(0.5);
	icpWithNormals.setMaximumIterations(100);
	icpWithNormals.setTransformationEpsilon(1e-10);
	icpWithNormals.setEuclideanFitnessEpsilon(0.01);
 
	icpWithNormals.setInputCloud(data[1].cloudWithNormal);
	icpWithNormals.setInputTarget(data[0].cloudWithNormal);
	icpWithNormals.align(*data[1].cloudWithNormal);
 
	icpWithNormals.setInputCloud(data[2].cloudWithNormal);
	icpWithNormals.setInputTarget(data[0].cloudWithNormal);
	icpWithNormals.align(*data[2].cloudWithNormal);
 
	icpWithNormals.setInputCloud(data[4].cloudWithNormal);
	icpWithNormals.setInputTarget(data[3].cloudWithNormal);
	icpWithNormals.align(*data[4].cloudWithNormal);
 
	icpWithNormals.setInputCloud(data[5].cloudWithNormal);
	icpWithNormals.setInputTarget(data[3].cloudWithNormal);
	icpWithNormals.align(*data[5].cloudWithNormal);
 
 
	pcl::PointCloud<pcl::PointNormal>::Ptr Front(new pcl::PointCloud<pcl::PointNormal>);
	pcl::PointCloud<pcl::PointNormal>::Ptr Back(new pcl::PointCloud<pcl::PointNormal>);
	// ----------------------------------------------------
	*Front += *data[0].cloudWithNormal;
	*Front += *data[1].cloudWithNormal;
	*Front += *data[2].cloudWithNormal;
	UpdatePCDMass(data[1]);
	UpdatePCDMass(data[2]);
 
	*Back += *data[3].cloudWithNormal;
	*Back += *data[4].cloudWithNormal;
	*Back += *data[5].cloudWithNormal;
	UpdatePCDMass(data[4]);
	UpdatePCDMass(data[5]);
	// ----------------------------------------------------
 
	//--------------- front/back registration ------------------------------
	Eigen::Vector3f mass = Eigen::Vector3f::Zero();
	for (int i = 0; i < Back->points.size(); ++i){
		mass(0) += Back->points[i].x;
		mass(1) += Back->points[i].y;
		mass(2) += Back->points[i].z;
	}
	mass /= Back->points.size();
	rotationMatrix = rot_mat(mass, upVector, M_PI);
	pcl::transformPointCloudWithNormals(*Back, *Back, rotationMatrix);
 
	Eigen::Vector3f plane_left(-1, 0, 0);
	Eigen::Vector3f plane_right(1, 0, 0);
	float cos_angle = cos(M_PI * 10 / 180);
 
	AABB BackAABB = computerAABB(Back);
	cout << "BackAABB's center:\n" << BackAABB.center << endl;
	cout << "BackAABB's Z Length:\n" << BackAABB.max(2)-BackAABB.min(2) << endl;
	AABB FrontAABB = computerAABB(Front);
	cout << "FrontAABB's center:\n" << FrontAABB.center << endl;
	cout << "FrontAABB's Z Length:\n" << FrontAABB.max(2) - FrontAABB.min(2) << endl;
	Eigen::Vector3f diff = FrontAABB.center - BackAABB.center;
	Eigen::Matrix4f translationMatrix = Eigen::Matrix4f::Identity();
	translationMatrix(0, 3) = diff(0);
	translationMatrix(1, 3) = diff(1);
	translationMatrix(2, 3) = FrontAABB.max(2) - BackAABB.min(2) - (FrontAABB.max(2) - FrontAABB.min(2))*0.8;
	pcl::transformPointCloudWithNormals(*Back, *Back, translationMatrix);
 
	cout << "左右面配准" << endl;
	int iteration = 100;
	for (int iter = 0; iter < iteration; ++iter)
	{
		pcl::IndicesPtr source_indices(new std::vector<int>());
		for (int i = 0; i < Back->points.size(); ++i){
			if (Back->points[i].y>yMax)
				continue;
			Eigen::Vector3f n = Back->points[i].getNormalVector3fMap();
			n.normalize();
			if (n.transpose() * plane_left > cos_angle){
				source_indices->push_back(i);
				continue;
			}
			if (n.transpose() * plane_right > cos_angle){
				source_indices->push_back(i);
			}
		}
		//cout << "Source Indices: " << source_indices->size() << endl;
 
		pcl::IndicesPtr target_indices(new std::vector<int>());
		for (int i = 0; i < Front->points.size(); ++i){
			if (Front->points[i].y>yMax)
				continue;
			Eigen::Vector3f n = Front->points[i].getNormalVector3fMap();
			n.normalize();
			if (n.transpose() * plane_left > cos_angle){
				target_indices->push_back(i);
				continue;
			}
			if (n.transpose() * plane_right > cos_angle){
				target_indices->push_back(i);
			}
		}
		//cout << "Target Indices: " << target_indices->size() << endl;
 
		pcl::registration::CorrespondenceEstimationNormalShooting<pcl::PointNormal, pcl::PointNormal, pcl::PointNormal> correst;
		correst.setInputCloud(Back);
		correst.setSourceNormals(Back);
		correst.setInputTarget(Front);
		correst.setIndicesSource(source_indices);
		correst.setIndicesTarget(target_indices);
		correst.setKSearch(15);
		pcl::CorrespondencesPtr all_correspondences(new pcl::Correspondences);
		correst.determineReciprocalCorrespondences(*all_correspondences);
		//cout << "Correspondences (Before) : " << all_correspondences->size() << "\n";
 
		pcl::registration::CorrespondenceRejectorSurfaceNormal rejector;
		rejector.initializeDataContainer<pcl::PointNormal, pcl::PointNormal>();
		rejector.setInputSource<pcl::PointNormal>(Back);
		rejector.setInputTarget<pcl::PointNormal>(Front);
		rejector.setInputNormals<pcl::PointNormal, pcl::PointNormal>(Back);
		rejector.setTargetNormals<pcl::PointNormal, pcl::PointNormal>(Front);
		rejector.setInputCorrespondences(all_correspondences);
		rejector.setThreshold(M_PI * 10 / 180);
		pcl::CorrespondencesPtr correspondences_after_rejector(new pcl::Correspondences);
		rejector.getCorrespondences(*correspondences_after_rejector);
		//cout << "Correspondences (After) : " << correspondences_after_rejector->size() << "\n";
 
 
		Eigen::Matrix4f transformation;
 
		//pcl::registration::TransformationEstimationSVD<pcl::PointNormal, pcl::PointNormal> trans_est_svd;
		//trans_est_svd.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		pcl::registration::TransformationEstimationLM<pcl::PointNormal, pcl::PointNormal> trans_est_lm;
		trans_est_lm.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		//pcl::registration::TransformationEstimationPointToPlaneLLS<pcl::PointNormal, pcl::PointNormal> trans_est_PointToPlane;
		//trans_est_PointToPlane.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		pcl::transformPointCloudWithNormals(*Back, *Back, transformation);
		cout << "Iteration: " << iter << endl;
		//cout << "Matrix " << iter << ":\n" << transformation << endl;
		if (transformation.isIdentity())
			break;
	}
 
	cout << "上下面配准" << endl;
	for (int iter = 0; iter < iteration; ++iter)
	{
		pcl::IndicesPtr source_indices(new std::vector<int>());
		for (int i = 0; i < Back->points.size(); ++i){
			if (Back->points[i].y>yMax)
				continue;
			Eigen::Vector3f n = Back->points[i].getNormalVector3fMap();
			n.normalize();
			if (n.transpose() * upVector > cos_angle){
				source_indices->push_back(i);
				continue;
			}
		}
		//cout << "Source Indices: " << source_indices->size() << endl;
 
		pcl::IndicesPtr target_indices(new std::vector<int>());
		for (int i = 0; i < Front->points.size(); ++i){
			if (Front->points[i].y>yMax)
				continue;
			Eigen::Vector3f n = Front->points[i].getNormalVector3fMap();
			n.normalize();
			if (n.transpose() * upVector > cos_angle){
				target_indices->push_back(i);
				continue;
			}
		}
		//cout << "Target Indices: " << target_indices->size() << endl;
 
		pcl::registration::CorrespondenceEstimationNormalShooting<pcl::PointNormal, pcl::PointNormal, pcl::PointNormal> correst;
		correst.setInputCloud(Back);
		correst.setSourceNormals(Back);
		correst.setInputTarget(Front);
		correst.setIndicesSource(source_indices);
		correst.setIndicesTarget(target_indices);
		correst.setKSearch(15);
		pcl::CorrespondencesPtr all_correspondences(new pcl::Correspondences);
		correst.determineReciprocalCorrespondences(*all_correspondences);
		//cout << "Correspondences (Before) : " << all_correspondences->size() << "\n";
 
		pcl::registration::CorrespondenceRejectorSurfaceNormal rejector;
		rejector.initializeDataContainer<pcl::PointNormal, pcl::PointNormal>();
		rejector.setInputSource<pcl::PointNormal>(Back);
		rejector.setInputTarget<pcl::PointNormal>(Front);
		rejector.setInputNormals<pcl::PointNormal, pcl::PointNormal>(Back);
		rejector.setTargetNormals<pcl::PointNormal, pcl::PointNormal>(Front);
		rejector.setInputCorrespondences(all_correspondences);
		rejector.setThreshold(M_PI * 10 / 180);
		pcl::CorrespondencesPtr correspondences_after_rejector(new pcl::Correspondences);
		rejector.getCorrespondences(*correspondences_after_rejector);
		//cout << "Correspondences (After) : " << correspondences_after_rejector->size() << "\n";
 
		Eigen::Matrix4f transformation;
 
		pcl::registration::TransformationEstimationSVD<pcl::PointNormal, pcl::PointNormal> trans_est_svd;
		trans_est_svd.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		//pcl::registration::TransformationEstimationLM<pcl::PointNormal, pcl::PointNormal> trans_est_lm;
		//trans_est_lm.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		//pcl::registration::TransformationEstimationPointToPlaneLLS<pcl::PointNormal, pcl::PointNormal> trans_est_PointToPlane;
		//trans_est_PointToPlane.estimateRigidTransformation(*Back, *Front, *correspondences_after_rejector, transformation);
 
		pcl::transformPointCloudWithNormals(*Back, *Back, transformation);
		cout << "Iteration: " << iter << endl;
		//cout << "Matrix " << iter << ":\n" << transformation << endl;
		if (transformation.isIdentity())
			break;
	}
 
	BackAABB = computerAABB(Back);
	cout << "BackAABB's center:\n" << BackAABB.center << endl;
	cout << "BackAABB's Z Length:\n" << BackAABB.max(2) - BackAABB.min(2) << endl;
	FrontAABB = computerAABB(Front);
	cout << "FrontAABB's center:\n" << FrontAABB.center << endl;
	cout << "FrontAABB's Z Length:\n" << FrontAABB.max(2) - FrontAABB.min(2) << endl;
 
	cout << "Z Length:\n" << FrontAABB.max(2) - BackAABB.min(2) << endl;
 
	pcl::PointCloud<pcl::PointNormal>::Ptr cloud(new pcl::PointCloud<pcl::PointNormal>);
	*cloud = *Back;
	*cloud += *Front;
 
	// ------------------- Poisson reconstruction (kept commented out; done externally instead) ----------------------
	// create the search tree
	//pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
	//tree2->setInputCloud(cloud);
	// create the Poisson object and set its parameters
	//pcl::Poisson<pcl::PointNormal> pn;
	//pn.setConfidence(false); // use the normal magnitudes as confidence; if false, all normals are normalized
	//pn.setDegree(2); // degree in [1,5]; larger gives finer results but takes longer
	//pn.setDepth(8); // maximum octree depth (2^d x 2^d x 2^d voxels); the octree adapts to sampling density, so this is only an upper bound
	//pn.setIsoDivide(8); // depth used by the iso-surface extraction
	//pn.setManifold(false); // manifold flag: if true, polygon centroids are added when triangulating polygons
	//pn.setOutputPolygons(true); // output the raw polygon mesh instead of the triangulated marching-cubes result
	//pn.setSamplesPerNode(3.0); // minimum number of samples per octree node; [1.0, 5.0] for clean data, [15, 20] for noisy data
	//pn.setScale(1.25); // ratio between the reconstruction cube diameter and the samples' bounding cube diameter
	//pn.setSolverDivide(8); // depth of the Gauss-Seidel solver used for the linear system
	//pn.setIndices();
	// set the search method and input cloud
	//pn.setSearchMethod(tree2);
	//pn.setInputCloud(cloud);
	// polygon mesh to hold the result
	//pcl::PolygonMesh mesh;
	// run the reconstruction
	//pn.performReconstruction(mesh);
	// ---------------------------------------------------
 
	pcl::io::savePLYFile("cow.ply", *cloud, true);
 
	// display the results
	boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(new pcl::visualization::PCLVisualizer("3D viewer"));
	int v1; // two viewports: v1 shows Front/Back in color, v2 shows them in white (and the mesh, if enabled)
	int v2;
	viewer->createViewPort(0.0, 0.0, 0.5, 1.0, v1);  // viewport bounds: x_min, y_min, x_max, y_max
	viewer->createViewPort(0.5, 0.0, 1.0, 1.0, v2);
	viewer->setBackgroundColor(0, 0, 0);
	//viewer->addPolygonMesh(mesh, "mesh2", v2);
 
	pcl::visualization::PointCloudColorHandlerCustom<pcl::PointNormal> cloud_color4(Front, 0, 255, 0);
	viewer->addPointCloud(Front, cloud_color4, "cloud_color4", v1);
	pcl::visualization::PointCloudColorHandlerCustom<pcl::PointNormal> cloud_color5(Back, 255, 0, 0);
	viewer->addPointCloud(Back, cloud_color5, "cloud_color5", v1);
	pcl::visualization::PointCloudColorHandlerCustom<pcl::PointNormal> cloud_color6(Front, 255, 255, 255);
	viewer->addPointCloud(Front, cloud_color6, "cloud_color6", v2);
	pcl::visualization::PointCloudColorHandlerCustom<pcl::PointNormal> cloud_color7(Back, 255, 255, 255);
	viewer->addPointCloud(Back, cloud_color7, "cloud_color7", v2);
 
	viewer->initCameraParameters();
	while (!viewer->wasStopped()){
		viewer->spinOnce();
	}
 
	std::system("pause");
	return 0;
}

————————————————
Copyright notice: this is an original article by CSDN blogger 「悄然的我-粤Y」, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/u010848251/article/details/71023549

 
