A Hands-On Guide to Embedding Point Cloud Algorithms in Qt with Visualization (Part 1): Extracting the Concave Hull Detection Algorithm from CloudCompare


        Emmm, let me start with the backstory. Not everyone can write good programs right off the bat; the author is still very much a rookie himself. But we all share the same drive to improve, which is why so many people end up reading this blog. Even though I keep sharing my updated source code in the pinned post, plenty of readers have told me that after downloading it they cannot make heads or tails of it. The reason is simple: there is too much of it, and it is too tangled.

        Many readers also look at these programs and feel they are hopelessly out of reach, wondering when they will ever be able to build something just like them. In my admittedly limited programming experience, the key is accumulation. Set obstacles for yourself, step out of your comfort zone, be diligent about solving the problems you run into, keep comparing and accumulating, and one day you too will be able to write software of your own.

        So this time I have decided to start a series like this, walking you through everything bit by bit: from implementing an algorithm to integrating it and visualizing it, including how to manage tree nodes, how to handle trigger events, and how to deal with the unexpected situations that come up. I am writing the code and the blog posts in parallel, so whatever problems come up are unknown to me as well; that way you get to see the complete process of building a software framework.

        Our goal is simple. This time, I will take a "concave hull detection for planar point clouds" algorithm extracted from the open-source software CloudCompare and integrate it, in its entirety, into my software framework. The previous software version should be NinjaScarlet_GLViewerFramework-3-5, i.e. the March 5th build. Since that version I have actually already added a plane-fitting algorithm and was just about ready to package and upload it. But then I thought of all the readers still wondering where to start, so I will hold off and upload everything together once this project has been fully added.

        Alright, a journey of a thousand miles begins with a single step.

        First, you have to successfully extract this algorithm from the CC source code. I have already done that for you, and I will paste the extracted algorithm here for you to look at. Its location in CC is shown below: concave hull fitting, which, as the name suggests, computes the outer boundary points of a roughly planar point cloud. Here is a preview of the final result:


It first asks you to set a parameter, the step size of the point cloud (i.e. the maximum edge length). This is the algorithm's only input parameter, and it should be neither too large nor too small. If it is too small, the concave hull becomes too dense and tends to intrude into the interior of the cloud; if it is too large, the detected hull is incomplete. We set it to 0.2 and get the following result:


The extracted result is passable... Now, our goal is to reproduce exactly the same result as CC!


        To find this algorithm you have to dig through the CC source code... emmm, since CC is heavily encapsulated and many places need to be modified, I will simply paste the extracted code here; I have tested it myself and the results are exactly right. One more note: before adding the algorithm to the big Qt GUI, we must first test it in a VS console application and only add it step by step once we are sure it works. Nothing is accomplished in one stroke.

        So we first test our concave hull detection function in the console. The main idea is clear: fit a plane to all the points in space, project every point onto that plane, detect the concave hull in 2D, and finally project the 2D hull back into 3D space. Below is the main driver function, with the necessary comments already written in. One more thing: I use the Eigen library, which is very easy to set up (there are plenty of tutorials online); types such as Vector3d come from Eigen.

Of course, once you understand the code, you are more than welcome to rewrite all of it as your own low-level code.

//try to find the 3D concave hull
void CC_Tool_Fit_3DPolygon()
{
	std::string CloudPath = std::string(".\\Data\\Pointcloud\\Scane059SegCloud.txt");
	ScarletCloudIO CloudReader(CloudPath);
	CloudReader.Read();
	CloudReader.PrintStationStatus();
	VertexPositionColor *Points = CloudReader.PtCloud().PointCloud;
	int PtNum = CloudReader.PtCloud().PointNum;
	Vector4d out_lsPlaneEquation;
	Vector3d Gravity;
	computeLeastSquareBestFittingPlane(Points, PtNum, out_lsPlaneEquation, Gravity);

	//these are the plane's four parameters, identical to the Norm attribute shown in CC
	cout << "out_lsPlaneEquation" << endl << out_lsPlaneEquation.transpose() << endl;

	//we project the input points on a plane
	//i.e. project all the 3D points onto the fitted plane to get 2D coordinates
	std::vector<Vector2d> points2D;
	Vector3d X, Y; //local base
	Vector3d m_center;
	projectPointsOn2DPlane(Points, PtNum, points2D, out_lsPlaneEquation, Gravity, m_center, X, Y, false);
	cout << "Points2D Size:" << points2D.size() << endl;
	cout << "Point Center:" << endl;//Center is always right
	cout << m_center.transpose() << endl;
	int PtSizeTemp = points2D.size();
	//compute the back-projection RMSE of the fitted plane (also identical to CC's, good news)
	double PlaneRMSE = computeCloud2PlaneDistanceRMS(Points, PtNum, out_lsPlaneEquation);
	cout << "PlaneRMSE: " << PlaneRMSE << endl;

	//try to get the points on the convex/concave hull to build the contour and the polygon
	//this step roughly breaks down into three sub-steps:
	cout << "Finding Concave Hull...";
	std::vector<IndexedVector2d> IndexedPoints2D;
	IndexedPoints2D.resize(points2D.size());
	for (int i = 0; i < points2D.size(); i++)//repack the points together with their indices so they can be looked up later
	{
		IndexedPoints2D[i].pt2d = points2D[i];
		IndexedPoints2D[i].index = i;
		//CloudOuter << points2D[i].x() << " " << points2D[i].y() << endl;
	}
	std::list<IndexedVector2d*> hullPoints;
	double m_maxEdgeLength = 0.2;
	//detect the concave hull, done by a single function call
	//try to get the points on the convex/concave hull to build the contour and the polygon
	if (!extractConcaveHull2D(IndexedPoints2D,
		hullPoints,
		m_maxEdgeLength*m_maxEdgeLength))
	{
		cout << ("[ccFacet::createInternalRepresentation] Failed to compute the convex hull of the input points!");
	}
	cout << "Done!";
	unsigned hullPtsCount = static_cast<unsigned>(hullPoints.size());
	//ofstream CloudOuter(".\\Output\\hullPoints.txt");//optionally write the hull points to a file
	//for (std::list<IndexedVector2d*>::iterator it = hullPoints.begin(); it != hullPoints.end(); it++)
	//{
	//	CloudOuter << (**it).pt2d.x() << " " << (**it).pt2d.y() << endl;
	//}
	//CloudOuter.close();
	cout << "凹陷包的点个数为:" << hullPtsCount ;

	//projection on the LS plane (in 3D)
	//project the 2D hull points back into 3D
	ofstream CloudOuter(".\\Output\\hullPoints.txt");//optionally write the hull points to a file
	std::vector<Vector3d> hullpoints3D;
	for (std::list<IndexedVector2d*>::iterator it = hullPoints.begin(); it != hullPoints.end(); ++it)
	{
		Vector3d PointTemp=(m_center + X*(*it)->pt2d.x() + Y*(*it)->pt2d.y());
		CloudOuter << PointTemp(0) << " " << PointTemp(1) << " " << PointTemp(2) << endl;
		hullpoints3D.push_back(PointTemp);	
	}
}

        Let's go through it step by step. The snippet below uses a class I wrote myself for reading point clouds in arbitrary formats; you will have to swap in your own point cloud reading class here (a minimal sketch of what such a reader might look like follows the struct below). I assume reading a point cloud is not a problem for most people.

std::string CloudPath = std::string(".\\Data\\Pointcloud\\Scane059SegCloud.txt");
	ScarletCloudIO CloudReader(CloudPath);
	CloudReader.Read();
	CloudReader.PrintStationStatus();
	VertexPositionColor *Points = CloudReader.PtCloud().PointCloud;
	int PtNum = CloudReader.PtCloud().PointNum;

        The struct VertexPositionColor looks like this:

struct VertexPositionColor
{
	int PtID;
	double x = 0.0;
	double y = 0.0;
	double z = 0.0;
	double R = -1;
	double G = -1;
	double B = -1;
	double A = 0;//reflectance intensity, defaults to 0
	friend ostream & operator<<(ostream &out, VertexPositionColor &CloudTemp);//overloaded so a point can be streamed out
};
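        If you do not have a reader class of your own yet, below is a minimal sketch of one that loads an ASCII file with one "x y z [R G B]" point per line into this struct. The function name LoadXYZTxt is made up for this example (it is not part of ScarletCloudIO), so adapt it to whatever format your clouds actually use.

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

//minimal sketch: load an ASCII "x y z [R G B]" file into VertexPositionColor points
//(LoadXYZTxt is a made-up name and NOT part of ScarletCloudIO)
std::vector<VertexPositionColor> LoadXYZTxt(const std::string& path)
{
	std::vector<VertexPositionColor> cloud;
	std::ifstream in(path);
	std::string line;
	int id = 0;
	while (std::getline(in, line))
	{
		std::istringstream iss(line);
		VertexPositionColor p;
		if (!(iss >> p.x >> p.y >> p.z))
			continue; //skip empty or malformed lines
		iss >> p.R >> p.G >> p.B; //colors are optional, defaults stay at -1
		p.PtID = id++;
		cloud.push_back(p);
	}
	return cloud;
}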

        After reading, you can of course convert the points to your own type. Next up is the plane-fitting function:

computeLeastSquareBestFittingPlane(Points, PtNum, out_lsPlaneEquation, Gravity);

        Its implementation is as follows:

bool computeLeastSquareBestFittingPlane(VertexPositionColor*&in_PointPtr, int PNum, Vector4d& out_lsPlaneEquation, Vector3d &G)
{
	unsigned pointCount = PNum;
	//we need at least 3 points to compute a plane
	if (PNum < 3)
		return false;
	//Vector3d G(0, 0, 0);
	G.setZero();
	Vector3d m_lsPlaneVectors[3];
	if (pointCount > 3)
	{
		Matrix3d CovarianceMatrix = computeCovarianceMatrix(in_PointPtr, PNum);
		cout << "CovarianceMatrix:" << endl << CovarianceMatrix << endl;

		//we determine the plane normal by computing the smallest eigenvalue of M = 1/n * S[(p-µ)*(p-µ)']
		SelfAdjointEigenSolver <Matrix3d> myEigenSolver(CovarianceMatrix);
		if (myEigenSolver.info() != ComputationInfo::Success)abort();
		Vector3d myEigenValues = myEigenSolver.eigenvalues();
		Matrix3d myEigenMatrix = myEigenSolver.eigenvectors();
		cout << "myEigenValues" << setprecision(13) << endl << myEigenValues << endl;
		cout << "myEigenMatrix" << setprecision(13) << endl << myEigenMatrix << endl;

		//get normal
		//the smallest eigen vector corresponds to the "least square best fitting plane" normal
		double minEigenValue = myEigenValues(0);
		Vector3d minEigenVector = myEigenMatrix.col(0);
		cout << "minEigenValue:" << minEigenValue << endl;
		cout << "minEigenVector:" << minEigenVector.transpose() << endl;
		m_lsPlaneVectors[2] = minEigenVector;

		//get also X (Y will be deduced by cross product, see below)
		double maxEigenValue = myEigenValues(2);
		Vector3d maxEigenVector = myEigenMatrix.col(2);
		cout << "maxEigenValue:" << maxEigenValue << endl;
		cout << "maxEigenVector:" << maxEigenVector.transpose() << endl;
		m_lsPlaneVectors[0] = maxEigenVector;

		//get the centroid
		G = computeGravityCenter(in_PointPtr, PNum);
	}
	else
	{
		//we simply compute the normal of the 3 points by cross product!
		Vector3d A = Vector3d(in_PointPtr[0].x, in_PointPtr[0].y, in_PointPtr[0].z);
		Vector3d B = Vector3d(in_PointPtr[1].x, in_PointPtr[1].y, in_PointPtr[1].z);
		Vector3d C = Vector3d(in_PointPtr[2].x, in_PointPtr[2].y, in_PointPtr[2].z);

		//get X (AB by default) and Y (AC - will be updated later) and deduce N = X ^ Y
		m_lsPlaneVectors[0] = (B - A);
		m_lsPlaneVectors[1] = (C - A);
		m_lsPlaneVectors[2] = m_lsPlaneVectors[0].cross(m_lsPlaneVectors[1]);
		//the plane passes through any of the 3 points
		G = A;
	}

	//make sure all vectors are unit!
	if (m_lsPlaneVectors[2].squaredNorm() < ZERO_TOLERANCE)
	{
		//this means that the points are colinear!
		m_lsPlaneVectors[2] = Vector3d(0, 0, 1); //any normal will do
		return false;
	}
	else
	{
		m_lsPlaneVectors[2].normalize();
	}
	//normalize X as well
	m_lsPlaneVectors[0].normalize();
	//and update Y
	m_lsPlaneVectors[1] = m_lsPlaneVectors[2].cross(m_lsPlaneVectors[0]);

	//deduce the proper equation
	out_lsPlaneEquation.setZero();
	out_lsPlaneEquation[0] = m_lsPlaneVectors[2](0);
	out_lsPlaneEquation[1] = m_lsPlaneVectors[2](1);
	out_lsPlaneEquation[2] = m_lsPlaneVectors[2](2);

	//eventually we just have to compute the 'constant' coefficient a3
	//we use the fact that the plane pass through G --> GM.N = 0 (scalar prod)
	//i.e. a0*G[0]+a1*G[1]+a2*G[2]=a3
	out_lsPlaneEquation[3] = G.dot(m_lsPlaneVectors[2]);
	return true;
}
Vector3d computeGravityCenter(VertexPositionColor*&in_PointPtr, int PNum)
{
	//sum
	Vector3d Psum(0, 0, 0); Psum.setZero();
	for (unsigned i = 0; i<PNum; ++i)
	{
		const VertexPositionColor P = in_PointPtr[i];
		Psum(0) += P.x;
		Psum(1) += P.y;
		Psum(2) += P.z;
	}

	Vector3d G(static_cast<double>(Psum(0) / (double)PNum),
		static_cast<double>(Psum(1) / (double)PNum),
		static_cast<double>(Psum(2) / (double)PNum));

	return(G);
}

        The input parameters are the point cloud pointer and the number of points; the outputs are the fitted plane parameters and the centroid of the cloud. A plane is described by four parameters, which is high-school material, so that should pose no problem. The function uses Eigen's SelfAdjointEigenSolver to eigen-decompose the covariance matrix: the eigenvector of the smallest eigenvalue is the plane normal, which fixes three of the four parameters, and the last one is obtained by plugging the cloud's centroid into the equation (note that the plane is stored here as N·P = d rather than ax+by+cz+d = 0). A quick sanity check of this convention is sketched below, followed by the projection function, which is simple enough to read at a glance and can be pasted directly.
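        As that sanity check (this little snippet is my own addition, not something extracted from CC): the centroid must satisfy the plane equation exactly, and because the normal is already unit length, the signed distance of any point to the plane is simply N·P - d.

	//my own sanity check of the plane convention N·P = d (not part of the CC code)
	Vector3d N = out_lsPlaneEquation.head<3>(); //unit normal (a0, a1, a2)
	double   d = out_lsPlaneEquation[3];        //offset a3 = G·N

	//the centroid lies on the plane, so this should print a value close to 0
	cout << "centroid residual: " << N.dot(Gravity) - d << endl;

	//signed distance from an arbitrary point (here the first one) to the plane
	Vector3d P0(Points[0].x, Points[0].y, Points[0].z);
	cout << "signed distance of the first point: " << N.dot(P0) - d << endl;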

//project all the 3D points down to 2D
Vector3d vorthogonal(const Vector3d p, Vector3d in_q)
{
	Vector3d q = in_q;
	if (fabs(p[0]) <= fabs(p[1]) && fabs(p[0]) <= fabs(p[2]))
	{
		q[0] = 0; q[1] = p[2]; q[2] = -p[1];
	}
	else if (fabs(p[1]) <= fabs(p[0]) && fabs(p[1]) <= fabs(p[2]))
	{
		q[0] = -p[2]; q[1] = 0; q[2] = p[0];
	}
	else
	{
		q[0] = p[1]; q[1] = -p[0]; q[2] = 0;
	}
	q.normalize();
	return (q);
}
void ComputeBaseVectors(const Vector3d &N, Vector3d& X, Vector3d& Y)
{
	Vector3d Nunit = N;
	Nunit.normalize();

	//we create a first vector orthogonal to the input one
	Vector3d ort;
	//X is also normalized
	X = vorthogonal(Nunit, ort);
	//we deduce the orthogonal vector to the input one and X
	Y = N.cross(X);
	//Y.normalize(); //should already be normalized!
}
bool projectPointsOn2DPlane(VertexPositionColor *Points, int Pnum, std::vector<Vector2d>& points2D,
	const Vector4d planeEquation, Eigen::Vector3d gravity,
	Eigen::Vector3d& O, Eigen::Vector3d& X, Eigen::Vector3d& Y,
	bool useOXYasBase)
{
	unsigned count = Pnum;
	if (!count)
		return false;
	//reserve memory for output set
	try
	{
		points2D.resize(count);
	}
	catch (const std::bad_alloc&)
	{
		//out of memory
		return false;
	}

	//we construct the plane local base
	Vector3d G(0, 0, 0), u(1, 0, 0), v(0, 1, 0);
	if (useOXYasBase)
	{
		G = O;
		u = X;
		v = Y;
	}
	else
	{
		//Vector3d N(planeEquation);
		Vector3d N = planeEquation.block<3, 1>(0, 0);
		ComputeBaseVectors(N, u, v);
		//get the barycenter
		G = gravity;
	}

	//project the points
	for (unsigned i = 0; i<count; ++i)
	{
		//we recenter current point
		Vector3d getPointd(Points[i].x, Points[i].y, Points[i].z);
		Vector3d P = getPointd - G;
		//then we project it on plane (with scalar prods)
		points2D[i] = Vector2d(P.dot(u), P.dot(v));
	}

	//output the local base if necessary
	if (!useOXYasBase)
	{
		O = G;
		X = u;
		Y = v;
	}
	return true;
}
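        One function I have not reproduced here is computeCloud2PlaneDistanceRMS, which the main function calls to print the plane RMSE. Below is only a minimal sketch of what such a function needs to do under the N·P = d convention used above; it is not the code extracted from CC, so grab the original from the CC source if you want a bit-for-bit match.

#include <cmath>

//minimal sketch of the plane RMSE, assuming the plane is stored as N·P = d with a unit normal
//(this is NOT the code extracted from CC, only an equivalent reference)
double computeCloud2PlaneDistanceRMS(VertexPositionColor *Points, int PtNum, const Vector4d &planeEquation)
{
	if (PtNum <= 0)
		return 0.0;

	Vector3d N = planeEquation.head<3>();
	double   d = planeEquation[3];

	double sumSquared = 0.0;
	for (int i = 0; i < PtNum; ++i)
	{
		Vector3d P(Points[i].x, Points[i].y, Points[i].z);
		double dist = N.dot(P) - d; //signed point-to-plane distance (N is unit length)
		sumSquared += dist * dist;
	}
	return std::sqrt(sumSquared / PtNum);
}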

        Next comes the concave hull detection code:

//try to get the points on the convex/concave hull to build the contour and the polygon
	//this step roughly breaks down into three sub-steps:
	cout << "Finding Concave Hull...";
	std::vector<IndexedVector2d> IndexedPoints2D;
	IndexedPoints2D.resize(points2D.size());
	for (int i = 0; i < points2D.size(); i++)//repack the points together with their indices so they can be looked up later
	{
		IndexedPoints2D[i].pt2d = points2D[i];
		IndexedPoints2D[i].index = i;
		//CloudOuter << points2D[i].x() << " " << points2D[i].y() << endl;
	}
	std::list<IndexedVector2d*> hullPoints;
	double m_maxEdgeLength = 0.2;
	//detect the concave hull, done by a single function call
	//try to get the points on the convex/concave hull to build the contour and the polygon
	if (!extractConcaveHull2D(IndexedPoints2D,
		hullPoints,
		m_maxEdgeLength*m_maxEdgeLength))
	{
		cout << ("[ccFacet::createInternalRepresentation] Failed to compute the convex hull of the input points!");
	}
	cout << "Done!";

        First we repack the point cloud into the IndexedPoints2D format, which is defined as:

struct IndexedVector2d
{
	Eigen::Vector2d pt2d;//2Dpoint we get
	int index;//index we get
};

        Very simple: it just wraps a Vector2d together with an index, and that index is used later during the concave hull detection. The extractConcaveHull2D function below is quite large; it is the main concave hull routine and also the hardest part. To be honest, I have not fully understood it, but the final result is exactly the same as CC's, which is good enough for me. (The small helper types and functions it relies on, which are not listed here, are sketched after the listings below.)

bool extractConcaveHull2D(std::vector<IndexedVector2d>& points,
	std::list<IndexedVector2d*>& hullPoints,
	double maxSquareEdgeLength/*=0*/)
{
	//first compute the Convex hull
	if (!extractConvexHull2D(points, hullPoints))
		return false;
	//do we really need to compute the concave hull?
	if (hullPoints.size() < 2 || maxSquareEdgeLength <= 0)
		return true;

	unsigned pointCount = static_cast<unsigned>(points.size());
	cout << "凸包的点个数为:" << hullPoints.size();
	std::vector<HullPointFlags> pointFlags;
	try
	{
		pointFlags.resize(pointCount, POINT_NOT_USED);
	}
	catch (...)
	{
		//not enough memory
		return false;
	}

	//hack: compute the theoretical 'minimal' edge length
	double minSquareEdgeLength = 0;
	{
		Eigen::Vector2d minP, maxP;
		for (size_t i = 0; i<pointCount; ++i)
		{
			const IndexedVector2d& P = points[i];
			if (i)
			{
				minP(0) = std::min(P.pt2d.x(), minP.x());
				minP(1) = std::min(P.pt2d.y(), minP.y());
				maxP(0) = std::max(P.pt2d.x(), maxP.x());
				maxP(1) = std::max(P.pt2d.y(), maxP.y());
			}
			else
			{
				minP = maxP = P.pt2d;//initialize with the first point
			}
		}

		minSquareEdgeLength = (maxP - minP).squaredNorm() / static_cast<double>(1.0e7); //10^-7 of the max bounding rectangle side
		minSquareEdgeLength = std::min(minSquareEdgeLength, maxSquareEdgeLength / 10);

		//we remove very small edges
		for (std::list<IndexedVector2d*>::iterator itA = hullPoints.begin(); itA != hullPoints.end(); ++itA)
		{
			std::list<IndexedVector2d*>::iterator itB = itA; ++itB;
			if (itB == hullPoints.end())
				itB = hullPoints.begin();
			if (((**itB).pt2d - (**itA).pt2d).squaredNorm() < minSquareEdgeLength)
			{
				pointFlags[(*itB)->index] = POINT_FROZEN;
				hullPoints.erase(itB);
			}
		}

		if (hullPoints.size() < 2)
		{
			//no more edges?!
			return false;
		}
	}

	//we repeat the process until nothing changes!
	//Warning: high STL containers usage ahead ;)
	unsigned step = 0;
	bool somethingHasChanged = true;
	while (somethingHasChanged)
	{
		try
		{
			somethingHasChanged = false;
			++step;
			//reset point flags
			//for (size_t i=0; i<pointCount; ++i)
			//{
			//	if (pointFlags[i] != POINT_FROZEN)
			//		pointFlags[i] = POINT_NOT_USED;
			//}

			//build the initial edge list & flag the convex hull points
			std::multiset<Edge> edges;
			{
				for (std::list<IndexedVector2d*>::iterator itA = hullPoints.begin(); itA != hullPoints.end(); ++itA)
				{
					std::list<IndexedVector2d*>::iterator itB = itA; ++itB;
					if (itB == hullPoints.end())
						itB = hullPoints.begin();

					//we will only process the edges that are longer than the maximum specified length
					//if ((**itB - **itA).norm2() > maxSquareEdgeLength)
					if (((**itB).pt2d - (**itA).pt2d).squaredNorm() > maxSquareEdgeLength)
					{
						unsigned nearestPointIndex = 0;
						double minSquareDist = FindNearestCandidate(
							nearestPointIndex,
							itA,
							itB,
							points,
							pointFlags,
							minSquareEdgeLength,
							maxSquareEdgeLength,
							step > 1);

						if (minSquareDist >= 0)
						{
							Edge e(itA, nearestPointIndex, minSquareDist);
							edges.insert(e);
						}
					}

					pointFlags[(*itA)->index] = POINT_USED;
				}
			}
			while (!edges.empty())
			{
				//current edge (AB)
				//this should be the edge with the nearest 'candidate'
				Edge e = *edges.begin();
				edges.erase(edges.begin());

				VertexIterator itA = e.itA;
				VertexIterator itB = itA; ++itB;
				if (itB == hullPoints.end())
					itB = hullPoints.begin();

				//nearest point
				const IndexedVector2d& P = points[e.nearestPointIndex];
				if (pointFlags[P.index] != POINT_NOT_USED)
				{
					//assert(false); //DGM: in fact it happens!
					break;
				}

				//last check: the new segments must not intersect with the actual hull!
				bool intersect = false;
				//if (false)
				{
					for (VertexIterator itJ = hullPoints.begin(), itI = itJ++; itI != hullPoints.end(); ++itI, ++itJ)
					{
						if (itJ == hullPoints.end())
							itJ = hullPoints.begin();
						bool seg1 = segmentIntersect((**itI).pt2d, (**itJ).pt2d, (**itA).pt2d, P.pt2d);
						bool seg2 = segmentIntersect((**itI).pt2d, (**itJ).pt2d, (P).pt2d, (**itB).pt2d);
						if (((*itI)->index != (*itA)->index && (*itJ)->index != (*itA)->index && seg1)
							|| ((*itI)->index != (*itB)->index && (*itJ)->index != (*itB)->index && seg2))
						{
							intersect = true;
							break;
						}
					}
				}

				if (!intersect)
				{
					//add point to concave hull
					VertexIterator itP = hullPoints.insert(itB == hullPoints.begin() ? hullPoints.end() : itB, &points[e.nearestPointIndex]);

					//we won't use P anymore!
					pointFlags[P.index] = POINT_USED;

					somethingHasChanged = true;

					//update all edges that were having 'P' as their nearest candidate as well
					if (!edges.empty())
					{
						std::vector<VertexIterator> removed;
						std::multiset<Edge>::const_iterator lastValidIt = edges.end();
						for (std::multiset<Edge>::const_iterator it = edges.begin(); it != edges.end(); ++it)
						{
							if ((*it).nearestPointIndex == e.nearestPointIndex)
							{
								//we'll have to put them back afterwards!
								removed.push_back((*it).itA);

								edges.erase(it);
								if (edges.empty())
									break;
								if (lastValidIt != edges.end())
									it = lastValidIt;
								else
									it = edges.begin();
							}
							else
							{
								lastValidIt = it;
							}
						}

						//update the removed edges info and put them back in the main list
						for (size_t i = 0; i<removed.size(); ++i)
						{
							VertexIterator itC = removed[i];
							VertexIterator itD = itC; ++itD;
							if (itD == hullPoints.end())
								itD = hullPoints.begin();

							unsigned nearestPointIndex = 0;
							PointCoordinateType minSquareDist = FindNearestCandidate(
								nearestPointIndex,
								itC,
								itD,
								points,
								pointFlags,
								minSquareEdgeLength,
								maxSquareEdgeLength);

							if (minSquareDist >= 0)
							{
								Edge e(itC, nearestPointIndex, minSquareDist);
								edges.insert(e);
							}
						}
					}

					//we'll inspect the two new segments later (if necessary)
					if ((P.pt2d - (**itA).pt2d).squaredNorm() > maxSquareEdgeLength)
					{
						unsigned nearestPointIndex = 0;
						PointCoordinateType minSquareDist = FindNearestCandidate(
							nearestPointIndex,
							itA,
							itP,
							points,
							pointFlags,
							minSquareEdgeLength,
							maxSquareEdgeLength);

						if (minSquareDist >= 0)
						{
							Edge e(itA, nearestPointIndex, minSquareDist);
							edges.insert(e);
						}
					}

					if (((**itB).pt2d - P.pt2d).squaredNorm() > maxSquareEdgeLength)
					{
						unsigned nearestPointIndex = 0;
						PointCoordinateType minSquareDist = FindNearestCandidate(
							nearestPointIndex,
							itP,
							itB,
							points,
							pointFlags,
							minSquareEdgeLength,
							maxSquareEdgeLength);

						if (minSquareDist >= 0)
						{
							Edge e(itP, nearestPointIndex, minSquareDist);
							edges.insert(e);
						}
					}
				}
			}
		}
		catch (...)
		{
			//not enough memory
			return false;
		}
	}
	return true;
}
enum HullPointFlags {
	POINT_NOT_USED = 0,
	POINT_USED = 1,
	POINT_IGNORED = 2,
	POINT_FROZEN = 3,
};
double FindNearestCandidate(unsigned& minIndex,
	const VertexIterator& itA,
	const VertexIterator& itB,
	const std::vector<IndexedVector2d>& points,
	const std::vector<HullPointFlags>& pointFlags,
	PointCoordinateType minSquareEdgeLength,
	PointCoordinateType maxSquareEdgeLength,
	bool allowLongerChunks = false)
{
	//look for the nearest point in the input set
	PointCoordinateType minDist2 = -1;
	Vector2d AB = (**itB).pt2d - (**itA).pt2d;
	PointCoordinateType squareLengthAB = AB.squaredNorm();
	unsigned pointCount = static_cast<unsigned>(points.size());
	for (unsigned i = 0; i<pointCount; ++i)
	{
		const IndexedVector2d& P = points[i];
		if (pointFlags[P.index] != POINT_NOT_USED)
			continue;


		//skip the edge vertices!
		if (P.index == (*itA)->index || P.index == (*itB)->index)
			continue;


		//we only consider 'inner' points
		Vector2d AP = P.pt2d - (**itA).pt2d;
		if (AB.x() * AP.y() - AB.y() * AP.x() < 0)
		{
			continue;
		}


		PointCoordinateType dot = AB.dot(AP); // = cos(PAB) * ||AP|| * ||AB||
		if (dot >= 0 && dot <= squareLengthAB)
		{
			Vector2d HP = AP - AB * (dot / squareLengthAB);
			PointCoordinateType dist2 = HP.squaredNorm();
			if (minDist2 < 0 || dist2 < minDist2)
			{
				//the 'nearest' point must also be a valid candidate
				//(i.e. at least one of the created edges is smaller than the original one
				//and we don't create too small edges!)
				PointCoordinateType squareLengthAP = AP.squaredNorm();
				PointCoordinateType squareLengthBP = (P.pt2d - (**itB).pt2d).squaredNorm();
				if (squareLengthAP >= minSquareEdgeLength
					&&	squareLengthBP >= minSquareEdgeLength
					&& (allowLongerChunks || (squareLengthAP < squareLengthAB || squareLengthBP < squareLengthAB))
					)
				{
					minDist2 = dist2;
					minIndex = i;
				}
			}
		}
	}
	return (minDist2 < 0 ? minDist2 : minDist2 / squareLengthAB);
}
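        One last caveat: the listings above also rely on a few small pieces that I have not pasted, namely PointCoordinateType, ZERO_TOLERANCE, the VertexIterator typedef, the Edge struct, segmentIntersect and extractConvexHull2D. To make this post roughly self-contained, here is a minimal sketch of what they need to look like for the code above to compile and behave sensibly. These are my own stand-ins rather than the lines extracted from CC (in particular the epsilon value and the convex hull implementation are assumptions), so take the originals from the CC source if you want a bit-for-bit match. Note that the convex hull must come out in counter-clockwise order, because FindNearestCandidate only considers candidates lying to the left of each directed edge.

//minimal sketches of the helpers used above but not listed (NOT the original CC code)
#include <algorithm>
#include <list>
#include <vector>
#include <Eigen/Dense>

typedef double PointCoordinateType;            //CC uses float or double depending on the build
static const double ZERO_TOLERANCE = 1.0e-12;  //small epsilon; the exact value here is my assumption

typedef std::list<IndexedVector2d*>::iterator VertexIterator;

//a hull edge together with its best insertion candidate;
//the multiset in extractConcaveHull2D pops begin() first, so edges must be
//ordered by the squared distance of that candidate
struct Edge
{
	Edge(const VertexIterator& A, unsigned nearestIndex, double dist2)
		: itA(A), nearestPointIndex(nearestIndex), nearestPointSquareDist(dist2) {}
	bool operator<(const Edge& e) const { return nearestPointSquareDist < e.nearestPointSquareDist; }
	VertexIterator itA;              //first vertex of the edge (its successor in the list is the second one)
	unsigned nearestPointIndex;      //index (into 'points') of the nearest unused candidate
	double nearestPointSquareDist;   //squared distance from that candidate to the edge
};

//2D cross product of (a-o) and (b-o); >0 means b lies to the left of the ray o->a
static double cross2(const Eigen::Vector2d& o, const Eigen::Vector2d& a, const Eigen::Vector2d& b)
{
	return (a.x() - o.x()) * (b.y() - o.y()) - (a.y() - o.y()) * (b.x() - o.x());
}

//proper intersection test between segments [A,B] and [C,D]
//(touching exactly at an endpoint is not counted as an intersection here)
bool segmentIntersect(const Eigen::Vector2d& A, const Eigen::Vector2d& B,
	const Eigen::Vector2d& C, const Eigen::Vector2d& D)
{
	double d1 = cross2(C, D, A);
	double d2 = cross2(C, D, B);
	double d3 = cross2(A, B, C);
	double d4 = cross2(A, B, D);
	return ((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0))
		&& ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0));
}

//convex hull via Andrew's monotone chain, output in counter-clockwise order
bool extractConvexHull2D(std::vector<IndexedVector2d>& points,
	std::list<IndexedVector2d*>& hullPoints)
{
	hullPoints.clear();
	size_t n = points.size();
	if (n == 0)
		return false;

	//sort pointers to the points by x, then y
	std::vector<IndexedVector2d*> ptrs(n);
	for (size_t i = 0; i < n; ++i)
		ptrs[i] = &points[i];
	std::sort(ptrs.begin(), ptrs.end(),
		[](const IndexedVector2d* a, const IndexedVector2d* b)
	{
		return a->pt2d.x() < b->pt2d.x()
			|| (a->pt2d.x() == b->pt2d.x() && a->pt2d.y() < b->pt2d.y());
	});

	std::vector<IndexedVector2d*> hull(2 * n);
	size_t k = 0;
	//lower hull
	for (size_t i = 0; i < n; ++i)
	{
		while (k >= 2 && cross2(hull[k - 2]->pt2d, hull[k - 1]->pt2d, ptrs[i]->pt2d) <= 0)
			--k;
		hull[k++] = ptrs[i];
	}
	//upper hull
	for (size_t i = n - 1, t = k + 1; i > 0; --i)
	{
		while (k >= t && cross2(hull[k - 2]->pt2d, hull[k - 1]->pt2d, ptrs[i - 1]->pt2d) <= 0)
			--k;
		hull[k++] = ptrs[i - 1];
	}
	hull.resize(k > 1 ? k - 1 : k); //the last point repeats the first one
	hullPoints.assign(hull.begin(), hull.end());
	return true;
}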

With that, all the functions are complete. Let's run it; the console output is as follows:


The final number of detected concave hull points is 370, and the plane parameters and the back-projection RMSE have been printed as well. Let's compare against CC's result:

        Then compare the number of hull points too; emmmm, they should be exactly identical. Alright, we can declare that, after passing the console test, the concave hull algorithm has been successfully extracted!

        In the next post we will officially start adding the algorithm into the Qt framework!



