RANSAC (Random Sample Consensus): Principles and OpenCV Code Implementation
Original: http://www.lai18.com/content/1046939.html
Reposted from: http://blog.csdn.net/yihaizhiyan/article/details/5973729
http://blog.csdn.net/Sway_2012/article/details/37765765
http://blog.csdn.net/zouwen198317/article/details/38494149
1. What is RANSAC?
RANSAC is short for RANdom SAmple Consensus. It is an iterative method for estimating the parameters of a model (model fitting) from a set of observed data. It is a randomized, non-deterministic algorithm: each run may produce a different result, but it always yields a reasonable one; to raise the probability of finding a good model, the number of iterations must be increased.
2. Algorithm Details
Given the coordinates of two points p1 and p2, determine the line through them, so that for any input point p3 we can decide whether it lies on that line. Elementary analytic geometry tells us that a point lies on a line as long as the slope between it and any point on the line equals the line's own slope. In practice, one usually first derives an expression for the line from the two known points (point-slope form, intercept form, and so on), after which a simple vector computation decides whether p3 lies on the line.
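As a small illustration (the onLine helper below is hypothetical, not from the original post), a 2D cross-product test does this check without computing a slope at all, so vertical lines need no special handling:
#include <cmath>
#include <iostream>
struct Point { double x, y; };
// Returns true if p3 lies (within a small tolerance) on the line through p1 and p2.
// The 2D cross product of (p2 - p1) and (p3 - p1) is zero exactly when the three
// points are collinear, so no slope and no division are needed.
bool onLine(const Point& p1, const Point& p2, const Point& p3, double eps = 1e-9) {
    double cross = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x);
    return std::fabs(cross) < eps;
}
int main() {
    Point p1{0, 0}, p2{1, 1};
    std::cout << onLine(p1, p2, {2, 2}) << " " << onLine(p1, p2, {2, 3}) << std::endl; // prints "1 0"
}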
Data from real measurements always carry some error. Suppose, for example, that two variables X and Y are known to be linearly related, Y = aX + b, and we want to determine the concrete values of a and b. Experiments give us a set of measured (X, Y) pairs. In theory, two data points would suffice to determine the two unknowns, but because of measurement error the values of a and b computed from any two points differ from pair to pair. What we want instead is the model whose error against all measurements is smallest. This is exactly the least-squares idea covered in every calculus course: find the parameter values at which the partial derivatives of the mean squared error with respect to a and b vanish. In fact, in many situations "least squares" is used as a synonym for linear regression.
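A minimal sketch of that closed-form least-squares solution for Y = aX + b (the fitLine helper is hypothetical, not part of the original code):
#include <vector>
#include <iostream>
// Ordinary least squares for y = a*x + b: setting the partial derivatives of the
// sum of squared residuals with respect to a and b to zero gives the formulas below.
void fitLine(const std::vector<double>& x, const std::vector<double>& y, double& a, double& b) {
    const int n = static_cast<int>(x.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    b = (sy - a * sx) / n;
}
int main() {
    std::vector<double> x = {0, 1, 2, 3}, y = {1.1, 2.9, 5.2, 6.8}; // roughly y = 2x + 1
    double a, b;
    fitLine(x, y, a, b);
    std::cout << "a = " << a << ", b = " << b << std::endl;
}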
Unfortunately, least squares is only suitable when the errors are small. Consider extracting a model from a very noisy data set, say one in which only 20% of the points actually fit the model; least squares is then powerless. In the figure below, for example, a human eye easily picks out the line (the pattern), yet the algorithm finds the wrong one.
The input to RANSAC is a set of observations (often containing substantial noise or invalid points), a parameterized model that can explain the observations, and some confidence parameters. RANSAC reaches its goal by repeatedly selecting a random subset of the data. The selected subset is hypothesized to consist of inliers and is then verified as follows:
1. A model is fitted to the hypothetical inliers, i.e. all of its unknown parameters are computed from them.
2. The model obtained in step 1 is tested against all the other data; every point that fits the estimated model well is also counted as a hypothetical inlier.
3. If enough points have been classified as hypothetical inliers, the estimated model is considered reasonable.
4. The model is then re-estimated from the full set of hypothetical inliers (for example with least squares), since so far it has only been estimated from the initial subset.
5. Finally, the model is scored by the error of the inliers with respect to it.
This procedure is repeated a fixed number of times; each candidate model is either discarded because it has too few inliers, or kept because it is better than the current best model.
The whole process is illustrated in the figure below:
3. Code Implementation
Random Sample Consensus (RANSAC) is a robust model-fitting algorithm that can estimate an accurate model from data contaminated with outliers.
Parameters used in the RANSAC procedure:
N -- the minimum number of samples needed to fit the model
K -- the number of iterations of the algorithm
t -- the distance threshold used to decide whether a data point is an inlier
d -- the number of inliers required to accept a fitted model as a good model of the data set
The RANSAC procedure (a minimal code sketch is given right after the pseudocode):
1 for K iterations
2 uniformly sample N points at random from the data
3 fit a model to the N sampled points
4 for every data point outside the sample
5 compute its distance to the model; if it is less than t, count the point as consistent (an inlier), otherwise treat it as an outlier
6 end
7 if there are d or more consistent points, accept the fitted model as a good model
8 end
9 among the accepted models, pick the best one by its fitting error
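A minimal sketch of this loop for fitting a 2D line (all names are hypothetical; the final least-squares refit of the best model on its consensus set, implied by the last pseudocode step, is omitted for brevity):
#include <cmath>
#include <cstdlib>
#include <vector>
#include <iostream>
struct Point { double x, y; };
struct Line  { double a, b; };   // y = a*x + b
// Exact fit through two sampled points (the "fit a model to the N sampled points" step, N = 2).
Line fitTwoPoints(const Point& p, const Point& q) {
    double a = (q.y - p.y) / (q.x - p.x);
    return {a, p.y - a * p.x};
}
double residual(const Line& m, const Point& p) {
    return std::fabs(m.a * p.x + m.b - p.y);
}
// K iterations, inlier threshold t, minimum consensus size d.
bool ransacLine(const std::vector<Point>& pts, int K, double t, int d, Line& best) {
    size_t bestInliers = 0;
    for (int k = 0; k < K; ++k) {
        int i = std::rand() % pts.size(), j = std::rand() % pts.size();
        if (i == j || pts[i].x == pts[j].x) continue;  // skip degenerate samples
        Line m = fitTwoPoints(pts[i], pts[j]);
        size_t inliers = 0;
        for (const Point& p : pts)
            if (residual(m, p) < t) ++inliers;         // distance test against threshold t
        if (inliers >= static_cast<size_t>(d) && inliers > bestInliers) {
            bestInliers = inliers;                     // keep the model with the largest consensus set
            best = m;
        }
    }
    return bestInliers > 0;
}
int main() {
    std::vector<Point> pts = {{0, 1}, {1, 3}, {2, 5}, {3, 7}, {4, 9}, {1, 8}, {3, 0}}; // y = 2x + 1 plus outliers
    Line best{};
    if (ransacLine(pts, 100, 0.5, 4, best))
        std::cout << "a = " << best.a << ", b = " << best.b << std::endl;
}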
Computing the number of iterations
Let r = (number of inliers) / (total number of points). Then:
p0 = pow(r, N) is the probability that all N sampled points are inliers, i.e. the probability of one valid sample
p1 = 1 - pow(r, N) is the probability that at least one of the N sampled points is an outlier, i.e. the probability of one invalid sample
p2 = pow(p1, K) is the probability that all K samples are invalid
Let p denote the probability that at least one of the K samples is valid; then 1 - p = pow(p1, K). Taking logarithms of both sides
gives K = log(1 - p) / log(p1).
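As a quick sanity check of the formula (a hypothetical snippet; the calcN method in the framework below computes the same quantity): with an inlier ratio r = 0.5, sample size N = 2 and desired confidence p = 0.99, K = log(0.01)/log(0.75) ≈ 17 iterations are enough.
#include <cmath>
#include <iostream>
// K such that, with confidence p, at least one of K random samples of size N
// consists only of inliers, given an inlier ratio r.
long iterations(double r, int N, double p) {
    double p1 = 1.0 - std::pow(r, N);   // probability of one invalid sample
    return static_cast<long>(std::ceil(std::log(1.0 - p) / std::log(p1)));
}
int main() {
    std::cout << iterations(0.5, 2, 0.99) << std::endl; // prints 17
}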
Below is a RANSAC code framework from Google:
#ifndef FVISION_RANSAC_H_
#define FVISION_RANSAC_H_
#include <fvision/utils/random_utils.h>
#include <fvision/utils/misc.h>
#include <vector>
#include <iostream>
#include <cassert>
namespace fvision {
class RANSAC_SamplesNumber {
public:
RANSAC_SamplesNumber(int modelSampleSize) {
this->s = modelSampleSize;
this->p = 0.99;
}
~RANSAC_SamplesNumber(void) {}
public:
long calcN(int inliersNumber, int samplesNumber) {
double e = 1 - (double)inliersNumber / samplesNumber;
//cout<<"e: "<<e<<endl;
if (e > 0.9) e = 0.9;
//cout<<"pow: "<<pow((1 - e), s)<<endl;
//cout<<log(1 - pow((1 - e), s))<<endl;
long N = (long)(log(1 - p) / log(1 - pow((1 - e), s)));
if (N < 0) return (long) 1000000000 ;
else return N;
}
private:
int s; //samples size for fitting a model
double p; //probability that at least one of the random samples is free from outliers
//usually 0.99
};
//fit a model to a set of samples
template <typename M, typename S>
class GenericModelCalculator {
public:
typedef std::vector<S> Samples;
virtual M compute(const Samples& samples) = 0;
virtual ~GenericModelCalculator<M, S>() {}
//the model calculator may only use a subset of the samples for computing
//default return empty for both
virtual const std::vector<int>& getInlierIndices() const { return defaultInlierIndices; };
virtual const std::vector<int>& getOutlierIndices() const { return defaultOutlierIndices; };
// if the subclass has a threshold parameter, it needs to override the following three functions
// this is used for algorithms that have a normalization step on the input samples
virtual bool hasThreshold() const { return false; }
virtual void setThreshold(double threshold) {}
virtual double getThreshold() const { return 0; }
protected:
std::vector<int> defaultInlierIndices;
std::vector<int> defaultOutlierIndices;
};
//evaluate a model to samples
//using a threshold to distinguish inliers and outliers
template <typename M, typename S>
class GenericErrorCaclculator {
public:
virtual ~GenericErrorCaclculator<M, S>() {}
typedef std::vector<S> Samples;
virtual double compute(const M& model, const S& sample) const = 0;
double computeAverage(const M& model, const Samples& samples) const {
int n = (int)samples.size();
if (n == 0) return 0;
double sum = 0;
for (int i = 0; i < n; i++) {
sum += compute(model, samples[i]);
}
return sum / n;
}
double computeInlierAverage(const M& model, const Samples& samples) const {
int n = (int)samples.size();
if (n == 0) return 0;
double sum = 0;
double error = 0;
int inlierNum = 0;
for (int i = 0; i < n; i++) {
error = compute(model, samples[i]);
if (error <= threshold) {
sum += error;
inlierNum++;
}
}
if (inlierNum == 0) return 1000000 ;
return sum / inlierNum;
}
public:
/** set a threshold for classify inliers and outliers
*/
void setThreshold(double v) { threshold = v; }
double getThreshold() const { return threshold; }
/** classify all samples to inliers and outliers
*
*/
void classify(const M& model, const Samples& samples, Samples& inliers, Samples& outliers) const {
inliers.clear();
outliers.clear();
typename Samples::const_iterator iter = samples.begin();
for (; iter != samples.end(); ++iter) {
if (isInlier(model, *iter)) inliers.push_back(*iter);
else outliers.push_back(*iter);
}
}
/** classify all samples to inliers and outliers, output indices
*
*/
void classify(const M& model, const Samples& samples, std::vector<int>& inlierIndices, std::vector<int>& outlierIndices) const {
inlierIndices.clear();
outlierIndices.clear();
typename Samples::const_iterator iter = samples.begin();
int i = 0;
for (; iter != samples.end(); ++iter, ++i) {
if (isInlier(model, *iter)) inlierIndices.push_back(i);
else outlierIndices.push_back(i);
}
}
/** classify all samples to inliers and outliers
*
*/
void classify(const M& model, const Samples& samples,
std::vector<int>& inlierIndices, std::vector<int>& outlierIndices,
Samples& inliers, Samples& outliers) const {
inliers.clear();
outliers.clear();
inlierIndices.clear();
outlierIndices.clear();
typename Samples::const_iterator iter = samples.begin();
int i = 0;
for (; iter != samples.end(); ++iter, ++i) {
if (isInlier(model, *iter)) {
inliers.push_back(*iter);
inlierIndices.push_back(i);
}
else {
outliers.push_back(*iter);
outlierIndices.push_back(i);
}
}
}
int calcInliersNumber(const M& model, const Samples& samples) const {
int n = 0;
for (int i = 0; i < (int)samples.size(); i++) {
if (isInlier(model, samples[i])) ++n;
}
return n;
}
bool isInlier(const M& model, const S& sample) const {
return (compute(model, sample) <= threshold);
}
private:
double threshold;
};
/** generic RANSAC framework
* make use of a model calculator and an error calculator
* M is the model type, need to support copy assignment operator and default constructor
* S is the sample type.
*
* Interface:
* M compute(samples); input a set of samples, output a model.
* after compute, inliers and outliers can be retrieved
*
*/
template <typename M, typename S>
class Ransac : public GenericModelCalculator<M, S> {
public:
typedef std::vector<S> Samples;
/** Constructor
*
* @param pmc a GenericModelCalculator object
* @param modelSampleSize how many samples are used to fit a model
* @param pec a GenericErrorCaclculator object
*/
Ransac(GenericModelCalculator<M, S>* pmc, int modelSampleSize, GenericErrorCaclculator<M, S>* pec) {
this->pmc = pmc;
this->modelSampleSize = modelSampleSize;
this->pec = pec;
this->maxSampleCount = 500;
this->minInliersNum = 1000000 ;
this->verbose = false;
}
const GenericErrorCaclculator<M, S>* getErrorCalculator() const { return pec; }
virtual ~Ransac() {
delete pmc;
delete pec;
}
void setMaxSampleCount(int n) {
this->maxSampleCount = n;
}
void setMinInliersNum(int n) {
this->minInliersNum = n;
}
virtual bool hasThreshold() const { return true; }
virtual void setThreshold(double threshold) {
pec->setThreshold(threshold);
}
virtual double getThreshold() const {
return pec->getThreshold();
}
public:
/** Given samples, compute a model that has the most inliers. Assume the number of samples is larger than or equal to the model sample size.
* inliers, outliers, inlierIndices and outlierIndices are stored
*
*/
M compute(const Samples& samples) {
clear();
int pointsNumber = (int)samples.size();
assert(pointsNumber >= modelSampleSize);
long N = 100000 ;
int sampleCount = 0;
RANSAC_SamplesNumber ransac(modelSampleSize);
M bestModel;
int maxInliersNumber = 0;
bool stop = false;
while (sampleCount < N && sampleCount < maxSampleCount && !stop) {
Samples nsamples;
randomlySampleN(samples, nsamples, modelSampleSize);
M sampleModel = pmc->compute(nsamples);
if (maxInliersNumber == 0) bestModel = sampleModel; //init bestModel
int inliersNumber = pec->calcInliersNumber(sampleModel, samples);
if (verbose) std::cout<<"inliers number: "<<inliersNumber<<std::endl;
if (inliersNumber > maxInliersNumber) {
bestModel = sampleModel;
maxInliersNumber = inliersNumber;
N = ransac.calcN(inliersNumber, pointsNumber);
if (maxInliersNumber > minInliersNum) stop = true;
}
if (verbose) std::cout<<"N: "<<N<<std::endl;
sampleCount ++;
}
if (verbose) std::cout<<"sampleCount: "<<sampleCount<<std::endl;
finalModel = computeUntilConverge(bestModel, maxInliersNumber, samples);
pec->classify(finalModel, samples, inlierIndices, outlierIndices, inliers, outliers);
inliersRate = (double)inliers.size() / samples.size();
return finalModel;
}
const Samples& getInliers() const { return inliers; }
const Samples& getOutliers() const { return outliers; }
const std::vector<int>& getInlierIndices() const { return inlierIndices; }
const std::vector<int>& getOutlierIndices() const { return outlierIndices; }
double getInliersAverageError() const {
return pec->computeAverage(finalModel, inliers);
}
double getInliersRate() const {
return inliersRate;
}
void setVerbose(bool v) {
verbose = v;
}
private:
void randomlySampleN(const Samples& samples, Samples& nsamples, int sampleSize) {
std::vector<int> is = ranis((int)samples.size(), sampleSize);
for (int i = 0; i < sampleSize; i++) {
nsamples.push_back(samples[is[i]]);
}
}
/** from initial model, iterate to find the best model.
*
*/
M computeUntilConverge(M initModel, int initInliersNum, const Samples& samples) {
if (verbose) {
std::cout<<"iterate until converge...."<<std::endl;
std::cout<<"init inliers number: "<<initInliersNum<<std::endl;
}
M bestModel = initModel;
M newModel = initModel;
int lastInliersNum = initInliersNum;
Samples newInliers, newOutliers;
pec->classify(initModel, samples, newInliers, newOutliers);
double lastInlierAverageError = pec->computeAverage(initModel, newInliers);
if (verbose) std::cout<<"init inlier average error: "<<lastInlierAverageError<<std::endl;
while (true && (int)newInliers.size() >= modelSampleSize) {
//update new model with new inliers, the new model does not necessarily have more inliers
newModel = pmc->compute(newInliers);
pec->classify(newModel, samples, newInliers, newOutliers);
int newInliersNum = (int)newInliers.size();
double newInlierAverageError = pec->computeAverage(newModel, newInliers);
if (verbose) {
std::cout<<"new inliers number: "<<newInliersNum<<std::endl;
std::cout<<"new inlier average error: "<<newInlierAverageError<<std::endl;
}
if (newInliersNum < lastInliersNum) break;
if (newInliersNum == lastInliersNum && newInlierAverageError >= lastInlierAverageError) break;
//update best model with the model has more inliers
bestModel = newModel;
lastInliersNum = newInliersNum;
lastInlierAverageError = newInlierAverageError;
}
return bestModel;
}
void clear() {
inliers.clear();
outliers.clear();
inlierIndices.clear();
outlierIndices.clear();
}
private:
GenericModelCalculator<M, S>* pmc;
GenericErrorCaclculator<M, S>* pec;
int modelSampleSize;
int maxSampleCount;
int minInliersNum;
M finalModel;
Samples inliers;
Samples outliers;
std::vector<int> inlierIndices;
std::vector<int> outlierIndices;
double inliersRate;
private:
bool verbose;
};
}
#endif // FVISION_RANSAC_H_
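A hypothetical example of plugging a 2D line model into the framework above; Point2D, LineModel and the two calculator classes are assumptions rather than part of the quoted header, which is assumed here to be saved as "fvision_ransac.h":
#include "fvision_ransac.h"   // the framework header quoted above (file name is an assumption)
#include <vector>
#include <iostream>
struct Point2D   { double x, y; };
struct LineModel { double a, b; };   // y = a*x + b
// Fits a line exactly through the first two samples handed to it; degenerate
// samples with equal x are not handled in this sketch.
class LineCalculator : public fvision::GenericModelCalculator<LineModel, Point2D> {
public:
    LineModel compute(const Samples& s) {
        LineModel m;
        m.a = (s[1].y - s[0].y) / (s[1].x - s[0].x);
        m.b = s[0].y - m.a * s[0].x;
        return m;
    }
};
// Error = vertical distance from a point to the line.
class LineError : public fvision::GenericErrorCaclculator<LineModel, Point2D> {
public:
    double compute(const LineModel& m, const Point2D& p) const {
        double d = m.a * p.x + m.b - p.y;
        return d < 0 ? -d : d;
    }
};
int main() {
    std::vector<Point2D> samples = {{0, 1}, {1, 3}, {2, 5}, {3, 7}, {4, 9}, {2.5, 0}};
    // Ransac takes ownership of the two calculators and deletes them in its destructor.
    fvision::Ransac<LineModel, Point2D> ransac(new LineCalculator(), 2, new LineError());
    ransac.setThreshold(0.5);   // inlier distance threshold t
    LineModel best = ransac.compute(samples);
    std::cout << "a = " << best.a << ", b = " << best.b
              << ", inlier rate = " << ransac.getInliersRate() << std::endl;
}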
Example 2
#include <math.h>
#include "LineParamEstimator.h"
LineParamEstimator::LineParamEstimator(double delta) : m_deltaSquared(delta*delta) {}
/*****************************************************************************/
/*
* Compute the line parameters [n_x,n_y,a_x,a_y]
* The line through the two input points is represented by its normal vector, so that horizontal and vertical lines are handled uniformly (no infinite slope)
* n_x,n_y is the normalized normal vector of the line, and a_x,a_y is an arbitrary point on the line
*/
void LineParamEstimator::estimate(std::vector<Point2D *> &data,
std::vector<double> &parameters)
{
parameters.clear();
if(data.size()<2)
return;
double nx = data[1]->y - data[0]->y;
double ny = data[0]->x - data[1]->x;// if the slope of the original line is k, then the slope of its normal is -1/k
double norm = sqrt(nx*nx + ny*ny);
parameters.push_back(nx/norm);
parameters.push_back(ny/norm);
parameters.push_back(data[0]->x);
parameters.push_back(data[0]->y);
}
/*****************************************************************************/
/*
* Compute the line parameters [n_x,n_y,a_x,a_y]
* Fit the parameters of the line model from the input points using least squares
*/
void LineParamEstimator::leastSquaresEstimate(std::vector<Point2D *> &data,
std::vector<double> &parameters)
{
double meanX, meanY, nx, ny, norm;
double covMat11, covMat12, covMat21, covMat22; // The entries of the symmetric covariance matrix
int i, dataSize = data.size();
parameters.clear();
if(data.size()<2)
return;
meanX = meanY = 0.0;
covMat11 = covMat12 = covMat21 = covMat22 = 0;
for(i=0; i<dataSize; i++) {
meanX +=data[i]->x;
meanY +=data[i]->y;
covMat11 +=data[i]->x * data[i]->x;
covMat12 +=data[i]->x * data[i]->y;
covMat22 +=data[i]->y * data[i]->y;
}
meanX/=dataSize;
meanY/=dataSize;
covMat11 -= dataSize*meanX*meanX;
covMat12 -= dataSize*meanX*meanY;
covMat22 -= dataSize*meanY*meanY;
covMat21 = covMat12;
if(covMat11<1e-12) {
nx = 1.0;
ny = 0.0;
}
else { //lamda1 is the largest eigenvalue of the covariance matrix
//and is used to compute the eigenvector corresponding to the smallest
//eigenvalue, which isn't computed explicitly.
double lamda1 = (covMat11 + covMat22 + sqrt((covMat11-covMat22)*(covMat11-covMat22) + 4*covMat12*covMat12)) / 2.0;
nx = -covMat12;
ny = lamda1 - covMat22;
norm = sqrt(nx*nx + ny*ny);
nx/=norm;
ny/=norm;
}
parameters.push_back(nx);
parameters.push_back(ny);
parameters.push_back(meanX);
parameters.push_back(meanY);
}
/*****************************************************************************/
/*
* Given the line parameters [n_x,n_y,a_x,a_y] check if
* [n_x, n_y] dot [data.x-a_x, data.y-a_y] < m_delta
* The dot product with the line's normal measures how well the test point matches the line;
* the smaller the result, the better the match; zero means the point lies exactly on the line
*/
bool LineParamEstimator::agree(std::vector<double> &parameters, Point2D &data)
{
double signedDistance = parameters[0]*(data.x-parameters[2]) + parameters[1]*(data.y-parameters[3]);
return ((signedDistance*signedDistance) < m_deltaSquared);
}
The RANSAC code that searches for the matching (consensus) set is as follows:
/*****************************************************************************/
template<class T, class S>
double Ransac<T,S>::compute(std::vector<S> &parameters,
ParameterEsitmator<T,S> *paramEstimator,
std::vector<T> &data,
int numForEstimate)
{
std::vector<T *> leastSquaresEstimateData;
int numDataObjects = data.size();
int numVotesForBest = -1;
int *arr = new int[numForEstimate];// numForEstimate is the minimum number of points needed to fit the model; for the line in this example it is 2
short *curVotes = new short[numDataObjects]; //one if data[i] agrees with the current model, otherwise zero
short *bestVotes = new short[numDataObjects]; //one if data[i] agrees with the best model, otherwise zero
//there are less data objects than the minimum required for an exact fit
if(numDataObjects < numForEstimate)
return 0;
// Enumerate all possible lines and keep the one with the smallest error. For a 100-point line fit this takes about 100*99*0.5 = 4950 evaluations, which is clearly expensive; in practice a randomly chosen subset is used instead.
computeAllChoices(paramEstimator,data,numForEstimate,
bestVotes, curVotes, numVotesForBest, 0, data.size(), numForEstimate, 0, arr);
//compute the least squares estimate using the largest sub set
for(int j=0; j<numDataObjects; j++) {
if(bestVotes[j])
leastSquaresEstimateData.push_back(&(data[j]));
}
// refit the model to the inliers using least squares
paramEstimator->leastSquaresEstimate(leastSquaresEstimateData,parameters);
delete [] arr;
delete [] bestVotes;
delete [] curVotes;
return (double)leastSquaresEstimateData.size()/(double)numDataObjects;
}
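Finally, a hypothetical call site for the compute() above. It assumes compute() is a static member of a Ransac<T,S> class declared in some Ransac.h header (neither of which is shown in the post) and that LineParamEstimator derives from ParameterEsitmator<Point2D, double> as in the reconstruction above:
#include <vector>
#include <iostream>
#include "LineParamEstimator.h"
#include "Ransac.h"   // assumed header declaring the Ransac<T,S> class used above
int main() {
    // Points roughly on y = 2x + 1, plus a couple of gross outliers.
    std::vector<Point2D> data;
    for (int i = 0; i < 20; ++i)
        data.push_back(Point2D(i, 2.0 * i + 1.0));
    data.push_back(Point2D(3.0, 17.0));
    data.push_back(Point2D(12.0, -4.0));
    LineParamEstimator estimator(0.5);   // delta: inlier distance threshold
    std::vector<double> lineParams;      // output: [n_x, n_y, a_x, a_y]
    // Returns the fraction of points that agreed with the best model.
    double inlierFraction = Ransac<Point2D, double>::compute(lineParams, &estimator, data, 2);
    std::cout << "normal = (" << lineParams[0] << ", " << lineParams[1] << "), "
              << "point on line = (" << lineParams[2] << ", " << lineParams[3] << "), "
              << "inlier fraction = " << inlierFraction << std::endl;
}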