Region growing generally starts from a set of seed points and, following the principle of disparity continuity, merges neighboring pixels into the seed regions, expanding them until a preset termination condition is reached. Compared with conventional local matching algorithms, region growing adds a disparity-continuity constraint, which greatly improves both matching speed and accuracy.
The first step of region-growing dense matching is to match a set of corresponding point pairs as initial seeds; whether the seeds are correct directly affects the growing result. Since region growing can produce a fairly dense set of matches from only a small number of initial seeds, the seeds can be obtained by feature matching or by gray-level matching.
Once the initial seed points of the stereo pair are available, they can be propagated to their neighbors according to a 4-neighborhood or 8-neighborhood connectivity rule. The basic idea of region growing is that if a point P_b in the left image is known to correspond to a point P_m in the right image, then the corresponding point of a pixel P_r adjacent to P_b must lie in the vicinity of P_m. The matching cost is computed inside a 4×4 or 3×3 (pixel) search window around the target pixel; if the minimum matching cost satisfies the threshold, the corresponding point is accepted as the match of the current growing point, and the current point is pushed onto the stack of growing seeds, as in the following expression:
C(p, q) = f\big(I_l(W_p),\, I_r(W_q)\big)
where W_p is the matching window centered at p, I_l and I_r denote the gray values of pixels in the left and right images respectively, and f is a gray-level similarity function, usually computed with ZNCC. The basic strategy for growing from a seed point toward the lower right of the image pair is shown in the figure below; the search window size there is 4×4, i.e. the filled region in the figure.
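For reference, the zero-mean normalized cross-correlation between the window around p in the left image and the window around q in the right image is the standard expression

\mathrm{ZNCC}(p,q) = \frac{\sum_{(i,j)\in W}\big(I_l(p+(i,j))-\bar I_l\big)\big(I_r(q+(i,j))-\bar I_r\big)}{\sqrt{\sum_{(i,j)\in W}\big(I_l(p+(i,j))-\bar I_l\big)^2}\;\sqrt{\sum_{(i,j)\in W}\big(I_r(q+(i,j))-\bar I_r\big)^2}}

where W is the set of window offsets and \bar I_l, \bar I_r are the mean gray values of the two windows; the result lies in [-1, 1], and values close to 1 indicate a good match.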

The implementation is as follows:
1) First, extract the seed points; matched feature points, e.g. from SIFT, SURF or ORB, can be used;
2) With the seed points extracted, the region-growing match can be performed. The region-growing code is given below (it relies on two helper functions, ZNCC and XYCheck, which are sketched after the listing):
#include <cfloat>
#include <cmath>
#include <queue>
#include <vector>
#include <opencv2/opencv.hpp>

namespace {
    // Euclidean distance between two points, used here as the "disparity" value.
    template<typename T>
    inline double Dist(const cv::Point_<T>& p1, const cv::Point_<T>& p2)
    {
        return std::sqrt(std::pow(p1.x - p2.x, 2) + std::pow(p1.y - p2.y, 2));
    }

    struct SeedPoint
    {
        cv::Point2i l_pt;   // position in the left image
        cv::Point2i r_pt;   // corresponding position in the right image
        double diff;        // matching cost of this pair (smaller is better)
    };

    // Order the priority queue so that the seed with the smallest cost is grown first.
    bool operator<(const SeedPoint& a, const SeedPoint& b)
    {
        return a.diff > b.diff;
    }
}

void RegionGrowMatch(
    const cv::Mat& img1, const std::vector<cv::Point2i>& points1,
    const cv::Mat& img2, const std::vector<cv::Point2i>& points2,
    int m_size, cv::Mat& disp)
{
    if (img1.empty() || img2.empty()
        || img1.type() != CV_8UC1 || img2.type() != CV_8UC1
        || points1.empty() || points1.size() != points2.size()
        || m_size < 1) return;

    cv::Mat map1(img1.size(), CV_8UC1, cv::Scalar::all(0)); // left match map: 0 = unmatched, 1 = matched
    cv::Mat map2(img2.size(), CV_8UC1, cv::Scalar::all(0)); // right match map
    disp = cv::Mat(img1.size(), CV_8UC1, cv::Scalar::all(0)); // left disparity map

    // Push the initial seeds into the priority queue.
    std::priority_queue<SeedPoint> seeds_queue;
    for (size_t i = 0; i < points1.size(); ++i)
    {
        SeedPoint seed;
        seed.l_pt = points1[i];
        seed.r_pt = points2[i];
        seed.diff = ZNCC(img1, seed.l_pt, img2, seed.r_pt, m_size);
        seeds_queue.push(seed);
        map1.ptr<uchar>(seed.l_pt.y)[seed.l_pt.x] = 1;
        map2.ptr<uchar>(seed.r_pt.y)[seed.r_pt.x] = 1;
        disp.ptr<uchar>(seed.l_pt.y)[seed.l_pt.x] = Dist(seed.l_pt, seed.r_pt);
    }

    // Grow until the seed queue is empty.
    while (!seeds_queue.empty())
    {
        SeedPoint seed = seeds_queue.top();
        seeds_queue.pop();
        // Visit the 8-neighborhood of the seed in the left image.
        for (int i = -1; i <= 1; i++)
        {
            for (int j = -1; j <= 1; j++)
            {
                cv::Point2i p1(seed.l_pt.x + j, seed.l_pt.y + i);
                if (XYCheck(img1, p1) &&
                    map1.ptr<uchar>(p1.y)[p1.x] == 0)
                {
                    double diff_min = FLT_MAX;
                    cv::Point2i p2_min;
                    // Search the 3x3 neighborhood of the seed in the right image.
                    for (int ii = -1; ii <= 1; ii++)
                    {
                        for (int jj = -1; jj <= 1; jj++)
                        {
                            cv::Point2i p2(seed.r_pt.x + jj, seed.r_pt.y + ii);
                            if (XYCheck(img2, p2) &&
                                map2.ptr<uchar>(p2.y)[p2.x] == 0)
                            {
                                double diff = ZNCC(img1, p1, img2, p2, m_size);
                                if (diff < diff_min && diff < 2 && diff > 1) // reject candidates whose cost is too large
                                {
                                    diff_min = diff;
                                    p2_min = p2;
                                }
                            }
                        }
                    }
                    if (diff_min < 2)
                    {
                        // Accept the best candidate and push it as a new seed.
                        SeedPoint new_seed;
                        new_seed.diff = diff_min;
                        new_seed.l_pt = p1;
                        new_seed.r_pt = p2_min;
                        seeds_queue.push(new_seed);
                        disp.ptr<uchar>(p1.y)[p1.x] = Dist(p1, p2_min);
                        map1.ptr<uchar>(p1.y)[p1.x] = 1;
                        map2.ptr<uchar>(p2_min.y)[p2_min.x] = 1;
                    }
                }
            }
        }
    }
}
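The listing above calls two helper functions, ZNCC and XYCheck, which belong to the trimmed-out code and are not shown in this post. The sketch below is one possible implementation, under two assumptions: m_size is treated as the half-size of the correlation window, and ZNCC returns the cost 2 - ncc (where ncc is the zero-mean normalized cross-correlation defined earlier), so that smaller values mean better matches and a positive correlation yields a cost between 1 and 2, which is what the thresholds diff > 1 and diff < 2 in RegionGrowMatch appear to assume. The original helpers may well differ.

// Assumed helpers; uses the same headers as the listing above
// (<opencv2/opencv.hpp>, <cmath>, <cfloat>).

// Returns true if point p lies inside the image.
inline bool XYCheck(const cv::Mat& img, const cv::Point2i& p)
{
    return p.x >= 0 && p.y >= 0 && p.x < img.cols && p.y < img.rows;
}

// ZNCC-based matching cost between the (2*m_size+1) x (2*m_size+1) windows
// centered on p1 in img1 and p2 in img2 (both CV_8UC1).
// Returns 2 - ncc, so smaller means more similar; if either window leaves
// the image or has zero variance, a large cost is returned instead.
inline double ZNCC(const cv::Mat& img1, const cv::Point2i& p1,
                   const cv::Mat& img2, const cv::Point2i& p2, int m_size)
{
    if (p1.x - m_size < 0 || p1.y - m_size < 0 ||
        p1.x + m_size >= img1.cols || p1.y + m_size >= img1.rows ||
        p2.x - m_size < 0 || p2.y - m_size < 0 ||
        p2.x + m_size >= img2.cols || p2.y + m_size >= img2.rows)
        return DBL_MAX;

    const int n = (2 * m_size + 1) * (2 * m_size + 1);
    double sum1 = 0.0, sum2 = 0.0;
    for (int dy = -m_size; dy <= m_size; ++dy)
        for (int dx = -m_size; dx <= m_size; ++dx)
        {
            sum1 += img1.ptr<uchar>(p1.y + dy)[p1.x + dx];
            sum2 += img2.ptr<uchar>(p2.y + dy)[p2.x + dx];
        }
    const double mean1 = sum1 / n, mean2 = sum2 / n;

    double num = 0.0, den1 = 0.0, den2 = 0.0;
    for (int dy = -m_size; dy <= m_size; ++dy)
        for (int dx = -m_size; dx <= m_size; ++dx)
        {
            const double v1 = img1.ptr<uchar>(p1.y + dy)[p1.x + dx] - mean1;
            const double v2 = img2.ptr<uchar>(p2.y + dy)[p2.x + dx] - mean2;
            num  += v1 * v2;
            den1 += v1 * v1;
            den2 += v2 * v2;
        }
    if (den1 <= 0.0 || den2 <= 0.0) return DBL_MAX; // flat window, correlation undefined
    return 2.0 - num / std::sqrt(den1 * den2);
}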
A usage example is shown below:
void RegionGrowingTest()
{
    cv::Mat img1 = cv::imread("cones\\im2.png");
    cv::Mat img2 = cv::imread("cones\\im6.png");

    // Detect SIFT key points and compute descriptors in both images.
    std::vector<cv::KeyPoint> key_points1, key_points2;
    cv::Mat descriptors1, descriptors2;
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create(1000);
    sift->detectAndCompute(img1, cv::Mat(), key_points1, descriptors1);
    sift->detectAndCompute(img2, cv::Mat(), key_points2, descriptors2);

    // Brute-force kNN matching followed by Lowe's ratio test.
    cv::BFMatcher matcher;
    std::vector<std::vector<cv::DMatch>> matches;
    matcher.knnMatch(descriptors1, descriptors2, matches, 2);
    std::vector<cv::DMatch> good_matches;
    for (const auto& m : matches)
    {
        if (m[0].distance < 0.6 * m[1].distance)
        {
            good_matches.push_back(m[0]);
        }
    }
    cv::Mat img_matches;
    cv::drawMatches(img1, key_points1, img2, key_points2, matches, img_matches);
    cv::imshow("bf_knn match", img_matches);

    std::vector<cv::KeyPoint> good_key_points1, good_key_points2;
    std::vector<cv::Point2f> good_points1, good_points2;
    for (const auto& m : good_matches)
    {
        good_key_points1.push_back(key_points1[m.queryIdx]);
        good_key_points2.push_back(key_points2[m.trainIdx]);
        good_points1.push_back(good_key_points1.back().pt);
        good_points2.push_back(good_key_points2.back().pt);
    }

    // Estimate the fundamental matrix with RANSAC and use the inlier flags
    // to remove mismatches; re-index the surviving key points and matches.
    std::vector<uchar> ransac_status;
    cv::Mat Fundamental = cv::findFundamentalMat(
        good_points1, good_points2, ransac_status, cv::FM_RANSAC);
    std::vector<cv::KeyPoint> ran_key_points1, ran_key_points2;
    std::vector<cv::DMatch> ran_matches;
    int index = 0;
    for (size_t i = 0; i < good_matches.size(); ++i)
    {
        if (ransac_status[i] != 0)
        {
            ran_key_points1.push_back(good_key_points1[i]);
            ran_key_points2.push_back(good_key_points2[i]);
            good_matches[i].queryIdx = index;
            good_matches[i].trainIdx = index;
            ran_matches.push_back(good_matches[i]);
            index++;
        }
    }
    cv::Mat img_ransac_matches;
    cv::drawMatches(img1, ran_key_points1, img2, ran_key_points2, ran_matches, img_ransac_matches);
    cv::imshow("ransac match", img_ransac_matches);

    // The RANSAC inliers become the initial seed points for region growing.
    std::vector<cv::Point2i> seed_points1, seed_points2;
    for (size_t i = 0; i < ran_key_points1.size(); ++i)
    {
        seed_points1.push_back({ (int)ran_key_points1[i].pt.x,
                                 (int)ran_key_points1[i].pt.y });
        seed_points2.push_back({ (int)ran_key_points2[i].pt.x,
                                 (int)ran_key_points2[i].pt.y });
    }

    cv::Mat gray1, gray2;
    cv::cvtColor(img1, gray1, cv::COLOR_BGR2GRAY);
    cv::cvtColor(img2, gray2, cv::COLOR_BGR2GRAY);
    cv::Mat disp;
    RegionGrowMatch(gray1, seed_points1, gray2, seed_points2, 3, disp);

    cv::Mat disp_show;
    cv::normalize(disp, disp_show, 0, 255, cv::NORM_MINMAX);
    cv::imshow("disp", disp_show);
    cv::waitKey(0);
}
My knowledge is limited, so corrections are welcome. The code has been trimmed somewhat and has not been recompiled; if there are errors, please debug them yourself. For questions, contact me by email at 1299771369@qq.com.