OpenCV: Deleting Small-Area Regions, Binary Image Segmentation, and Labeling Connected Components
[Please respect the original work; cite the source when reposting] http://blog.csdn.net/guyuealian/article/details/78142749
This blog previously implemented the same functionality in Matlab in "Matlab Morphological Image Processing: Binary Image Segmentation, Labeling Connected Components and Centroids, Deleting Connected Components" (http://blog.csdn.net/guyuealian/article/details/71440949). Here the same idea is implemented with OpenCV: the image is binarized, each connected region is marked with a red bounding rectangle, and, to reduce noise, small regions are deleted. In the code below, any connected region whose area (pixel count) is under 100 is treated as a noise blob and removed (i.e., filled with the black background color). A GIF animation was made so you can see the effect:
The OpenCV reference code is as follows:
#include "stdafx.h"
#include <iostream>
#include <vector>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;

// Sort contours by size (boundary point count) in ascending order
bool ascendSort(const vector<Point>& a, const vector<Point>& b) {
    return a.size() < b.size();
}

// Sort contours by size (boundary point count) in descending order
bool descendSort(const vector<Point>& a, const vector<Point>& b) {
    return a.size() > b.size();
}

int main() {
    Mat srcImage = imread("D:\\OpencvTest\\123.jpg");
    Mat grayImage;
    Mat thresholdImage;
    cvtColor(srcImage, grayImage, CV_BGR2GRAY);
    threshold(grayImage, thresholdImage, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY);
    vector< vector<Point> > contours;   // holds all contours
    vector< vector<Point> > contours2;  // holds contours with fewer than 100 points
    vector<Point> tempV;                // scratch contour
    // Note: in OpenCV 2.x, findContours modifies its input image
    findContours(thresholdImage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
    // Sort the contours by size, ascending
    sort(contours.begin(), contours.end(), ascendSort);
    vector< vector<Point> >::iterator itc = contours.begin();
    int i = 0;
    char str[32];
    while (itc != contours.end()) {
        // Bounding rectangle of the contour
        Rect rect = boundingRect(*itc);
        int x = rect.x;
        int y = rect.y;
        int w = rect.width;
        int h = rect.height;
        // Draw the bounding rectangle in red
        cv::rectangle(srcImage, rect, Scalar(0, 0, 255), 1);
        // Show and save the intermediate result
        sprintf(str, "%d.jpg", i++);
        cv::imshow("srcImage", srcImage);
        imwrite(str, srcImage);
        waitKey(1000);
        if (itc->size() < 100) {
            // Collect the small region (here, the four corners of its bounding box)
            // into contours2
            tempV.push_back(Point(x, y));
            tempV.push_back(Point(x, y + h));
            tempV.push_back(Point(x + w, y + h));
            tempV.push_back(Point(x + w, y));
            contours2.push_back(tempV);
            /* Equivalently, the five statements above can be replaced by:
               contours2.push_back(*itc); */
            // Delete the small regions, i.e., fill them with black
            cv::drawContours(srcImage, contours2, -1, Scalar(0, 0, 0), CV_FILLED);
        }
        // Show and save the result after deletion
        sprintf(str, "%d.jpg", i++);
        cv::imshow("srcImage", srcImage);
        imwrite(str, srcImage);
        cv::waitKey(100);
        tempV.clear();
        ++itc;
    }
    return 0;
}
[2] Usage of findContours:
#include "stdafx.h"
#include <iostream>
#include <vector>
#include <algorithm>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace std;
using namespace cv;

// Sort contours by size (boundary point count) in ascending order
bool ascendSort(const vector<Point>& a, const vector<Point>& b) {
    return a.size() < b.size();
}

// Sort contours by size (boundary point count) in descending order
bool descendSort(const vector<Point>& a, const vector<Point>& b) {
    return a.size() > b.size();
}
// Hand-rolled conversion from a grayscale image to a 3-channel BGR image
cv::Mat gray2BGR(cv::Mat grayImg) {
    if (grayImg.channels() == 3)
        return grayImg;
    cv::Mat bgrImg = cv::Mat::zeros(grayImg.size(), CV_8UC3);
    std::vector<cv::Mat> bgr_channels;
    cv::split(bgrImg, bgr_channels);
    // Copy the gray plane into all three channels
    bgr_channels.at(0) = grayImg;
    bgr_channels.at(1) = grayImg;
    bgr_channels.at(2) = grayImg;
    cv::merge(bgr_channels, bgrImg);
    return bgrImg;
}
// Custom drawImage function, similar in spirit to OpenCV's drawContours
cv::Mat drawImage(cv::Mat image, vector< vector<Point> > pointV) {
    cv::Mat destImage = image.clone();
    if (destImage.channels() == 1) {
        destImage = gray2BGR(destImage);
    }
    for (size_t i = 0; i < pointV.size(); i++) {
        for (size_t j = 0; j < pointV.at(i).size(); j++) {
            // Paint each contour point in a shade of red that darkens per contour index
            cv::Point point = pointV.at(i).at(j);
            destImage.at<Vec3b>(point) = cv::Vec3b(0, 0, saturate_cast<uchar>(255 - i * 5));
        }
    }
    return destImage;
}
int main() {
    Mat srcImage = imread("D:\\OpencvTest\\mask5.jpg");
    cv::imshow("srcImage", srcImage);
    Mat thresholdImage;
    Mat grayImage;
    cvtColor(srcImage, grayImage, CV_BGR2GRAY);
    threshold(grayImage, thresholdImage, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY);
    cv::Mat mask = thresholdImage.clone();
    // In OpenCV 2.x, findContours modifies its input image, so pass a fresh
    // clone of the mask to each call instead of reusing the same Mat
    // (1) CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE
    vector< vector<Point> > contours1;
    findContours(mask.clone(), contours1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    cv::Mat destImage = drawImage(mask, contours1);
    imshow("destImage", destImage);
    // (2) CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE
    vector< vector<Point> > contours2;
    findContours(mask.clone(), contours2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    cv::Mat destImage2 = drawImage(mask, contours2);
    imshow("destImage2", destImage2);
    // (3) CV_RETR_LIST, CV_CHAIN_APPROX_NONE
    vector< vector<Point> > contours3;
    findContours(mask.clone(), contours3, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
    cv::Mat destImage3 = drawImage(mask, contours3);
    imshow("destImage3", destImage3);
    // (4) CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE
    vector< vector<Point> > contours4;
    findContours(mask.clone(), contours4, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
    cv::Mat destImage4 = drawImage(mask, contours4);
    imshow("destImage4", destImage4);
    cv::waitKey(0);
    return 0;
}
If you found this post helpful, your support would be much appreciated; I will keep working hard~