Guided Filter: Principle and C++ Implementation

Preface

The guided filter is a well-known edge-preserving smoothing algorithm proposed by Kaiming He during his PhD. His homepage provides the original paper and a Matlab implementation. His dark-channel-prior dehazing algorithm also won the CVPR 2009 Best Paper Award, and his output in computer vision in recent years has been prolific; for his research activity, see http://kaiminghe.com/

Advantages:

1. A very wide range of applications;

2. It avoids the gradient-reversal artifacts of the bilateral filter, so filtered images retain detail better;

3. Most importantly, it is fast: the time complexity is O(N), where N is the number of pixels, i.e. independent of the window size. Traditional bilateral filtering, by contrast, is very slow. There is an even faster variant, the Fast Guided Filter: https://arxiv.org/abs/1505.00996

Applications: image enhancement, image fusion, dehazing, denoising, feathering, skin smoothing, 3D reconstruction, and more.

If you only need to use the algorithm, OpenCV 3.0 and MATLAB R2014 already provide a guided-filter API that can be called directly.

The OpenCV API is void cv::ximgproc::guidedFilter(InputArray guide, InputArray src, OutputArray dst, int radius, double eps, int dDepth = -1); see the guidedFilter entry in the OpenCV documentation for details.

Be aware, though, that guidedFilter() lives in the ximgproc module of opencv_contrib, which is not included in the prebuilt OpenCV binaries from the official site. You need to download both the OpenCV source and the contrib module source and build them yourself; see any guide on building OpenCV 3.1.0 with the contrib modules.

 

Principle

To be honest, the theory behind the guided filter is fairly deep and not easy to explain; I have read the paper and many blog posts and still only half understand it. Fortunately the paper gives pseudocode, so the implementation is straightforward. I wrote one based on OpenCV and, after testing, it produces the same results as the author's Matlab implementation.

Rather than risk misleading readers, I will write up the theory in detail once I fully understand it myself. For now, here are a few blogs that explain the principle well:

1. https://www.cnblogs.com/riddick/p/8367591.html

2. 白马负金羁: An analysis and implementation of the Guided Filter

3. https://www.cnblogs.com/yzl050819/p/7515250.html

 

Pseudocode:

 
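The screenshot of the paper's pseudocode did not survive here. Restated, Algorithm 1 of the paper is as follows, where f_mean denotes a mean (box) filter with window radius r and .* and ./ are element-wise operations:

```text
Algorithm 1. Guided Filter
Input:  filtering input image p, guidance image I, radius r, regularization eps
Output: filtering output q

1: mean_I  = f_mean(I)          mean_p  = f_mean(p)
   corr_I  = f_mean(I .* I)     corr_Ip = f_mean(I .* p)
2: var_I   = corr_I  - mean_I .* mean_I
   cov_Ip  = corr_Ip - mean_I .* mean_p
3: a = cov_Ip ./ (var_I + eps)
   b = mean_p - a .* mean_I
4: mean_a  = f_mean(a)          mean_b  = f_mean(b)
5: q = mean_a .* I + mean_b
```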

C++ Implementation Based on OpenCV

The implementation comes in two variants: using a grayscale image as the guidance, and using a color image as the guidance. Note that in both variants the filtering input must be a single-channel image. To process a color image, simply split() it in main(), filter each channel separately, and merge() the results. The author also points out in the paper that a color guidance image preserves edge detail better, which my results below confirm.

Grayscale image as the guidance (the main() below can handle color input)

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>

//
//   GUIDEDFILTER   O(1) time implementation of guided filter.
//   - guidance image : I (should be a gray-scale / single-channel image)
//   - filtering input image : p (should be a gray-scale / single-channel image)
//   - local window radius : r
//   - regularization parameter : eps
//
cv::Mat GuidedFilter(cv::Mat& I, cv::Mat& p, int r, double eps){
	int wsize = 2 * r + 1;
	// convert to double in [0,1]; note that I and p are converted in place and restored at the end
	I.convertTo(I, CV_64F, 1.0 / 255.0);
	p.convertTo(p, CV_64F, 1.0 / 255.0);

	//meanI=fmean(I)
	cv::Mat mean_I;
	cv::boxFilter(I, mean_I, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//meanp=fmean(p)
	cv::Mat mean_p;
	cv::boxFilter(p, mean_p, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//corrI=fmean(I.*I)
	cv::Mat mean_II;
	mean_II = I.mul(I);
	cv::boxFilter(mean_II, mean_II, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//corrIp=fmean(I.*p)
	cv::Mat mean_Ip;
	mean_Ip = I.mul(p);
	cv::boxFilter(mean_Ip, mean_Ip, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//varI=corrI-meanI.*meanI
	cv::Mat var_I, mean_mul_I;
	mean_mul_I = mean_I.mul(mean_I);
	cv::subtract(mean_II, mean_mul_I, var_I);

	//covIp=corrIp-meanI.*meanp
	cv::Mat cov_Ip;
	cv::subtract(mean_Ip, mean_I.mul(mean_p), cov_Ip);

	//a=covIp./(varI+eps)
	//b=meanp-a.*meanI
	cv::Mat a, b;
	cv::divide(cov_Ip, (var_I + eps), a);
	cv::subtract(mean_p, a.mul(mean_I), b);

	//meana=fmean(a)
	//meanb=fmean(b)
	cv::Mat mean_a, mean_b;
	cv::boxFilter(a, mean_a, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(b, mean_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//q=meana.*I+meanb
	cv::Mat q;
	q = mean_a.mul(I) + mean_b;

	// convert back to 8-bit
	I.convertTo(I, CV_8U, 255);
	p.convertTo(p, CV_8U, 255);
	q.convertTo(q, CV_8U, 255);

	return q;
}

int main(){
	cv::Mat src = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman.jpg");
	if (src.empty()){
		return -1;
	}

	//if (src.channels() > 1)
	//	cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);

	// timing test of the hand-written GuidedFilter
	double t2 = (double)cv::getTickCount();

	cv::Mat dst1, src_input, I;
	src.copyTo(src_input);
	if (src.channels() > 1)
	   cv::cvtColor(src, I, cv::COLOR_BGR2GRAY); // if the guidance is a color image, convert it to grayscale (imread loads BGR)
	std::vector<cv::Mat> p, q;
	if (src.channels() > 1){             // color input: filter each channel separately
		cv::split(src_input, p);
		for (int i = 0; i < src.channels(); ++i){
			dst1 = GuidedFilter(I, p[i], 9, 0.1*0.1);
			q.push_back(dst1);
		}
		cv::merge(q, dst1);
	}
	else{                               // grayscale input
		src.copyTo(I);
		dst1 = GuidedFilter(I, src_input, 9, 0.1*0.1);
	}

	t2 = (double)cv::getTickCount() - t2;
	double time2 = (t2 * 1000.) / ((double)cv::getTickFrequency());
	std::cout << "MyGuidedFilter_process=" << time2 << " ms. " << std::endl << std::endl;

	cv::namedWindow("GuidedImg", cv::WINDOW_NORMAL);
	cv::imshow("GuidedImg", I);
	cv::namedWindow("src", cv::WINDOW_NORMAL);
	cv::imshow("src", src);
	cv::namedWindow("GuidedFilter_box", cv::WINDOW_NORMAL);
	cv::imshow("GuidedFilter_box", dst1);
	cv::waitKey(0);
}

Results (window radius = 9, eps = 0.1*0.1)

(figures: guidance image | input image | result image)

 

(figures: guidance image | input image | result image)

 

 

Color image as the guidance (filters a single-channel input only)

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>


///
//   GUIDEDFILTER_COLOR   O(1) time implementation of guided filter using a color image as the guidance.
//
//   - guidance image : I (should be a color (BGR) image)
//   - filtering input image : p (should be a gray-scale / single-channel image)
//   - local window radius : r
//   - regularization parameter : eps
///
cv::Mat GuidedFilter_Color(cv::Mat& I, cv::Mat& p, int r, double eps){
	int wsize = 2 * r + 1;
	CV_Assert(I.channels() == 3); // the guidance must be a color image

	// convert to double in [0,1]; note that I and p are converted in place and restored at the end
	I.convertTo(I, CV_64F, 1.0 / 255.0);
	p.convertTo(p, CV_64F, 1.0 / 255.0);

	// split the guidance image into channels (OpenCV stores them in B, G, R order)
	std::vector<cv::Mat> rgb;
	cv::split(I, rgb);

	//meanI=fmean(I)
	cv::Mat mean_I_r, mean_I_g, mean_I_b;
	cv::boxFilter(rgb[0], mean_I_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[1], mean_I_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[2], mean_I_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//meanp=fmean(p)
	cv::Mat mean_p;
	cv::boxFilter(p, mean_p, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//corrI=fmean(I.*I)
	cv::Mat mean_II_rr, mean_II_rg, mean_II_rb, mean_II_gb, mean_II_gg, mean_II_bb;
	cv::boxFilter(rgb[2].mul(rgb[2]), mean_II_rr, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[2].mul(rgb[1]), mean_II_rg, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[2].mul(rgb[0]), mean_II_rb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[1].mul(rgb[0]), mean_II_gb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[1].mul(rgb[1]), mean_II_gg, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(rgb[0].mul(rgb[0]), mean_II_bb, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//corrIp=fmean(I.*p)
	cv::Mat mean_Ip_r, mean_Ip_g, mean_Ip_b;
	mean_Ip_b = rgb[0].mul(p);
	mean_Ip_g = rgb[1].mul(p);
	mean_Ip_r = rgb[2].mul(p);
	cv::boxFilter(mean_Ip_b, mean_Ip_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(mean_Ip_g, mean_Ip_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(mean_Ip_r, mean_Ip_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//covIp=corrIp-meanI.*meanp
	cv::Mat cov_Ip_r, cov_Ip_g, cov_Ip_b;
	cv::subtract(mean_Ip_r, mean_I_r.mul(mean_p), cov_Ip_r);
	cv::subtract(mean_Ip_g, mean_I_g.mul(mean_p), cov_Ip_g);
	cv::subtract(mean_Ip_b, mean_I_b.mul(mean_p), cov_Ip_b);

	//varI=corrI-meanI.*meanI
	//variance of I in each local patch : the matrix Sigma in Eqn(14).
	//Note the variance in each local patch is a 3x3 symmetric matrix :
	//           rr, rg, rb
	//   Sigma = rg, gg, gb
	//           rb, gb, bb
	cv::Mat var_I_rr, var_I_rg, var_I_rb, var_I_gb, var_I_gg, var_I_bb;
	cv::subtract(mean_II_rr, mean_I_r.mul(mean_I_r), var_I_rr);
	cv::subtract(mean_II_rg, mean_I_r.mul(mean_I_g), var_I_rg);
	cv::subtract(mean_II_rb, mean_I_r.mul(mean_I_b), var_I_rb);
	cv::subtract(mean_II_gb, mean_I_g.mul(mean_I_b), var_I_gb);
	cv::subtract(mean_II_gg, mean_I_g.mul(mean_I_g), var_I_gg);
	cv::subtract(mean_II_bb, mean_I_b.mul(mean_I_b), var_I_bb);

	//a=covIp./(varI+eps)
	int cols = p.cols;
	int rows = p.rows;
	cv::Mat Mat_a = cv::Mat::zeros(rows, cols,CV_64FC3);
	std::vector<cv::Mat> a;
	cv::split(Mat_a, a);
	double rr, rg, rb, gg, gb, bb;
	for (int i = 0; i < rows; ++i){
		for (int j = 0; j < cols; ++j){
			rr = var_I_rr.at<double>(i, j); rg = var_I_rg.at<double>(i, j); rb = var_I_rb.at<double>(i, j);
			gg = var_I_gg.at<double>(i, j); gb = var_I_gb.at<double>(i, j);
			bb = var_I_bb.at<double>(i, j);
			cv::Mat sigma = (cv::Mat_<double>(3, 3) << rr, rg, rb,
													   rg, gg, gb,
													   rb, gb, bb);
			cv::Mat cov_Ip = (cv::Mat_<double>(1, 3) << cov_Ip_r.at<double>(i, j), cov_Ip_g.at<double>(i, j), cov_Ip_b.at<double>(i, j));
			cv::Mat eye = cv::Mat::eye(3, 3, CV_64FC1);
			sigma = sigma + eps * eye;
			cv::Mat sigma_inv = sigma.inv(); // invert the 3x3 covariance matrix
			cv::Mat tmp = cov_Ip * sigma_inv;
			a[2].at<double>(i, j) = tmp.at<double>(0, 0); //r
			a[1].at<double>(i, j) = tmp.at<double>(0, 1); //g
			a[0].at<double>(i, j) = tmp.at<double>(0, 2); //b
		}
	}

	//b=meanp-a.*meanI
	cv::Mat b = mean_p - a[0].mul(mean_I_b) - a[1].mul(mean_I_g) - a[2].mul(mean_I_r);

	//meana=fmean(a)
	//meanb=fmean(b)
	cv::Mat mean_a_r, mean_a_g, mean_a_b, mean_b;
	cv::boxFilter(a[0], mean_a_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(a[1], mean_a_g, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(a[2], mean_a_r, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter
	cv::boxFilter(b, mean_b, -1, cv::Size(wsize, wsize), cv::Point(-1, -1), true, cv::BORDER_REFLECT); // box filter

	//q=meana.*I+meanb
	cv::Mat q = mean_a_r.mul(rgb[2]) + mean_a_g.mul(rgb[1]) + mean_a_b.mul(rgb[0]) + mean_b;

	// convert back to 8-bit
	I.convertTo(I, CV_8UC3, 255);
	p.convertTo(p, CV_8U, 255);
	q.convertTo(q, CV_8U, 255);

	return q;
}

int main(){
	cv::Mat I = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman1.jpeg");
	cv::Mat P = cv::imread("I:\\Learning-and-Practice\\2019Change\\Image process algorithm\\Img\\woman1.jpeg");
	if (I.empty() || P.empty()){
		return -1;
	}
	if (P.channels() > 1)
		cv::cvtColor(P, P, cv::COLOR_BGR2GRAY); // the filtering input must be single channel (imread loads BGR)

	// timing test of the hand-written guided filter
	double t2 = (double)cv::getTickCount();
	cv::Mat q;
	q = GuidedFilter_Color(I, P, 9, 0.2*0.2);
	t2 = (double)cv::getTickCount() - t2;
	double time2 = (t2 * 1000.) / ((double)cv::getTickFrequency());
	std::cout << "MyGuidedFilter_process=" << time2 << " ms. " << std::endl << std::endl;

	cv::namedWindow("GuidedImg");
	cv::imshow("GuidedImg", I);
	cv::namedWindow("src");
	cv::imshow("src", P);
	cv::namedWindow("GuidedFilter", cv::WINDOW_NORMAL);
	cv::imshow("GuidedFilter", q);
	cv::waitKey(0);
}

Results (window radius = 9, eps = 0.2*0.2, as in the code above)

(figures: guidance image | input image | result image)

Compared with using a grayscale image as the guidance, the result here preserves edges better; compare the face and the nose closely.
