Image Dehazing with OpenCV


Background

In recent years, smog in China has gradually spread from the central and eastern regions to the whole country; since 2013 it has been a keyword in public concern about the weather. Smog is the result of specific meteorological conditions interacting with human activity.

The economic and social activity of a dense population inevitably emits large amounts of fine particulate matter (PM2.5). Once emissions exceed the atmosphere's circulation and carrying capacity, particulate concentrations keep accumulating, and under stagnant weather conditions large-scale smog forms easily. In foggy weather, the fog and dust suspended in the air blur our view and sharply reduce scene visibility. Outdoor images captured under such conditions are severely degraded: contrast, color, and other features are attenuated, which greatly reduces the images' practical value. Even photos taken in clear weather are affected by atmospheric scattering, because in every real scene the light reflected from object surfaces is scattered through the air before it reaches the camera. Aerosols such as dust, fog, and smoke fade surface colors and lower the contrast of the whole image. This severely affects industrial production and daily life. For example, in image-based surveillance at urban intersections, images degraded by bad weather make it very difficult to read vehicle information and monitor traffic; in military reconnaissance and surveillance, degraded images bias the recognition and processing of information, and the consequences of such bias can be grave; in remote sensing, degraded images likewise interfere with downstream processing. Dehazing algorithms are therefore needed in many fields, and research on restoring clear features from hazy images is of real significance.

 

Overview of the Dehazing Pipeline

  1. First, what is the dark channel prior?

In the vast majority of non-sky local regions, some pixels always have at least one color channel with a very low value. In other words, the minimum intensity in such a region is a very small number.

Formally, for an arbitrary input image J, its dark channel is defined as:

J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^c(y) \right)

where J^c is one color channel of J and Ω(x) is a window centered at pixel x. This formula is simple to express in code: first take the minimum of the R, G, and B components of every pixel and store it in a grayscale image of the same size as the original, then apply a minimum filter to that grayscale image. The filter radius is determined by the window size, typically WindowSize = 2 * Radius + 1.
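As an illustration, here is a minimal OpenCV sketch of exactly this computation (the function name is mine; grayscale erosion with a rectangular kernel is a minimum filter):

#include <opencv2/opencv.hpp>
#include <vector>

// Per-pixel min over the B/G/R planes, then a WindowSize x WindowSize
// minimum filter implemented as grayscale erosion.
cv::Mat computeDarkChannel(const cv::Mat& bgr, int radius) {
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                       // B, G, R planes
    cv::Mat minRGB;
    cv::min(ch[0], ch[1], minRGB);            // min(B, G)
    cv::min(minRGB, ch[2], minRGB);           // min(min(B, G), R)
    int windowSize = 2 * radius + 1;          // WindowSize = 2 * Radius + 1
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,
                                               cv::Size(windowSize, windowSize));
    cv::Mat dark;
    cv::erode(minRGB, dark, kernel);          // erosion == minimum filter
    return dark;
}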

  2. What the dark channel prior theory states

For haze-free outdoor images, away from sky regions, the intensity of the dark channel is low and tends to zero:

J^{dark}(x) \to 0
In practice, the low values in the dark channel come mainly from three sources: (a) shadows, such as those of cars, buildings, and the glass windows of a city, or the shadows cast by leaves, trees, and rocks in natural scenes; (b) brightly colored objects or surfaces, which lack at least one of the RGB channels (for example green grass, trees, and plants, red or yellow flowers and leaves, or blue water); (c) dark objects or surfaces, such as dark tree trunks and stones. In short, shadow and color are everywhere in natural scenes, so the dark channels of their images really are dark.
Setting aside the examples in the paper, we can pick a few haze-free landscape photos from the web and look at the results:
[Figures: sample photos and their dark channels; haze1.png / darkhaze1.png, and the hazy example haze2.png / darkhaze2.png]
The dark channel images above all use a 15×15 window, i.e. a minimum-filter radius of 7 pixels.

These images show quite clearly how general the dark channel prior is. In the paper the authors gathered statistics over more than 5,000 images, essentially all of which conform to the prior, so we can treat it as a de facto rule.

With this prior, some mathematical derivation is all that remains to solve the problem.

First, the haze imaging model described by the following equation is widely used in computer vision and computer graphics:

I(x) = J(x)\,t(x) + A\,(1 - t(x)) \qquad (1)
Here I(x) is the image we already have (the hazy image to be dehazed), J(x) is the haze-free image we want to recover, A is the global atmospheric light, and t(x) is the transmission. The only known quantity is I(x) and the target is J(x), so the equation has infinitely many solutions; hence the need for a prior.

Rearranging Eq. (1) slightly gives:

\frac{I^c(x)}{A^c} = t(x)\,\frac{J^c(x)}{A^c} + 1 - t(x) \qquad (7)
As before, the superscript c denotes one of the R/G/B channels.

Now assume the transmission t(x) is constant within each window, denote this estimate \tilde{t}(x), and take A as given. Applying the minimum operator twice (over the window and over the channels) to both sides of Eq. (7) yields:

\min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right) = \tilde{t}(x) \, \min_{y \in \Omega(x)} \left( \min_{c} \frac{J^c(y)}{A^c} \right) + 1 - \tilde{t}(x) \qquad (8)
Here J is the haze-free image to be recovered, and by the dark channel prior described above:
J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c} J^c(y) \right) = 0 \qquad (9)
from which it follows that:
\min_{y \in \Omega(x)} \left( \min_{c} \frac{J^c(y)}{A^c} \right) = 0 \qquad (10)
Substituting Eq. (10) into Eq. (8) gives:
\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right) \qquad (11)
This is the estimate of the transmission.

In reality, the air contains particles even on clear days, so distant objects still appear slightly hazy; moreover, haze is one of the cues by which humans perceive depth. It is therefore desirable to keep some haze when dehazing, which is done by introducing a factor ω in [0, 1] into Eq. (11), giving:
\tilde{t}(x) = 1 - \omega \, \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right) \qquad (12)
All test results in this article use ω = 0.95.

The derivation above assumes the global atmospheric light A is known. In practice it can be obtained from the hazy image with the help of the dark channel, as follows (a sketch is given after the steps):

  1. Take the top 0.1% brightest pixels of the dark channel.
  2. Among those positions, find the pixel of highest intensity in the original hazy image I and take its value as A.
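A minimal sketch of these two steps (the helper name is mine; it assumes an 8-bit, 3-channel input and measures "brightest" by the channel sum):

#include <algorithm>
#include <opencv2/opencv.hpp>

// Find the brightness threshold keeping the top 0.1% of the dark channel,
// then take the brightest input pixel among those candidates as A.
cv::Vec3b estimateAtmosphericLight(const cv::Mat& bgr, const cv::Mat& dark) {
    int hist[256] = {0};
    for (int i = 0; i < dark.rows; i++)
        for (int j = 0; j < dark.cols; j++)
            hist[dark.at<uchar>(i, j)]++;
    int wanted = std::max(1, dark.rows * dark.cols / 1000); // top 0.1%
    int threshold = 255;
    for (int seen = 0; threshold > 0; threshold--) {
        seen += hist[threshold];
        if (seen >= wanted) break;
    }
    int bestSum = -1;
    cv::Vec3b A(0, 0, 0);
    for (int i = 0; i < dark.rows; i++)
        for (int j = 0; j < dark.cols; j++)
            if (dark.at<uchar>(i, j) >= threshold) {      // candidate pixel
                cv::Vec3b p = bgr.at<cv::Vec3b>(i, j);
                int sum = p[0] + p[1] + p[2];
                if (sum > bestSum) { bestSum = sum; A = p; }
            }
    return A;
}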

At this point we can recover the haze-free image. From Eq. (1): J = (I - A) / t + A

Since I, A, and t have all been obtained, J can be computed directly.

When the transmission t is very small, J comes out too large and the whole image is pushed toward white, so a threshold T0 is usually imposed: when t < T0, set t = T0. All results in this article use T0 = 0.1.

The final recovery formula is therefore:

J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A
Recovering directly with this theory already gives a clearly visible dehazing effect, for example:
[Figure: hazy input haze2.png and its dehazed result removehazepicture.png]
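As a sketch, the recovery step with the T0 clamp can look like this (the helper name is mine; it assumes an 8-bit 3-channel input and a CV_32FC1 transmission map):

#include <algorithm>
#include <opencv2/opencv.hpp>

// J = (I - A) / max(t, T0) + A, with T0 = 0.1 as in the text.
cv::Mat recover(const cv::Mat& bgr, const cv::Mat& t, cv::Vec3b A, float t0 = 0.1f) {
    cv::Mat J(bgr.size(), CV_8UC3);
    for (int i = 0; i < bgr.rows; i++)
        for (int j = 0; j < bgr.cols; j++) {
            float tx = std::max(t.at<float>(i, j), t0);   // clamp t at T0
            cv::Vec3b p = bgr.at<cv::Vec3b>(i, j);
            for (int c = 0; c < 3; c++)
                J.at<cv::Vec3b>(i, j)[c] =
                    cv::saturate_cast<uchar>((p[c] - A[c]) / tx + A[c]);
        }
    return J;
}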

Coding Steps

If you analyze the reasoning of the original paper carefully, with suitable references the coding is not particularly difficult.

  1. Compute the dark channel from the source image. Reference code:

for (Y = 0, DarkPt = DarkChannel; Y < Height; Y++)
{
    ImgPt = Scan0 + Y * Stride;
    for (X = 0; X < Width; X++)
    {
        Min = *ImgPt;
        if (Min > *(ImgPt + 1)) Min = *(ImgPt + 1);
        if (Min > *(ImgPt + 2)) Min = *(ImgPt + 2);
        *DarkPt = Min;
        ImgPt += 3;
        DarkPt++;
    }
}

 

 

MinFilter(DarkChannel, Width, Height, Radius);

Note that MinFilter needs a fast implementation; here is a paper for interested readers: STREAMING MAXIMUM-MINIMUM FILTER USING NO MORE THAN THREE COMPARISONS PER ELEMENT. Its running time is O(1) per element. A sketch of the idea follows.
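For reference, here is a sketch of the sliding-window minimum in the style of that paper's monotonic "wedge" (this sketch keeps only the minimum, not the simultaneous max/min of the paper; applying it first to rows and then to columns yields the 2-D minimum filter):

#include <algorithm>
#include <deque>
#include <vector>

// One 1-D pass of a sliding-window minimum in amortized O(1) comparisons
// per element, via a monotonic deque of candidate indices.
// The window is [j - radius, j + radius], clamped at the borders.
std::vector<unsigned char> slidingMin(const std::vector<unsigned char>& in, int radius) {
    const int n = (int)in.size();
    std::vector<unsigned char> out(n);
    std::deque<int> wedge;             // indices with strictly increasing values
    for (int i = 0; i < n; i++) {
        while (!wedge.empty() && in[wedge.back()] >= in[i])
            wedge.pop_back();          // in[i] dominates older, larger values
        wedge.push_back(i);
        int j = i - radius;            // center whose window just completed
        if (j >= 0) {
            while (wedge.front() < j - radius)
                wedge.pop_front();     // expire indices left of the window
            out[j] = in[wedge.front()];
        }
    }
    for (int j = std::max(0, n - radius); j < n; j++) { // right border
        while (wedge.front() < j - radius)
            wedge.pop_front();
        out[j] = in[wedge.front()];
    }
    return out;
}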

  2. Automatically obtain the global atmospheric light as described in the paper.

One note here: in the original paper, A is ultimately taken from a single pixel of the original image, whereas I take the average of all qualifying pixels as A. I do this because, with a single pixel, each channel of A can easily end up close to 255, which tints the processed image and produces large color blotches. The paper's author says the algorithm needs no special handling for sky regions, but in practice I find it generally performs poorly on images containing sky, leaving an obvious transition band there. As a workaround, I added a parameter for the maximum global atmospheric light: whenever the computed value exceeds it, the maximum is used instead. A sketch of this variant follows.
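Below is a sketch of that variant (the helper name and the default cap of 220 are mine; 220 matches the constant used in the experimental JNI path later in this article):

#include <algorithm>
#include <opencv2/opencv.hpp>

// Average every candidate pixel instead of picking one, then cap each
// channel at a maximum atmospheric light value.
cv::Vec3b averagedAtmosphericLight(const cv::Mat& bgr, const cv::Mat& dark,
                                   int threshold, int maxA = 220) {
    double sum[3] = {0.0, 0.0, 0.0};
    long count = 0;
    for (int i = 0; i < dark.rows; i++)
        for (int j = 0; j < dark.cols; j++)
            if (dark.at<uchar>(i, j) >= threshold) {   // top-0.1% candidate
                cv::Vec3b p = bgr.at<cv::Vec3b>(i, j);
                for (int c = 0; c < 3; c++) sum[c] += p[c];
                count++;
            }
    cv::Vec3b A(0, 0, 0);
    if (count == 0) return A;
    for (int c = 0; c < 3; c++)
        A[c] = (uchar)std::min((int)(sum[c] / count), maxA);
    return A;
}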

  3. Compute the coarse transmission map according to Eq. (12).

In Eq. (12), each channel must be divided by its corresponding component of A, i.e. normalized. This raises a problem: because of how A is selected, there is no guarantee that every pixel component divided by A stays below 1, so t can come out negative, which is not allowed, and the paper does not say how this is handled. In my own coding I found that doing the division literally also gives unimpressive results, so in the end I simply drop the per-channel division by A in Eq. (12); a sketch follows.
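A sketch of this simplified estimate (the helper name is mine; like the JNI source at the end of the article, it divides the dark channel by a single scalar, the largest channel of A):

#include <algorithm>
#include <opencv2/opencv.hpp>

// Eq. (12) without the per-channel division by A: scale the dark channel
// by one scalar so t stays in a sane range.
cv::Mat estimateTransmission(const cv::Mat& dark, cv::Vec3b A, float omega = 0.95f) {
    float maxA = (float)std::max(A[0], std::max(A[1], A[2]));
    cv::Mat t(dark.size(), CV_32FC1);
    for (int i = 0; i < dark.rows; i++)
        for (int j = 0; j < dark.cols; j++)
            t.at<float>(i, j) = 1.0f - omega * dark.at<uchar>(i, j) / maxA;
    return t;
}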

  4. Compute the guide image for the guided filter.

The original image can serve directly as the guide, or its grayscale version can be used; an RGB guide makes the next step take considerably more time.

  5. Compute the refined transmission map following Eqs. (5), (6), and (8) of the paper Guided Image Filtering.

MATLAB code for this algorithm is available for download online; part of it follows:

function q = guidedfilter(I, p, r, eps)
% GUIDEDFILTER   O(1) time implementation of guided filter.
%
% - guidance image: I (should be a gray-scale/single channel image)
% - filtering input image: p (should be a gray-scale/single channel image)
% - local window radius: r
% - regularization parameter: eps

[hei, wid] = size(I);
N = boxfilter(ones(hei, wid), r); % the size of each local patch; N=(2r+1)^2 except for boundary pixels.

% imwrite(uint8(N), 'N.jpg');
% figure,imshow(N,[]),title('N');

mean_I = boxfilter(I, r) ./ N;
mean_p = boxfilter(p, r) ./ N;
mean_Ip = boxfilter(I.*p, r) ./ N;
cov_Ip = mean_Ip - mean_I .* mean_p; % this is the covariance of (I, p) in each local patch.

mean_II = boxfilter(I.*I, r) ./ N;
var_I = mean_II - mean_I .* mean_I;

a = cov_Ip ./ (var_I + eps); % Eqn. (5) in the paper;
b = mean_p - a .* mean_I; % Eqn. (6) in the paper;

mean_a = boxfilter(a, r) ./ N;
mean_b = boxfilter(b, r) ./ N;

q = mean_a .* I + mean_b; % Eqn. (8) in the paper;
end

 

As the code shows, the main workload is in the box (mean) filtering, which is itself a very fast operation; for optimizing it, see my earlier article 彩色图像高速模糊之懒惰算法 (a lazy algorithm for fast color-image blurring).

One more point: the computation above must be carried out in [0, 1], i.e. the guide image and the coarse transmission map must both be mapped from [0, 255] to [0, 1] before computing.
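For example, a convertTo with scale 1/255 does the mapping (this mirrors the convertTo calls in the source at the end of the article):

#include <opencv2/opencv.hpp>

// Map an 8-bit [0, 255] image into [0, 1] doubles before the
// guided-filter arithmetic.
void toUnitRange(const cv::Mat& src8u, cv::Mat& dst64f) {
    src8u.convertTo(dst64f, CV_64FC1, 1.0 / 255.0);
}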

Limitations of the Algorithm

The dark channel prior is a statistical result over a large number of outdoor haze-free images. If the target scene is inherently similar to the atmospheric light, such as snowfields, white walls, or the sea, the premise fails and a satisfactory result generally cannot be obtained; for typical landscape photos, though, the algorithm does a good job.

 

 

Implementation Source Code

#include <jni.h>
#include "android/log.h"
#include <opencv2/core/core.hpp>
#include <opencv2/opencv.hpp>
//#include <opencv2/imgproc/imgproc.hpp>
//#include <opencv2/features2d/features2d.hpp>


using namespace std;
using namespace cv;


#define TAG  "OCVSampleActivity"
#define LOGD(fmt, args...) __android_log_print(ANDROID_LOG_DEBUG, TAG, fmt, ##args)


extern "C" {


jmethodID mid;
jclass objclass;
jobject mobj;


inline double getminbgr(double b, double g, double r) {
if (b > g)
b = g;
if (b > r)
b = r;
return b;
}


inline uchar max(uchar x, uchar y, uchar z) {
if (y > x) {
x = y;
}
if (z > x) {
x = z;
}
return x;
}


inline double getminbgrdivideAc(double b, double g, double r, int A_b, int A_g,
int A_r) {
double bA_b = b / A_b;
double gA_g = g / A_g; // fixed: was b / A_g
double rA_r = r / A_r; // fixed: was b / A_r


double result = bA_b;
if (result > gA_g) {
result = gA_g;
}
if (result > rA_r) {
result = rA_r;
}
return result;
}


void getMatMinValue(Mat *roimat, double * min) {
int rows = roimat->rows;
int cols = roimat->cols;
int i, j;
uchar * p = NULL;
*min = 256;
for (i = 0; i < rows; i++) {
p = roimat->ptr < uchar > (i);
for (j = 0; j < cols; j++) {
if (*min > *p) {
*min = *p;
}
p++;
}
}
}


Mat getDarkChannel(Mat dark, int windowSize) {
double t = (double) getTickCount();
Mat newDark;
int radius = windowSize / 2;
copyMakeBorder(dark, newDark, radius, radius, radius, radius,
BORDER_CONSTANT, Scalar(255));
Mat element = getStructuringElement(MORPH_RECT,
Size(windowSize, windowSize));
erode(newDark, newDark, element);
Mat darkChannel = Mat(newDark, Rect(radius, radius, dark.cols, dark.rows));
t = ((double) getTickCount() - t) / getTickFrequency() * 1000;
cout << "darkChannel cost in miliseconds: " << t << endl;
return darkChannel;
}


void getGlobalA(Mat image, Mat dark, uchar* maxRGB) {
double t = (double) getTickCount();
int frequency[256] = { 0 };
uchar * p;
uchar * p2;
for (int i = 0; i < dark.rows; i++) {
p = dark.ptr < uchar > (i);
for (int j = 0; j < dark.cols; j++) {
frequency[p[j]]++;
}
}
int count = (int) (dark.rows * dark.cols * 0.001);
int sum = 0;
int threshold;
//cout << "count="<<count<<endl;
for (int i = 255; i > 0; i--) {
sum += frequency[i];
//cout << "frequency["<<i<<"]"<<frequency[i]<<endl;
if (sum + frequency[i - 1] > count) {
threshold = i;
if (frequency[i - 1] > frequency[i]) {
count = sum + frequency[i - 1];
threshold = i - 1;
//cout << "frequency["<<i-1<<"]"<<frequency[i-1]<<endl;
}
break;
}
}
//  cout << "threshold = " << threshold<<endl;
LOGD("threshold = %d", threshold);
int offset = 0;
threshold = threshold - 1;
int log = 2;
int channels = image.channels();
for (int i = 0; i < dark.rows; i++) {
p = dark.ptr < uchar > (i);
p2 = image.ptr < uchar > (i);
offset = 0;
for (int j = 0; j < dark.cols; j++, offset += channels) {
if (p[j] > threshold) {
count--;
if (maxRGB[0] < p2[offset]) {
maxRGB[0] = p2[offset];
}
if (maxRGB[1] < p2[offset + 1]) {
maxRGB[1] = p2[offset + 1];
}
if (maxRGB[2] < p2[offset + 2]) {
maxRGB[2] = p2[offset + 2];
}
}
if (count == 0) {
break;
}
}
if (count == 0) {
break;
}
}
//    cout<<(int)maxRGB[0]<<","<<(int)maxRGB[1]<<","<<(int)maxRGB[2]<<endl;
t = ((double) getTickCount() - t) / getTickFrequency() * 1000;
//    cout << "getGlobalA cost in miliseconds: " << t << endl;
}


void getMatMaxValue(Mat *mat, double *max, CvPoint *maxLoc) {
int rows = mat->rows;
int cols = mat->cols;
int i, j;
uchar * p = NULL;
*max = 0;
for (i = 0; i < rows; i++) {
p = mat->ptr < uchar > (i);
for (j = 0; j < cols; j++) {
if (*max < *p) {
*max = *p;
if (maxLoc != NULL) {
maxLoc->x = j;
maxLoc->y = i;
}
}
p++;
}
}
}


void getMinFilter(uchar * data, uchar *output, int size, int r) {
int minIndex = 0;
int block = (r << 1) + 1;
int min = *data;
for (int i = 1; i < block; i++) {
if (min >= *(data + i)) {
min = *(data + i);
minIndex = i;
}
}


for (int m = 0; m <= r; m++) {


output[m] = data[minIndex];
}


for (int j = r + 1; j < size - r; j++) {


if (minIndex >= j - r) {
int temp = (r << 1) + j - r;
if (data[minIndex] >= data[temp])
minIndex = temp;
} else {
minIndex = j - r;
int min = data[j - r];
for (int k = j - r + 1; k < block + j - r; k++) {
if (min >= data[k]) {
min = data[k];
minIndex = k;
}
}
}
output[j] = data[minIndex];
}


for (int n = size - r; n < size; n++) {
output[n] = data[minIndex];
}
}


Mat cumSum(Mat src, int rc) {
Mat Imdst = Mat(src.rows, src.cols, CV_64FC1);
//clone() creates a new object with its own storage
Imdst = src.clone();
if (rc == 1) {
for (int y = 1; y < src.rows; y++) {


double * ptr0 = Imdst.ptr<double>(y - 1);


double * ptr = Imdst.ptr<double>(y);


// double *ptr0=(double *)(Imdst->data.ptr+(y-1)*Imdst->step);
// double *ptr=(double *)(Imdst->data.ptr+y*Imdst->step);
for (int x = 0; x < src.cols; x++) {
ptr[x] = ptr0[x] + ptr[x];
//cvSetReal2D(Imdst,y,x,cvGetReal2D(Imdst,y-1,x)+cvGetReal2D(Imdst,y,x));
}
}
LOGD("rc==1");
}


else if (rc == 2) {
for (int y = 0; y < src.rows; y++) {
double *ptr = Imdst.ptr<double>(y);
// double *ptr=(double *)(Imdst->data.ptr+y*Imdst->step);
for (int x = 1; x < src.cols; x++) {
ptr[x] = ptr[x - 1] + ptr[x];
//cvSetReal2D(Imdst,y,x,cvGetReal2D(Imdst,y,x-1)+cvGetReal2D(Imdst,y,x));
}
}
LOGD("rc==2");
}
return Imdst;
}


Mat boxFilter(Mat src, int r) {
Mat Imdst = Mat(src.rows, src.cols, CV_64FC1);
Imdst = src.clone();
Mat subImage;
//imCum = cumsum(imSrc, 1);
Mat imCum = cumSum(Imdst, 1);
//imDst(1:r+1, :) = imCum(1+r:2*r+1, :);
for (int y = 0; y <= r; y++) { // fixed off-by-one: MATLAB 1:r+1 covers rows 0..r
//double *ptrDst=(double *)Imdst->data.ptr+y*Imdst->step;
//double *ptrCum=(double *)imCum->data.ptr+(y+r)*imCum->step;
for (int x = 0; x < Imdst.cols; x++) {
//ptrDst[x]=ptrCum[x];
Imdst.at<double>(y, x) = imCum.at<double>(y + r, x);
// cvSetReal2D(Imdst,y,x,cvGetReal2D(imCum,y+r,x));
}
}
//imDst(r+2:hei-r, :) = imCum(2*r+2:hei, :) - imCum(1:hei-2*r-1, :);
for (int y = r + 1; y < Imdst.rows - r; y++) { // fixed off-by-one: MATLAB r+2:hei-r covers rows r+1..hei-r-1
for (int x = 0; x < Imdst.cols; x++) {
Imdst.at<double>(y, x) = imCum.at<double>(y + r, x)
- imCum.at<double>(y - r - 1, x);
// cvSetReal2D(Imdst,y,x,(cvGetReal2D(imCum,y+r,x)-cvGetReal2D(imCum,y-r-1,x)));
}
}


//imDst(hei-r+1:hei, :) = repmat(imCum(hei, :), [r, 1]) - imCum(hei-2*r:hei-r-1, :);
subImage = Mat(r, Imdst.cols, CV_64FC1);


// Mat temMat = Mat(1,Imdst.cols,CV_8UC1);
//get a pointer to the specified row
// uchar *data =  imCum.ptr<uchar>(imCum.rows-1);
// for (int i = 0; i < temMat.cols; i++){
// temMat.at<uchar>(0,i) = *(data+i);
// }
Mat temMat = imCum.row(imCum.rows - 1);
//cvGetRow(imCum,tem,imCum->height-1);


for (int i = 0; i < r; i++) {
Mat dsttemp = subImage.row(i);
temMat.copyTo(dsttemp);
// subImage.ptr<uchar>(i) = da;
}
// cvRepeat(tem,subImage);
/*for(int y=0;y<r;y++)
{
for(int x=0;x<Imdst->width;x++)
{
cvSetReal2D(subImage,y,x,cvGetReal2D(imCum,Imdst->height-1,x));
}
}*/


for (int y = Imdst.rows - r; y < Imdst.rows; y++) {
for (int x = 0; x < Imdst.cols; x++) {
Imdst.at<double>(y, x) = subImage.at<double>(y - Imdst.rows + r, x)
- imCum.at<double>(y - r - 1, x);
// cvSetReal2D(Imdst,y,x,cvGetReal2D(subImage,y-Imdst->height+r,x)-cvGetReal2D(imCum,y-r-1,x));
}
}
//cvReleaseMat(&subImage);
//cvReleaseMat(&tem);


imCum = cumSum(Imdst, 2);
//imDst(:, 1:r+1) = imCum(:, 1+r:2*r+1);
for (int y = 0; y < Imdst.rows; y++) {
for (int x = 0; x <= r; x++) { // fixed off-by-one: MATLAB 1:r+1 covers cols 0..r
Imdst.at<double>(y, x) = imCum.at<double>(y, x + r);
// cvSetReal2D(Imdst,y,x,cvGetReal2D(imCum,y,x+r));
}
}


//imDst(:, r+2:wid-r) = imCum(:, 2*r+2:wid) - imCum(:, 1:wid-2*r-1);
for (int y = 0; y < Imdst.rows; y++) {
for (int x = r + 1; x < Imdst.cols - r; x++) { // fixed off-by-one: MATLAB r+2:wid-r covers cols r+1..wid-r-1
Imdst.at<double>(y, x) = imCum.at<double>(y, x + r)
- imCum.at<double>(y, x - r - 1);
// cvSetReal2D(Imdst,y,x,(cvGetReal2D(imCum,y,x+r)-cvGetReal2D(imCum,y,x-r-1)));
}
}


//imDst(:, wid-r+1:wid) = repmat(imCum(:, wid), [1, r]) - imCum(:, wid-2*r:wid-r-1);
subImage = Mat(Imdst.rows, r, CV_64FC1);


temMat = Mat(Imdst.rows, 1, CV_64FC1);


temMat = imCum.col(imCum.cols - 1);
// cvGetCol(imCum,tem,imCum->width-1);


for (int i = 0; i < r; i++) {
Mat dsttemp = subImage.col(i);
temMat.copyTo(dsttemp);
// subImage.ptr<uchar>(i) = da;
}
// cvRepeat(tem,subImage);


/*for(int y=0;y<Imdst->height;y++)
{
for(int x=0;x<r;x++)
{
cvSetReal2D(subImage,y,x,cvGetReal2D(imCum,y,Imdst->width-1));
}
}*/
for (int y = 0; y < Imdst.rows; y++) {
for (int x = Imdst.cols - r; x < Imdst.cols; x++) {
Imdst.at<double>(y, x) = subImage.at<double>(y, x - Imdst.cols + r)
- imCum.at<double>(y, x - r - 1);
// cvSetReal2D(Imdst,y,x,cvGetReal2D(subImage,y,x-Imdst->width+r)-cvGetReal2D(imCum,y,x-r-1));
}
}
//cvReleaseMat(&subImage);
return Imdst;
}


Mat myGuidedFilter_Mat(Mat I, Mat img_pp, int r, double eps) {
int height = img_pp.rows;
int width = img_pp.cols;
int type = CV_64FC1;


Mat ones = Mat(height, width, type, Scalar(1.0));


// cvSet(ones,cvRealScalar(1));


Mat N = boxFilter(ones, r);


//mean of I
Mat mean_I = Mat(height, width, type);


//element-wise division
mean_I = boxFilter(I, r) / N;
// cvDiv(boxFilter(I,r),N,mean_I);


//mean of p
Mat mean_p = Mat(height, width, type);
mean_p = boxFilter(img_pp, r) / N;
// cvDiv(boxFilter(img_pp,r),N,mean_p);


//mean of I .* p
Mat pr = Mat(height, width, type);
//element-wise multiplication
pr = I.mul(img_pp);
// cvMul(I,img_pp,pr);


LOGD("mul");


Mat mean_Ip = Mat(height, width, type);
mean_Ip = boxFilter(pr, r) / N;
// cvDiv(boxFilter(pr,r),N,mean_Ip);


//covariance of I and p
pr = mean_I.mul(mean_p);
// cvMul(mean_I,mean_p,pr);


Mat cov_Ip = Mat(height, width, type);
cov_Ip = mean_Ip - pr;
// cvSub(mean_Ip,pr,cov_Ip);


LOGD("Sub");
//variance of I
Mat var_I = Mat(height, width, type);


pr = I.mul(I);
// cvMul(I,I,pr);


var_I = boxFilter(pr, r) / N;
// cvDiv(boxFilter(pr,r),N,var_I);


pr = mean_I.mul(mean_I);
// cvMul(mean_I,mean_I,pr);


var_I = var_I - pr;
// cvSub(var_I,pr,var_I);


//compute a
Mat a = Mat(height, width, type);


var_I = var_I + cvScalar(eps);
// cvAddS(var_I,cvScalar(eps),var_I);
a = cov_Ip / var_I;
// cvDiv(cov_Ip,var_I,a);


//compute b
Mat b = Mat(height, width, type);
pr = a.mul(mean_I);
// cvMul(a,mean_I,pr);
b = mean_p - pr;
// cvSub(mean_p,pr,b);


//mean of a
Mat mean_a = Mat(height, width, type);
mean_a = boxFilter(a, r) / N;
// cvDiv(boxFilter(a,r),N,mean_a);


//mean of b
Mat mean_b = Mat(height, width, type);
mean_b = boxFilter(b, r) / N;
// cvDiv(boxFilter(b,r),N,mean_b);


//compute q
Mat q = Mat(height, width, type);
a = mean_a.mul(I);
// cvMul(mean_a,I,a);
q = a + mean_b;
// cvAdd(a,mean_b,q);


// cvReleaseMat(&ones);
// cvReleaseMat(&mean_I);
// cvReleaseMat(&mean_p);
// cvReleaseMat(&pr);
// cvReleaseMat(&mean_Ip);
// cvReleaseMat(&cov_Ip);
// cvReleaseMat(&var_I);
// cvReleaseMat(&a);
// cvReleaseMat(&b);
// cvReleaseMat(&mean_a);
// cvReleaseMat(&mean_b);
return q;
}


inline uchar min(uchar x, uchar y, uchar z) {
if (y < x) {
x = y;
}
if (z < x) {
x = z;
}
return x;
}


void getMinRGB(Mat image, Mat dark) {
uchar* p;
int offset = 0;
int channels = image.channels();
for (int i = 0; i < dark.rows; ++i) {
p = image.ptr < uchar > (i);
offset = 0;
for (int j = 0; j < dark.cols; ++j, offset += channels) {
dark.at < uchar > (i, j) = min(p[offset], p[offset + 1],
p[offset + 2]);
}
}
}


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel(
JNIEnv *, jobject, jlong matAddress, jint windowsize);


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel2(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint block);


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel3(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint block);


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel4(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint r);
JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel5(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint r);


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel(
JNIEnv *, jobject, jlong matAddress, jint windowsize) {
Mat & mat = *(Mat*) matAddress;
jint radius = windowsize / 2;
int row = mat.rows;
int col = mat.cols;
Vec3b pixel;
CvRect ROIRect;
int min;
ROIRect.width = radius;
ROIRect.height = radius;
ROIRect.x = 0;
ROIRect.y = 0;


int darkmin;
int i, j, k, m;
LOGD("row = %d, col = %d", row, col);
Mat *graymat = new Mat(row, col, CV_8UC1);


for (i = 0; i < row; i++) {
for (j = 0; j < col; j++) {
pixel = mat.at < Vec3b > (i, j);
pixel[0] = (pixel[0] < pixel[1]) ? pixel[0] : pixel[1];
min = (pixel[2] < pixel[0]) ? pixel[2] : pixel[0];
graymat->at < uchar > (i, j) = min;
}
}


int rowr = row - radius;
int clor = col - radius;


LOGD("rowr = %d   clor = %d", rowr, clor);
// Write the filtered result into a separate Mat: filtering graymat in place
// would let already-filtered minima contaminate the windows of later pixels.
Mat *darkmat = new Mat(graymat->clone());
for (i = 0; i < rowr; i++) {
for (j = 0; j < clor; j++) {
darkmin = 256;
for (k = i; k < i + radius; k++) {
for (m = j; m < j + radius; m++) {
if (graymat->at<uchar>(k, m) < darkmin) {
darkmin = graymat->at<uchar>(k, m);
}


}
}
darkmat->at<uchar>(i, j) = darkmin;


}
}
delete graymat;
return (jlong) darkmat;
}

JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel3(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint block) {


Mat & mat = *(Mat*) matAddress;
Mat & removeMat = *(Mat*) removeAddress;


int i, j, k, m;
int rows = mat.rows;
int cols = mat.cols;


double minValue, maxValue, minValuer, minValueg, minValueb;
CvPoint maxPoint;
double A_dstb, A_dstg, A_dstr;


Mat darkchannel = Mat(rows, cols, CV_8UC1);


Mat Roib = Mat(block, block, CV_8UC1);
Mat Roig = Mat(block, block, CV_8UC1);
Mat Roir = Mat(block, block, CV_8UC1);


LOGD("mat = %d   mat = %d   mat = %d",
mat.at < Vec3b > (0, 0)[0], mat.at < Vec3b > (0, 0)[1], mat.at < Vec3b > (0, 0)[2]);


std::vector < cv::Mat > sbgr(mat.channels());
split(mat, sbgr);


Mat dstb = sbgr[0];
Mat dstg = sbgr[1];
Mat dstr = sbgr[2];


// LOGD("rows = %d   cols = %d, mat.channels = %d",rows, cols, mat.channels());


int rowsnum = rows / block;
int colsnum = cols / block;


// LOGD("rowsnum = %d   colsnum = %d",rowsnum, colsnum);


uchar * p = NULL;


Rect rect_roi;
rect_roi.x = 0;
rect_roi.y = 0;
rect_roi.height = block;
rect_roi.width = block;


for (i = 0; i < colsnum; i++) {
for (j = 0; j < rowsnum; j++) {


rect_roi.x = i * block;
rect_roi.y = j * block;


//LOGD("i*radus = %d   j*radus = %d",i*radus, j*radus);
Roib = dstb(rect_roi);
Roig = dstg(rect_roi);
Roir = dstr(rect_roi);


//time-consuming: roughly 100 ms


getMatMinValue(&Roib, &minValueb);
getMatMinValue(&Roig, &minValueg);
getMatMinValue(&Roir, &minValuer);


// minMaxIdx(Roib, &minValueb, &maxValueb);
// minMaxIdx(Roig, &minValueg, &maxValueg);
// minMaxIdx(Roir, &minValuer, &maxValuer);
minValue = getminbgr(minValueb, minValueg, minValuer);


// if (i == 0 && j< 5)LOGD("minValueb = %f , minValueg = %f ,minValuer = %f , minValue = %f", minValueb, minValueg,minValuer, minValue);


for (k = 0; k < block; k++) {
for (m = 0; m < block; m++) {
darkchannel.at < uchar > (rect_roi.y + k, rect_roi.x + m) =
minValue;
}
}
// cvRectangle(&darkchannel,cvPoint(rect_roi.y,rect_roi.x),cvPoint(rect_roi.y+block,rect_roi.x+block), cvScalar(minValue));


}
}
// for (i = 0; i < 15; i++){
// for (j = 0; j< 15; j++){
// LOGD("darkchannel = %d ", darkchannel.at<uchar>(i,j));
// }
// }
// getMatMaxValue(&darkchannel, &maxValue, &maxPoint);
uchar maxRGB[3];
getGlobalA(mat, darkchannel, maxRGB);
uchar maxA = max(maxRGB[0], maxRGB[1], maxRGB[2]);
// LOGD("darkchannel Max = %lf   x = %d   y = %d", maxValue, maxPoint.x, maxPoint.y);
/*
rect_roi.x = maxPoint.x;
rect_roi.y = maxPoint.y;


Roib = dstb(rect_roi);
getMatMaxValue(&Roib, &A_dstb, NULL);


Roig = dstg(rect_roi);
getMatMaxValue(&Roig, &A_dstg, NULL);


Roir = dstr(rect_roi);
getMatMaxValue(&Roir, &A_dstr, NULL);


*/
LOGD("A_dstb = %f   A_dstg = %f   A_dstr = %f",
maxRGB[2], maxRGB[1], maxRGB[0]);


Vec3b pixel;
Scalar scalar;
float w = 0.95;
float dx = w / maxA;
float tx;
Mat mtx = Mat(rows, cols, CV_32FC1);
for (i = 0; i < rows; i++) {
for (j = 0; j < cols; j++) {
tx = 1 - dx * darkchannel.at < uchar > (i, j);
if (tx < 0.1)
tx = 0.1;
mtx.at<float>(i, j) = tx;
// recover;
scalar = mat.at < Vec4b > (i, j);
pixel[0] = ((scalar[0] - maxRGB[0]) / tx) + maxRGB[0]; // fixed: A_dstb/A_dstg/A_dstr were never assigned
pixel[1] = ((scalar[1] - maxRGB[1]) / tx) + maxRGB[1];
pixel[2] = ((scalar[2] - maxRGB[2]) / tx) + maxRGB[2];
removeMat.at < Vec3b > (i, j) = pixel;
}
}
return 0;
}


JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel4(
JNIEnv *, jobject, jlong matAddress, jlong removeAddress, jint r) {


Mat & mat = *(Mat*) matAddress;
Mat & removeMat = *(Mat*) removeAddress;


int i, j, k, m;
int rows = mat.rows;
int cols = mat.cols;
int block = (r << 1) + 1;
uchar * data_ptr = NULL;
uchar * output_ptr = NULL;


double minValue, maxValue, minValuer, minValueg, minValueb;
CvPoint maxPoint;
double A_dstb, A_dstg, A_dstr;


Mat darkchannel = Mat(rows, cols, CV_8UC1);
Mat minbgr = Mat(rows, cols, CV_8UC1);
Mat minbgrmin = Mat(rows, cols, CV_8UC1);


Mat Roib = Mat(block, block, CV_8UC1);
Mat Roig = Mat(block, block, CV_8UC1);
Mat Roir = Mat(block, block, CV_8UC1);


LOGD("mat = %d   mat = %d   mat = %d",
mat.at < Vec3b > (0, 0)[0], mat.at < Vec3b > (0, 0)[1], mat.at < Vec3b > (0, 0)[2]);


std::vector < cv::Mat > sbgr(mat.channels());
split(mat, sbgr);


Mat dstb = sbgr[0];
Mat dstg = sbgr[1];
Mat dstr = sbgr[2];


uchar * p = NULL;


Rect rect_roi;
rect_roi.x = 0;
rect_roi.y = 0;
rect_roi.height = block;
rect_roi.width = block;


for (i = 0; i < rows; i++) {
Vec4b * p = mat.ptr < Vec4b > (i);
for (j = 0; j < cols; j++) {
minbgr.at < uchar > (i, j) = (uchar) getminbgr((double) p[j][0],
(double) p[j][1], (double) p[j][2]);
}
}


/*
//row-wise minimum
for (i = 0; i < rows; i++) {
data_ptr = minbgr.ptr<uchar>(i);
output_ptr = minbgrmin.ptr<uchar>(i);
getMinFilter(data_ptr, output_ptr, cols, r);
}
//column-wise minimum
// unsigned char* tempData=(unsigned char*)calloc(cols,sizeof(unsigned char));
uchar tempData1[rows];
uchar tempData2[rows];
for (j = 0; j < cols; j++) {


for (i = 0; i < rows; i++) {
tempData1[i] = minbgrmin.at<uchar>(i, j);
}
getMinFilter(tempData1, tempData2, rows, r);
for (i = 0; i < rows; i++) {
darkchannel.at<uchar>(i, j) = tempData2[i];
}
}
*/
darkchannel = getDarkChannel(minbgr, block);
// getMatMaxValue(&Roir, &A_dstr, NULL);
// LOGD("A_dstb = %f,  %f,  %f", A_dstb, A_dstg, A_dstr);
//
Mat grayMat = Mat(rows, cols, CV_8UC1);

Mat grayMatf = Mat(rows, cols, CV_64FC1);
Mat darkchannelf = Mat(rows, cols, CV_64FC1);
//
// // Mat tx1 = Mat (rows, cols, CV_64FC1);
//
cvtColor(mat, grayMat, CV_RGBA2GRAY); // fixed: mat has four channels (RGBA); CV_BGR2GRAY asserts on 4-channel input
//
grayMat.convertTo(grayMatf, CV_64FC1, 1 / 255.0);
darkchannel.convertTo(darkchannelf, CV_64FC1, 1 / 255.0);
// // tx1 =  myGuidedFilter_Mat(grayMatf, darkchannelf, r*5, 0.0001);
//
// The A_dst* clamping and the recovery loop below belonged to an abandoned
// experiment (A_dstb/A_dstg/A_dstr are never assigned in this path, and tx1
// came from the guided-filter call commented out above), so the stray live
// statements are disabled:
// if (A_dstb > 220) A_dstb = 220;
// if (A_dstg > 220) A_dstg = 220;
// if (A_dstr > 220) A_dstr = 220;
// //recover the dehazed image from the estimated transmission
// Vec3b pixel;
// Scalar scalar;
// double tx;
// for (i = 0; i < rows; i++) {
// for (j = 0; j < cols; j++) {
//
// scalar = mat.at<Vec4b>(i,j);
// tx = tx1.at<double>(i,j);
// // LOGD("tx = %f", tx);
// if(tx < 0) tx = 0.1;
// pixel[0] = ((scalar[0] - A_dstb)/tx) + A_dstb;
// pixel[1] = ((scalar[1] - A_dstg)/tx) + A_dstg;
// pixel[2] = ((scalar[2] - A_dstr)/tx) + A_dstr;
// removeMat.at<Vec3b>(i,j) = pixel;
// Note: first written with Vec3b, which failed; mat has four channels.
// scalar = mat.at<Vec4b>(i, j);
// tx = (int) (255 - 0.7 * darkchannel.at<uchar>(i, j)) / 255.0;
// // LOGD("tx1 = %f", tx);
// if (tx < 0)
// tx = 0.1;
// // if (i == 0 & j < 10) LOGD("tx =   %f", tx);
// pixel[0] = ((scalar[0] - A_dstb) / tx) + A_dstb;
// pixel[1] = ((scalar[1] - A_dstg) / tx) + A_dstg;
// pixel[2] = ((scalar[2] - A_dstr) / tx) + A_dstr;
// removeMat.at<Vec3b>(i, j) = pixel;
// }
// getMatMaxValue(&darkchannel, &maxValue, &maxPoint);
uchar maxRGB[3];
getGlobalA(mat, darkchannel, maxRGB);
uchar maxA = max(maxRGB[0], maxRGB[1], maxRGB[2]);
// LOGD("darkchannel Max = %lf   x = %d   y = %d", maxValue, maxPoint.x, maxPoint.y);
/*
rect_roi.x = maxPoint.x;
rect_roi.y = maxPoint.y;


Roib = dstb(rect_roi);
getMatMaxValue(&Roib, &A_dstb, NULL);


Roig = dstg(rect_roi);
getMatMaxValue(&Roig, &A_dstg, NULL);


Roir = dstr(rect_roi);
getMatMaxValue(&Roir, &A_dstr, NULL);


*/
LOGD("A_dstb = %f   A_dstg = %f   A_dstr xxxxx = %f",
maxRGB[2], maxRGB[1], maxRGB[0]);


Vec3b pixel;
Scalar scalar;
float w = 0.95;
float dx = w / maxA;
float tx;
Mat mtx = Mat(rows, cols, CV_32FC1); // fixed: was CV_64FC1 but accessed via at<float>
Mat mtxx = Mat(rows, cols, CV_32FC1);


for (i = 0; i < rows; i++) {
for (j = 0; j < cols; j++) {
tx = 1 - dx * darkchannel.at < uchar > (i, j);
if (tx < 0.1)
tx = 0.1;
mtx.at<float>(i, j) = tx;
// recover;
scalar = mat.at < Vec4b > (i, j);
pixel[0] = ((scalar[0] - maxRGB[2]) / tx) + maxRGB[2];
pixel[1] = ((scalar[1] - maxRGB[1]) / tx) + maxRGB[1];
pixel[2] = ((scalar[2] - maxRGB[0]) / tx) + maxRGB[0];
removeMat.at < Vec3b > (i, j) = pixel;
}
}


// mtxx = myGuidedFilter_Mat(grayMatf, mtx, r * 5, 0.001);
//
// for (i = 0; i < rows; i++) {
// for (j = 0; j < cols; j++) {
// tx = mtxx.at<float>(i, j);
// // recover;
// if(tx < 0) tx = 0.1;
// scalar = mat.at < Vec4b > (i, j);
// pixel[0] = ((scalar[0] - maxRGB[2]) / tx) + maxRGB[2];
// pixel[1] = ((scalar[1] - maxRGB[1]) / tx) + maxRGB[1];
// pixel[2] = ((scalar[2] - maxRGB[0]) / tx) + maxRGB[0];
// removeMat.at < Vec3b > (i, j) = pixel;
// }
// }
return 0;
}
//examples of Mat add / subtract / multiply / divide operations
// Mat a = Mat(4,4,CV_8UC1, cvScalar(8));
// Mat b = Mat(4,4,CV_8UC1, cvScalar(2));
// Mat c = Mat(4,4,CV_8UC1);
// Mat d = Mat(4,4,CV_8UC1);
//
// c = a.mul(b);
// d = a/b;//divide(a, b);
//
// c = c + cvScalar(2);
// for (int i = 0; i < 4;i++){
// for (int j = 0; j < 4; j++){
// LOGD(" a*b = %d ", c.at<uchar>(i, j));
// LOGD(" a/b = %d ", d.at<uchar>(i, j));
// }
// }
}
JNIEXPORT jlong JNICALL Java_com_example_opencvdemo_NativeUtils_getDarkChannel5(
JNIEnv *env, jobject object, jlong matAddress, jlong removeAddress, jint r) {
int isRGB = 1;
Mat & mat = *(Mat*) matAddress; // Mat with RGBA channel
// isRGB = 0;
// Mat mat = imread("/storage/emulated/0/DCIM/Camera/25.jpg",
// CV_LOAD_IMAGE_COLOR); // Mat with BGR channel
    //get callback
    objclass=env->FindClass("com/example/opencvdemo/NativeUtils");
    mid = env->GetMethodID(objclass, "progressCallback", "(II)V");
    jfieldID progress_field = env->GetStaticFieldID(objclass, "PROGRESS_STEP_START", "I");
    jint progress = env->GetStaticIntField(objclass, progress_field);
    jint max_num = 100;
    //
    env->CallVoidMethod(object, mid,progress,max_num);
Mat & removeMat = *(Mat*) removeAddress;
int i, j, k, m;
int rows = mat.rows;
int cols = mat.cols;
int block = (r << 1) + 1;
uchar * data_ptr = NULL;
uchar * output_ptr = NULL;


double minValue, maxValue, minValuer, minValueg, minValueb;
CvPoint maxPoint;
double A_dstb, A_dstg, A_dstr;


Mat darkchannel = Mat(rows, cols, CV_8UC1);
Mat minbgr = Mat(rows, cols, CV_8UC1);
getMinRGB(mat, minbgr);
darkchannel = getDarkChannel(minbgr, block);
progress_field = env->GetStaticFieldID(objclass, "PROGRESS_STEP_GET_DARK_CHANNEL", "I");
    progress = env->GetStaticIntField(objclass, progress_field);
    env->CallVoidMethod(object, mid,progress,max_num);
uchar maxRGB[3];
getGlobalA(mat, darkchannel, maxRGB);
if (isRGB != 1){
int tmp = maxRGB[0];
maxRGB[0] = maxRGB[2];
maxRGB[2] = tmp; // fixed: was maxRGB[0], which undid the swap
}
uchar maxA = max(maxRGB[0], maxRGB[1], maxRGB[2]);
LOGD("maxRGB[0] = %d   maxRGB[1] = %d   maxRGB[2] xx = %d",
(int)maxRGB[2], (int)maxRGB[1], (int)maxRGB[0]);
progress_field = env->GetStaticFieldID(objclass, "PROGRESS_STEP_GET_A", "I");
progress = env->GetStaticIntField(objclass, progress_field);
env->CallVoidMethod(object, mid,progress,max_num);
uchar *p;
uchar *pRemove;
uchar *pImage;
float w = 0.95;
float minT = 0.1;
float dx = w / maxA;
float tx;
int offset = 0;
int offset2 = 0;
int channels = mat.channels();
int outChannels = removeMat.channels();
for (int i = 0; i < darkchannel.rows; i++) {
p = darkchannel.ptr < uchar > (i);
pRemove = removeMat.ptr < uchar > (i);
pImage = mat.ptr < uchar > (i);
offset = 0;
offset2 = 0;
for (int j = 0; j < darkchannel.cols; j++, offset += channels, offset2 +=outChannels) {
tx = (1 - dx * p[j]);
if (minT > tx || tx < 0)
tx = minT;
pRemove[offset2] = (uchar)(
(pImage[offset] - maxRGB[0]) / tx + maxRGB[0]);
pRemove[offset2 + 1] = (uchar)(
(pImage[offset + 1] - maxRGB[1]) / tx + maxRGB[1]);
pRemove[offset2 + 2] = (uchar)(
(pImage[offset + 2] - maxRGB[2]) / tx + maxRGB[2]);
}
}
progress_field = env->GetStaticFieldID(objclass, "PROGRESS_STEP_END", "I");
    progress = env->GetStaticIntField(objclass, progress_field);
env->CallVoidMethod(object, mid,progress,max_num);
env->DeleteLocalRef(objclass);
return 0;
}

 
