A Summary of Image Contrast Enhancement Algorithms (Rough Overview)

Algorithm 1: AINDANE
Paper: Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void AINDANE(const cv::Mat& src, cv::Mat& dst, int sigma1, int sigma2, int sigma3)
{
    cv::Mat I;
    cv::cvtColor(src, I, CV_BGR2GRAY);

    int histsize = 256;
    float range[] = { 0, 256 };
    const float* histRanges = { range };
    int bins = 256;
    cv::Mat hist;
    calcHist(&I, 1, 0, cv::Mat(), hist, 1, &histsize, &histRanges, true, false);

    int L = 255; // graylevel where the CDF first reaches 0.1; the loop below always sets it, since the CDF reaches 1.0
    float cdf = 0;
    int total_pixel = src.rows * src.cols;
    for (int i = 0; i < 256; i++) {
        cdf += hist.at<float>(i) / total_pixel;
        if (cdf >= 0.1) {
            L = i;
            break;
        }
    }

    double z;
    if (L <= 50)
        z = 0;
    else if (L > 150)
        z = 1;
    else
        z = (L - 50) / 100.0;

    cv::Mat I_conv1, I_conv2, I_conv3;
    cv::GaussianBlur(I, I_conv1, cv::Size(0, 0), sigma1, sigma1, cv::BORDER_CONSTANT);
    cv::GaussianBlur(I, I_conv2, cv::Size(0, 0), sigma2, sigma2, cv::BORDER_CONSTANT);
    cv::GaussianBlur(I, I_conv3, cv::Size(0, 0), sigma3, sigma3, cv::BORDER_CONSTANT);

    cv::Mat mean, stddev;
    cv::meanStdDev(I, mean, stddev);
    double global_sigma = stddev.at<double>(0, 0);

    double P;
    if (global_sigma <= 3.0)
        P = 3.0;
    else if (global_sigma >= 10.0)
        P = 1.0;
    else
        P = (27.0 - 2.0 * global_sigma) / 7.0;

    // Look-up table.
    uchar Table[256][256];
    for (int Y = 0; Y < 256; Y++) // Y represents I_conv(x,y)
    {
        for (int X = 0; X < 256; X++) // X represents I(x,y)
        {
            double i = X / 255.0; // Eq.2
            i = (std::pow(i, 0.75 * z + 0.25) + (1 - i) * 0.4 * (1 - z) + std::pow(i, 2 - z)) * 0.5; // Eq.3
            Table[Y][X] = cv::saturate_cast<uchar>(255 * std::pow(i, std::pow((Y + 1.0) / (X + 1.0), P)) + 0.5); // Eq.7 & Eq.8
        }
    }

    dst = src.clone();
    for (int r = 0; r < src.rows; r++) {
        uchar* I_it = I.ptr<uchar>(r);
        uchar* I_conv1_it = I_conv1.ptr<uchar>(r);
        uchar* I_conv2_it = I_conv2.ptr<uchar>(r);
        uchar* I_conv3_it = I_conv3.ptr<uchar>(r);
        const cv::Vec3b* src_it = src.ptr<cv::Vec3b>(r);
        cv::Vec3b* dst_it = dst.ptr<cv::Vec3b>(r);
        for (int c = 0; c < src.cols; c++) {
            uchar i = I_it[c];
            uchar i_conv1 = I_conv1_it[c];
            uchar i_conv2 = I_conv2_it[c];
            uchar i_conv3 = I_conv3_it[c];
            uchar S1 = Table[i_conv1][i];
            uchar S2 = Table[i_conv2][i];
            uchar S3 = Table[i_conv3][i];
            double S = (S1 + S2 + S3) / 3.0; // Eq.13

            /***
                The commented-out code below is the original operation (Eq.14) in the
            paper. However, the results may contain obvious color spots, because the
            difference between adjacent enhanced luminance values can be too large.
            Here is an example:
                original luminance     --->     enhanced luminance
                        1              --->             25
                        2              --->             50
                        3              --->             75
            ***/
            //dst_it[c][0] = cv::saturate_cast<uchar>(src_it[c][0] * S / i);
            //dst_it[c][1] = cv::saturate_cast<uchar>(src_it[c][1] * S / i);
            //dst_it[c][2] = cv::saturate_cast<uchar>(src_it[c][2] * S / i);

            /***
                A simple way to deal with the above problem is to limit the
            amplification; say, the amplification should not exceed 4x. You can adjust
            this value yourself, or set it adaptively.
                Uncomment the code above (and comment out the code below) to see the
            difference.
            ***/
            double cof = std::min(S / i, 4.0);
            dst_it[c][0] = cv::saturate_cast<uchar>(src_it[c][0] * cof);
            dst_it[c][1] = cv::saturate_cast<uchar>(src_it[c][1] * cof);
            dst_it[c][2] = cv::saturate_cast<uchar>(src_it[c][2] * cof);
        }
    }
    return;
}

(Rough summary) Adjusts the contrast and brightness of the input image according to its histogram, its global standard deviation, and a few parameters, to obtain a better visual result. The method is neighborhood-dependent: multi-scale Gaussian convolutions of the luminance provide the local context, and the enhancement adapts to the characteristics of the input image. This helps improve image quality and the visibility of detail.
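
A minimal driver sketch for trying the function above (the input path and the three Gaussian scales are illustrative assumptions; the implementation only requires three increasing scales). The same driver pattern works for every algorithm below; only the function call changes.

#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_COLOR); // hypothetical input path
    if (src.empty())
        return -1;

    cv::Mat dst;
    AINDANE(src, dst, 5, 20, 120); // small / medium / large neighborhood scales (illustrative values)

    cv::imwrite("AINDANE_result.png", dst);
    return 0;
}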

Algorithm 2: WTHE
Paper: Fast image/video contrast enhancement based on weighted thresholded histogram equalization

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void WTHE(const cv::Mat & src, cv::Mat & dst, float r, float v)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat YUV;
	std::vector<cv::Mat> YUV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, YUV, CV_BGR2YUV);
		cv::split(YUV, YUV_channels);
		L = YUV_channels[0];
	}

	int histsize = 256;
	float range[] = { 0,256 };
	const float* histRanges = { range };
	int bins = 256;
	cv::Mat hist;
	calcHist(&L, 1, 0, cv::Mat(), hist, 1, &histsize, &histRanges, true, false);

	float total_pixels_inv = 1.0f / total_pixels;
	cv::Mat P = hist.clone();
	for (int i = 0; i < 256; i++) {
		P.at<float>(i) = P.at<float>(i) * total_pixels_inv;
	}

	cv::Mat Pwt = P.clone();
	double minP, maxP;
	cv::minMaxLoc(P, &minP, &maxP);
	float Pu = v * maxP;
	float Pl = minP;
	for (int i = 0; i < 256; i++) {
		float Pi = P.at<float>(i);
		if (Pi > Pu)
			Pwt.at<float>(i) = Pu;
		else if (Pi < Pl)
			Pwt.at<float>(i) = 0;
		else
			Pwt.at<float>(i) = std::pow((Pi - Pl) / (Pu - Pl), r) * Pu;
	}

	cv::Mat Cwt = Pwt.clone();
	float cdf = 0;
	for (int i = 0; i < 256; i++) {
		cdf += Pwt.at<float>(i);
		Cwt.at<float>(i) = cdf;
	}

	float Wout = 255.0f;
	float Madj = 0.0f;
	std::vector<uchar> table(256, 0);
	for (int i = 0; i < 256; i++) {
		table[i] = cv::saturate_cast<uchar>(Wout * Cwt.at<float>(i) + Madj);
	}

	cv::LUT(L, table, L);

	if (channels == 1) {
		dst = L.clone();
	}
	else {
		cv::merge(YUV_channels, dst);
		cv::cvtColor(dst, dst, CV_YUV2BGR);
	}

	return;
}

Summary: Implements a histogram-transform-based enhancement method that reshapes the brightness distribution through thresholded clamping and a nonlinear power weighting, improving visual quality. The strength of the enhancement is controlled by the parameters r and v. The method handles both single-channel (grayscale) and multi-channel (color) images.
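
With the driver pattern from the AINDANE example, a hedged call sketch (the values are illustrative, not tuned recommendations):

cv::Mat dst;
// r < 1 flattens the weighted histogram (boosting low-probability bins), r > 1 accentuates peaks;
// v in (0, 1] caps each bin at v * max(P), limiting the dominance of histogram spikes.
WTHE(src, dst, 0.5f, 0.5f);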

Algorithm 3: GCEHistMod
Paper: A histogram modification framework and its application for image contrast enhancement

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void GCEHistMod(const cv::Mat& src, cv::Mat& dst, int threshold, int b, int w, double alpha, int g)
{
    int rows = src.rows;
    int cols = src.cols;
    int channels = src.channels();
    int total_pixels = rows * cols;

    cv::Mat L;
    cv::Mat HSV;
    std::vector<cv::Mat> HSV_channels;
    if (channels == 1) {
        L = src.clone();
    } else {
        cv::cvtColor(src, HSV, CV_BGR2HSV_FULL);
        cv::split(HSV, HSV_channels);
        L = HSV_channels[2];
    }

    std::vector<int> hist(256, 0);

    int k = 0;
    int count = 0;
    for (int r = 0; r < rows; r++) {
        const uchar* data = L.ptr<uchar>(r);
        for (int c = 0; c < cols; c++) {
            int diff = (c < 2) ? data[c] : std::abs(data[c] - data[c - 2]);
            k += diff;
            if (diff > threshold) {
                hist[data[c]]++;
                count++;
            }
        }
    }

    double kg = k * g;
    double k_prime = kg / std::pow(2, std::ceil(std::log2(kg)));

    double umin = 10;
    double u = std::min(count / 256.0, umin);

    std::vector<double> modified_hist(256, 0);
    double sum = 0;
    for (int i = 0; i < 256; i++) {
        if (i > b && i < w)
            modified_hist[i] = std::round((1 - k_prime) * u + k_prime * hist[i]);
        else
            modified_hist[i] = std::round(((1 - k_prime) * u + k_prime * hist[i]) / (1 + alpha));
        sum += modified_hist[i];
    }

    std::vector<double> CDF(256, 0);
    double culsum = 0;
    for (int i = 0; i < 256; i++) {
        culsum += modified_hist[i] / sum;
        CDF[i] = culsum;
    }

    std::vector<uchar> table_uchar(256, 0);
    for (int i = 1; i < 256; i++) {
        table_uchar[i] = cv::saturate_cast<uchar>(255.0 * CDF[i]);
    }

    cv::LUT(L, table_uchar, L);

    if (channels == 1) {
        dst = L.clone();
    } else {
        cv::merge(HSV_channels, dst);
        cv::cvtColor(dst, dst, CV_HSV2BGR_FULL);
    }

    return;
}

Summary: Implements a histogram-modification-based enhancement method that adjusts the shape of the histogram to improve contrast and brightness. The strength of the enhancement is controlled by the parameters threshold, b, w, alpha, and g. The method handles both single-channel (grayscale) and multi-channel (color) images.
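
A hedged call sketch in the same driver pattern (all values are illustrative assumptions):

cv::Mat dst;
// threshold: minimum local variation for a pixel to enter the histogram;
// (b, w): graylevel band kept at full weight, bins outside it are divided by 1 + alpha;
// g: scale factor applied to the accumulated variation statistic k.
GCEHistMod(src, dst, 5, 50, 200, 0.25, 5);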

Algorithm 4: LDR
Paper: Contrast Enhancement based on Layered Difference Representation of 2D Histograms

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"
#include "util.h"

void LDR(const cv::Mat& src, cv::Mat& dst, double alpha)
{
    int R = src.rows;
    int C = src.cols;

    cv::Mat Y;
    std::vector<cv::Mat> YUV_channels;
    if (src.channels() == 1) {
        Y = src.clone();
    } else {
        cv::Mat YUV;
        cv::cvtColor(src, YUV, CV_BGR2YUV);
        cv::split(YUV, YUV_channels);
        Y = YUV_channels[0];
    }

    cv::Mat U = cv::Mat::zeros(255, 255, CV_64F);
    {
        cv::Mat tmp_k(255, 1, CV_64F);
        for (int i = 0; i < 255; i++)
            tmp_k.at<double>(i) = i + 1;

        for (int layer = 1; layer <= 255; layer++) {
            cv::Mat mi, ma;
            cv::min(tmp_k, 256 - layer, mi);
            cv::max(tmp_k - layer, 0, ma);
            cv::Mat m = mi - ma;
            m.copyTo(U.col(layer - 1));
        }
    }

    // unordered 2D histogram acquisition
    cv::Mat h2d = cv::Mat::zeros(256, 256, CV_64F);
    for (int j = 0; j < R; j++) {
        for (int i = 0; i < C; i++) {
            uchar ref = Y.at<uchar>(j, i);

            if (j != R - 1) {
                uchar trg = Y.at<uchar>(j + 1, i);
                h2d.at<double>(std::max(ref, trg), std::min(ref, trg)) += 1;
            }
            if (i != C - 1) {
                uchar trg = Y.at<uchar>(j, i + 1);
                h2d.at<double>(std::max(ref, trg), std::min(ref, trg)) += 1;
            }
        }
    }

    // Intra-Layer Optimization
    cv::Mat D = cv::Mat::zeros(255, 255, CV_64F);
    cv::Mat s = cv::Mat::zeros(255, 1, CV_64F);

    for (int layer = 1; layer <= 255; layer++) {
        cv::Mat h_l = cv::Mat::zeros(256 - layer, 1, CV_64F);

        int tmp_idx = 1;
        for (int j = 1 + layer; j <= 256; j++) {
            int i = j - layer;
            h_l.at<double>(tmp_idx - 1) = std::log(h2d.at<double>(j - 1, i - 1) + 1); // Equation (2)
            tmp_idx++;
        }

        s.at<double>(layer - 1) = cv::sum(h_l)[0];

        if (s.at<double>(layer - 1) == 0)
            continue;

        cv::Mat kernel = cv::Mat::ones(layer, 1, CV_64F);
        cv::Mat m_l = conv2(h_l, kernel, ConvolutionType::CONVOLUTION_FULL); // Equation (30)

        double mi;
        cv::minMaxLoc(m_l, &mi, 0);
        cv::Mat d_l = m_l - mi;
        d_l = d_l.mul(1.0 / U.col(layer - 1)); // Equation (33)

        if (cv::sum(d_l)[0] == 0)
            continue;

        D.col(layer - 1) = d_l / cv::sum(d_l)[0];
    }

    // Inter - Layer Aggregation
    double max_s;
    cv::minMaxLoc(s, 0, &max_s);
    cv::Mat W;
    cv::pow(s / max_s, alpha, W); // Equation (23)
    cv::Mat d = D * W; // Equation (24)

    // reconstruct transformation function
    d /= cv::sum(d)[0];
    cv::Mat tmp = cv::Mat::zeros(256, 1, CV_64F);
    for (int k = 1; k <= 255; k++) {
        tmp.at<double>(k) = tmp.at<double>(k - 1) + d.at<double>(k - 1);
    }
    tmp.convertTo(tmp, CV_8U, 255.0);

    cv::LUT(Y, tmp, Y);

    if (src.channels() == 1) {
        dst = Y.clone();
    } else {
        cv::merge(YUV_channels, dst);
        cv::cvtColor(dst, dst, CV_YUV2BGR);
    }

    return;
}
#ifndef _UTIL_H
#define _UTIL_H

// This must be defined, in order to use arma::spsolve in the code with SuperLU
#define ARMA_USE_SUPERLU

#include <armadillo>
#include <iostream>
#include <opencv2/opencv.hpp>

// This is an Armadillo-based implementation of spdiags in Matlab.
arma::sp_mat spdiags(const arma::mat& B, const std::vector<int>& d, int m, int n);


enum ConvolutionType {
	/* Return the full convolution, including border */
	CONVOLUTION_FULL,
	/* Return only the part that corresponds to the original image */
	CONVOLUTION_SAME,
	/* Return only the submatrix containing elements that were not influenced by the border */
	CONVOLUTION_VALID
};

// This is an OpenCV-based implementation of conv2 in Matlab.
cv::Mat conv2(const cv::Mat &img, const cv::Mat& ikernel, ConvolutionType type);

#endif
// This must be defined, in order to use arma::spsolve in the code with SuperLU
#define ARMA_USE_SUPERLU

#include <armadillo>
#include <iostream>
#include <opencv2/opencv.hpp>

#include "util.h"

arma::sp_mat spdiags(const arma::mat& B, const std::vector<int>& d, int m, int n)
{
    arma::sp_mat A(m, n);
    for (int k = 0; k < d.size(); k++) {
        int i_min = std::max(0, -d[k]);
        int i_max = std::min(m - 1, n - d[k] - 1);
        A.diag(d[k]) = B(arma::span(0, i_max - i_min), arma::span(k, k));
    }

    return A;
}


// This is an OpenCV-based implementation of conv2 in Matlab.
cv::Mat conv2(const cv::Mat &img, const cv::Mat& ikernel, ConvolutionType type)
{
	cv::Mat dest;
	cv::Mat kernel;
	cv::flip(ikernel, kernel, -1);
	cv::Mat source = img;
	if (CONVOLUTION_FULL == type)
	{
		source = cv::Mat();
		const int additionalRows = kernel.rows - 1, additionalCols = kernel.cols - 1;
		copyMakeBorder(img, source, (additionalRows + 1) / 2, additionalRows / 2, (additionalCols + 1) / 2, additionalCols / 2, cv::BORDER_CONSTANT, cv::Scalar(0));
	}
	cv::Point anchor(kernel.cols - kernel.cols / 2 - 1, kernel.rows - kernel.rows / 2 - 1);
	int borderMode = cv::BORDER_CONSTANT;
	filter2D(source, dest, img.depth(), kernel, anchor, 0, borderMode);

	if (CONVOLUTION_VALID == type)
	{
		dest = dest.colRange((kernel.cols - 1) / 2, dest.cols - kernel.cols / 2).rowRange((kernel.rows - 1) / 2, dest.rows - kernel.rows / 2);
	}
	return dest;
}

Summary: Implements a contrast enhancement method based on the layered difference representation (LDR) of 2D histograms: graylevel differences between neighboring pixels are analyzed layer by layer, optimized within each layer, and aggregated into a single transformation function that strengthens contrast and detail. The strength of the enhancement is controlled by the parameter alpha. The method handles both single-channel (grayscale) and multi-channel (color) images.
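
A hedged call sketch (alpha = 2.5 is a commonly cited setting for this method, used here as an assumption):

cv::Mat dst;
// alpha controls how strongly frequently occurring graylevel differences are emphasized.
LDR(src, dst, 2.5);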

Algorithm 5: AGCWD
Paper: Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void AGCWD(const cv::Mat & src, cv::Mat & dst, double alpha)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat HSV;
	std::vector<cv::Mat> HSV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, HSV, CV_BGR2HSV_FULL);
		cv::split(HSV, HSV_channels);
		L = HSV_channels[2];
	}

	int histsize = 256;
	float range[] = { 0,256 };
	const float* histRanges = { range };
	int bins = 256;
	cv::Mat hist;
	calcHist(&L, 1, 0, cv::Mat(), hist, 1, &histsize, &histRanges, true, false);

	double total_pixels_inv = 1.0 / total_pixels;
	cv::Mat PDF = cv::Mat::zeros(256, 1, CV_64F);
	for (int i = 0; i < 256; i++) {
		PDF.at<double>(i) = hist.at<float>(i) * total_pixels_inv;
	}

	double pdf_min, pdf_max;
	cv::minMaxLoc(PDF, &pdf_min, &pdf_max);
	cv::Mat PDF_w = PDF.clone();
	for (int i = 0; i < 256; i++) {
		PDF_w.at<double>(i) = pdf_max * std::pow((PDF_w.at<double>(i) - pdf_min) / (pdf_max - pdf_min), alpha);
	}

	cv::Mat CDF_w = PDF_w.clone();
	double culsum = 0;
	for (int i = 0; i < 256; i++) {
		culsum += PDF_w.at<double>(i);
		CDF_w.at<double>(i) = culsum;
	}
	CDF_w /= culsum;

	std::vector<uchar> table(256, 0);
	for (int i = 1; i < 256; i++) {
		table[i] = cv::saturate_cast<uchar>(255.0 * std::pow(i / 255.0, 1 - CDF_w.at<double>(i)));
	}

	cv::LUT(L, table, L);

	if (channels == 1) {
		dst = L.clone();
	}
	else {
		cv::merge(HSV_channels, dst);
		cv::cvtColor(dst, dst, CV_HSV2BGR_FULL);
	}

	return;
}

Summary: Implements adaptive gamma correction with a weighting distribution, improving the visual result by adjusting the brightness distribution of the image. The strength of the enhancement is controlled by the parameter alpha. The method handles both single-channel (grayscale) and multi-channel (color) images.
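
A hedged call sketch (the value is an illustrative assumption):

cv::Mat dst;
// alpha shapes the weighting distribution applied to the PDF before the CDF is built;
// smaller values flatten the weighted distribution more aggressively.
AGCWD(src, dst, 0.5);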

Algorithm 6: AGCIE
Paper: An adaptive gamma correction for image enhancement

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void AGCIE(const cv::Mat & src, cv::Mat & dst)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat HSV;
	std::vector<cv::Mat> HSV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, HSV, CV_BGR2HSV_FULL);
		cv::split(HSV, HSV_channels);
		L = HSV_channels[2];
	}

	cv::Mat L_norm;
	L.convertTo(L_norm, CV_64F, 1.0 / 255.0);

	cv::Mat mean, stddev;
	cv::meanStdDev(L_norm, mean, stddev);
	double mu = mean.at<double>(0, 0);
	double sigma = stddev.at<double>(0, 0);

	double tau = 3.0;

	double gamma;
	if (4 * sigma <= 1.0 / tau) { // low-contrast
		gamma = -std::log2(sigma);
	}
	else { // high-contrast
		gamma = std::exp((1.0 - mu - sigma) / 2.0);
	}
	
	std::vector<double> table_double(256, 0);
	for (int i = 1; i < 256; i++) {
		table_double[i] = i / 255.0;
	}

	if (mu >= 0.5) { // bright image
		for (int i = 1; i < 256; i++) {
			table_double[i] = std::pow(table_double[i], gamma);
		}
	}
	else { // dark image
		double mu_gamma = std::pow(mu, gamma);
		for (int i = 1; i < 256; i++) {
			double in_gamma = std::pow(table_double[i], gamma);;
			table_double[i] = in_gamma / (in_gamma + (1.0 - in_gamma) * mu_gamma);
		}
	}
	
	std::vector<uchar> table_uchar(256, 0);
	for (int i = 1; i < 256; i++) {
		table_uchar[i] = cv::saturate_cast<uchar>(255.0 * table_double[i]);
	}

	cv::LUT(L, table_uchar, L);

	if (channels == 1) {
		dst = L.clone();
	}
	else {
		cv::merge(HSV_channels, dst);
		cv::cvtColor(dst, dst, CV_HSV2BGR_FULL);
	}

	return;
}

Summary: Implements an adaptive gamma correction method that chooses the gamma mapping according to the image's contrast characteristics (a low-contrast branch and a high-contrast branch). The effect adapts dynamically to the image's brightness and contrast to improve visual quality. The method handles both single-channel (grayscale) and multi-channel (color) images.
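
To make the branch selection concrete, a small worked example of the rule above (the mu and sigma values are hypothetical):

// Suppose cv::meanStdDev on the normalized luminance gives mu = 0.3 and sigma = 0.06.
// Then 4 * sigma = 0.24 <= 1 / tau = 1 / 3, so the image is classified as low-contrast,
// giving gamma = -log2(0.06) ≈ 4.06.
// Because mu < 0.5 the image is treated as dark, so the heavier dark-image mapping
//     out = in^gamma / (in^gamma + (1 - in^gamma) * mu^gamma)
// is used instead of the plain power law out = in^gamma.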

Algorithm 7: IAGCWD
Paper: Contrast enhancement of brightness-distorted images by improved adaptive gamma correction

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void IAGCWD(const cv::Mat & src, cv::Mat & dst, double alpha_dimmed, double alpha_bright, int T_t, double tau_t, double tau)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat HSV;
	std::vector<cv::Mat> HSV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, HSV, CV_BGR2HSV_FULL);
		cv::split(HSV, HSV_channels);
		L = HSV_channels[2];
	}

	double mean_L = cv::mean(L).val[0];
	double t = (mean_L - T_t) / T_t;

	double alpha;
	bool truncated_cdf;
	if (t < -tau_t) {
		//process dimmed image
		alpha = alpha_dimmed;
		truncated_cdf = false;
	}
	else if (t > tau_t) {
		//process bright image
		alpha = alpha_bright;
		truncated_cdf = true;
		L = 255 - L;
	}
	else {
		//do nothing
		dst = src.clone();
		return;
	}

	int histsize = 256;
	float range[] = { 0,256 };
	const float* histRanges = { range };
	int bins = 256;
	cv::Mat hist;
	calcHist(&L, 1, 0, cv::Mat(), hist, 1, &histsize, &histRanges, true, false);

	double total_pixels_inv = 1.0 / total_pixels;
	cv::Mat PDF = cv::Mat::zeros(256, 1, CV_64F);
	for (int i = 0; i < 256; i++) {
		PDF.at<double>(i) = hist.at<float>(i) * total_pixels_inv;
	}

	double pdf_min, pdf_max;
	cv::minMaxLoc(PDF, &pdf_min, &pdf_max);
	cv::Mat PDF_w = PDF.clone();
	for (int i = 0; i < 256; i++) {
		PDF_w.at<double>(i) = pdf_max * std::pow((PDF_w.at<double>(i) - pdf_min) / (pdf_max - pdf_min), alpha);
	}

	cv::Mat CDF_w = PDF_w.clone();
	double culsum = 0;
	for (int i = 0; i < 256; i++) {
		culsum += PDF_w.at<double>(i);
		CDF_w.at<double>(i) = culsum;
	}
	CDF_w /= culsum;

	cv::Mat inverse_CDF_w = 1.0 - CDF_w;
	if (truncated_cdf) {
		inverse_CDF_w = cv::max(tau, inverse_CDF_w);
	}

	std::vector<uchar> table(256, 0);
	for (int i = 1; i < 256; i++) {
		table[i] = cv::saturate_cast<uchar>(255.0 * std::pow(i / 255.0, inverse_CDF_w.at<double>(i)));
	}

	cv::LUT(L, table, L);

	if (t > tau_t) {
		L = 255 - L;
	}

	if (channels == 1) {
		dst = L.clone();
	}
	else {
		cv::merge(HSV_channels, dst);
		cv::cvtColor(dst, dst, CV_HSV2BGR_FULL);
	}

	return;
}

Summary: Implements an improved adaptive gamma correction method that picks its processing branch from the image's mean brightness: dimmed and bright images are corrected with different alpha values (bright images are processed on their negative), and images within the normal-brightness band are returned unchanged. The strength and direction of the enhancement are determined by the parameters and the image brightness. The method handles both single-channel (grayscale) and multi-channel (color) images.
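
A hedged call sketch (the values below are illustrative assumptions, not the paper's definitive settings):

cv::Mat dst;
// T_t: expected mean brightness around which the image is considered well exposed;
// tau_t: half-width of the do-nothing band on the normalized brightness deviation t;
// tau: floor applied to the truncated inverse CDF when correcting bright images.
IAGCWD(src, dst, 0.75, 0.25, 112, 0.3, 0.5);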

Algorithm 8: Ying_2017_CAIP
Paper: A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework

#define ARMA_USE_SUPERLU

#include <armadillo>
#include <dlib/global_optimization.h>
#include <dlib/optimization.h>
#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"
#include "util.h"


void Ying_2017_CAIP(const cv::Mat& src, cv::Mat& dst, double mu, double a, double b, double lambda, double sigma)
{
    clock_t start_time, end_time;

    cv::Mat L;
    if (src.channels() == 3) {
        std::vector<cv::Mat> channels;
        split(src, channels);
        L = max(max(channels[0], channels[1]), channels[2]);
    } else {
        L = src.clone();
    }

    cv::Mat normalized_L;
    L.convertTo(normalized_L, CV_64F, 1 / 255.0);

    cv::Mat normalized_half_L;
    resize(normalized_L, normalized_half_L, cv::Size(), 0.5, 0.5, cv::INTER_CUBIC);

    // start_time = clock();
    cv::Mat M_h, M_v;
    {

        cv::Mat dL_h = normalized_half_L.clone();
        normalized_half_L(cv::Range(0, dL_h.rows), cv::Range(1, dL_h.cols)).copyTo(dL_h(cv::Range(0, dL_h.rows), cv::Range(0, dL_h.cols - 1)));
        normalized_half_L(cv::Range(0, dL_h.rows), cv::Range(0, 1)).copyTo(dL_h(cv::Range(0, dL_h.rows), cv::Range(dL_h.cols - 1, dL_h.cols)));
        dL_h = dL_h - normalized_half_L;

        cv::Mat dL_v = normalized_half_L.clone();
        normalized_half_L(cv::Range(1, dL_v.rows), cv::Range(0, dL_v.cols)).copyTo(dL_v(cv::Range(0, dL_v.rows - 1), cv::Range(0, dL_v.cols)));
        normalized_half_L(cv::Range(0, 1), cv::Range(0, dL_v.cols)).copyTo(dL_v(cv::Range(dL_v.rows - 1, dL_v.rows), cv::Range(0, dL_v.cols)));
        dL_v = dL_v - normalized_half_L;

        cv::Mat kernel_h = cv::Mat(1, sigma, CV_64F, cv::Scalar::all(1));
        cv::Mat kernel_v = cv::Mat(sigma, 1, CV_64F, cv::Scalar::all(1));

        cv::Mat gauker_h, gauker_v;
        filter2D(dL_h, gauker_h, -1, kernel_h, cv::Point(0, 0), 0, cv::BORDER_CONSTANT);
        filter2D(dL_v, gauker_v, -1, kernel_v, cv::Point(0, 0), 0, cv::BORDER_CONSTANT);

        double sharpness = 0.001;
        M_h = 1.0 / (abs(gauker_h).mul(abs(dL_h)) + sharpness);
        M_v = 1.0 / (abs(gauker_v).mul(abs(dL_v)) + sharpness);
    }
    // end_time = clock();
    // MyTimeOutput("computeTextureWeight处理时间: ", start_time, end_time);

    cv::Mat normalized_T;
    {
        // start_time = clock();
        int r = normalized_half_L.rows;
        int c = normalized_half_L.cols;
        int N = r * c;

        //Since OpenCV stores data row-wise while Armadillo stores it column-wise,
        //we must transpose first when converting from OpenCV to Armadillo.
        cv::Mat M_h_t = M_h.t();
        cv::Mat M_v_t = M_v.t();
        arma::mat wx(reinterpret_cast<double*>(M_h_t.data), r, c);
        arma::mat wy(reinterpret_cast<double*>(M_v_t.data), r, c);

        arma::mat dx = -lambda * arma::reshape(wx, N, 1);
        arma::mat dy = -lambda * arma::reshape(wy, N, 1);

        arma::mat tempx = arma::shift(wx, +1, 1);
        arma::mat tempy = arma::shift(wy, +1, 0);

        arma::mat dxa = -lambda * arma::reshape(tempx, N, 1);
        arma::mat dya = -lambda * arma::reshape(tempy, N, 1);

        tempx.cols(1, c - 1).zeros();
        tempy.rows(1, r - 1).zeros();

        arma::mat dxd1 = -lambda * arma::reshape(tempx, N, 1);
        arma::mat dyd1 = -lambda * arma::reshape(tempy, N, 1);

        wx.col(c - 1).zeros();
        wy.row(r - 1).zeros();

        arma::mat dxd2 = -lambda * arma::reshape(wx, N, 1);
        arma::mat dyd2 = -lambda * arma::reshape(wy, N, 1);

        arma::mat dxd = arma::join_horiz(dxd1, dxd2);
        arma::mat dyd = arma::join_horiz(dyd1, dyd2);

        std::vector<int> x_diag_th = { -N + r, -r };
        std::vector<int> y_diag_th = { -r + 1, -1 };

        arma::sp_mat Ax = spdiags(dxd, x_diag_th, N, N);
        arma::sp_mat Ay = spdiags(dyd, y_diag_th, N, N);

        arma::mat D = 1.0 - (dx + dy + dxa + dya);

        std::vector<int> D_diag_th = { 0 };

        arma::sp_mat A = (Ax + Ay) + (Ax + Ay).t() + spdiags(D, D_diag_th, N, N);

        // end_time = clock();
        // MyTimeOutput("Before Ax=b处理时间: ", start_time, end_time);

        // start_time = clock();

        //Do transpose first.
        cv::Mat normalized_half_L_t = normalized_half_L.t();
        arma::mat normalized_half_L_mat(reinterpret_cast<double*>(normalized_half_L_t.data), r, c);
        arma::vec normalized_half_L_vec = arma::vectorise(normalized_half_L_mat);

        arma::mat t;
        arma::spsolve(t, A, normalized_half_L_vec);
        t.reshape(r, c);

        //When converting Armadillo back to OpenCV, first construct the cv::Mat with
        //row and column counts exchanged, then transpose.
        cv::Mat normalized_half_T(c, r, CV_64F, t.memptr());
        normalized_half_T = normalized_half_T.t();

        resize(normalized_half_T, normalized_T, src.size(), 0, 0, CV_INTER_CUBIC);

        // end_time = clock();
        // MyTimeOutput("Ax=b处理时间: ", start_time, end_time);
    }
    // imshow("normalized_T", normalized_T);

    cv::Mat normalized_src;
    src.convertTo(normalized_src, CV_64F, 1 / 255.0);

    // start_time = clock();
    cv::Mat J;
    {

        cv::Mat isBad = normalized_T < 0.5;
        cv::Mat isBad_50x50;
        cv::resize(isBad, isBad_50x50, cv::Size(50, 50), 0, 0, CV_INTER_NN);

        int count = countNonZero(isBad_50x50);
        if (count == 0) {
            J = normalized_src.clone();
        } else {
            isBad_50x50.convertTo(isBad_50x50, CV_64F, 1.0 / 255);

            cv::Mat normalized_src_50x50;
            cv::resize(normalized_src, normalized_src_50x50, cv::Size(50, 50), 0, 0, CV_INTER_CUBIC);
            normalized_src_50x50 = cv::max(normalized_src_50x50, 0);
            cv::Mat Y;
            {
                if (normalized_src_50x50.channels() == 3) {
                    std::vector<cv::Mat> channels;
                    split(normalized_src_50x50, channels);
                    Y = channels[0].mul(channels[1]).mul(channels[2]);
                    cv::pow(Y, 1.0 / 3, Y);
                } else {
                    Y = normalized_src_50x50;
                }
            }
            Y = Y.mul(isBad_50x50);

            dlib::matrix<double> y;
            y.set_size(Y.rows, Y.cols);
            for (int r = 0; r < Y.rows; r++) {
                for (int c = 0; c < Y.cols; c++) {
                    y(r, c) = Y.at<double>(r, c);
                }
            }

            double a = -0.3293, b = 1.1258; // NOTE: these hard-coded values shadow the function parameters a and b

            auto entropy = [&y, &a, &b](double k) {
                double beta = exp(b * (1.0 - pow(k, a)));
                double gamma = pow(k, a);
                double cost = 0;

                std::vector<int> hist(256, 0);
                for (int r = 0; r < y.nr(); r++) {
                    for (int c = 0; c < y.nc(); c++) {
                        double j = beta * pow(y(r, c), gamma);
                        int bin = int(j * 255.0);
                        if (bin < 0)
                            bin = 0;
                        else if (bin >= 255)
                            bin = 255;
                        hist[bin]++;
                    }
                }

                double N = y.nc() * y.nr();

                for (int i = 0; i < hist.size(); i++) {
                    if (hist[i] == 0)
                        continue;
                    double p = hist[i] / N;
                    cost += -p * log2(p);
                }
                return cost;
            };

            auto result = dlib::find_max_global(entropy, 1.0, 7.0, dlib::max_function_calls(20));

            double opt_k = result.x;
            double beta = exp(b * (1.0 - pow(opt_k, a)));
            double gamma = pow(opt_k, a);
            cv::pow(normalized_src, gamma, J);
            J = J * beta - 0.01;

            //cout << "beta: " << beta << endl;
            //cout << "gamma: " << gamma << endl;
            // std::cout << "opt_k: " << opt_k << std::endl;
        }
    }
    // end_time = clock();
    // MyTimeOutput("Ax=J处理时间: ", start_time, end_time);

    cv::Mat T;
    std::vector<cv::Mat> T_channels;
    for (int i = 0; i < src.channels(); i++)
        T_channels.push_back(normalized_T.clone());
    cv::merge(T_channels, T);

    cv::Mat W;
    cv::pow(T, mu, W);

    cv::Mat I2 = normalized_src.mul(W);
    cv::Mat ones_mat = cv::Mat(W.size(), src.channels() == 3 ? CV_64FC3 : CV_64FC1, cv::Scalar::all(1.0));
    cv::Mat J2 = J.mul(ones_mat - W);

    dst = I2 + J2;

    dst.convertTo(dst, CV_8U, 255);

    return;
}

Summary: Implements a fairly involved enhancement algorithm that adaptively adjusts brightness and contrast and can noticeably improve the quality of the input image. It relies on sparse linear algebra (solving Ax = b for the illumination map) and derivative-free global optimization (maximizing entropy over the exposure ratio) to carry out these adjustments.
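
A call sketch; unlike the other algorithms here, this implementation additionally needs Armadillo (built with SuperLU) and dlib to compile and link. The values are illustrative, and note that a and b are re-hardcoded inside the function (shadowing the parameters), so the camera-model stage ignores the passed-in values:

cv::Mat dst;
// mu: exponent of the illumination-based weight map; a, b: camera response model parameters;
// lambda: smoothness weight of the sparse Ax = b system; sigma: size of the texture-weight kernel.
Ying_2017_CAIP(src, dst, 0.5, -0.3293, 1.1258, 0.5, 5.0);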

Algorithm 9: CEusingLuminanceAdaptation
Paper: Retinex-based perceptual contrast enhancement in images using luminance adaptation

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void CEusingLuminanceAdaptation(const cv::Mat& src, cv::Mat& dst)
{
    cv::Mat HSV;
    cv::cvtColor(src, HSV, cv::COLOR_BGR2HSV_FULL);
    std::vector<cv::Mat> HSV_channels;
    cv::split(HSV, HSV_channels);
    cv::Mat V = HSV_channels[2];

    int ksize = 5;
    cv::Mat gauker1 = cv::getGaussianKernel(ksize, 15);
    cv::Mat gauker2 = cv::getGaussianKernel(ksize, 80);
    cv::Mat gauker3 = cv::getGaussianKernel(ksize, 250);

    cv::Mat gauV1, gauV2, gauV3;
    cv::filter2D(V, gauV1, CV_8U, gauker1, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);
    cv::filter2D(V, gauV2, CV_8U, gauker2, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);
    cv::filter2D(V, gauV3, CV_8U, gauker3, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);

    std::vector<double> lut(256, 0);
    for (int i = 0; i < 256; i++) {
        if (i <= 127)
            lut[i] = 17.0 * (1.0 - std::sqrt(i / 127.0)) + 3.0;
        else
            lut[i] = 3.0 / 128.0 * (i - 127.0) + 3.0;
        lut[i] = (-lut[i] + 20.0) / 17.0;
    }

    cv::Mat beta1, beta2, beta3;
    cv::LUT(gauV1, lut, beta1);
    cv::LUT(gauV2, lut, beta2);
    cv::LUT(gauV3, lut, beta3);

    gauV1.convertTo(gauV1, CV_64F, 1.0 / 255.0);
    gauV2.convertTo(gauV2, CV_64F, 1.0 / 255.0);
    gauV3.convertTo(gauV3, CV_64F, 1.0 / 255.0);

    V.convertTo(V, CV_64F, 1.0 / 255.0);

    cv::log(V, V);
    cv::log(gauV1, gauV1);
    cv::log(gauV2, gauV2);
    cv::log(gauV3, gauV3);

    cv::Mat r = (3.0 * V - beta1.mul(gauV1) - beta2.mul(gauV2) - beta3.mul(gauV3)) / 3.0;

    cv::Mat R;
    cv::exp(r, R);

    double R_min, R_max;
    cv::minMaxLoc(R, &R_min, &R_max);
    cv::Mat V_w = (R - R_min) / (R_max - R_min);

    V_w.convertTo(V_w, CV_8U, 255.0);

    int histsize = 256;
    float range[] = { 0, 256 };
    const float* histRanges = { range };
    cv::Mat hist;
    calcHist(&V_w, 1, 0, cv::Mat(), hist, 1, &histsize, &histRanges, true, false);

    cv::Mat pdf = hist / (src.rows * src.cols);

    double pdf_min, pdf_max;
    cv::minMaxLoc(pdf, &pdf_min, &pdf_max);
    for (int i = 0; i < 256; i++) {
        pdf.at<float>(i) = pdf_max * (pdf.at<float>(i) - pdf_min) / (pdf_max - pdf_min);
    }

    std::vector<double> cdf(256, 0);
    double accum = 0;
    for (int i = 0; i < 255; i++) {
        accum += pdf.at<float>(i);
        cdf[i] = accum;
    }
    cdf[255] = 1.0 - accum;

    double V_w_max;
    cv::minMaxLoc(V_w, 0, &V_w_max);
    for (int i = 0; i < 256; i++) { // include i = 255, otherwise lut[255] keeps a stale value from the beta table above
        lut[i] = V_w_max * std::pow((i * 1.0 / V_w_max), 1.0 - cdf[i]);
    }

    cv::Mat V_out;
    cv::LUT(V_w, lut, V_out);
    V_out.convertTo(V_out, CV_8U);

    HSV_channels[2] = V_out;
    cv::merge(HSV_channels, HSV);
    cv::cvtColor(HSV, dst, CV_HSV2BGR_FULL);

    return;
}

Summary: Enhances the image by adjusting the contrast of the luminance channel with luminance-adaptation weights, with particular emphasis on low-contrast regions. This improves visual quality and the visibility of detail.
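
The function exposes no tuning parameters; with the driver pattern above the call reduces to (the code converts BGR to HSV unconditionally, so a 3-channel input is assumed):

cv::Mat dst;
CEusingLuminanceAdaptation(src, dst);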

Algorithm 10: adaptiveImageEnhancement
Paper: Adaptive image enhancement method for correcting low-illumination images

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void adaptiveImageEnhancement(const cv::Mat& src, cv::Mat& dst)
{
    int r = src.rows;
    int c = src.cols;
    int n = r * c;

    cv::Mat HSV;
    cv::cvtColor(src, HSV, cv::COLOR_BGR2HSV_FULL);
    std::vector<cv::Mat> HSV_channels;
    cv::split(HSV, HSV_channels);
    cv::Mat S = HSV_channels[1];
    cv::Mat V = HSV_channels[2];

    int ksize = 5;
    cv::Mat gauker1 = cv::getGaussianKernel(ksize, 15);
    cv::Mat gauker2 = cv::getGaussianKernel(ksize, 80);
    cv::Mat gauker3 = cv::getGaussianKernel(ksize, 250);

    cv::Mat gauV1, gauV2, gauV3;
    cv::filter2D(V, gauV1, CV_64F, gauker1, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);
    cv::filter2D(V, gauV2, CV_64F, gauker2, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);
    cv::filter2D(V, gauV3, CV_64F, gauker3, cv::Point(-1, -1), 0, cv::BORDER_CONSTANT);

    cv::Mat V_g = (gauV1 + gauV2 + gauV3) / 3.0;

    cv::Scalar avg_S = cv::mean(S);
    double k1 = 0.1 * avg_S[0];
    double k2 = avg_S[0];

    cv::Mat V_double;
    V.convertTo(V_double, CV_64F);

    cv::Mat V1 = ((255 + k1) * V_double).mul(1.0 / (cv::max(V_double, V_g) + k1));
    cv::Mat V2 = ((255 + k2) * V_double).mul(1.0 / (cv::max(V_double, V_g) + k2));

    cv::Mat X1 = V1.reshape(0, n);
    cv::Mat X2 = V2.reshape(0, n);

    cv::Mat X(n, 2, CV_64F);
    X1.copyTo(X(cv::Range(0, n), cv::Range(0, 1)));
    X2.copyTo(X(cv::Range(0, n), cv::Range(1, 2)));

    cv::Mat covar, mean;
    cv::calcCovarMatrix(X, covar, mean, CV_COVAR_NORMAL | CV_COVAR_ROWS, CV_64F);

    cv::Mat eigenValues; //The eigenvalues are stored in the descending order.
    cv::Mat eigenVectors; //The eigenvectors are stored as subsequent matrix rows.
    cv::eigen(covar, eigenValues, eigenVectors);

    double w1 = eigenVectors.at<double>(0, 0) / (eigenVectors.at<double>(0, 0) + eigenVectors.at<double>(0, 1));
    double w2 = 1 - w1;

    cv::Mat F = w1 * V1 + w2 * V2;

    F.convertTo(F, CV_8U);

    HSV_channels[2] = F;
    cv::merge(HSV_channels, HSV);
    cv::cvtColor(HSV, dst, CV_HSV2BGR_FULL);

    return;
}

Summary: Adaptively enhances image brightness from local information and the relationship between two enhanced versions of the luminance channel, which are fused with weights derived from a principal component analysis. This makes the image easier to view under different illumination conditions and brings out detail.
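
Usage is likewise parameter-free (again assuming a 3-channel BGR input, since the code converts to HSV unconditionally):

cv::Mat dst;
adaptiveImageEnhancement(src, dst);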

Algorithm 11: JHE
Paper: A novel joint histogram equalization based image contrast enhancement

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

void JHE(const cv::Mat & src, cv::Mat & dst)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat YUV;
	std::vector<cv::Mat> YUV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, YUV, CV_BGR2YUV);
		cv::split(YUV, YUV_channels);
		L = YUV_channels[0];
	}

	// Compute average image.
	cv::Mat avg_L;
	cv::boxFilter(L, avg_L, -1, cv::Size(3, 3), cv::Point(-1,-1), true, cv::BORDER_CONSTANT);

	// Compute joint histogram.
	cv::Mat jointHist = cv::Mat::zeros(256, 256, CV_32S);
	for (int r = 0; r < rows; r++) {
		uchar* L_it = L.ptr<uchar>(r);
		uchar* avg_L_it = avg_L.ptr<uchar>(r);
		for (int c = 0; c < cols; c++) {
			int i = L_it[c];
			int j = avg_L_it[c];
			jointHist.at<int>(i, j)++;
		}
	}

	// Compute CDF.
	cv::Mat CDF = cv::Mat::zeros(256, 256, CV_32S);
	int min_CDF = total_pixels + 1;
	int cumulative = 0;
	for (int i = 0; i < 256; i++) {
		int* jointHist_it = jointHist.ptr<int>(i);
		int* CDF_it = CDF.ptr<int>(i);
		for (int j = 0; j < 256; j++) {
			int count = jointHist_it[j];
			cumulative += count;
			if (cumulative > 0 && cumulative < min_CDF)
				min_CDF = cumulative;
			CDF_it[j] = cumulative;
		}
	}

	// Compute equalized joint histogram.
	cv::Mat h_eq = cv::Mat::zeros(256, 256, CV_8U);
	for (int i = 0; i < 256; i++) {
		uchar* h_eq_it = h_eq.ptr<uchar>(i);
		int* cdf_it = CDF.ptr<int>(i);
		for (int j = 0; j < 256; j++) {
			int cur_cdf = cdf_it[j];
			h_eq_it[j] = cv::saturate_cast<uchar>(255.0 * (cur_cdf - min_CDF) / (total_pixels - 1));
		}
	}

	// Map to get enhanced image.
	for (int r = 0; r < rows; r++) {
		uchar* L_it = L.ptr<uchar>(r);
		uchar* avg_L_it = avg_L.ptr<uchar>(r);
		for (int c = 0; c < cols; c++) {
			int i = L_it[c];
			int j = avg_L_it[c];
			L_it[c] = h_eq.at<uchar>(i, j);
		}
	}

	if (channels == 1) {
		dst = L.clone();
	}
	else {
		cv::merge(YUV_channels, dst);
		cv::cvtColor(dst, dst, CV_YUV2BGR);
	}

	return;
}

Summary: Enhances the image according to the relationship between each pixel's luminance and its local average luminance, improving contrast. Its advantage is that it accounts for the correlation between neighboring pixels, which avoids the over-enhancement and noise amplification that conventional histogram equalization can introduce.
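
Usage sketch; no tuning parameters are exposed, and grayscale input is handled directly:

cv::Mat dst;
JHE(src, dst); // the joint histogram of each pixel and its 3x3 local mean drives the equalization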

Algorithm 12: SEF
Paper: An Extended Exposure Fusion and its Application to Single Image Contrast Enhancement

#include <iostream>
#include <opencv2/opencv.hpp>

#include "image_enhancement.h"

std::vector<cv::Mat> gaussian_pyramid(const cv::Mat& src, int nLevel)
{
	cv::Mat I = src.clone();
	std::vector<cv::Mat> pyr;
	pyr.push_back(I);
	for (int i = 2; i <= nLevel; i++) {
		cv::pyrDown(I, I);
		pyr.push_back(I);
	}
	return pyr;
}

std::vector<cv::Mat> laplacian_pyramid(const cv::Mat& src, int nLevel)
{
	cv::Mat I = src.clone();
	std::vector<cv::Mat> pyr;
	cv::Mat J = I.clone();
	for (int i = 1; i < nLevel; i++) {
		cv::pyrDown(J, I);
		cv::Mat J_up;
		cv::pyrUp(I, J_up, J.size());
		pyr.push_back(J - J_up);
		J = I;
	}
	pyr.push_back(J); // the coarsest level contains the residual low-pass image
	return pyr;
}

cv::Mat reconstruct_laplacian_pyramid(const std::vector<cv::Mat>& pyr)
{
	int nLevel = pyr.size();
	cv::Mat R = pyr[nLevel - 1].clone();
	for (int i = nLevel - 2; i >= 0; i--) {
		cv::pyrUp(R, R, pyr[i].size());
		R = pyr[i] + R;
	}
	return R;
}

cv::Mat multiscale_blending(const std::vector<cv::Mat>& seq, const std::vector<cv::Mat>& W)
{
	int h = seq[0].rows;
	int w = seq[0].cols;
	int n = seq.size();

	int nScRef = int(std::log(std::min(h, w)) / log(2));

	int nScales = 1;
	int hp = h;
	int wp = w;
	while(nScales < nScRef) {
		nScales++;
		hp = (hp + 1) / 2;
		wp = (wp + 1) / 2;
	}
	//std::cout << "Number of scales: " << nScales << ", residual's size: " << hp << " x " << wp << std::endl;

	std::vector<cv::Mat> pyr;
	hp = h;
	wp = w;
	for (int scale = 1; scale <= nScales; scale++) {
		pyr.push_back(cv::Mat::zeros(hp, wp, CV_64F));
		hp = (hp + 1) / 2;
		wp = (wp + 1) / 2;
	}

	for (int i = 0; i < n; i++) {
		std::vector<cv::Mat> pyrW = gaussian_pyramid(W[i], nScales);
		std::vector<cv::Mat> pyrI = laplacian_pyramid(seq[i], nScales);

		for (int scale = 0; scale < nScales; scale++) {
			pyr[scale] += pyrW[scale].mul(pyrI[scale]);
		}
	}

	return reconstruct_laplacian_pyramid(pyr);
}

void robust_normalization(const cv::Mat& src, cv::Mat& dst, double wSat = 1.0, double bSat = 1.0)
{
	int H = src.rows;
	int W = src.cols;
	int D = src.channels();
	int N = H * W;

	double vmax;
	double vmin;
	if (D > 1) {
		std::vector<cv::Mat> src_channels;
		cv::split(src, src_channels);

		cv::Mat max_channel;
		cv::max(src_channels[0], src_channels[1], max_channel);
		cv::max(max_channel, src_channels[2], max_channel);
		cv::Mat max_channel_sort;
		cv::sort(max_channel.reshape(1,1), max_channel_sort, CV_SORT_ASCENDING);
		vmax = max_channel_sort.at<double>(int(N - wSat*N / 100 + 1));

		cv::Mat min_channel;
		cv::min(src_channels[0], src_channels[1], min_channel);
		cv::min(min_channel, src_channels[2], min_channel);
		cv::Mat min_channel_sort;
		cv::sort(min_channel.reshape(1, 1), min_channel_sort, CV_SORT_ASCENDING);
		vmin = min_channel_sort.at<double>(int(bSat*N / 100));
	}
	else {
		cv::Mat src_sort;
		cv::sort(src.reshape(1, 1), src_sort, CV_SORT_ASCENDING);
		vmax = src_sort.at<double>(int(N - wSat*N / 100 + 1));
		vmin = src_sort.at<double>(int(bSat*N / 100));
	}

	if (vmax <= vmin) {
		if (D > 1)
			dst = cv::Mat(H, W, src.type(), cv::Scalar(vmax, vmax, vmax));
		else 
			dst = cv::Mat(H, W, src.type(), cv::Scalar(vmax));
	}
	else {
		cv::Scalar Ones;
		if (D > 1) {
			cv::Mat vmin3 = cv::Mat(H, W, src.type(), cv::Scalar(vmin, vmin, vmin));
			cv::Mat vmax3 = cv::Mat(H, W, src.type(), cv::Scalar(vmax, vmax, vmax));
			dst = (src - vmin3).mul(1.0 / (vmax3 - vmin3));
			Ones = cv::Scalar(1.0, 1.0, 1.0);
		}
		else {
			dst = (src - vmin) / (vmax - vmin);
			Ones = cv::Scalar(1.0);
		}
		
		cv::Mat mask_over = dst > vmax;
		cv::Mat mask_below = dst < vmin;
		mask_over.convertTo(mask_over, CV_64F, 1.0 / 255.0);
		mask_below.convertTo(mask_below, CV_64F, 1.0 / 255.0);
		
		dst = dst.mul(Ones - mask_over) + mask_over;
		dst = dst.mul(Ones - mask_below);
	}

	return;
}

/***
@inproceedings{hessel2020extended,
  title={An extended exposure fusion and its application to single image contrast enhancement},
  author={Hessel, Charles and Morel, Jean-Michel},
  booktitle={The IEEE Winter Conference on Applications of Computer Vision},
  pages={137--146},
  year={2020}
}

This is a reimplementation from https://github.com/chlsl/simulated-exposure-fusion-ipol/
***/
void SEF(const cv::Mat & src, cv::Mat & dst, double alpha, double beta, double lambda)
{
	int rows = src.rows;
	int cols = src.cols;
	int channels = src.channels();
	int total_pixels = rows * cols;

	cv::Mat L;
	cv::Mat HSV;
	std::vector<cv::Mat> HSV_channels;
	if (channels == 1) {
		L = src.clone();
	}
	else {
		cv::cvtColor(src, HSV, CV_BGR2HSV_FULL);
		cv::split(HSV, HSV_channels);
		L = HSV_channels[2];
	}

	cv::Mat L_norm;
	L.convertTo(L_norm, CV_64F, 1.0 / 255.0);

	cv::Mat src_norm;
	src.convertTo(src_norm, CV_64F, 1.0 / 255.0);

	cv::Mat C;
	if (channels == 1) {
		C = src_norm.mul(1.0 / (L_norm + std::pow(2, -16)));
	}
	else {
		cv::Mat temp = 1.0 / (L_norm + std::pow(2, -16));
		std::vector<cv::Mat> temp_arr = { temp.clone(),temp.clone(),temp.clone() };
		cv::Mat temp3;
		cv::merge(temp_arr, temp3);
		C = src_norm.mul(temp3);
	}
	
	// Compute median
	cv::Mat tmp = src.reshape(1, 1);
	cv::Mat sorted;
	cv::sort(tmp, sorted, CV_SORT_ASCENDING);
	double med = double(sorted.at<uchar>(rows * cols * channels / 2)) / 255.0;
	//std::cout << "med = " << med << std::endl;

	//Compute optimal number of images
	int Mp = 1;					// Mp = M - 1; M is the total number of images
	int Ns = int(Mp * med);		// number of images generated with fs
	int N = Mp - Ns;			// number of images generated with f
	int Nx = std::max(N, Ns);	// used to compute maximal factor
	double tmax1 = (1.0 + (Ns + 1.0) * (beta - 1.0) / Mp) / (std::pow(alpha, 1.0 / Nx));			// t_max k=+1
	double tmin1s = (-beta + (Ns - 1.0) * (beta - 1.0) / Mp) / (std::pow(alpha, 1.0 / Nx)) + 1.0;	// t_min k=-1
	double tmax0 = 1.0 + Ns*(beta - 1.0) / Mp;														// t_max k=0
	double tmin0 = 1.0 - beta + Ns*(beta - 1.0) / Mp;												// t_min k=0
	while (tmax1 < tmin0 || tmax0 < tmin1s) {
		Mp++;
		Ns = int(Mp * med);
		N = Mp - Ns;
		Nx = std::max(N, Ns);
		tmax1 = (1.0 + (Ns + 1.0) * (beta - 1.0) / Mp) / (std::pow(alpha, 1.0 / Nx));
		tmin1s = (-beta + (Ns - 1.0) * (beta - 1.0) / Mp) / (std::pow(alpha, 1.0 / Nx)) + 1.0;
		tmax0 = 1.0 + Ns*(beta - 1.0) / Mp;
		tmin0 = 1.0 - beta + Ns*(beta - 1.0) / Mp;
		if (Mp > 49) {
			std::cerr << "The estimation of the number of images required in the sequence did not converge, please check the parameters!" << std::endl;
			break; // avoid an endless loop when the parameters admit no valid sequence
		}
	}

	// std::cout << "M = " << Mp + 1 << ", with N = " << N << " and Ns = " << Ns << std::endl;

	// Remapping functions
	auto fun_f = [alpha, Nx](cv::Mat t, int k) {	// enhance dark parts
		return std::pow(alpha, k * 1.0 / Nx) * t;
	};
	auto fun_fs = [alpha, Nx](cv::Mat t, int k) {	// enhance bright parts
		return std::pow(alpha, -k * 1.0 / Nx) * (t - 1.0) + 1.0;
	};

	// Offset for the dynamic range reduction (function "g")
	auto fun_r = [beta, Ns, Mp](int k) {
		return (1.0 - beta / 2.0) - (k + Ns) * (1.0 - beta) / Mp;
	};

	// Reduce dynamic (using offset function "r")
	double a = beta / 2 + lambda;
	double b = beta / 2 - lambda;
	auto fun_g = [fun_r, beta, a, b, lambda](cv::Mat t, int k) {
		auto rk = fun_r(k);
		cv::Mat diff = t - rk;
		cv::Mat abs_diff = cv::abs(diff);

		cv::Mat mask = abs_diff <= beta / 2;
		mask.convertTo(mask, CV_64F, 1.0 / 255.0);

		cv::Mat sign = diff.mul(1.0 / abs_diff);

		return mask.mul(t) + (1.0 - mask).mul(sign.mul(a - lambda * lambda / (abs_diff - b)) + rk);
	};

	// final remapping functions: h = g o f
	auto fun_h = [fun_f, fun_g](cv::Mat t, int k) {		// create brighter images (k>=0) (enhance dark parts)
		return fun_g(fun_f(t, k), k);
	};
	auto fun_hs = [fun_fs, fun_g](cv::Mat t, int k) {	// create darker images (k<0) (enhance bright parts)
		return fun_g(fun_fs(t, k), k);
	};

	// derivative of g with respect to t
	auto fun_dg = [fun_r, beta, b, lambda](cv::Mat t, int k) {
		auto rk = fun_r(k);
		cv::Mat diff = t - rk;
		cv::Mat abs_diff = cv::abs(diff);

		cv::Mat mask = abs_diff <= beta / 2;
		mask.convertTo(mask, CV_64F, 1.0 / 255.0);

		cv::Mat p;
		cv::pow(abs_diff - b, 2, p);

		return mask + (1.0 - mask).mul(lambda * lambda / p);
	};

	// derivative of the remapping functions: dh = f' x g' o f
	auto fun_dh = [alpha, Nx, fun_f, fun_dg](cv::Mat t, int k) {
		return std::pow(alpha, k * 1.0 / Nx) * fun_dg(fun_f(t, k), k);
	};
	auto fun_dhs = [alpha, Nx, fun_fs, fun_dg](cv::Mat t, int k) {
		return std::pow(alpha, -k * 1.0 / Nx) * fun_dg(fun_fs(t, k), k);
	};

	// Simulate a sequence from image L_norm and compute the contrast weights
	std::vector<cv::Mat> seq(N + Ns + 1);
	std::vector<cv::Mat> wc(N + Ns + 1);

	for (int k = -Ns; k <= N; k++) {
		cv::Mat seq_temp, wc_temp;
		if (k < 0) {
			seq_temp = fun_hs(L_norm, k);	// Apply remapping function
			wc_temp = fun_dhs(L_norm, k);	// Compute contrast measure
		}
		else {
			seq_temp = fun_h(L_norm, k);	// Apply remapping function
			wc_temp = fun_dh(L_norm, k);	// Compute contrast measure
		}

		// Detect values outside [0,1]
		cv::Mat mask_sup = seq_temp > 1.0;
		cv::Mat mask_inf = seq_temp < 0.0;
		mask_sup.convertTo(mask_sup, CV_64F, 1.0 / 255.0);
		mask_inf.convertTo(mask_inf, CV_64F, 1.0 / 255.0);
		// Clip them
		seq_temp = seq_temp.mul(1.0 - mask_sup) + mask_sup;
		seq_temp = seq_temp.mul(1.0 - mask_inf);
		// Set to 0 contrast of clipped values
		wc_temp = wc_temp.mul(1.0 - mask_sup);
		wc_temp = wc_temp.mul(1.0 - mask_inf);

		seq[k + Ns] = seq_temp.clone();
		wc[k + Ns] = wc_temp.clone();
	}

	// Compute well-exposedness weights and final normalized weights
	std::vector<cv::Mat> we(N + Ns + 1);
	std::vector<cv::Mat> w(N + Ns + 1);
	cv::Mat sum_w = cv::Mat::zeros(rows, cols, CV_64F);

	for (int i = 0; i < we.size(); i++) {
		cv::Mat p, we_temp, w_temp;
		cv::pow(seq[i] - 0.5, 2, p);
		cv::exp(-0.5*p / (0.2*0.2), we_temp);

		w_temp = wc[i].mul(we_temp);

		we[i] = we_temp.clone();
		w[i] = w_temp.clone();

		sum_w = sum_w + w[i];
	}

	sum_w = 1.0 / sum_w;
	for (int i = 0; i < we.size(); i++) {
		w[i] = w[i].mul(sum_w);
	}

	// Multiscale blending
	cv::Mat lp = multiscale_blending(seq, w);

	if (channels == 1) {
		lp = lp.mul(C);
	}
	else {
		std::vector<cv::Mat> lp3 = { lp.clone(),lp.clone(),lp.clone() };
		cv::merge(lp3, lp);
		lp = lp.mul(C);
	}

	robust_normalization(lp, lp);

	lp.convertTo(dst, CV_8U, 255);

	return;
}

Summary: The main idea is to simulate a sequence of images at different exposure levels from the single input and fuse them with per-pixel weights, improving contrast and visibility. SEF also applies a robust normalization to the fused result so that pixel values stay within the expected range. The parameters can be tuned for the application at hand to get the best enhancement.
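
A hedged call sketch (the values below are illustrative assumptions, not the paper's definitive settings):

cv::Mat dst;
// alpha: maximal contrast amplification across the simulated sequence;
// beta: reduced dynamic range targeted by the offset function g;
// lambda: smoothing of g near its clipping points.
SEF(src, dst, 6.0, 0.5, 0.125);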

Reference: https://github.com/dengyueyun666/Image-Contrast-Enhancement
