[OpenCV in Practice] 44: Image Super-Resolution with OpenCV


Image super-resolution means recovering a high-resolution image from a low-resolution image or image sequence. It is an important research topic in computer vision and is widely used in medical image analysis, biometrics, video surveillance, security and other fields. With the development of deep learning, deep-learning-based super-resolution methods have achieved better quality than traditional methods on many benchmark tasks.


For a survey of deep-learning-based image super-resolution, see:

【超分辨率】—图像超分辨率(Super-Resolution)技术研究

For an introduction to deep-learning-based image super-resolution and its latest progress, see:

【超分辨率】—基于深度学习的图像超分辨率最新进展与趋势

The dnn_superres module in the OpenCV contrib repository implements deep-learning-based image super-resolution; this article shows how to use it. For a walkthrough of the dnn_superres code, see:

Super Resolution using Convolutional Neural Networks

This article requires the OpenCV contrib modules. For building and installing OpenCV contrib, see:

OpenCV_contrib库在windows下编译使用指南

All code for this article is available at:

OpenCV-Practical-Exercise

1 Overview of the OpenCV dnn_superres module

dnn_superres ships four deep-learning algorithms for upscaling images by factors of 2 to 4 (LapSRN additionally offers 8x). The models are described below; a short sketch after the list summarizes the supported scales and typical model file names.
EDSR

  • Model and official code: EDSR_Tensorflow
  • Paper: Enhanced Deep Residual Networks for Single Image Super-Resolution
  • Model size: ~38.5 MB. This is a quantized version so that it can be uploaded to GitHub (the original model is 150 MB).
  • Model parameters: x2, x3 and x4 trained models are provided
  • Advantages: high accuracy
  • Disadvantages: large model file and slow inference
  • Speed: less than 3 s per scaling factor on a 256x256 image with an Intel i7-9700K CPU

ESPCN

  • Model and official code: TF-ESPCN
  • Paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
  • Model size: ~100 KB
  • Model parameters: x2, x3 and x4 trained models are provided
  • Advantages: small and fast enough for real-time video upscaling
  • Disadvantages: less accurate than EDSR
  • Speed: less than 0.01 s per scaling factor on a 256x256 image with an Intel i7-9700K CPU

FSRCNN

  • Model and official code: FSRCNN_Tensorflow
  • Paper: Accelerating the Super-Resolution Convolutional Neural Network
  • Model size: ~40 KB (about 9 KB for FSRCNN-small)
  • Model parameters: x2, x3 and x4 trained models are provided, plus "small" variants
  • Advantages: fast and tiny
  • Disadvantages: not as accurate
  • Speed: less than 0.01 s per scaling factor on a 256x256 image with an Intel i7-9700K CPU
  • Note: FSRCNN-small has fewer parameters, so it is less accurate but even faster

LapSRN

  • Model and official code: TF-LAPSRN
  • Paper: Deep laplacian pyramid networks for fast and accurate super-resolution
  • Model size: 1-5 MB
  • Model parameters: x2, x4 and x8 trained models are provided
  • Advantages: multi-scale super-resolution in a single forward pass; supports 2x, 4x, 8x as well as (2x, 4x) and (2x, 4x, 8x)
  • Disadvantages: slower than ESPCN and FSRCNN, and less accurate than EDSR
  • Speed: less than 0.1 s per scaling factor on a 256x256 image with an Intel i7-9700K CPU
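As a quick reference for the models above, the sketch below maps each algorithm name accepted by setModel to the scales it supports and to an illustrative model file name. The file-name pattern is an assumption based on the released .pb files used later in this article (e.g. FSRCNN-small_x4.pb, LapSRN_x8.pb); verify it against the files you actually download.

# Illustrative mapping from dnn_superres algorithm names to supported scales and
# typical model file names (the file names are assumptions, check your downloads)
MODELS = {
    "edsr": {"scales": [2, 3, 4], "file": "EDSR_x{s}.pb"},
    "espcn": {"scales": [2, 3, 4], "file": "ESPCN_x{s}.pb"},
    "fsrcnn": {"scales": [2, 3, 4], "file": "FSRCNN_x{s}.pb"},  # FSRCNN-small_x{s}.pb also uses "fsrcnn"
    "lapsrn": {"scales": [2, 4, 8], "file": "LapSRN_x{s}.pb"},
}

def model_path(algorithm, scale, model_dir="./model"):
    """Return the expected .pb path for an algorithm/scale pair."""
    info = MODELS[algorithm]
    if scale not in info["scales"]:
        raise ValueError("%s has no x%d model" % (algorithm, scale))
    return "%s/%s" % (model_dir, info["file"].format(s=scale))

# Example: model_path("lapsrn", 8) returns "./model/LapSRN_x8.pb"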

2 Using the OpenCV dnn_superres module

2.1 Single-output image super-resolution

2.1.1 API overview

In this section we learn how to upscale an image with a pre-trained network through the functions in dnn_superres. In essence this just builds and runs a dnn model, but dnn_superres wraps the model-specific calls behind a common interface. The calling pattern is as follows:
C++

// Make dnn super resolution instance
DnnSuperResImpl sr;
// Read the model
sr.readModel(path);
// Set the algorithm and upscaling factor
sr.setModel(algorithm, scale);
// Upscale the image
sr.upsample(img, img_new);

Python

# Create the super-resolution object
sr = dnn_superres.DnnSuperResImpl_create()
# Read the model
sr.readModel(path)
# Set the algorithm and upscaling factor
sr.setModel(algorithm, scale)
# Upscale the image
img_new = sr.upsample(img)
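Tying these calls together, here is a minimal self-contained Python sketch. The model path is a placeholder for whichever .pb file you downloaded; getAlgorithm() and getScale() (also used in the benchmark code later in this article) serve as a sanity check that setModel matches the loaded file.

import cv2
from cv2 import dnn_superres

sr = dnn_superres.DnnSuperResImpl_create()
# The algorithm string and scale must match the model file loaded by readModel
sr.readModel("./model/FSRCNN_x4.pb")  # placeholder path
sr.setModel("fsrcnn", 4)
print(sr.getAlgorithm(), sr.getScale())  # sanity check: prints "fsrcnn 4"

img = cv2.imread("./image/image.png")
if img is not None:
    result = sr.upsample(img)
    print(img.shape, "->", result.shape)  # height and width are multiplied by 4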

2.1.2 Example code

The example below upscales an image by a given factor, either with OpenCV's built-in resize function or with one of the deep-learning models. C++ and Python versions follow.
C++/dnn_superres.cpp

// 图像超分放大单输出
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn_superres.hpp>
using namespace std;
using namespace cv;
using namespace dnn;
using namespace dnn_superres;
int main()
{
	string img_path = string("./image/image.png");
	// 可选择算法,bilinear, bicubic, edsr, espcn, fsrcnn or lapsrn
	string algorithm = string("fsrcnn");
	// 放大比例,可输入值2,3,4
	int scale = 4;
	// 模型路径
	string path = "./model/FSRCNN-small_x4.pb";
	// Load the image
	// 载入图像
	Mat img = cv::imread(img_path);
	// 如果输入的图像为空
	if (img.empty())
	{
		std::cerr << "Couldn't load image: " << img << "\n";
		return -2;
	}
	Mat original_img(img);
	// Make dnn super resolution instance
	// 创建dnn超分辨率对象
	DnnSuperResImpl sr;
	// 超分放大后的图像
	Mat img_new;
	// 双线性插值
	if (algorithm == "bilinear")
	{
		resize(img, img_new, Size(), scale, scale, cv::INTER_LINEAR);
	}
	// 双三次插值
	else if (algorithm == "bicubic")
	{
		resize(img, img_new, Size(), scale, scale, cv::INTER_CUBIC);
	}
	else if (algorithm == "edsr" || algorithm == "espcn" || algorithm == "fsrcnn" || algorithm == "lapsrn")
	{
		// 读取模型
		sr.readModel(path);
		// 设定算法和放大比例
		sr.setModel(algorithm, scale);
		// 放大图像
		sr.upsample(img, img_new);
	}
	else
	{
		std::cerr << "Algorithm not recognized. \n";
	}
	// 如果失败
	if (img_new.empty())
	{
		// 放大失败
		std::cerr << "Upsampling failed. \n";
		return -3;
	}
	cout << "Upsampling succeeded. \n";
	// Display image
	// 展示图片
	cv::namedWindow("Initial Image", WINDOW_AUTOSIZE);
	// 初始化图片
	cv::imshow("Initial Image", img_new);
	//cv::imwrite("./saved.jpg", img_new);
	cv::waitKey(0);
	return 0;
}

Python/dnn_superres.py

# -*- coding: utf-8 -*-
"""
Created on Fri Aug 20 20:08:22 2020
@author: luohenyueji
图像超分放大单输出
"""
import cv2
from cv2 import dnn_superres
def main():
    img_path = "./image/image.png"
    # 可选择算法,bilinear, bicubic, edsr, espcn, fsrcnn or lapsrn
    algorithm = "bilinear"
    # 放大比例,可输入值2,3,4
    scale = 4
    # 模型路径
    path = "./model/LapSRN_x4.pb"
    # 载入图像
    img = cv2.imread(img_path)
    # 如果输入的图像为空
    if img is None:
        print("Couldn't load image: " + str(img_path))
        return
    original_img = img.copy()
    # 创建模型
    sr = dnn_superres.DnnSuperResImpl_create()
    if algorithm == "bilinear":
        img_new = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    elif algorithm == "bicubic":
        img_new = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    elif algorithm == "edsr" or algorithm == "espcn" or algorithm == "fsrcnn" or algorithm == "lapsrn":
        # 读取模型
        sr.readModel(path)
        #  设定算法和放大比例
        sr.setModel(algorithm, scale)
        # 放大图像
        img_new = sr.upsample(img)
    else:
        print("Algorithm not recognized")
        return
    # 如果失败
    if img_new is None:
        print("Upsampling failed")
        return
    print("Upsampling succeeded. \n")
    # Display
    # 展示图片
    cv2.namedWindow("Initial Image", cv2.WINDOW_AUTOSIZE)
    # 初始化图片
    cv2.imshow("Initial Image", img_new)
    cv2.imwrite("./saved.jpg", img_new)
    cv2.waitKey(0)
if __name__ == '__main__':
    main()

2.1.3 Results

The results of 4x upscaling with the different algorithms are shown below:

(Image comparison, 4x upscaling: original, bilinear, bicubic, edsr, espcn, fsrcnn, fsrcnn-small, lapsrn)

2.2 Multi-output image super-resolution

2.2.1 API overview

This section shows how to upscale an image using LapSRN's multiple outputs. OpenCV's dnn module can read several output nodes in a single forward pass if their names are given, so a LapSRN model can deliver several results in one inference run. LapSRN supports 2x, 4x, 8x as well as (2x, 4x) and (2x, 4x, 8x) super-resolution. The trained LapSRN model files use the following output node names:

  • x2 model: NCHW_output
  • x4 model: NCHW_output_2x, NCHW_output_4x
  • x8 model: NCHW_output_2x, NCHW_output_4x, NCHW_output_8x

That said, the feature is of limited practical value, since LapSRN's quality is only average, but it is still worth a look.
Because the related Python implementation has issues, only C++ code is provided for this part. The calling pattern is shown below: compared with single-output upscaling, you additionally specify the output node names and collect each node's result via upsampleMultioutput.

// Multiple upscaling factors, separated by ','
string scales_str = string("2,4,8");
// Output node names corresponding to the requested factors
// They must be chosen to match the model and the factors: NCHW_output_2x, NCHW_output_4x, NCHW_output_8x
string output_names_str = string("NCHW_output_2x,NCHW_output_4x,NCHW_output_8x");
// Make dnn super resolution instance
DnnSuperResImpl sr;
// setModel takes the largest requested factor
int scale = *max_element(scales.begin(), scales.end());
std::vector<Mat> outputs;
// Read the model
sr.readModel(path);
// Set the algorithm and (maximum) scale
sr.setModel("lapsrn", scale);
// Upscale with multiple outputs, one Mat per requested factor
sr.upsampleMultioutput(img, outputs, scales, node_names);

2.2.2 Example code

C++/dnn_superres_multioutput.cpp

// 图像超分放大多输出
#include <iostream>
#include <sstream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn_superres.hpp>
using namespace std;
using namespace cv;
using namespace dnn_superres;
int main()
{
	// 图像路径
	string img_path = string("./image/image.png");
	if (img_path.empty())
	{
		printf("image is empty!");
	}
	// 可选多输入放大比例2,4,8。','分隔放大比例
	string scales_str = string("2,4,8");
	// 可选模型输出放大层比例名,NCHW_output_2x,NCHW_output_4x,NCHW_output_8x
	// 需要根据模型和输入放大比例共同确定确定
	string output_names_str = string("NCHW_output_2x,NCHW_output_4x,NCHW_output_8x");
	// 模型路径
	std::string path = string("./model/LapSRN_x8.pb");
	// Parse the scaling factors
	// 解析放大比例因子
	std::vector<int> scales;
	char delim = ',';
	{
		std::stringstream ss(scales_str);
		std::string token;
		while (std::getline(ss, token, delim))
		{
			scales.push_back(atoi(token.c_str()));
		}
	}
	// Parse the output node names
	// 解析模型放大层参数
	std::vector<String> node_names;
	{
		std::stringstream ss(output_names_str);
		std::string token;
		while (std::getline(ss, token, delim))
		{
			node_names.push_back(token);
		}
	}
	// Load the image
	// 导入图片
	Mat img = cv::imread(img_path);
	Mat original_img(img);
	if (img.empty())
	{
		std::cerr << "Couldn't load image: " << img << "\n";
		return -2;
	}
	// Make dnn super resolution instance
	// 创建Dnn Superres对象
	DnnSuperResImpl sr;
	// 获得最大放大比例
	int scale = *max_element(scales.begin(), scales.end());
	std::vector<Mat> outputs;
	// 读取模型
	sr.readModel(path);
	// 设定模型输出
	sr.setModel("lapsrn", scale);
	// 多输出超分放大图像
	sr.upsampleMultioutput(img, outputs, scales, node_names);
	for (unsigned int i = 0; i < outputs.size(); i++)
	{
		cv::namedWindow("Upsampled image", WINDOW_AUTOSIZE);
		// 在图上显示当前放大比例
		cv::putText(outputs[i], format("Scale %d", scales[i]), Point(10, 30), FONT_HERSHEY_PLAIN, 2.0, Scalar(255, 0, 255), 2, LINE_AA);
		cv::imshow("Upsampled image", outputs[i]);
		cv::imwrite(to_string(i) + ".jpg", outputs[i]);
		cv::waitKey(-1);
	}
	return 0;
}

2.2.3 Results

The results of LapSRN at 2x, 4x and 8x upscaling are shown below:

(Image comparison: original, LapSRN_x2, LapSRN_x4, LapSRN_x8)

2.3 Video super-resolution

Video super-resolution simply extracts every frame of the video and upscales each frame; the code is below. In practice it is not recommended on a modest machine, since the hardware requirements are high; running the DNN on the GPU through OpenCV's CUDA backend is advisable (a GPU sketch follows the Python code below).
C++/dnn_superres_video.cpp

// 视频超分放大
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn_superres.hpp>
using namespace std;
using namespace cv;
using namespace dnn_superres;
int main()
{
	string input_path = string("./video/chaplin.mp4");
	string output_path = string("./video/https://gitee.com/luohenyueji/article_picture_warehouse/raw/master/CSDN/%5BOpenCV%E5%AE%9E%E6%88%98%5D44%20%E4%BD%BF%E7%94%A8OpenCV%E8%BF%9B%E8%A1%8C%E5%9B%BE%E5%83%8F%E8%B6%85%E5%88%86%E6%94%BE%E5%A4%A7/out_chaplin.mp4");
	// 选择模型 edsr, espcn, fsrcnn or lapsrn
	string algorithm = string("lapsrn");
	// 放大比例,2,3,4,8,根据模型结构选择
	int scale = 2;
	// 模型路径
	string path = string("./model/LapSRN_x2.pb");
	// 打开视频
	VideoCapture input_video(input_path);
	// 如果视频没有打开,直接退出
	if (!input_video.isOpened())
	{
		std::cerr << "Could not open the video." << std::endl;
		return -1;
	}
	// 输入视频编码格式
	int ex = static_cast<int>(input_video.get(CAP_PROP_FOURCC));
	// 获得输出视频图像尺寸
	Size S = Size((int)input_video.get(CAP_PROP_FRAME_WIDTH) * scale,
		(int)input_video.get(CAP_PROP_FRAME_HEIGHT) * scale);
	VideoWriter output_video;
	output_video.open(output_path, ex, input_video.get(CAP_PROP_FPS), S, true);
	// 读取超分放大模型
	DnnSuperResImpl sr;
	sr.readModel(path);
	sr.setModel(algorithm, scale);
	for (;;)
	{
		Mat frame, output_frame;
		input_video >> frame;
		if (frame.empty())
			break;
		// 上采样图像
		sr.upsample(frame, output_frame);
		output_video << output_frame;
		namedWindow("Upsampled video", WINDOW_AUTOSIZE);
		imshow("Upsampled video", output_frame);
		namedWindow("Original video", WINDOW_AUTOSIZE);
		imshow("Original video", frame);
		char c = (char)waitKey(1);
		// esc退出
		if (c == 27)
		{
			break;
		}
	}
	input_video.release();
	output_video.release();
	return 0;
}

Python/dnn_superres_video.py

# -*- coding: utf-8 -*-
"""
Created on Fri Aug 20 21:08:22 2020
@author: luohenyueji
视频超分放大
"""
import cv2
from cv2 import dnn_superres
def main():
    input_path = "./video/chaplin.mp4"
    output_path = "./video/https://gitee.com/luohenyueji/article_picture_warehouse/raw/master/CSDN/%5BOpenCV%E5%AE%9E%E6%88%98%5D44%20%E4%BD%BF%E7%94%A8OpenCV%E8%BF%9B%E8%A1%8C%E5%9B%BE%E5%83%8F%E8%B6%85%E5%88%86%E6%94%BE%E5%A4%A7/out_chaplin.mp4"
    # 选择模型 edsr, espcn, fsrcnn or lapsrn
    algorithm = "lapsrn"
    # 放大比例,2,3,4,8,根据模型结构选择
    scale = 2
    # 模型路径
    path = "./model/LapSRN_x2.pb"
    # 打开视频
    input_video = cv2.VideoCapture(input_path)
    # 输入图像编码尺寸
    ex = int(input_video.get(cv2.CAP_PROP_FOURCC))
    # 如果视频没有打开
    if not input_video.isOpened():
        print("Could not open the video.")
        return
    # 获得输出视频图像尺寸
    S = (
    int(input_video.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale, int(input_video.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale)
    output_video = cv2.VideoWriter(output_path, ex, input_video.get(cv2.CAP_PROP_FPS), S, True)
    # 读取超分放大模型
    sr = dnn_superres.DnnSuperResImpl_create()
    sr.readModel(path)
    sr.setModel(algorithm, scale)
    while True:
        ret, frame = input_video.read()  # 捕获一帧图像
        if not ret:
            # 视频读取结束
            break
        # 上采样图像
        output_frame = sr.upsample(frame)
        output_video.write(output_frame)
        cv2.namedWindow("Upsampled video", cv2.WINDOW_AUTOSIZE);
        cv2.imshow("Upsampled video", output_frame)
        cv2.namedWindow("Original video", cv2.WINDOW_AUTOSIZE);
        cv2.imshow("Original video", frame)
        c = cv2.waitKey(1);
        # esc退出
        if 27 == c:
            break
    input_video.release()
    output_video.release()
if __name__ == '__main__':
    main()
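Since per-frame DNN inference dominates the runtime here, moving inference onto the GPU helps the most. The sketch below is a minimal example under two assumptions: your OpenCV contrib build was compiled with the CUDA DNN backend, and your version of dnn_superres exposes setPreferableBackend/setPreferableTarget (available in recent contrib releases); otherwise fall back to the CPU code above.

import cv2
from cv2 import dnn_superres

sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel("./model/LapSRN_x2.pb")
sr.setModel("lapsrn", 2)
# Assumption: these setters are exposed and OpenCV was built with CUDA support
sr.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
sr.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

img = cv2.imread("./image/image.png")
if img is not None:
    out = sr.upsample(img)  # inference now runs on the GPU
    cv2.imwrite("./gpu_upsampled.jpg", out)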

3 Performance comparison of different super-resolution algorithms

3.1 Quality evaluation of different super-resolution algorithms

The quality of the upscaled images is evaluated with PSNR and SSIM; for both metrics, larger values mean less distortion. For an introduction to PSNR and SSIM see the post: PSNR和SSIM
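For reference, both metrics compare the upscaled image K with the ground-truth image I of size m x n (MAX is the peak pixel value, 255 for 8-bit images):

MSE = (1 / (m·n)) · Σ_i Σ_j [ I(i,j) − K(i,j) ]²
PSNR = 10 · log10( MAX² / MSE )
SSIM(x, y) = ( (2·μx·μy + c1)(2·σxy + c2) ) / ( (μx² + μy² + c1)(σx² + σy² + c2) )

where μ, σ² and σxy are local means, variances and covariance computed over image windows, and c1, c2 are small constants that stabilize the division.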
This section compares the PSNR and SSIM obtained by the four algorithms. Due to hardware limitations only 2x upscaling is measured; other factors can be tried on your own. The code is below:
C++/dnn_superres_benchmark_quality.cpp

// 不同图像超分算法效果评估
#include <iostream>
#include <opencv2/opencv_modules.hpp>
#include <opencv2/dnn_superres.hpp>
#include <opencv2/quality.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace std;
using namespace cv;
using namespace dnn_superres;
// 展示图片
static void showBenchmark(vector<Mat> images, string title, Size imageSize,
	const vector<String> imageTitles,
	const vector<double> psnrValues,
	const vector<double> ssimValues)
{
	// 文字信息
	int fontFace = FONT_HERSHEY_COMPLEX_SMALL;
	int fontScale = 1;
	Scalar fontColor = Scalar(255, 255, 255);
	// 图像数量
	int len = static_cast<int>(images.size());
	int cols = 2, rows = 2;
	// 建立背景图像
	Mat fullImage = Mat::zeros(Size((cols * 10) + imageSize.width * cols, (rows * 10) + imageSize.height * rows),
		images[0].type());
	stringstream ss;
	int h_ = -1;
	// 拼接显示图片
	for (int i = 0; i < len; i++)
	{
		int fontStart = 15;
		int w_ = i % cols;
		if (i % cols == 0)
			h_++;
		Rect ROI((w_ * (10 + imageSize.width)), (h_ * (10 + imageSize.height)), imageSize.width, imageSize.height);
		Mat tmp;
		resize(images[i], tmp, Size(ROI.width, ROI.height));
		ss << imageTitles[i];
		putText(tmp,
			ss.str(),
			Point(5, fontStart),
			fontFace,
			fontScale,
			fontColor,
			1,
			16);
		ss.str("");
		fontStart += 20;
		ss << "PSNR: " << psnrValues[i];
		putText(tmp,
			ss.str(),
			Point(5, fontStart),
			fontFace,
			fontScale,
			fontColor,
			1,
			16);
		ss.str("");
		fontStart += 20;
		ss << "SSIM: " << ssimValues[i];
		putText(tmp,
			ss.str(),
			Point(5, fontStart),
			fontFace,
			fontScale,
			fontColor,
			1,
			16);
		ss.str("");
		fontStart += 20;
		tmp.copyTo(fullImage(ROI));
	}
	namedWindow(title, 1);
	imshow(title, fullImage);
	imwrite("save.jpg", fullImage);
	waitKey();
}
static Vec2d getQualityValues(Mat orig, Mat upsampled)
{
	double psnr = PSNR(upsampled, orig);
	// 前两个参数为对比图片,第三个参数为输出数组
	Scalar q = quality::QualitySSIM::compute(upsampled, orig, noArray());
	double ssim = mean(Vec3d((q[0]), q[1], q[2]))[0];
	return Vec2d(psnr, ssim);
}
int main()
{
	// 图片路径
	string img_path = string("./image/image.png");
	// 算法名称 edsr, espcn, fsrcnn or lapsrn
	string algorithm = string("lapsrn");
	// 模型路径,根据算法确定
	string model = string("./model/LapSRN_x2.pb");
	// 放大系数
	int scale = 2;
	Mat img = imread(img_path);
	if (img.empty())
	{
		cerr << "Couldn't load image: " << img_path << "\n";
		return -2;
	}
	// Crop the image so the images will be aligned
	// 裁剪图像,使图像对齐
	int width = img.cols - (img.cols % scale);
	int height = img.rows - (img.rows % scale);
	Mat cropped = img(Rect(0, 0, width, height));
	// Downscale the image for benchmarking
	// 缩小图像,以实现基准质量测试
	Mat img_downscaled;
	resize(cropped, img_downscaled, Size(), 1.0 / scale, 1.0 / scale);
	// Make dnn super resolution instance
	// 超分模型初始化
	DnnSuperResImpl sr;
	vector<Mat> allImages;
	// 放大后的图片
	Mat img_new;
	// Read and set the dnn model
	// 读取和设定模型
	sr.readModel(model);
	sr.setModel(algorithm, scale);
	// 放大图像
	sr.upsample(img_downscaled, img_new);
	vector<double> psnrValues = vector<double>();
	vector<double> ssimValues = vector<double>();
	// DL MODEL
	// 获得模型质量评估值
	Vec2f quality = getQualityValues(cropped, img_new);
	// 模型质量评价PSNR
	psnrValues.push_back(quality[0]);
	// 模型质量评价SSIM
	ssimValues.push_back(quality[1]);
	// 数值越大图像质量越好
	cout << sr.getAlgorithm() << ":" << endl;
	cout << "PSNR: " << quality[0] << " SSIM: " << quality[1] << endl;
	cout << "----------------------" << endl;
	// BICUBIC
	// INTER_CUBIC - 三次样条插值放大图像
	Mat bicubic;
	resize(img_downscaled, bicubic, Size(), scale, scale, INTER_CUBIC);
	quality = getQualityValues(cropped, bicubic);
	psnrValues.push_back(quality[0]);
	ssimValues.push_back(quality[1]);
	cout << "Bicubic " << endl;
	cout << "PSNR: " << quality[0] << " SSIM: " << quality[1] << endl;
	cout << "----------------------" << endl;
	// NEAREST NEIGHBOR
	// INTER_NEAREST - 最近邻插值
	Mat nearest;
	resize(img_downscaled, nearest, Size(), scale, scale, INTER_NEAREST);
	quality = getQualityValues(cropped, nearest);
	psnrValues.push_back(quality[0]);
	ssimValues.push_back(quality[1]);
	cout << "Nearest neighbor" << endl;
	cout << "PSNR: " << quality[0] << " SSIM: " << quality[1] << endl;
	cout << "----------------------" << endl;
	// LANCZOS
	// Lanczos插值放大图像
	Mat lanczos;
	resize(img_downscaled, lanczos, Size(), scale, scale, INTER_LANCZOS4);
	quality = getQualityValues(cropped, lanczos);
	psnrValues.push_back(quality[0]);
	ssimValues.push_back(quality[1]);
	cout << "Lanczos" << endl;
	cout << "PSNR: " << quality[0] << " SSIM: " << quality[1] << endl;
	cout << "-----------------------------------------------" << endl;
	// 要显示的图片
	vector<Mat> imgs{ img_new, bicubic, nearest, lanczos };
	// 要显示的标题
	vector<String> titles{ sr.getAlgorithm(), "Bicubic", "Nearest neighbor", "Lanczos" };
	showBenchmark(imgs, "Quality benchmark", Size(bicubic.cols, bicubic.rows), titles, psnrValues, ssimValues);
	waitKey(0);
	return 0;
}

Python/dnn_superres_benchmark_quality.py

# -*- coding: utf-8 -*-
"""
Created on Fri Aug 20 22:08:22 2020
@author: luohenyueji
不同图像超分算法效果评估
"""
import cv2
from cv2 import dnn_superres
import numpy as np
# TODO 绘图
def showBenchmark(imgs, titles, psnrValues, ssimValues):
    # 绘图
    for i in range(0, len(imgs)):
        # 标题绘图
        cv2.putText(imgs[i], titles[i], (10, 30), cv2.FONT_HERSHEY_PLAIN, 1.5,
                    (255, 0, 255), 2, cv2.LINE_AA)
        # psnr值
        cv2.putText(imgs[i], "PSNR: " + str(psnrValues[i]), (10, 60), cv2.FONT_HERSHEY_PLAIN, 1.5,
                    (255, 0, 255), 2, cv2.LINE_AA)
        # ssim值
        cv2.putText(imgs[i], "SSIM: " + str(ssimValues[i]), (10, 90), cv2.FONT_HERSHEY_PLAIN, 1.5,
                    (255, 0, 255), 2, cv2.LINE_AA)
    # 图片拼接展示
    img = np.vstack([np.hstack([imgs[0], imgs[1]]), np.hstack([imgs[2], imgs[3]])])
    cv2.imshow("Quality benchmark", img)
    cv2.waitKey(0)
# TODO 图像质量评估
def getQualityValues(upsampled, orig):
    psnr = cv2.PSNR(upsampled, orig)
    q, _ = cv2.quality.QualitySSIM_compute(upsampled, orig)
    ssim = (q[0] + q[1] + q[2]) / 3
    return round(psnr, 3), round(ssim, 3)
def main():
    # 图片路径
    img_path = "./image/butterfly.png"
    # 算法名称 edsr, espcn, fsrcnn or lapsrn
    algorithm = "lapsrn"
    # 模型路径,根据算法确定
    model = "./model/LapSRN_x2.pb"
    # 放大系数
    scale = 2
    psnrValues = []
    ssimValues = []
    img = cv2.imread(img_path)
    if img is None:
        print("Couldn't load image: " + str(img_path))
        return
    # Crop the image so the images will be aligned
    # 裁剪图像,使图像对齐
    height = img.shape[0] - (img.shape[0] % scale)
    width = img.shape[1] - (img.shape[1] % scale)
    cropped = img[0:height, 0:width]
    # Downscale the image for benchmarking
    # 缩小图像,以实现基准质量测试
    img_downscaled = cv2.resize(cropped, None, fx=1.0 / scale, fy=1.0 / scale)
    # Make dnn super resolution instance
    # 超分模型初始化
    sr = dnn_superres.DnnSuperResImpl_create()
    # Read and set the dnn model
    # 读取和设定模型
    sr.readModel(model)
    sr.setModel(algorithm, scale)
    # 放大图像
    img_new = sr.upsample(img_downscaled)
    # DL MODEL
    # 获得模型质量评估值
    psnr, ssim = getQualityValues(cropped, img_new)
    psnrValues.append(psnr)
    ssimValues.append(ssim)
    print(sr.getAlgorithm() + "\n")
    print("PSNR: " + str(psnr) + " SSIM: " + str(ssim) + "\n")
    print("-" * 50)
    # INTER_CUBIC - 三次样条插值放大图像
    bicubic = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    psnr, ssim = getQualityValues(cropped, bicubic)
    psnrValues.append(psnr)
    ssimValues.append(ssim)
    print("Bicubic \n")
    print("PSNR: " + str(psnr) + " SSIM: " + str(ssim) + "\n")
    print("-" * 50)
    # INTER_NEAREST - 最近邻插值
    nearest = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    psnr, ssim = getQualityValues(cropped, nearest)
    psnrValues.append(psnr)
    ssimValues.append(ssim)
    print("Nearest neighbor \n")
    print("PSNR: " + str(psnr) + " SSIM: " + str(ssim) + "\n")
    print("-" * 50)
    # Lanczos插值放大图像
    lanczos = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_LANCZOS4)
    psnr, ssim = getQualityValues(cropped, lanczos)
    psnrValues.append(psnr)
    ssimValues.append(ssim)
    print("Lanczos \n")
    print("PSNR: " + str(psnr) + " SSIM: " + str(ssim) + "\n")
    print("-" * 50)
    imgs = [img_new, bicubic, nearest, lanczos]
    titles = [sr.getAlgorithm(), "Bicubic", "Nearest neighbor", "Lanczos"]
    showBenchmark(imgs, titles, psnrValues, ssimValues)
if __name__ == '__main__':
    main()

Running the benchmark with the lapsrn model produces the comparison figure. LapSRN does score best here, but in practice resize with other interpolation flags gives similar-looking results; the gap is not as large as one might expect.

3.2 Speed evaluation of different super-resolution algorithms

This section compares how long each of the four algorithms takes for super-resolution. Due to hardware limitations only 2x upscaling is measured; other factors can be tried on your own. The code is below:
C++/dnn_superres_benchmark_time.cpp

// 不同图像超分算法速度评估
#include <iostream>
#include <opencv2/dnn_superres.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace std;
using namespace cv;
using namespace dnn_superres;
static void showBenchmark(vector<Mat> images, string title, Size imageSize,
	const vector<String> imageTitles,
	const vector<double> perfValues)
{
	int fontFace = FONT_HERSHEY_COMPLEX_SMALL;
	int fontScale = 1;
	Scalar fontColor = Scalar(255, 255, 255);
	int len = static_cast<int>(images.size());
	int cols = 2, rows = 2;
	Mat fullImage = Mat::zeros(Size((cols * 10) + imageSize.width * cols, (rows * 10) + imageSize.height * rows),
		images[0].type());
	stringstream ss;
	int h_ = -1;
	for (int i = 0; i < len; i++)
	{
		int fontStart = 15;
		int w_ = i % cols;
		if (i % cols == 0)
			h_++;
		Rect ROI((w_ * (10 + imageSize.width)), (h_ * (10 + imageSize.height)), imageSize.width, imageSize.height);
		Mat tmp;
		resize(images[i], tmp, Size(ROI.width, ROI.height));
		ss << imageTitles[i];
		putText(tmp,
			ss.str(),
			Point(5, fontStart),
			fontFace,
			fontScale,
			fontColor,
			1,
			16);
		ss.str("");
		fontStart += 20;
		ss << perfValues[i];
		putText(tmp,
			ss.str(),
			Point(5, fontStart),
			fontFace,
			fontScale,
			fontColor,
			1,
			16);
		ss.str("");
		tmp.copyTo(fullImage(ROI));
	}
	namedWindow(title, 1);
	imshow(title, fullImage);
	imwrite("save.jpg", fullImage);
	waitKey();
}
int main()
{
	// 图片路径
	string img_path = string("./image/butterfly.png");
	// 算法名称 edsr, espcn, fsrcnn or lapsrn
	string algorithm = string("lapsrn");
	// 模型路径,根据算法确定
	string model = string("./model/LapSRN_x2.pb");
	// 放大系数
	int scale = 2;
	Mat img = imread(img_path);
	if (img.empty())
	{
		cerr << "Couldn't load image: " << img << "\n";
		return -2;
	}
	// Crop the image so the images will be aligned
	// 对齐图像
	int width = img.cols - (img.cols % scale);
	int height = img.rows - (img.rows % scale);
	Mat cropped = img(Rect(0, 0, width, height));
	// Downscale the image for benchmarking
	// 缩小图像,以实现基准测试
	Mat img_downscaled;
	resize(cropped, img_downscaled, Size(), 1.0 / scale, 1.0 / scale);
	// Make dnn super resolution instance
	DnnSuperResImpl sr;
	Mat img_new;
	// Read and set the dnn model
	// 读取模型
	sr.readModel(model);
	sr.setModel(algorithm, scale);
	double elapsed = 0.0;
	vector<double> perf;
	TickMeter tm;
	// DL MODEL
	// 计算时间
	tm.start();
	sr.upsample(img_downscaled, img_new);
	tm.stop();
	// 运行时间s
	elapsed = tm.getTimeSec() / tm.getCounter();
	perf.push_back(elapsed);
	cout << sr.getAlgorithm() << " : " << elapsed << endl;
	// BICUBIC
	Mat bicubic;
	tm.reset();
	tm.start();
	resize(img_downscaled, bicubic, Size(), scale, scale, INTER_CUBIC);
	tm.stop();
	elapsed = tm.getTimeSec() / tm.getCounter();
	perf.push_back(elapsed);
	cout << "Bicubic" << " : " << elapsed << endl;
	// NEAREST NEIGHBOR
	Mat nearest;
	tm.reset();
	tm.start();
	resize(img_downscaled, nearest, Size(), scale, scale, INTER_NEAREST);
	tm.stop();
	elapsed = tm.getTimeSec() / tm.getCounter();
	perf.push_back(elapsed);
	cout << "Nearest" << " : " << elapsed << endl;
	// LANCZOS
	Mat lanczos;
	tm.reset();
	tm.start();
	resize(img_downscaled, lanczos, Size(), scale, scale, INTER_LANCZOS4);
	tm.stop();
	elapsed = tm.getTimeSec() / tm.getCounter();
	perf.push_back(elapsed);
	cout << "Lanczos" << " : " << elapsed << endl;
	vector <Mat> imgs{ img_new, bicubic, nearest, lanczos };
	vector <String> titles{ sr.getAlgorithm(), "Bicubic", "Nearest neighbor", "Lanczos" };
	showBenchmark(imgs, "Time benchmark", Size(bicubic.cols, bicubic.rows), titles, perf);
	waitKey(0);
	return 0;
}

Python/dnn_superres_benchmark_time.py

# -*- coding: utf-8 -*-
"""
Created on Fri Aug 20 22:38:22 2020
@author: luohenyueji
不同图像超分算法速度评估
"""
import cv2
from cv2 import dnn_superres
import numpy as np
# TODO 绘图
def showBenchmark(imgs, titles, perf):
    # 绘图
    for i in range(0, len(imgs)):
        # 标题绘图
        cv2.putText(imgs[i], titles[i], (10, 30), cv2.FONT_HERSHEY_PLAIN, 1.5,
                    (255, 0, 255), 2, cv2.LINE_AA)
        # 耗时(s)
        cv2.putText(imgs[i], str(round(perf[i], 3)), (10, 60), cv2.FONT_HERSHEY_PLAIN, 1.5,
                    (255, 0, 255), 2, cv2.LINE_AA)
    # 图片拼接展示
    img = np.vstack([np.hstack([imgs[0], imgs[1]]), np.hstack([imgs[2], imgs[3]])])
    cv2.imshow("Quality benchmark", img)
    cv2.waitKey(0)
def main():
    # 图片路径
    img_path = "./image/image.png"
    # 算法名称 edsr, espcn, fsrcnn or lapsrn
    algorithm = "lapsrn"
    # 模型路径,根据算法确定
    model = "./model/LapSRN_x2.pb"
    # 放大系数
    scale = 2
    # 时间系数
    perf = []
    
    img = cv2.imread(img_path)
    
    if img is None:
        print("Couldn't load image: " + str(img_path))
        return
    
    # Crop the image so the images will be aligned
    # 裁剪图像,使图像对齐
    height = img.shape[0] - (img.shape[0] % scale)
    width = img.shape[1] - (img.shape[1] % scale)
    cropped = img[0:height, 0:width]
    
    # Downscale the image for benchmarking
    # 缩小图像,以实现基准质量测试
    img_downscaled = cv2.resize(cropped, None, fx=1.0 / scale, fy=1.0 / scale)
    
    # Make dnn super resolution instance
    # 超分模型初始化
    sr = dnn_superres.DnnSuperResImpl_create()
    
    # Read and set the dnn model
    # 读取和设定模型
    sr.readModel(model)
    sr.setModel(algorithm, scale)
    
    timer = cv2.TickMeter()
    timer.start()
    # 放大图像
    img_new = sr.upsample(img_downscaled)
    timer.stop()
    # 运行时间s
    elapsed = timer.getTimeSec() / timer.getCounter()
    perf.append(elapsed)
    print(sr.getAlgorithm() + " : " + str(elapsed))
    
    # INTER_CUBIC - 三次样条插值放大图像
    timer.reset()
    timer.start()
    bicubic = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    timer.stop()
    # 运行时间s
    elapsed = timer.getTimeSec() / timer.getCounter()
    perf.append(elapsed)
    print("Bicubic" + " : " + str(elapsed))
    
    # INTER_NEAREST - 最近邻插值
    timer.reset()
    timer.start()
    nearest = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    timer.stop()
    # 运行时间s
    elapsed = timer.getTimeSec() / timer.getCounter()
    perf.append(elapsed)
    print("Nearest" + " : " + str(elapsed))
    
    # Lanczos插值放大图像
    timer.reset()
    timer.start()
    lanczos = cv2.resize(img_downscaled, None, fx=scale, fy=scale, interpolation=cv2.INTER_LANCZOS4)
    timer.stop()
    # 运行时间s
    elapsed = timer.getTimeSec() / timer.getCounter()
    perf.append(elapsed)
    print("Lanczos" + " : " + str(elapsed))
    
    imgs = [img_new, bicubic, nearest, lanczos]
    titles = [sr.getAlgorithm(), "Bicubic", "Nearest neighbor", "Lanczos"]
    showBenchmark(imgs, titles, perf)
if __name__ == '__main__':
    main()

Running the benchmark with the lapsrn model gives the timings shown (in seconds). LapSRN produces better results than the plain resize interpolations, but at a much higher computational cost; as the tables below show, it sits between the fast ESPCN/FSRCNN models and the slow but accurate EDSR.

3.3 Official super-resolution benchmarks

The OpenCV documentation reports benchmark results on standard datasets; see Super-resolution benchmarking for details.
The dataset-level results below were measured on an Intel i7-9700K CPU running Ubuntu 18.04.02.
2x super-resolution

| Method | Avg time (s, CPU) | Avg PSNR | Avg SSIM |
| --- | --- | --- | --- |
| ESPCN | 0.008795 | 32.7059 | 0.9276 |
| EDSR | 5.923450 | 34.1300 | 0.9447 |
| FSRCNN | 0.021741 | 32.8886 | 0.9301 |
| LapSRN | 0.114812 | 32.2681 | 0.9248 |
| Bicubic | 0.000208 | 32.1638 | 0.9305 |
| Nearest neighbor | 0.000114 | 29.1665 | 0.9049 |
| Lanczos | 0.001094 | 32.4687 | 0.9327 |

3x super-resolution

| Method | Avg time (s, CPU) | Avg PSNR | Avg SSIM |
| --- | --- | --- | --- |
| ESPCN | 0.005495 | 28.4229 | 0.8474 |
| EDSR | 2.455510 | 29.9828 | 0.8801 |
| FSRCNN | 0.008807 | 28.3068 | 0.8429 |
| LapSRN | 0.282575 | 26.7330 | 0.8862 |
| Bicubic | 0.000311 | 26.0635 | 0.8754 |
| Nearest neighbor | 0.000148 | 23.5628 | 0.8174 |
| Lanczos | 0.001012 | 25.9115 | 0.8706 |

4x super-resolution

| Method | Avg time (s, CPU) | Avg PSNR | Avg SSIM |
| --- | --- | --- | --- |
| ESPCN | 0.004311 | 26.6870 | 0.7891 |
| EDSR | 1.607570 | 28.1552 | 0.8317 |
| FSRCNN | 0.005302 | 26.6088 | 0.7863 |
| LapSRN | 0.121229 | 26.7383 | 0.7896 |
| Bicubic | 0.000311 | 26.0635 | 0.8754 |
| Nearest neighbor | 0.000148 | 23.5628 | 0.8174 |
| Lanczos | 0.001012 | 25.9115 | 0.8706 |

In addition, the documentation gives per-image results for different algorithms and scaling factors, as shown below:
4x upscaling of a 768x512 image

| Method | Time (s, CPU) | PSNR | SSIM |
| --- | --- | --- | --- |
| ESPCN | 0.01159 | 26.5471 | 0.88116 |
| EDSR | 3.26758 | 29.2404 | 0.92112 |
| FSRCNN | 0.01298 | 26.5646 | 0.88064 |
| LapSRN | 0.28257 | 26.7330 | 0.88622 |
| Bicubic | 0.00031 | 26.0635 | 0.87537 |
| Nearest neighbor | 0.00014 | 23.5628 | 0.81741 |
| Lanczos | 0.00101 | 25.9115 | 0.87057 |

2x upscaling of a 256x256 image

Set5: butterfly.png, size: 256x256 (comparison images omitted)

| Method | PSNR | SSIM | Speed (s, CPU) |
| --- | --- | --- | --- |
| Bicubic interpolation | 26.6645 | 0.9048 | 0.000201 |
| Nearest neighbor interpolation | 23.6854 | 0.8698 | 0.000075 |
| Lanczos interpolation | 26.9476 | 0.9075 | 0.001039 |
| ESPCN | 29.0341 | 0.9354 | 0.004157 |
| FSRCNN | 29.0077 | 0.9345 | 0.006325 |
| LapSRN | 27.8212 | 0.9230 | 0.037937 |
| EDSR | 30.0347 | 0.9453 | 2.077280 |

3x upscaling of a 1024x644 image

Urban100: img_001.png, size: 1024x644 (comparison images omitted)

| Method | PSNR | SSIM | Speed (s, CPU) |
| --- | --- | --- | --- |
| Bicubic interpolation | 27.0474 | 0.8484 | 0.000391 |
| Nearest neighbor interpolation | 26.0842 | 0.8353 | 0.000236 |
| Lanczos interpolation | 27.0704 | 0.8483 | 0.002234 |
| ESPCN | 28.0118 | 0.8588 | 0.030748 |
| FSRCNN | 28.0184 | 0.8597 | 0.094173 |
| LapSRN | no x3 model | - | - |
| EDSR | 30.5671 | 0.9019 | 9.517580 |

4x upscaling of a 250x361 image

Set14: comic.png, size: 250x361 (comparison images omitted)

| Method | PSNR | SSIM | Speed (s, CPU) |
| --- | --- | --- | --- |
| Bicubic interpolation | 19.6766 | 0.6413 | 0.000262 |
| Nearest neighbor interpolation | 18.5106 | 0.5879 | 0.000085 |
| Lanczos interpolation | 19.4948 | 0.6317 | 0.001098 |
| ESPCN | 20.0417 | 0.6302 | 0.001894 |
| FSRCNN | 20.0885 | 0.6384 | 0.002103 |
| LapSRN | 20.0676 | 0.6339 | 0.061640 |
| EDSR | 20.5233 | 0.6901 | 0.665876 |

4x upscaling of a 1356x2040 image

Div2K: 0006.png, size: 1356x2040 (comparison images omitted)

| Method | PSNR | SSIM | Speed (s, CPU) |
| --- | --- | --- | --- |
| Bicubic interpolation | 26.3139 | 0.8033 | 0.001107 |
| Nearest neighbor interpolation | 23.8291 | 0.7340 | 0.000611 |
| Lanczos interpolation | 26.1565 | 0.7962 | 0.004782 |
| LapSRN | 26.7046 | 0.7987 | 2.274290 |

3.4 Summary: choosing a super-resolution algorithm

Of the four deep-learning super-resolution models offered by OpenCV's dnn_superres module, EDSR is the one most used in practice; the other three are visually not much different from OpenCV's built-in resize. EDSR's inference is very slow, however, so ESPCN can stand in for 2x and 4x upscaling and LapSRN for 4x and 8x. Overall, EDSR is still the better choice when quality matters, since super-resolution is compute-heavy anyway and is best run on a powerful GPU.
Also, OpenCV's dnn_superres module is not suitable for mobile or embedded devices because of its performance requirements; on mobile, consider ncnn's super-resolution implementation instead. See:

srmd ncnn vulkan 通用图片超分放大工具

ncnn uses the SRMD super-resolution model; the official SRMD code and the ncnn implementation are available at:

SRMD
srmd-ncnn-vulkan

In fact, SRMD delivers better super-resolution quality than the EDSR model shipped with OpenCV, but SRMD needs a GPU: ncnn uses Vulkan for accelerated inference on mobile, and on a PC with a discrete GPU the SRMD model can also be run through ncnn.

4 References

4.1 Related papers

Enhanced Deep Residual Networks for Single Image Super-Resolution
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
Accelerating the Super-Resolution Convolutional Neural Network
Deep laplacian pyramid networks for fast and accurate super-resolution

4.2 Reference code

Super Resolution using Convolutional Neural Networks
EDSR_Tensorflow
TF-ESPCN
FSRCNN_Tensorflow
TF-LAPSRN
OpenCV-Practical-Exercise
SRMD
srmd-ncnn-vulkan

4.3 Reference documentation

超分辨率基准测试
Super-resolution benchmarking
【超分辨率】—图像超分辨率(Super-Resolution)技术研究
【超分辨率】—基于深度学习的图像超分辨率最新进展与趋势
PSNR和SSIM
OpenCV_contrib库在windows下编译使用指南
srmd ncnn vulkan 通用图片超分放大工具
