DnCNN (notes on residual-network image denoising)

一, Generating the training data

1, Relevant excerpts from the original paper

We use the noisy images from a wide range of noise levels (e.g., σ ∈ [0,55]) to train a single DnCNN model.

blind Gaussian denoising, SISR, and JPEG deblocking

The noisy image is generated by adding Gaussian noise with a certain noise level from the range of [0,55]. The SISR input is generated by first bicubic downsampling and then bicubic upsampling the high-resolution image with downscaling factors 2, 3 and 4. The JPEG deblocking input is generated by compressing the image with a quality factor ranging from 5 to 99 using the MATLAB JPEG encoder. All these images are treated as the inputs to a single DnCNN model. In total, we generate 128×8,000 image patch pairs (of size 50 × 50) for training. Rotation/flip based operations on the patch pairs are used during mini-batch learning.
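The blind-denoising input described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's code: the function name and the flat test patch are mine; only the uniform sampling of σ from [0,55] follows the quoted setup.

```python
import numpy as np

def add_gaussian_noise(clean, rng, sigma_range=(0.0, 55.0)):
    """Add Gaussian noise with a noise level drawn uniformly from sigma_range.

    `clean` is a float image in [0, 255]; sampling sigma per image is what
    lets a single model handle a wide range of noise levels (blind denoising).
    """
    sigma = rng.uniform(*sigma_range)
    noise = rng.normal(0.0, sigma, size=clean.shape)
    return clean + noise, sigma

rng = np.random.default_rng(0)
clean = np.full((50, 50), 128.0)          # a flat 50x50 test patch
noisy, sigma = add_gaussian_noise(clean, rng)
```

The clean/noisy pair, together with the sampled σ, is exactly one training pair for the single-model setting.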

Image size and count: we follow [16] to use 400 images of size 180 × 180 for training.

The noise levels are also set in the range of [0,55], and 128×3,000 patches of size 50×50 are cropped to train the model.

——————————————————————————————————————————————————————

For increasing the training set, we segment these images to overlapping patches of size 50×50 with stride of 10. 
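The overlapping-patch cropping above (50×50 windows, stride 10) can be sketched as follows; the helper name is mine, but the patch size, stride, and 180×180 image size come from the quoted text.

```python
import numpy as np

def extract_patches(img, patch=50, stride=10):
    """Crop all overlapping patch x patch windows with the given stride."""
    h, w = img.shape[:2]
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append(img[top:top + patch, left:left + patch])
    return np.stack(patches)

img = np.zeros((180, 180))               # one 180x180 training image
patches = extract_patches(img)
# (180 - 50) // 10 + 1 = 14 window positions per axis -> 14 * 14 = 196 patches
```

One 180×180 image thus yields 196 patches, which is how a set of 400 images expands into the hundreds of thousands of patch pairs mentioned above.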


二, Building the model

1, Model background

Network depth:

we set the size of convolutional filters to be 3 × 3 but remove all pooling layers

the receptive field of DnCNN with depth of d should be (2d+1)×(2d+1). 

high noise level usually requires larger effective patch size to capture more context information for restoration

Thus, for Gaussian denoising with a certain noise level, we set the receptive field size of DnCNN to 35 × 35 with the corresponding depth of 17. For other general image denoising tasks, we adopt a larger receptive field and set the depth to be 20.
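The receptive-field formula quoted above is easy to verify: a stack of stride-1 3×3 convolutions grows the field by 2 pixels per layer. A tiny helper (name is mine) reproduces the paper's depth choices:

```python
def receptive_field(depth, kernel=3):
    """Receptive field of `depth` stacked k x k conv layers with stride 1.

    Each layer adds (kernel - 1) pixels, so for k = 3 the field is
    2*depth + 1, matching the paper's (2d+1) x (2d+1) formula.
    """
    return depth * (kernel - 1) + 1

# depth 17 -> 35x35 receptive field, chosen for Gaussian denoising
# depth 20 -> 41x41, the larger field used for the general model
```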

Output cost function:
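This heading was left blank in the notes. For reference, the DnCNN paper trains with residual learning: the network predicts the residual (noise) image R(y;Θ) from the noisy input y, and the loss is the averaged mean squared error between the predicted residual and the true residual y − x (x being the clean image):

```latex
\ell(\Theta) = \frac{1}{2N} \sum_{i=1}^{N}
    \left\| \mathcal{R}(y_i;\Theta) - (y_i - x_i) \right\|_F^2
```

The denoised output is then recovered as x̂ = y − R(y;Θ).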

Detailed structure:

Deep Architecture:

Given the DnCNN with depth D, there are three types of layers, shown in Fig. 1 with three different colors. (i) Conv+ReLU: for the first layer, 64 filters of size 3×3×c are used to generate 64 feature maps, and rectified linear units (ReLU, max(0,·)) are then utilized for nonlinearity. Here c represents the number of image channels, i.e., c = 1 for gray image and c = 3 for color image. (ii) Conv+BN+ReLU: for layers 2 ∼ (D −1), 64 filters of size 3 × 3 × 64 are used, and batch normalization [21] is added between convolution and ReLU. (iii) Conv: for the last layer, c filters of size 3×3×64 are used to reconstruct the output. 
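As a framework-agnostic sketch of the three layer types just described (the helper and tuple layout are mine, not the paper's), the architecture can be enumerated and sanity-checked against the chosen depth:

```python
def dncnn_layers(depth=17, channels=1, features=64):
    """Enumerate DnCNN layers as (type, in_channels, out_channels) tuples.

    Layer 1:        Conv+ReLU,    c  -> 64, 3x3 filters
    Layers 2..D-1:  Conv+BN+ReLU, 64 -> 64, 3x3 filters
    Layer D:        Conv,         64 -> c,  3x3 filters (reconstruction)
    """
    layers = [("Conv+ReLU", channels, features)]
    layers += [("Conv+BN+ReLU", features, features)] * (depth - 2)
    layers += [("Conv", features, channels)]
    return layers

layers = dncnn_layers(depth=17, channels=1)   # grayscale denoising model
```

Swapping `channels=3` gives the color variant; the structure is otherwise unchanged.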

Padding:

Different from the above methods, we directly pad zeros before convolution to make sure that each feature map of the middle layers has the same size as the input image.

———————————————————————————————————————————————————————

At the time there was no ready-made batch normalization package, and the most recent denoising papers no longer use batch normalization. We therefore switched models and adopted a more recent (2018) image-restoration architecture, following the 2018 paper Adaptive Residual Networks for High-Quality Image Restoration.


三, Training log

A learning rate of 0.01 immediately killed the network (units stuck outputting zero); after switching to leaky ReLU the network no longer died during training.

With a learning rate of 0.0001, the loss reached 0.0008 after 1000 epochs, which was unsatisfactory. We attributed this to the learning-rate decay schedule reducing the rate too quickly, so we switched to a fixed learning rate.
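A minimal numpy illustration of why leaky ReLU helped: a plain ReLU outputs exactly zero for all negative pre-activations, so a unit pushed negative by a large learning rate receives no gradient and stays dead, while leaky ReLU keeps a small slope so the unit can recover. The slope value `alpha` below is an assumption; the notes do not record one.

```python
import numpy as np

def relu(x):
    """Standard ReLU: zero output (and zero gradient) for x < 0."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: a small slope alpha for x < 0 keeps gradient flowing,
    so units whose pre-activations go negative can still recover."""
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0])
# relu(x) zeroes every negative entry; leaky_relu(x) keeps a small signal
```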


