Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
Abstract:
- Residual learning and batch normalization
- handles blind Gaussian denoising with unknown noise level
I. INTRODUCTION
Goal: recover a clean image from a noisy observation
y = x + v
where v is additive white Gaussian noise (AWGN)
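The degradation model y = x + v can be sketched in a few lines of numpy. The noise level sigma = 25/255 below is an illustrative choice for images scaled to [0, 1], not a value fixed by these notes:

```python
import numpy as np

def add_awgn(x, sigma, seed=None):
    """Degrade a clean image x with additive white Gaussian noise: y = x + v."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, sigma, size=x.shape)  # v ~ N(0, sigma^2), i.i.d. per pixel
    return x + v

x = np.full((8, 8), 0.5)              # toy "clean" image in [0, 1]
y = add_awgn(x, sigma=25 / 255.0, seed=0)
```

A denoiser is then trained on (y, x) pairs generated this way.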
Drawbacks of prior models:
- they involve a complex optimization problem at the testing stage, making the denoising process time-consuming
- the models are generally non-convex and involve several manually chosen parameters, providing some leeway to boost denoising performance
Contributions of this paper:
Instead of recovering x directly, DnCNN predicts the noise v; in effect, the hidden layers implicitly remove the latent clean image.
Rather than directly outputting the denoised image x, the proposed DnCNN is designed to predict the residual image v; DnCNN implicitly removes the latent clean image with the operations in the hidden layers.
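The residual-learning objective can be sketched in numpy: the network output R(y) is trained to match the true residual y − x, and the denoised image at test time is y − R(y). The helper name `residual_loss` is hypothetical; the 1/2 factor mirrors the averaged MSE form commonly used for this loss:

```python
import numpy as np

def residual_loss(pred_residual, y, x):
    """MSE between the predicted residual R(y) and the ground-truth residual y - x."""
    v = y - x                                  # the noise is the training target
    return 0.5 * np.mean((pred_residual - v) ** 2)

# test-time denoising: x_hat = y - R(y)
```

If the network predicts the residual perfectly, the loss is exactly zero and y − R(y) recovers x.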
Combining residual learning with batch normalization makes training both fast and effective.
residual learning and batch normalization can greatly benefit the CNN learning as they can not only speed up the training but also boost the denoising performance.
Single-image super-resolution (SISR) and JPEG deblocking can be treated as special cases of the denoising problem, so one generalized model can handle all of these tasks.
SISR and JPEG image deblocking can be treated as two special cases of a "general" image denoising problem.
II. RELATED WORK
A. Deep Neural Networks for Image Denoising
B. Residual Learning and Batch Normalization
1) Residual Learning: DnCNN employs a single residual unit to predict the residual image.
2) Batch Normalization: Batch normalization is proposed to alleviate the internal covariate shift by
incorporating a normalization step and a scale and shift step before the nonlinearity in each layer.
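The two steps named above, normalization followed by a learnable scale-and-shift, can be sketched per feature over a batch in numpy (a minimal sketch of the batch-norm computation, not the paper's implementation; running statistics for inference are omitted):

```python
import numpy as np

def batch_norm(z, gamma, beta, eps=1e-5):
    """Normalize z over the batch axis, then scale by gamma and shift by beta.
    In DnCNN this is applied between the convolution and the ReLU nonlinearity."""
    mu = z.mean(axis=0)                      # per-feature batch mean
    var = z.var(axis=0)                      # per-feature batch variance
    z_hat = (z - mu) / np.sqrt(var + eps)    # normalization step
    return gamma * z_hat + beta              # scale-and-shift step
```

With gamma = 1 and beta = 0 the output has zero mean and (approximately) unit variance per feature; gamma and beta are learned alongside the convolution weights.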
III. THE PROPOSED DENOISING CNN MODEL
Architecture design: modified VGG network
Model learning: residual learning formulation
Receptive field:
rfsize = f(out, stride, ksize) = (out − 1) × stride + ksize,
where out is the receptive-field size of the layer above.
http://blog.csdn.net/bojackhosreman/article/details/70162018
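The recursion rfsize = (out − 1) × stride + ksize can be applied layer by layer, starting from a single output pixel (out = 1) at the top. For a stack of 17 conv layers with 3×3 kernels and stride 1, the depth used by DnCNN, this yields a 35×35 receptive field:

```python
def receptive_field(layers):
    """Compute the receptive field of a conv stack.
    `layers` is a list of (stride, ksize) pairs, ordered input -> output."""
    rf = 1  # one output pixel
    for stride, ksize in reversed(layers):
        rf = (rf - 1) * stride + ksize  # rfsize = (out - 1) * stride + ksize
    return rf

dncnn = [(1, 3)] * 17           # 17 layers of 3x3 conv, stride 1
print(receptive_field(dncnn))   # 35
```

Each stride-1 3×3 layer grows the receptive field by 2, so depth d gives 2d + 1.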
Two gradient-based optimization algorithms are adopted:
- SGD
- Adam algorithm
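A single Adam update can be sketched in numpy as below (a minimal sketch with the commonly used default hyperparameters, which are assumptions here, not values stated in these notes):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), with bias correction for step count t (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)                    # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)                    # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# usage: minimize f(theta) = theta^2, whose gradient is 2 * theta
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
```

Unlike plain SGD, the per-parameter step size is adapted by the second-moment estimate, which is why Adam is often less sensitive to the learning-rate choice.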