“Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, Lei Zhang, Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, IEEE Trans. on Image Processing, 2017”
Network Architecture
DnCNNs: feed-forward denoising convolutional neural networks
Code
def dncnn(input, is_training=True, output_channels=1):
    ## (i) Conv+ReLU for the first layer:
    # 64 filters of size 3*3*c are used to generate 64 feature maps;
    # zero padding avoids boundary artifacts; rectified linear
    # units (ReLU, max(0, ·)) introduce nonlinearity.
    with tf.variable_scope('block1'):
        output = tf.layers.conv2d(input, 64, 3, padding='same', activation=tf.nn.relu)
    ## (ii) Conv+BN+ReLU for layers 2 ~ (D - 1):
    # 64 filters of size 3*3*64; batch normalization is added
    # between convolution and ReLU.
    for layer in range(2, 16 + 1):
        with tf.variable_scope('block%d' % layer):
            output = tf.layers.conv2d(output, 64, 3, padding='same', name='conv%d' % layer, use_bias=False)
            output = tf.nn.relu(tf.layers.batch_normalization(output, training=is_training))
    ## (iii) Conv for the last layer:
    # c filters of size 3*3*64 are used to reconstruct the output.
    with tf.variable_scope('block17'):
        output = tf.layers.conv2d(output, output_channels, 3, padding='same')
    # The network predicts the residual (noise) image, so the
    # predicted clean image is (input - output).
    return input - output
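Residual learning means the network is trained to predict the noise v = y − x rather than the clean image x itself; the training objective averages the squared difference between the predicted residual R(y) and the true residual y − x. A minimal NumPy sketch of that objective (function name and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def residual_loss(predicted_residual, noisy, clean):
    """Mean squared error between the predicted residual R(y)
    and the true residual y - x, as in residual learning."""
    true_residual = noisy - clean
    return 0.5 * np.mean((predicted_residual - true_residual) ** 2)

# Toy check: if the network predicted the noise exactly, the loss is ~0
# and subtracting the residual from the noisy input recovers the clean image.
clean = np.ones((4, 4))
noise = 0.1 * np.random.default_rng(0).standard_normal((4, 4))
noisy = clean + noise
denoised = noisy - noise
```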
Network structure notes:
- First layer: Conv(3 × 3 × c × 64) + ReLU (c is the number of image channels)
- Layers 2 ~ (D-1): Conv(3 × 3 × 64 × 64) + BN (batch normalization) + ReLU
- Last layer: Conv(3 × 3 × 64 × c)
- Every layer uses zero padding so that each layer's input and output keep the same spatial size, preventing boundary artifacts.
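Since zero padding preserves the spatial size everywhere, the layers differ only in the filter shapes listed above. A quick sanity check of the convolution weight counts (biases and BN parameters excluded; helper name is illustrative):

```python
def dncnn_weight_count(c=1, depth=17, k=3, feat=64):
    """Count convolution weights in a depth-D DnCNN:
    first layer 3*3*c*64, middle layers 3*3*64*64, last 3*3*64*c."""
    first = k * k * c * feat                      # Conv+ReLU
    middle = (depth - 2) * k * k * feat * feat    # Conv+BN+ReLU blocks
    last = k * k * feat * c                       # final Conv
    return first + middle + last

print(dncnn_weight_count())  # grayscale input, depth 17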
Highlights
- Residual learning: mitigates the vanishing/exploding gradients that come with increasing network depth ☞ see the ResNet paper for details
- Batch normalization: addresses the internal covariate shift problem ☞ see an intuitive illustration of BN
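At training time, batch normalization standardizes each channel over the mini-batch and spatial axes before applying a learned scale and shift. A minimal NumPy sketch of the training-mode transform (names and the NHWC layout are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode BN on an NHWC tensor: normalize each channel
    over the batch and spatial axes, then scale and shift."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.default_rng(0).standard_normal((8, 5, 5, 3))
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
# With gamma=1, beta=0, each output channel has ~zero mean, ~unit variance.
```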
Experimental setting
- 400 images of size 180×180 for training
- three noise levels, i.e., σ = 15, 25 and 50
- patch size = 40 × 40; 128 × 1,600 patches are cropped to train the model
- network depth = 17
- mini-batch size = 128
- training epochs = 50
- learning rate was decayed exponentially from 1e-1 to 1e-4 over the 50 epochs
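Exponential decay from 1e-1 to 1e-4 over 50 epochs means multiplying the learning rate by a fixed factor each epoch. One possible schedule (the exact interpolation is an assumption; the setting above only fixes the two endpoints):

```python
def exp_decay_lr(epoch, lr_start=1e-1, lr_end=1e-4, epochs=50):
    """Exponential interpolation: epoch 0 uses lr_start and the
    final epoch (epochs - 1) uses lr_end."""
    ratio = lr_end / lr_start
    return lr_start * ratio ** (epoch / (epochs - 1))

for e in (0, 25, 49):
    print(e, exp_decay_lr(e))
```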