DnCNN

Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, Lei Zhang, "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising," IEEE Transactions on Image Processing, 2017

Network Architecture

DnCNN: a feed-forward denoising convolutional neural network (the configuration used here is 17 layers deep).

Code

import tensorflow as tf  # the code below assumes the TensorFlow 1.x API (tf.layers, tf.variable_scope)

def dncnn(input, is_training=True, output_channels=1):
    ## (i) Conv+ReLU for the first layer:
    # 64 filters of size 3x3xc are used to generate 64 feature maps;
    # zero padding is used to avoid boundary artifacts;
    # rectified linear units (ReLU, max(0, x)) introduce nonlinearity.
    with tf.variable_scope('block1'):
        output = tf.layers.conv2d(input, 64, 3, padding='same', activation=tf.nn.relu)

    ## (ii) Conv+BN+ReLU for layers 2 ~ (D - 1):
    # 64 filters of size 3x3x64;
    # batch normalization is added between convolution and ReLU.
    for layer_id in range(2, 16 + 1):
        with tf.variable_scope('block%d' % layer_id):
            output = tf.layers.conv2d(output, 64, 3, padding='same', name='conv%d' % layer_id, use_bias=False)
            output = tf.nn.relu(tf.layers.batch_normalization(output, training=is_training))

    ## (iii) Conv for the last layer:
    # c filters of size 3x3x64 are used to reconstruct the output.
    with tf.variable_scope('block17'):
        output = tf.layers.conv2d(output, output_channels, 3, padding='same')

    ## output is the residual (noise) image; (input - output) is the predicted clean image
    return input - output
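
A minimal usage sketch, assuming the TensorFlow 1.x placeholder/Session API; the placeholder name and shapes are illustrative and not part of the original post:

import tensorflow as tf

noisy = tf.placeholder(tf.float32, shape=[None, 40, 40, 1], name='noisy_patches')
denoised = dncnn(noisy, is_training=False, output_channels=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # feed a batch of noisy 40x40 grayscale patches, e.g. an array of shape (128, 40, 40, 1):
    # clean_estimate = sess.run(denoised, feed_dict={noisy: batch})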

Network structure notes:

  1. First layer: Conv(3 × 3 × c × 64) + ReLU (c is the number of image channels)
  2. Layers 2 ~ (D-1): Conv(3 × 3 × 64 × 64) + BN (batch normalization) + ReLU
  3. Last layer: Conv(3 × 3 × 64 × c)
  4. Every layer uses zero padding so that the feature maps keep the same spatial size from input to output, which prevents boundary artifacts (both the filter shapes and the size preservation can be checked with the sketch after this list).
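
A small sketch (my own addition, again assuming the TensorFlow 1.x API) that prints the kernel shapes created by dncnn() and confirms that the output keeps the input's spatial size:

import tensorflow as tf

tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 40, 40, 1])   # c = 1 (grayscale)
y = dncnn(x, is_training=False, output_channels=1)

print(y.shape)                                       # (?, 40, 40, 1): same spatial size as the input
for v in tf.trainable_variables():
    if 'kernel' in v.name:
        print(v.name, v.shape)
# e.g. block1/conv2d/kernel   (3, 3, 1, 64)   -> 3 x 3 x c  x 64
#      blockK/convK/kernel    (3, 3, 64, 64)  -> 3 x 3 x 64 x 64, K = 2..16
#      block17/conv2d/kernel  (3, 3, 64, 1)   -> 3 x 3 x 64 x c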

Highlights

  1. Residual learning: the network predicts the noise residual rather than the clean image, which also mitigates the vanishing/exploding-gradient problems that come with deeper networks (see the loss sketch after this list) ☞ a detailed walkthrough of the ResNet paper
  2. Batch normalization: addresses the internal covariate shift problem ☞ an intuitive illustration of BN
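
A minimal training-loss sketch for the residual-learning formulation, assuming TensorFlow 1.x; the placeholder names, the Adam optimizer, and the learning rate below are illustrative stand-ins, not the paper's exact settings:

import tensorflow as tf

clean = tf.placeholder(tf.float32, [None, 40, 40, 1])
noisy = tf.placeholder(tf.float32, [None, 40, 40, 1])

denoised = dncnn(noisy, is_training=True, output_channels=1)  # dncnn returns noisy - predicted_residual
residual_pred = noisy - denoised                              # what the CNN itself outputs
residual_true = noisy - clean                                 # the injected Gaussian noise

# MSE on the residual; identical to MSE between the denoised and the clean image
loss = tf.reduce_mean(tf.square(residual_pred - residual_true))

# update the BN moving statistics together with the optimizer step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)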

Experimental setting

  1. 400 images of size 180 × 180 for training
  2. three noise levels: σ = 15, 25, and 50
  3. patch size = 40 × 40; 128 × 1600 patches are cropped to train the model
  4. network depth = 17
  5. mini-batch size = 128
  6. training epochs = 50
  7. the learning rate is decayed exponentially from 1e-1 to 1e-4 over the 50 epochs (a data-preparation and schedule sketch follows this list)
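
A sketch of the data preparation and learning-rate schedule implied by this list; the NumPy helpers and their names are my own assumptions, not the authors' released code:

import numpy as np

def make_noisy_patches(images, patch=40, n_patches=128 * 1600, sigma=25, rng=np.random):
    # crop random 40x40 patches from the 180x180 training images and add
    # Gaussian noise of level sigma (pixel values assumed in the 0-255 range)
    clean = []
    for _ in range(n_patches):
        img = images[rng.randint(len(images))]
        i = rng.randint(img.shape[0] - patch + 1)
        j = rng.randint(img.shape[1] - patch + 1)
        clean.append(img[i:i + patch, j:j + patch])
    clean = np.stack(clean).astype(np.float32)[..., None]
    noisy = clean + rng.normal(0.0, sigma, clean.shape).astype(np.float32)
    return clean, noisy

def learning_rate(epoch, total_epochs=50, lr_start=1e-1, lr_end=1e-4):
    # exponential decay from 1e-1 (epoch 0) to 1e-4 (last epoch)
    return lr_start * (lr_end / lr_start) ** (epoch / float(total_epochs - 1))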

 
