SR_BriefReview

Paper Title: Deep Learning for Single Image Super-Resolution: A Brief Review

I. Sections

Section 2: related background concepts
Section 3: effective neural network architectures for SISR
Section 4: different applications of SR

II. Backgrounds

1. [Figure]
2. Interpolation-based SISR methods, such as bicubic interpolation [10] and Lanczos resampling [11] (see the bicubic sketch after this list).
3. Learning-based SISR methods:
3.1. The Markov random field (MRF) approach [16] was first adopted by Freeman et al. to exploit abundant real-world images to synthesize visually pleasing image textures.
3.2. Neighbor embedding methods [17], proposed by Chang et al., took advantage of the similar local geometry between LR and HR patches to restore HR image patches.
3.3. Inspired by sparse signal recovery theory [18], researchers applied sparse coding methods [19], [20], [21], [22], [23], [24] to the SISR problem.
3.4. Lately, random forests [25] have also been used to improve reconstruction performance.
3.5. Meanwhile, many works combined the merits of reconstruction-based methods with learning-based approaches to further reduce the artifacts introduced by external training examples [26], [27], [28], [29].
4. Deep learning, a rough lineage:
artificial neural network (ANN) [32] -> convolutional neural network (CNN) [35] -> recurrent neural network (RNN) [36] -> pretraining the DNN with restricted Boltzmann machines (RBM) -> deep Boltzmann machine (DBM) [42], variational autoencoder (VAE) [43], [44], and generative adversarial nets (GAN) [45]
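
As a quick illustration of the interpolation-based baseline from item 2, here is a minimal bicubic upsampling sketch in PyTorch. The tensor shape and the 4x factor are illustrative assumptions, not values from the paper; reference [10] describes the cubic convolution kernel itself.

```python
import torch
import torch.nn.functional as F

# Bicubic interpolation: the classic non-learning SISR baseline.
lr = torch.randn(1, 3, 64, 64)  # an LR RGB image (illustrative shape)
hr = F.interpolate(lr, scale_factor=4, mode='bicubic', align_corners=False)
print(hr.shape)  # torch.Size([1, 3, 256, 256])
```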

III. DEEP ARCHITECTURES FOR SISR

This section discusses the efficient architectures proposed for SISR in recent years.

A. Benchmark of Deep Architectures for SISR

  1. SRCNN: a three-layer CNN, where the filter sizes of the layers are 64 × 1 × 9 × 9, 32 × 64 × 5 × 5, and 1 × 32 × 5 × 5.
    Loss function: mean squared error (MSE). See the sketch after this list.
  2. Problems:
    ① The first question: can we design CNN architectures that directly take the LR image as input (rather than a bicubic-upsampled version) to address these problems?
    ② How can we design such models of greater complexity?
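
Below is a minimal, hypothetical PyTorch sketch of the three-layer SRCNN described in item 1, trained with MSE loss. Padding is added here to keep spatial sizes (the original paper uses valid, unpadded convolutions), and the batch and patch sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN sketch with the filter sizes noted above:
    64x1x9x9, 32x64x5x5, 1x32x5x5. The input is assumed to be the
    bicubic-upsampled LR image (single luminance channel)."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Conv2d(1, 64, kernel_size=9, padding=4)      # patch extraction
        self.map = nn.Conv2d(64, 32, kernel_size=5, padding=2)         # non-linear mapping
        self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)  # reconstruction
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.extract(x))
        x = self.relu(self.map(x))
        return self.reconstruct(x)

model = SRCNN()
criterion = nn.MSELoss()                  # the MSE loss noted above
upscaled_lr = torch.randn(8, 1, 33, 33)   # bicubic-upsampled LR patches (illustrative)
hr_target = torch.randn(8, 1, 33, 33)     # HR ground-truth patches
loss = criterion(model(upscaled_lr), hr_target)
print(loss.item())
```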

B. State-of-the-Art Deep SISR Networks

Learning Effective Upsampling with CNN:

  1. Upsampling operation: deconvolution [50], or transposed convolution [51].
    Given the upsampling factor, the deconvolution layer is composed of an arbitrary interpolation operator (usually nearest-neighbor interpolation, for simplicity) and a following convolution operator with a stride of 1, as shown in Fig. 3 (nearest-neighbor interpolation, then convolution with a learned kernel).
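
A minimal PyTorch sketch of this decomposition, assuming a 2x upsampling factor and 64 feature channels (both illustrative): nearest-neighbor interpolation followed by a stride-1 convolution, next to a single transposed-convolution layer that learns the upsampling kernel directly.

```python
import torch
import torch.nn as nn

# (a) The decomposition described above: nearest-neighbor interpolation
#     followed by a convolution with stride 1.
upsample_then_conv = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
)

# (b) A single transposed-convolution (deconvolution) layer; the kernel
#     size and padding here are illustrative choices for a 2x factor.
deconv = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 64, 16, 16)      # an LR feature map
print(upsample_then_conv(x).shape)  # torch.Size([1, 64, 32, 32])
print(deconv(x).shape)              # torch.Size([1, 64, 32, 32])
```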