Single Image Super-Resolution with EDSR, WDSR and SRGAN
A TensorFlow 2.x based implementation of EDSR, WDSR and SRGAN for single image super-resolution.
This is a complete rewrite of the old Keras/TensorFlow 1.x based implementation available here. Some parts are still work in progress, but you can already train models as described in the papers via a high-level training API. Furthermore, you can fine-tune EDSR and WDSR models in an SRGAN context. Training and usage examples are given in the notebooks.
A DIV2K data provider automatically downloads DIV2K training and validation images of a given scale (2, 3, 4 or 8) and downgrade operator ("bicubic", "unknown", "mild" or "difficult").
Important: if you want to evaluate the pre-trained models with a dataset other than DIV2K please read this comment (and replies) first.
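The scale and downgrade options above can be sketched as a small validation helper. The `DIV2K` class and its keyword arguments in the commented usage are assumptions about this repository's API, not confirmed by this README:

```python
# Valid options for the DIV2K data provider as listed above.
VALID_SCALES = {2, 3, 4, 8}
VALID_DOWNGRADES = {"bicubic", "unknown", "mild", "difficult"}

def valid_combination(scale, downgrade):
    """Check a (scale, downgrade) pair against the documented options."""
    return scale in VALID_SCALES and downgrade in VALID_DOWNGRADES

if __name__ == "__main__":
    assert valid_combination(4, "bicubic")
    # Hypothetical provider usage (names assumed; downloads on first use):
    # from data import DIV2K
    # train = DIV2K(scale=4, downgrade="bicubic", subset="train")
    # train_ds = train.dataset(batch_size=16, random_transform=True)
```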
Environment setup
Create a new conda environment with
conda env create -f environment.yml
and activate it with
conda activate sisr
Introduction
You can find an introduction to single-image super-resolution in this article. It also demonstrates how EDSR and WDSR models can be fine-tuned with SRGAN (see also this section).
Getting started
Examples in this section require the following pre-trained weights (see also the example notebooks):
Pre-trained weights
weights-edsr-16-x4.tar.gz
EDSR x4 baseline as described in the EDSR paper: 16 residual blocks, 64 filters, 1.52M parameters.
PSNR on DIV2K validation set = 28.89 dB (images 801 - 900, 6 + 4 pixel border included).
weights-wdsr-b-32-x4.tar.gz
WDSR B x4 custom model: 32 residual blocks, 32 filters, expansion factor 6, 0.62M parameters.
PSNR on DIV2K validation set = 28.91 dB (images 801 - 900, 6 + 4 pixel border included).
weights-srgan.tar.gz
SRGAN as described in the SRGAN paper: 1.55M parameters, trained with VGG54 content loss.
After download, extract them in the root folder.
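The PSNR figures above are computed on DIV2K validation images with a pixel border excluded from the comparison. A minimal NumPy sketch of such a border-cropped PSNR, assuming 8-bit images and a single symmetric border width (the exact cropping protocol behind the reported numbers may differ):

```python
import numpy as np

def psnr(sr, hr, border=0, max_val=255.0):
    """PSNR (dB) between a super-resolved and a high-resolution image.

    A `border` of N pixels is cropped from each side before comparison,
    loosely mirroring the border handling mentioned for the DIV2K
    evaluation above (the exact protocol may differ).
    """
    sr = np.asarray(sr, dtype=np.float64)
    hr = np.asarray(hr, dtype=np.float64)
    if border > 0:
        sr = sr[border:-border, border:-border]
        hr = hr[border:-border, border:-border]
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)
```

For identical images the function returns infinity; a uniform per-pixel difference of 1 gives 20·log10(255) ≈ 48.13 dB.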