Our lab server has recently been unavailable, so reproducing papers and running test code has to happen on online platforms. Here I document my process of deploying a GitHub project (DAT) to Kaggle.
Environment setup:
Python 3.8
PyTorch 1.8.0
NVIDIA GPU + CUDA
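Before installing anything, it can help to confirm the interpreter version and GPU visibility. A minimal sketch (my own helper, not part of the repo; `torch` only imports successfully after the install steps below, so the check degrades gracefully without it):

```python
import sys

def env_report() -> dict:
    """Collect basic environment info; torch/cuda fields stay None/False
    when PyTorch is not installed yet."""
    report = {"python": sys.version_info[:2], "torch": None, "cuda": False}
    try:
        import torch  # only available after the environment is set up
        report["torch"] = torch.__version__
        report["cuda"] = torch.cuda.is_available()
    except ImportError:
        pass
    return report

print(env_report())
```

Run this inside the activated conda env; `cuda` should print `True` on a GPU machine once PyTorch is installed.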
```shell
# Clone the github repo and go to the default directory 'DAT'.
git clone https://github.com/zhengchen1999/DAT.git
cd DAT
conda create -n DAT python=3.8
conda activate DAT
pip install -r requirements.txt
python setup.py develop
```
Datasets: https://drive.google.com/drive/folders/1ZMaZyCer44ZX6tdcDmjIrc_hSsKoMKg2?usp=drive_link
Pretrained models: https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM?usp=drive_link
Visual results provided by the authors: https://drive.google.com/drive/folders/1ZMaZyCer44ZX6tdcDmjIrc_hSsKoMKg2?usp=drive_link
Testing:
```shell
# Test on your dataset
python basicsr/test.py -opt options/Test/test_single_x2.yml
python basicsr/test.py -opt options/Test/test_single_x3.yml
python basicsr/test.py -opt options/Test/test_single_x4.yml
```
Deploying the DAT project on Kaggle and running the tests:
```shell
!git clone https://github.com/zhengchen1999/DAT.git
!pip install -r /kaggle/working/DAT/requirements.txt
!pip install tqdm
!pip install yapf
!pip install timm
!pip install einops
!pip install h5py
!pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
!gdown --folder https://drive.google.com/drive/folders/14VG5mw5ie8RrR4jjypeHynXDZYWL8w-r?usp=sharing -O /kaggle/working/DAT/experiments/pretrained_models/
!cp -r /kaggle/input/dat-datasets-test /kaggle/working/DAT/datasets
cd /kaggle/working/DAT
```

Write the test option file with the `%%writefile` cell magic:

```yaml
%%writefile /kaggle/working/DAT/options/Test/test_DAT_x4.yml

# general settings
name: test_DAT_x4
model_type: DATModel
scale: 4
num_gpu: 2
manual_seed: 10

datasets:
  test_1:  # the 1st test dataset
    task: SR
    name: Set5
    type: PairedImageDataset
    dataroot_gt: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Set5/HR
    dataroot_lq: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Set5/LR_bicubic/X4
    filename_tmpl: '{}x4'
    io_backend:
      type: disk
  test_2:  # the 2nd test dataset
    task: SR
    name: Set14
    type: PairedImageDataset
    dataroot_gt: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Set14/HR
    dataroot_lq: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Set14/LR_bicubic/X4
    filename_tmpl: '{}x4'
    io_backend:
      type: disk
  test_3:  # the 3rd test dataset
    task: SR
    name: B100
    type: PairedImageDataset
    dataroot_gt: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/B100/HR
    dataroot_lq: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/B100/LR_bicubic/X4
    filename_tmpl: '{}x4'
    io_backend:
      type: disk
  test_4:  # the 4th test dataset
    task: SR
    name: Urban100
    type: PairedImageDataset
    dataroot_gt: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Urban100/HR
    dataroot_lq: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Urban100/LR_bicubic/X4
    filename_tmpl: '{}x4'
    io_backend:
      type: disk
  test_5:  # the 5th test dataset
    task: SR
    name: Manga109
    type: PairedImageDataset
    dataroot_gt: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Manga109/HR
    dataroot_lq: /kaggle/working/DAT/datasets/dat-datasets-test/benchmark/Manga109/LR_bicubic/X4
    filename_tmpl: '{}_LRBI_x4'
    io_backend:
      type: disk

# network structures
network_g:
  type: DAT
  upscale: 4
  in_chans: 3
  img_size: 64
  img_range: 1.
  split_size: [8,32]
  depth: [6,6,6,6,6,6]
  embed_dim: 180
  num_heads: [6,6,6,6,6,6]
  expansion_factor: 4
  resi_connection: '1conv'

# path
path:
  pretrain_network_g: /kaggle/working/DAT/experiments/pretrained_models/DAT/DAT_x4.pth
  strict_load_g: True

# validation settings
val:
  save_img: True
  suffix: ~  # add suffix to saved images, if None, use exp name
  use_chop: False  # True to save memory, if img too large

  metrics:
    psnr:  # metric name, can be arbitrary
      type: calculate_psnr
      crop_border: 4
      test_y_channel: True
    ssim:
      type: calculate_ssim
      crop_border: 4
      test_y_channel: True
```

```shell
!python /kaggle/working/DAT/basicsr/test.py -opt /kaggle/working/DAT/options/Test/test_DAT_x4.yml
```
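For reference, the `psnr` metric in the config above crops a border of 4 pixels before comparing images. A simplified NumPy sketch of that idea (single-channel 8-bit images, no Y-channel conversion, so it will not reproduce BasicSR's `calculate_psnr` numbers exactly):

```python
import numpy as np

def psnr(img1: np.ndarray, img2: np.ndarray, crop_border: int = 4) -> float:
    """PSNR between two 8-bit images after cropping `crop_border` pixels
    from every edge, mirroring the crop_border option in the YAML above."""
    if crop_border > 0:
        img1 = img1[crop_border:-crop_border, crop_border:-crop_border]
        img2 = img2[crop_border:-crop_border, crop_border:-crop_border]
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(255.0 ** 2 / mse)
```

Cropping the border before scoring is standard in SR evaluation because pixels near the edge are distorted by padding; the border width conventionally equals the scale factor (4 here).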