2021.05.10
Paper: Compression Artifacts Reduction by a Deep Convolutional Network (ICCV 2015)
Paper download: https://arxiv.org/abs/1504.06993 or https://www.paperswithcode.com/paper/compression-artifacts-reduction-by-a-deep#code
Code: https://github.com/volvet/ARCNN (this is the original ARCNN code, not Fast ARCNN)
Environment: PyCharm + Anaconda
TensorFlow 2.0.0, Keras 2.3.1
1. Following the README provided with the code, download the BSDS500 and LIVE2 datasets (rename LIVE2 to databaserelease2 after downloading) and place them in the data folder. (You can also ask me for them if you can't be bothered to download them, or can't find them.)
2. Run the MATLAB .m files under ./data/source in MATLAB. The dataset paths inside the scripts need to be adjusted to your own setup. Copy the generated ProcessedData folder into the project's ./data directory. (Alternatively, you can use absolute paths directly in MATLAB and write the output into the project directory.)
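For anyone without MATLAB, here is a hedged Python stand-in for what the preprocessing step produces: a (ground-truth, JPEG-compressed) image pair per source image. This is only an illustration of the idea, not the repo's actual script; the function name, output folder layout, and file suffixes below are all made up, and the authoritative version remains the .m file in ./data/source.

```python
# Hypothetical sketch of the preprocessing step: JPEG-compress each source
# image at the configured quality to create (compressed, ground-truth) pairs.
import io
import os
from PIL import Image

def make_pair(src_path, out_dir, quality=10):
    """Write the ground truth and a JPEG-degraded copy of one image."""
    img = Image.open(src_path).convert("L")  # ARCNN operates on the luma channel
    os.makedirs(out_dir, exist_ok=True)
    name = os.path.splitext(os.path.basename(src_path))[0]
    img.save(os.path.join(out_dir, name + "_truth.png"))
    buf = io.BytesIO()  # round-trip through an in-memory JPEG to add artifacts
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    Image.open(buf).save(os.path.join(out_dir, name + "_compres.png"))
```

A low quality value (e.g. 10) produces strong blocking artifacts, which is the regime the paper targets.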
3. Create an empty folder …/logs (relative to train.py, i.e. one directory up) and update the corresponding path in the code.
Then run train.py.
(1) You will find that the line data = BSDS500(conf.data_path, conf.quality, conf.batch_size, conf.test_size) in train.py raises an error: the BSDS500 constructor in BSDS500.py only accepts three positional arguments, while this call passes five (including self).
Fix: modify BSDS500.py so the constructor accepts all four configuration values:

class BSDS500(object):
    def __init__(self, data_path, quality, batch_size, test_size):
        self.train = Train(data_path, quality)
        self.test = Test(data_path, quality)
        self.batch_size = batch_size
        self.test_size = test_size

Make sure the parameter order matches the call site.
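To see why the original code fails, here is a minimal self-contained reproduction (Train/Test loading is omitted; the stub classes only store the arguments):

```python
# Reproduction of error (1): the repo's original constructor takes only
# (self, data_path, quality), so the five positional values passed from
# train.py raise a TypeError.
class OldBSDS500(object):
    def __init__(self, data_path, quality):  # three parameters, incl. self
        self.data_path, self.quality = data_path, quality

err = None
try:
    OldBSDS500("./data", 10, 128, 4)  # mirrors the call shape in train.py
except TypeError as e:
    err = e  # "takes 3 positional arguments but 5 were given"

# The corrected signature accepts all four configuration values:
class BSDS500(object):
    def __init__(self, data_path, quality, batch_size, test_size):
        self.data_path, self.quality = data_path, quality
        self.batch_size, self.test_size = batch_size, test_size

data = BSDS500("./data", 10, 128, 4)
```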
(2) Another place raises TypeError: Value passed to parameter 'begin' has DataType float32 not in list of allowed values.
Cause: lines 25-30 of models.py pass float values where tf.strided_slice expects integer indices.
Fix: cast the slice offsets to int:

mid_compres = tf.strided_slice(self.compres, [0, int(shift_height), int(shift_width), 0],
    [conf.batch_size, int(shift_height + conf.valid_height), int(shift_width + conf.valid_width), conf.channel])
mid_reconstruct = tf.strided_slice(self.F_4, [0, int(shift_height), int(shift_width), 0],
    [conf.batch_size, int(shift_height + conf.valid_height), int(shift_width + conf.valid_width), conf.channel])
mid_truths = tf.strided_slice(self.truths, [0, int(shift_height), int(shift_width), 0],
    [conf.batch_size, int(shift_height + conf.valid_height), int(shift_width + conf.valid_width), conf.channel])
(3) train.py has one more error, roughly: data.train.next_batch() is missing a batch_size argument.
The configuration at the end of the file shows batch_size = 128,
so change batch_truth, batch_compres = data.train.next_batch()  # at line 43
to batch_truth, batch_compres = data.train.next_batch(128)
and do the same at line 64.
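For clarity, here is a sketch of the next_batch contract this fix assumes: each call must be told how many samples to draw. The Train stub below is illustrative only; the real class in BSDS500.py yields image patches, not integers.

```python
# Illustrative stub of a batch sampler whose next_batch requires an explicit
# batch_size, matching the corrected calls in train.py.
import random

class Train(object):
    def __init__(self, samples):
        self.samples = samples  # list of (truth, compres) pairs

    def next_batch(self, batch_size):
        picked = random.sample(self.samples, batch_size)
        truths = [t for t, _ in picked]
        compres = [c for _, c in picked]
        return truths, compres

pairs = [(i, -i) for i in range(256)]
batch_truth, batch_compres = Train(pairs).next_batch(128)  # batch_size = 128
```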