Deep-Learning-Based Compressed Sensing of Images
Quite a few papers implement compressed sampling and reconstruction of images with deep learning; this post mainly documents the process of reproducing the code of one of them.
Paper analyzed: [1] Shi W, Jiang F, Zhang S, et al. Deep Networks for Compressed Image Sensing. IEEE International Conference on Multimedia and Expo (ICME), 2017: 877-882.
The overall framework of the paper:
The central idea: a strided convolution performs the compressed sampling of the image; a convolution whose output depth equals the squared block size performs the initial reconstruction of each small block and the stitching of the blocks back into a full image; finally, a 5-layer convolutional network restores the image. This process is much like that of Gan's 2007 paper on block compressed sensing [2]. Combining the two papers, the implementation is as follows (a minimal sketch of the network is given right after reference [2]):
[2] Gan L. Block Compressed Sensing of Natural Images. International Conference on Digital Signal Processing (DSP). IEEE, 2007: 403-406.
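For concreteness, here is a minimal sketch of that pipeline in TensorFlow 1.x. It is not the paper's released code: the block size B = 32, the sampling ratio of 0.25, and the 64-filter width of the refinement layers are assumptions chosen for illustration; only the overall structure (strided-convolution sampling, 1*1-convolution initial reconstruction with depth_to_space reassembly, 5-layer refinement CNN) follows the description above.
import tensorflow as tf

B = 32                        # block size (assumption)
ratio = 0.25                  # sampling ratio M/N (assumption)
n_measure = int(ratio * B * B)

def csnet(x):
    # Sampling: one B x B convolution with stride B acts as a learned
    # measurement matrix applied to each non-overlapping B x B block.
    y = tf.layers.conv2d(x, n_measure, B, strides=B, padding='valid',
                         use_bias=False, name='sampling')
    # Initial reconstruction: a 1 x 1 convolution maps each block's
    # measurements back to B*B pixel values ...
    init = tf.layers.conv2d(y, B * B, 1, padding='valid', name='init_recon')
    # ... and depth_to_space rearranges those B*B channels into a
    # B x B spatial block, stitching the blocks into a full image.
    init = tf.depth_to_space(init, B)
    # Deep reconstruction: a 5-layer CNN refines the initial estimate.
    h = init
    for i in range(4):
        h = tf.layers.conv2d(h, 64, 3, padding='same',
                             activation=tf.nn.relu, name='refine%d' % i)
    return tf.layers.conv2d(h, 1, 3, padding='same', name='refine_out')

x = tf.placeholder(tf.float32, [None, 128, 128, 1])
x_hat = csnet(x)
loss = tf.reduce_mean(tf.square(x_hat - x))   # MSE against the original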
Preparing the dataset: the 400 natural images from BSDS500 are augmented by translation, rotation, and mirroring into roughly 70,000 images of size 128*128, stored in .h5 format, which is convenient to use; a further 100 images serve as the test set. A sketch of the augmentation and .h5 writing step is given below.
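As an illustration of this preparation step, here is a minimal sketch, assuming the 128*128 grayscale crops have already been extracted. The augment() helper and the contents of 'sample_line' are assumptions; only the dataset names 'orig_image' and 'sample_line' are fixed by the reader code further down.
import h5py
import numpy as np

def augment(img):
    # The four rotations of a crop plus their mirror images
    # (translation is assumed to happen when the crops are extracted).
    out = []
    for k in range(4):
        r = np.rot90(img, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

def write_data(path, orig_image, sample_line):
    # Store the augmented patches and their sampled measurements under
    # the dataset names that read_data() below expects.
    with h5py.File(path, 'w') as hf:
        hf.create_dataset('orig_image', data=orig_image)
        hf.create_dataset('sample_line', data=sample_line)

# Hypothetical usage:
# patches = np.stack([a for c in crops for a in augment(c)])
# write_data('/home/train.h5', patches, lines)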
The complete code:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'   # silence TensorFlow INFO/WARNING logs
import tensorflow as tf
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from skimage import io, img_as_float, measure
import h5py
import collections
#########################################################################
# Read the training data (the augmented 128*128 patches).
path = '/home/train.h5'
def read_data(path):
    with h5py.File(path, 'r') as hf:
        orig_image = np.array(hf.get('orig_image'))
        sample_line = np.array(hf.get('sample_line'))
    return orig_image, sample_line
orig_image, sample_line = read_data(path)
# Read the validation data; the file layout is identical, so read_data()
# is reused instead of defining a duplicate read_data1().
path1 = '/home/test1.h5'
test_image, test_line = read_data(path1)
test_image = test_image.reshape((100, 256, 256, 1))   # 100 images, 256*256, 1 channel
# Load the test data (a dict of image name -> array saved with np.save):
read_dictionary = np.load('/home/lab30202/Juanjuan/images/h5/test_data.npy').item()
image_test = tf.placeholder(tf.float32, [1, 512, 512, 1])   # one 512*512 test image at a time
# CNN training hyperparameters
learning_rate1 = 0.001
learning_rate2 = 0.0001
learning_rate3 = 0.00001
#training_iters = 25
batch_size = 64
num_samples=