[Deep Learning Image Recognition Course] Autoencoder Series: (1) A Simple Autoencoder

Copyright notice: This is an original article by the author, released under the CC 4.0 BY-SA license. When reposting, please include a link to the original source and this notice.
Original link: https://blog.csdn.net/weixin_41770169/article/details/80330573

I. Introduction to Autoencoders

Autoencoders are used for image compression and denoising.

An autoencoder loses some information, because compression works by forcing the data through a hidden layer with fewer units. As a compressor, an autoencoder performs worse than purpose-built codecs such as JPEG, MP3, or MPEG, and it generalizes poorly to data unlike its training set. However, autoencoders have proven useful for denoising and dimensionality reduction.
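Since denoising comes up later in the series, here is a minimal sketch of how noisy training inputs are typically prepared: add Gaussian noise and clip back to the valid pixel range. The batch and the `noise_factor` value are my own illustrative assumptions, not from this post.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((5, 784))   # stand-in batch of flattened 28x28 images
noise_factor = 0.5              # assumed noise level, purely illustrative

# Corrupt the images, then clip so pixels stay in the valid [0, 1] range
noisy = images + noise_factor * rng.standard_normal(images.shape)
noisy = np.clip(noisy, 0.0, 1.0)
```

A denoising autoencoder would then be fed `noisy` as the input and the clean `images` as the target.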

We will implement two autoencoders. In the first, both the encoder and decoder are simple fully connected networks; in the second, we replace them with a CNN.

 

II. A Simple Autoencoder

Input: flatten each 28×28 image into a 784-dimensional vector

Hidden layer: ReLU activation, 32 units

Output: sigmoid activation, 784 units

Training objective: make the output match the input as closely as possible
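To see concretely what these three layers do to the tensor shapes, here is a minimal NumPy sketch of the forward pass. The weights are random and the batch is fake; this only illustrates the 784 → 32 → 784 shape flow, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(42)
batch = rng.random((10, 784))            # 10 flattened 28x28 images

# Hypothetical, randomly initialized weight matrices (biases omitted)
W_enc = rng.standard_normal((784, 32)) * 0.01
W_dec = rng.standard_normal((32, 784)) * 0.01

hidden = np.maximum(0.0, batch @ W_enc)  # ReLU encoding, shape (10, 32)
logits = hidden @ W_dec                  # decoder logits, shape (10, 784)
output = 1.0 / (1.0 + np.exp(-logits))   # sigmoid keeps outputs in (0, 1)
```

The 32-unit bottleneck is the compressed representation: each 784-pixel image is squeezed into 32 numbers and then expanded back.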

1. Load and visualize the images

%matplotlib inline

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)

 

img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')

2. Build the model

# Size of the encoding layer (the hidden layer)
encoding_dim = 32

image_size = mnist.train.images.shape[1]

inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')

# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)

# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
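A note on the loss: `tf.nn.sigmoid_cross_entropy_with_logits` applies the sigmoid internally, using a numerically stable rewrite of the cross-entropy. The NumPy sketch below is my own re-derivation of that formula, checked against the naive definition on a single value:

```python
import numpy as np

def sigmoid_cross_entropy_with_logits(labels, logits):
    # Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|)),
    # which avoids overflow in exp() for large-magnitude logits
    return (np.maximum(logits, 0.0)
            - logits * labels
            + np.log1p(np.exp(-np.abs(logits))))

z, x = 1.0, 2.0  # label and logit for one pixel
p = 1.0 / (1.0 + np.exp(-x))                       # sigmoid(x)
naive = -(z * np.log(p) + (1.0 - z) * np.log(1.0 - p))
stable = sigmoid_cross_entropy_with_logits(z, x)   # same value, stable form
```

This is why the graph feeds `logits` (not `decoded`) into the loss: applying sigmoid twice would give the wrong gradient.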

3. Training

# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
    for ii in range(mnist.train.num_examples//batch_size):
        batch = mnist.train.next_batch(batch_size)
        feed = {inputs_: batch[0], targets_: batch[0]}
        batch_cost, _ = sess.run([cost, opt], feed_dict=feed)

        print("Epoch: {}/{}...".format(e+1, epochs),
              "Training loss: {:.4f}".format(batch_cost))

 

4. Testing

fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})

for images, row in zip([in_imgs, reconstructed], axes):
    for img, ax in zip(images, row):
        ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

fig.tight_layout(pad=0.1)

sess.close()