tensorlayer学习日志8_chapter3_3.6

This post walks through the stacked autoencoder example in TensorLayer, notes an image-overlap problem that can occur when using tl.visualize.W together with matplotlib.pyplot (plt), and shares how floor division resolved it. After running the example on a personal computer, the author got the expected output on a work computer.
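A minimal sketch (my own illustration, not code from the book) of why floor division matters for the overlap issue above: subplot grid indices computed with `/` become floats in Python 3, which breaks integer-based layout bookkeeping such as `plt.subplot`; `//` keeps them ints. The 25-row grid below is an assumed layout for the example's 800 hidden units.

```python
n_units = 800                   # hidden units to visualize, as in the example
num_rows = 25                   # assumed grid height for this sketch
num_cols = n_units // num_rows  # floor division keeps this an int

def grid_position(k, num_cols):
    """(row, col) of the k-th weight image in the grid -- both ints."""
    return k // num_cols, k % num_cols

print(num_cols)                     # 32
print(grid_position(65, num_cols))  # (2, 1)
```

With true division (`n_units / num_rows`) the same expressions would yield floats and an exception (or misplaced images) inside the plotting code.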

The final example of Chapter 3: a stacked autoencoder.

 

import tensorflow as tf
import tensorlayer as tl
import numpy as np
import time


# model = 'sigmoid'
model = 'relu'
# n_epoch = 200
n_epoch = 100
batch_size = 128
learning_rate = 0.0001
print_freq = 10


X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))

sess = tf.InteractiveSession()

if model == 'relu':
    act = tf.nn.relu
    act_recon = tf.nn.softplus
elif model == 'sigmoid':
    act = tf.nn.sigmoid
    act_recon = act

print("~~~~~~~~Build net~~~~~~~~~~")

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None], name='y_')


network = tl.layers.InputLayer(x, name='input')

# Denoising layer for the autoencoder (corrupts the input)
network = tl.layers.DropoutLayer(network, keep=0.5, name='denoising1')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')

# 1st layer: the first denoising autoencoder
network = tl.layers.DenseLayer(network, n_units=800, act=act, name='dense1')
x_recon1 = network.outputs
recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784, act=act_recon, name='recon_layer1')
# 2nd layer: the second denoising autoencoder
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800, act=act, name=model + '2')
recon_layer2 = tl.layers.ReconLayer(network, x_recon=x_recon1, n_units=800, act=act_recon, name='recon_layer2')
# 3rd layer: the classifier
# network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3') # this dropout layer is not in the book
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output')

# Define the fine-tuning step, i.e. the loss function
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1) # not present in the GitHub example
cost = tl.cost.cross_entropy(y, y_, name='cost')

train_params = network.all_params

train_op = tf.train.AdamOptimizer(learning_rate).minimize(cost, var_list=train_params)

# Start greedy layer-wise pre-training (the session was already created above)
tl.layers.initialize_global_variables(sess)

# Pre-train
print("~~~~~~~~~~Parameters before pre-training~~~~~~~~~~")
network.print_params()
print("\n~~~~~~~~~~~~~Pre-train Layer 1~~~~~~~~~~~~~")
recon_layer1.pretrain(
    sess, x=x, X_train=X_train, X_val=X_val, denoise_name='denoising1', n_epoch=50, batch_size=128, print_freq=10,
    save=False, save_name='w1pre_')
print("\n~~~~~~~~~~~~~~~Pre-train Layer 2~~~~~~~~~~~")
recon_layer2.pretrain(
    sess, x=x, X_train=X_train, X_val=X_val, denoise_name='denoising1', n_epoch=50, batch_size=128, print_freq=10,
    save=False)
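The post is cut off after the second pre-training call; in the TensorLayer tutorial, fine-tuning then iterates over the MNIST arrays in shuffled mini-batches and runs `train_op` on each batch. As a hedged sketch (my own NumPy stand-in, not the author's code), this generator mirrors what `tl.iterate.minibatches(..., shuffle=True)` does:

```python
import numpy as np

def minibatches(inputs, targets, batch_size, shuffle=True, seed=None):
    """Yield (X_batch, y_batch) pairs; drops the last incomplete batch."""
    assert len(inputs) == len(targets)
    idx = np.arange(len(inputs))
    if shuffle:
        rng = np.random.default_rng(seed)
        rng.shuffle(idx)
    for start in range(0, len(inputs) - batch_size + 1, batch_size):
        batch = idx[start:start + batch_size]
        yield inputs[batch], targets[batch]

# Toy stand-ins for X_train / y_train:
X = np.arange(10 * 4).reshape(10, 4).astype(np.float32)
y = np.arange(10)
batches = list(minibatches(X, y, batch_size=3, seed=0))
print(len(batches))         # 3 full batches from 10 samples
print(batches[0][0].shape)  # (3, 4)
```

In the actual fine-tune loop, each `(X_batch, y_batch)` would be fed through `feed_dict` to `sess.run(train_op, ...)` along with the dropout-enabling dictionary (`network.all_drop`), following the TensorLayer tutorial's pattern.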