TensorLayer Learning Log 6_Chapter 3

Section 3.5 is mainly about the denoising autoencoder. Personally, I feel its main job is still to fight overfitting... The hidden-layer weight plots in this section are really interesting~~

My computer can't keep up, so as usual the training is scaled down: the textbook trains with n_epoch=200, but on this old machine 100 will have to do, and a hidden-layer image is drawn every 20 epochs (print_freq=20).

This run uses model='relu'; there should also be a model='sigmoid' variant. I'll try that next time on a better computer and compare~ for now I'll only post the relu results.
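Before the full listing, here is a tiny NumPy sketch of what the DropoutLayer does to the input, which is what turns a plain autoencoder into a denoising one. I assume inverted dropout (survivors scaled by 1/keep), which is how tf.nn.dropout behaves; the function name is my own.

```python
import numpy as np

def corrupt(x, keep=0.5, rng=None):
    """Randomly zero out inputs, scaling survivors by 1/keep
    (inverted dropout, as tf.nn.dropout does)."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) < keep
    return x * mask / keep

x = np.ones((4, 784), dtype=np.float32)
x_noisy = corrupt(x, keep=0.5)
# roughly half the entries become 0, the rest become 2.0;
# the AE is then trained to reconstruct the clean x from x_noisy
```

With keep=0.5 the network never sees a clean input during pre-training, so it has to learn features robust to missing pixels.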

import tensorflow as tf
import tensorlayer as tl
import numpy as np

model = 'relu'

X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))

sess = tf.InteractiveSession()

# placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')

print("~~~~~~~~~~~~~~~Build net~~~~~~~~~~~~~~~~~~~~~~")
if model == 'relu':
    net = tl.layers.InputLayer(x, name='input')
    net = tl.layers.DropoutLayer(net, keep=0.5, name='denoising1')  # if drop some inputs, it is denoise AE
    net = tl.layers.DenseLayer(net, n_units=196, act=tf.nn.relu, name='relu1')
    recon_layer1 = tl.layers.ReconLayer(net, x_recon=x, n_units=784, act=tf.nn.softplus, name='recon_layer1')
elif model == 'sigmoid':
    # sigmoid - set keep to 1.0 if you want a vanilla autoencoder
    net = tl.layers.InputLayer(x, name='input')
    net = tl.layers.DropoutLayer(net, keep=0.5, name='denoising1')
    net = tl.layers.DenseLayer(net, n_units=196, act=tf.nn.sigmoid, name='sigmoid1')
    recon_layer1 = tl.layers.ReconLayer(net, x_recon=x, n_units=784, act=tf.nn.sigmoid, name='recon_layer1')

## ready to train
tl.layers.initialize_global_variables(sess)

## print all params
print("~~~~~~~~~~~All net Params~~~~~~~~~~~~~~~")
net.print_params()

## pretrain
print("~~~~~~~~~~Pre-train Layer 1~~~~~~~~~~~~~~")
recon_layer1.pretrain(
    sess, x=x, X_train=X_train, X_val=X_val, denoise_name='denoising1', n_epoch=100, batch_size=128, print_freq=20,
    save=True, save_name='w1pre_'
)
# You can also disable denoising by setting denoise_name=None.

saver = tf.train.Saver()
# you may want to save the model
save_path = saver.save(sess, "./model_denoising1_3.4/")
print("Model saved in file: %s" % save_path)
sess.close()

The run prints the following:

[TL] Load or Download MNIST > data\mnist
[TL] data\mnist\train-images-idx3-ubyte.gz
[TL] data\mnist\t10k-images-idx3-ubyte.gz
~~~~~~~~~~~~~~~Build net~~~~~~~~~~~~~~~~~~~~~~
[TL] InputLayer  input: (?, 784)
[TL] DropoutLayer denoising1: keep:0.500000 is_fix:False
[TL] DenseLayer  relu1: 196 relu
[TL] DenseLayer  recon_layer1: 784 softplus
[TL] recon_layer1 is a ReconLayer
[TL]      lambda_l2_w: 0.004000
[TL]      learning_rate: 0.000100
[TL]      use: mse, L2_w, L1_a
~~~~~~~~~~~All net Params~~~~~~~~~~~~~~~
[TL]   param   0: relu1/W:0            (784, 196)         float32_ref (mean: 0.000326054374454543, median: 0.0003588348627090454, std: 0.08798697590827942)   
[TL]   param   1: relu1/b:0            (196,)             float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
[TL]   num of params: 153860
~~~~~~~~~~Pre-train Layer 1~~~~~~~~~~~~~~
[TL]      [*] recon_layer1 start pretrain
[TL]      batch_size: 128
[TL]      denoising layer keep: 0.500000
[TL] Epoch 1 of 100 took 11.146820s
[TL]    train loss: 67.451898
[TL]    val loss: 65.987557
[TL] [*] w1pre_1.npz saved
[TL] Epoch 20 of 100 took 10.789619s
[TL]    train loss: 17.309017
[TL]    val loss: 17.203405
C:\Program Files\Anaconda3\lib\site-packages\matplotlib\cbook\deprecation.py:107: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance.  In a future version, a new instance will always be created and returned.  Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
  warnings.warn(message, mplDeprecation, stacklevel=1)
[TL] [*] w1pre_20.npz saved
[TL] Epoch 40 of 100 took 10.814819s
[TL]    train loss: 11.773568
[TL]    val loss: 11.801825
[TL] [*] w1pre_40.npz saved
[TL] Epoch 60 of 100 took 12.611034s
[TL]    train loss: 10.263730
[TL]    val loss: 10.318127
[TL] [*] w1pre_60.npz saved
[TL] Epoch 80 of 100 took 10.957020s
[TL]    train loss: 9.595004
[TL]    val loss: 9.660938
[TL] [*] w1pre_80.npz saved
[TL] Epoch 100 of 100 took 11.122819s
[TL]    train loss: 9.268205
[TL]    val loss: 9.332822
[TL] [*] w1pre_100.npz saved
Model saved in file: ./model_denoising1/
[Finished in 1210.7s]
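The log says the reconstruction loss uses "mse, L2_w, L1_a" with lambda_l2_w = 0.004. A rough NumPy sketch of that combination is below; the exact reductions and the L1 coefficient are my assumptions, only the L2 coefficient comes from the log.

```python
import numpy as np

def recon_loss(x, x_recon, W, a, lambda_l2_w=0.004, lambda_l1_a=0.001):
    """Sketch of ReconLayer's loss: reconstruction error + weight
    decay on W + sparsity penalty on the hidden activations a."""
    mse = np.mean(np.sum((x_recon - x) ** 2, axis=1))  # per-sample error, batch-averaged
    l2_w = lambda_l2_w * np.sum(W ** 2)                # L2 weight decay
    l1_a = lambda_l1_a * np.mean(np.abs(a))            # L1 sparsity on activations
    return mse + l2_w + l1_a

rng = np.random.default_rng(0)
x = rng.random((8, 784))
W = rng.standard_normal((784, 196)) * 0.01
a = np.maximum(x @ W, 0)        # relu hidden activations
loss = recon_loss(x, x, W, a)   # perfect reconstruction: only the penalties remain
```

Even with a perfect reconstruction the loss stays positive because of the penalty terms, which is why the train loss plateaus above zero in the log.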

Hidden-layer weights at the start of training

Epoch 20: you can already see something taking shape

Epoch 40: a little clearer

Epoch 60: it feels like the irrelevant bits are fading while the main features grow

Epoch 80: the images are getting cleaner and cleaner

Epoch 100, the final one. I wonder what it would look like on a good machine with a few thousand epochs~ I'd really love to try

Quite a few files get saved: the .npz files are the numeric form of the hidden-layer weight images, and a model checkpoint is written as well.
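To turn one of those .npz files back into the tiled images, each of the 196 hidden units' weight column (784 values) is reshaped to 28x28. A small sketch below; note I fake the file with a plain np.savez here, since the real w1pre_*.npz is written by TensorLayer and its key layout may differ.

```python
import numpy as np

# Fake a saved weight file in place of the real w1pre_100.npz
W = np.random.default_rng(0).standard_normal((784, 196)).astype(np.float32)
np.savez('w1pre_demo.npz', W=W)

# Load it back and reshape each hidden unit's 784 weights
# into a 28x28 tile, which is what the weight images show
loaded = np.load('w1pre_demo.npz')
W_loaded = loaded['W']
filters = W_loaded.T.reshape(196, 28, 28)  # one tile per hidden unit
print(filters.shape)  # (196, 28, 28)
```

A 14x14 grid of those 196 tiles gives exactly the pictures above.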
