TensorLayer Study Notes: Convolutional Neural Networks

I. The code first

import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()
# load the MNIST data
X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))

# define placeholders
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='x')
y_ = tf.placeholder(tf.int64, shape=[None], name='y_')

# build the model
network = tl.layers.InputLayer(inputs=x, name='input_layer')
network = tl.layers.Conv2dLayer(network, act=tf.nn.relu, shape=[5, 5, 1, 32], strides=[1, 1, 1, 1], padding='SAME', name='cnn_layer1')
network = tl.layers.PoolLayer(network, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=tf.nn.max_pool, name='pool_layer1')

network = tl.layers.Conv2dLayer(network, act=tf.nn.relu, shape=[5, 5, 32, 64], strides=[1, 2, 2, 1], padding='SAME', name='cnn_layer2')
network = tl.layers.PoolLayer(network, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=tf.nn.max_pool, name='pool_layer2')

network = tl.layers.FlattenLayer(network, name='flatten_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop1')

network = tl.layers.DenseLayer(network, n_units=256, act=tf.nn.relu, name='fc1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')

network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output_layer')

# define the loss and accuracy metrics
y = network.outputs
cost = tl.cost.cross_entropy(y, y_, name='cost')
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
y_op = tf.argmax(tf.nn.softmax(y), 1)

# define the optimizer
train_param = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.0001, use_locking=False).minimize(cost, var_list=train_param)

# tensorboard
acc_summ = tf.summary.scalar('acc', acc)
cost_summ = tf.summary.scalar('cost', cost)
summary = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs')
writer.add_graph(sess.graph)

# initialize all variables
tl.layers.initialize_global_variables(sess)

# print model info
network.print_layers()
network.print_params()

# train the model
tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=512, n_epoch=100, print_freq=10,
             X_val=X_val, y_val=y_val, eval_train=False, tensorboard=True)

# evaluate
tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)

# save the model
tl.files.save_npz(network.all_params, name='model.npz')
sess.close()


II. What differs from the multilayer neural network

The previous post covered TensorLayer's multilayer neural network implementation. Apart from steps two, three, and four, this CNN version is identical, so the shared parts are not repeated here; the three differing steps are explained below.

1. Step two: load the MNIST data

X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 28, 28, 1))
The raw MNIST data stores each image flattened into a 784-dimensional vector, but a convolutional network expects 2-D image input, so the loader reshapes it to 28×28×1; MNIST is grayscale, so there is a single channel.
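The shape=(-1, 28, 28, 1) argument makes the loader do this reshape for you. The equivalent manual step would look roughly like this (a minimal NumPy sketch; X_flat is an illustrative stand-in for the raw data):

import numpy as np

# a batch of flattened MNIST vectors: (N, 784)
X_flat = np.zeros((5, 784), dtype=np.float32)  # stand-in for real data

# -1 lets NumPy infer the batch dimension; 28 * 28 * 1 == 784
X_img = X_flat.reshape(-1, 28, 28, 1)
print(X_img.shape)  # (5, 28, 28, 1)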

2. Step three: define the placeholders

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='x')
y_ = tf.placeholder(tf.int64, shape=[None], name='y_')
As before, x has shape [None, 28, 28, 1].

3. Step four: build the model

First, define the input layer:

network = tl.layers.InputLayer(inputs=x, name='input_layer')
Then define a convolutional layer:

network = tl.layers.Conv2dLayer(network, act=tf.nn.relu, shape=[5, 5, 1, 32], strides=[1, 1, 1, 1], padding='SAME', name='cnn_layer1')
The activation is ReLU. The shape argument gives the kernel shape: the first two numbers are the filter height and width, the third is the number of input channels from the previous layer (i.e., the channel count of each filter), and the fourth is the number of filters. Here each filter is 5×5×1 and there are 32 of them, so the convolution output has 32 channels. The strides argument holds one stride per input dimension (batch, height, width, channel); usually only the height and width strides need setting, and here both are 1. padding takes one of two options, 'SAME' or 'VALID'.
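As a sanity check on those parameters, the output height and width follow TensorFlow's usual formulas: ceil(in / stride) for 'SAME' and ceil((in - filter + 1) / stride) for 'VALID'. A small sketch (plain Python; the values match cnn_layer1):

import math

def conv_out_size(in_size, filter_size, stride, padding):
    # standard TensorFlow output-size rules
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    return math.ceil((in_size - filter_size + 1) / stride)

print(conv_out_size(28, 5, 1, 'SAME'))   # 28: cnn_layer1 keeps 28x28
print(conv_out_size(28, 5, 1, 'VALID'))  # 24: VALID would shrink the map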

Next, a pooling layer:

network = tl.layers.PoolLayer(network, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=tf.nn.max_pool, name='pool_layer1')
ksize is the pooling window size; like strides, each element corresponds to one input dimension, so this is 2×2 pooling. strides and padding work the same way as above, and pool=tf.nn.max_pool selects max pooling.
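PoolLayer is a thin wrapper that applies whatever function is passed as pool to the previous layer's output. A rough raw-TensorFlow equivalent (a sketch; the placeholder stands in for the previous layer's (?, 28, 28, 32) output):

import tensorflow as tf

prev_out = tf.placeholder(tf.float32, [None, 28, 28, 32])
# a 2x2 window with stride 2 and SAME padding halves height and width
pooled = tf.nn.max_pool(prev_out, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')
print(pooled.get_shape())  # (?, 14, 14, 32)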

That completes one typical conv+pool block; the remaining layers are defined the same way:

network = tl.layers.Conv2dLayer(network, act=tf.nn.relu, shape=[5, 5, 32, 64], strides=[1, 2, 2, 1], padding='SAME', name='cnn_layer2')
network = tl.layers.PoolLayer(network, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=tf.nn.max_pool, name='pool_layer2')

network = tl.layers.FlattenLayer(network, name='flatten_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop1')

network = tl.layers.DenseLayer(network, n_units=256, act=tf.nn.relu, name='fc1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
Finally, as before, a 10-unit output layer:

network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output_layer')
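Note that the output layer uses act=tf.identity: tl.cost.cross_entropy expects unscaled logits, since in TensorLayer 1.x it wraps tf.nn.sparse_softmax_cross_entropy_with_logits, which applies the softmax internally. Roughly the equivalent raw-TensorFlow loss (a sketch; the placeholders stand in for y and y_ above):

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])   # stands in for y
labels = tf.placeholder(tf.int64, [None])         # stands in for y_
# softmax + cross-entropy in one numerically stable op, averaged over the batch
cost_manual = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))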
For the remaining parts, see the previous post.
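One note on the last step: the parameters written by tl.files.save_npz can be loaded back into a network of identical architecture. A minimal sketch (assuming sess and network have been rebuilt exactly as above; load_and_assign_npz is the TensorLayer 1.x counterpart of save_npz):

# rebuild the same graph and session, then restore the saved weights
tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)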

III. Running it

Running the script produces output like the following:

Load or Download MNIST > data/mnist/
data/mnist/train-images-idx3-ubyte.gz
data/mnist/t10k-images-idx3-ubyte.gz
  [TL] InputLayer  input_layer: (?, 28, 28, 1)
  [TL] Conv2dLayer cnn_layer1: shape:[5, 5, 1, 32] strides:[1, 1, 1, 1] pad:SAME act:relu
  [TL] PoolLayer   pool_layer1: ksize:[1, 2, 2, 1] strides:[1, 2, 2, 1] padding:SAME pool:max_pool
  [TL] Conv2dLayer cnn_layer2: shape:[5, 5, 32, 64] strides:[1, 2, 2, 1] pad:SAME act:relu
  [TL] PoolLayer   pool_layer2: ksize:[1, 2, 2, 1] strides:[1, 2, 2, 1] padding:SAME pool:max_pool
  [TL] FlattenLayer flatten_layer: 1024
  [TL] DropoutLayer drop1: keep:0.500000 is_fix:False
  [TL] DenseLayer  fc1: 256 relu
  [TL] DropoutLayer drop2: keep:0.500000 is_fix:False
  [TL] DenseLayer  output_layer: 10 identity
  layer   0: cnn_layer1/Relu:0    (?, 28, 28, 32)    float32
  layer   1: pool_layer1:0        (?, 14, 14, 32)    float32
  layer   2: cnn_layer2/Relu:0    (?, 7, 7, 64)      float32
  layer   3: pool_layer2:0        (?, 4, 4, 64)      float32
  layer   4: flatten_layer:0      (?, 1024)          float32
  layer   5: drop1/mul:0          (?, 1024)          float32
  layer   6: fc1/Relu:0           (?, 256)           float32
  layer   7: drop2/mul:0          (?, 256)           float32
  layer   8: output_layer/Identity:0 (?, 10)            float32
  param   0: cnn_layer1/W_conv2d:0 (5, 5, 1, 32)      float32_ref (mean: -2.510603553673718e-05, median: 0.0005096496315672994, std: 0.018007082864642143)   
  param   1: cnn_layer1/b_conv2d:0 (32,)              float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  param   2: cnn_layer2/W_conv2d:0 (5, 5, 32, 64)     float32_ref (mean: -2.960372694360558e-05, median: -0.00010261016723234206, std: 0.017551297321915627)   
  param   3: cnn_layer2/b_conv2d:0 (64,)              float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  param   4: fc1/W:0              (1024, 256)        float32_ref (mean: 1.893113676487701e-06, median: 0.00015916726260911673, std: 0.08775801211595535)   
  param   5: fc1/b:0              (256,)             float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  param   6: output_layer/W:0     (256, 10)          float32_ref (mean: 0.0018607772653922439, median: 0.0004102111852262169, std: 0.08700685948133469)   
  param   7: output_layer/b:0     (10,)              float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  num of params: 317066
Setting up tensorboard ...
[!] logs/ exists ...
Param name  cnn_layer1/W_conv2d:0
Param name  cnn_layer1/b_conv2d:0
Param name  cnn_layer2/W_conv2d:0
Param name  cnn_layer2/b_conv2d:0
Param name  fc1/W:0
Param name  fc1/b:0
Param name  output_layer/W:0
Param name  output_layer/b:0
Finished! use $tensorboard --logdir=logs/ to start server
Start training the network ...
Epoch 1 of 100 took 140.208019s
   val loss: 1.406113
   val acc: 0.737664
Epoch 10 of 100 took 119.369828s
   val loss: 0.109499
   val acc: 0.969984
Epoch 20 of 100 took 123.663073s
   val loss: 0.073103
   val acc: 0.978721
