TensorLayer Learning Log 12: Chapter 4, Section 4.6.1

This post records my work through Section 4.6.1 of the TensorLayer material. The code and its output are walked through below; because of limited compute, only a few iterations were run (n_epoch = 5). Even so, the experiment shows a clear speed difference between the different run modes, which underlines how important a powerful GPU is for convolutional neural networks (CNNs) on image data.

In Section 4.6.1, the first run uses:

network, cost, _ = model_batch_norm(x, y_, reuse=False, is_train=True)
_, cost_test, acc = model_batch_norm(x, y_, reuse=True, is_train=False)
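
These two calls build one set of weights and share it: the first creates the variables (reuse=False) with training behaviour enabled, and the second reuses them (reuse=True) for evaluation. Both assume x and y_ have already been defined; a minimal sketch of those placeholders (the batch_size value is an assumption, matching the usual CIFAR-10 setup):

batch_size = 128
x = tf.placeholder(tf.float32, shape=[batch_size, 32, 32, 3], name='x')   # CIFAR-10 images
y_ = tf.placeholder(tf.int64, shape=[batch_size], name='y_')              # integer class labels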

The full code is as follows:

import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import *
import numpy as np
import time

sess = tf.InteractiveSession()

X_train, y_train, X_test, y_test = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3), plotable=False)

def model(x, y_, reuse):
    W_init = tf.truncated_normal_initializer(stddev=5e-2)
    W_init2 = tf.truncated_normal_initializer(stddev=0.04)
    b_init2 = tf.constant_initializer(value=0.1)
    with tf.variable_scope("model", reuse=reuse):
        tl.layers.set_name_reuse(reuse)  # not present in the GitHub example
        net = InputLayer(x, name='input')

        net = Conv2d(net, n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', W_init=W_init, name='cnn1')
        # net = Conv2dLayer(net, act=tf.nn.relu, shape=[5, 5, 3, 64],strides=[1, 1, 1, 1], padding='SAME',  W_init=W_init, name ='cnn1')
        # output: (batch_size, 32, 32, 64)  -- SAME padding, stride 1 keeps the 32x32 input size
        
        net = MaxPool2d(net, filter_size=(3, 3), strides=(2, 2), padding='SAME', name='pool1')
        # net = PoolLayer(net, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', pool = tf.nn.max_pool, name ='pool1',)
        # output: (batch_size, 16, 16, 64)
        
        net = LocalResponseNormLayer(net, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')
        # net.outputs = tf.nn.lrn(net.outputs, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')

        net = Conv2d(net, n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', W_init=W_init, name='cnn2')
        # output: (batch_size, 16, 16, 64)

        net = LocalResponseNormLayer(net, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2')

        net = MaxPool2d(net, filter_size=(3, 3), strides=(2, 2), padding='SAME', name='pool2')
        # output: (batch_size, 8, 8, 64)

        net = FlattenLayer(net, name='flatten')  
        # output: (batch_size, 4096)  -- 8 * 8 * 64
        
        net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')  
        # output: (batch_size, 384)
        
        net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')  
        # output: (batch_size, 192)
        
        net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output')  
        # output: (batch_size, 10)
        
        y = net.outputs

        ce = tl.cost.cross_entropy(y, y_, name='cost')
        # L2 for the MLP, without this, the accuracy will be reduced by 15%.
        L2 = 0
        for p in tl.layers.get_variables_with_name('relu/W', True, True):
            L2 += tf.contrib.layers.l2_regularizer(0.004)(p)
        cost = ce + L2

        correct_prediction = tf.equal(tf.argmax(y, 1), y_)
        acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        return net, cost, acc

def model_batch_norm(x, y_, reuse, is_train):
    # Same architecture as model(), but with batch normalization after each
    # conv layer in place of local response normalization. (The original
    # listing breaks off here; the body below follows TensorLayer's
    # tutorial_cifar10.py example.)
    W_init = tf.truncated_normal_initializer(stddev=5e-2)
    W_init2 = tf.truncated_normal_initializer(stddev=0.04)
    b_init2 = tf.constant_initializer(value=0.1)
    with tf.variable_scope("model", reuse=reuse):
        tl.layers.set_name_reuse(reuse)
        net = InputLayer(x, name='input')

        net = Conv2d(net, n_filter=64, filter_size=(5, 5), strides=(1, 1), padding='SAME', W_init=W_init, b_init=None, name='cnn1')
        net = BatchNormLayer(net, decay=0.99, is_train=is_train, act=tf.nn.relu, name='batch1')
        net = MaxPool2d(net, filter_size=(3, 3), strides=(2, 2), padding='SAME', name='pool1')

        net = Conv2d(net, n_filter=64, filter_size=(5, 5), strides=(1, 1), padding='SAME', W_init=W_init, b_init=None, name='cnn2')
        net = BatchNormLayer(net, decay=0.99, is_train=is_train, act=tf.nn.relu, name='batch2')
        net = MaxPool2d(net, filter_size=(3, 3), strides=(2, 2), padding='SAME', name='pool2')

        net = FlattenLayer(net, name='flatten')
        net = DenseLayer(net, n_units=384, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d1relu')
        net = DenseLayer(net, n_units=192, act=tf.nn.relu, W_init=W_init2, b_init=b_init2, name='d2relu')
        net = DenseLayer(net, n_units=10, act=tf.identity, W_init=tf.truncated_normal_initializer(stddev=1 / 192.0), name='output')
        y = net.outputs

        ce = tl.cost.cross_entropy(y, y_, name='cost')
        # L2 regularization on the dense-layer weights, as in model()
        L2 = 0
        for p in tl.layers.get_variables_with_name('relu/W', True, True):
            L2 += tf.contrib.layers.l2_regularizer(0.004)(p)
        cost = ce + L2

        correct_prediction = tf.equal(tf.argmax(y, 1), y_)
        acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        return net, cost, acc
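
The training loop itself is not shown above; a minimal sketch of what running n_epoch = 5 looks like, assuming the Adam optimizer, learning rate, and tl.iterate.minibatches pattern from TensorLayer's tutorial_cifar10.py:

network, cost, _ = model_batch_norm(x, y_, reuse=False, is_train=True)
_, cost_test, acc = model_batch_norm(x, y_, reuse=True, is_train=False)

n_epoch = 5
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=network.all_params)

tl.layers.initialize_global_variables(sess)

for epoch in range(n_epoch):
    start_time = time.time()
    # train on the reuse=False graph (is_train=True)
    for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
        sess.run(train_op, feed_dict={x: X_batch, y_: y_batch})
    print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))

    # evaluate on the reuse=True graph (is_train=False)
    test_loss, test_acc, n_batch = 0, 0, 0
    for X_batch, y_batch in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False):
        err, ac = sess.run([cost_test, acc], feed_dict={x: X_batch, y_: y_batch})
        test_loss += err
        test_acc += ac
        n_batch += 1
    print("   test loss: %f   test acc: %f" % (test_loss / n_batch, test_acc / n_batch))

Because the test graph shares every variable with the training graph via reuse=True, no checkpoint save or restore is needed between training and evaluation; the two graphs simply read the same weights.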