TensorFlow from Beginner to Giving Up --- Part 4

Using an RNN to process the MNIST dataset

First, what is an RNN? --- a Recurrent Neural Network.

  

The idea is that when sample x1 is fed in, its output is influenced by the previous step's input x0; in this post's simplified notation, h1 = (x1 + x0*w0)*w1. (In the standard formulation this is h_t = f(x_t*W_x + h_{t-1}*W_h + b), where the same weights W_x and W_h are reused at every step.)

Likewise, when sample x3 is fed in, its output is influenced by x1 and x2 (through the hidden state carried forward).

Summary: in an RNN, the step at x3 is influenced by x1 and x2, and because the same weight matrices are shared at every time step, a plain RNN has no built-in way to weight past information differently. A minimal sketch of this recurrence follows.
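As a minimal NumPy sketch of that shared-weight recurrence (the dimensions match the code later in this post; the weights are random placeholders, not trained values):

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One vanilla-RNN step: the SAME W_x, W_h, b are reused at every t.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

rng = np.random.default_rng(0)
W_x = rng.normal(size=(28, 128)) * 0.01   # diminput x dimhidden
W_h = rng.normal(size=(128, 128)) * 0.01  # dimhidden x dimhidden
b   = np.zeros(128)

h = np.zeros(128)
for x_t in rng.normal(size=(3, 28)):      # three steps: x0, x1, x2
    h = rnn_step(x_t, h, W_x, W_h, b)     # h after x2 also depends on x0, x1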


An upgraded RNN --- the LSTM network structure

Since later samples are influenced by earlier ones and a plain RNN applies the same influence weights throughout, the LSTM differs in that it can choose to forget parts of its state: the influence of the previous state on the current input is no longer fixed.

Compared with a plain RNN, the LSTM adds: 1. a forget gate; 2. an input gate; 3. a cell-state update; 4. an output gate. A sketch of these four pieces follows.
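A minimal NumPy sketch of those four pieces (these are the standard LSTM gate equations; the parameter layout, with one weight pair per gate, is my own choice for readability and is not how TensorFlow stores its LSTM parameters):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # W, U, b are dicts keyed by gate name: 'f', 'i', 'c', 'o'.
    f = sigmoid(x_t @ W['f'] + h_prev @ U['f'] + b['f'])    # 1. forget gate
    i = sigmoid(x_t @ W['i'] + h_prev @ U['i'] + b['i'])    # 2. input gate
    c_new = np.tanh(x_t @ W['c'] + h_prev @ U['c'] + b['c'])
    c = f * c_prev + i * c_new                              # 3. cell-state update
    o = sigmoid(x_t @ W['o'] + h_prev @ U['o'] + b['o'])    # 4. output gate
    h = o * np.tanh(c)
    return h, c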


Each 28*28 handwritten-digit image is split into 28 row vectors of size 1*28; these vectors are fed through the RNN/LSTM one step at a time, producing the prediction and the intermediate results (a reshape sketch follows).
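Concretely, that splitting is just a reshape (a sketch with a placeholder zero batch; the real batches come from mnist.train.next_batch below):

import numpy as np

# MNIST images arrive flattened as 784-vectors (784 = 28*28).
# Treat each image as a sequence of 28 time steps, each a 1*28 row.
batch = np.zeros((16, 784))            # placeholder batch of 16 images
sequence = batch.reshape(16, 28, 28)   # [batchsize, nsteps, diminput]
print(sequence[0, 0].shape)            # first row of the first image: (28,)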



Here _LSTM_O holds the LSTM's output at each time step (only the last one is used for the final prediction), and _LSTM_S holds the LSTM's internal state, i.e. the intermediate result.


The full implementation:

import tensorflow as tf
import input_data
import numpy as np
import matplotlib.pyplot as plt
print ("Packages imported")

mnist = input_data.read_data_sets("data/", one_hot=True)
trainimgs, trainlabels, testimgs, testlabels \
 = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels 
ntrain, ntest, dim, nclasses \
 = trainimgs.shape[0], testimgs.shape[0], trainimgs.shape[1], trainlabels.shape[1]
print ("MNIST loaded")


diminput  = 28
dimhidden = 128
dimoutput = nclasses
nsteps    = 28
weights = {
    'hidden': tf.Variable(tf.random_normal([diminput, dimhidden])),   # 28x128: maps each 1x28 input row to a 1x128 vector (_H in the figure)
    'out': tf.Variable(tf.random_normal([dimhidden, dimoutput]))      # 128x10: maps the last LSTM output (_LSTM_O) to class scores
}
biases = {
    'hidden': tf.Variable(tf.random_normal([dimhidden])),
    'out': tf.Variable(tf.random_normal([dimoutput]))
}


def _RNN(_X, _W, _b, _nsteps, _name):
    # 1. Permute [batchsize, nsteps, diminput] => [nsteps, batchsize, diminput]
    _X = tf.transpose(_X, [1, 0, 2])


    # 2. Reshape [nsteps, batchsize, diminput] => [nsteps*batchsize, diminput]
    _X = tf.reshape(_X, [-1, diminput])  


    # 3. Input layer => Hidden layer
    _H = tf.matmul(_X, _W['hidden']) + _b['hidden']


    # 4. Split _H into a list of _nsteps tensors (28 here), one per time step
    _Hsplit = tf.split(_H, _nsteps, 0)


    # 5. Get LSTM's final output (_LSTM_O) and state (_LSTM_S)
    #    Both _LSTM_O and _LSTM_S consist of 'batchsize' elements
    #    Only _LSTM_O will be used to predict the output. 
    with tf.variable_scope(_name):
        # Note: some versions of this example call scope.reuse_variables() here,
        # but that raises an error on the first run because the LSTM's variables
        # do not exist yet; reuse is only needed if _RNN is called more than once.
        lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(dimhidden, forget_bias=1.0)
        _LSTM_O, _LSTM_S = tf.nn.static_rnn(lstm_cell, _Hsplit, dtype=tf.float32)


    # 6. Output
    _O = tf.matmul(_LSTM_O[-1], _W['out']) + _b['out']    


    # Return everything in a dict so intermediate tensors can be inspected
    return {
        'X': _X, 'H': _H, 'Hsplit': _Hsplit,
        'LSTM_O': _LSTM_O, 'LSTM_S': _LSTM_S, 'O': _O 
    }
print ("Network ready")


learning_rate = 0.001
x      = tf.placeholder("float", [None, nsteps, diminput])
y      = tf.placeholder("float", [None, dimoutput])
myrnn  = _RNN(x, weights, biases, nsteps, 'basic')
pred   = myrnn['O']
cost   = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optm   = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # gradient descent optimizer
accr   = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred,1), tf.argmax(y,1)), tf.float32))
init   = tf.global_variables_initializer()
print ("Network Ready!")


training_epochs = 5
batch_size      = 16
display_step    = 1
sess = tf.Session()
sess.run(init)
print ("Start optimization")
for epoch in range(training_epochs):
    avg_cost = 0.
    #total_batch = int(mnist.train.num_examples/batch_size)
    total_batch = 100
    # Loop over all batches
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape((batch_size, nsteps, diminput))
        # Fit training using batch data
        feeds = {x: batch_xs, y: batch_ys}
        sess.run(optm, feed_dict=feeds)
        # Compute average loss
        avg_cost += sess.run(cost, feed_dict=feeds)/total_batch
    # Display logs per epoch step
    if epoch % display_step == 0: 
        print ("Epoch: %03d/%03d cost: %.9f" % (epoch, training_epochs, avg_cost))
        feeds = {x: batch_xs, y: batch_ys}
        train_acc = sess.run(accr, feed_dict=feeds)
        print (" Training accuracy: %.3f" % (train_acc))
        testimgs = testimgs.reshape((ntest, nsteps, diminput))
        feeds = {x: testimgs, y: testlabels}
        test_acc = sess.run(accr, feed_dict=feeds)
        print (" Test accuracy: %.3f" % (test_acc))
print ("Optimization Finished.")

