Analyzing the Tensor Flow Graph of an LSTM Predicting MNIST Handwritten Digits

The LSTM code feels heavily encapsulated and is hard to follow just by reading it, so I drew a tensor flow diagram for the MNIST example to make the code easier to analyze.

 

The original code is as follows:

# View more python learning tutorial on my Youtube and Youku channel!!!

# Youtube video tutorial: https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg
# Youku video tutorial: http://i.youku.com/pythontutorial

"""
This code is a modified version of the code from this link:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
His code is a very good one for RNN beginners. Feel free to check it out.
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
tf.reset_default_graph() 
# set random seed for comparing the two result calculations
tf.set_random_seed(1)

# this is data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# hyperparameters
lr = 0.001
training_iters = 100000
batch_size = 128

n_inputs = 28   # MNIST data input (img shape: 28*28)
n_steps = 28    # time steps
n_hidden_units = 128   # neurons in hidden layer
n_classes = 10      # MNIST classes (0-9 digits)

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])

# Define weights
weights = {
    # (28, 128)
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    # (128, 10)
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
}
biases = {
    # (128, )
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
    # (10, )
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ]))
}


def RNN(X, weights, biases):
    # hidden layer for input to cell
    ########################################
    # X comes in with shape (128 batch, 28 steps, 28 inputs)
    # reshape the inputs to
    # X ==> (128 batch * 28 steps, 28 inputs)
    X = tf.reshape(X, [-1, n_inputs])

    # into hidden
    # X_in = (128 batch * 28 steps, 128 hidden)
    #weights_in = [28,128]
    X_in = tf.matmul(X, weights['in']) + biases['in']
    # X_in ==> (128 batch, 28 steps, 128 hidden)
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])

    # cell
    ##########################################

    # basic LSTM Cell.
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    else:
        cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
    print("cell=",cell)
    # lstm cell is divided into two parts (c_state, h_state)
    # c_state = [128,128]   h_state = [128,128]
    init_state = cell.zero_state(batch_size, dtype=tf.float32)
    print("init_state.shape=",init_state)
    # You have 2 options for following step.
    # 1: tf.nn.rnn(cell, inputs);
    # 2: tf.nn.dynamic_rnn(cell, inputs).
    # If you use option 1, you have to modify the shape of X_in; go and check out this:
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
    # In here, we go for option 2.
    # dynamic_rnn receives a Tensor of shape (batch, steps, inputs) or (steps, batch, inputs) as X_in.
    # Make sure time_major is set accordingly.
    #outputs.shape= [128, 28, 128] 
    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state, time_major=False)
    # outputs.shape = [128, 28, 128]; final_state is a tuple (c, h), where c = [128, 128] and h = [128, 128]
    print("outputs.shape=",outputs.shape.as_list(),"final_state.shape=",len(final_state))
    print("final_state.shape=",final_state)
    # hidden layer for output as the final results
    #############################################
    # results = tf.matmul(final_state[1], weights['out']) + biases['out']

    # # or
    # unpack to a list [(batch, outputs)] * steps, here 28 tensors of shape [128, 128]
    # tf.unstack(value, num=None, axis=0, name='unstack') splits a rank-R tensor along the given
    # axis into a list of rank R-1 tensors
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))    # the last element is the output at the final time step
    else:
        outputs = tf.unstack(tf.transpose(outputs, [1,0,2]))
    print("unstack 后的outputs",len(outputs))
    print("now outputs",outputs)
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']    # shape = (128, 10)

    return results


pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(lr).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    # tf.initialize_all_variables() is no longer valid
    # as of 2017-03-02 if using tensorflow >= 0.12
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        init = tf.initialize_all_variables()
    else:
        init = tf.global_variables_initializer()
    sess.run(init)
    step = 0
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        print("batch_xs.shape=",batch_xs.shape,"batch_ys.shape=",batch_ys,"len(batch_ys)=",len(batch_ys))
        batch_xs = batch_xs.reshape([batch_size, n_steps, n_inputs])
        print("after reshaping batch_xs.shape=",batch_xs.shape)
        sess.run([train_op], feed_dict={
            x: batch_xs,
            y: batch_ys,
        })
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={
            x: batch_xs,
            y: batch_ys,
            }))
        step += 1

 

#########################################################################

The LSTM takes the output at the last time step for its prediction, because only after the final piece of input has been fed in can the model make full use of the information in the whole time series. This is like interpreting speech: only when the speaker has said the last word of the sentence can the translation begin. Likewise, when an image is scanned row by row from top to bottom, only after the last row of pixels has been read can the model confidently predict which digit the image shows. That is why the output at the final time step, once the whole image has been fed in, is used as the final result:

results = tf.matmul(outputs[-1], weights['out']) + biases['out']    # shape = (128, 10)
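
As a side note (not part of the original post), for this single uni-directional LSTM the output at the last time step is the same tensor as the final hidden state h, which is why the commented-out alternative in the code above, results = tf.matmul(final_state[1], weights['out']) + biases['out'], gives the same result. A minimal sketch checking this under TensorFlow 1.x, with an assumed small batch size of 4:

``` python
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
batch_size, n_steps, n_inputs, n_hidden = 4, 28, 28, 128

x = tf.placeholder(tf.float32, [batch_size, n_steps, n_inputs])
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)
init_state = cell.zero_state(batch_size, dtype=tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, x, initial_state=init_state,
                                         time_major=False)
# Output at the last time step, shape (batch, hidden).
last_output = tf.unstack(tf.transpose(outputs, [1, 0, 2]))[-1]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.rand(batch_size, n_steps, n_inputs).astype(np.float32)}
    out_last, h_final = sess.run([last_output, final_state.h], feed_dict=feed)
    print(np.allclose(out_last, h_final))  # expected: True
```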

The same classification task can also be implemented with Keras. MNIST is a commonly used handwritten digit dataset containing 60,000 training samples and 10,000 test samples; each sample is a 28x28 grayscale image of a digit from 0 to 9. An LSTM (Long Short-Term Memory network) is a recurrent neural network that is particularly well suited to sequence data such as text, speech, and video. For MNIST classification, the rows of each image can be treated as a time series and fed to an LSTM.

The steps are as follows.

1. Prepare the dataset

First, load the MNIST dataset from Keras:

``` python
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```

The pixel values in the dataset range from 0 to 255 and need to be normalized to the range 0 to 1:

``` python
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
```

The labels also need to be converted to one-hot encoding:

``` python
from keras.utils import to_categorical
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
```

2. Build the LSTM model

Next, build an LSTM model:

``` python
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(128, input_shape=(28, 28)))
model.add(Dense(10, activation='softmax'))
model.summary()
```

This model has one LSTM layer and one fully connected layer. The LSTM layer's input shape is (28, 28) because each image is 28x28 pixels; the dense layer has 10 outputs because there are 10 digit classes.

3. Compile and train the model

``` python
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=10, validation_data=(x_test, y_test))
```

The model uses categorical cross-entropy as the loss function, Adam as the optimizer, and accuracy as the metric. The training data is split into batches of 128 and the model is trained for 10 epochs.

4. Evaluate the model

After training, evaluate the model's performance:

``` python
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```

This prints the loss and accuracy on the test set.

The complete code:

``` python
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import LSTM, Dense

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)

model = Sequential()
model.add(LSTM(128, input_shape=(28, 28)))
model.add(Dense(10, activation='softmax'))
model.summary()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=128, epochs=10, validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
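
As a small follow-up (not in the original post), the trained Keras model above can be used to predict a single test image; this sketch assumes the model, x_test, and y_test variables from the previous block:

``` python
import numpy as np

# Take one test image: shape (1, 28, 28), i.e. 28 time steps of 28 pixels each.
sample = x_test[0:1]
# model.predict returns softmax probabilities of shape (1, 10).
probs = model.predict(sample)
print('Predicted digit:', np.argmax(probs, axis=1)[0])
print('True digit:', np.argmax(y_test[0]))
```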