A First Look at TensorFlow

TensorFlow is a library developed by the Google team for building neural networks. Working with TensorFlow consists of two main phases: constructing the computation graph and executing it. In the construction phase you lay out the framework of the whole network, mainly by using elements such as Variable and placeholder to represent the model's parameters and inputs, and by defining the loss function and the optimizer. In the execution phase, TensorFlow automatically computes gradients and updates the model parameters. The code below briefly records how to build a simple model in TensorFlow for the classic MNIST recognition task.
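To make the build/run split concrete, here is a minimal sketch (independent of the MNIST example below, using the TensorFlow 1.x API) that fits a single weight by gradient descent: placeholder feeds the data, Variable holds the trainable parameter, and nothing is actually computed until Session.run is called.

import tensorflow as tf

# --- Build the graph ---
x = tf.placeholder(tf.float32, shape=[None])      # input data
y = tf.placeholder(tf.float32, shape=[None])      # target values
w = tf.Variable(0.0)                              # trainable parameter
y_hat = w * x                                     # model output
loss = tf.reduce_mean(tf.square(y_hat - y))       # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# --- Execute the graph ---
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op, feed_dict={x: [1., 2., 3.], y: [2., 4., 6.]})
    print(sess.run(w))   # should approach 2.0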

import tensorflow as tf
import numpy as np
import tensorflow.examples.tutorials.mnist.input_data as input_data
from time import time
import matplotlib.pyplot as plt
# Plot images together with their labels and predictions
def plot_img_labels_prediction(images, labels, prediction, idx, num=10):
    fig=plt.gcf()
    fig.set_size_inches(12,14)
    if num>25: num = 25
    for i in range(0, num):
        ax = plt.subplot(5, 5, 1+i)
        ax.imshow(np.reshape(images[idx], (28,28)), cmap='binary')
        title = 'label=' + str(np.argmax(labels[idx]))
        if (len(prediction)>0):
            title += ',prediction=' + str(prediction[idx])
        ax.set_title(title, fontsize=10)
        ax.set_xticks([])
        ax.set_yticks([])
        idx += 1
    plt.show()
# Build a simple fully connected layer
def layer(inputs, input_dim, output_dim, activation=None):
    W = tf.Variable(tf.random_normal([input_dim, output_dim]), name='W')
    b = tf.Variable(tf.random_normal([1, output_dim]), name='b')
    out = tf.matmul(inputs, W) + b
    if activation is None:
        return out
    else:
        return activation(out)
        
if __name__ == '__main__':
    startTime = time()
    mnist = input_data.read_data_sets('/DataSet/', one_hot=True)
    # Define model inputs, parameters, and outputs
    x = tf.placeholder(tf.float32, shape=[None, 784])
    h1 = layer(inputs=x, input_dim=784, output_dim=1000, activation=tf.nn.relu)
    h2 = layer(inputs=h1, input_dim=1000, output_dim=1000, activation=tf.nn.relu)
    y_predict = layer(inputs=h2, input_dim=1000, output_dim=10, activation=None)
    y_label = tf.placeholder(tf.float32, shape=[None, 10])
    # Define the loss function
    loss_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_predict,
                                                                           labels=y_label))
    # Define the optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss_function)
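    # Accuracy: fraction of samples whose predicted class matches the true label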
    correct_case = tf.equal(tf.argmax(y_label, 1), tf.argmax(y_predict, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_case, tf.float32))
    # Run the graph in a session
    with tf.Session() as sess:
        # Initialize all variables
        init = tf.global_variables_initializer()
        sess.run(init)
        trainEpoches = 15
        batchSize = 100
        batchs = int(mnist.train.num_examples/batchSize)
        # Mini-batch training: 15 epochs in total, 100 samples per batch, int(mnist.train.num_examples/batchSize) batches per epoch
        for epoch in range(trainEpoches):
            for _ in range(batchs):
                x_batch, y_batch = mnist.train.next_batch(batchSize)
                sess.run(optimizer, feed_dict={x:x_batch, y_label:y_batch})
            # Evaluate loss and accuracy on the validation set
            loss, acc = sess.run([loss_function, accuracy], feed_dict={x:mnist.validation.images,
                                                                       y_label:mnist.validation.labels})
            print("Train Epoch:", '%02d' % (epoch + 1), "Loss=", "{:.9f}".format(loss), "Accuracy=", acc)
        # Compute predictions on the test set and report its accuracy
        predictions = sess.run(tf.argmax(y_predict, 1), feed_dict={x:mnist.test.images})
        print('Accuracy:', sess.run(accuracy, feed_dict={x:mnist.test.images,
                                                         y_label:mnist.test.labels}))
    print('Take time:', time()-startTime)
    plot_img_labels_prediction(mnist.test.images, mnist.test.labels, predictions, 0)    

The output is as follows (run with TensorFlow-gpu, which is noticeably faster than CPU mode):
Train Epoch: 01 Loss= 127.624496460 Accuracy= 0.9182
Train Epoch: 02 Loss= 89.686813354 Accuracy= 0.9344
Train Epoch: 03 Loss= 72.944526672 Accuracy= 0.9464
Train Epoch: 04 Loss= 61.340919495 Accuracy= 0.95
Train Epoch: 05 Loss= 54.733924866 Accuracy= 0.9554
Train Epoch: 06 Loss= 63.757812500 Accuracy= 0.9532
Train Epoch: 07 Loss= 50.438999176 Accuracy= 0.9586
Train Epoch: 08 Loss= 50.709644318 Accuracy= 0.9604
Train Epoch: 09 Loss= 48.553268433 Accuracy= 0.9618
Train Epoch: 10 Loss= 52.624656677 Accuracy= 0.962
Train Epoch: 11 Loss= 47.286739349 Accuracy= 0.9646
Train Epoch: 12 Loss= 41.962261200 Accuracy= 0.9652
Train Epoch: 13 Loss= 49.289993286 Accuracy= 0.9656
Train Epoch: 14 Loss= 53.298263550 Accuracy= 0.9668
Train Epoch: 15 Loss= 47.958324432 Accuracy= 0.9668
Accuracy: 0.9657
Take time: 229.40656542778015
(Figure: comparison of predicted and true labels on test images)
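The timings above come from a TensorFlow-gpu install. As a quick sanity check that TensorFlow actually sees the GPU, the following sketch (TensorFlow 1.x API; device names will vary by machine) lists the available devices:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())                          # True if a usable CUDA device is found
print([d.name for d in device_lib.list_local_devices()])   # e.g. ['/device:CPU:0', '/device:GPU:0']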
