A beginner's TensorFlow program (a neural network on MNIST)

Training a fully connected network on MNIST with TensorFlow

Getting started with TensorFlow: train a neural network on MNIST. My laptop has no GPU, so I installed the CPU build; it is good enough for this exercise.

Implementation:

# -*- coding: utf-8 -*-
"""
Created on Thu Jul  6 12:54:37 2017

@author: dilusense
"""

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
INPUT_NODE = 784    # number of input-layer nodes (28x28 pixels)
OUTPUT_NODE = 10    # number of output-layer nodes (10 digit classes)
LAYER1_NODE = 500   # number of hidden-layer nodes
BATCH_SIZE = 100    # number of examples per training batch
LEARNING_RATE_BASE = 0.8      # base learning rate
LEARNING_RATE_DECAY = 0.99    # learning-rate decay rate
REGULARIZATION_RATE = 0.0001  # coefficient of the regularization term
TRAINING_STEPS = 10000        # number of training iterations
MOVING_AVERAGE_DECAY = 0.99   # moving-average decay rate

# Forward pass: compute the network output from the input and all parameters.
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # When no moving-average class is provided, use the current parameter values.
    if avg_class is None:
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        return tf.matmul(layer1, weights2) + biases2
    else:
        # Use avg_class.average to fetch the moving average of each variable.
        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) +
            avg_class.average(biases1))
        return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# Training procedure
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    # Hidden-layer parameters
    weights1 = tf.Variable(tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    # Output-layer parameters
    weights2 = tf.Variable(tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass using the current parameter values
    y = inference(x, None, weights1, biases1, weights2, biases2)
    # trainable=False: the step counter itself must not be trained
    global_step = tf.Variable(0, trainable=False)
    # Initialize the moving-average class with the decay rate and the step counter
    variable_averages = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    # Apply the moving average to all trainable variables
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Forward pass using the moving-averaged parameter values
    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)
    # sparse_* expects class indices, so recover them from the one-hot labels with argmax
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    # Average cross-entropy over all examples in the current batch
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # L2 regularizer
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    # Regularization loss of the model (weights only, not biases)
    regularization = regularizer(weights1) + regularizer(weights2)
    # Total loss = cross-entropy loss + regularization loss
    loss = cross_entropy_mean + regularization

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step,
                                               mnist.train.num_examples / BATCH_SIZE,
                                               LEARNING_RATE_DECAY)
    # Optimizer; minimize increments global_step by 1 on every step
    train_step = tf.train.GradientDescentOptimizer(learning_rate)\
                         .minimize(loss, global_step=global_step)
    # Run the parameter update and the moving-average update together
    with tf.control_dependencies([train_step, variables_averages_op]):
        train_op = tf.no_op(name='train')
    
    # Check the forward-pass predictions against the labels
    correct_prediction = tf.equal(tf.argmax(average_y, 1), tf.argmax(y_, 1))
    # Accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    # Start a session and run the training loop
    with tf.Session() as sess:
        tf.global_variables_initializer().run()  # initialize all variables
        # Validation data
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # Test data
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Train the network iteratively
        for i in range(TRAINING_STEPS):
            # Every 1000 steps, report accuracy on the validation set
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy "
                      "using average model is %g" % (i, validate_acc))
            # Draw this step's training batch and run one training step
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        # After training, evaluate the final accuracy on the test set
        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training step(s), test accuracy using average "
              "model is %g" % (TRAINING_STEPS, test_acc))

# Program entry point
def main(argv=None):
    # The MNIST helper class downloads the dataset automatically on first use
    mnist = input_data.read_data_sets(r"F:\Tensor\MNIST_data", one_hot=True)
    train(mnist)

# tf.app.run is TensorFlow's program entry; it calls the main function defined above.
if __name__ == '__main__':
    tf.app.run()
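The moving-average mechanism in the listing can be checked by hand. Below is a minimal pure-Python sketch of the shadow-variable update that `tf.train.ExponentialMovingAverage` performs when a `num_updates` argument (here `global_step`) is supplied; the sequence of values fed to it is made up for illustration.

```python
MOVING_AVERAGE_DECAY = 0.99

def ema_update(shadow, value, step, decay=MOVING_AVERAGE_DECAY):
    # With num_updates supplied, TensorFlow uses the smaller of the configured
    # decay and (1 + step) / (10 + step), so early steps track the variable closely.
    d = min(decay, (1.0 + step) / (10.0 + step))
    return d * shadow + (1.0 - d) * value

shadow = 0.0
for step, value in enumerate([5.0, 5.0, 5.0]):
    shadow = ema_update(shadow, value, step)
print(shadow)  # converges toward 5.0 quickly because the effective decay is small early on
```

This is why the decay passed to the constructor (0.99) barely matters in the first few hundred steps: the `(1 + step) / (10 + step)` term dominates until `step` grows large.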

Running the program produces output similar to the following:

Extracting F:\Tensor\MNIST_data\train-images-idx3-ubyte.gz
Extracting F:\Tensor\MNIST_data\train-labels-idx1-ubyte.gz
Extracting F:\Tensor\MNIST_data\t10k-images-idx3-ubyte.gz
Extracting F:\Tensor\MNIST_data\t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.106 
After 1000 training step(s), validation accuracy using average model is 0.9774 
After 2000 training step(s), validation accuracy using average model is 0.9808 
After 3000 training step(s), validation accuracy using average model is 0.9824 
After 4000 training step(s), validation accuracy using average model is 0.9836 
After 5000 training step(s), validation accuracy using average model is 0.9838 
After 6000 training step(s), validation accuracy using average model is 0.9848 
After 7000 training step(s), validation accuracy using average model is 0.984 
After 8000 training step(s), validation accuracy using average model is 0.9836 
After 9000 training step(s), validation accuracy using average model is 0.9836 
After 10000 training step(s), test accuracy using average model is 0.9833
As the results show, the model's validation accuracy improves steadily in the early stages of training. From around step 4000 the validation accuracy starts to fluctuate, which suggests the model is close to a minimum, so stopping the iteration there would be reasonable.
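The learning-rate schedule used in training can also be reproduced by hand. A sketch of `tf.train.exponential_decay` in its default continuous form (`staircase=False`), assuming the standard 55000-example MNIST training split:

```python
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.99
DECAY_STEPS = 55000 / 100  # mnist.train.num_examples / BATCH_SIZE

def decayed_lr(global_step):
    # decayed rate = base * decay_rate ** (global_step / decay_steps)
    return LEARNING_RATE_BASE * LEARNING_RATE_DECAY ** (global_step / DECAY_STEPS)

for step in (0, 550, 5500, 10000):
    print(step, decayed_lr(step))
```

With these constants the rate shrinks by a factor of 0.99 once per epoch, so after 10000 steps it is still above 0.66; the decay is gentle by design.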

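For reference, what `tf.nn.sparse_softmax_cross_entropy_with_logits` computes per example can be written out in plain Python: a softmax over the logits, then the negative log-probability of the true class (the three-class logits below are made up for illustration).

```python
import math

def sparse_softmax_xent(logits, label_index):
    # Subtract the max logit for numerical stability, as the TF op does internally.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    # Cross-entropy = -log(softmax probability of the true class)
    return -math.log(exps[label_index] / total)

print(sparse_softmax_xent([2.0, 1.0, 0.1], 0))
```

Because the op takes integer class indices rather than one-hot vectors, the listing above recovers the index from the one-hot label with `tf.argmax(y_, 1)` before passing it in.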
