TensorFlow, as one of today's mainstream deep learning frameworks, is well worth learning: compared with other frameworks it offers greater flexibility, and it also has far more learning resources available, which makes it a natural first choice for deep learning development. That said, the best tool is the one that fits you, much as you only know whether a shoe fits after wearing it. Below is a "hello world" program for MNIST that classifies the digits with Softmax Regression, illustrating the basic TensorFlow workflow:
(1) Define the model formula, i.e. the network's forward pass
(2) Define the loss, choose an optimizer, and specify how the optimizer minimizes the loss
(3) Train on the data iteratively
(4) Evaluate accuracy on the test or validation set
# -*- coding: utf-8 -*-
"""
Created on Sat Jun 9 12:20:04 2018
@author: kuang yong jian
"""
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data/',one_hot = True)
#define input placeholders
x = tf.placeholder(tf.float32,[None,784])
y_ = tf.placeholder(tf.float32,[None,10])
#define model parameters
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
#forward pass: softmax regression
predict = tf.nn.softmax(tf.matmul(x,W) + b)
#define the cross-entropy loss
loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(predict),reduction_indices = [1]))
#define the optimization algorithm
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
#initialize all global variables -- do not forget this step
init = tf.global_variables_initializer()
#create a session
sess = tf.Session()
sess.run(init)
#training loop
for i in range(1000):
    batch_x,batch_y = mnist.train.next_batch(100)
    train_step.run(session = sess,feed_dict = {x:batch_x,y_:batch_y})
    #equivalently: sess.run(train_step,feed_dict = {x:batch_x,y_:batch_y})
#evaluate model performance on the test set
correct_predict = tf.equal(tf.argmax(predict,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_predict,tf.float32))
print(accuracy.eval(session = sess,feed_dict = {x:mnist.test.images,y_:mnist.test.labels}))
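For intuition about what the loss line above computes, the cross-entropy of softmax regression — the batch mean of -Σ y·log(softmax(Wx + b)) — can be checked with plain NumPy. This is a hypothetical standalone sketch, separate from the TensorFlow graph; the function names and the toy labels/logits are made up for illustration:

```python
import numpy as np

def softmax(z):
    # subtract the per-row max before exponentiating, for numerical stability
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_pred):
    # batch mean of -sum(y * log(p)) over each row
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=1)))

# two one-hot labels over 3 classes, and arbitrary logits
y = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
logits = np.array([[2.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
p = softmax(logits)            # each row sums to 1
loss = cross_entropy(y, p)
print(round(loss, 4))          # -> 0.7531
```

Note that TensorFlow 1.x also ships a fused, numerically safer version of this computation, `tf.nn.softmax_cross_entropy_with_logits`, which takes the raw logits directly instead of log(softmax(...)).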
If anything here is inaccurate, corrections are welcome. Thanks!