Learning TensorFlow: building and testing a softmax regression model on the MNIST dataset
1. Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.initialize_all_variables())

y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
2. Output
3. Code walkthrough
import tensorflow as tf
sess = tf.InteractiveSession()
InteractiveSession() lets you build the computation graph and execute parts of it interleaved, whereas Session() requires the whole graph to be built before anything can run.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
Create the input node for the images and the node for the target (one-hot) outputs.
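The shape [None, 10] for y_ reflects the `one_hot=True` flag passed to read_data_sets: each label is a length-10 vector with a 1 in the position of the true digit. A minimal NumPy sketch of that encoding (the function name here is illustrative, not part of the TF API):

```python
import numpy as np

def to_one_hot(labels, num_classes=10):
    # convert integer class labels to one-hot rows,
    # matching the [None, 10] shape of the y_ placeholder
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out
```

For example, `to_one_hot([3])` yields a single row whose entries are all 0 except a 1 at index 3.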
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
Create the parameters of the softmax regression model: the weights and the bias, both initialized to zero.
sess.run(tf.initialize_all_variables())
Initialize all the variables.
y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
Compute the predicted output and use cross-entropy as the loss function.
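The two lines above compute softmax(xW + b) and the loss -Σ y_·log(y), summed over the whole batch. A NumPy sketch of the same two computations (illustrative helper names, not TF API):

```python
import numpy as np

def softmax(logits):
    # subtract the row max first for numerical stability;
    # this does not change the result of the softmax
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_pred):
    # -sum(y_ * log(y)), summed over every example in the batch,
    # mirroring tf.reduce_sum in the code above
    return -np.sum(y_true * np.log(y_pred))
```

Each row of the softmax output sums to 1 and can be read as a probability distribution over the ten digit classes.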
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
Train the model with gradient descent, using a learning rate of 0.01.
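TensorFlow derives the gradients automatically, but for softmax regression with a cross-entropy loss the update has a simple closed form: the gradient of the loss with respect to the logits is (y - y_). A hand-rolled NumPy sketch of one such step (assumed helper names, not the author's code):

```python
import numpy as np

def forward(W, b, x):
    # softmax(x @ W + b), the same prediction the TF graph computes
    z = x @ W + b
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(W, b, x, y_true, lr=0.01):
    # one gradient-descent step on the summed cross-entropy loss;
    # for softmax + cross-entropy, d(loss)/d(logits) = y - y_
    y = forward(W, b, x)
    grad_z = y - y_true
    return W - lr * x.T @ grad_z, b - lr * grad_z.sum(axis=0)
```

Repeating this step over mini-batches is what `train_step.run(...)` does internally for this model.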
for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})
Run 1000 training iterations; each iteration trains on a batch of 50 images.
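`mnist.train.next_batch(50)` hands back 50 (image, label) pairs at a time. A rough NumPy sketch of that kind of mini-batching, similar in spirit to next_batch (the generator name and shuffle-once-per-pass behavior are assumptions, not the exact dataset internals):

```python
import numpy as np

def batches(images, labels, batch_size=50):
    # shuffle the indices, then yield consecutive slices of the data;
    # the last batch may be smaller if the sizes do not divide evenly
    idx = np.random.permutation(len(images))
    for start in range(0, len(images), batch_size):
        sel = idx[start:start + batch_size]
        yield images[sel], labels[sel]
```

Feeding the optimizer small shuffled batches rather than the full 55,000-image training set makes each step cheap while still approximating the full gradient.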
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
Compare each prediction against the true label, producing a vector of booleans.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Cast the boolean vector to floats and take the mean to obtain the accuracy.
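The two lines above chain argmax, equality, cast, and mean. The same computation in plain NumPy, as a single illustrative helper:

```python
import numpy as np

def accuracy(y_pred, y_true):
    # argmax over the class axis picks the predicted / true digit,
    # the comparison yields booleans, and casting to float32 then
    # averaging gives the fraction of correct predictions --
    # the NumPy analogue of the three TF ops above
    correct = np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1)
    return correct.astype(np.float32).mean()
```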
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
Print the trained model's accuracy on the test set; feed_dict supplies the input images and the target outputs.
Reference: https://www.tensorflow.org/versions/r0.7/tutorials/mnist/pros/index.html