A simple MNIST classifier

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
# Size of each batch
batch_size = 100
# Compute the total number of batches
n_batch = mnist.train.num_examples // batch_size

# Define two placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Create a simple neural network
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([1, 10]))
prediction = tf.nn.softmax(tf.matmul(x, W) + b)

# Define the quadratic cost function
loss = tf.reduce_mean(tf.square(y - prediction))
# Cross-entropy alternative (note: softmax_cross_entropy_with_logits expects raw
# logits, not softmax output, so the logits argument should be tf.matmul(x, W) + b):
# loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=tf.matmul(x, W) + b))
# Use gradient descent to minimize the loss
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()

# The comparison results are stored in a boolean tensor
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Compute the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})

        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter" + str(epoch) + ", Testing Accuracy" + str(acc))
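The difference between the quadratic cost used above and the commented-out cross-entropy alternative can be illustrated in plain Python, without TensorFlow. This is a toy sketch with made-up logits for a single 3-class example, not the actual MNIST values:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def quadratic_loss(y, p):
    # Mean squared error over the classes, matching tf.reduce_mean(tf.square(y - prediction))
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def cross_entropy_loss(y, p):
    # -sum_i y_i * log(p_i); for a one-hot label this is -log of the true class probability
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p) if yi > 0)

logits = [2.0, 1.0, 0.1]   # hypothetical raw scores (what the cross-entropy op expects)
y = [1.0, 0.0, 0.0]        # one-hot label
p = softmax(logits)        # what tf.nn.softmax produces

print(quadratic_loss(y, p), cross_entropy_loss(y, p))
```

This also shows why the commented-out line needs raw logits rather than `prediction`: `softmax_cross_entropy_with_logits` applies the softmax internally, so feeding it an already-softmaxed tensor softmaxes twice.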

Run output:

/Users/changshuang/anaconda3/envs/tensorflow/bin/python /Users/changshuang/PycharmProjects/untitled/main.py
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
2018-02-27 15:10:32.978414: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn’t compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-27 15:10:32.978430: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn’t compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-27 15:10:32.978435: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn’t compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2018-02-27 15:10:32.978441: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn’t compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2018-02-27 15:10:32.978445: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn’t compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
Iter0, Testing Accuracy0.8312
Iter1, Testing Accuracy0.8709
Iter2, Testing Accuracy0.8816
Iter3, Testing Accuracy0.8888
Iter4, Testing Accuracy0.8938
Iter5, Testing Accuracy0.8973
Iter6, Testing Accuracy0.8994
Iter7, Testing Accuracy0.901
Iter8, Testing Accuracy0.9043
Iter9, Testing Accuracy0.9049
Iter10, Testing Accuracy0.9054
Iter11, Testing Accuracy0.9075
Iter12, Testing Accuracy0.9085
Iter13, Testing Accuracy0.9097
Iter14, Testing Accuracy0.9097
Iter15, Testing Accuracy0.9108
Iter16, Testing Accuracy0.9118
Iter17, Testing Accuracy0.9127
Iter18, Testing Accuracy0.9126
Iter19, Testing Accuracy0.9133
Iter20, Testing Accuracy0.9137
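The accuracy values above come from comparing the argmax of each one-hot label against the argmax of the softmax output, then averaging the matches. A minimal plain-Python sketch of that computation, using made-up 3-class predictions rather than real MNIST outputs:

```python
def argmax(xs):
    # Index of the largest element, like tf.argmax along one row
    best = 0
    for i, v in enumerate(xs):
        if v > xs[best]:
            best = i
    return best

labels = [[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]]   # one-hot labels
preds = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1],
         [0.3, 0.4, 0.3], [0.2, 0.5, 0.3]]              # softmax outputs

# tf.equal + tf.cast + tf.reduce_mean, written out by hand
correct = [argmax(y) == argmax(p) for y, p in zip(labels, preds)]
accuracy = sum(correct) / len(correct)
print(accuracy)  # 3 of 4 predictions match -> 0.75
```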
