[Deep Learning] TensorFlow Study Notes -- MNIST

References:

TensorFlow Chinese community:
http://www.tensorfly.cn/tfdoc/tutorials/mnist_beginners.html

MNIST

MNIST is an entry-level computer-vision dataset of handwritten digit images: 60,000 images are used for training and 10,000 for testing.
Each MNIST data point consists of a 28*28 handwritten digit image and the label for that image.
Flattening each 28*28 image gives a vector of length 784, so mnist.train.images is a [60000, 784] tensor,
and mnist.train.labels is a [60000, 10] matrix of one-hot labels.
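
A minimal sketch of loading the dataset and checking these shapes (assuming the tutorial's input_data.py helper is on the path and the data is stored under "MNIST_data/"; note that the helper typically reserves 5,000 of the 60,000 training images for mnist.validation, so mnist.train may report 55,000 rows):

# -*- coding:utf-8 -*-
# Hedged sketch: inspect the MNIST tensors described above.
import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

print(mnist.train.images.shape)   # e.g. (55000, 784): flattened 28*28 images
print(mnist.train.labels.shape)   # e.g. (55000, 10): one-hot labels
print(mnist.test.images.shape)    # (10000, 784)
print(mnist.test.labels.shape)    # (10000, 10)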

Building a simple softmax regression model for prediction

Code

# -*- coding:utf-8 -*-
import tensorflow as tf 
import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder("float",[None,784]) #x is a placeholder; None means the first dimension can be of any length
W = tf.Variable(tf.zeros([784,10])) #W and b can start from arbitrary values because they will be learned
b = tf.Variable(tf.zeros([10]))
#W has shape [784,10] because we multiply the 784-dimensional image vector by it to get a 10-dimensional vector of evidence, one entry per digit class. b has shape [10], so it can be added directly to the output.

y = tf.nn.softmax(tf.matmul(x,W)+b) #predicted distribution
y_ = tf.placeholder("float",[None,10]) #ground-truth labels
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
'''
First, tf.log computes the logarithm of each element of y.
Next, each element of y_ is multiplied by the corresponding element of tf.log(y).
Finally, tf.reduce_sum sums all the elements of the tensor.
(Note that this cross-entropy is not for a single prediction/label pair; it is the sum of the cross-entropies over all 100 images in the batch.)
'''
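# Hedged alternative (not in the original code): in later TF 1.x versions a more
# numerically stable formulation computes the loss directly from the logits, e.g.
#   logits = tf.matmul(x, W) + b
#   cross_entropy = tf.reduce_sum(
#       tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
# which avoids NaN from tf.log(0) when a predicted probability underflows to zero.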
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
#Ask TensorFlow to minimize the cross-entropy using the gradient descent algorithm with a learning rate of 0.01

init = tf.initialize_all_variables() #deprecated in later TF versions; tf.global_variables_initializer() is the replacement
sess = tf.Session()
sess.run(init)

#Train the model: run the training loop 1000 times
for i in range(1000):
    #In each step of the loop, randomly grab a batch of 100 training examples,
    #   then feed those data points in place of the placeholders and run train_step
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step,feed_dict={x:batch_xs,y_:batch_ys})


#Evaluate the model
'''
First, find the labels that were predicted correctly.
tf.argmax is a very useful function: it returns the index of the largest value of a tensor along a given axis.
Since the label vectors consist of 0s and 1s, the index of the single 1 is the class label.
For example, tf.argmax(y,1) is the label the model predicts for each input x,
while tf.argmax(y_,1) is the true label; tf.equal checks whether the prediction matches the true label (matching index positions mean a match).
'''
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
#This returns a list of booleans. To get the fraction of correct predictions, cast the booleans to floats and take the mean.
#For example, [True, False, True, True] becomes [1,0,1,1], whose mean is 0.75.

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Test accuracy: 0.9133

Building a multi-layer convolutional network for prediction

Implementation code

# -*- coding:utf-8 -*-
import tensorflow as tf 
import input_data


mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()

x = tf.placeholder("float",[None,784]) #x is a placeholder; None means the first dimension can be of any length

y_ = tf.placeholder("float",[None,10]) #ground-truth labels

#Helper function that creates weight variables
def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

#Helper function that creates bias variables
def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

#Convolution uses a stride of 1 and SAME (zero) padding, so the output has the same spatial size as the input
def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

#Pooling uses 2x2 max pooling
def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')


#Layer 1: convolution + max pooling
#The convolution's weight tensor has shape [5, 5, 1, 32]: the first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
#To use this layer, reshape x into a 4d tensor whose 2nd and 3rd dimensions correspond to the image height and width, and whose last dimension is the number of color channels (1 here because the images are grayscale; it would be 3 for RGB images)

#Convolve x_image with the weight tensor, add the bias, apply the ReLU activation, and finally max-pool
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

#Layer 2
#To build a deeper network, we stack several layers of this type. In the second layer, each 5x5 patch yields 64 features
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
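
# Note (added for clarity): with SAME padding each convolution keeps the spatial
# size, and each 2x2 max pooling halves it, so the feature maps shrink
# 28x28 -> 14x14 after the first pooling and 14x14 -> 7x7 after the second.
# This is where the 7*7*64 input size of the fully connected layer below comes from.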

#Densely connected (fully connected) layer
#The image size has been reduced to 7x7. We add a fully connected layer with 1024 neurons to process the entire image: reshape the pooling layer's output tensor into a batch of vectors, multiply by the weight matrix, add the bias, and apply ReLU.
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

'''
To reduce overfitting, apply dropout before the output layer.
We use a placeholder for the probability that a neuron's output is kept during dropout,
so dropout can be enabled during training and disabled during testing.
TensorFlow's tf.nn.dropout op not only masks neuron outputs but also automatically rescales the surviving outputs,
so no extra scaling is needed when using dropout.
'''
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
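# Note (added for clarity): tf.nn.dropout keeps each activation with probability
# keep_prob and scales the kept activations by 1/keep_prob ("inverted dropout"),
# so the expected magnitude of the layer's output stays the same between training
# (keep_prob=0.5) and evaluation (keep_prob=1.0).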

#Output layer: finally, add a softmax layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

#Use the more sophisticated ADAM optimizer instead of plain gradient descent, add the extra keep_prob parameter to feed_dict to control the dropout rate, and log the training accuracy every 100 iterations
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x:batch[0], y_: batch[1], keep_prob: 1.0})
    print "step %d, training accuracy %g"%(i, train_accuracy)
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print "test accuracy %g"%accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
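
Evaluating the CNN on all 10,000 test images in a single feed_dict can run out of memory on smaller GPUs. A hedged sketch of evaluating in chunks instead (the chunk size of 1000 and the helper name evaluate_in_batches are illustrative, not part of the original code):

# Hedged sketch: compute test accuracy in chunks to limit memory use.
# Reuses the graph, InteractiveSession, and `accuracy` tensor defined above.
def evaluate_in_batches(batch_size=1000):
  total = mnist.test.images.shape[0]
  correct = 0.0
  for start in range(0, total, batch_size):
    end = min(start + batch_size, total)
    acc = accuracy.eval(feed_dict={
        x: mnist.test.images[start:end],
        y_: mnist.test.labels[start:end],
        keep_prob: 1.0})
    correct += acc * (end - start)
  return correct / total

print("test accuracy %g" % evaluate_in_batches())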

Test output:

step 0, training accuracy 0.06
step 100, training accuracy 0.78
step 200, training accuracy 0.88
step 300, training accuracy 0.9
step 400, training accuracy 0.96
step 500, training accuracy 0.92
step 600, training accuracy 1
step 700, training accuracy 0.96
step 800, training accuracy 0.9
step 900, training accuracy 1
step 1000, training accuracy 0.96
step 1100, training accuracy 0.96
step 1200, training accuracy 0.96
step 1300, training accuracy 0.98
step 1400, training accuracy 0.94
step 1500, training accuracy 0.98
step 1600, training accuracy 0.98
step 1700, training accuracy 1
step 1800, training accuracy 0.98
step 1900, training accuracy 0.96
step 2000, training accuracy 0.96
step 2100, training accuracy 1
step 2200, training accuracy 0.94
step 2300, training accuracy 1
step 2400, training accuracy 1
step 2500, training accuracy 0.96
step 2600, training accuracy 1
step 2700, training accuracy 0.96
step 2800, training accuracy 0.98
step 2900, training accuracy 0.98
step 3000, training accuracy 1
step 3100, training accuracy 0.98
step 3200, training accuracy 0.98
step 3300, training accuracy 0.98
step 3400, training accuracy 0.98
step 3500, training accuracy 1
step 3600, training accuracy 0.98
step 3700, training accuracy 0.96
step 3800, training accuracy 0.98
step 3900, training accuracy 0.98
step 4000, training accuracy 1
step 4100, training accuracy 1
step 4200, training accuracy 1
step 4300, training accuracy 0.98
step 4400, training accuracy 0.98
step 4500, training accuracy 1
step 4600, training accuracy 0.98
step 4700, training accuracy 1
step 4800, training accuracy 0.98
step 4900, training accuracy 1
step 5000, training accuracy 1
step 5100, training accuracy 0.98
step 5200, training accuracy 1
step 5300, training accuracy 1
step 5400, training accuracy 0.98
step 5500, training accuracy 1
step 5600, training accuracy 1
step 5700, training accuracy 1
step 5800, training accuracy 0.98
step 5900, training accuracy 1
step 6000, training accuracy 1
step 6100, training accuracy 1
step 6200, training accuracy 1
step 6300, training accuracy 0.98
step 6400, training accuracy 1
step 6500, training accuracy 1
step 6600, training accuracy 1
step 6700, training accuracy 1
step 6800, training accuracy 1
step 6900, training accuracy 1
step 7000, training accuracy 1
step 7100, training accuracy 1
step 7200, training accuracy 1
step 7300, training accuracy 1
step 7400, training accuracy 0.98
step 7500, training accuracy 1
step 7600, training accuracy 0.98
step 7700, training accuracy 1
step 7800, training accuracy 0.98
step 7900, training accuracy 1
step 8000, training accuracy 1
step 8100, training accuracy 1
step 8200, training accuracy 1
step 8300, training accuracy 1
step 8400, training accuracy 0.98
step 8500, training accuracy 1
step 8600, training accuracy 0.98
step 8700, training accuracy 1
step 8800, training accuracy 1
step 8900, training accuracy 1
step 9000, training accuracy 1
step 9100, training accuracy 1
step 9200, training accuracy 0.98
step 9300, training accuracy 1
step 9400, training accuracy 1
step 9500, training accuracy 1
step 9600, training accuracy 1
step 9700, training accuracy 1
step 9800, training accuracy 1
step 9900, training accuracy 1
step 10000, training accuracy 1
step 10100, training accuracy 1
step 10200, training accuracy 1
step 10300, training accuracy 1
step 10400, training accuracy 1
step 10500, training accuracy 1
step 10600, training accuracy 1
step 10700, training accuracy 1
step 10800, training accuracy 1
step 10900, training accuracy 1
step 11000, training accuracy 1
step 11100, training accuracy 0.98
step 11200, training accuracy 1
step 11300, training accuracy 1
step 11400, training accuracy 1
step 11500, training accuracy 1
step 11600, training accuracy 1
step 11700, training accuracy 0.98
step 11800, training accuracy 1
step 11900, training accuracy 1
step 12000, training accuracy 1
step 12100, training accuracy 1
step 12200, training accuracy 1
step 12300, training accuracy 1
step 12400, training accuracy 1
step 12500, training accuracy 1
step 12600, training accuracy 1
step 12700, training accuracy 0.98
step 12800, training accuracy 1
step 12900, training accuracy 0.98
step 13000, training accuracy 1
step 13100, training accuracy 1
step 13200, training accuracy 1
step 13300, training accuracy 1
step 13400, training accuracy 1
step 13500, training accuracy 1
step 13600, training accuracy 1
step 13700, training accuracy 1
step 13800, training accuracy 1
step 13900, training accuracy 1
step 14000, training accuracy 1
step 14100, training accuracy 1
step 14200, training accuracy 1
step 14300, training accuracy 1
step 14400, training accuracy 1
step 14500, training accuracy 1
step 14600, training accuracy 1
step 14700, training accuracy 1
step 14800, training accuracy 1
step 14900, training accuracy 1
step 15000, training accuracy 1
step 15100, training accuracy 1
step 15200, training accuracy 0.98
step 15300, training accuracy 1
step 15400, training accuracy 1
step 15500, training accuracy 1
step 15600, training accuracy 1
step 15700, training accuracy 0.98
step 15800, training accuracy 1
step 15900, training accuracy 1
step 16000, training accuracy 1
step 16100, training accuracy 1
step 16200, training accuracy 1
step 16300, training accuracy 1
step 16400, training accuracy 0.98
step 16500, training accuracy 1
step 16600, training accuracy 1
step 16700, training accuracy 1
step 16800, training accuracy 1
step 16900, training accuracy 1
step 17000, training accuracy 1
step 17100, training accuracy 1
step 17200, training accuracy 1
step 17300, training accuracy 1
step 17400, training accuracy 1
step 17500, training accuracy 1
step 17600, training accuracy 1
step 17700, training accuracy 1
step 17800, training accuracy 1
step 17900, training accuracy 1
step 18000, training accuracy 1
step 18100, training accuracy 1
step 18200, training accuracy 1
step 18300, training accuracy 1
step 18400, training accuracy 1
step 18500, training accuracy 1
step 18600, training accuracy 1
step 18700, training accuracy 1
step 18800, training accuracy 1
step 18900, training accuracy 1
step 19000, training accuracy 1
step 19100, training accuracy 1
step 19200, training accuracy 1
step 19300, training accuracy 1
step 19400, training accuracy 1
step 19500, training accuracy 1
step 19600, training accuracy 1
step 19700, training accuracy 1
step 19800, training accuracy 1
step 19900, training accuracy 1
test accuracy 0.9924
