MNIST data processing
TensorFlow wraps the MNIST dataset and provides a training set and a test set. To verify how well a model trains, part of the training data is usually split off as validation data. The class TensorFlow provides downloads and processes the MNIST data automatically, parsing it from the raw archives into the format used when training and testing neural networks.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data/",one_hot=True)#one_hot:文本特征提取
print("training data size:",mnist.train.num_examples)
print("validating data size:", mnist.validation.num_examples)
print("testing data size:",mnist.test.num_examples)
print("example training data:",mnist.train.images[0])
print("example training data label:",mnist.train.labels[0])
input_data.read_data_sets automatically splits the MNIST data into three sets: train, validation, and test. Each processed image is a one-dimensional array of length 784, whose elements correspond one-to-one to the pixels of the image's pixel matrix (28*28=784). Pixel values lie in [0,1] and encode how dark the pixel is: 0 for the white background and 1 for black foreground strokes. To make stochastic gradient descent convenient, input_data.read_data_sets also provides the mnist.train.next_batch function, which reads a small portion of all the training data to use as one training batch:
batch_size = 100
xs, ys = mnist.train.next_batch(batch_size)
print("x shape:",xs.shape)#x shape: (100, 784)
print("y shape:",ys.shape)#y shape: (100, 10)
Saving the MNIST dataset as images
from tensorflow.examples.tutorials.mnist import input_data
# os contains functions for files and paths; os.path.exists() below checks whether a given path exists.
import os
# scipy.misc contains image I/O helpers; toimage below converts an array into a PIL image, which can then be saved.
import scipy.misc
# Read the MNIST dataset
mnist = input_data.read_data_sets("/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data",one_hot=True)
save_dir = "/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data/raw/"
# Folder where the images will be saved
if os.path.exists(save_dir) is False:
    os.makedirs(save_dir)
for i in range(20):
    image_array = mnist.train.images[i,:]
    image_array = image_array.reshape(28,28)
    # Path + file name
    filename = save_dir+"train_pic_%d.jpg" % i
    scipy.misc.toimage(image_array,cmin=0.0,cmax=1.0).save(filename)
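Note that scipy.misc.toimage was deprecated and has been removed from recent SciPy releases. If it is unavailable, a minimal equivalent sketch using Pillow directly (assuming numpy and Pillow are installed):
import numpy as np
from PIL import Image
# Scale the [0,1] floats to 0-255 grayscale bytes, mirroring toimage(cmin=0.0, cmax=1.0)
Image.fromarray((image_array * 255).astype(np.uint8)).save(filename)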
One-hot representation of image labels:
The raw image labels are the digits 0-9; after processing, each training label is a 10-dimensional vector, the one-hot representation of its class.
One-hot representation: one-of-N encoding. N classes are represented by an N-dimensional vector, each class occupying its own position; at any time exactly one position is 1 and all the others are 0.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data",one_hot=True)#one_hot:独热表示
print(mnist.train.labels[0,:])#[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
print(mnist.train.labels[3,:])#[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
That is, image 0 corresponds to the digit 7, and image 3 corresponds to the digit 6.
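A one-hot label can be converted back to a digit with argmax, as the classifier code below also does; a quick check (assuming numpy is imported):
import numpy as np
# argmax returns the index of the single 1 in the one-hot vector, i.e. the digit itself
print(np.argmax(mnist.train.labels[0,:]))  # 7
print(np.argmax(mnist.train.labels[3,:]))  # 6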
Softmax regression
Softmax regression is a linear multi-class classification model. It derives directly from the logistic regression model; the difference is that logistic regression is a two-class model, while the Softmax model classifies among multiple classes.
Let x be the feature vector of a single sample, and let W, b be the parameters of the Softmax model. In the MNIST dataset, x represents an input image and is a 784-dimensional vector, W is a matrix of shape (784, 10), and b is a 10-dimensional vector, where 10 is the number of classes.
The logit of each class is logit = W^T x + b. A logit can be viewed as a score for each class; the logits are then converted into per-class probabilities (which sum to 1):
y=Softmax(logit)
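As an illustration, a minimal NumPy sketch of the Softmax function (a hypothetical standalone helper, not part of the TensorFlow code below):
import numpy as np

def softmax(logit):
    # Subtracting the max is a standard numerical-stability trick; it does not change the result
    e = np.exp(logit - np.max(logit))
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # a probability vector summing to 1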
Implementing MNIST classification with Softmax
import tensorflow as tf
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data",one_hot=True)
x = tf.placeholder(tf.float32,[None,784])
w = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
y_ = tf.placeholder(tf.float32,[None,10])
y = tf.nn.softmax(tf.matmul(x,w)+b)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
cross_entropy = -tf.reduce_mean(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step,feed_dict={x:batch_xs,y_:batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
print(sess.run(accuracy,feed_dict={x:mnist.test.images,y_:mnist.test.labels}))
The same Softmax classifier once more, this time with every step explained in comments:
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
mnist = input_data.read_data_sets("/Users/apple/Downloads/11人脸识别/projects/face/practice/MINIST/data",one_hot=True)
'''placeholder
x is the placeholder storing the training image data. Its shape is [None,784]: None means this dimension can be of any size, i.e. any number of training images can be fed to the placeholder, each image represented by a 784-dimensional vector.
'''
x = tf.placeholder(tf.float32,shape=[None,784])
'''Model parameters are stored in variables; here w is initialized to a 784x10 all-zero matrix and b to a 10-dimensional zero vector'''
w = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
'''
x: [N,784]
w: [784,10]
b: [10,]
y = softmax(xw+b): [N,10]; each row of y is a 10-dimensional vector giving the model's predicted probability of that sample belonging to each class
'''
y = tf.nn.softmax(tf.matmul(x,w)+b)
'''Placeholder storing the actual labels of the training images'''
y_ = tf.placeholder(tf.float32,[None,10])
#Define the cross-entropy loss
cross_entropy = -tf.reduce_mean(y_*tf.log(y))
#Optimize the loss with gradient descent
#0.01: learning rate
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
#Create a session and initialize variables
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
#Run 30000 steps of gradient descent
for i in range(30000):
    '''
    Pull 1000 training samples from the train set: batch_xs is the image data with shape [1000,784], and batch_ys the actual labels with shape [1000,10].
    Instead of training on all the data at once, 1000 samples are drawn per step, for 30000 steps in total.
    Placeholder values are not saved, so different values can be fed to a placeholder on every step.
    '''
    batch_xs, batch_ys = mnist.train.next_batch(1000)
    sess.run(train_step,feed_dict={x:batch_xs,y_:batch_ys})  # feed values to the placeholders
#After gradient descent, check the model's training result
'''
tf.argmax: index of the maximum value; converts the one-hot labels and the model's output into digit labels
tf.equal: checks whether the predicted label equals the actual label
'''
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
'''
tf.cast: type conversion; casts correct_prediction to float32, turning True into 1 and False into 0
tf.reduce_mean: mean along a dimension; here it averages the 0/1 correctness values
'''
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
'''
Measure the accuracy on the full test set: roughly 0.893; the value improves with more training
'''
print(sess.run(accuracy,feed_dict={x:mnist.test.images,y_:mnist.test.labels}))
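A note on the loss: -tf.reduce_mean(y_*tf.log(y)) averages over the 10 label positions as well as over the batch, so its value is 10 times smaller than the textbook cross-entropy; this only rescales the effective learning rate. A sketch of the conventional per-example form (equivalent up to that constant factor):
cross_entropy = -tf.reduce_mean(tf.reduce_sum(y_ * tf.log(y), axis=1))  # sum over classes first, then average over the batch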
Two-layer convolutional network classification
Building a convolutional neural network can raise the recognition accuracy on MNIST handwritten characters.
import tensorflow as tf
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
from tensorflow.examples.tutorials.mnist import input_data
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
if __name__ == '__main__':
    # Read the data
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    # x is the placeholder for the training images, y_ for the training image labels
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])
    # Reshape each image from a 784-dimensional vector back into a 28x28 matrix
    x_image = tf.reshape(x, [-1, 28, 28, 1])
    # First convolutional layer
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)
    # Second convolutional layer
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    h_pool2 = max_pool_2x2(h_conv2)
    # Fully connected layer; the output is a 1024-dimensional vector
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
    # Use Dropout; keep_prob is a placeholder, 0.5 during training and 1 at test time
    keep_prob = tf.placeholder(tf.float32)
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
    # Map the 1024-dimensional vector to 10 dimensions, one per class
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
    # Instead of applying Softmax and then computing cross-entropy, compute both in one step with tf.nn.softmax_cross_entropy_with_logits
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    # Define train_step as before
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
    # Define the test accuracy
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # Create a Session and initialize variables
    sess = tf.InteractiveSession()
    sess.run(tf.global_variables_initializer())
    # Train for 20000 steps
    for i in range(20000):
        batch = mnist.train.next_batch(50)
        # Report the accuracy on the current training batch every 100 steps
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={
                x: batch[0], y_: batch[1], keep_prob: 1.0})
            print("step %d, training accuracy %g" % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    # After training, report the accuracy on the test set
    print("test accuracy %g" % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Saving as a pb model
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data",one_hot=True)
batch_size = 64
n_batch = mnist.train.num_examples // batch_size
x = tf.placeholder(tf.float32,[None,784], name='x-input')
y = tf.placeholder(tf.float32,[None,10], name='y-input')
W = tf.Variable(tf.truncated_normal([784,10],stddev=0.1))
b = tf.Variable(tf.zeros([10])+0.1)
logits = tf.matmul(x,W)+b
prediction = tf.nn.softmax(logits, name='output')
# tf.losses.softmax_cross_entropy expects raw logits, not softmax output
loss = tf.losses.softmax_cross_entropy(y,logits)
train_step = tf.train.AdamOptimizer(0.001).minimize(loss, name='train')
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32), name='accuracy')
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(11):
        for batch in range(n_batch):
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs,y:batch_ys})
        acc = sess.run(accuracy,feed_dict={x:mnist.test.images,y:mnist.test.labels})
        print("Iter " + str(epoch) + ",Testing Accuracy " + str(acc))
    # Freeze the variables into constants, keeping the graph up to the named output nodes
    output_graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_node_names=['output','accuracy'])
    with tf.gfile.FastGFile('pb_models/mnist.pb',mode='wb') as f:
        f.write(output_graph_def.SerializeToString())
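To verify the exported file, a minimal loading sketch (a fresh program is assumed; the tensor names 'x-input:0' and 'output:0' follow from the name= arguments above when the graph is imported with an empty name scope):
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data",one_hot=True)
with tf.gfile.FastGFile('pb_models/mnist.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')  # import the frozen graph into the default graph
with tf.Session() as sess:
    x_in = sess.graph.get_tensor_by_name('x-input:0')
    output = sess.graph.get_tensor_by_name('output:0')
    # Run inference on the first test image; probs has shape (1, 10)
    probs = sess.run(output, feed_dict={x_in: mnist.test.images[:1]})
    print(probs.argmax())  # predicted digit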