Logistic Regression for Handwritten Digit Recognition
This method and program target TensorFlow 1.x.
Differences between mnist.npz and mnist.gz
mnist.gz comes from TensorFlow
and contains four files:
train-images-idx3-ubyte.gz
train-labels-idx1-ubyte.gz
t10k-images-idx3-ubyte.gz
t10k-labels-idx1-ubyte.gz
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data/', one_hot=True)
train_images = mnist.train.images
train_labels = mnist.train.labels
test_images = mnist.test.images
test_labels = mnist.test.labels
Array shapes:
train_images: (55000, 784)  # 784 = 28*28; each image is flattened to one dimension
test_images: (10000, 784)
train_labels: (55000, 10)  # one-hot: for digit n, the n-th entry is 1 and the rest are 0
test_labels: (10000, 10)
mnist.npz comes from Keras
and contains a single file:
mnist.npz
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
Array shapes:
train_images:(60000,28,28)
test_images:(10000,28,28)
train_labels: (60000,)
test_labels: (10000,)
TensorFlow now ships Keras as a core API, and the old TensorFlow mnist.gz pipeline emits deprecation warnings when used. If you want to keep the TensorFlow-style workflow for handwritten digit recognition while using the mnist.npz dataset, you can use the code below.
The mnist.npz dataset cannot be downloaded without access to the external network; search for a mirror and download it manually. After downloading, to keep the program from trying to re-download it at run time, you can edit mnist.py (located under the Anaconda install directory at \Lib\site-packages\tensorflow_core\python\keras\datasets) to point at the local copy.
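Once downloaded, mnist.npz is an ordinary NumPy .npz archive with four arrays keyed `x_train`, `y_train`, `x_test`, and `y_test`, so it can also be read directly with `np.load`, bypassing Keras and the network entirely. A minimal sketch (the tiny synthetic archive below only stands in for the real download; point `path` at your actual copy and skip the `np.savez` step):

```python
import numpy as np

# Stand-in for the real mnist.npz: same four keys, but tiny arrays so the
# sketch is self-contained. With the real file, delete the np.savez call.
path = 'mnist.npz'  # replace with the path of your downloaded copy
np.savez(path,
         x_train=np.zeros((6, 28, 28), dtype=np.uint8),
         y_train=np.zeros(6, dtype=np.uint8),
         x_test=np.zeros((2, 28, 28), dtype=np.uint8),
         y_test=np.zeros(2, dtype=np.uint8))

# Read the archive directly -- no Keras, no network access.
with np.load(path) as data:
    train_images, train_labels = data['x_train'], data['y_train']
    test_images, test_labels = data['x_test'], data['y_test']

print(train_images.shape, test_labels.shape)  # (6, 28, 28) (2,)
```

With the real file, `train_images` comes back as (60000, 28, 28), matching the Keras `load_data()` output described above.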
Code
Convert the mnist.npz arrays to the mnist.gz shapes (and normalize)
import tensorflow.compat.v1 as tf
from tensorflow.keras.datasets import mnist
from tensorflow import keras
import Method
tf.disable_v2_behavior()  # needed for placeholders/sessions under TF 2.x; harmless on 1.15
# Load the dataset and normalize pixel values to [0, 1]
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape(60000,784)/255
train_labels = keras.utils.to_categorical(train_labels,10)
test_images = test_images.reshape(10000,784)/255
test_labels = keras.utils.to_categorical(test_labels,10)
print("MNIST dataset loaded")
print(train_images.shape, test_images.shape)  # out: (60000, 784) (10000, 784)
print(train_labels.shape, test_labels.shape)  # out: (60000, 10) (10000, 10)
The Method module, which holds the next_batch function, is covered in the next_batch implementation section below.
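`keras.utils.to_categorical` above turns the integer labels into one-hot rows, which is what produces the (N, 10) label shape. Its effect can be reproduced in two lines of plain NumPy (a small illustration, not part of the original program):

```python
import numpy as np

# One-hot encode integer labels by indexing an identity matrix:
# row n of eye(10) is exactly the one-hot vector for digit n.
labels = np.array([3, 0, 7])
one_hot = np.eye(10)[labels]
print(one_hot.shape)  # (3, 10)
print(one_hot[0])     # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```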
Variable definition and initialization
# Parameter initialization
x = tf.placeholder("float",[None,784])
y = tf.placeholder("float",[None,10])
W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))
# Logistic regression model: softmax over linear scores
actv = tf.nn.softmax(tf.matmul(x, W) + b)
# Cross-entropy loss
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(actv), axis=1))
# Optimizer
learning_rate = 0.01
optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Prediction: does the most probable class match the label?
pred = tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1))
# Accuracy
accr = tf.reduce_mean(tf.cast(pred, "float"))
# Variable initializer
init = tf.global_variables_initializer()
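To make the graph above concrete, here is a NumPy sketch of the same forward pass and cost on one hypothetical batch. Because W and b start at zero, the softmax output is uniform (0.1 per class), so the cost before any training is exactly ln 10 ≈ 2.303 regardless of the input:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x = np.random.rand(5, 784)        # a hypothetical batch of 5 flattened images
y = np.eye(10)[[3, 1, 4, 1, 5]]   # their one-hot labels
W = np.zeros((784, 10))
b = np.zeros(10)

actv = softmax(x @ W + b)                          # same as tf.nn.softmax(...)
cost = np.mean(-np.sum(y * np.log(actv), axis=1))  # same cross-entropy
print(round(cost, 3))  # 2.303
```

One caveat about the graph's loss: `tf.log(actv)` yields NaN if a softmax output underflows to 0; TF 1.x also provides `tf.nn.softmax_cross_entropy_with_logits_v2`, which fuses softmax and cross-entropy in a numerically stable way.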
Training loop
training_epochs = 50
batch_size = 100
display_step = 5
# Session
sess = tf.Session()
sess.run(init)
# Mini-batch learning
for epoch in range(training_epochs):
    avg_cost = 0
    num_batch = int(train_images.shape[0] / batch_size)
    for i in range(num_batch):
        batch_xs, batch_ys = Method.next_batch(train_images, train_labels, batch_size)
        feeds = {x: batch_xs, y: batch_ys}
        sess.run(optm, feed_dict=feeds)
        avg_cost += sess.run(cost, feed_dict=feeds) / num_batch
    # Display progress
    if epoch % display_step == 0:
        feeds_train = {x: batch_xs, y: batch_ys}
        feeds_test = {x: test_images, y: test_labels}
        train_acc = sess.run(accr, feed_dict=feeds_train)
        test_acc = sess.run(accr, feed_dict=feeds_test)
        print("Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f"
              % (epoch, training_epochs, avg_cost, train_acc, test_acc))
print("DONE")
next_batch implementation
The old TensorFlow mnist.gz pipeline emits deprecation warnings, but Keras offers no equivalent of next_batch. You can therefore create a Method module containing the following next_batch function (adapted from another author's code).
# Randomly sample batch_size training examples
import numpy as np

def next_batch(train_data, train_target, batch_size):
    index = [i for i in range(0, len(train_target))]
    np.random.shuffle(index)
    batch_data = []
    batch_target = []
    for i in range(0, batch_size):
        batch_data.append(train_data[index[i]])
        batch_target.append(train_target[index[i]])
    return batch_data, batch_target
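Since the data here are NumPy arrays after the reshape step, the same sampling can be done without the Python append loop. A vectorized alternative with the same interface (shown on toy stand-in arrays):

```python
import numpy as np

def next_batch(train_data, train_target, batch_size):
    # Sample batch_size distinct random indices in one call,
    # then use fancy indexing instead of an append loop.
    idx = np.random.choice(len(train_target), batch_size, replace=False)
    return train_data[idx], train_target[idx]

xs = np.arange(20).reshape(10, 2)  # toy stand-in for train_images
ys = np.arange(10)                 # toy stand-in for train_labels
bx, by = next_batch(xs, ys, 4)
print(bx.shape, by.shape)  # (4, 2) (4,)
```

This returns arrays rather than lists; `sess.run(..., feed_dict=...)` accepts either.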
With the above in place, TensorFlow can run the logistic regression algorithm on the mnist.npz dataset.
——冬至