5.1 Raising MNIST classifier accuracy above 98%

How can the accuracy of an MNIST classifier be pushed above 98%?

Techniques:
  • the number of network layers and the number of neurons in each layer
  • choice of optimizer: Adam, SGD, Adagrad, RMSprop, Adadelta
  • learning-rate schedule: decay exponentially as training progresses
  • the number of training epochs
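As a sketch of what the Adam optimizer (used below) does under the hood, here is the standard Adam update rule in pure NumPy; beta1/beta2/eps are the usual Adam defaults, and the quadratic objective is just a toy example:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its
    square, bias-corrected, then a step scaled by the adaptive denominator."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 starting from x = 5; the gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
print(round(x, 3))
```

Because the step size is normalized by the running gradient magnitude, Adam usually needs far less learning-rate tuning than plain SGD, which is one reason it is chosen below.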
Program:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("C:/Users/WangT/Desktop/MNIST_data", one_hot=True)  # load the data

batch_size = 100  # number of examples processed per batch
n_batch = mnist.train.num_examples // batch_size
# number of batches per epoch; // is floor division, which always yields an integer, unlike /
# mnist.train.num_examples is the size of the training set; mnist.validation.num_examples and mnist.test.num_examples are analogous.
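A quick illustration of the difference between // and / (55000 is the usual size of the MNIST training split):

```python
num_examples = 55000   # size of the MNIST training set
batch_size = 100
n_batch = num_examples // batch_size        # floor division: integer result
print(n_batch, num_examples / batch_size)   # 550 vs 550.0 — / always returns a float
print(7 // 2, 7 / 2)                        # 3 vs 3.5 — // discards the remainder
```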

x = tf.placeholder(tf.float32,[None,784])
y = tf.placeholder(tf.float32,[None,10])
keep_prob = tf.placeholder(tf.float32)
# Placeholders accept any number of MNIST images at feed time. Each image is flattened into a
# 784-dimensional vector, so a batch is a 2-D float tensor of shape [None, 784], where None
# means the first dimension (the batch size) can be of any length.
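Flattening is what produces those 784-dimensional vectors; a quick NumPy illustration (the random batch stands in for real image data):

```python
import numpy as np

# Each MNIST image is 28x28 pixels; the placeholder expects it flattened to 784 floats.
batch = np.random.rand(100, 28, 28).astype(np.float32)  # a fake batch of 100 images
flat = batch.reshape(-1, 784)  # shape (100, 784), matching the [None, 784] placeholder
print(flat.shape)
```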

lr = tf.Variable(0.001, dtype=tf.float32)  # learning rate as a Variable so it can be updated during training

W1 = tf.Variable(tf.truncated_normal([784,500],stddev=0.1))
b1 = tf.Variable(tf.zeros([500])+0.1)
L1 = tf.nn.tanh(tf.matmul(x,W1)+b1)
#L1_drop = tf.nn.dropout(L1,keep_prob)
# Model parameters are Variables: they feed into the computation and can be modified by it.
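tf.truncated_normal redraws any sample that falls more than two standard deviations from the mean, which keeps the initial weights small and bounded; a plain-NumPy approximation of that behavior:

```python
import numpy as np

def truncated_normal(shape, stddev=0.1):
    """Resample values farther than 2 stddev from the mean, mimicking tf.truncated_normal."""
    vals = np.random.normal(0.0, stddev, size=shape)
    mask = np.abs(vals) > 2 * stddev
    while mask.any():
        vals[mask] = np.random.normal(0.0, stddev, size=int(mask.sum()))
        mask = np.abs(vals) > 2 * stddev
    return vals

W1 = truncated_normal((784, 500))
print(np.abs(W1).max() <= 0.2)  # True: every weight lies within 2 stddev of 0
```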

W2 = tf.Variable(tf.truncated_normal([500,300],stddev=0.1))
b2 = tf.Variable(tf.zeros([300])+0.1)
L2 = tf.nn.tanh(tf.matmul(L1,W2)+b2)

W3 = tf.Variable(tf.truncated_normal([300,10],stddev=0.1))
b3 = tf.Variable(tf.zeros([10])+0.1)
logits = tf.matmul(L2,W3)+b3
prediction = tf.nn.softmax(logits)
# the predicted class probabilities
# loss = tf.reduce_mean(tf.square(y - prediction))
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=logits))
# Cross-entropy loss to evaluate the model. Note that softmax_cross_entropy_with_logits expects
# the raw (pre-softmax) logits, not the softmax output, so we pass `logits` here; feeding it
# `prediction` would apply softmax twice. (The commented-out alternative is mean squared error:
# tf.square squares element-wise, tf.reduce_mean takes the mean.)
#train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
train_step = tf.train.AdamOptimizer(lr).minimize(loss)
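softmax_cross_entropy_with_logits folds the softmax into the loss for numerical stability, which is why it wants raw logits. A plain-NumPy sketch of what it computes, assuming one-hot labels:

```python
import numpy as np

def softmax_cross_entropy_with_logits(labels, logits):
    """Numerically stable cross-entropy computed from raw (pre-softmax) logits."""
    z = logits - logits.max(axis=1, keepdims=True)                # shift for stability
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(labels * log_softmax).sum(axis=1)                    # one loss per example

labels = np.array([[0., 0., 1.]])       # one-hot: true class is index 2
logits = np.array([[1.0, 2.0, 3.0]])
loss_val = softmax_cross_entropy_with_logits(labels, logits)
print(loss_val)
```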

# TensorFlow runs the Adam optimizer at learning rate lr, repeatedly adjusting the model parameters to minimize loss.

init = tf.global_variables_initializer()
# an op that initializes all Variables
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(prediction,1))
# tf.equal() compares two values, returning True if they match and False otherwise;
# tf.argmax(?,1) returns the index of the largest entry along axis 1, i.e. the predicted class.
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
# tf.cast() converts the booleans to floats; their mean is the accuracy.
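The accuracy computation above can be sketched in plain NumPy; the toy labels and predictions below are made up for illustration:

```python
import numpy as np

# Accuracy = mean of (predicted class == true class), which is exactly what
# tf.equal + tf.argmax + tf.cast + tf.reduce_mean compute together.
labels = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]])          # one-hot
preds  = np.array([[.1, .8, .1], [.7, .2, .1], [.3, .3, .4], [.5, .4, .1]])
correct = np.argmax(labels, axis=1) == np.argmax(preds, axis=1)          # boolean per example
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 3 of 4 correct -> 0.75
```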

with tf.Session() as sess:  # create a session
    sess.run(init)
    for epoch in range(51):  # train for 51 epochs
        sess.run(tf.assign(lr, 0.001 * (0.95 ** epoch)))  # decay the learning rate once per epoch (calling this inside the batch loop would add a new assign op to the graph on every call)
        for batch in range(n_batch):  # loop over n_batch batches per epoch
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # fetch the next training batch
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})  # run one training step

        acc1 = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})  # test-set accuracy after each epoch
        acc2 = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels})  # training-set accuracy
        print("Iter" + str(epoch) + ",Testing Accuracy" + str(acc1) + ",Training accuracy" + str(acc2))


Extracting C:/Users/WangT/Desktop/MNIST_data\train-images-idx3-ubyte.gz
Extracting C:/Users/WangT/Desktop/MNIST_data\train-labels-idx1-ubyte.gz
Extracting C:/Users/WangT/Desktop/MNIST_data\t10k-images-idx3-ubyte.gz
Extracting C:/Users/WangT/Desktop/MNIST_data\t10k-labels-idx1-ubyte.gz
Iter0,Testing Accuracy0.9499,Training accuracy0.955691
Iter1,Testing Accuracy0.9627,Training accuracy0.970036
Iter2,Testing Accuracy0.9661,Training accuracy0.977127
Iter3,Testing Accuracy0.9705,Training accuracy0.981327
Iter4,Testing Accuracy0.9759,Training accuracy0.986345
Iter5,Testing Accuracy0.9755,Training accuracy0.988509
Iter6,Testing Accuracy0.9749,Training accuracy0.989545
Iter7,Testing Accuracy0.9783,Training accuracy0.991491
Iter8,Testing Accuracy0.9771,Training accuracy0.992618
Iter9,Testing Accuracy0.9793,Training accuracy0.992564
Iter10,Testing Accuracy0.9789,Training accuracy0.993691
Iter11,Testing Accuracy0.9796,Training accuracy0.994327
Iter12,Testing Accuracy0.9779,Training accuracy0.994382
Iter13,Testing Accuracy0.9788,Training accuracy0.995036
Iter14,Testing Accuracy0.9795,Training accuracy0.995073
Iter15,Testing Accuracy0.9801,Training accuracy0.995491
Iter16,Testing Accuracy0.9806,Training accuracy0.995727
What's new here:
1. lr = tf.Variable(0.001, dtype=tf.float32)  # define the learning rate as a Variable, with initial value 0.001 and type float32
2. sess.run(tf.assign(lr, 0.001*(0.95**epoch)))
   At each epoch, the learning rate decays exponentially from its initial value of 0.001; tf.assign() writes the new value into the Variable.
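The resulting schedule, computed in plain Python:

```python
# Exponential learning-rate decay used above: lr = 0.001 * 0.95 ** epoch.
base_lr, decay = 0.001, 0.95
schedule = {epoch: base_lr * decay ** epoch for epoch in (0, 1, 10, 50)}
for epoch, lr in schedule.items():
    print(epoch, lr)
```

By epoch 50 the learning rate has shrunk to under a tenth of its starting value, letting the optimizer take large steps early and settle gently near the end of training.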