8. Multi-layer network with dropout to prevent overfitting

# Use dropout to prevent overfitting; keep_prob sets the fraction of neurons kept active
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
 
# Load the dataset
mnist=input_data.read_data_sets("MNIST_data", one_hot=True)
 
# Size of each batch
batch_size=100
# Number of batches per epoch
n_batch=mnist.train.num_examples // batch_size
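With the default validation split, `mnist.train.num_examples` is 55,000, so `batch_size=100` gives 550 batches per epoch; note that floor division would silently drop a partial final batch. A quick check:

```python
num_examples = 55000  # size of the MNIST training split with the default validation_size
batch_size = 100
n_batch = num_examples // batch_size  # floor division: any partial final batch is dropped
print(n_batch)  # 550
```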
 
# Define placeholders for the inputs, the labels, and the dropout keep probability
x=tf.placeholder(tf.float32,[None,784])
y=tf.placeholder(tf.float32,[None,10])
keep_prob=tf.placeholder(tf.float32)
 
# Build the network
W1=tf.Variable(tf.truncated_normal([784,2000],stddev=0.1))   # initialize W from a truncated normal distribution with stddev=0.1
b1=tf.Variable(tf.zeros([1,2000]))
L1=tf.nn.tanh(tf.matmul(x,W1)+b1)   # tanh activation
L1_drop=tf.nn.dropout(L1, keep_prob)  # dropout on L1; keep_prob sets the fraction of neurons kept active
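`tf.nn.dropout` implements "inverted" dropout: each activation is zeroed with probability 1 - keep_prob, and the survivors are scaled by 1/keep_prob so the expected activation is the same during training (keep_prob=0.5) and evaluation (keep_prob=1.0). A pure-Python sketch of the idea (the seeded RNG is just for reproducibility here):

```python
import random

def dropout(activations, keep_prob, seed=0):
    # Inverted dropout: drop each unit with probability 1 - keep_prob,
    # scale the kept units by 1/keep_prob to preserve the expected sum.
    rng = random.Random(seed)
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

acts = [0.5, -0.2, 0.8, 0.1, -0.6, 0.3]
print(dropout(acts, 0.5))  # roughly half the units zeroed, survivors doubled
print(dropout(acts, 1.0))  # keep_prob=1.0 disables dropout entirely
```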
 
W2=tf.Variable(tf.truncated_normal([2000,1000],stddev=0.1))
b2=tf.Variable(tf.zeros([1,1000]))
L2=tf.nn.tanh(tf.matmul(L1_drop, W2)+b2)
L2_drop=tf.nn.dropout(L2, keep_prob)
 
W3=tf.Variable(tf.truncated_normal([1000,10], stddev=0.1))
b3=tf.Variable(tf.zeros([1,10]))
logits=tf.matmul(L2_drop,W3)+b3
prediction=tf.nn.softmax(logits)
 
# Quadratic cost function
#loss=tf.reduce_mean(tf.square(y-prediction))
# softmax_cross_entropy_with_logits applies softmax internally, so pass the raw logits, not prediction
loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
# Train with gradient descent
train_step=tf.train.GradientDescentOptimizer(0.2).minimize(loss)
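Why the logits and not the softmaxed prediction? `tf.nn.softmax_cross_entropy_with_logits` applies softmax to its `logits` argument itself, so feeding it probabilities computes softmax twice, which flattens the distribution and inflates the loss. A pure-Python sketch of the difference:

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(one_hot, probs):
    # Cross-entropy against a one-hot label: -log probability of the true class
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs) if t > 0)

logits = [2.0, 1.0, 0.1]
label = [1, 0, 0]

probs = softmax(logits)
ce_correct = cross_entropy(label, probs)          # softmax applied once, to the logits
ce_double = cross_entropy(label, softmax(probs))  # softmax applied twice: flattened probabilities
print(ce_correct, ce_double)  # the double-softmax loss is larger, and its gradient signal weaker
```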
 
# Initialize the variables
init=tf.global_variables_initializer()
 
# Store the results in a list of booleans
correct_prediction=tf.equal(tf.argmax(y,1), tf.argmax(prediction,1)) # argmax returns the index of the largest value in a 1-D tensor
# Compute the accuracy
accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
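The accuracy graph compares the argmax of the one-hot labels with the argmax of the predictions, casts the boolean matches to floats, and averages them. The same computation in plain Python, on made-up toy data:

```python
def argmax(row):
    # Index of the largest value in a row (what tf.argmax(..., 1) does per sample)
    return max(range(len(row)), key=row.__getitem__)

labels = [[0, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]]      # one-hot labels
preds  = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1],
          [0.2, 0.5, 0.3], [0.1, 0.8, 0.1]]                # softmax outputs

correct = [argmax(y) == argmax(p) for y, p in zip(labels, preds)]
accuracy = sum(correct) / len(correct)  # cast booleans to 0/1 and average
print(accuracy)  # 0.75: three of the four predictions match
```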
 
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(30):
        for batch in range(n_batch):
            batch_xs,batch_ys=mnist.train.next_batch(batch_size)
            sess.run(train_step,feed_dict={x:batch_xs, y:batch_ys,keep_prob:0.5})  # train with dropout: keep half of the neurons
 
        test_acc=sess.run(accuracy,feed_dict={x:mnist.test.images, y:mnist.test.labels,keep_prob:1.0})  # evaluate with dropout disabled
        train_acc=sess.run(accuracy,feed_dict={x:mnist.train.images, y:mnist.train.labels,keep_prob:1.0})
        print("Iter "+str(epoch)+", Testing Accuracy "+str(test_acc)+", Training Accuracy "+str(train_acc))
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Iter 0, Testing Accuracy 0.9064, Training Accuracy 0.9002182
Iter 1, Testing Accuracy 0.9188, Training Accuracy 0.9116727
Iter 2, Testing Accuracy 0.9259, Training Accuracy 0.92301816
Iter 3, Testing Accuracy 0.9295, Training Accuracy 0.9277091
Iter 4, Testing Accuracy 0.9346, Training Accuracy 0.9312364
Iter 5, Testing Accuracy 0.9351, Training Accuracy 0.93307275
Iter 6, Testing Accuracy 0.94, Training Accuracy 0.9374909
Iter 7, Testing Accuracy 0.9395, Training Accuracy 0.9377818
Iter 8, Testing Accuracy 0.9401, Training Accuracy 0.9410545
Iter 9, Testing Accuracy 0.9425, Training Accuracy 0.94283634
Iter 10, Testing Accuracy 0.9439, Training Accuracy 0.94476366
Iter 11, Testing Accuracy 0.9444, Training Accuracy 0.94512725
Iter 12, Testing Accuracy 0.9464, Training Accuracy 0.9472182
Iter 13, Testing Accuracy 0.9467, Training Accuracy 0.9476909
Iter 14, Testing Accuracy 0.9493, Training Accuracy 0.9499818
Iter 15, Testing Accuracy 0.9504, Training Accuracy 0.95107275
Iter 16, Testing Accuracy 0.9508, Training Accuracy 0.95272726
Iter 17, Testing Accuracy 0.9504, Training Accuracy 0.9532727
Iter 18, Testing Accuracy 0.9499, Training Accuracy 0.95463634
Iter 19, Testing Accuracy 0.9514, Training Accuracy 0.95527273
Iter 20, Testing Accuracy 0.9527, Training Accuracy 0.9560364
Iter 21, Testing Accuracy 0.9534, Training Accuracy 0.9568545
Iter 22, Testing Accuracy 0.9539, Training Accuracy 0.95745456
Iter 23, Testing Accuracy 0.9536, Training Accuracy 0.9584
Iter 24, Testing Accuracy 0.9563, Training Accuracy 0.9593091
Iter 25, Testing Accuracy 0.9556, Training Accuracy 0.9599091
Iter 26, Testing Accuracy 0.9554, Training Accuracy 0.9600545
Iter 27, Testing Accuracy 0.956, Training Accuracy 0.9602
Iter 28, Testing Accuracy 0.9578, Training Accuracy 0.9616
Iter 29, Testing Accuracy 0.9573, Training Accuracy 0.9619455