TensorFlow: Dropout

Code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data


# Load the MNIST dataset (downloaded into MNIST_data under the current path)
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

Output:

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz

Code:

# Size of each batch (each batch is fed in as one matrix)
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size
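As a quick sanity check on the batch arithmetic: this loader's MNIST training split has 55,000 examples (an assumption baked into the numbers below, not read from the code), so with a batch size of 100 each epoch runs 550 batches.

```python
# Hypothetical sanity check on the batch count; 55000 is the size of
# the MNIST training split returned by this loader.
num_examples = 55000
batch_size = 100
n_batch = num_examples // batch_size  # floor division, as in the code above
print(n_batch)  # 550
```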


# Define three placeholders (each image is 28 x 28 = 784 pixels)
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)


# Build the network: input layer of 784 units, three hidden layers of
# 1000 units each, output layer of 10 neurons
# Hidden layer 1
W1 = tf.Variable(tf.truncated_normal([784, 1000], stddev=0.1))
b1 = tf.Variable(tf.zeros([1000]) + 0.1)
L1 = tf.nn.tanh(tf.matmul(x, W1) + b1)
L1_drop = tf.nn.dropout(L1, keep_prob)
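`tf.nn.dropout` implements "inverted dropout": each unit is zeroed with probability `1 - keep_prob`, and the surviving units are scaled up by `1 / keep_prob`, so the expected activation is unchanged and no rescaling is needed at test time. A minimal NumPy sketch of that idea (the mask and scaling here are illustrative, not the exact TF kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
keep_prob = 0.7
activations = np.ones((4, 5))

# Bernoulli mask: each unit is kept with probability keep_prob
mask = rng.random(activations.shape) < keep_prob
# Inverted dropout: scale kept units by 1/keep_prob so the
# expected value of each activation is preserved
dropped = activations * mask / keep_prob
```

Because of the rescaling, every surviving activation becomes `1 / 0.7` and the rest are zero; averaged over many masks, the output matches the input in expectation.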


# Hidden layer 2
W2 = tf.Variable(tf.truncated_normal([1000, 1000], stddev=0.1))
b2 = tf.Variable(tf.zeros([1000]) + 0.1)
L2 = tf.nn.tanh(tf.matmul(L1_drop, W2) + b2)
L2_drop = tf.nn.dropout(L2, keep_prob)


# Hidden layer 3
W3 = tf.Variable(tf.truncated_normal([1000, 1000], stddev=0.1))
b3 = tf.Variable(tf.zeros([1000]) + 0.1)
L3 = tf.nn.tanh(tf.matmul(L2_drop, W3) + b3)
L3_drop = tf.nn.dropout(L3, keep_prob)


# Output layer
W4 = tf.Variable(tf.truncated_normal([1000, 10], stddev=0.1))
b4 = tf.Variable(tf.zeros([10]) + 0.1)
logits = tf.matmul(L3_drop, W4) + b4
prediction = tf.nn.softmax(logits)

# Cross-entropy loss. Note: softmax_cross_entropy_with_logits applies
# softmax internally, so it must be fed the raw logits, not the softmax
# output (feeding `prediction` here would apply softmax twice)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
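To see why the loss must take raw logits, here is a small NumPy sketch of the double-softmax mistake: feeding probabilities into a function that applies softmax itself squashes the scores and distorts the loss (the helper names below are illustrative, not TF APIs):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def xent_from_logits(logits, labels):
    # Cross-entropy that applies softmax internally, analogous to
    # tf.nn.softmax_cross_entropy_with_logits
    return -(labels * np.log(softmax(logits))).sum(axis=-1)

logits = np.array([[2.0, 0.0, -1.0]])
labels = np.array([[1.0, 0.0, 0.0]])

correct = xent_from_logits(logits, labels)           # softmax applied once
double = xent_from_logits(softmax(logits), labels)   # bug: softmax applied twice
```

The double-softmax version reports a much larger loss for the same confident, correct prediction, and its gradients are correspondingly distorted.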

# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()



# Store the results in a list of booleans.
# argmax returns the index of the largest value in a 1-D tensor;
# each element is True when tf.argmax(y, 1) equals tf.argmax(prediction, 1)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))

# Accuracy: tf.cast(correct_prediction, tf.float32) converts the booleans
# to floats, and the mean is the fraction of correct predictions
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
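The accuracy op is just "the fraction of rows where the argmax of the prediction matches the argmax of the one-hot label". The same computation in plain NumPy, on a made-up 4-example batch:

```python
import numpy as np

labels = np.array([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])
preds = np.array([[0.1, 0.8, 0.1],   # predicts class 1, label 1: correct
                  [0.2, 0.5, 0.3],   # predicts class 1, label 0: wrong
                  [0.1, 0.2, 0.7],   # predicts class 2, label 2: correct
                  [0.3, 0.4, 0.3]])  # predicts class 1, label 1: correct

# Equivalent of tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
correct = np.argmax(labels, axis=1) == np.argmax(preds, axis=1)
# Equivalent of tf.reduce_mean(tf.cast(..., tf.float32))
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 0.75
```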


with tf.Session() as sess:
    sess.run(init)
    # 10 training epochs
    for epoch in range(10):
        # n_batch batches per epoch
        for batch in range(n_batch):
            # Fetch one batch
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.7})
        
        # Accuracy on the test set after each epoch
        test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        # Accuracy on the training set after each epoch
        train_acc = sess.run(accuracy, feed_dict={x: mnist.train.images, y: mnist.train.labels, keep_prob: 1.0})
        print("Iter " + str(epoch) + ", Testing Accuracy " + str(test_acc) + ", Training Accuracy " + str(train_acc))

Output:

# Without Dropout (keep_prob = 1.0)
Iter 0, Testing Accuracy 0.9408, Training Accuracy 0.946473
Iter 1, Testing Accuracy 0.9566, Training Accuracy 0.968982
Iter 2, Testing Accuracy 0.963, Training Accuracy 0.976364
Iter 3, Testing Accuracy 0.9651, Training Accuracy 0.982218
Iter 4, Testing Accuracy 0.9706, Training Accuracy 0.985836
Iter 5, Testing Accuracy 0.9707, Training Accuracy 0.987618
Iter 6, Testing Accuracy 0.9719, Training Accuracy 0.989018
Iter 7, Testing Accuracy 0.9742, Training Accuracy 0.990255
Iter 8, Testing Accuracy 0.9737, Training Accuracy 0.991036
Iter 9, Testing Accuracy 0.9738, Training Accuracy 0.9916

Output:

# With Dropout (keep_prob = 0.7)
# Very little overfitting
Iter 0, Testing Accuracy 0.9175, Training Accuracy 0.9134
Iter 1, Testing Accuracy 0.9291, Training Accuracy 0.926327
Iter 2, Testing Accuracy 0.9362, Training Accuracy 0.935982
Iter 3, Testing Accuracy 0.9399, Training Accuracy 0.940564
Iter 4, Testing Accuracy 0.9433, Training Accuracy 0.9454
Iter 5, Testing Accuracy 0.9465, Training Accuracy 0.949091
Iter 6, Testing Accuracy 0.9479, Training Accuracy 0.952145
Iter 7, Testing Accuracy 0.9504, Training Accuracy 0.956018
Iter 8, Testing Accuracy 0.9523, Training Accuracy 0.956855
Iter 9, Testing Accuracy 0.9542, Training Accuracy 0.9586
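The point of the two runs is the gap between training and test accuracy, not the raw numbers. A quick check on the final-epoch figures reported above:

```python
# Final-epoch accuracies copied from the two runs above
no_dropout_train, no_dropout_test = 0.9916, 0.9738
dropout_train, dropout_test = 0.9586, 0.9542

# Generalization gap: training accuracy minus test accuracy
gap_no_dropout = no_dropout_train - no_dropout_test  # ~0.018
gap_dropout = dropout_train - dropout_test           # ~0.004
```

Without dropout the network fits the training set noticeably better than the test set; with dropout the gap shrinks roughly fourfold, at the cost of lower absolute accuracy after only 10 epochs (the dropout run typically needs more epochs to catch up).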
