Method
tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
x is the input tensor that dropout is applied to
keep_prob is a float giving the probability that each element of x is kept
Elements that are kept are scaled by 1/keep_prob; elements that are dropped are set to 0
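The keep-and-rescale behavior described above can be sketched in plain NumPy (a minimal illustration of the semantics, not TensorFlow's actual implementation; the function name and RNG seed are my own choices):

```python
import numpy as np

def dropout(x, keep_prob, rng=np.random.default_rng(0)):
    # Keep each element independently with probability keep_prob;
    # kept elements are scaled by 1/keep_prob, dropped elements become 0,
    # so the expected value of each element is unchanged.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((10, 10))
y = dropout(x, 0.5)
# Every output element is either 0.0 or 1/0.5 = 2.0
```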
Code
- 1. Demonstrate the tf.nn.dropout() function
x = tf.Variable(tf.ones([10, 10]))
drop_keep = tf.placeholder(tf.float32, name='keep')
y = tf.nn.dropout(x, drop_keep)
init_op = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
with tf.Session() as sess:
    sess.run(init_op)
    # print(sess.run(x))
    # print(x.eval())
    print(sess.run(y, feed_dict={drop_keep: 0.5}))
# A common choice is keep_prob = 0.8 for the input layer and 0.5 for hidden units
- 2. Train a single-hidden-layer network on the classic MNIST handwritten-digit dataset
# download the classic MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(r'C://Users//ays//Desktop//GIThub类//mnist',one_hot=True)
#print("Training data and label size: ")
#print(mnist.train.images.shape, mnist.train.labels.shape)
#print("Testing data and label size: ")
#print(mnist.test.images.shape, mnist.test.labels.shape)
#print("Validating data and label size: ")
#print(mnist.validation.images.shape, mnist.validation.labels.shape)
# A classic feedforward neural network for handwritten-digit recognition
# epoch: one epoch means one full pass over the entire dataset
# batch_size: within each epoch, batch_size samples are taken per training step until the data is exhausted, which completes that epoch
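The epoch/batch relationship above reduces to simple arithmetic; a quick sketch (the 55,000-image figure is the usual size of the MNIST training split loaded above, and batch_size = 100 is an arbitrary illustrative choice):

```python
# One epoch = a full pass over the data, taken batch_size samples at a time.
num_examples = 55000   # typical mnist.train.num_examples
batch_size = 100       # hypothetical choice for illustration
iterations_per_epoch = num_examples // batch_size
print(iterations_per_epoch)  # 550 steps complete one epoch
```

Larger batches mean fewer iterations per epoch, which is the trade-off the note below begins to list.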
'''Advantages of moderately increasing batch_size
1. Fewer iterations are needed to finish one epoch (a full pass over the dataset)