tf.nn.dropout: with probability keep_prob, outputs the input element scaled up by 1 / keep_prob; otherwise outputs 0. The scaling is so that the expected sum is unchanged.
Note that the non-zero output elements are 1/keep_prob times their original values; this works well when the dataset is fairly large.
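The scaling rule can be checked without TensorFlow at all. Below is a minimal NumPy sketch of the same "inverted dropout" idea (this is an illustration of the technique, not TensorFlow's actual implementation): keep each element with probability keep_prob, scale survivors by 1/keep_prob, and the expected sum stays equal to the input's sum.

```python
import numpy as np

rng = np.random.default_rng()
keep_prob = 0.5
x = np.ones((10, 10))

# Inverted dropout: each element survives with probability keep_prob;
# survivors are scaled by 1/keep_prob so E[y] == x element-wise.
mask = rng.random(x.shape) < keep_prob
y = np.where(mask, x / keep_prob, 0.0)

print(y)          # entries are either 0.0 or 2.0, like the TensorFlow output below
print(y.sum())    # fluctuates around x.sum() == 100.0
```

Each surviving element equals 1/0.5 = 2, so E[y_ij] = keep_prob * (1/keep_prob) * x_ij = x_ij, which is exactly the "expected sum is unchanged" property.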
import tensorflow as tf

dropout = tf.placeholder(tf.float32)
x = tf.Variable(tf.ones([10, 10]))
y = tf.nn.dropout(x, dropout)

# tf.initialize_all_variables() is deprecated; use global_variables_initializer()
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(x))
    print(sess.run(y, feed_dict={dropout: 0.5}))
Corresponding output (the second matrix differs between runs, since the dropout mask is random):
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
[[0. 2. 0. 2. 2. 0. 2. 0. 0. 2.]
[0. 0. 0. 2. 0. 0. 0. 0. 0. 0.]
[2. 0. 0. 2. 0. 2. 2. 0. 0. 2.]
[2. 2. 0. 2. 2. 0. 2. 2. 0. 2.]
[0. 0. 0. 0. 0. 2. 0. 0. 0. 0.]
[0. 2. 2. 2. 0. 2. 2. 0. 2. 0.]
[2. 0. 0. 0. 2. 2. 0. 0. 2. 0.]
[2. 2. 2. 2. 0. 0. 2. 0. 2. 0.]
[0. 2. 0. 0. 0. 0. 2. 0. 2. 0.]
[0. 2. 0. 0. 0. 0. 0. 2. 2. 2.]]