The tf.nn.dropout function
First, let's look at the official function signature:
def dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
Inputs:
- x: the input tensor (your training or test data, etc.)
- keep_prob: the probability that each element is *kept* (note: not the drop probability)
- noise_shape, seed, name: rarely used; not covered here
Output:
- A tensor of the same shape as x
Now let's see how the official API documentation describes this function:
With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.
Note that each non-zero output element is 1/keep_prob times its input! With that said, here is a small example program:
import tensorflow as tf

dropout = tf.placeholder(tf.float32)   # keep probability, fed at run time
x = tf.Variable(tf.ones([10, 10]))
y = tf.nn.dropout(x, dropout)

init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init)
print(sess.run(y, feed_dict={dropout: 0.4}))
Running this prints a 10×10 tensor in which each element is either 0 or 2.5: every kept element is scaled up by 1/0.4 = 2.5.
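Setting TensorFlow aside, the scaling rule is easy to verify with a short NumPy sketch. The `dropout` helper below is a hypothetical re-implementation for illustration, not TensorFlow's own code:

```python
import numpy as np

def dropout(x, keep_prob, rng=np.random.default_rng(0)):
    # Keep each element with probability keep_prob, and scale the
    # survivors by 1 / keep_prob so the expected sum is unchanged.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((10, 10))
y = dropout(x, 0.4)
# Every surviving element equals 1 / 0.4 = 2.5; the rest are 0.
```

Because E[y] = keep_prob * (x / keep_prob) = x, averaging many such draws recovers the original tensor, which is exactly the "expected sum is unchanged" property quoted above.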