Dropout
- Implementation (here p is the probability of a unit being dropped)
- With scaling at training time, i.e. multiply by 1/(1-p); nothing needs to be done at test time (inverted dropout)
- Without scaling at training time; at test time multiply the outputs by (1-p)
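The two scaling conventions above can be sketched in NumPy. This is a minimal illustration, not a framework implementation; the function names are made up here, and p follows the note's convention of "probability of being dropped":

```python
import numpy as np

def dropout_train_inverted(x, p, rng):
    """Inverted dropout: drop each unit with probability p and scale
    survivors by 1/(1-p), so the expected activation is unchanged and
    test time needs no correction."""
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)

def dropout_train_vanilla(x, p, rng):
    """Vanilla dropout: drop units with probability p, no scaling."""
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask

def dropout_test_vanilla(x, p):
    """Vanilla dropout at test time: multiply by (1-p) to match the
    expected activation seen during training."""
    return x * (1.0 - p)
```

Either way the expected output matches the clean activation; inverted dropout just pays the scaling cost at training time instead of inference time, which is why it is the common choice in practice.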
- Theory
- Dropout, with a detailed mathematical explanation and a discussion of the two implementations above (here p is the keep probability)
- An introduction to how Dropout works (here p is the probability of being dropped)
- TensorFlow implementation (here p is the keep probability)
- With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.
- tf.nn.dropout
- A discussion of TensorFlow's dropout
- So when using TensorFlow's dropout function, at test time you only need to set keep_prob to 1; there is no need to scale the outputs again, because the scaling was already applied during training
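The tf.nn.dropout behavior quoted above can be mimicked in plain NumPy; this is a hand-written sketch of the semantics (keep with probability keep_prob, scale kept values by 1/keep_prob), not TensorFlow's actual code:

```python
import numpy as np

def tf_style_dropout(x, keep_prob, rng):
    """Mimics tf.nn.dropout semantics: keep each element with probability
    keep_prob, scale kept elements by 1/keep_prob so the expected sum is
    unchanged; dropped elements become 0."""
    mask = (rng.random(x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob
```

Note that with keep_prob=1.0 the mask is all ones and the function is the identity, which is exactly why setting keep_prob to 1 at test time requires no further scaling.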
Explanations from open courses
- lesson1
- lesson2