Dropout:
During training, a fraction (1 - P) of the neurons in a given layer are randomly deactivated, so the network cannot rely on any particular neuron and neurons cannot co-adapt, which improves the model's ability to generalize.
TensorFlow's dropout works like this: some elements of the input tensor are set to 0, and every element that is not zeroed is scaled up to 1/keep_prob of its original value, so the expected sum of the activations is roughly the same before and after dropout.
keep_prob: A scalar Tensor with the same type as x. The probability that each element is kept.
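The scaling rule above can be sketched in plain NumPy (the helper name `dropout` here is just for illustration, not a TensorFlow API): each element survives with probability keep_prob, and survivors are divided by keep_prob so the expected sum is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, keep_prob):
    # Inverted dropout: zero each element with probability 1 - keep_prob,
    # scale the survivors by 1 / keep_prob to preserve the expected sum.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(1000)
y = dropout(x, keep_prob=0.5)
# Surviving elements become 2.0, dropped ones become 0.0,
# and y.sum() stays close to x.sum() on average.
print(x.sum(), y.sum())
```

This is why the TensorFlow output below shows the kept values doubled when keep_prob is 0.5.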
The TensorFlow code is as follows:
import numpy as np
import tensorflow as tf
a = np.array([[1,1],[2,2]], dtype=np.float32)
a = np.reshape(a, [1,2,2,1])
x = tf.constant(a,dtype=tf.float32)
upsample_x = tf.layers.conv2d_transpose(x, 1, 3, strides=2, padding='same', activation=tf.nn.relu, kernel_initializer=tf.ones_initializer())
batch1 = tf.layers.batch_normalization(upsample_x, training=True)
dropout1 = tf.nn.dropout(batch1,0.5)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(upsample_x))
    spliteLine = tf.constant('*****************************************************')
    print(sess.run(spliteLine))
    print(sess.run(batch1))
    print(sess.run(spliteLine))
    print(sess.run(dropout1))  # dropout output: kept values doubled, others zeroed