net = tf_util.dropout(net, keep_prob=0.7, is_training=is_training,
                      scope='dp1')
The implementation is as follows:
def dropout(inputs,
            is_training,
            scope,
            keep_prob=0.5,
            noise_shape=None):
  """ Dropout layer.
  Args:
    inputs: tensor
    is_training: boolean tf.Variable
    scope: string
    keep_prob: float in [0,1]
    noise_shape: list of ints
  Returns:
    tensor variable
  """
  with tf.variable_scope(scope) as sc:
    outputs = tf.cond(is_training,
                      lambda: tf.nn.dropout(inputs, keep_prob, noise_shape),
                      lambda: inputs)
    return outputs
As the code shows, this is a thin wrapper around TensorFlow's API.
With keep_prob=0.7, each element of inputs is kept with probability 0.7 and zeroed out with probability 1 - keep_prob = 0.3; the surviving elements are scaled up by 1 / keep_prob, here 1/0.7, so that the expected value of the output matches the input.
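The keep-and-rescale behavior can be checked with a small pure-NumPy sketch (an illustration of the same "inverted dropout" idea, not TensorFlow's actual implementation; the function name `inverted_dropout` is hypothetical):

```python
import numpy as np

def inverted_dropout(x, keep_prob=0.7, rng=None):
    """Keep each element with probability keep_prob, zero the rest,
    and scale survivors by 1 / keep_prob so E[output] == x."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((1000, 100))
y = inverted_dropout(x, keep_prob=0.7)
# Roughly 30% of the entries are zeroed, and the rescaling keeps
# the overall mean close to the original mean of 1.0.
print((y == 0).mean(), y.mean())
```

Without the 1/0.7 rescaling, the expected activation would shrink by 30% at training time, and the network would see a different activation scale at inference.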
tf.cond works like the ternary operator in C:

bool b = true;
int a = b ? 1 : 2;

If b is true then a = 1; otherwise a = 2.
Likewise, this function applies dropout when is_training is true, and otherwise returns inputs unchanged.
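The train/inference switch can be sketched in plain Python (a hypothetical NumPy stand-in for the wrapper above, with `is_training` as an ordinary bool rather than a graph-mode `tf.cond` branch):

```python
import numpy as np

def dropout_layer(x, keep_prob, is_training, rng):
    """Dropout only in training mode; identity in inference mode."""
    if is_training:
        mask = rng.random(x.shape) < keep_prob
        return np.where(mask, x / keep_prob, 0.0)
    return x  # inference: return inputs unchanged

rng = np.random.default_rng(42)
x = np.ones((4, 4))
train_out = dropout_layer(x, 0.7, is_training=True, rng=rng)
eval_out = dropout_layer(x, 0.7, is_training=False, rng=rng)
print(np.array_equal(eval_out, x))  # prints True
```

Note that in the TensorFlow version the branch must be expressed with tf.cond because `is_training` is a tensor whose value is only known when the graph runs, not a Python bool.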