Some tricks for "dropout" in deep learning

1. How to use dropout

def dropout(x, keep_prob, noise_shape=None, seed=None, name=None)

where:

x is a floating-point tensor of neuron outputs

keep_prob is the probability that each element is kept (i.e. the expected fraction of neurons retained)
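For context, a minimal usage sketch (assuming the TensorFlow 1.x API that this post is based on):

import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0, 4.0]])  # pretend these are neuron outputs
y = tf.nn.dropout(x, keep_prob=0.5)       # each element is kept with probability 0.5

with tf.Session() as sess:
    print(sess.run(y))  # kept entries are scaled by 1/0.5 = 2, dropped entries become 0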

TensorFlow source code:

def dropout(x, keep_prob, noise_shape=None, seed=None, name=None):  # pylint: disable=invalid-name
  """Computes dropout.

  With probability `keep_prob`, outputs the input element scaled up by
  `1 / keep_prob`, otherwise outputs `0`.  The scaling is so that the expected
  sum is unchanged.

  By default, each element is kept or dropped independently.  If `noise_shape`
  is specified, it must be
  [broadcastable](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
  to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]`
  will make independent decisions.  For example, if `shape(x) = [k, l, m, n]`
  and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be
  kept independently and each row and column will be kept or not kept together.

  Args:
    x: A floating point tensor.
    keep_prob: A scalar `Tensor` with the same type as x. The probability
      that each element is kept.
    noise_shape: A 1-D `Tensor` of type `int32`, representing the
      shape for randomly generated keep/drop flags.
    seed: A Python integer. Used to create random seeds. See
      `tf.set_random_seed`
      for behavior.
    name: A name for this operation (optional).

  Returns:
    A Tensor of the same shape of `x`.

  Raises:
    ValueError: If `keep_prob` is not in `(0, 1]` or if `x` is not a floating
      point tensor.
  """
  with ops.name_scope(name, "dropout", [x]) as name:
    x = ops.convert_to_tensor(x, name="x")
    if not x.dtype.is_floating:
      raise ValueError("x has to be a floating point tensor since it's going to"
                       " be scaled. Got a %s tensor instead." % x.dtype)
    if isinstance(keep_prob, numbers.Real) and not 0 < keep_prob <= 1:
      raise ValueError("keep_prob must be a scalar tensor or a float in the "
                       "range (0, 1], got %g" % keep_prob)

    # Early return if nothing needs to be dropped.
    if isinstance(keep_prob, float) and keep_prob == 1:
      return x
    if context.executing_eagerly():
      if isinstance(keep_prob, ops.EagerTensor):
        if keep_prob.numpy() == 1:
          return x
    else:
      keep_prob = ops.convert_to_tensor(
          keep_prob, dtype=x.dtype, name="keep_prob")
      keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())

      # Do nothing if we know keep_prob == 1
      if tensor_util.constant_value(keep_prob) == 1:
        return x

    noise_shape = _get_noise_shape(x, noise_shape)

    # uniform [keep_prob, 1.0 + keep_prob)
    random_tensor = keep_prob
    random_tensor += random_ops.random_uniform(
        noise_shape, seed=seed, dtype=x.dtype)
    # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
    binary_tensor = math_ops.floor(random_tensor)
    ret = math_ops.div(x, keep_prob) * binary_tensor
    if not context.executing_eagerly():
      ret.set_shape(x.get_shape())
    return ret
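To illustrate the noise_shape behavior described in the docstring, here is a small sketch (my own example, assuming TF 1.x): with shape(x) = [2, 3] and noise_shape = [2, 1], one keep/drop flag is drawn per row, so each row is kept or zeroed as a whole.

import tensorflow as tf

x = tf.ones([2, 3])
y = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[2, 1])

with tf.Session() as sess:
    print(sess.run(y))
    # possible output: each row is either all 2.0 (kept, scaled by 1/0.5)
    # or all 0.0 (dropped together), e.g. [[2. 2. 2.], [0. 0. 0.]]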

Analyzing how dropout works, based on the TensorFlow source code

1) keep_prob is the probability that each neuron's output is kept. If keep_prob = 1, every output is kept unchanged and the function returns early, as the following code shows:

    # Early return if nothing needs to be dropped.
    if isinstance(keep_prob, float) and keep_prob == 1:
      return x
    if context.executing_eagerly():
      if isinstance(keep_prob, ops.EagerTensor):
        if keep_prob.numpy() == 1:
          return x
    else:
      keep_prob = ops.convert_to_tensor(
          keep_prob, dtype=x.dtype, name="keep_prob")
      keep_prob.get_shape().assert_is_compatible_with(tensor_shape.scalar())

      # Do nothing if we know keep_prob == 1
      if tensor_util.constant_value(keep_prob) == 1:
        return x
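A quick check of this branch (a sketch based on the source quoted above): when keep_prob is the Python float 1.0, dropout returns x itself, so no random op is even added to the graph.

import tensorflow as tf

x = tf.ones([2, 3])
y = tf.nn.dropout(x, keep_prob=1.0)
print(y is x)  # True with the implementation quoted above: the input tensor is returned unchanged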

2) If keep_prob is less than 1, some neuron outputs are zeroed out. To keep the network's expected total output unchanged (and thus keep training-time and test-time behavior consistent), the surviving outputs are scaled up by the inverse of the keep probability, i.e. y = y / keep_prob. This is the "inverted dropout" trick, implemented in the code below:

    # 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
    binary_tensor = math_ops.floor(random_tensor)
    ret = math_ops.div(x, keep_prob) * binary_tensor
    if not context.executing_eagerly():
      ret.set_shape(x.get_shape())
    return ret
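The same trick can be re-implemented in a few lines of NumPy (a sketch for illustration, not TF's actual code): floor(keep_prob + U[0, 1)) equals 1 with probability keep_prob and 0 otherwise, and dividing the kept values by keep_prob leaves the expected sum unchanged.

import numpy as np

def dropout_np(x, keep_prob):
    # uniform in [keep_prob, 1.0 + keep_prob), floored to a 0/1 mask
    random_tensor = keep_prob + np.random.uniform(size=x.shape)
    binary_mask = np.floor(random_tensor)
    # scale survivors by 1/keep_prob so the expected sum matches the input
    return x / keep_prob * binary_mask

x = np.ones(100000)
y = dropout_np(x, keep_prob=0.5)
print(x.sum(), y.sum())  # the two sums should be close, e.g. 100000.0 vs ~100000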

3) When using dropout, you must distinguish the training phase from the test phase: training with dropout produces many diverse small-scale feature extractors, while testing needs to combine all of those small extractors to obtain comprehensive features, so dropout must be disabled (keep_prob = 1) at test time.

Define the placeholder:

keep_prob = tf.placeholder(tf.float32)

When running the training optimizer:

sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})

When running a forward pass without updating parameters:

train_result = sess.run(merged, feed_dict={xs: X_train, ys: y_train, keep_prob: 1})
test_result = sess.run(merged, feed_dict={xs: X_test, ys: y_test, keep_prob: 1})
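For completeness, a self-contained sketch of this pattern (xs, ys, merged and train_step are assumed from the surrounding post; here hypothetical stand-ins are defined explicitly, assuming TF 1.x):

import tensorflow as tf

xs = tf.placeholder(tf.float32, [None, 64])
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

hidden = tf.layers.dense(xs, 50, activation=tf.nn.tanh)
hidden = tf.nn.dropout(hidden, keep_prob)  # active only when keep_prob < 1
logits = tf.layers.dense(hidden, 10)

loss = tf.losses.softmax_cross_entropy(ys, logits)
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # training step: drop half of the hidden activations
    # sess.run(train_step, feed_dict={xs: X_train, ys: y_train, keep_prob: 0.5})
    # evaluation: keep everything so the full ensemble of sub-networks is used
    # sess.run(loss, feed_dict={xs: X_test, ys: y_test, keep_prob: 1.0})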