Handling the tf.nn.dropout() deprecation WARNING


WARNING: Logging before flag parsing goes to stderr.
calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

The source of tf.nn.dropout() looks like this:

def dropout(x, keep_prob=None, noise_shape=None, seed=None, name=None, rate=None):
    '''Excerpt from the official docstring:
    Args:
      x: A floating point tensor.
      keep_prob: (deprecated) A deprecated alias for `(1-rate)`
      rate: A scalar `Tensor` with the same type as `x`. The probability that each
        element of `x` is discarded.
    '''
    try:
        keep = 1. - keep_prob if keep_prob is not None else None
    except TypeError:
        raise ValueError("keep_prob must be a floating point number or Tensor "
                         "(got %r)" % keep_prob)
    rate = deprecation.deprecated_argument_lookup(
        "rate", rate,
        "keep_prob", keep)
    if rate is None:
        raise ValueError("You must provide a rate to dropout.")
    return dropout_v2(x, rate, noise_shape=noise_shape, seed=seed, name=name)

Reading the official docstring and the code:

  1. x: a floating-point tensor, the input to the function.
  2. keep_prob: a parameter that is already "deprecated": it will be removed in a later version, it still works for now, and its behavior is defined in terms of rate.
  3. rate: a scalar Tensor with the same type as x, giving the probability that each element of x is discarded (not, as you might guess, a Tensor with the same shape as x). Numerically, rate = 1. - keep_prob; in other words, keep_prob used to give the probability that each element of x is kept.
  4. The source above does only 3 things:
    1. compute keep
    2. compute rate
    3. if rate is not None, call dropout_v2; dropout_v2 is what actually performs the dropout
  5. deprecated_argument_lookup works as follows: if both keep_prob and rate were passed, it raises an error; if a keep value (1 - keep_prob) is available, it returns keep as rate; if keep_prob was not given, it returns the rate value unchanged.
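The lookup behavior in point 5 can be sketched in plain Python. This is a minimal illustration of the logic described above, not TF's actual deprecation.deprecated_argument_lookup implementation:

```python
def deprecated_argument_lookup_sketch(new_name, new_value, old_name, old_value):
    """Minimal sketch: prefer the old (deprecated) value if given,
    error out if both old and new were supplied."""
    if old_value is not None:
        if new_value is not None:
            # both rate and keep_prob were passed -> error
            raise ValueError("Cannot specify both '%s' and '%s'"
                             % (new_name, old_name))
        return old_value  # keep (= 1 - keep_prob) becomes rate
    return new_value      # otherwise rate is used as-is

# rate passes through when keep_prob was not given
assert deprecated_argument_lookup_sketch("rate", 0.2, "keep_prob", None) == 0.2
# keep (1 - keep_prob) is returned when only keep_prob was given
assert deprecated_argument_lookup_sketch("rate", None, "keep_prob", 0.8) == 0.8
```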

A few examples:

import tensorflow as tf
sess = tf.InteractiveSession()
# setup
prob_keep = tf.placeholder(tf.float32)  # probability of keeping an element
prob_drop = tf.placeholder(tf.float32)  # probability of dropping an element
x = tf.Variable(tf.ones([10]))

# dropout
y1 = tf.nn.dropout(x, prob_keep)  # equivalent to y1 = tf.nn.dropout(x, keep_prob=prob_keep)
y2 = tf.nn.dropout(x, rate=prob_drop)

init = tf.global_variables_initializer()
sess.run(init)
print(sess.run({'x': x, 'y1': y1},
               feed_dict={prob_keep: 0.2}))

print(sess.run({'x': x, 'y2': y2},
               feed_dict={prob_drop: 0.2}))

The output looks like this (dropout is random, so your exact zeros will differ):
x [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]
y1 [0., 5.0000005, 0., 5.0000005, 0., 5.0000005, 0. , 0. , 5.0000005, 5.0000005]
y2 [1.25, 1.25, 1.25, 1.25, 1.25, 1.25, 0. , 1.25, 0. , 0. ]

y1 applies dropout to x with keep_prob = 0.2: 5 elements were zeroed, and each survivor was scaled from 1 to 1 * 1/0.2 ≈ 5.0000005.
y2 applies dropout to x with rate = 0.2: 3 elements were zeroed, and each survivor was scaled from 1 to 1 * 1/(1-0.2) = 1.25.
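The scaling above is the standard "inverted dropout" trick: survivors are scaled by 1/(1 - rate) so the expected value of each element is unchanged. A minimal NumPy sketch of the idea (my own illustration, not TF's implementation):

```python
import numpy as np

def dropout_sketch(x, rate, rng):
    """Inverted dropout: zero each element with probability `rate`,
    scale the survivors by 1/(1 - rate) to preserve the expectation."""
    mask = rng.random(x.shape) >= rate          # True = kept
    return np.where(mask, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones(10, dtype=np.float32)
y = dropout_sketch(x, rate=0.2, rng=rng)
# every output element is either 0 or 1 * 1/(1 - 0.2) = 1.25
print(y)
```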

Sure enough, running y1 is what triggers the WARNING.
So the next time you see this WARNING, take the line from whatever tutorial you are following,
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
and change it to:
h_fc1_drop = tf.nn.dropout(h_fc1, rate = 1 - keep_prob)
and the warning goes away.

Or, if leftover naming bothers you, change the preceding line as well:
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
becomes
drop_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, rate = drop_prob)

But!!! If you rename the placeholder, the values fed during training must change too, in exactly these three places:

  • train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], drop_prob: 0.0})
  • train_step.run(feed_dict={x: batch[0], y_: batch[1], drop_prob: 0.5})
  • inside the print: accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, drop_prob: 0.0})
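The conversion of the feed values follows directly from rate = 1 - keep_prob. As a trivial sketch of the mapping between the old and new feeds:

```python
def to_drop_prob(keep_prob):
    """Convert an old keep_prob feed value to the equivalent drop_prob (= rate)."""
    return 1.0 - keep_prob

assert to_drop_prob(1.0) == 0.0  # evaluation: keep everything, drop nothing
assert to_drop_prob(0.5) == 0.5  # training: drop half the units
```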
