Paddle basic function: dropout

Official documentation: https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/dropout_cn.html

 

Example:

import paddle.fluid as fluid
import numpy as np

# Placeholder for a batch of 4x4 feature maps
x = fluid.data(name="x", shape=[-1, 4, 4], dtype="float32")
# Training-mode dropout: elements are zeroed with probability 0.4
droped = fluid.layers.dropout(x, dropout_prob=0.4)
# Test-mode dropout: nothing is zeroed, the output is rescaled instead
droped_test = fluid.layers.dropout(x, dropout_prob=0.4, is_test=True)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

np_x = np.random.random(size=(3, 4, 4)).astype('float32')
output = exe.run(feed={"x": np_x}, fetch_list=[droped, droped_test])
print("np_x data: \n", np_x, "\n\n")
print("drop train: \n", output[0], "\n\n")
print("drop test: \n", output[1])

Output:

np_x data: 
 [[[0.83867234 0.49676806 0.56831753 0.9743865 ]
  [0.52929115 0.21096249 0.09850039 0.28597713]
  [0.45728818 0.26147905 0.22011554 0.6459282 ]
  [0.10191386 0.5059732  0.77578056 0.38812548]]

 [[0.32939243 0.37774974 0.15247999 0.22800855]
  [0.4500703  0.9370683  0.05210597 0.6826846 ]
  [0.7558275  0.7626539  0.6312185  0.9522939 ]
  [0.6928732  0.77195126 0.77838635 0.22987728]]

 [[0.5812424  0.6493947  0.10636061 0.9679944 ]
  [0.6280797  0.32937858 0.78634965 0.21804915]
  [0.04333039 0.71105134 0.22208126 0.09166805]
  [0.4192464  0.36191294 0.02850991 0.77054507]]] 


drop train: 
 [[[0.         0.49676806 0.56831753 0.        ]
  [0.52929115 0.21096249 0.09850039 0.        ]
  [0.         0.26147905 0.22011554 0.6459282 ]
  [0.10191386 0.5059732  0.77578056 0.38812548]]

 [[0.32939243 0.37774974 0.15247999 0.        ]
  [0.4500703  0.9370683  0.         0.6826846 ]
  [0.         0.         0.6312185  0.        ]
  [0.6928732  0.77195126 0.         0.22987728]]

 [[0.5812424  0.6493947  0.10636061 0.9679944 ]
  [0.         0.32937858 0.78634965 0.21804915]
  [0.04333039 0.71105134 0.         0.        ]
  [0.4192464  0.         0.         0.        ]]] 


drop test: 
 [[[0.50320345 0.29806083 0.34099054 0.5846319 ]
  [0.3175747  0.1265775  0.05910023 0.17158628]
  [0.2743729  0.15688744 0.13206933 0.38755694]
  [0.06114832 0.30358395 0.46546835 0.2328753 ]]

 [[0.19763547 0.22664985 0.091488   0.13680513]
  [0.27004218 0.562241   0.03126359 0.40961078]
  [0.45349652 0.45759234 0.3787311  0.5713763 ]
  [0.41572392 0.46317077 0.46703184 0.13792637]]

 [[0.34874544 0.38963684 0.06381637 0.58079666]
  [0.37684783 0.19762716 0.4718098  0.1308295 ]
  [0.02599824 0.42663082 0.13324876 0.05500083]
  [0.25154784 0.21714777 0.01710594 0.46232706]]]

From this we can observe:

(1) In the training phase, with dropout_prob=0.4, roughly 40% of the elements are set to 0;

(2) In the test phase, with dropout_prob=0.4, each element is scaled to 60% (i.e. 1 - 40%) of its original value; for example, 0.83867234 * (1 - 0.4) = 0.5032034.
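As a quick numeric check (a minimal numpy-only sketch, not part of the original run), the test-mode output should simply be the input scaled by 1 - dropout_prob:

import numpy as np

dropout_prob = 0.4
# A few values taken from the np_x printout above, used as a standalone check
np_x = np.array([0.83867234, 0.49676806, 0.56831753], dtype=np.float32)

# In test mode the default dropout just downscales the input by (1 - dropout_prob)
expected_test = np_x * (1 - dropout_prob)
print(expected_test)  # ~[0.50320345, 0.29806083, 0.34099054], matching "drop test" above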

See: why does a neural-network Dropout layer still need to rescale after dropping units?

Imagine 10 people pulling a 10-ton cart.
The first time (training), only 6 people pull (a fraction p = 0.4 of them have been dropped out), so each of those 6 pulls 10/6 tons;
the second time (inference), all 10 people are asked to pull, and each person's share becomes 10/6 * (1 - 0.4) = 1 ton.
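The run above corresponds to the default dropout_implementation="downgrade_in_infer" (values kept as-is at train time, downscaled by 1 - dropout_prob at test time). fluid.layers.dropout also accepts dropout_implementation="upscale_in_train", which instead divides the kept activations by 1 - dropout_prob during training and leaves the test-time output unchanged, much like the cart analogy. The sketch below contrasts the two modes at test time; it is a rough example and has not been verified against a specific Paddle version:

import paddle.fluid as fluid
import numpy as np

x = fluid.data(name="x", shape=[-1, 4, 4], dtype="float32")

# Default: kept values unchanged at train time, scaled by (1 - p) at test time
down_test = fluid.layers.dropout(x, dropout_prob=0.4, is_test=True,
                                 dropout_implementation="downgrade_in_infer")
# Alternative: kept values divided by (1 - p) at train time, untouched at test time
up_test = fluid.layers.dropout(x, dropout_prob=0.4, is_test=True,
                               dropout_implementation="upscale_in_train")

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

np_x = np.ones((1, 4, 4), dtype="float32")
out = exe.run(feed={"x": np_x}, fetch_list=[down_test, up_test])
print(out[0][0, 0, 0])  # expected ~0.6: downscaled at inference
print(out[1][0, 0, 0])  # expected 1.0: unchanged at inference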

 
