Understanding BatchNorm2d in PyTorch

The interactive session below reproduces BatchNorm2d's training-mode behaviour by hand: each channel is normalized over the batch and spatial dimensions, then scaled and shifted by the learnable affine parameters.

import torch

b1 = torch.nn.BatchNorm2d(3)   # one set of statistics and affine parameters per channel
a = torch.randn(2, 3, 4, 4)    # input of shape (N, C, H, W)

c = b1(a)
c.size()                       # the output keeps the input shape
Out[14]: torch.Size([2, 3, 4, 4])
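
A quick sanity check of what the layer does (a minimal sketch; the variable names below are illustrative and not part of the original session): after a training-mode forward pass, each output channel has mean equal to that channel's bias and standard deviation close to that channel's weight, because the normalized activations have zero mean and unit variance.

import torch

bn = torch.nn.BatchNorm2d(3)
x = torch.randn(2, 3, 4, 4)
y = bn(x)                                    # training-mode forward pass

y_per_ch = y.transpose(0, 1).reshape(3, -1)  # (C, N*H*W): all values of each channel
print(y_per_ch.mean(dim=1))                  # ~ bn.bias (zeros)
print(y_per_ch.std(dim=1, unbiased=False))   # ~ bn.weight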

For channel 0, gather every value of that channel across the whole batch (2*4*4 = 32 values), normalize with the biased variance, then apply the channel's weight and bias; the result matches the layer output c[0,0]:

(a[0,0] - torch.cat((a[0,0], a[1,0]), dim=1).mean()) \
    / torch.pow(torch.cat((a[0,0], a[1,0]), dim=1).var(unbiased=False) + 1e-5, 0.5) \
    * b1.weight[0] + b1.bias[0]
Out[23]: 
tensor([[-1.4331,  0.4803, -0.9487,  0.4142],
        [-0.4953,  0.2832,  0.0450, -0.2222],
        [-0.1621, -0.7239, -0.6519, -0.1368],
        [-1.2073,  0.3538, -0.9681, -0.1016]], grad_fn=<ThAddBackward>)
c[0,0]
Out[24]: 
tensor([[-1.4331,  0.4803, -0.9487,  0.4142],
        [-0.4953,  0.2832,  0.0450, -0.2222],
        [-0.1621, -0.7239, -0.6519, -0.1368],
        [-1.2073,  0.3538, -0.9681, -0.1016]], grad_fn=<SelectBackward>)
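
The same check can be done for all three channels at once. A minimal sketch (illustrative variable names, not from the original session): compute the per-channel mean and biased variance over the batch and spatial dimensions, normalize, apply the affine parameters, and compare with the layer output.

import torch

bn = torch.nn.BatchNorm2d(3)
x = torch.randn(2, 3, 4, 4)
y = bn(x)                                       # training-mode forward pass

x_per_ch = x.transpose(0, 1).reshape(3, -1)     # (C, N*H*W)
mean = x_per_ch.mean(dim=1).view(1, -1, 1, 1)
var = x_per_ch.var(dim=1, unbiased=False).view(1, -1, 1, 1)   # biased variance, as used for normalization

manual = (x - mean) / torch.sqrt(var + bn.eps) \
         * bn.weight.view(1, -1, 1, 1) + bn.bias.view(1, -1, 1, 1)

print(torch.allclose(manual, y, atol=1e-6))     # True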

weight (gamma) and bias (beta) are the learnable per-channel affine parameters, and eps is the small constant added to the variance for numerical stability:

b1.weight
Out[26]: 
tensor([0.7185, 0.6812, 0.0770], requires_grad=True)
b1.bias
Out[27]: 
tensor([0., 0., 0.], requires_grad=True)
b1.eps
Out[28]: 1e-05
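
Two notes on the defaults (to the best of my knowledge; check your PyTorch version): recent releases initialize weight to all ones (the uniform random values above come from an older version), and the affine transform can be disabled with affine=False, in which case weight and bias are None and the output is just the normalized input. A minimal sketch with illustrative names:

import torch

bn_plain = torch.nn.BatchNorm2d(3, affine=False)
print(bn_plain.weight, bn_plain.bias)           # None None

x = torch.randn(2, 3, 4, 4)
z = bn_plain(x)

x_per_ch = x.transpose(0, 1).reshape(3, -1)     # (C, N*H*W)
normed = (x - x_per_ch.mean(dim=1).view(1, -1, 1, 1)) \
         / torch.sqrt(x_per_ch.var(dim=1, unbiased=False).view(1, -1, 1, 1) + bn_plain.eps)

print(torch.allclose(z, normed, atol=1e-6))     # True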


running_mean and running_var are the statistics accumulated during training for use at inference time; they are updated with an exponential moving average (default momentum = 0.1):

b1.running_mean
Out[29]: tensor([-0.0067,  0.0155, -0.0003])
b1.running_var
Out[30]: tensor([0.9891, 0.9808, 0.9868])
torch.cat((a[0,0],a[1,0])).mean()
Out[31]: tensor(-0.0668)  # running_mean = 0.9*running_mean + 0.1*batch_mean, initialized to 0
                          # for channel 0: 0.9*0 + 0.1*(-0.0668) = -0.0067
torch.cat((a[0,0],a[1,0])).var(unbiased=False)
Out[32]: tensor(0.8631)   # running_var = 0.9*running_var + 0.1*batch_var, initialized to 1;
                          # the update uses the unbiased batch variance (unlike the normalization above)
                          # for channel 0: 0.9*1 + 0.1*(0.8631*32/31) = 0.9891
# The claim in ref [4] that running_var is initialized to 0 is wrong.
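
A minimal sketch of that update rule, and of what happens in eval() mode (illustrative names; assumes the layer starts from its default running_mean = 0 and running_var = 1):

import torch

bn = torch.nn.BatchNorm2d(3, momentum=0.1)
x = torch.randn(2, 3, 4, 4)
_ = bn(x)                                        # one training-mode forward pass

x_per_ch = x.transpose(0, 1).reshape(3, -1)      # (C, N*H*W)
batch_mean = x_per_ch.mean(dim=1)
batch_var_unbiased = x_per_ch.var(dim=1, unbiased=True)   # unbiased variance for the running stats

print(torch.allclose(bn.running_mean, 0.1 * batch_mean, atol=1e-5))               # 0.9*0 + 0.1*mean
print(torch.allclose(bn.running_var, 0.9 + 0.1 * batch_var_unbiased, atol=1e-5))  # 0.9*1 + 0.1*var

# In eval() mode the stored running statistics are used instead of the batch statistics.
bn.eval()
y_eval = bn(x)
manual_eval = (x - bn.running_mean.view(1, -1, 1, 1)) \
    / torch.sqrt(bn.running_var.view(1, -1, 1, 1) + bn.eps) \
    * bn.weight.view(1, -1, 1, 1) + bn.bias.view(1, -1, 1, 1)
print(torch.allclose(y_eval, manual_eval, atol=1e-6))                              # True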

References:

[1] https://blog.csdn.net/tmk_01/article/details/80679549
[2] https://blog.csdn.net/LoseInVain/article/details/86476010
[3] https://blog.csdn.net/xk_snail/article/details/80006624
[4] https://blog.csdn.net/qunnie_yi/article/details/80128445
