Differences among PyTorch padding methods (F.pad and the Pad2d modules)

In PyTorch, padding is applied before the convolution runs.
Padding matters most when the feature map is small; for large images its effect on the result is usually minor.
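Each of the padding modules discussed below also has a functional counterpart in `torch.nn.functional.pad`, where the `mode` argument selects the scheme. A small self-contained sketch (the input values are chosen purely for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 2, 2)

# pad = (left, right, top, bottom); each mode matches one of the Pad2d modules
zero_pad = F.pad(x, (1, 1, 1, 1), mode='constant', value=0)  # like nn.ZeroPad2d(1)
refl_pad = F.pad(x, (1, 1, 1, 1), mode='reflect')            # like nn.ReflectionPad2d(1)
repl_pad = F.pad(x, (1, 1, 1, 1), mode='replicate')          # like nn.ReplicationPad2d(1)

print(zero_pad.shape)  # torch.Size([1, 1, 4, 4])
```

Note that `mode='reflect'` requires the padding to be smaller than the corresponding input dimension.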

self.conv1 = nn.Conv2d(1, 1, 3, 1, 1)  # padding=1 built into the convolution
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224,  0.6606,  0.2888,  0.1508],
          [ 0.1197, -0.2834,  0.0889, -0.3850],
          [-1.4643, -0.9435, -0.2578,  0.5352],
          [-0.9292,  0.2906,  0.1087, -0.4605]]]],
       grad_fn=<MkldnnConvolutionBackward>)

ZeroPad2d pads the boundaries of the input tensor with zeros.

self.conv1 = nn.Sequential(
    nn.ZeroPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),  # no built-in padding; ZeroPad2d does it instead
)
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224,  0.6606,  0.2888,  0.1508],
          [ 0.1197, -0.2834,  0.0889, -0.3850],
          [-1.4643, -0.9435, -0.2578,  0.5352],
          [-0.9292,  0.2906,  0.1087, -0.4605]]]],
       grad_fn=<MkldnnConvolutionBackward>)

The two outputs above are identical, which shows that Conv2d's built-in padding is zero-padding.
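This equivalence can be checked directly by sharing weights between the two formulations (a small sketch; the input tensor is random, so only the equality of the two outputs matters):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4)

# Conv2d with built-in padding=1 ...
conv = nn.Conv2d(1, 1, 3, 1, 1)

# ... versus explicit zero padding followed by an unpadded Conv2d
pad_conv = nn.Sequential(nn.ZeroPad2d(1), nn.Conv2d(1, 1, 3, 1, 0))
# share the exact same weights and bias so the comparison is fair
pad_conv[1].weight = conv.weight
pad_conv[1].bias = conv.bias

print(torch.allclose(conv(x), pad_conv(x)))  # True
```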

ReflectionPad2d pads the input tensor using a reflection of its boundary (the border values themselves are not repeated).

self.conv1 = nn.Sequential(
    nn.ReflectionPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[-0.2560, -0.3574,  0.7523,  0.1149],
          [-0.0242, -0.2834,  0.0889, -0.6879],
          [-1.6183, -0.9435, -0.2578,  0.1577],
          [ 0.1001,  0.4232,  0.4886, -0.0972]]]],
       grad_fn=<MkldnnConvolutionBackward>)
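The reflection pattern is easiest to see on a tiny tensor with known values (a minimal sketch; the 3x3 input is chosen just so each padded value can be traced back to its source):

```python
import torch
import torch.nn as nn

x = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)
# x[0, 0] = [[0, 1, 2],
#            [3, 4, 5],
#            [6, 7, 8]]
y = nn.ReflectionPad2d(1)(x)
print(y[0, 0])
# tensor([[4., 3., 4., 5., 4.],
#         [1., 0., 1., 2., 1.],
#         [4., 3., 4., 5., 4.],
#         [7., 6., 7., 8., 7.],
#         [4., 3., 4., 5., 4.]])
```

Each padded row/column mirrors the values one step inside the border, e.g. the row above `[0, 1, 2]` is the reflection of `[3, 4, 5]`.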

ReplicationPad2d pads the input tensor by replicating its boundary values.

self.conv1 = nn.Sequential(
    nn.ReplicationPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.5032,  0.3801,  0.3956,  0.0496],
          [ 0.1412, -0.2834,  0.0889, -0.0786],
          [-1.4233, -0.9435, -0.2578,  0.6855],
          [-0.8149,  0.4816, -0.1508, -1.2202]]]],
       grad_fn=<MkldnnConvolutionBackward>)
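Replication simply copies the nearest edge value outward, in contrast to reflection. The same tiny-tensor sketch makes the difference visible:

```python
import torch
import torch.nn as nn

x = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)
y = nn.ReplicationPad2d(1)(x)
print(y[0, 0])
# tensor([[0., 0., 1., 2., 2.],
#         [0., 0., 1., 2., 2.],
#         [3., 3., 4., 5., 5.],
#         [6., 6., 7., 8., 8.],
#         [6., 6., 7., 8., 8.]])
```

Here the corners repeat the corner values (0, 2, 6, 8), whereas ReflectionPad2d fills them from one step inside the border.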

ConstantPad2d pads the boundaries of the input tensor with a constant value; with value 0 it is equivalent to ZeroPad2d, as the first output below confirms.

self.conv1 = nn.Sequential(
    nn.ConstantPad2d(1, 0),  # constant 0: same as ZeroPad2d(1)
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224,  0.6606,  0.2888,  0.1508],
          [ 0.1197, -0.2834,  0.0889, -0.3850],
          [-1.4643, -0.9435, -0.2578,  0.5352],
          [-0.9292,  0.2906,  0.1087, -0.4605]]]],
       grad_fn=<MkldnnConvolutionBackward>)
self.conv1 = nn.Sequential(
    nn.ConstantPad2d(1, 1),  # constant 1: the output now differs from zero padding
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850,  0.5036, -0.0835, -0.0235],
          [ 0.1744,  2.2983,  0.9571, -0.6619],
          [ 1.3989,  1.4059, -1.4013,  1.2973],
          [ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[-0.3114,  0.0486, -0.3232, -0.4158],
          [ 0.2464, -0.2834,  0.0889, -0.8646],
          [-1.3376, -0.9435, -0.2578,  0.0556],
          [-0.5656,  0.7162,  0.5343, -0.6800]]]],
       grad_fn=<MkldnnConvolutionBackward>)
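The effect of the constant is easy to inspect before any convolution is applied (a minimal sketch; the fill value 9.0 is arbitrary, chosen only to stand out):

```python
import torch
import torch.nn as nn

x = torch.ones(1, 1, 2, 2)
y = nn.ConstantPad2d(1, 9.0)(x)  # fill the 1-pixel border with 9.0
print(y[0, 0])
# tensor([[9., 9., 9., 9.],
#         [9., 1., 1., 9.],
#         [9., 1., 1., 9.],
#         [9., 9., 9., 9.]])
```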