In PyTorch, the padding of a convolution is applied before the convolution itself.
Padding matters most when the input image is small; for large images its effect is usually minor.
self.conv1 = nn.Conv2d(1, 1, 3, 1, 1)  # in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224, 0.6606, 0.2888, 0.1508],
[ 0.1197, -0.2834, 0.0889, -0.3850],
[-1.4643, -0.9435, -0.2578, 0.5352],
[-0.9292, 0.2906, 0.1087, -0.4605]]]],
grad_fn=<MkldnnConvolutionBackward>)
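The printouts above come from feeding a random 1x1x4x4 tensor through this layer. A minimal sketch of such a harness is shown below; the class name Net, the forward pass, and the random input are assumptions for illustration, not code from the original post.

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1
        self.conv1 = nn.Conv2d(1, 1, 3, 1, 1)

    def forward(self, x):
        return self.conv1(x)

net = Net()
x = torch.randn(1, 1, 4, 4)  # batch=1, channel=1, 4x4 spatial size
print(x)        # the input tensor
print(net(x))   # padding=1 with a 3x3 kernel keeps the 4x4 size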
ZeroPad2d pads the input tensor boundary with zeros.
self.conv1 = nn.Sequential(
    nn.ZeroPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224, 0.6606, 0.2888, 0.1508],
[ 0.1197, -0.2834, 0.0889, -0.3850],
[-1.4643, -0.9435, -0.2578, 0.5352],
[-0.9292, 0.2906, 0.1087, -0.4605]]]],
grad_fn=<MkldnnConvolutionBackward>)
The two outputs are identical, so the built-in padding used by the convolution is zero padding; the sketch below double-checks this with shared weights.
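A small, self-contained check of this claim (the random input here is not the tensor printed above): copy the weights of a Conv2d with padding=1 into a Conv2d with padding=0, prepend ZeroPad2d(1), and compare the outputs.

import torch
import torch.nn as nn

conv_builtin = nn.Conv2d(1, 1, 3, 1, 1)                    # built-in padding=1
conv_explicit = nn.Conv2d(1, 1, 3, 1, 0)                   # no built-in padding
conv_explicit.load_state_dict(conv_builtin.state_dict())   # same weights and bias

x = torch.randn(1, 1, 4, 4)
out_a = conv_builtin(x)
out_b = conv_explicit(nn.ZeroPad2d(1)(x))
print(torch.allclose(out_a, out_b))  # True: the built-in padding is zero padding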
ReflectionPad2d pads the input tensor using a reflection of the input boundary.
self.conv1 = nn.Sequential(
    nn.ReflectionPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[-0.2560, -0.3574, 0.7523, 0.1149],
[-0.0242, -0.2834, 0.0889, -0.6879],
[-1.6183, -0.9435, -0.2578, 0.1577],
[ 0.1001, 0.4232, 0.4886, -0.0972]]]],
grad_fn=<MkldnnConvolutionBackward>)
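To see the reflection itself without the convolution, the padding layer can be applied directly to a small tensor; the 1-to-9 values here are illustrative and not from the original post.

import torch
import torch.nn as nn

x = torch.arange(1., 10.).reshape(1, 1, 3, 3)
print(nn.ReflectionPad2d(1)(x))
# The border mirrors the interior around the edge, e.g. the first padded row
# is [5., 4., 5., 6., 5.]; the edge values themselves are not repeated.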
ReplicationPad2d pads the input tensor by replicating the input boundary.
self.conv1 = nn.Sequential(
    nn.ReplicationPad2d(1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.5032, 0.3801, 0.3956, 0.0496],
[ 0.1412, -0.2834, 0.0889, -0.0786],
[-1.4233, -0.9435, -0.2578, 0.6855],
[-0.8149, 0.4816, -0.1508, -1.2202]]]],
grad_fn=<MkldnnConvolutionBackward>)
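The same direct view for replication padding (again with illustrative 1-to-9 values): the edge values are simply repeated outward.

import torch
import torch.nn as nn

x = torch.arange(1., 10.).reshape(1, 1, 3, 3)
print(nn.ReplicationPad2d(1)(x))
# The first padded row is [1., 1., 2., 3., 3.]: each border value is copied.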
ConstantPad2d pads the input tensor boundary with a constant value; with a value of 0 it behaves exactly like ZeroPad2d, which is why the first output below matches the zero-padded results above.
self.conv1 = nn.Sequential(
    nn.ConstantPad2d(1, 0),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[ 0.3224, 0.6606, 0.2888, 0.1508],
[ 0.1197, -0.2834, 0.0889, -0.3850],
[-1.4643, -0.9435, -0.2578, 0.5352],
[-0.9292, 0.2906, 0.1087, -0.4605]]]],
grad_fn=<MkldnnConvolutionBackward>)
self.conv1 = nn.Sequential(
    nn.ConstantPad2d(1, 1),
    nn.Conv2d(1, 1, 3, 1, 0),
)
tensor([[[[-1.3850, 0.5036, -0.0835, -0.0235],
[ 0.1744, 2.2983, 0.9571, -0.6619],
[ 1.3989, 1.4059, -1.4013, 1.2973],
[ 1.6409, -1.0567, -0.2616, -0.2501]]]])
tensor([[[[-0.3114, 0.0486, -0.3232, -0.4158],
[ 0.2464, -0.2834, 0.0889, -0.8646],
[-1.3376, -0.9435, -0.2578, 0.0556],
[-0.5656, 0.7162, 0.5343, -0.6800]]]],
grad_fn=<MkldnnConvolutionBackward>)
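Applying the constant padding directly makes the difference visible (illustrative values again): with value 1 the border is filled with ones, while ConstantPad2d(1, 0) reproduces ZeroPad2d(1).

import torch
import torch.nn as nn

x = torch.arange(1., 10.).reshape(1, 1, 3, 3)
print(nn.ConstantPad2d(1, 1.0)(x))   # border filled with 1.0
print(torch.equal(nn.ConstantPad2d(1, 0.)(x), nn.ZeroPad2d(1)(x)))  # True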