SAME padding in PyTorch

Implement "same" padding for convolution operations

The following mimics TensorFlow's SAME padding (written against the functional interface, so that nn.Conv2d can simply call into F.conv2d_same_padding):

import torch.nn.functional as F

def conv2d_same_padding(input, weight, bias=None, stride=1, dilation=1, groups=1):
    # Normalize int arguments to (rows, cols) pairs.
    if isinstance(stride, int):
        stride = (stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation)

    input_rows = input.size(2)
    filter_rows = weight.size(2)
    effective_filter_size_rows = (filter_rows - 1) * dilation[0] + 1
    out_rows = (input_rows + stride[0] - 1) // stride[0]
    padding_rows = max(0, (out_rows - 1) * stride[0] +
                          effective_filter_size_rows - input_rows)
    rows_odd = (padding_rows % 2 != 0)

    # Same computation for the column dimension.
    input_cols = input.size(3)
    filter_cols = weight.size(3)
    effective_filter_size_cols = (filter_cols - 1) * dilation[1] + 1
    out_cols = (input_cols + stride[1] - 1) // stride[1]
    padding_cols = max(0, (out_cols - 1) * stride[1] +
                          effective_filter_size_cols - input_cols)
    cols_odd = (padding_cols % 2 != 0)

    # TensorFlow puts any odd leftover padding on the bottom/right,
    # so pad one extra row/column there before the symmetric padding.
    if rows_odd or cols_odd:
        input = F.pad(input, [0, int(cols_odd), 0, int(rows_odd)])

    return F.conv2d(input, weight, bias, stride,
                    padding=(padding_rows // 2, padding_cols // 2),
                    dilation=dilation, groups=groups)
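A quick sanity check of the arithmetic above, using F.pad and F.conv2d directly. The helper name same_pad_1d is hypothetical (not from the original snippet); it just packages the per-dimension formula. The first case has even total padding; the second is the odd case, where TensorFlow places the extra row/column on the bottom/right.

```python
import torch
import torch.nn.functional as F

def same_pad_1d(size, k, stride, dilation):
    # Hypothetical helper: total padding needed along one dimension
    # for TF-style SAME output, i.e. out = ceil(size / stride).
    effective = (k - 1) * dilation + 1
    out = (size + stride - 1) // stride
    return max(0, (out - 1) * stride + effective - size)

# Even case: 5x5 input, 3x3 kernel, stride 2 -> output ceil(5/2) = 3.
x = torch.randn(1, 1, 5, 5)
w = torch.randn(1, 1, 3, 3)
pad = same_pad_1d(5, 3, 2, 1)            # 2, split symmetrically as 1 + 1
out = F.conv2d(x, w, stride=2, padding=pad // 2)
print(out.shape)                          # torch.Size([1, 1, 3, 3])

# Odd case: 5x5 input, 4x4 kernel, stride 1 -> total padding is 3.
# One extra row/column goes on the bottom/right (the F.pad step),
# and the remaining 2 are split symmetrically.
x = torch.randn(1, 1, 5, 5)
w = torch.randn(1, 1, 4, 4)
pad = same_pad_1d(5, 4, 1, 1)            # 3, odd
x = F.pad(x, [0, 1, 0, 1])               # extra column right, extra row bottom
out = F.conv2d(x, w, stride=1, padding=pad // 2)
print(out.shape)                          # torch.Size([1, 1, 5, 5])
```

Both outputs match ceil(input_size / stride), which is exactly TensorFlow's SAME contract.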

It was mostly copy-pasted from TensorFlow code in here and here.

"As you can see, there are a lot of hidden things going on there, and that's why it might not be worth adding a padding='same'. That said, I think not replicating TensorFlow's SAME behavior is not ideal either."

 

This article is based on:

Francisco Massa: Implement "same" padding for convolution operations?

Thanks!!!

 

Reposted from: https://www.cnblogs.com/wang2825/articles/8947634.html
