Network Architecture Notes -- Convolution

Below, both the input image and the kernel are assumed to be square.

i: input image size

o: output image size

k: kernel size

p: zero padding

s: stride

1、No zero padding, unit strides

Relationship 1. For any i and k, and for s = 1 and p = 0, o = (i − k) + 1.

 

2、Zero padding, unit strides

Relationship 2. For any i, k and p, and for s = 1, o = (i − k) + 2p + 1.
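As a sanity check, Relationship 2 (which subsumes Relationship 1 when p = 0) can be computed directly; a minimal sketch, with a helper name of our own choosing:

```python
def conv_out_unit_stride(i, k, p):
    # Relationship 2: for s = 1, o = (i - k) + 2p + 1
    return (i - k) + 2 * p + 1

# 5x5 input, 4x4 kernel, padding 2 -> 6x6 output
print(conv_out_unit_stride(5, 4, 2))  # -> 6

# With p = 0 this reduces to Relationship 1: o = (i - k) + 1
print(conv_out_unit_stride(4, 3, 0))  # -> 2
```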

2.1、Half (same) padding

Input size = output size.

Relationship 3. For any i and for k odd (k = 2n + 1, n ∈ ℕ), s = 1 and p = ⌊k/2⌋ = n,

o = i + 2⌊k/2⌋ − (k − 1)

  = i + 2n − 2n
  = i.
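Half padding can be verified numerically across several sizes; a small check using the unit-stride formula from Relationship 2 (helper name is ours):

```python
def conv_out_unit_stride(i, k, p):
    # Relationship 2: for s = 1, o = (i - k) + 2p + 1
    return (i - k) + 2 * p + 1

# Half ("same") padding: p = floor(k/2) for odd k leaves the size unchanged.
for i in (5, 7, 9):
    for k in (3, 5, 7):
        assert conv_out_unit_stride(i, k, k // 2) == i
print("same padding preserves the input size")
```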

2.2、Full padding

Output size > input size.

Relationship 4. For any i and k, and for p = k − 1 and s = 1,

o = i + 2(k − 1) − (k − 1)

  = i + (k − 1).
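The same one-line formula confirms that full padding enlarges the output, since every partial overlap of kernel and input now produces a value (a minimal sketch; the helper is ours):

```python
def conv_out_unit_stride(i, k, p):
    # Relationship 2: for s = 1, o = (i - k) + 2p + 1
    return (i - k) + 2 * p + 1

# Full padding: p = k - 1 gives o = i + (k - 1).
i, k = 5, 3
assert conv_out_unit_stride(i, k, k - 1) == i + (k - 1)
print(conv_out_unit_stride(i, k, k - 1))  # -> 7
```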

 

3、No zero padding, non-unit strides

Relationship 5. For any i, k and s, and for p = 0, o = ⌊(i − k)/s⌋ + 1. The floor function accounts for the fact that sometimes the last possible step does not coincide with the kernel reaching the end of the input; some input units are simply left out.
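The effect of the floor is easy to see numerically: inputs of different sizes can yield the same strided output size. A small sketch (function name is ours):

```python
import math

def conv_out_strided(i, k, s):
    # Relationship 5: for p = 0, o = floor((i - k) / s) + 1
    return math.floor((i - k) / s) + 1

# With i = 6, k = 3, s = 2 the kernel stops one unit short of the edge,
# yet the output size equals that of i = 5.
print(conv_out_strided(6, 3, 2), conv_out_strided(5, 3, 2))  # -> 2 2
```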

 

4、Zero padding, non-unit strides

Relationship 6. For any i, k, p and s, o = ⌊(i + 2p − k)/s⌋ + 1. Because of the floor, convolutions with different input sizes can share the same output size. While this does not affect the analysis for convolutions, it will complicate the analysis in the case of transposed convolutions.
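The ambiguity can be demonstrated directly with the general formula (helper name is ours):

```python
import math

def conv_out(i, k, p, s):
    # The general case: o = floor((i + 2p - k) / s) + 1
    return math.floor((i + 2 * p - k) / s) + 1

# Inputs of size 5 and 6 both map to a 3x3 output with k=3, p=1, s=2.
# This many-to-one mapping is what makes the transposed case ambiguous.
assert conv_out(5, 3, 1, 2) == conv_out(6, 3, 1, 2) == 3
print("sizes 5 and 6 share output size 3")
```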

5、Pooling

Since pooling involves no zero padding, its output size follows the same arithmetic as an unpadded convolution: for any i, k and s, o = ⌊(i − k)/s⌋ + 1.

6、Transposed convolution

A 3 × 3 convolution over a 4 × 4 input can be written as a sparse matrix that maps the flattened 16-dimensional input space to a 4-dimensional output space. Transposing that matrix gives a map from the 4-dimensional space back to the 16-dimensional space, while keeping the connectivity pattern of the convolution depicted in Figure 2.1 of the guide. This operation is known as a transposed convolution.

 

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input, while maintaining a connectivity pattern that is compatible with said convolution.

Transposed convolutions are also called fractionally strided convolutions or, sometimes, deconvolutions.

The simplest way to think about a transposed convolution on a given input is to imagine such an input as being the result of a direct convolution applied on some initial feature map. The transposed convolution can then be considered as the operation that allows one to recover the shape of this initial feature map. Note that it only recovers the shape: it does not, in general, recover the original input values.

Implementing a transposed convolution by zero-padding the input and then applying a direct convolution works, but it is inefficient.

 

Interestingly, this corresponds to a fully padded convolution with unit strides.
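For the unit-stride, no-padding case, this equivalence lets us check the shape-recovery claim with nothing but the output-size formula; a minimal sketch (helper name is ours):

```python
def conv_out_unit_stride(i, k, p):
    # Relationship 2: for s = 1, o = (i - k) + 2p + 1
    return (i - k) + 2 * p + 1

# The transpose of an unpadded, unit-stride k x k convolution is equivalent
# to a fully padded (p = k - 1) direct convolution with the same kernel size.
i, k = 4, 3
o = conv_out_unit_stride(i, k, 0)              # direct conv: 4 -> 2
recovered = conv_out_unit_stride(o, k, k - 1)  # transposed, as full-padded conv: 2 -> 4
assert recovered == i
print(i, "->", o, "->", recovered)  # -> 4 -> 2 -> 4
```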

 

7、Miscellaneous convolutions

7.1 Dilated convolutions

Dilated convolutions enlarge the receptive field of the network while preserving the feature-map resolution.

Caveats: checkerboard/gridding artifacts, and loss of information from immediately neighboring units, which the dilated kernel skips over.
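For output-size purposes, a kernel dilated by d behaves like a larger kernel of effective size k + (k − 1)(d − 1); a small sketch, with a function name of our own choosing:

```python
import math

def dilated_conv_out(i, k, p, s, d):
    # Effective kernel size grows to k + (k - 1)(d - 1); the rest is the
    # usual formula o = floor((i + 2p - k_eff) / s) + 1.
    k_eff = k + (k - 1) * (d - 1)
    return math.floor((i + 2 * p - k_eff) / s) + 1

# A 3x3 kernel with d = 2 behaves like a 5x5 kernel for sizing purposes:
assert dilated_conv_out(7, 3, 0, 1, 2) == dilated_conv_out(7, 5, 0, 1, 1) == 3
print("dilated 3x3 (d=2) sizes like a 5x5")
```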

References:

Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
