PyTorch Conv2d Parameters Explained

Documentation:

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

Applies a 2D convolution over an input signal composed of several input
planes.

In the simplest case, the output value of the layer with
input size $(N, C_{\text{in}}, H, W)$ and
output $(N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})$
can be precisely described as:

$$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$$

where $\star$ is the valid 2D cross-correlation operator,
$N$ is a batch size, $C$ denotes a number of channels,
$H$ is a height of input planes in pixels, and $W$ is
width in pixels.

  • :attr:stride controls the stride for the cross-correlation, a single
    number or a tuple.

  • :attr:padding controls the amount of implicit zero-paddings on both
    sides for :attr:padding number of points for each dimension.

  • :attr:dilation controls the spacing between the kernel points; also
    known as the à trous algorithm. It is harder to describe, but this link_
    has a nice visualization of what :attr:dilation does.

  • :attr:groups controls the connections between inputs and outputs.
    :attr:in_channels and :attr:out_channels must both be divisible by
    :attr:groups. For example,

    • At groups=1, all inputs are convolved to all outputs.
    • At groups=2, the operation becomes equivalent to having two conv
      layers side by side, each seeing half the input channels,
      and producing half the output channels, and both subsequently
      concatenated.
    • At groups= :attr:in_channels, each input channel is convolved with
      its own set of filters, of size
      $\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor$.
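The equivalence described for groups=2 can be checked directly. The sketch below (channel counts chosen for illustration) shows that a grouped convolution's weight tensor has `in_channels // groups` input channels per filter, and that it behaves like two side-by-side convolutions whose outputs are concatenated:

```python
import torch
import torch.nn as nn

# With groups=2, each group sees in_channels/2 inputs and
# produces out_channels/2 outputs.
conv = nn.Conv2d(in_channels=4, out_channels=6, kernel_size=3, groups=2)

# Weight shape is (out_channels, in_channels // groups, kH, kW).
print(conv.weight.shape)  # torch.Size([6, 2, 3, 3])

# Structurally equivalent: two independent convs over the channel
# halves, with their outputs concatenated along the channel dim.
a = nn.Conv2d(2, 3, 3)
b = nn.Conv2d(2, 3, 3)
x = torch.randn(1, 4, 8, 8)
y = torch.cat([a(x[:, :2]), b(x[:, 2:])], dim=1)
print(y.shape)  # torch.Size([1, 6, 6, 6])
```

The shapes match the grouped layer; the learned weights of course differ, since the modules are initialized independently.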

The parameters :attr:kernel_size, :attr:stride, :attr:padding, :attr:dilation can either be:

- a single ``int`` -- in which case the same value is used for the height and width dimension
- a ``tuple`` of two ints -- in which case, the first `int` is used for the height dimension,
  and the second `int` for the width dimension
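A quick sketch of the int-vs-tuple rule (sizes are arbitrary): passing a single `int` is the same as passing that value for both the height and width dimensions, and PyTorch stores the parameter as a tuple either way:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 10, 10)

# A single int applies to both spatial dimensions...
m1 = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1)
# ...and is equivalent to repeating the value in a tuple.
m2 = nn.Conv2d(1, 1, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))

print(m1.kernel_size, m1.stride, m1.padding)  # (3, 3) (2, 2) (1, 1)
print(m1(x).shape == m2(x).shape)             # True
```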

.. note::

 Depending on the size of your kernel, several (of the last)
 columns of the input might be lost, because it is a valid `cross-correlation`_,
 and not a full `cross-correlation`_.
 It is up to the user to add proper padding.

.. note::

When `groups == in_channels` and `out_channels == K * in_channels`,
where `K` is a positive integer, this operation is also termed in
literature as depthwise convolution.
In other words, for an input of size :math:`(N, C_{in}, H_{in}, W_{in})`,
a depthwise convolution with a depthwise multiplier `K`, can be constructed by arguments
:math:`(in\_channels=C_{in}, out\_channels=C_{in} \times K, ..., groups=C_{in})`.
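The depthwise construction in the note can be sketched as follows (with $C_{in}=3$ and multiplier $K=4$ chosen for illustration): each of the 3 input channels gets its own 4 filters, so each filter has exactly one input channel.

```python
import torch
import torch.nn as nn

C_in, K = 3, 4  # depthwise multiplier K

# groups=C_in with out_channels = C_in * K gives a depthwise conv:
# every input channel is filtered by its own K kernels.
dw = nn.Conv2d(in_channels=C_in, out_channels=C_in * K,
               kernel_size=3, padding=1, groups=C_in)

x = torch.randn(2, C_in, 16, 16)
print(dw(x).shape)      # torch.Size([2, 12, 16, 16])
# Each filter sees a single input channel: in_channels // groups == 1.
print(dw.weight.shape)  # torch.Size([12, 1, 3, 3])
```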

.. include:: cudnn_deterministic.rst

Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional): Accepted values `zeros` and `circular`. Default: `zeros`
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True

Shape:
- Input: $(N, C_{in}, H_{in}, W_{in})$
- Output: $(N, C_{out}, H_{out}, W_{out})$, where

$$H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor$$

$$W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor$$
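The shape formulas can be transcribed directly into a helper and checked against an actual layer. The sketch below reuses the third example from the docstring (non-square kernel with stride, padding, and dilation):

```python
import torch
import torch.nn as nn

def conv_out(size, padding, dilation, kernel, stride):
    # Direct transcription of the H_out / W_out formula above.
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
x = torch.randn(20, 16, 50, 100)

h = conv_out(50, 4, 3, 3, 2)   # height dimension uses index [0]
w = conv_out(100, 2, 1, 5, 1)  # width dimension uses index [1]
print((h, w))     # (26, 100)
print(m(x).shape) # torch.Size([20, 33, 26, 100])
```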

Attributes:
weight (Tensor): the learnable weights of the module of shape
$(\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1])$.
The values of these weights are sampled from
$\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where
$k = \frac{1}{C_\text{in} \cdot \prod_{i=0}^{1}\text{kernel\_size}[i]}$
bias (Tensor): the learnable bias of the module of shape $(\text{out\_channels})$. If :attr:bias is True,
then the values of these weights are
sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where
$k = \frac{1}{C_\text{in} \cdot \prod_{i=0}^{1}\text{kernel\_size}[i]}$
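Both the weight shape and the initialization bound can be verified empirically. A sketch (channel counts chosen for illustration, groups=1 so $C_{in}/\text{groups} = C_{in}$):

```python
import math
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, groups=1)

# Shape: (out_channels, in_channels / groups, kH, kW)
print(conv.weight.shape)  # torch.Size([16, 8, 3, 3])

# Bound of the uniform init: sqrt(k), k = 1 / (C_in * kH * kW)
k = 1.0 / (8 * 3 * 3)
bound = math.sqrt(k)
print(conv.weight.abs().max().item() <= bound)  # True
print(conv.bias.abs().max().item() <= bound)    # True
```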

Examples::

>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)

# Worked examples
### in_channels (int): Number of channels in the input image
   - pass
### out_channels (int): Number of channels produced by the convolution
- pass
### kernel_size (int or tuple): Size of the convolving kernel
### stride (int or tuple, optional): Stride of the convolution. Default: 1
### padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
Padding is symmetric: the given width is added to both sides of each spatial dimension.
$out_{size} = \frac{n - m + 2p}{s} + 1$, where $n$ is the input size, $m$ the kernel size, $p$ the padding, and $s$ the stride.
```python
In [93]: input = torch.rand(1, 1, 5, 5)  # N, C, H, W

In [94]: net = torch.nn.Conv2d(1, 1, 3, 1, padding=0)
In [96]: net(input).shape
Out[96]: torch.Size([1, 1, 3, 3])

In [97]: net = torch.nn.Conv2d(1, 1, 3, 1, padding=(2, 1))
In [98]: net(input).shape
Out[98]: torch.Size([1, 1, 7, 5])
```

### padding_mode (string, optional): Accepted values `zeros` and `circular`. Default: `zeros`

### dilation (int or tuple, optional): Spacing between kernel elements. Default: 1

### groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1

### bias (bool, optional): If True, adds a learnable bias to the output. Default: True
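The dilation parameter is easiest to see through its effect on the output size. A sketch (input size chosen for illustration): dilation=2 spreads a 3x3 kernel over a 5x5 receptive field, so the output shrinks as if the kernel were 5x5.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 9, 9)

plain   = nn.Conv2d(1, 1, kernel_size=3, dilation=1)
dilated = nn.Conv2d(1, 1, kernel_size=3, dilation=2)

# Effective kernel extent: dilation * (kernel - 1) + 1  ->  3 vs 5
print(plain(x).shape)    # torch.Size([1, 1, 7, 7])
print(dilated(x).shape)  # torch.Size([1, 1, 5, 5])
```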

