torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
Parameters:
- in_channels (int) - Number of channels in the input image
- out_channels (int) - Number of channels produced by the convolution
- kernel_size (int or tuple) - Size of the convolving kernel
- stride (int or tuple, optional) - Stride of the convolution. Default: 1
- padding (int or tuple, optional) - Zero-padding added to both sides of the input. Default: 0
- dilation (int or tuple, optional) - Spacing between kernel elements. Default: 1
- groups (int, optional) - Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) - If True, adds a learnable bias to the output. Default: True
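The parameters above can be illustrated with a minimal sketch. The channel counts, kernel size, and input dimensions below are arbitrary choices for demonstration; with a 3x3 kernel, padding=1 keeps the spatial size unchanged while out_channels determines the channel dimension of the output.

```python
import torch
import torch.nn as nn

# 3 input channels -> 16 output channels, 3x3 kernel,
# padding=1 so spatial size is preserved (illustrative values)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
                 stride=1, padding=1)

x = torch.randn(1, 3, 32, 32)  # (batch, channels, height, width)
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

Setting stride=2 instead would roughly halve the spatial dimensions, and groups=3 would split the 3 input channels into independent groups (out_channels must then be divisible by groups).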
Reference:
https://blog.csdn.net/u014525760/article/details/80647339