- A so-called 1D convolution is simply one whose kernel is one-dimensional. The principle is the same as 2D conv, except that a 1D conv kernel moves in only one direction: viewed intuitively as a sliding window, a multi-channel 1D kernel slides along the length dimension of the data.
- Moreover, the number of kernels in each group equals the number of channels of the input data.
- However many output channels you set, that is how many groups of kernels there will be, so the weight tensor has shape (out_channels, in_channels, kernel_size); the sketch after this list checks this layout together with the sliding-window arithmetic.
- In practice, take a point cloud file as an example: it holds n×3 data, so before convolving you must first swap the point-count and channel dimensions, because Conv1d expects input of shape (batch, channels, length).
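
As a minimal sketch of the points above (input values here are arbitrary, and the shapes are chosen to match the backup code below), the snippet prints the Conv1d weight shape and reproduces the layer's output with an explicit sliding window along the length dimension:

```python
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=3, out_channels=2, kernel_size=2)
# One group of kernels per output channel, one kernel per input channel:
print(conv.weight.shape)            # torch.Size([2, 3, 2])

x = torch.randn(1, 3, 5)            # (batch, channels, length)
y = conv(x)                         # -> (1, 2, 4)

# Explicit sliding window: at each position, multiply the (3, 2) window by
# that output channel's group of kernels, sum everything, add the bias.
with torch.no_grad():
    manual = torch.empty(1, 2, 4)
    for oc in range(2):
        for t in range(4):
            window = x[0, :, t:t + 2]   # (in_channels, kernel_size)
            manual[0, oc, t] = (window * conv.weight[oc]).sum() + conv.bias[oc]

print(torch.allclose(y, manual, atol=1e-6))  # True
```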
## Code Backup ##
```python
import torch
import torch.nn as nn

# nn.Conv1d arguments:
#   in_channels (int): Number of channels in the input
#   out_channels (int): Number of channels produced by the convolution
#   kernel_size (int or tuple): Size of the convolving kernel
#   stride (int or tuple, optional): Stride of the convolution. Default: 1
#   padding (int or tuple, optional): Zero-padding added to both sides of
#       the input. Default: 0
#   padding_mode (string, optional): ``'zeros'``, ``'reflect'``,
#       ``'replicate'`` or ``'circular'``. Default: ``'zeros'``
#   dilation (int or tuple, optional): Spacing between kernel
#       elements. Default: 1
#   groups (int, optional): Number of blocked connections from input
#       channels to output channels. Default: 1
#   bias (bool, optional): If ``True``, adds a learnable bias to the
#       output. Default: ``True``

conv1 = nn.Conv1d(3, 2, 2, 1)  # in_channels, out_channels, kernel_size, stride
a = torch.ones(1, 5, 3)        # (batch, n_points, 3): point-cloud layout
a = a.permute(0, 2, 1)         # -> (batch, 3, n_points), as Conv1d expects (N, C, L)
print('a:\n', a)
b = conv1(a)
print('b:\n', b)
print(b.shape)                 # (batch, out_channels, length) = (1, 2, 4)
```
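
The printed length 4 follows from the standard Conv1d output-length formula; as a sanity check, `conv1d_out_len` below is a hypothetical helper written for illustration, not a torch function:

```python
# Standard Conv1d output-length formula; conv1d_out_len is a hypothetical
# helper for illustration, not part of torch.
def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    return (l_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

print(conv1d_out_len(5, kernel_size=2, stride=1))  # 4, matches b.shape[-1]
```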