In [19]: conv2 = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(4,3,2))
In [23]: for (name, param) in conv2.named_parameters():
...: print(name)
weight
bias
In other words, every convolution layer has two trainable parameters: one is weight, the other is bias. Concretely, they are conv2.weight and conv2.bias.
print(conv2.weight.shape)
torch.Size([32, 3, 4, 3, 2])
Since in_channels is 3, out_channels is 32, and the filter (kernel) shape is 4*3*2,
each filter has 3*(4*3*2) parameters, i.e. in_channels * filter shape.
The layer has 32 filters, so the weight holds 32*3*4*3*2 parameters in total, which is exactly the shape of conv2.weight.
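The arithmetic above can be checked directly. A minimal sketch (the layer `conv` below mirrors the `conv2` defined earlier; it assumes a standard PyTorch install):

```python
import torch
import torch.nn as nn

# The weight tensor of a convolution layer has shape
# (out_channels, in_channels, *kernel_size), so its element count is
# out_channels * in_channels * prod(kernel_size).
conv = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(4, 3, 2))

assert conv.weight.shape == torch.Size([32, 3, 4, 3, 2])
weight_count = conv.weight.numel()
assert weight_count == 32 * 3 * 4 * 3 * 2  # 2304 weight parameters
```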
In [44]: conv2.bias
Out[44]:
Parameter containing:
tensor([ 0.0269, -0.0939, -0.0467, 0.0744, 0.0148, -0.0733, 0.0333, -0.0539,
-0.0310, -0.0291, -0.1154, -0.1048, 0.0117, 0.0682, 0.0352, -0.0773,
0.0971, 0.0425, -0.0431, -0.0425, -0.0439, 0.0840, -0.0024, 0.0546,
-0.0344, 0.0732, 0.0632, -0.0010, -0.1169, 0.1142, -0.0022, 0.0458],
requires_grad=True)
In [45]: conv2.bias.shape
Out[45]: torch.Size([32])
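The bias output above shows one scalar per output channel, so the layer's full parameter count is the weight count plus out_channels. A small sketch summing over `parameters()` (again using a fresh layer with the same configuration):

```python
import torch.nn as nn

conv = nn.Conv3d(in_channels=3, out_channels=32, kernel_size=(4, 3, 2))

# named_parameters() yields ("weight", ...) and ("bias", ...);
# summing numel() over them gives the total trainable parameter count.
total = sum(p.numel() for p in conv.parameters())

# weight: 32*3*4*3*2 = 2304, bias: one per output channel = 32
assert total == 2304 + 32
```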