nn.AvgPool1d(kernel_size, stride, padding) vs nn.AdaptiveAvgPool1d(N)

This post covers PyTorch's two 1D average-pooling layers, nn.AvgPool1d and nn.AdaptiveAvgPool1d. nn.AvgPool1d requires you to specify the pooling window size, which suits cases where the input length is known. nn.AdaptiveAvgPool1d instead adapts its pooling regions to a target output size, which simplifies things when input lengths vary. Usage examples are shown for both, along with a look at the source code to help explain how they work.

nn.AvgPool1d requires you to hand-design the pooling parameters for the output length you need, whereas nn.AdaptiveAvgPool1d is more like a module an expert has already designed, ready for "muggles" like me to use off the shelf.


1. nn.AvgPool1d

None of the blog posts I found explain L_out clearly; just read the official documentation. For nn.AvgPool1d, the output length is L_out = floor((L_in + 2 * padding - kernel_size) / stride + 1).

N - Batch Size

C - Channel number

L - Length

Tensor data formats (N, C, H, W) in deep learning — d_b_'s blog, CSDN
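A minimal sketch (not from the original post) checking the L_out formula above against an actual nn.AvgPool1d layer; the kernel_size/stride/padding values here are arbitrary illustrative choices:

```python
import math

import torch
import torch.nn as nn

kernel_size, stride, padding = 4, 2, 1
pool = nn.AvgPool1d(kernel_size, stride=stride, padding=padding)

x = torch.randn(8, 16, 50)  # (N=8, C=16, L_in=50)
y = pool(x)

# Official formula: L_out = floor((L_in + 2*padding - kernel_size)/stride + 1)
l_out = math.floor((50 + 2 * padding - kernel_size) / stride + 1)
print(y.shape)  # torch.Size([8, 16, 25]) -- matches l_out = 25
assert y.shape == (8, 16, l_out)
```

Note that changing L_in changes L_out, which is exactly why the kernel parameters must be re-derived by hand whenever the input length changes.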

2. nn.AdaptiveAvgPool1d(N)

AdaptiveAvgPool1d(N) applies 1D average pooling to an input tensor of shape (B, C, L) and produces an output of shape (B, C, N).

Source code

class AdaptiveAvgPool1d(_AdaptiveAvgPoolNd):
    r"""Applies a 1D adaptive average pooling over an input signal composed of several input planes.

    The output size is :math:`L_{out}`, for any input size.
    The number of output features is equal to the number of input planes.

    Args:
        output_size: the target output size :math:`L_{out}`.

    Shape:
        - Input: :math:`(N, C, L_{in})` or :math:`(C, L_{in})`.
        - Output: :math:`(N, C, L_{out})` or :math:`(C, L_{out})`, where
          :math:`L_{out}=\text{output\_size}`.

    Examples:
        >>> # target output size of 5
        >>> m = nn.AdaptiveAvgPool1d(5)
        >>> input = torch.randn(1, 64, 8)
        >>> output = m(input)

    """

    output_size: _size_1_t

    def forward(self, input: Tensor) -> Tensor:
        return F.adaptive_avg_pool1d(input, self.output_size)
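A short sketch (not from the original post) showing the key property: the output length is always the target N, no matter what L_in is. The input sizes below are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool1d(5)  # target output length N = 5

for l_in in (8, 20, 100):        # varying input lengths
    x = torch.randn(1, 64, l_in)     # (B, C, L_in)
    y = pool(x)
    assert y.shape == (1, 64, 5)     # output is always (B, C, 5)

# With L_in = 10 and N = 5, each output element averages 2 adjacent inputs:
x = torch.arange(10.0).reshape(1, 1, 10)
print(pool(x))  # tensor([[[0.5000, 2.5000, 4.5000, 6.5000, 8.5000]]])
```

This is what makes it convenient when input lengths are not fixed, e.g. right before a fully connected layer.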

Source code analysis

torch.nn.AdaptiveAvgPool analysis — JerryLiu1998's blog, CSDN

References

AvgPool1d — PyTorch 1.12 documentation

torch.nn.AdaptiveAvgPool1d(N) explained — wang xiang's blog, CSDN

nn.AdaptiveAvgPool1d() — 饿了就干饭's blog, CSDN
