nn.AdaptiveAvgPool1d()

nn.AdaptiveAvgPool1d(5)

Adaptive average pooling resizes the last (length) dimension to 5 values; the batch and channel dimensions are untouched, i.e.
8 ----> 5
(1, 9, 8) ----> (1, 9, 5)
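
A quick shape check (a minimal sketch) confirms that only the last dimension changes:

import torch
from torch import nn
m = nn.AdaptiveAvgPool1d(5)
x = torch.randn(1, 9, 8)
print(m(x).shape)  # torch.Size([1, 9, 5]) -- batch and channel dims are preserved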

import torch
from torch import nn

m = nn.AdaptiveAvgPool1d(5)  # target output size of 5
input = torch.randn(1, 9, 8)  # autograd.Variable is deprecated; a plain tensor works
print(input)
tensor([[[ 3.0320e-01,  3.6879e-01, -1.0896e+00, -1.3251e+00,  2.1113e-01,
          -1.3839e+00, -1.2036e+00, -1.5154e+00],
         [ 4.2374e-01, -1.2755e+00,  2.0100e-01,  1.0388e+00,  1.0022e+00,
          -7.3641e-01, -1.3196e+00,  2.3717e-01],
         [ 2.9847e-01, -7.8317e-01, -1.1153e+00,  1.2325e+00, -6.0868e-01,
          -6.0629e-01, -1.0107e+00, -1.8141e-01],
         [ 9.8106e-01,  1.8035e+00,  1.5790e-01, -7.5658e-01, -1.5404e+00,
          -4.7695e-01,  3.9196e-01,  7.1803e-01],
         [-1.4173e+00,  6.5709e-01,  5.9748e-01, -2.3872e+00, -1.3445e+00,
          -5.9272e-01, -2.0092e-01, -2.0429e+00],
         [-3.7274e-01, -6.3365e-01, -3.1410e-01, -5.2788e-01,  2.9406e-01,
           4.7489e-01, -3.2312e-01,  2.5546e+00],
         [-4.9002e-01, -5.4810e-01,  1.4601e+00,  3.2844e-01, -3.7427e-01,
          -4.9377e-01,  2.8525e-01, -2.7228e+00],
         [-2.3692e-03,  2.1962e+00,  9.3386e-02,  4.1071e-01, -8.9220e-01,
          -2.4525e-01, -6.0414e-01,  4.3239e-01],
         [-9.7711e-01, -4.6109e-01, -3.6924e-01,  8.7918e-02, -1.1675e+00,
          -6.8003e-01, -6.2466e-01, -6.5148e-01]]])
output = m(input)
print(output)
tensor([[[ 0.3360, -0.6820, -0.5570, -0.7921, -1.3595],
         [-0.4259, -0.0119,  1.0205, -0.3513, -0.5412],
         [-0.2423, -0.2220,  0.3119, -0.7419, -0.5961],
         [ 1.3923,  0.4016, -1.1485, -0.5418,  0.5550],
         [-0.3801, -0.3776, -1.8658, -0.7127, -1.1219],
         [-0.5032, -0.4919, -0.1169,  0.1486,  1.1157],
         [-0.5191,  0.4135, -0.0229, -0.1943, -1.2188],
         [ 1.0969,  0.9001, -0.2407, -0.5805, -0.0859],
         [-0.7191, -0.2475, -0.5398, -0.8241, -0.6381]]])
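
How are these 5 values computed from each row of 8? A minimal sketch of the usual floor/ceil window rule for adaptive pooling (the helper below is my own reference implementation, not PyTorch's code, but it reproduces the numbers above): output position i averages input[start:end] with start = floor(i * L_in / L_out) and end = ceil((i + 1) * L_in / L_out).

import math
import torch
from torch import nn

def adaptive_avg_pool1d_ref(x, output_size):
    # Each output position i averages the input slice [start_i, end_i), where
    # start_i = floor(i * L_in / L_out) and end_i = ceil((i + 1) * L_in / L_out).
    L_in = x.shape[-1]
    cols = []
    for i in range(output_size):
        start = (i * L_in) // output_size
        end = math.ceil((i + 1) * L_in / output_size)
        cols.append(x[..., start:end].mean(dim=-1))
    return torch.stack(cols, dim=-1)

# For L_in=8, L_out=5 the windows are [0:2], [1:4], [3:5], [4:7], [6:8].
# Applied to the first input row above this gives 0.3360, -0.6820, -0.5570, -0.7921, -1.3595,
# which matches the first output row.
row = torch.tensor([[[0.3032, 0.3688, -1.0896, -1.3251, 0.2111, -1.3839, -1.2036, -1.5154]]])
print(adaptive_avg_pool1d_ref(row, 5))
# Cross-check against nn.AdaptiveAvgPool1d on random data
x = torch.randn(1, 9, 8)
print(torch.allclose(adaptive_avg_pool1d_ref(x, 5), nn.AdaptiveAvgPool1d(5)(x), atol=1e-6))  # True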
m = nn.AdaptiveAvgPool1d(1)  # target output size of 1
input = torch.randn(1, 9, 8)
print(input)
tensor([[[-0.4052, -0.2062, -0.3630,  0.5443, -1.1570,  0.9105, -1.7502,
          -2.0864],
         [ 0.1581, -1.6536, -0.9496, -0.5045,  0.4973, -1.6026,  1.8087,
          -1.5534],
         [-0.5572,  2.0890, -1.3753, -0.5857,  0.9093, -0.3246,  0.5703,
           0.7315],
         [ 0.5320,  0.2400, -1.8946,  1.2201, -0.8956,  0.3155, -0.4960,
           0.2246],
         [-0.7937,  0.1326,  1.5602, -0.1684,  1.3426,  0.3997, -0.7715,
          -0.1143],
         [-1.2648, -0.9803,  1.2850, -0.5430,  1.0204, -0.6017, -0.3234,
           0.8067],
         [ 1.4721,  0.4670,  0.3708,  1.1734, -0.1523,  2.5045, -0.8696,
           0.8504],
         [-0.0640, -1.6002,  0.8113, -0.5312, -0.4878,  1.7936, -0.1291,
           0.2794],
         [-0.1011, -0.9912, -0.5175, -0.3998,  0.7953, -0.9220, -0.8127,
          -1.2444]]])
output = m(input)
print(output)

tensor([[[-0.5642],
         [-0.4749],
         [ 0.1821],
         [-0.0942],
         [ 0.1984],
         [-0.0751],
         [ 0.7270],
         [ 0.0090],
         [-0.5242]]])

Summing the first input row [-0.4052, -0.2062, -0.3630, 0.5443, -1.1570, 0.9105, -1.7502, -2.0864] and dividing by 8 (the row has 8 values, not 9) gives -0.5642, which matches the first output value. So nn.AdaptiveAvgPool1d(1) simply averages each of the 9 channels over the last (length) dimension.