CLASS torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
Parameters
kernel_size – the size of the window to take a max over (the size of the pooling window)
stride – the stride of the window. Default value is kernel_size
padding – implicit zero padding to be added on both sides
dilation – a parameter that controls the stride of elements in the window
return_indices – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later
ceil_mode – when True, will use ceil instead of floor to compute the output shape
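These parameters jointly determine the output size. Per the PyTorch docs, H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1, with ceil replacing floor when ceil_mode=True. A minimal sketch of that formula (pool_out_size is my own helper name, not part of torch):
import math

def pool_out_size(h_in, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False):
    # Illustrative helper mirroring the output-size formula in the MaxPool2d docs.
    # stride defaults to kernel_size, just like in the module itself.
    if stride is None:
        stride = kernel_size
    rounding = math.ceil if ceil_mode else math.floor
    return rounding((h_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1

print(pool_out_size(5, kernel_size=3, ceil_mode=True))   # 2
print(pool_out_size(5, kernel_size=3, ceil_mode=False))  # 1
These predictions match the two results shown below.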
Code in practice:
With ceil_mode=True:
import torch
from torch import nn
from torch.nn import MaxPool2d
input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
# MaxPool2d expects an (N, C, H, W) input, so add batch and channel dimensions
input = torch.reshape(input, (-1, 1, 5, 5))
print(input.shape)  # torch.Size([1, 1, 5, 5])
# Build the network
class Peipei(nn.Module):
    def __init__(self):
        super(Peipei, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=True)

    def forward(self, input1):
        output = self.maxpool1(input1)
        return output
# Instantiate the network
peipei = Peipei()
output = peipei(input)
print(output)
Result:
tensor([[[[2., 3.],
          [5., 1.]]]])
With ceil_mode=False, the same code gives:
Result:
tensor([[[[2.]]]])
The difference comes from the edge windows: with a 5×5 input and kernel_size=3 (stride defaults to kernel_size), only one complete 3×3 window fits, so ceil_mode=False yields a 1×1 output, while ceil_mode=True also keeps the partial windows at the right and bottom edges and yields 2×2.
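The parameter list above mentions return_indices; as a side note, here is a minimal sketch of how it pairs with torch.nn.MaxUnpool2d, reusing the 5×5 input from the example above (this pairing is my own illustration, not part of the original example):
from torch.nn import MaxUnpool2d

pool = MaxPool2d(kernel_size=3, return_indices=True)  # forward returns (values, indices)
unpool = MaxUnpool2d(kernel_size=3)
values, indices = pool(input)
# Scatter the max values back to the positions they came from; all other entries are zero.
restored = unpool(values, indices, output_size=input.size())
print(restored.shape)  # torch.Size([1, 1, 5, 5])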
Why apply max pooling at all? What is it for?
It preserves the salient features of the data while greatly reducing the amount of data, which speeds up training. For example, pooling a 32×32 CIFAR10 image with kernel_size=3 shrinks it to 10×10, roughly a tenth of the original number of values.
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("data", train=False, download=True,
                                       transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)
# Build the network
class Peipei(nn.Module):
    def __init__(self):
        super(Peipei, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=False)

    def forward(self, input1):
        output = self.maxpool1(input1)
        return output
# Instantiate the network
peipei = Peipei()
writer = SummaryWriter("logs_maxpool")
step = 0
for data in dataloader:
    imgs, target = data
    output = peipei(imgs)
    writer.add_images("input", imgs, step)
    writer.add_images("output", output, step)
    step = step + 1
writer.close()
Output: run tensorboard --logdir=logs_maxpool and compare the "input" and "output" image grids; the pooled images look like low-resolution, mosaic-like versions of the originals, but the main shapes are still recognizable.
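As a quick sanity check (my own addition, reusing the dataloader and network above), pooling keeps the batch and channel dimensions and only shrinks the spatial ones:
imgs, _ = next(iter(dataloader))
print(imgs.shape)          # torch.Size([64, 3, 32, 32])
print(peipei(imgs).shape)  # torch.Size([64, 3, 10, 10]), since floor((32 - 3) / 3) + 1 = 10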