PyTorch Learning Notes (3): Building a Convolutional Neural Network Model

Environment

  • OS: macOS Mojave
  • Python version: 3.7
  • PyTorch version: 1.4.0
  • IDE: PyCharm


0. Foreword

Building models with PyTorch is quite convenient. The process roughly breaks down into three parts:

  1. Build the network layers: define the convolution, pooling, linear, activation, and other layers in the __init__ constructor of a class that inherits from torch.nn.Module;
  2. Define the forward pass: override the forward instance method;
  3. Define the initialization scheme: use the functions in torch.nn.init.
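Step 3 is not shown in the examples below, so here is a minimal sketch of type-based weight initialization. The `Net` class and its `_initialize_weights` helper are hypothetical; the layer shapes are placeholders:

```python
from torch.nn import Module, Conv2d, Linear, init


class Net(Module):  # toy network; layer sizes are placeholders
    def __init__(self):
        super(Net, self).__init__()
        self.conv = Conv2d(1, 6, 5)
        self.fc = Linear(16 * 5 * 5, 10)
        self._initialize_weights()

    def _initialize_weights(self):
        # iterate over all submodules and initialize each by layer type
        for m in self.modules():
            if isinstance(m, Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                init.constant_(m.bias, 0)
            elif isinstance(m, Linear):
                init.normal_(m.weight, 0, 0.01)
                init.constant_(m.bias, 0)


net = Net()
print(float(net.fc.bias.abs().sum()))  # 0.0 -- biases were zero-initialized
```

Calling `self.modules()` recurses through all submodules, so one loop covers every layer regardless of how the model is nested.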

1. Model Containers

Assembling a model in PyTorch relies on Containers, which include Module, Sequential, ModuleList, and ModuleDict. Module is the base class of all models; beyond that, Sequential is the one most commonly seen in practice. This section briefly records how to use them.

1.1 The Module class

torch.nn.Module is the base class of all models.

import torch  # needed for torch.flatten in forward
from torch.nn import Module
from torch.nn import Conv2d, ReLU, MaxPool2d, Linear


class LeNet5(Module):  # inherit from Module
    def __init__(self, num_classes=10):
        super(LeNet5, self).__init__()
        self.conv1 = Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)
        self.conv2 = Conv2d(6, 16, 5)
        self.linear1 = Linear(in_features=16 * 5 * 5, out_features=120)
        self.linear2 = Linear(120, 84)
        self.linear3 = Linear(84, num_classes)
        self.relu = ReLU()
        self.maxpool = MaxPool2d(kernel_size=2)

    def forward(self, x):  # a module represents an operation, so it must implement the forward method
        x = self.conv1(x)  # 1 x 28 x 28 -> 6 x 28 x 28
        x = self.relu(x)
        x = self.maxpool(x)  # 6 x 28 x 28 -> 6 x 14 x 14
        x = self.conv2(x)  # 6 x 14 x 14 -> 16 x 10 x 10
        x = self.relu(x)
        x = self.maxpool(x)  # 16 x 10 x 10 -> 16 x 5 x 5

        x = torch.flatten(x, 1)

        x = self.linear1(x)  # 400 -> 120
        x = self.relu(x)
        x = self.linear2(x)  # 120 -> 84
        x = self.relu(x)
        x = self.linear3(x)  # 84 -> num_classes

        return x

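The shape annotations in forward can be verified layer by layer with a random input; a quick standalone check:

```python
import torch
from torch.nn import Conv2d, ReLU, MaxPool2d

x = torch.randn(1, 1, 28, 28)        # one MNIST-sized input
x = Conv2d(1, 6, 5, padding=2)(x)    # -> (1, 6, 28, 28)
x = ReLU()(MaxPool2d(2)(x))          # -> (1, 6, 14, 14)
x = Conv2d(6, 16, 5)(x)              # -> (1, 16, 10, 10)
x = MaxPool2d(2)(x)                  # -> (1, 16, 5, 5)
print(torch.flatten(x, 1).shape)     # torch.Size([1, 400])
```

This confirms why linear1 takes 16 * 5 * 5 = 400 input features.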

Attributes of a torch.nn.Module instance
(original figure omitted: a Module maintains eight OrderedDict attributes, including _parameters, _buffers, _modules, and several hook dicts)
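These dicts can be inspected directly. A small sketch (the Tiny class is just an illustration): submodules assigned in __init__ are registered in _modules, and raw Parameters in _parameters, via Module's __setattr__:

```python
import torch
from torch.nn import Module, Conv2d


class Tiny(Module):  # hypothetical module, just for inspection
    def __init__(self):
        super(Tiny, self).__init__()
        self.conv = Conv2d(1, 6, 5)                     # registered in _modules
        self.scale = torch.nn.Parameter(torch.ones(1))  # registered in _parameters


t = Tiny()
print(list(t._modules))     # ['conv']
print(list(t._parameters))  # ['scale']
print(sorted(name for name, _ in t.named_parameters()))
# named_parameters also recurses into submodules: ['conv.bias', 'conv.weight', 'scale']
```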

1.2 The Sequential class

The torch.nn.Sequential class chains multiple layers together in order. There are two common ways to pass layers into a Sequential:

  1. Pass the layers directly into Sequential; after instantiation they are indexed by number.
from torch.nn import Module, Sequential
from torch.nn import Conv2d, ReLU, MaxPool2d, Linear


class LeNet5(Module):
    def __init__(self, num_classes=10):
        super(LeNet5, self).__init__()

        self.features = Sequential(
            Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2),
            ReLU(),
            MaxPool2d(kernel_size=2, stride=2),
            Conv2d(6, 16, 5),
            ReLU(),
            MaxPool2d(2, 2)
        )

        self.classifier = Sequential(
            Linear(in_features=16 * 5 * 5, out_features=120),
            ReLU(),
            Linear(120, 84),
            ReLU(),
            Linear(84, num_classes)
        )

    def forward(self, x):
        x = self.features(x)  # with Sequential, forward becomes much more concise
        x = torch.flatten(x, 1)
        x = self.classifier(x)

        return x


if __name__ == '__main__':
    lenet = LeNet5()
    print(lenet.features[3])  # indexed by number
# Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))

  2. Pass an OrderedDict into Sequential; after instantiation the layers can be accessed by their custom names.
from collections import OrderedDict
from torch.nn import Module, Sequential
from torch.nn import Conv2d, ReLU, MaxPool2d, Linear


class LeNet5(Module):
    def __init__(self, num_classes=10):
        super(LeNet5, self).__init__()

        self.features = Sequential(OrderedDict({
            'conv1': Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2),
            'relu1': ReLU(),
            'pool1': MaxPool2d(kernel_size=2, stride=2),

            'conv2': Conv2d(6, 16, 5),
            'relu2': ReLU(),
            'pool2': MaxPool2d(2, 2)
        }))

        self.classifier = Sequential(OrderedDict({
            'fc1': Linear(in_features=16 * 5 * 5, out_features=120),
            'relu1': ReLU(),
            'fc2': Linear(120, 84),
            'relu2': ReLU(),
            'fc3': Linear(84, num_classes)
        }))

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)

        return x


if __name__ == '__main__':
    lenet = LeNet5()
    print(lenet.features.conv2)
# Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
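A practical benefit of naming the layers is that the keys in state_dict become readable, which makes saving and loading checkpoints easier to debug. A standalone sketch with a smaller Sequential:

```python
from collections import OrderedDict
from torch.nn import Sequential, Conv2d, ReLU

features = Sequential(OrderedDict({
    'conv1': Conv2d(1, 6, 5),
    'relu1': ReLU(),
}))
# parameter keys are prefixed with the layer names; ReLU has no parameters
print(list(features.state_dict().keys()))  # ['conv1.weight', 'conv1.bias']
```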

2. Building an AlexNet Model

With the above in hand, it is straightforward to put together the classic convolutional neural network AlexNet, following the torchvision reference implementation.


Import the required modules and classes

import torch
from torch.nn import Module, Sequential, Conv2d, MaxPool2d, Linear, ReLU, Dropout, AdaptiveAvgPool2d

A .py file containing a model often serves as a module; __all__ declares what gets imported when the module is imported with a wildcard (from module import *).

__all__ = ['AlexNet']

Define the network structure

class AlexNet(Module):

    def __init__(self, num_classes=1000):
        super(AlexNet, self).__init__()
        self.features = Sequential(
            Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=3, stride=2),
            Conv2d(64, 192, kernel_size=5, padding=2),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=3, stride=2),
            Conv2d(192, 384, kernel_size=3, padding=1),
            ReLU(inplace=True),
            Conv2d(384, 256, kernel_size=3, padding=1),
            ReLU(inplace=True),
            Conv2d(256, 256, kernel_size=3, padding=1),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = AdaptiveAvgPool2d((6, 6))
        self.classifier = Sequential(
            Dropout(),
            Linear(256 * 6 * 6, 4096),
            ReLU(inplace=True),
            Dropout(),
            Linear(4096, 4096),
            ReLU(inplace=True),
            Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x


3. Models Provided by torchvision

The torchvision.models module provides a number of classic models.

  • Classic neural networks (in chronological order)
from torchvision.models import alexnet, vgg, googlenet, inception, resnet, densenet
  • Classic lightweight neural networks
from torchvision.models import mobilenet, shufflenetv2, squeezenet
  • Classic architectures found by neural architecture search
from torchvision.models import mnasnet