AlexNet Code Walkthrough

Overview

AlexNet's architecture is straightforward: it is an early CNN design that uses no special tricks.
The network splits into two parts: a feature extractor built from stacked convolution, activation, and max-pooling layers, and a classifier built from fully connected (feed-forward) layers.

Network Architecture

[Figure: AlexNet network architecture]

A Closer Look at the AlexNet Code

import torch
import torch.nn as nn
from torch.hub import load_state_dict_from_url
from typing import Any

__all__ = ['AlexNet', 'alexnet']

model_urls = {
    'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth',
}

class AlexNet(nn.Module):
    def __init__(self, num_classes: int = 1000) -> None:
        super(AlexNet, self).__init__()
        # Feature extractor: a cascade of conv -> ReLU -> max-pool blocks
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Classifier (the convolutional layers above have already extracted the features)
        # AlexNet's convolutional stack is shallow, so it is written directly in self.features
        # Feature stages: (conv, ReLU, pool) x2, (conv, ReLU) x2, conv, ReLU, pool
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))  # force a 6x6 spatial output regardless of input size
        # Classifier stages: (dropout, linear, ReLU) x2, linear
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x
# Factory function: build an AlexNet and optionally load the pretrained ImageNet weights
def alexnet(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> AlexNet:
    model = AlexNet(**kwargs)
    if pretrained:
        state_dict = load_state_dict_from_url(model_urls['alexnet'], progress=progress)
        model.load_state_dict(state_dict)
    return model
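
Before diving into layer-by-layer shapes, a quick smoke test confirms the model runs end to end. This is a minimal sketch that is not part of the original code; it builds the model without pretrained weights and pushes one random 3x224x224 image through it.

# Build the model and run a forward pass on a dummy batch.
net = alexnet(pretrained=False)
x = torch.randn(1, 3, 224, 224)  # a batch holding one 3x224x224 image
logits = net(x)
print(logits.shape)  # torch.Size([1, 1000]) -- one score per ImageNet class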

Now feed a 3x224x224 tensor through the network and watch how its shape changes after each layer of AlexNet.

from torchsummary import summary

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AlexNet().to(device)
summary(model, (3, 224, 224))

The output is shown below.

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 64, 55, 55]          23,296
              ReLU-2           [-1, 64, 55, 55]               0
         MaxPool2d-3           [-1, 64, 27, 27]               0
            Conv2d-4          [-1, 192, 27, 27]         307,392
              ReLU-5          [-1, 192, 27, 27]               0
         MaxPool2d-6          [-1, 192, 13, 13]               0
            Conv2d-7          [-1, 384, 13, 13]         663,936
              ReLU-8          [-1, 384, 13, 13]               0
            Conv2d-9          [-1, 256, 13, 13]         884,992
             ReLU-10          [-1, 256, 13, 13]               0
           Conv2d-11          [-1, 256, 13, 13]         590,080
             ReLU-12          [-1, 256, 13, 13]               0
        MaxPool2d-13            [-1, 256, 6, 6]               0
AdaptiveAvgPool2d-14            [-1, 256, 6, 6]               0
          Dropout-15                 [-1, 9216]               0
           Linear-16                 [-1, 4096]      37,752,832
             ReLU-17                 [-1, 4096]               0
          Dropout-18                 [-1, 4096]               0
           Linear-19                 [-1, 4096]      16,781,312
             ReLU-20                 [-1, 4096]               0
           Linear-21                 [-1, 1000]       4,097,000
================================================================
Total params: 61,100,840
Trainable params: 61,100,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 8.38
Params size (MB): 233.08
Estimated Total Size (MB): 242.03
----------------------------------------------------------------
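
Where do the spatial sizes 55, 27, 13, and 6 come from? Each conv or pooling layer maps an input of size in to floor((in + 2*padding - kernel) / stride) + 1. Below is a small sketch (the helper function is hypothetical, not part of the code above) that reproduces the 224 -> 55 -> 27 -> 13 -> 6 progression:

def out_size(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a conv or pool layer (PyTorch uses floor division)."""
    return (size + 2 * padding - kernel) // stride + 1

s = out_size(224, kernel=11, stride=4, padding=2)  # Conv2d-1:     55
s = out_size(s, kernel=3, stride=2)                # MaxPool2d-3:  27
s = out_size(s, kernel=5, padding=2)               # Conv2d-4:     27
s = out_size(s, kernel=3, stride=2)                # MaxPool2d-6:  13
s = out_size(s, kernel=3, padding=1)               # Conv2d-7:     13 (Conv2d-9 and -11 likewise)
s = out_size(s, kernel=3, stride=2)                # MaxPool2d-13:  6
print(s)  # 6 -> flattened to 256 * 6 * 6 = 9216 features for the classifier

The Param # column can be checked the same way: a conv layer holds out_channels * in_channels * k * k weights plus out_channels biases, so Conv2d-1 has 64 * 3 * 11 * 11 + 64 = 23,296 parameters, matching the table.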