Classic networks and how to compute their FLOPs

This post shows how to use TensorBoard's add_graph function to visualize classic network models such as AlexNet, VGG Net, GoogLeNet, and ResNet, and discusses how to compute the FLOPs of these models. By setting the input_to_model parameter and the verbose option, the model can be traced without printing detailed graph information.

Parameters of add_graph, the TensorBoard function used to display a model diagram:

add_graph(model, input_to_model=None, verbose=False, **kwargs)

Parameters

    model (torch.nn.Module): the network model to visualize
    input_to_model (torch.Tensor or list of torch.Tensor, optional): a variable or a tuple of variables to feed into the network
    verbose (bool, optional): whether to print detailed graph information; verbose=False (the default) suppresses the detailed output
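Putting the three parameters together, a minimal end-to-end use of add_graph might look like the sketch below (the log directory name `runs/demo` and the small stand-in model are arbitrary choices; any `nn.Module` and matching dummy input work):

```python
import torch
from torch import nn
from torch.utils.tensorboard import SummaryWriter

# A small stand-in model; any nn.Module can be passed to add_graph.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(True),
)

writer = SummaryWriter("runs/demo")        # log directory (arbitrary name)
dummy_input = torch.randn(1, 3, 32, 32)    # shape must match the model's expected input
writer.add_graph(model, input_to_model=dummy_input, verbose=False)
writer.close()
# View the graph with: tensorboard --logdir runs
```

The dummy input is required because add_graph traces the model by actually running it once.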

1. AlexNet

import torch
from torch import nn
from torchstat import stat


class AlexNet(nn.Module):
    def __init__(self, num_classes):
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),   # b, 64, 55, 55
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # b, 64, 27, 27

            nn.Conv2d(64, 192, kernel_size=5, padding=2),            # b, 192, 27, 27
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=3, stride=2),                   # b, 192, 13, 13

            nn.Conv2d(192, 384, kernel_size=3, padding=1),           # b, 384, 13, 13
            nn.ReLU(True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),           # b, 256, 13, 13
            nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),           # b, 256, 13, 13
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=3, stride=2))                   # b, 256, 6, 6
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Linear(4096, num_classes))

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 256 * 6 * 6)   # flatten: b, 9216
        x = self.classifier(x)
        return x


model = AlexNet(10)
stat(model, (3, 224, 224))   # prints per-layer params, memory, and FLOPs
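The per-layer FLOPs that `stat` reports for a convolution can be sanity-checked by hand with the usual formula: multiply-accumulates ≈ C_out · H_out · W_out · (C_in · k² + 1). Below is a sketch for AlexNet's first layer; note that conventions differ between tools (whether the bias is counted, and whether FLOPs = MACs or 2 × MACs), so treat the exact number as approximate:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution/pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

# Conv2d(3, 64, kernel_size=11, stride=4, padding=2) on a 3x224x224 input
h = w = conv_out(224, kernel=11, stride=4, padding=2)   # 55
macs = 64 * h * w * (3 * 11 * 11 + 1)                   # multiply-accumulates (+1 for bias)
params = 64 * (3 * 11 * 11 + 1)                         # weights + biases
print(h, macs, params)   # 55 70470400 23296
```

The same formula applied layer by layer, summed over the network, reproduces the total that torchstat prints (up to the counting convention).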

2. VGG Net

# VGG-16 model

from torch import nn
from torchstat import stat


class VGG(nn.Module):
    def __init__(self, num_classes):
        super(VGG, self).__init__()                        # b, 3, 224, 224
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),    # b, 64, 224, 224
            nn.ReLU(True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),   # b, 64, 224, 224
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2),         # b, 64, 112, 112

            nn.Conv2d(64, 128, kernel_size=3, padding=1),    # b, 128, 112, 112
            nn.ReLU(True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),   # b, 128, 112, 112
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2),           # b, 128, 56, 56

            nn.Conv2d(128, 256, kernel_size=3, padding=1),   # b, 256, 56, 56
            nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),   # b, 256, 56, 56
            nn.ReLU(True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),   # b, 256, 56, 56
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2),           # b, 256, 28, 28

            nn.Conv2d(256, 512, kernel_size=3, padding=1),   # b, 512, 28, 28
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),   # b, 512, 28, 28
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),   # b, 512, 28, 28
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2),           # b, 512, 14, 14

            nn.Conv2d(512, 512, kernel_size=3, padding=1),   # b, 512, 14, 14
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),   # b, 512, 14, 14
            nn.ReLU(True),
            nn.Conv2d(512, 512, kernel_size=3, padding=1),   # b, 512, 14, 14
            nn.ReLU(True),
            nn.MaxPool2d(kernel_size=2, stride=2))           # b, 512, 7, 7
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes))

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), 512 * 7 * 7)   # flatten: b, 25088
        x = self.classifier(x)
        return x


model = VGG(10)
stat(model, (3, 224, 224))
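Besides FLOPs, torchstat also reports parameter counts per layer. These can be cross-checked with plain PyTorch using a small helper; the sketch below verifies VGG's first convolution, Conv2d(3, 64, 3, padding=1), where the count is 64 · (3 · 3² + 1) = 1,792:

```python
from torch import nn

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# VGG's first layer: 64 filters of shape 3x3x3, plus 64 biases
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
print(count_params(conv))   # 64 * (3 * 3 * 3 + 1) = 1792
```

The same helper applied to the full model gives the grand total that torchstat prints in its summary row.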
       