Inception Net-V3 Structure Diagrams

Preface

Google Inception Net took first place in the 2014 ImageNet Large Scale Visual Recognition Competition (ILSVRC). The network won through structural innovation: by replacing the fully connected layers with a global average pooling layer, it drastically reduced the parameter count, making it a very practical model; this version is generally called Inception V1. The subsequent Inception V2 introduced Batch Normalization, which sped up training convergence. Inception V3 further factorizes a 2D convolution into two 1D convolutions, which not only reduces the number of parameters but also mitigates overfitting.
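As a concrete illustration of that factorization, here is a minimal sketch (the channel count 64 and the 35×35 input are arbitrary choices for the example, not values from the model) comparing a full 3×3 convolution with the equivalent 1×3-then-3×1 pair:

```python
import torch
import torch.nn as nn

# A plain 3x3 convolution: 9 * C_in * C_out weights (plus bias).
full = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Inception V3-style factorization: a 1x3 followed by a 3x1 convolution
# covers the same 3x3 receptive field with 6 * C_in * C_out weights.
factored = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1)),
    nn.Conv2d(64, 64, kernel_size=(3, 1), padding=(1, 0)),
)

x = torch.randn(1, 64, 35, 35)
print(full(x).shape, factored(x).shape)  # identical spatial sizes
print(sum(p.numel() for p in full.parameters()),      # 36928
      sum(p.numel() for p in factored.parameters()))  # 24704
```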

1. How Many Layers?

How many layers does Inception V3 actually have? One book says 42, another says 46. Counting carefully against the reference implementation's source code, the answer should be 47.

[Figure: layer hierarchy diagram]

5 (stem) +
3 (block1_module1) + 3 (block1_module2) + 3 (block1_module3) +
3 (block2_module1) + 5 (block2_module2) + 5 (block2_module3) + 5 (block2_module4) + 5 (block2_module5) +
4 (block3_module1) + 3 (block3_module2) + 3 (block3_module3)
= 47 layers

Tip: the layer hierarchy diagram above appears in various posts and books; according to the implementation source code, the area marked with the red box should contain 5 convolutional layers, not 4.
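As a hedged cross-check (this assumes torchvision's Inception3 layer naming, and that the red box marks the stem convolutions), recent torchvision versions do place five BasicConv2d layers before the first Inception module, matching the count of 5 above:

```python
from torchvision import models
from torchvision.models.inception import BasicConv2d

model = models.inception_v3(weights=None)
stem = ['Conv2d_1a_3x3', 'Conv2d_2a_3x3', 'Conv2d_2b_3x3',
        'Conv2d_3b_1x1', 'Conv2d_4a_3x3']
# Each stem entry is a BasicConv2d (Conv2d + BatchNorm2d + ReLU).
print(sum(isinstance(getattr(model, n), BasicConv2d) for n in stem))  # 5
```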

2. Detailed Network Structure

The detailed network structure and its sub-network structures are shown in the figures below; a short inspection sketch follows the figure list.

[Figure: overall structure diagram]

[Figure: block1_module1]

[Figure: block1_module2]

[Figure: block1_module3]

[Figure: block2_module1]

[Figure: block2_module2]

[Figure: block2_module3,4]

[Figure: block2_module5]

[Figure: block3_module1]

[Figure: block3_module2]

[Figure: block3_module3]
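For readers without the original figures, one way to cross-reference these module names is to list the top-level children of torchvision's Inception V3 (a sketch under the assumption that torchvision's naming, Mixed_5b through Mixed_7c, corresponds to the block1..block3 modules in the figures):

```python
from torchvision import models

# List the top-level modules of torchvision's Inception V3; Mixed_5*,
# Mixed_6* and Mixed_7* roughly correspond to block1..block3 above.
model = models.inception_v3(weights=None)
for name, child in model.named_children():
    print(name, '->', child.__class__.__name__)
```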



Author: 忆霜晨
Link: https://www.jianshu.com/p/3bbf0675cfce
Source: Jianshu (简书)
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.

The following is a code implementation that combines DenseNet with Inception, along with the process of drawing the network structure graph. This example uses PyTorch.

First, we import the necessary libraries:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
```

Next, we define a Dense Block, the basic unit of DenseNet, composed of multiple convolutional layers:

```python
class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate, num_layers):
        super(DenseBlock, self).__init__()
        # Layer i consumes the concatenation of the input and all previous
        # layer outputs, so its input width is in_channels + i * growth_rate.
        self.layers = nn.ModuleList([
            nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                      kernel_size=3, stride=1, padding=1)
            for i in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        # Output width: in_channels + num_layers * growth_rate.
        return torch.cat(features, dim=1)
```

Then we define a Transition layer, which halves the number of channels with a 1×1 convolution and halves the spatial resolution with average pooling:

```python
class Transition(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(Transition, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=1, stride=1, padding=0)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        out = self.conv(x)
        out = self.pool(F.relu(out))
        return out
```

Next, we combine Dense Blocks and Transition layers to build the full DenseNet:

```python
class DenseNet(nn.Module):
    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16),
                 num_classes=1000):
        super(DenseNet, self).__init__()
        # Initial convolution.
        self.conv = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Dense blocks with transitions between them.
        num_features = 64
        self.dense_blocks = nn.ModuleList()
        self.trans_layers = nn.ModuleList()
        for i, num_layers in enumerate(block_config):
            self.dense_blocks.append(
                DenseBlock(num_features, growth_rate, num_layers))
            num_features += num_layers * growth_rate
            if i != len(block_config) - 1:
                self.trans_layers.append(
                    Transition(num_features, num_features // 2))
                num_features //= 2
        # Final layers.
        self.bn = nn.BatchNorm2d(num_features)
        self.fc = nn.Linear(num_features, num_classes)
        # Initialize weights.
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out',
                                        nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        out = self.conv(x)
        out = self.pool(F.relu(out))
        # There is one fewer Transition than Dense Blocks, so iterate by
        # index rather than zip(), which would silently drop the last block.
        for i, block in enumerate(self.dense_blocks):
            out = block(out)
            if i < len(self.trans_layers):
                out = self.trans_layers[i](out)
        out = F.relu(self.bn(out))
        out = F.avg_pool2d(out, kernel_size=7, stride=1).view(out.size(0), -1)
        out = self.fc(out)
        return out
```

Now we combine DenseNet and Inception by replacing each Inception module of an Inception V3-style backbone with a Dense Block. Every channel count follows from `in_channels + num_layers * growth_rate` of the preceding block:

```python
class InceptionDenseNet(nn.Module):
    def __init__(self, num_classes=1000):
        super(InceptionDenseNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Conv2d(64, 192, kernel_size=3, stride=1, padding=1),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            DenseBlock(192, 32, 5),    # 192 + 5*32  = 352 channels
            Transition(352, 176),
            DenseBlock(176, 32, 10),   # 176 + 10*32 = 496 channels
            Transition(496, 248),
            DenseBlock(248, 32, 15),   # 248 + 15*32 = 728 channels
            Transition(728, 364),
            DenseBlock(364, 32, 10),   # 364 + 10*32 = 684 channels
            Transition(684, 342),
            DenseBlock(342, 32, 5),    # 342 + 5*32  = 502 channels
            nn.BatchNorm2d(502),
            nn.ReLU(inplace=True),
            # Global pooling, so any input size (e.g. 299x299) works.
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(502, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x = self.classifier(x)
        return x
```

Finally, we can draw the structure graph of the whole network with Graphviz. Here we use `make_dot` from the `torchviz` package (install with `pip install torchviz`; it requires Graphviz), which traverses the autograd graph of the output tensor:

```python
from torchviz import make_dot

model = InceptionDenseNet()
x = torch.randn(1, 3, 299, 299)
y = model(x)
params = dict(model.named_parameters())
g = make_dot(y, params=params)
g.view()
```

This renders the structure graph of the entire network and opens it automatically in the default viewer.
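As a quick sanity check on the channel arithmetic (a minimal sketch that assumes the DenseBlock and InceptionDenseNet definitions above are in scope):

```python
import torch

# DenseBlock channel growth: 192 + 5 * 32 = 352.
block = DenseBlock(192, 32, 5)
print(block(torch.randn(1, 192, 38, 38)).shape)  # torch.Size([1, 352, 38, 38])

# End-to-end smoke test of the combined model on a 299x299 input.
model = InceptionDenseNet(num_classes=1000)
print(model(torch.randn(1, 3, 299, 299)).shape)  # torch.Size([1, 1000])
```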
