Deep Learning Convolutional Neural Networks, Classic Architectures: Building and Implementing GoogLeNet (Inception V3)

I. Overview of the Inception Network (from Google): GoogLeNet

The safest way to obtain a higher-quality model is to increase its depth (number of layers) or its width (number of kernels or neurons per layer). Under a straightforward design, however, this approach has several drawbacks:

1. Too many parameters: with a limited training set, the model overfits easily;

2. More computation: the larger the network, the higher the computational cost and the harder it is to deploy;

3. Vanishing gradients: the deeper the network, the more the gradient decays as it propagates backward, making the model hard to optimize.

The fundamental way to address these drawbacks is to convert fully connected layers, and even ordinary convolutions, into sparse connections. To break network symmetry and improve learning capacity, traditional networks used random sparse connectivity. However, computer hardware and software are very inefficient at computing over non-uniform sparse data, which is why AlexNet reverted to fully connected layers: dense structures are much easier to optimize for parallel computation. The question, then, is whether there is a method that preserves the sparsity of the network structure while still exploiting the high computational efficiency of dense matrices.


II. The Inception Module

The main idea of the Inception architecture is to work out how to approximate the optimal local sparse structure using readily available dense components.


Notes on the figure above:

1. Convolution kernels of different sizes mean receptive fields of different sizes; concatenating the branch outputs fuses features at multiple scales.

2. Kernel sizes of 1x1, 3x3 and 5x5 are chosen mainly to make alignment easy: with stride = 1, setting padding to 0, 1 and 2 respectively produces "same"-style convolutions whose outputs have identical spatial dimensions, so the features can be concatenated directly.

3. The paper notes that pooling has proven effective in many networks, so a pooling branch is embedded in Inception as well.

4. Features become more abstract deeper in the network, and each feature's receptive field grows, so the proportion of 3x3 and 5x5 convolutions should increase with depth.

The purpose of Inception: instead of manually choosing the filter type for a convolutional layer, or deciding whether to create a convolutional or a pooling layer at all, the network makes these choices itself. You give the network all the plausible options, concatenate their outputs, and let it learn which parameters it actually needs.


The drawback of the naive Inception module: computational cost. The 5x5 convolutions alone are very expensive, requiring roughly 120 million operations.



To reduce the computational cost, 1x1 convolutions are used for dimensionality reduction, as illustrated below:


A 1x1 convolution is inserted before the 3x3 and 5x5 filters and after the max pooling, and the branch outputs are concatenated along the channel (depth) axis. The final output is 28x28x256, with roughly 4x fewer convolution parameters than before, giving the final version of the Inception module:
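To see where the savings come from, here is a back-of-the-envelope multiplication count for one 5x5 branch (illustrative assumptions, not taken from the figure: a 28x28x192 input, 32 output channels, and a 1x1 bottleneck down to 16 channels):

# Rough multiply counts for one 5x5 branch (illustrative assumption:
# 28x28x192 input, 32 output channels, 'same' padding).
naive = 28 * 28 * 32 * 5 * 5 * 192            # direct 5x5 conv: ~120.4 million
bottleneck = (28 * 28 * 16 * 192              # 1x1 reduction to 16 channels
              + 28 * 28 * 32 * 5 * 5 * 16)    # 5x5 conv on the reduced volume
print(naive, bottleneck)                      # ~120.4M vs ~12.4M, roughly 10x cheaper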




III. GoogLeNet


1. GoogLeNet: the Inception V1 structure


GoogLeNet's design revolves around two ideas:

(1) Depth. The network has 22 layers. To avoid the vanishing-gradient problem mentioned above, GoogLeNet cleverly adds two auxiliary losses at different depths to keep gradients from dying out during backpropagation.

(2) Width. Multiple kernel types are used: 1x1, 3x3 and 5x5, plus a direct max-pooling branch. Applied naively to the feature maps, however, the concatenated output would have a very large channel depth. To avoid this, the Inception module used in GoogLeNet places 1x1 convolutions before the 3x3 and 5x5 convolutions and after the max pooling, which reduces the channel depth of the feature maps.


Notes on the figure above:
(1) GoogLeNet is clearly built from Inception modules (9 of them), 22 layers in total, which makes the network easy to extend and modify;
(2) The network ends with average pooling instead of a fully connected layer, an idea taken from NIN (Network in Network). GoogLeNet has only about 1/12 the parameters of AlexNet yet outperforms it; the paper reports that this choice improves top-1 accuracy by 0.6%. In practice a fully connected layer is still appended at the very end, mainly to make fine-tuning convenient;
(3) Although the large fully connected layers were removed, the network still uses dropout;
(4) To mitigate vanishing gradients, two auxiliary softmax classifiers are added to propagate gradients to earlier layers. The paper says their losses should be weighted by a decay coefficient, although the Caffe model does not apply any such weighting. At test time, these two auxiliary softmax heads are removed.
(5) The version of GoogLeNet described above uses the Inception V1 structure.

2. The Inception V2 structure

A larger kernel gives a larger receptive field but also more parameters: a 5x5 kernel has 25/9 ≈ 2.78 times as many parameters as a 3x3 kernel. The authors therefore proposed replacing a single 5x5 convolution with a small network of two stacked 3x3 convolutions (stride = 1). This is the Inception V2 structure: the receptive field is preserved while the parameter count drops, as shown in the figure below:
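Expressed in slim, the substitution looks like this (a minimal sketch with hypothetical channel counts; with equal channel counts, the two 3x3 layers use 2 x 9 = 18 weights per input-output channel pair instead of 25, while covering the same 5x5 receptive field):

# One 5x5 convolution ...
net_a = slim.conv2d(inputs, 64, [5, 5], scope='conv_5x5')
# ... replaced by two stacked 3x3 convolutions (stride 1, SAME padding by default)
net_b = slim.conv2d(inputs, 64, [3, 3], scope='conv_3x3_a')
net_b = slim.conv2d(net_b, 64, [3, 3], scope='conv_3x3_b')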




3. The Inception V3 structure

If a large kernel can be replaced by a series of 3x3 convolutions, can the factorization be pushed even further? The paper considers nx1 kernels, for example replacing a 3x3 convolution as shown below. In general, any nxn convolution can be replaced by a 1xn convolution followed by an nx1 convolution. In practice, the authors found this factorization does not work well in the early layers; it is more effective on medium-sized feature maps (for an mxm feature map, m between 12 and 20 is recommended). Here n = 7 is chosen to handle the 17x17 feature maps. This structure was formally adopted in GoogLeNet V2.
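A minimal slim sketch of the asymmetric factorization (illustrative channel counts; the identical 1x7 / 7x1 pattern appears in the Mixed_6b module implemented later in this post):

# One 7x7 convolution (49 weights per channel pair) ...
net_a = slim.conv2d(inputs, 192, [7, 7], scope='conv_7x7')
# ... factorized into a 1x7 followed by a 7x1 (7 + 7 = 14 weights per channel pair)
net_b = slim.conv2d(inputs, 192, [1, 7], scope='conv_1x7')
net_b = slim.conv2d(net_b, 192, [7, 1], scope='conv_7x1')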





4. The Inception V4 structure, which combines Inception with the residual network (ResNet).

References:
http://blog.csdn.net/stdcoutzyx/article/details/51052847
http://blog.csdn.net/shuzfan/article/details/50738394#googlenet-inception-v2


5. Inception-ResNet V1 & Inception-ResNet V2



IV. Implementing the Full Architecture

The file GoogLeNet.py implements the forward pass of the Inception V3 network and defines its parameters:


(I) Introduction to slim


The slim module was released in 2016; its main purpose is so-called "code slimming".

TensorFlow's official description: nothing in this directory is officially supported, and it may change or be removed at any time; each subdirectory has designated owners. It is intended to hold additional functionality and contributions that may eventually be merged into core TensorFlow, but whose interfaces may still change, or which need further testing to see whether they gain wider acceptance. So slim still does not belong to native TensorFlow.

slim is a library that makes building, training and evaluating neural networks simple. It eliminates much of the repetitive boilerplate of raw TensorFlow, making code more compact and readable. slim also provides many well-known computer vision models (VGG, AlexNet, etc.), which we can not only use directly but also extend in various ways.


1. Importing slim
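The import is a single pair of lines (the same imports used in the full listing later in this post):

import tensorflow as tf
import tensorflow.contrib.slim as slim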


2. slim submodules and what they do


(1) arg_scope: in addition to the basic name_scope and variable_scope, slim adds arg_scope, which controls the default hyperparameters of each layer. Suppose your network repeats the same arguments over and over, as below:



 
 
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')
Rewritten with arg_scope:

 
 
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')
Inside an arg_scope, the listed layers receive the specified default arguments; to give a particular layer different values, simply pass them explicitly (effectively an override). arg_scopes can also be nested:


 
 
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')

(2) layers: this one matters most; it is the core of slim, containing the definitions of complex layers.


Compare implementing one convolutional layer in raw TensorFlow versus in slim:


 
 
# A convolutional layer in raw TensorFlow
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)


 
 
# The same layer in slim
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

Particularly attractive are slim's repeat and stack operations. Suppose we define three identical convolutional layers; slim's repeat cuts the code down:


 
 
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

 
 
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
stack handles the case where kernel sizes or output widths differ. Suppose we define three fully connected layers:


 
 
# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')
With stack:


slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')
 
 
stack applied to convolutional layers:



 
 
# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')
# With stack:
slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

(3) nets: a collection of classic networks (VGG and others); also widely used.


(4) variables: quite useful; this is slim's mechanism for managing variables.

Variables fall into two classes: model variables and regular (local) variables. Local variables are not saved as model parameters, whereas model variables are written out at save time; things like global_step are local variables. In slim you can specify the device a variable lives on, along with its regularizer and initializer. Note the retrieval functions as well: get_variables returns all variables.

An example of defining variables in slim:



 
 
# Model variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()

(5) regularizers: regularization rules.

(6) metrics: evaluation metrics for models.

(7) queues: input-queue management; quite useful.

(8) learning and losses: utilities for training and for defining loss functions.


(II) The inception_v3 network structure


(III) Implementing inception_v3 with slim, step by step


1. Define inception_v3_arg_scope, which generates default arguments for the functions used throughout the network





 
 
import tensorflow as tf
import tensorflow.contrib.slim as slim

# Small helper producing a truncated-normal initializer
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)

# Generates default arguments for the functions used throughout the network
def inception_v3_arg_scope(weight_decay=0.00004, stddev=0.1,
                           batch_norm_var_collection="moving_vars"):
    batch_norm_params = {
        "decay": 0.9997, "epsilon": 0.001, "updates_collections": tf.GraphKeys.UPDATE_OPS,
        "variables_collections": {
            "beta": None, "gamma": None, "moving_mean": [batch_norm_var_collection],
            "moving_variance": [batch_norm_var_collection]
        }
    }
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        weights_regularizer=slim.l2_regularizer(weight_decay)):
        # Default arguments for the convolution-layer generator
        # (note: the weight initializer goes to weights_initializer, and the
        # slim.conv2d keyword names are activation_fn / normalizer_fn)
        with slim.arg_scope([slim.conv2d],
                            weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
                            activation_fn=tf.nn.relu,
                            normalizer_fn=slim.batch_norm,
                            normalizer_params=batch_norm_params) as scope:
            return scope
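The returned scope is meant to be entered later with slim.arg_scope, so that every layer built inside it inherits these defaults. A quick usage sketch (images is an assumed input tensor; inception_v3 is defined in step 4 below):

with slim.arg_scope(inception_v3_arg_scope()):
    # every slim.conv2d created here picks up the BN / ReLU / L2 defaults above
    logits, end_points = inception_v3(images, num_classes=1000)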

2. Define the convolutional backbone of Inception V3



 
 
# The convolutional part of Inception V3
def inception_v3_base(inputs, scope=None):
    end_points = {}
    with tf.variable_scope(scope, "InceptionV3", [inputs]):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding="VALID"):
            net = slim.conv2d(inputs, num_outputs=32, kernel_size=[3, 3], stride=2, scope="Conv2d_1a_3x3")
            net = slim.conv2d(net, num_outputs=32, kernel_size=[3, 3], scope="Conv2d_2a_3x3")
            net = slim.conv2d(net, num_outputs=64, kernel_size=[3, 3], padding="SAME", scope="Conv2d_2b_3x3")
            net = slim.max_pool2d(net, kernel_size=[3, 3], stride=2, scope="MaxPool_3a_3x3")
            net = slim.conv2d(net, num_outputs=80, kernel_size=[1, 1], scope="Conv2d_3b_1x1")
            net = slim.conv2d(net, num_outputs=192, kernel_size=[3, 3], scope="Conv2d_4a_3x3")
            net = slim.max_pool2d(net, kernel_size=[3, 3], stride=2, scope="MaxPool_5a_3x3")
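For the canonical 299x299x3 input, this stem shrinks the feature map as follows (a quick shape trace; VALID padding unless noted):

299x299x3 -> Conv2d_1a_3x3 (stride 2)  -> 149x149x32
          -> Conv2d_2a_3x3             -> 147x147x32
          -> Conv2d_2b_3x3 (SAME)      -> 147x147x64
          -> MaxPool_3a_3x3 (stride 2) -> 73x73x64
          -> Conv2d_3b_1x1             -> 73x73x80
          -> Conv2d_4a_3x3             -> 71x71x192
          -> MaxPool_5a_3x3 (stride 2) -> 35x35x192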
3.1 Define the first Inception module group



 
 
        # First Inception module group, first module (Mixed_5b)
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding="SAME"):
            with tf.variable_scope("Mixed_5b"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=48, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=64, kernel_size=[5, 5], scope="Conv2d_0b_5x5")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0c_3x3")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=32, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
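With stride 1 and SAME padding throughout this arg_scope, all four branches preserve the 35x35 spatial size, so the channel-axis concat yields 64 + 64 + 96 + 32 = 256 channels: Mixed_5b outputs a 35x35x256 tensor.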


 
 
            # First group, second Inception module (Mixed_5c)
            with tf.variable_scope("Mixed_5c"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=48, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=64, kernel_size=[5, 5], scope="Conv2d_0c_5x5")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0c_3x3")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
            # First group, third Inception module (Mixed_5d), same structure as Mixed_5c
            with tf.variable_scope("Mixed_5d"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=48, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=64, kernel_size=[5, 5], scope="Conv2d_0c_5x5")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_2 = slim.conv2d(batch_2, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0c_3x3")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
3.2 Define the second Inception module group



 
 
            # Second Inception module group, first module (Mixed_6a)
            with tf.variable_scope("Mixed_6a"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=384, kernel_size=[3, 3],
                                          stride=2, padding="VALID", scope="Conv2d_1a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=64, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=96, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_1 = slim.conv2d(batch_1, num_outputs=96, kernel_size=[3, 3],
                                          stride=2, padding="VALID", scope="Conv2d_1a_1x1")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.max_pool2d(net, kernel_size=[3, 3], stride=2, padding="VALID",
                                              scope="MaxPool_1a_3x3")
                net = tf.concat([batch_0, batch_1, batch_2], 3)
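Mixed_5c and Mixed_5d each output 64 + 64 + 96 + 64 = 288 channels at 35x35. Mixed_6a then halves the grid with its stride-2 VALID branches: concatenating 384 + 96 + 288 (the pooled input) channels gives a 17x17x768 output.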



 
 
            # Second Inception module group, second module (Mixed_6b);
            # note the 1x7 / 7x1 factorized convolutions
            with tf.variable_scope("Mixed_6b"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=128, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=128, kernel_size=[1, 7], scope="Conv2d_0b_1x7")
                    batch_1 = slim.conv2d(batch_1, num_outputs=192, kernel_size=[7, 1], scope="Conv2d_0c_7x1")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=128, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=128, kernel_size=[7, 1], scope="Conv2d_0b_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=128, kernel_size=[1, 7], scope="Conv2d_0c_1x7")
                    batch_2 = slim.conv2d(batch_2, num_outputs=128, kernel_size=[7, 1], scope="Conv2d_0d_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=192, kernel_size=[1, 7], scope="Conv2d_0e_1x7")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)


 
 
            # Second Inception module group, third module (Mixed_6c)
            with tf.variable_scope("Mixed_6c"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0b_1x7")
                    batch_1 = slim.conv2d(batch_1, num_outputs=192, kernel_size=[7, 1], scope="Conv2d_0c_7x1")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0b_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0c_1x7")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0d_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=192, kernel_size=[1, 7], scope="Conv2d_0e_1x7")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
            # Second Inception module group, fourth module (Mixed_6d), same structure as Mixed_6c
            with tf.variable_scope("Mixed_6d"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0b_1x7")
                    batch_1 = slim.conv2d(batch_1, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0c_7x1")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0b_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0c_1x7")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0d_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=192, kernel_size=[1, 7], scope="Conv2d_0e_1x7")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
            # Second Inception module group, fifth module (Mixed_6e)
            with tf.variable_scope("Mixed_6e"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0b_1x7")
                    batch_1 = slim.conv2d(batch_1, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0c_7x1")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=160, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0b_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[1, 7], scope="Conv2d_0c_1x7")
                    batch_2 = slim.conv2d(batch_2, num_outputs=160, kernel_size=[7, 1], scope="Conv2d_0d_7x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=192, kernel_size=[1, 7], scope="Conv2d_0e_1x7")
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
            # Last module of the second group: store Mixed_6e in end_points
            # (it is used later by the auxiliary classifier)
            end_points["Mixed_6e"] = net
3.3 Define the third Inception module group


 
 
            # Third Inception module group, first module (Mixed_7a);
            # note Branch_0's second conv takes batch_0, not net, as input
            with tf.variable_scope("Mixed_7a"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_0 = slim.conv2d(batch_0, num_outputs=320, kernel_size=[3, 3], stride=2,
                                          padding="VALID", scope="Conv2d_1a_3x3")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=192, kernel_size=[1, 7], scope="Conv2d_0b_1x7")
                    batch_1 = slim.conv2d(batch_1, num_outputs=192, kernel_size=[7, 1], scope="Conv2d_0c_7x1")
                    batch_1 = slim.conv2d(batch_1, num_outputs=192, kernel_size=[3, 3], stride=2,
                                          padding="VALID", scope="Conv2d_1a_3x3")
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.max_pool2d(net, kernel_size=[3, 3], stride=2, padding="VALID",
                                              scope="MaxPool_1a_3x3")
                net = tf.concat([batch_0, batch_1, batch_2], 3)


 
 
            # Third Inception module group, second module (Mixed_7b)
            with tf.variable_scope("Mixed_7b"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=320, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=384, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = tf.concat([
                        slim.conv2d(batch_1, num_outputs=384, kernel_size=[1, 3], scope="Conv2d_0b_1x3"),
                        slim.conv2d(batch_1, num_outputs=384, kernel_size=[3, 1], scope="Conv2d_0b_3x1")], axis=3)
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=448, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=384, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_2 = tf.concat([
                        slim.conv2d(batch_2, num_outputs=384, kernel_size=[1, 3], scope="Conv2d_0c_1x3"),
                        slim.conv2d(batch_2, num_outputs=384, kernel_size=[3, 1], scope="Conv2d_0d_3x1")], axis=3)
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
            # Third Inception module group, third module (Mixed_7c), same structure as Mixed_7b
            with tf.variable_scope("Mixed_7c"):
                with tf.variable_scope("Branch_0"):
                    batch_0 = slim.conv2d(net, num_outputs=320, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                with tf.variable_scope("Branch_1"):
                    batch_1 = slim.conv2d(net, num_outputs=384, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_1 = tf.concat([
                        slim.conv2d(batch_1, num_outputs=384, kernel_size=[1, 3], scope="Conv2d_0b_1x3"),
                        slim.conv2d(batch_1, num_outputs=384, kernel_size=[3, 1], scope="Conv2d_0b_3x1")], axis=3)
                with tf.variable_scope("Branch_2"):
                    batch_2 = slim.conv2d(net, num_outputs=448, kernel_size=[1, 1], scope="Conv2d_0a_1x1")
                    batch_2 = slim.conv2d(batch_2, num_outputs=384, kernel_size=[3, 3], scope="Conv2d_0b_3x3")
                    batch_2 = tf.concat([
                        slim.conv2d(batch_2, num_outputs=384, kernel_size=[1, 3], scope="Conv2d_0c_1x3"),
                        slim.conv2d(batch_2, num_outputs=384, kernel_size=[3, 1], scope="Conv2d_0d_3x1")], axis=3)
                with tf.variable_scope("Branch_3"):
                    batch_3 = slim.avg_pool2d(net, kernel_size=[3, 3], scope="AvgPool_0a_3x3")
                    batch_3 = slim.conv2d(batch_3, num_outputs=192, kernel_size=[1, 1], scope="Conv2d_0b_1x1")
                net = tf.concat([batch_0, batch_1, batch_2, batch_3], 3)
    return net, end_points
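Mixed_7a reduces the grid to 8x8 (320 + 192 + 768 pooled = 1280 channels); Mixed_7b and Mixed_7c keep that size, each concatenating 320 + (384 + 384) + (384 + 384) + 192 = 2048 channels. The backbone therefore returns an 8x8x2048 tensor, which is why the classification head in the next step average-pools with an 8x8 kernel.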
4. The final part of the network: global average pooling, softmax, and the auxiliary logits (the auxiliary classifier head)


 



 
 
def inception_v3(inputs, num_classes=1000, is_training=True, dropout_keep_prob=0.8,
                 prediction_fn=slim.softmax, spatial_squeeze=True, reuse=None, scope="InceptionV3"):
    """
    Builds the complete InceptionV3 network.
    Args:
        inputs -- input tensor
        num_classes -- number of output classes
        is_training -- whether this is the training phase
        dropout_keep_prob -- fraction of units kept by dropout
        prediction_fn -- final classification function, softmax by default
        spatial_squeeze -- whether to squeeze size-1 spatial dimensions from the output
        reuse -- whether to reuse the network and its Variables
        scope -- default name scope for the function
    Returns:
        logits -- the final output
        end_points -- dict of important endpoints, including the auxiliary head
    """
    with tf.variable_scope(scope, "InceptionV3", [inputs, num_classes],
                           reuse=reuse) as scope:
        with slim.arg_scope([slim.batch_norm, slim.dropout],
                            is_training=is_training):
            # The convolutional backbone defined above
            net, end_points = inception_v3_base(inputs, scope=scope)
            # Auxiliary classifier head
            with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                                stride=1, padding="SAME"):
                # Fetch Mixed_6e from end_points
                aux_logits = end_points["Mixed_6e"]
                with tf.variable_scope("AuxLogits"):
                    aux_logits = slim.avg_pool2d(aux_logits, kernel_size=[5, 5], stride=3,
                                                 padding="VALID", scope="AvgPool_1a_5x5")
                    aux_logits = slim.conv2d(aux_logits, num_outputs=128, kernel_size=[1, 1],
                                             scope="Conv2d_1b_1x1")
                    aux_logits = slim.conv2d(aux_logits, num_outputs=768, kernel_size=[5, 5],
                                             weights_initializer=trunc_normal(0.01), padding="VALID",
                                             scope="Conv2d_2a_5x5")
                    # Scope renamed from the duplicate "Conv2d_1b_1x1" to avoid a
                    # variable-scope collision within AuxLogits
                    aux_logits = slim.conv2d(aux_logits, num_outputs=num_classes, kernel_size=[1, 1],
                                             activation_fn=None, normalizer_fn=None,
                                             weights_initializer=trunc_normal(0.001), scope="Conv2d_2b_1x1")
                    # Squeeze out the two size-1 spatial dimensions
                    if spatial_squeeze:
                        aux_logits = tf.squeeze(aux_logits, axis=[1, 2], name="SpatialSqueeze")
                    # Store the auxiliary classifier's output in end_points
                    end_points["AuxLogits"] = aux_logits
                # Main classification head
                with tf.variable_scope("Logits"):
                    net = slim.avg_pool2d(net, kernel_size=[8, 8], padding="VALID",
                                          scope="AvgPool_1a_8x8")
                    net = slim.dropout(net, keep_prob=dropout_keep_prob, scope="Dropout_1b")
                    end_points["PreLogits"] = net
                    logits = slim.conv2d(net, num_outputs=num_classes, kernel_size=[1, 1],
                                         activation_fn=None, normalizer_fn=None, scope="Conv2d_1c_1x1")
                    if spatial_squeeze:
                        logits = tf.squeeze(logits, axis=[1, 2], name="SpatialSqueeze")
                end_points["Logits"] = logits
                end_points["Predictions"] = prediction_fn(logits, scope="Predictions")
    return logits, end_points
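A minimal smoke test of the finished network (a hypothetical check, not part of the original listing): feed a random batch through the graph and verify the output shapes.

if __name__ == "__main__":
    batch_size, height, width = 8, 299, 299
    images = tf.random_normal([batch_size, height, width, 3])  # random stand-in for real images
    with slim.arg_scope(inception_v3_arg_scope()):
        logits, end_points = inception_v3(images, num_classes=1000, is_training=False)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(logits).shape)           # expected: (8, 1000)
        print(end_points["AuxLogits"].shape)    # expected: (8, 1000)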


