TensorFlow in Action — Study Notes (2): Inception_V3 Implementation

This post analyzes the network structure of Inception V3 in depth and walks through the code: module imports, the truncated-normal initializer helper, the inception_v3_arg_scope function, the inception_v3_base function, and the construction of the network. Training on a CPU is slow, but the full training procedure is outlined. The post closes with a summary of Inception V3's strengths and key design ideas, and answers questions about slim.arg_scope, variable_scope, and related constructs.

Network Structure Analysis

(figures: Inception V3 architecture overview)

The structure of the Inception module groups is as follows:

(figure: Inception module group structure)

The complete model is as follows:

(figure: complete Inception V3 model)

Code Implementation

1. Import modules

import tensorflow as tf
import tensorflow.contrib.slim as slim
from datetime import datetime
import math
import time

2. Implement a simple helper, trunc_normal, that produces a truncated-normal initializer

trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)
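tf.truncated_normal_initializer draws from a normal distribution and re-draws any sample that falls more than two standard deviations from the mean. The truncation rule can be sketched in plain Python (an illustration of the sampling rule, not TensorFlow's actual implementation):

```python
import random

def trunc_normal_sample(stddev, mean=0.0):
    """Draw one sample from N(mean, stddev**2), re-drawing until it lies
    within two standard deviations of the mean -- the same truncation rule
    tf.truncated_normal_initializer applies."""
    while True:
        x = random.gauss(mean, stddev)
        if abs(x - mean) <= 2 * stddev:
            return x

samples = [trunc_normal_sample(0.1) for _ in range(1000)]
print(max(abs(s) for s in samples) <= 0.2)  # → True: every sample is in [-0.2, 0.2]
```

Truncation avoids the occasional extreme initial weight that a plain normal distribution would produce, which helps early training stability.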

3. Define inception_v3_arg_scope, which supplies default arguments for the functions used throughout the network

def inception_v3_arg_scope(weight_decay=0.00004, stddev=0.1,
                           batch_norm_var_collection='moving_vars'):
    """生成网络中经常用到的函数的默认参数"""
    batch_norm_params = {
        'decay': 0.9997,
        'epsilon': 0.001,
        'updates_collections': tf.GraphKeys.UPDATE_OPS,
        'variables_collections': {
            'beta': None,
            'gamma': None,
            'moving_mean': [batch_norm_var_collection],
            'moving_variance': [batch_norm_var_collection]
        }
    }

    # slim.arg_scope assigns default values to the arguments of the listed
    # functions. For example, with slim.arg_scope([slim.conv2d, slim.fully_connected],
    # weights_regularizer=slim.l2_regularizer(weight_decay)) sets weights_regularizer
    # to slim.l2_regularizer(weight_decay) by default for both slim.conv2d and
    # slim.fully_connected, so the argument no longer needs to be passed on every
    # call -- only when it should differ from the default.
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        weights_regularizer=slim.l2_regularizer(weight_decay)):
        # Then configure default arguments for the convolution layers
        with slim.arg_scope([slim.conv2d],
                            weights_initializer=tf.truncated_normal_initializer(stddev=stddev),
                            activation_fn=tf.nn.relu,
                            normalizer_fn=slim.batch_norm,
                            normalizer_params=batch_norm_params) as sc:
            return sc
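The default-argument mechanism that slim.arg_scope provides can be sketched in plain Python with a context manager that keeps a stack of per-function defaults (a simplified illustration, not slim's actual implementation):

```python
from contextlib import contextmanager
from functools import wraps

_scoped_defaults = []  # stack of {function_name: {arg: value}} dicts

def add_arg_scope(fn):
    """Make fn pick up defaults from enclosing arg_scope blocks."""
    @wraps(fn)
    def wrapped(*args, **kwargs):
        merged = {}
        for frame in _scoped_defaults:   # outer scopes first, inner ones override
            merged.update(frame.get(fn.__name__, {}))
        merged.update(kwargs)            # explicitly passed arguments still win
        return fn(*args, **merged)
    return wrapped

@contextmanager
def arg_scope(functions, **defaults):
    """Push defaults for the listed functions for the duration of the block."""
    _scoped_defaults.append({f.__name__: defaults for f in functions})
    try:
        yield
    finally:
        _scoped_defaults.pop()

@add_arg_scope
def conv2d(inputs, num_outputs, kernel, stride=1, padding='VALID'):
    return (num_outputs, kernel, stride, padding)

with arg_scope([conv2d], padding='SAME'):
    print(conv2d('x', 32, [3, 3]))  # → (32, [3, 3], 1, 'SAME')
```

Nesting two arg_scope blocks, as inception_v3_arg_scope does, simply pushes a second frame onto the stack; the inner frame's defaults shadow the outer frame's where they overlap.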

4. Define inception_v3_base, which builds the convolutional part of the Inception V3 network

def inception_v3_base(inputs, scope=None):
    """Build the convolutional backbone (base) of Inception V3."""
    end_points = {}  # collects key intermediate tensors (end points)
    # Q1: tf.variable_scope(scope, 'InceptionV3', [inputs]) takes a scope name
    # (which may be None), a default name used when scope is None, and a list
    # of tensors the ops in the scope will operate on.
    # Build the non-Inception stem layers
    with tf.variable_scope(scope, 'InceptionV3', [inputs]):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding='VALID'):
            # 299 x 299 x 3
            net = slim.conv2d(inputs, 32, [3, 3], stride=2, scope='Conv2d_1a_3x3')
            # 149 x 149 x 32
            net = slim.conv2d(net, 32, [3, 3], scope='Conv2d_2a_3x3')
            # 147 x 147 x 32
            net = slim.conv2d(net, 64, [3, 3], padding='SAME', scope='Conv2d_2b_3x3')
            # 147 x 147 x 64
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_3a_3x3')
            # 73 x 73 x 64
            net = slim.conv2d(net, 80, [1, 1], scope='Conv2d_3b_1x1')
            # 73 x 73 x 80.
            net = slim.conv2d(net, 192, [3, 3], scope='Conv2d_4a_3x3')
            # 71 x 71 x 192.
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_5a_3x3')
            # 35 x 35 x 192.

        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding='SAME'):
            # First Inception module of the first module group
            # output: 35 x 35 x 256
            with tf.variable_scope('Mixed_5b'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv2d_0b_5x5')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 32, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)

            # Second Inception module of the first group -- Mixed_5c
            # output: 35 x 35 x 288
            with tf.variable_scope('Mixed_5c'):
                with tf.variable_scope('Branch_0'):
                    branch_0 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                with tf.variable_scope('Branch_1'):
                    branch_1 = slim.conv2d(net, 48, [1, 1], scope='Conv2d_0a_1x1')
                    branch_1 = slim.conv2d(branch_1, 64, [5, 5], scope='Conv_1_0b_5x5')
                with tf.variable_scope('Branch_2'):
                    branch_2 = slim.conv2d(net, 64, [1, 1], scope='Conv2d_0a_1x1')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0b_3x3')
                    branch_2 = slim.conv2d(branch_2, 96, [3, 3], scope='Conv2d_0c_3x3')
                with tf.variable_scope('Branch_3'):
                    branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                    branch_3 = slim.conv2d(branch_3, 64, [1, 1], scope='Conv2d_0b_1x1')
                net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
            # ... (the remaining Inception modules follow the same
            # branch-and-concat pattern)
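The spatial sizes in the stem comments above follow from standard convolution arithmetic: with VALID padding the output size is ceil((in - kernel + 1) / stride), and with SAME padding it is ceil(in / stride). A small helper to verify the 299 → 35 chain (a sketch; the 256 channels after Mixed_5b are 64 + 64 + 96 + 32 from the four concatenated branches):

```python
import math

def out_size(in_size, kernel, stride=1, padding='VALID'):
    """Spatial output size of a conv/pool layer under TensorFlow's padding rules."""
    if padding == 'VALID':
        return math.ceil((in_size - kernel + 1) / stride)
    return math.ceil(in_size / stride)  # SAME

size = 299
size = out_size(size, 3, stride=2)        # Conv2d_1a_3x3   -> 149
size = out_size(size, 3)                  # Conv2d_2a_3x3   -> 147
size = out_size(size, 3, padding='SAME')  # Conv2d_2b_3x3   -> 147
size = out_size(size, 3, stride=2)        # MaxPool_3a_3x3  -> 73
size = out_size(size, 1)                  # Conv2d_3b_1x1   -> 73
size = out_size(size, 3)                  # Conv2d_4a_3x3   -> 71
size = out_size(size, 3, stride=2)        # MaxPool_5a_3x3  -> 35
print(size)  # → 35
```

Working the chain through like this is a quick way to catch kernel/stride mistakes: for example, a 3x3 kernel at Conv2d_3b_1x1 or an extra stride of 2 at Conv2d_4a_3x3 would break the 35 x 35 input the Inception modules expect.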