Implementing ResNet V2 in TensorFlow

1. Introduction to ResNet

ResNet (Residual Neural Network) was proposed by Kaiming He and three colleagues at Microsoft Research. By using Residual Units they successfully trained a network 152 layers deep and won the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet, which makes it remarkably efficient. Before ResNet, the Swiss professor Schmidhuber had proposed Highway Network, whose principle is very similar to ResNet's. He is the inventor of LSTM and a veteran figure in neural-network research. The depth of a neural network matters greatly for its performance, but the deeper the network, the harder it is to train; the goal of Highway Network was to make extremely deep networks trainable. It effectively modifies each layer's transformation: a conventional layer applies only a nonlinear transform y = H(x, W_H), whereas a highway layer allows a certain proportion of the original input x to be preserved, i.e. y = H(x, W_H) \cdot T(x, W_T) + x \cdot C(x, W_C), where T is the transform gate and C the carry gate; the paper sets C = 1 - T. In this way a portion of the previous layer's information can pass directly to the next layer without going through the matrix multiplication and nonlinear transform. The network learns, through gating units, how to control the flow of information, that is, what proportion of the original input should be kept. This learnable gating mechanism is borrowed from the gates of LSTM recurrent networks. Highway Networks hundreds or even thousands of layers deep can be trained directly with gradient descent and can be combined with various nonlinear activation functions, making it feasible to learn extremely deep networks.
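To make the gating concrete, below is a minimal sketch of a single fully connected highway layer in TensorFlow 1.x (the same API family used later in this post). The function name, layer sizes and initializer choice are illustrative assumptions, not code from the Highway Network paper.

import tensorflow as tf

def highway_layer(x, size, scope='highway'):
    # y = H(x, W_H) * T(x, W_T) + x * C(x, W_C), with C = 1 - T.
    # Assumes x already has `size` features so the carry term is shape-compatible.
    with tf.variable_scope(scope):
        H = tf.layers.dense(x, size, activation=tf.nn.relu, name='transform')
        # Transform gate T in (0, 1); a negative initial bias makes the layer
        # start out mostly carrying the original input through unchanged.
        T = tf.layers.dense(x, size, activation=tf.nn.sigmoid,
                            bias_initializer=tf.constant_initializer(-1.0),
                            name='gate')
        return H * T + x * (1.0 - T)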

ResNet is very similar to Highway Network in that it allows the original input to be passed directly to later layers. It addresses the degradation problem: as depth increases, accuracy first rises and then saturates, and adding still more depth makes accuracy drop. This is not overfitting, because the error is large not only on the training set but also on the test set. If a relatively shallow network has already reached saturated accuracy, then appending a few identity-mapping layers y = x should at the very least not increase the error; in other words, a deeper network should not produce higher training error. This idea of using identity connections to pass an earlier layer's output to later layers is the inspiration behind ResNet.

Suppose the input to a segment of the network is x and the desired underlying mapping is H(x); a plain stack of layers would have to learn this complex mapping H(x) directly, which is hard to train. If the accuracy has already saturated, or the error has even started to rise, the task of the subsequent layers effectively becomes learning an identity mapping, i.e. making the output H(x) approximate the input x, so that accuracy does not degrade in the deeper layers. ResNet therefore passes x to the output through a shortcut connection, so the output becomes H(x) = F(x) + x; when F(x) = 0, H(x) = x. The learning target thus changes: instead of learning the complete output, the layers learn the difference between output and input, i.e. the residual F(x) = H(x) - x, and training drives F(x) toward 0, so the network can be made deeper without accuracy dropping. In the shortcut connections, some are drawn as solid lines and others as dashed lines: a solid line means the input and output have the same number of channels, so H(x) = F(x) + x; a dashed line means the channel counts differ, so the input must first be transformed to match, H(x) = F(x) + Wx, where W is a convolution that changes the dimension.
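A minimal sketch of the two shortcut types, assuming TF-Slim as in the full implementation below (the single 3x3 convolution standing in for F(x) is a simplification):

import tensorflow as tf
slim = tf.contrib.slim

def residual_unit_sketch(x, out_depth):
    # F(x): the residual branch (here just one 3x3 conv for brevity)
    residual = slim.conv2d(x, out_depth, [3, 3], activation_fn=tf.nn.relu)
    in_depth = x.get_shape().as_list()[-1]
    if in_depth == out_depth:
        shortcut = x                                  # solid line: H(x) = F(x) + x
    else:
        # dashed line: a 1x1 convolution W matches the channel count,
        # so H(x) = F(x) + Wx
        shortcut = slim.conv2d(x, out_depth, [1, 1], activation_fn=None)
    return shortcut + residual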

In the second ResNet paper the authors proposed ResNet V2. The difference is that they found the forward and backward signals can propagate directly when the skip connection is a pure identity mapping, so the nonlinearity on the shortcut path is replaced with the identity y = x; in addition, batch normalization is used in every layer, applied together with ReLU as pre-activation before the convolutions. The new residual unit is easier to train and generalizes better than before.
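The practical difference is the ordering of operations inside a residual unit. Here is a minimal sketch of the two orderings, again using TF-Slim and a single 3x3 convolution per branch as a simplifying assumption:

import tensorflow as tf
slim = tf.contrib.slim

def unit_v1(x, depth):
    # ResNet V1 ("post-activation"): conv -> BN -> ReLU inside the branch,
    # plus another ReLU after the addition.
    # Assumes x already has `depth` channels so the addition is valid.
    residual = slim.conv2d(x, depth, [3, 3], activation_fn=tf.nn.relu,
                           normalizer_fn=slim.batch_norm)
    return tf.nn.relu(x + residual)

def unit_v2(x, depth):
    # ResNet V2 ("pre-activation"): BN -> ReLU -> conv, and nothing is applied
    # after the addition, so the identity path stays clean.
    preact = slim.batch_norm(x, activation_fn=tf.nn.relu)
    residual = slim.conv2d(preact, depth, [3, 3],
                           activation_fn=None, normalizer_fn=None)
    return x + residual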

2. Implementing ResNet V2 in TensorFlow

import collections
import tensorflow as tf
import time
from datetime import datetime
import math
slim = tf.contrib.slim

class Block(collections.namedtuple('Block', ['scope', 'unit_fn', 'args'])):
    """A named tuple describing a ResNet block.

    collections.namedtuple defines the basic ResNet Block as a pure data
    structure with no methods. A Block is built from three fields:
    scope:   the block's name
    unit_fn: the residual-unit generating function
    args:    a list in which each element describes one residual unit as a
             (depth, depth_bottleneck, stride) tuple.
    For example, [(256, 64, 1)] * 2 + [(256, 64, 2)] describes three residual
    units; taking the last one as an example, its output depth is 256, its two
    bottleneck layers have depth 64, and its middle 3x3 layer uses stride 2.
    """

# Spatial subsampling (downsampling)
def subsample(inputs, factor, scope=None):
    """
    inputs: input tensor
    factor: subsampling factor
    """
    if factor == 1:  # no subsampling needed
        return inputs
    else:
        # a 1x1 max pool with stride=factor simply keeps every factor-th pixel
        return slim.max_pool2d(inputs, [1, 1], stride=factor, scope=scope)

# Convolution with 'SAME'-style padding that does not depend on the input size
def conv2d_same(inputs, num_outputs, kernel_size, stride, scope=None):
    # If the stride is 1, create the convolution directly with padding='SAME'
    if stride == 1:
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=1, padding='SAME', scope=scope)
    # Otherwise pad explicitly with zeros; the total padding is kernel_size - 1
    else:
        pad_total = kernel_size - 1
        pad_beg = pad_total // 2
        pad_end = pad_total - pad_beg
        inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
        # After zero-padding, convolve with padding='VALID'
        return slim.conv2d(inputs, num_outputs, kernel_size, stride=stride, padding='VALID', scope=scope)
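# Example: the root convolution below calls conv2d_same(net, 64, 7, stride=2).
# pad_total = 6, pad_beg = 3, pad_end = 3, so a 224x224 input is padded to
# 230x230 and the 7x7 / stride-2 VALID convolution yields 112x112 outputs --
# the same size that 'SAME' padding would give, but independent of whether the
# input size is even or odd.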

# Function that stacks the Blocks to build the body of the network
@slim.add_arg_scope
def stack_block_dense(net, blocks, outputs_collections=None):
    """
    net:    input tensor
    blocks: list of Block namedtuples
    """
    for block in blocks:
        with tf.variable_scope(block.scope, 'block', [net]) as sc:
            for i, unit in enumerate(block.args):
                with tf.variable_scope('unit_%d' % (i + 1), values=[net]):
                    unit_depth, unit_depth_bottleneck, unit_stride = unit
                    # unit_fn is the residual-unit generating function (bottleneck below)
                    net = block.unit_fn(net, depth=unit_depth, depth_bottleneck=unit_depth_bottleneck, stride=unit_stride)
            net = slim.utils.collect_named_outputs(outputs_collections, sc.name, net)
    return net
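# For block1 of resnet_v2_50 below, this loop creates the variable scopes
# block1/unit_1, block1/unit_2 and block1/unit_3 (all nested under the
# network's own scope), feeding each unit's output into the next.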

def resnet_arg_scope(is_training=True,
                    weight_decay=0.0001,
                    batch_norm_decay=0.997,
                    batch_norm_epsilon=1e-5,
                    batch_norm_scale=True):
    batch_norm_params = {
        'is_training': is_training,
        'decay': batch_norm_decay,
        'epsilon': batch_norm_epsilon,
        'scale': batch_norm_scale,
        'updates_collections': tf.GraphKeys.UPDATE_OPS
    }
    
    with slim.arg_scope([slim.conv2d], 
                       weights_regularizer=slim.l2_regularizer(weight_decay),
                       weights_initializer=slim.variance_scaling_initializer(),
                       activation_fn=tf.nn.relu,
                       normalizer_fn=slim.batch_norm,
                       normalizer_params=batch_norm_params):
        with slim.arg_scope([slim.batch_norm], **batch_norm_params):
            with slim.arg_scope([slim.max_pool2d], padding='SAME') as arg_sc:
                return arg_sc
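# resnet_arg_scope only builds and returns the scope object; it takes effect
# when entered with `with slim.arg_scope(resnet_arg_scope(...)):`, as done in
# the benchmark code at the end of this script.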

# Residual learning unit (bottleneck)
@slim.add_arg_scope
def bottleneck(inputs, depth, depth_bottleneck, stride, outputs_collections=None, scope=None):
    with tf.variable_scope(scope, 'bottleneck_v2', [inputs]) as sc:
        depth_in = slim.utils.last_dimension(inputs.get_shape(), min_rank=4)
        # Pre-activation: batch normalization followed by ReLU on the input
        preact = slim.batch_norm(inputs, activation_fn=tf.nn.relu, scope='preact')
        # If the input and output depths match, only subsample the input spatially
        if depth == depth_in:
            shortcut = subsample(inputs, stride, 'shortcut')
        # Otherwise use a 1x1 convolution to change the number of channels
        else:
            shortcut = slim.conv2d(preact, depth, [1, 1], stride=stride, normalizer_fn=None, activation_fn=None, scope='shortcut')
        # The residual branch: three convolutional layers
        residual = slim.conv2d(preact, depth_bottleneck, [1, 1], stride=1, scope='conv1')
        residual = conv2d_same(residual, depth_bottleneck, 3, stride, scope='conv2')
        residual = slim.conv2d(residual, depth, [1, 1], stride=1, normalizer_fn=None, activation_fn=None, scope='conv3')
        
        output = shortcut + residual
        return slim.utils.collect_named_outputs(outputs_collections, sc.name, output)
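# Note the V2 ordering inside the unit: BN + ReLU are applied once as
# pre-activation, conv3 carries no BN or ReLU of its own, and nothing is
# applied after the addition, so the shortcut path remains a clean identity.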

def resnet_v2(inputs, blocks, num_classes=None, global_pool=True, include_root_block=True, reuse=tf.AUTO_REUSE, scope=None):
    with tf.variable_scope(scope, 'resnet_v2', [inputs], reuse=reuse) as sc:
        end_points_collection = sc.original_name_scope + '_end_point'
        with slim.arg_scope([slim.conv2d, bottleneck, stack_block_dense], outputs_collections=end_points_collection):
            net = inputs
            if include_root_block:
                with slim.arg_scope([slim.conv2d], activation_fn=None, normalizer_fn=None):
                    # The first convolutional layer (root block)
                    net = conv2d_same(net, 64, 7, stride=2, scope='conv1')
                # Max pooling
                net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool1')
            # Assemble the residual Blocks into the stacked network
            net = stack_block_dense(net, blocks)
            net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='postnorm')
            if global_pool:
                net = tf.reduce_mean(net, [1, 2], name='pool5', keep_dims=True)
            if num_classes is not None:
                net = slim.conv2d(net, num_classes, [1, 1], activation_fn=None, normalizer_fn=None, scope='logits')
            end_points = slim.utils.convert_collection_to_dict(end_points_collection)
            if num_classes is not None:
                end_points['predictions'] = slim.softmax(net, scope='predictions')
            return net, end_points

# 50-layer network
def resnet_v2_50(inputs, num_classes=None, global_pool=True, reuse=tf.AUTO_REUSE, scope='resnet_v2_50'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 5 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)
    ]
    return resnet_v2(inputs, blocks, num_classes, global_pool, include_root_block=True, reuse=reuse, scope=scope)
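# Layer count: block1-block4 contain 3 + 4 + 6 + 3 = 16 bottleneck units, each
# with 3 convolutions; adding the root conv1 and the final logits layer gives
# 16 * 3 + 2 = 50 layers.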

# 101-layer network
def resnet_v2_101(inputs, num_classes=None, global_pool=True, reuse=tf.AUTO_REUSE, scope='resnet_v2_101'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 3 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 22 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)
    ]
    return resnet_v2(inputs, blocks, num_classes, global_pool, include_root_block=True, reuse=reuse, scope=scope)

# 152-layer network
def resnet_v2_152(inputs, num_classes=None, global_pool=True, reuse=tf.AUTO_REUSE, scope='resnet_v2_152'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]), 
        Block('block2', bottleneck, [(512, 128, 1)] * 7 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)
    ]
    return resnet_v2(inputs, blocks, num_classes, global_pool, include_root_block=True, reuse=reuse, scope=scope)

# 200-layer network
def resnet_v2_200(inputs, num_classes=None, global_pool=True, reuse=tf.AUTO_REUSE, scope='resnet_v2_200'):
    blocks = [
        Block('block1', bottleneck, [(256, 64, 1)] * 2 + [(256, 64, 2)]),
        Block('block2', bottleneck, [(512, 128, 1)] * 23 + [(512, 128, 2)]),
        Block('block3', bottleneck, [(1024, 256, 1)] * 35 + [(1024, 256, 2)]),
        Block('block4', bottleneck, [(2048, 512, 1)] * 3)
    ]
    return resnet_v2(inputs, blocks, num_classes, global_pool, include_root_block=True, reuse=reuse, scope=scope)

# Measure the computation time per batch
def time_tensorflow_run(session, target, info_string):
    # num_batches is a module-level global, defined below before this function is called
    num_steps_burn_in = 10        # warm-up iterations (not included in the statistics)
    total_duration = 0.0          # total time
    total_duration_squared = 0.0  # sum of squared durations (for the variance)
    
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        
        if i >= num_steps_burn_in:
            if not i % 10:
                print('%s: step %d, duration = %.3f' % (datetime.now(), i - num_steps_burn_in, duration))
                
            total_duration += duration
            total_duration_squared += duration * duration
    
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print('%s: %s across %d steps, %.3f +/- %.3f sec/batch' % (datetime.now(), info_string, num_batches, mn, sd))

batch_size = 32
height, width = 224, 224
inputs = tf.random_uniform((batch_size, height, width, 3))
with slim.arg_scope(resnet_arg_scope(is_training=False)):
    # Benchmark the 152-layer network
    net, end_points = resnet_v2_152(inputs, 1000)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
num_batches = 100
time_tensorflow_run(sess, net, "Forward")
