SSD Object Detection: Code Walkthrough

This article walks through the code of the SSD (Single Shot MultiBox Detector) object detection algorithm, covering the network structure, ground-truth encoding, and the construction of the loss function. It focuses on how SSD extracts candidate boxes from several feature layers and how anchors are used for detection. By working through the meaning of parameters such as feat_layers, feat_shapes, and normalizations, it traces the network construction step by step. It also introduces the key steps of ground-truth encoding, including how anchors are built and how ground-truth boxes are matched and encoded against them. Finally, it outlines the construction of the loss function, noting that only anchors that may contain an object take part in the loss computation.

I recently read through the SSD source code, worked out the logic behind it, and wrote up these study notes.

Code: https://github.com/balancap/SSD-Tensorflow

1. Network Structure

First, the network structure diagram, for reference throughout the analysis. The figure shows SSD 300, whereas the code I read is SSD 512, but the ideas barely differ.

[Figure: SSD 300 network architecture]

What sets SSD apart from YOLO is that it does not extract candidate boxes only at the last layer: starting from several intermediate feature layers, it already extracts candidates with 3×3 convolutions, and it introduces anchors. Different feature layers also carry different numbers of anchors; the per-layer grids run from 38×38×4 at the front, through 19×19×6, down to 3×3×4 and finally 1×1×4 (grid height × grid width × anchors per cell). For SSD 300 these add up to 8732 candidate boxes in total, which greatly expands the pool of candidate windows and gives the layers a built-in division of labor between detecting large and small objects.
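As a quick check on that total, the per-layer grid sizes and anchors per cell for SSD 300 (as given in the original paper) multiply out as follows; this is a small illustrative computation of mine, not code from the repo:

# SSD 300 per-layer (grid side, anchors per cell), from the original paper.
layers_300 = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]
print(sum(side * side * n for side, n in layers_300))  # 8732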

Now for the code. The network is assembled in nets/ssd_vgg_512.py; first, a look at the parameters that define the network structure:

The parameters below configure the network. feat_layers specifies which blocks serve as feature layers for extracting candidate boxes, and feat_shapes gives each feature layer's spatial size; this plays the role of the old cell_size, except that with several feature layers in play there are now several grid sizes. normalizations gives each feature layer's normalization scale: only the first feature layer is normalized (the -1 entries disable it), because it sits early in the network and its activations are noticeably larger than those of the later layers.

feat_layers = ['block4', 'block7', 'block8', 'block9', 'block10', 'block11', 'block12']
feat_shapes = [(64, 64), (32, 32), (16, 16), (8, 8), (4, 4), (2, 2), (1, 1)]
normalizations = [20, -1, -1, -1, -1, -1, -1]
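The normalization in question is a channel-wise L2 normalization with a learnable scale (the ParseNet trick the SSD paper applies to conv4_3). The repo implements it in custom_layers.l2_normalization; the sketch below is only a minimal approximation of the idea, not the repo's exact code:

import tensorflow as tf

def l2_normalize_scale(x, init_scale=20.0, scope='L2Normalization'):
    # Sketch: normalize each spatial position's feature vector to unit L2
    # norm, then rescale with a learnable per-channel gamma, initialized to
    # init_scale (20 for block4, matching normalizations[0] above).
    with tf.variable_scope(scope):
        x = tf.nn.l2_normalize(x, axis=-1)
        gamma = tf.get_variable('gamma', shape=[x.get_shape().as_list()[-1]],
                                initializer=tf.constant_initializer(init_scale))
        return x * gamma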

The parameters below are used to build the anchors; the key ones are anchor_sizes and anchor_ratios. For each feature layer the anchors are constructed by the following rules (a worked sketch follows the parameter listing below):

First anchor: a square of side anchor_sizes[0], i.e. the base size.

Second anchor: a square of side sqrt(anchor_sizes[0] * anchor_sizes[1]), the geometric mean of the two sizes.

Remaining anchors: the aspect ratios applied to the base size, i.e. for each ratio r, width = anchor_sizes[0] * sqrt(r) and height = anchor_sizes[0] / sqrt(r).

So each cell gets 1 + 1 + len(anchor_ratios) = len(anchor_sizes) + len(anchor_ratios) anchors, since each layer's anchor_sizes entry holds exactly two values.

anchor_size_bounds = [0.10, 0.90]
anchor_sizes = [(20.48, 51.2),
                (51.2, 133.12),
                (133.12, 215.04),
                (215.04, 296.96),
                (296.96, 378.88),
                (378.88, 460.8),
                (460.8, 542.72)]
anchor_ratios = [[2, .5],
                 [2, .5, 3, 1./3],
                 [2, .5, 3, 1./3],
                 [2, .5, 3, 1./3],
                 [2, .5, 3, 1./3],
                 [2, .5],
                 [2, .5]]
anchor_steps = [8, 16, 32, 64, 128, 256, 512]
anchor_offset = 0.5
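To make the rules concrete, here is a sketch of mine (not the repo's ssd_anchor_one_layer) that builds the per-layer anchor shapes from the parameters above and counts the anchors across all feature layers. For block4, for instance, it yields squares of side 20.48 and sqrt(20.48 * 51.2) ≈ 32.4 plus rectangles of roughly 14.5 × 29.0 and 29.0 × 14.5 pixels, and summing h × w × anchors_per_cell over feat_shapes gives 24564 anchors in total for SSD 512:

import math

def layer_anchor_shapes(sizes, ratios):
    # (height, width) in pixels for every anchor of one feature layer,
    # following the three rules above.
    shapes = [(sizes[0], sizes[0])]                       # rule 1: base square
    shapes.append((math.sqrt(sizes[0] * sizes[1]),) * 2)  # rule 2: geometric mean
    for r in ratios:                                      # rule 3: aspect ratios
        shapes.append((sizes[0] / math.sqrt(r), sizes[0] * math.sqrt(r)))
    return shapes

total = 0
for (h, w), sizes, ratios in zip(feat_shapes, anchor_sizes, anchor_ratios):
    total += h * w * len(layer_anchor_shapes(sizes, ratios))
print(total)  # 24564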

With the parameters covered, we can turn to the code, starting with how the network is assembled. Go straight to the ssd_net() function, which builds the whole structure:

def ssd_net(inputs,
            num_classes,
            feat_layers,
            anchor_sizes,
            anchor_ratios,
            normalizations,
            is_training=True,
            dropout_keep_prob=0.5,
            prediction_fn=slim.softmax,
            reuse=None,
            scope='ssd_512_vgg'):
    """SSD net definition.
    """
    # if data_format == 'NCHW':
    #     inputs = tf.transpose(inputs, perm=(0, 3, 1, 2))

    # End_points collect relevant activations for external use.
    # Convolution and pooling are applied block by block, and each block's output is stored in end_points.
    end_points = {}
    with tf.variable_scope(scope, 'ssd_512_vgg', [inputs], reuse=reuse):
        # Original VGG-16 blocks.
        print(inputs)
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        end_points['block1'] = net
        print('block1', net)
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        # Block 2.
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        end_points['block2'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        # Block 3.
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        end_points['block3'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        # Block 4.
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        end_points['block4'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        # Block 5.
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        end_points['block5'] = net
        net = slim.max_pool2d(net, [3, 3], 1, scope='pool5')

        # Additional SSD blocks.
        # Block 6: let's dilate the hell out of it!
        net = slim.conv2d(net, 1024, [3, 3], rate=6, scope='conv6')
        end_points['block6'] = net
        # Block 7: 1x1 conv. Because the fuck.
        net = slim.conv2d(net, 1024, [1, 1], scope='conv7')
        end_points['block7'] = net

        # Block 8/9/10/11: 1x1 and 3x3 convolutions stride 2 (except lasts).
        end_point = 'block8'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 256, [1, 1], scope='conv1x1')
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 512, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        print('block8', net)
        end_point = 'block9'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 256, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        print('block9', net)
        end_point = 'block10'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 256, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        print('block10', net)
        end_point = 'block11'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 256, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        print('block11', net)
        # Block 12 and the multibox prediction heads, as in nets/ssd_vgg_512.py.
        end_point = 'block12'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 256, [4, 4], scope='conv4x4', padding='VALID')
        end_points[end_point] = net
        print('block12', net)

        # Prediction and localisation layers: one conv head per feature layer.
        predictions = []
        logits = []
        localisations = []
        for i, layer in enumerate(feat_layers):
            with tf.variable_scope(layer + '_box'):
                p, l = ssd_vgg_300.ssd_multibox_layer(end_points[layer],
                                                      num_classes,
                                                      anchor_sizes[i],
                                                      anchor_ratios[i],
                                                      normalizations[i])
            predictions.append(prediction_fn(p))
            logits.append(p)
            localisations.append(l)
        return predictions, localisations, logits, end_points
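Finally, a minimal usage sketch of mine (num_classes=21 for Pascal VOC is an assumed example value), showing what ssd_net() returns for a batch of 512×512 images:

# Hypothetical usage; the placeholder shape and num_classes are assumptions.
images = tf.placeholder(tf.float32, [None, 512, 512, 3])
predictions, localisations, logits, end_points = ssd_net(
    images, num_classes=21,
    feat_layers=feat_layers, anchor_sizes=anchor_sizes,
    anchor_ratios=anchor_ratios, normalizations=normalizations)
# For each feature layer i:
#   logits[i]        - raw class scores, [batch, H_i, W_i, anchors_per_cell, num_classes]
#   predictions[i]   - softmax of logits[i]
#   localisations[i] - encoded box offsets, [batch, H_i, W_i, anchors_per_cell, 4]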