Comparing the VGG16 and SSD TensorFlow Implementations

VGG16 implementation:

import tensorflow as tf
import tensorflow.contrib.slim as slim  # TF 1.x

# Define the VGG16 network
def vgg16(inputs):
  with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      activation_fn=tf.nn.relu,
                      weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                      weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')  # two 3x3 convs
    net = slim.max_pool2d(net, [2, 2], scope='pool1')  # 2x2 max pooling
    net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    net = slim.max_pool2d(net, [2, 2], scope='pool5')
    net = slim.flatten(net, scope='flatten5')  # flatten before the fully connected layers
    net = slim.fully_connected(net, 4096, scope='fc6')  # fully connected layer
    net = slim.dropout(net, 0.5, scope='dropout6')
    net = slim.fully_connected(net, 4096, scope='fc7')
    net = slim.dropout(net, 0.5, scope='dropout7')  # dropout
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')  # class logits
  return net
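As a quick sanity check (a sketch, not part of the original code), the spatial sizes VGG16 produces can be derived by hand: the SAME-padded 3x3 convolutions preserve the spatial size, and each 2x2 / stride-2 pool halves it, so a 224x224 input reaches 7x7 before fc6.

```python
def vgg16_feature_sizes(input_size=224, num_pools=5):
    """Spatial size after each 2x2 / stride-2 max pool in VGG16.

    SAME-padded 3x3 convs keep the size; each pool halves it.
    """
    sizes = []
    size = input_size
    for _ in range(num_pools):
        size //= 2
        sizes.append(size)
    return sizes

print(vgg16_feature_sizes())  # a 224x224 input gives [112, 56, 28, 14, 7]
```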

Source: https://blog.csdn.net/qq_30638831/article/details/81389533

SSD implementation (an excerpt; `end_points`, `dropout_keep_prob`, `is_training`, and `custom_layers` are defined elsewhere in the original source):

    with tf.variable_scope(scope, 'ssd_300_vgg', [inputs], reuse=reuse):
        # Original VGG-16 blocks.
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        end_points['block1'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        # Block 2.
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        end_points['block2'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        # Block 3.
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        end_points['block3'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        # Block 4.
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        end_points['block4'] = net
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        # Block 5.
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        end_points['block5'] = net
        # Note: pool5 uses a 3x3 window with stride 1 (instead of VGG's
        # 2x2 / stride 2), so the spatial size is preserved here.
        net = slim.max_pool2d(net, [3, 3], stride=1, scope='pool5')

        # Additional SSD blocks.
        # Note: Block 6 is a 3x3 atrous (dilated) convolution with rate 6.
        net = slim.conv2d(net, 1024, [3, 3], rate=6, scope='conv6')
        end_points['block6'] = net
        # tf.layers.dropout expects the *drop* rate, so convert the keep prob.
        net = tf.layers.dropout(net, rate=1. - dropout_keep_prob, training=is_training)
        # Note: Block 7 is a 1x1 convolution.
        net = slim.conv2d(net, 1024, [1, 1], scope='conv7')
        end_points['block7'] = net
        net = tf.layers.dropout(net, rate=1. - dropout_keep_prob, training=is_training)

        # Blocks 8/9/10/11: 1x1 then 3x3 convolutions, stride 2 (except the last two blocks).
        end_point = 'block8'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 256, [1, 1], scope='conv1x1')
            # Note: the explicit pad2d makes the following VALID 3x3 conv
            # behave like a SAME-padded stride-2 convolution.
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 512, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        end_point = 'block9'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            # Note: the explicit pad2d makes the following VALID 3x3 conv
            # behave like a SAME-padded stride-2 convolution.
            net = custom_layers.pad2d(net, pad=(1, 1))
            net = slim.conv2d(net, 256, [3, 3], stride=2, scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        end_point = 'block10'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = slim.conv2d(net, 256, [3, 3], scope='conv3x3', padding='VALID')
        end_points[end_point] = net
        end_point = 'block11'
        with tf.variable_scope(end_point):
            net = slim.conv2d(net, 128, [1, 1], scope='conv1x1')
            net = slim.conv2d(net, 256, [3, 3], scope='conv3x3', padding='VALID')
        end_points[end_point] = net
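The shapes the blocks above produce can be re-derived by hand (a sketch, assuming TF's SAME padding for the pools and the explicit pad2d + VALID convolutions shown for blocks 8 and 9): the six source feature maps of SSD-300 come from block4, block7, and blocks 8 through 11.

```python
import math

def ssd300_feature_sizes(input_size=300):
    """Trace the spatial sizes of the six SSD-300 source feature maps."""
    s = input_size
    s = math.ceil(s / 2)       # pool1: 300 -> 150
    s = math.ceil(s / 2)       # pool2: 150 -> 75
    s = math.ceil(s / 2)       # pool3: 75 -> 38
    block4 = s                 # conv4 output, first source map
    s = math.ceil(s / 2)       # pool4: 38 -> 19
    # pool5 is 3x3 / stride 1 with SAME padding: size unchanged,
    # and the atrous conv6 / 1x1 conv7 also keep the size.
    block7 = s
    s = (s + 2 - 3) // 2 + 1   # block8: pad(1,1) + 3x3 stride-2 VALID: 19 -> 10
    block8 = s
    s = (s + 2 - 3) // 2 + 1   # block9: same pattern: 10 -> 5
    block9 = s
    s = s - 2                  # block10: 3x3 VALID, stride 1: 5 -> 3
    block10 = s
    s = s - 2                  # block11: 3 -> 1
    block11 = s
    return [block4, block7, block8, block9, block10, block11]

print(ssd300_feature_sizes())  # [38, 19, 10, 5, 3, 1]
```

These are exactly the 38x38, 19x19, 10x10, 5x5, 3x3, and 1x1 grids from which SSD-300 predicts boxes at multiple scales.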
Source: https://blog.csdn.net/qq_42278791/article/details/90613191

 

 

SSD (Single Shot MultiBox Detector) is an object detection algorithm. Its core idea is to cast detection as a joint regression problem (box locations) and classification problem (object classes), solved by a single CNN in one forward pass. An SSD implementation typically involves the following steps:

1. Data preparation: assemble training and test sets; annotate each image with object locations and classes, and convert the annotations to a standard format such as VOC or COCO.
2. Model construction: build the SSD model in a deep learning framework (e.g. TensorFlow or PyTorch). SSD consists of a backbone CNN (e.g. VGG or ResNet) plus several extra convolutional layers and prediction layers.
3. Preprocessing: resize, crop, and normalize input images to match the model's expected input.
4. Training: run forward passes to compute the loss and backward passes to update parameters on the training set.
5. Detection: run the trained model on test or new images. Preprocess each image, compute predicted boxes and class scores in one forward pass, then filter boxes by a score threshold and suppress overlapping boxes with non-maximum suppression (NMS).
6. Evaluation: measure detection accuracy and robustness with metrics such as precision, recall, and mean average precision (mAP).
7. Optimization: based on the evaluation, tune hyperparameters, adjust the network architecture, or add data augmentation to improve performance.

In summary, an SSD detection pipeline covers data preparation, model construction, preprocessing, training, detection, evaluation, and optimization; together these steps yield an efficient and accurate detector.
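Step 5 above relies on non-maximum suppression. A minimal pure-Python sketch of greedy NMS (illustrative only, not the implementation from the SSD repository) looks like this:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop
    any remaining box that overlaps it above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is suppressed
```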