The repeat and stack Operations in TF.Slim

Reference: "TF.Slim's repeat and stack operations" - Cloud+ Community - Tencent Cloud

1. The Conventional Approach

When building a network, TF-Slim provides repeat and stack, which let you apply the same operation repeatedly and make network construction more convenient. Consider this block:

net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

The common trick is to use a loop to cut down on the repetition:

net = ...
for i in range(3):
    net = slim.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i+1))
net = slim.max_pool2d(net, [2, 2], scope='pool2')

2. The repeat Operation in TF-Slim

The same block can be written with TF-Slim's repeat operation instead:

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

slim.repeat automatically names the scope of each convolution layer 'conv3/conv3_1', 'conv3/conv3_2', and 'conv3/conv3_3'.
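As a quick sanity check (a minimal sketch, assuming a TF 1.x environment where tf.contrib.slim is available; the input placeholder is made up for illustration), listing the graph's variables shows the generated scopes:

import tensorflow as tf

slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])  # hypothetical input
net = slim.repeat(inputs, 3, slim.conv2d, 256, [3, 3], scope='conv3')

# Each repeated layer lives in its own numbered sub-scope under 'conv3'.
for v in tf.global_variables():
    print(v.name)
# conv3/conv3_1/weights:0, conv3/conv3_1/biases:0,
# conv3/conv3_2/weights:0, ..., conv3/conv3_3/biases:0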

3. The stack Operation in TF-Slim

TF-Slim's slim.stack operation lets you call the same operation repeatedly with different arguments. Like repeat, slim.stack creates a new tf.variable_scope for each operation it creates.

# Verbose way: the original operations
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, using slim.stack:
x = slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

# slim.stack calls slim.fully_connected three times,
# passing 32, 64, and 128 as the output sizes in turn.

Note that repeat and stack differ in some details; consult the documentation for the specifics. One such detail is shown below.
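When each call needs more than one positional argument, slim.stack accepts tuples; the TF-Slim documentation illustrates this with convolutions, along these lines:

# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Using stack: each tuple supplies (num_outputs, kernel_size) for one call.
x = slim.stack(x, slim.conv2d,
               [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])],
               scope='core')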

Example: defining VGG-16 with repeat:

def vgg16(inputs):
    # arg_scope applies these defaults to every conv2d and fully_connected
    # created inside the with block.
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
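A minimal sketch of calling the function above (assumes TF 1.x; the 224x224 input size is the usual VGG choice, not something the snippet requires):

import tensorflow as tf

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits = vgg16(images)  # builds conv1 ... fc8 under the arg_scope defaults

This relies on slim.fully_connected flattening inputs of rank greater than 2 before the matrix multiply, as its docstring describes; the VGG implementation shipped in slim's nets instead uses a 7x7 convolution for fc6.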

A VGG training skeleton based on slim:

import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg

...

train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
    tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
    # Set up the data loading:
    images, labels = ...

    # Define the model (vgg_16 returns both the logits and an end-points dict):
    predictions, _ = vgg.vgg_16(images, is_training=True)

    # Specify the loss function:
    slim.losses.softmax_cross_entropy(predictions, labels)

    total_loss = slim.losses.get_total_loss()
    tf.summary.scalar('losses/total_loss', total_loss)

    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

    # create_train_op ensures that when we evaluate it to get the loss,
    # the update_ops are run and the gradient updates are computed.
    train_tensor = slim.learning.create_train_op(total_loss, optimizer)

    # Actually runs training.
    slim.learning.train(train_tensor, train_log_dir)
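By default slim.learning.train runs until the process is stopped; a hedged variant (these keyword arguments exist in TF 1.x's slim.learning.train, though the values here are purely illustrative) bounds and checkpoints the run:

slim.learning.train(
    train_tensor,
    train_log_dir,
    number_of_steps=1000,      # stop after 1000 gradient steps (illustrative)
    save_summaries_secs=300,   # write TensorBoard summaries every 5 minutes
    save_interval_secs=600)    # checkpoint to train_log_dir every 10 minutes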
