TensorFlow Study Notes - SLIM

Using from tensorflow.contrib import slim can speed up program development, but you need to understand how some of its functions are used.

1 arg_scope

Purpose

It supplies default argument values for a given set of functions.

Function definition:

def arg_scope(list_ops_or_scope, **kwargs):
  """Stores the default arguments for the given set of list_ops.
  For usage, please see examples at top of the file.
  Args:
    list_ops_or_scope: List or tuple of operations to set argument scope for
      or a dictionary containing the current scope. When list_ops_or_scope is
      a dict, kwargs must be empty. When list_ops_or_scope is a list or tuple,
      then every op in it need to be decorated with @add_arg_scope to work.

list_ops_or_scope can be given as a list, a tuple, or a dict. When a dict is passed, it already contains the argument values (it is a previously captured scope), so kwargs must be empty in that case.
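As the docstring notes, a function must be decorated with @add_arg_scope before arg_scope can supply defaults for it (slim's own layers already are). Below is a minimal sketch, assuming TF 1.x with tf.contrib available; my_scale and its scale parameter are hypothetical names used only for illustration:

import tensorflow as tf
slim = tf.contrib.slim

@slim.add_arg_scope          # register the function so arg_scope can target it
def my_scale(x, scale=1.0):  # hypothetical helper, not part of slim
    return x * scale

with slim.arg_scope([my_scale], scale=2.0):
    y = my_scale(tf.constant([1.0, 2.0]))  # scale is taken from the scope -> 2.0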

Usage

import tensorflow as tf
from tensorflow.contrib import layers

arg_scope = tf.contrib.framework.arg_scope

# inputs is assumed to be an input image batch tensor
with arg_scope([layers.conv2d], padding='SAME',
               weights_initializer=layers.variance_scaling_initializer(),
               weights_regularizer=layers.l2_regularizer(0.05)):
  net = layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
  net = layers.conv2d(net, 256, [5, 5], scope='conv2')

# The code above is shorthand for the following two calls:
layers.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
              weights_initializer=layers.variance_scaling_initializer(),
              weights_regularizer=layers.l2_regularizer(0.05), scope='conv1')
# The second call to conv2d also uses the arg_scope's default for padding:
layers.conv2d(net, 256, [5, 5], padding='SAME',
              weights_initializer=layers.variance_scaling_initializer(),
              weights_regularizer=layers.l2_regularizer(0.05), scope='conv2')

# An arg_scope can also be captured and reused:
with arg_scope([layers.conv2d], padding='SAME',
               weights_initializer=layers.variance_scaling_initializer(),
               weights_regularizer=layers.l2_regularizer(0.05)) as sc:
  net = layers.conv2d(net, 256, [5, 5], scope='conv1')
  ...
with arg_scope(sc):
  net = layers.conv2d(net, 256, [5, 5], scope='conv2')

If an argument is specified explicitly at the call site, it overrides the default set by arg_scope.
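A minimal sketch of this override behaviour, assuming TF 1.x with tf.contrib.slim available (the input shape is arbitrary):

import tensorflow as tf
slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3])

with slim.arg_scope([slim.conv2d], padding='SAME'):
    a = slim.conv2d(inputs, 32, [3, 3], scope='a')              # padding comes from the scope: 'SAME'
    b = slim.conv2d(a, 32, [3, 3], padding='VALID', scope='b')  # explicit 'VALID' overrides the scope default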

2 slim.conv2d()

Purpose

The convolution layer provided by slim; using it greatly reduces the amount of code.

Definition

def convolution(inputs,
                num_outputs,
                kernel_size,
                stride=1,
                padding='SAME',
                data_format=None,
                rate=1,
                activation_fn=nn.relu,
                normalizer_fn=None,
                normalizer_params=None,
                weights_initializer=initializers.xavier_initializer(),
                weights_regularizer=None,
                biases_initializer=init_ops.zeros_initializer(),
                biases_regularizer=None,
                reuse=None,
                variables_collections=None,
                outputs_collections=None,
                trainable=True,
                scope=None):

inputs: the input to the layer (the output tensor of the previous layer)
num_outputs: the depth (number of output channels) of this layer
stride: the stride of the filter
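A small sketch (assumed shapes, TF 1.x) of how num_outputs and stride determine the output shape; with the default 'SAME' padding, stride=2 halves the spatial size:

import tensorflow as tf
slim = tf.contrib.slim

x = tf.placeholder(tf.float32, [None, 28, 28, 16])
y = slim.conv2d(x, num_outputs=32, kernel_size=[3, 3], stride=2, scope='demo')
print(y.get_shape())  # (?, 14, 14, 32): depth becomes num_outputs, 28 / 2 = 14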

Usage

Defining a convolutional layer with the raw API requires code like the following.

with tf.variable_scope('layer3-conv'):
    # weight and bias variables for this layer
    w = tf.get_variable('w', [CONV2_SIZE, CONV2_SIZE, CONV1_DEEP, CONV2_DEEP],
                        initializer=tf.truncated_normal_initializer(stddev=0.1))
    b = tf.get_variable('b', shape=[CONV2_DEEP], initializer=tf.constant_initializer(0.0))

    # convolution, bias add, and ReLU activation
    conv2 = tf.nn.conv2d(pool1, w, strides=[1, 1, 1, 1], padding='SAME')
    relu2 = tf.nn.relu(tf.nn.bias_add(conv2, b))

With slim.conv2d(), a single line of code is enough:

conv2 = slim.conv2d(pool1, CONV2_DEEP, [CONV2_SIZE, CONV2_SIZE], scope='layer3-conv')  # conv + bias + ReLU by default
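To also reproduce the truncated-normal initializer of the raw version, slim.conv2d can be combined with arg_scope. A sketch under the same assumptions (pool1, CONV2_SIZE, and CONV2_DEEP come from the raw example above):

import tensorflow as tf
slim = tf.contrib.slim

with slim.arg_scope([slim.conv2d],
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                    biases_initializer=tf.zeros_initializer()):
    # one call covers convolution, bias add, and ReLU, matching the raw layer above
    conv2 = slim.conv2d(pool1, CONV2_DEEP, [CONV2_SIZE, CONV2_SIZE],
                        stride=1, padding='SAME', scope='layer3-conv')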