[TensorFlow] tf.contrib.slim

Many players of older TensorFlow versions may never have seen this thing. The slim module was newly introduced in 2016, and its main purpose is so-called "code slimming".

In fact it has become one of my favorite, even most frequently used, modules; most TensorFlow projects on GitHub touch it. Setting aside high-level libraries like Keras, TensorLayer, and TFLearn: can you write concise code with plain TensorFlow? Absolutely, slim is all you need!

Its only drawback is that Chinese documentation for slim is nearly nonexistent, so in China the high-level libraries with more complete official docs, such as Keras and TensorLayer, still rule.

 

I. Introduction

slim lives under tensorflow.contrib, and is imported as follows:

 

import tensorflow.contrib.slim as slim

 

With that we can use slim. While we're at it, let's dig into tensorflow.contrib for a moment. TensorFlow's official description of it: any code in this directory is not officially supported and may change or be removed at any time; each subdirectory has designated owners; it is meant to hold additional functionality and contributions that will eventually be merged into core TensorFlow, but whose interfaces may still change, or which need further testing to see whether they can gain wider acceptance. So slim still does not belong to core TensorFlow.

So what exactly is slim, and what is it actually good for?

slim is a library that makes building, training, and evaluating neural networks simple. It eliminates much of the repetitive boilerplate of raw TensorFlow, making code more compact and more readable. It also provides many well-known computer-vision models (VGG, AlexNet, and so on) that we can not only use directly but even extend in various ways.

 

slim's submodules and what they do:

arg_scope: provides a new scope named arg_scope that allows a user to define default arguments for specific operations within that scope.

On top of the basic name_scope and variable_scope, slim adds arg_scope, which controls the default hyperparameters of each layer. (More on this later.)

data: contains TF-slim's dataset definition, data providers, parallel_reader, and decoding utilities.

Apparently slim also ships its own dataset machinery; we'll skip it here, since we rarely use it.

evaluation: contains routines for evaluating models.

Routines for evaluating models; also not used much.

layers: contains high level layers for building models using tensorflow.

This one matters. It is the core and essence of slim: definitions of higher-level layers.

learning: contains routines for training models.

Routines for training models.

losses: contains commonly used loss functions.

Commonly used losses.

metrics: contains popular evaluation metrics.

Metrics for evaluating models.

nets: contains popular network definitions such as VGG and AlexNet models.

Classic networks such as VGG; used quite a lot.

queues: provides a context manager for easily and safely starting and closing QueueRunners.

A context manager for safely starting and stopping QueueRunners; quite useful (see the sketch after this list).

regularizers: contains weight regularizers.

Weight regularizers.

variables: provides convenience wrappers for variable creation and manipulation.

Quite useful; I really like slim's mechanism for managing variables.
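
As promised above, here is a minimal sketch of the queues context manager. The range_input_producer is just a stand-in op that happens to register a queue runner; any queue-backed input pipeline works the same way:

import tensorflow as tf
import tensorflow.contrib.slim as slim

# Any op that registers a QueueRunner will do; a range producer is the simplest.
counter = tf.train.range_input_producer(limit=5).dequeue()

with tf.Session() as sess:
    # QueueRunners starts every registered runner on entry and stops/joins
    # them on exit, replacing the usual tf.train.Coordinator boilerplate.
    with slim.queues.QueueRunners(sess):
        print(sess.run(counter))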

That's all of the submodules. Now for the good stuff!

 

II. Defining models with slim

An example of defining variables in slim:

 

# Model variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()

As shown above, variables come in two kinds: model variables and regular variables. Regular variables are not saved as model parameters, whereas model variables are saved along with the model at save time. Anyone who has played with TensorFlow will recognize this: things like global_step are regular variables. In slim you can also declare which device a variable lives on, plus its regularization and initialization rules. Pay attention to the retrieval functions too: get_variables returns all variables, while get_model_variables returns only the model variables.
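
A minimal sketch continuing the snippet above, to make the distinction concrete:

# weights came from slim.model_variable, my_var from slim.variable:
assert weights in slim.get_model_variables()     # model variable
assert my_var not in slim.get_model_variables()  # regular variable
assert my_var in slim.get_variables()            # get_variables returns both kinds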

 

 

Implementing a layer in slim:

First, let's see how a layer, say a convolutional layer, is written in raw TensorFlow:

 

input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)

And the slim version:

 

 

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

But this alone is not the main attraction, since TensorFlow nowadays also ships simple implementations of most layers. What really stands out are slim's repeat and stack operations:

 

Suppose we define three identical convolutional layers:

 

net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

slim's repeat operation cuts this down, and it also unrolls the scope names for you (conv3/conv3_1, conv3/conv3_2, conv3/conv3_3):

 

 

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

stack, by contrast, handles the case where the kernels or output sizes differ from layer to layer:

 

Suppose we define three fully connected layers:

 

# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

With stack:

 

 

slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

The same works for convolutional layers:

 

 

# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Concise way:
slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

 

arg_scope in slim:

If your network has many layers sharing the same parameter values, like this:

 

net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')

 

 

Now clean it up with arg_scope:

with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

Much cleaner, isn't it? One extra point: within an arg_scope, the given values become defaults for the listed ops, and any individual layer can still override them by passing the argument explicitly (effectively a rewrite), as in the second-to-last line above. What if the network has other layer types besides convolutions? Then define it like this:

 

 

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')

Just write two nested arg_scopes.

 

With this approach, defining a VGG comes down to a dozen or so lines:

 

def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
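
A quick usage sketch. The 224x224 input size is the standard VGG assumption, and this relies on slim.fully_connected flattening inputs of rank greater than 2, which is why fc6 can consume pool5 directly:

import tensorflow as tf
import tensorflow.contrib.slim as slim

# Batches of 224x224 RGB images in, raw scores over 1000 classes out.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits = vgg16(images)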

 

III. Training models

There is not much to say here, so let's just talk about training directly on one of the bundled classic networks.

 

import tensorflow as tf
vgg = tf.contrib.slim.nets.vgg

# Load the images and labels.
images, labels = ...

# Create the model.
predictions, _ = vgg.vgg_16(images)

# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)

Super simple, isn't it?
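
From here, slim's learning module can take over the actual optimization loop. A minimal sketch (the log directory and step count are arbitrary placeholders):

# Total loss (regularization included by default) plus an optimizer gives a train op.
total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = slim.learning.create_train_op(total_loss, optimizer)

# Runs the training loop, managing the session, checkpoints, and summaries.
slim.learning.train(train_op, logdir='/tmp/vgg_train', number_of_steps=1000)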

 

Regarding losses, it is worth showing how to define your own loss, and remembering to register it with slim so that slim can see it.

Also, when you assemble the total loss yourself, the regularization terms must be added into it by hand; otherwise the regularization objective is simply never optimized.

 

# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...

# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss

# (Regularization loss is included in the total loss by default.)
total_loss2 = slim.losses.get_total_loss()

 

IV. Saving and restoring model variables

The following lets us restore only part of a model's variables:

 

# Create some variables.
v1 = slim.variable(name="v1", ...)
v2 = slim.variable(name="nested/v2", ...)
...

# Get list of variables to restore (which contains only 'v2').
variables_to_restore = slim.get_variables_by_name("v2")

# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")

Besides this partial restore, we can even restore checkpoint values into variables with different names.

 

Suppose the variable we define in our network is conv1/weights, while the one in the VGG checkpoint is named vgg16/conv1/weights. A plain load is sure to fail (variable name not found), but we can do this:

 

def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var): var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")

In this way we can restore variables under different names!
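
One more convenience worth knowing: slim.get_variables_to_restore accepts include/exclude lists of scope names, which is handy for fine-tuning. The 'fc8' scope below is just an illustration, e.g. dropping VGG's final classifier layer:

# Restore everything except the final layer, e.g. to fine-tune on new classes.
variables_to_restore = slim.get_variables_to_restore(exclude=['fc8'])
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    restorer.restore(sess, "/tmp/model.ckpt")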