First, import slim:
from tensorflow.contrib import slim
TF-Slim is made up of the following components:
arg_scope
data
evaluation
layers
learning
losses
metrics
nets
queues
regularizers
variables
Layers
The most commonly used part of slim is its layers; creating one is very convenient:
input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
net = slim.max_pool2d(net, kernel_size=[2,2], stride=2, scope='pool1')
# General form: (inputs=?, kernel_size=?, stride=?, padding=?, ...)
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
# repeat creates the same layer the given number of times
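slim.repeat is just shorthand; per the TF-Slim README, the line above unrolls into three conv2d calls whose scopes get numbered suffixes automatically:
# Equivalent to the slim.repeat line above
net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_3')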
Layer | TF-Slim |
---|---|
BiasAdd | slim.bias_add |
BatchNorm | slim.batch_norm |
Conv2d | slim.conv2d |
Conv2dInPlane | slim.conv2d_in_plane |
Conv2dTranspose (Deconv) | slim.conv2d_transpose |
FullyConnected | slim.fully_connected |
AvgPool2D | slim.avg_pool2d |
Dropout | slim.dropout |
Flatten | slim.flatten |
MaxPool2D | slim.max_pool2d |
OneHotEncoding | slim.one_hot_encoding |
SeparableConv2d | slim.separable_conv2d |
UnitNorm | slim.unit_norm |
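As a quick illustration of chaining several of these layers (a minimal sketch; net, the layer widths, and the scope names are made up for this example):
# Hypothetical classifier head built from the layers in the table
net = slim.flatten(net, scope='flatten')
net = slim.fully_connected(net, 1024, scope='fc6')
net = slim.dropout(net, keep_prob=0.5, scope='dropout6')
logits = slim.fully_connected(net, 10, activation_fn=None, scope='logits')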
# Full parameter list of one such layer, slim.conv2d_in_plane:
@add_arg_scope
def convolution2d_in_plane(
inputs,
kernel_size,
stride=1,
padding='SAME',
activation_fn=nn.relu,
normalizer_fn=None,
normalizer_params=None,
weights_initializer=initializers.xavier_initializer(),
weights_regularizer=None,
biases_initializer=init_ops.zeros_initializer(),
biases_regularizer=None,
reuse=None,
variables_collections=None,
outputs_collections=None,
trainable=True,
scope=None):
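For instance (a minimal sketch; the input shape and scope name are assumptions), conv2d_in_plane applies the same 2-D filter to every channel independently, so the output keeps the input's channel count:
images = tf.placeholder(tf.float32, [None, 32, 32, 3])  # hypothetical RGB input
# One 3x3 filter per plane; the output still has 3 channels
net = slim.conv2d_in_plane(images, kernel_size=[3, 3], scope='conv_in_plane')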
Scopes:
Scopes are equally convenient: wrapping layers in an arg_scope avoids repeating the same arguments (e.g. weights_initializer=...) in every slim.conv2d call.
# Every slim.conv2d inside the scope receives the same default arguments
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
net = slim.conv2d(net, 256, [11, 11], scope='conv3')
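arg_scope also nests, as the TF-Slim README shows: an outer scope sets defaults for several layer types, an inner one refines them for a single type, and any argument can still be overridden per call:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')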
Variables:
They can be created like this:
weights = slim.variable('weights',
                        shape=[10, 10, 3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1),
                        regularizer=slim.l2_regularizer(0.05),
                        device='/CPU:0')
# Model Variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()
# Regular variables
my_var = slim.variable('my_var',
shape=[20, 1],
initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()
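The getters can also filter by name or scope, which helps when you only need a subset of variables (a small sketch; the 'conv1' scope is assumed from the earlier layer examples):
# All variables named 'weights', in any scope
weights_vars = slim.get_variables_by_name('weights')
# Model variables living under a specific scope
conv1_vars = slim.get_model_variables('conv1')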
Training Loop
For a simple training loop, call slim.learning.create_train_op and slim.learning.train:
g = tf.Graph()
# Create the model and specify the losses...
...
total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ... # Where checkpoints are stored.
slim.learning.train(
train_op,
logdir,
number_of_steps=1000,
save_summaries_secs=300,
    save_interval_secs=600)
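The total_loss fed to create_train_op comes from slim's loss collection: each slim.losses call registers its result there automatically. A minimal sketch (predictions and labels are hypothetical tensors):
# Registers the loss in the losses collection ...
classification_loss = slim.losses.softmax_cross_entropy(predictions, labels)
# ... which get_total_loss() sums together with any regularization losses
total_loss = slim.losses.get_total_loss(add_regularization_losses=True)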
Other
There are more methods than covered here; this post only lists the ones the author (a beginner) uses most.
The rest are documented at
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim
Reference:
TensorFlow-Slim:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim