Who knows whether slim will be deprecated in the future.
Rather than explain it in the abstract: slim.utils.convert_collection_to_dict gathers everything the layers inside the with scope wrote into the collection and returns an OrderedDict, mapping each layer's name to its output tensor, from which you can read off the shape and dtype.
import tensorflow as tf

slim = tf.contrib.slim

scope = 'alexnet_v2'
inputs = tf.ones([1, 500, 600, 3])

with tf.variable_scope(scope, 'alexnet_v2', [inputs]) as sc:
    end_points_collection = sc.original_name_scope + '_end_points'
    print("end_points_collection--->", end_points_collection)
    # Every layer type listed in arg_scope writes its output into the collection.
    with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d],
                        outputs_collections=[end_points_collection]):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID',
                          scope='conv1')
        net = slim.max_pool2d(net, [3, 3], 2, scope='pool1')
        net = slim.conv2d(net, 192, [5, 5], scope='conv2')
        net = slim.max_pool2d(net, [3, 3], 2, scope='pool2')
        net = slim.conv2d(net, 384, [3, 3], scope='conv3')
        end_points = slim.utils.convert_collection_to_dict(
            end_points_collection)
        print("end_points--->", end_points)
The output is:
"""
end_points_collection---> alexnet_v2/_end_points
OrderedDict([
('alexnet_v2/conv1', <tf.Tensor 'alexnet_v2/conv1/Relu:0' shape=(1, 123, 148, 64) dtype=float32>),
('alexnet_v2/pool1', <tf.Tensor 'alexnet_v2/pool1/MaxPool:0' shape=(1, 61, 73, 64) dtype=float32>),
('alexnet_v2/conv2', <tf.Tensor 'alexnet_v2/conv2/Relu:0' shape=(1, 61, 73, 192) dtype=float32>),
('alexnet_v2/pool2', <tf.Tensor 'alexnet_v2/pool2/MaxPool:0' shape=(1, 30, 36, 192) dtype=float32>),
('alexnet_v2/conv3', <tf.Tensor 'alexnet_v2/conv3/Relu:0' shape=(1, 30, 36, 384) dtype=float32>)])
"""
If it is still unclear, just print it out and it will make sense. The underlying mechanism is similar to
tf.add_to_collection("out", finish_output)
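For intuition, here is a minimal pure-Python stand-in for what convert_collection_to_dict does with the collected outputs. FakeTensor and the sample collection are made up for illustration; in real slim, each tensor added to the collection carries an alias (its scope name), and the function builds the dict from those aliases:

```python
from collections import OrderedDict, namedtuple

# Hypothetical stand-in for a collected layer output: slim tags each
# tensor it adds to the outputs collection with an alias (the scope name).
FakeTensor = namedtuple('FakeTensor', ['alias', 'shape', 'dtype'])

def convert_collection_to_dict(collection):
    # Sketch of slim.utils.convert_collection_to_dict: turn the flat
    # list of collected outputs into an OrderedDict keyed by alias,
    # preserving the order in which the layers were built.
    return OrderedDict((t.alias, t) for t in collection)

# Simulate what the with-block above would have accumulated.
collection = [
    FakeTensor('alexnet_v2/conv1', (1, 123, 148, 64), 'float32'),
    FakeTensor('alexnet_v2/pool1', (1, 61, 73, 64), 'float32'),
]
end_points = convert_collection_to_dict(collection)
print(list(end_points.keys()))
# ['alexnet_v2/conv1', 'alexnet_v2/pool1']
```

Once you see the collection as just a list that every layer appends to, the conversion to a name-keyed dict is the whole trick.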