tf.layers
tf.layers is defined in tensorflow/python/layers/layers.py and provides high-level interfaces for building neural networks.
Functions
The main APIs it provides cover the following areas:
- convolution
- pooling
- fully connected layers
- batch normalization (BN)
- dropout
Convolution
tf.layers.conv1d
conv1d(
inputs,
filters,
kernel_size,
strides=1,
padding='valid',
data_format='channels_last',
dilation_rate=1,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
tf.layers.conv2d
conv2d(
inputs,
filters,
kernel_size,
strides=(1, 1),
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
tf.layers.conv3d
conv3d(
inputs,
filters,
kernel_size,
strides=(1, 1, 1),
padding='valid',
data_format='channels_last',
dilation_rate=(1, 1, 1),
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
The important parameters of these three functions are:
- inputs: the input tensor
- filters: an integer specifying the number of output channels
- kernel_size: the size of the convolution kernel; a single integer, or a list/tuple of one integer per spatial dimension (for conv3d, [depth, height, width])
- strides: the stride of the kernel along each spatial dimension (for conv3d, [depth, height, width])
- padding: 'same' keeps the output's spatial dimensions [height, width] equal to the input's; 'valid' yields [height - kernel_size + 1, width - kernel_size + 1]
- data_format: defaults to channels_last, i.e. (batch, depth, height, width, channels) for conv3d; channels_first instead means (batch, channels, depth, height, width)
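As a sanity check on the padding rule above, the following NumPy sketch computes the output spatial size of a convolution along one dimension; conv_output_size is a hypothetical helper written for this note, not part of TensorFlow:

```python
import numpy as np

def conv_output_size(input_size, kernel_size, padding, stride=1):
    """Output spatial size of a convolution, matching the padding rules above."""
    if padding == 'same':
        # 'same' pads the input so that every input position produces an output
        return int(np.ceil(input_size / stride))
    elif padding == 'valid':
        # 'valid' uses no padding: the kernel must fit entirely inside the input
        return int(np.ceil((input_size - kernel_size + 1) / stride))
    raise ValueError(padding)

# A 28x28 input with a 3x3 kernel:
print(conv_output_size(28, 3, 'same'))   # 28: output matches the input
print(conv_output_size(28, 3, 'valid'))  # 26: 28 - 3 + 1
```

Applying it to both spatial dimensions gives the [height, width] shapes stated above.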
Pooling
Two kinds are provided, average_pooling and max_pooling, each for inputs with 1, 2, or 3 spatial dimensions:
- tf.layers.average_pooling1d
- tf.layers.average_pooling2d
- tf.layers.average_pooling3d
- tf.layers.max_pooling1d
- tf.layers.max_pooling2d
- tf.layers.max_pooling3d
Taking max_pooling3d as an example:
max_pooling3d(
inputs,
pool_size,
strides,
padding='valid',
data_format='channels_last',
name=None
)
Parameters to note:
- inputs: the input tensor
- pool_size: a list or tuple of 3 integers (pool_depth, pool_height, pool_width), or a single integer used for all three
- strides: the stride along each of the 3 dimensions
- padding: same as above
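The pooling operation itself is easy to state in NumPy; the following max_pool2d is a naive illustration (assuming 'valid' padding and a single-channel 2-D input), not the TensorFlow implementation:

```python
import numpy as np

def max_pool2d(x, pool_size, strides):
    """Naive 2D max pooling with 'valid' padding on a (height, width) array."""
    h, w = x.shape
    ph, pw = pool_size
    sh, sw = strides
    out_h = (h - ph) // sh + 1
    out_w = (w - pw) // sw + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # take the maximum over each pooling window
            out[i, j] = x[i * sh:i * sh + ph, j * sw:j * sw + pw].max()
    return out

x = np.array([[1.,  2.,  3.,  4.],
              [5.,  6.,  7.,  8.],
              [9.,  10., 11., 12.],
              [13., 14., 15., 16.]])
print(max_pool2d(x, (2, 2), (2, 2)))
# [[ 6.  8.]
#  [14. 16.]]
```

Swapping max() for mean() over the same windows gives average pooling.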
Fully connected
tf.layers.dense
dense(
inputs,
units,
activation=None,
use_bias=True,
kernel_initializer=None,
bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
trainable=True,
name=None,
reuse=None
)
- inputs: the input tensor
- units: the number of output channels
- activation: the activation function
The output is: outputs = activation(inputs * kernel + bias)
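That formula can be checked in a few lines of NumPy; this dense helper is an illustration of the math, not the tf.layers implementation:

```python
import numpy as np

def dense(inputs, kernel, bias, activation=None):
    """What a dense layer computes: activation(inputs @ kernel + bias)."""
    outputs = inputs @ kernel + bias
    return activation(outputs) if activation is not None else outputs

relu = lambda x: np.maximum(x, 0.0)

inputs = np.array([[1.0, -2.0]])   # batch of 1, 2 input features
kernel = np.array([[1.0, 0.0],
                   [0.0, 1.0]])    # units = 2
bias = np.array([0.5, 0.5])
print(dense(inputs, kernel, bias, relu))  # [[1.5 0. ]]
```

In tf.layers.dense the kernel and bias are trainable variables created by the layer; here they are passed in explicitly for clarity.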
dropout
tf.layers.dropout
dropout(
inputs,
rate=0.5,
noise_shape=None,
seed=None,
training=False,
name=None
)
- inputs: the input tensor
- training: when True, dropout is applied; when False (i.e. at test time), it is not
- rate: the fraction of input units to drop
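tf.layers.dropout uses inverted dropout, i.e. the kept units are scaled by 1 / (1 - rate) so the expected activation is unchanged; the following NumPy sketch illustrates this behavior (it is not the TensorFlow implementation):

```python
import numpy as np

def dropout(inputs, rate=0.5, training=False, seed=None):
    """Inverted dropout: drop each unit with probability `rate`,
    scale survivors by 1 / (1 - rate) to preserve the expected sum."""
    if not training or rate == 0.0:
        return inputs  # at test time dropout is a no-op
    rng = np.random.default_rng(seed)
    keep_mask = rng.random(inputs.shape) >= rate
    return inputs * keep_mask / (1.0 - rate)

x = np.ones((2, 4))
print(dropout(x, rate=0.5, training=False))          # unchanged: all ones
print(dropout(x, rate=0.5, training=True, seed=0))   # some zeroed, rest 2.0
```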
BN
tf.layers.batch_normalization
batch_normalization(
inputs,
axis=-1,
momentum=0.99,
epsilon=0.001,
center=True,
scale=True,
beta_initializer=tf.zeros_initializer(),
gamma_initializer=tf.ones_initializer(),
moving_mean_initializer=tf.zeros_initializer(),
moving_variance_initializer=tf.ones_initializer(),
beta_regularizer=None,
gamma_regularizer=None,
training=False,
trainable=True,
name=None,
reuse=None,
renorm=False,
renorm_clipping=None,
renorm_momentum=0.99
)
- inputs: the input tensor
- training: set to True during training so statistics come from the current batch; note that the moving-average update ops must be run explicitly (they are collected in tf.GraphKeys.UPDATE_OPS)
- axis: the axis to normalize, i.e. the channel axis (-1 by default)
- momentum: momentum of the moving averages
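In training mode, batch normalization normalizes each channel using the mean and variance of the current batch, then applies the learned scale gamma and shift beta. A minimal NumPy sketch of that math (not the TensorFlow implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, epsilon=1e-3, axis=-1):
    """Training-mode batch norm: normalize each channel over the batch,
    then scale by gamma and shift by beta."""
    # reduce over every axis except the channel axis
    reduce_axes = tuple(i for i in range(x.ndim) if i != axis % x.ndim)
    mean = x.mean(axis=reduce_axes, keepdims=True)
    var = x.var(axis=reduce_axes, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta

x = np.array([[0.0, 10.0],
              [2.0, 30.0]])            # batch of 2, 2 channels
y = batch_norm(x, gamma=1.0, beta=0.0)
print(y.mean(axis=0))  # ~[0, 0]: each channel is centered
```

At test time the layer instead uses the moving averages accumulated during training, which is why the training flag and the update ops matter.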
Reference:
tf.layers