Residual Block and Residual Bottleneck for Residual Networks in TFLearn

Residual Block

tflearn.layers.conv.residual_block(incoming, nb_blocks, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBlock')

A residual block as described in MSRA’s Deep Residual Network paper. Full pre-activation architecture is used here.

Input
4-D Tensor [batch, height, width, in_channels].

Output
4-D Tensor [batch, new height, new width, out_channels].

Arguments
incoming: Tensor. Incoming 4-D Layer.
nb_blocks: int. Number of layer blocks.
out_channels: int. The number of convolutional filters of the convolution layers.
downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
downsample_strides: int. The strides to use when downsampling.
activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
batch_norm: bool. If True, apply batch normalization.
bias: bool. If True, a bias is used.
weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'variance_scaling'.
bias_init: str (name) or tf.Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
weight_decay: float. Regularizer decay parameter. Default: 0.0001.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'ResidualBlock'.
References
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
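
Below is a minimal usage sketch in the style of TFLearn's CIFAR-10 ResNet example; the input shape, filter counts and block counts are illustrative assumptions, not requirements of the API.

import tflearn

# Small pre-activation ResNet sketch built from residual_block
# (input shape and filter counts are illustrative assumptions).
net = tflearn.input_data(shape=[None, 32, 32, 3])
net = tflearn.conv_2d(net, 16, 3, regularizer='L2', weight_decay=0.0001)
net = tflearn.residual_block(net, 3, 16)                   # 3 blocks, 16 output channels
net = tflearn.residual_block(net, 1, 32, downsample=True)  # halve spatial size: 32x32 -> 16x16
net = tflearn.residual_block(net, 2, 32)
net = tflearn.residual_block(net, 1, 64, downsample=True)  # 16x16 -> 8x8
net = tflearn.residual_block(net, 2, 64)
net = tflearn.batch_normalization(net)     # trailing BN + ReLU, since the blocks are pre-activation
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy')
model = tflearn.DNN(net)

Because the blocks use the full pre-activation ordering, a final batch_normalization/activation pair is applied before global average pooling.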

Residual Bottleneck
tflearn.layers.conv.residual_bottleneck(incoming, nb_blocks, bottleneck_size, out_channels, downsample=False, downsample_strides=2, activation='relu', batch_norm=True, bias=True, weights_init='variance_scaling', bias_init='zeros', regularizer='L2', weight_decay=0.0001, trainable=True, restore=True, reuse=False, scope=None, name='ResidualBottleneck')

A residual bottleneck block as described in MSRA’s Deep Residual Network paper. Full pre-activation architecture is used here.

Input
4-D Tensor [batch, height, width, in_channels].

Output
4-D Tensor [batch, new height, new width, out_channels].

Arguments
incoming: Tensor. Incoming 4-D Layer.
nb_blocks: int. Number of layer blocks.
bottleneck_size: int. The number of convolutional filters of the bottleneck convolutional layer.
out_channels: int. The number of convolutional filters of the layers surrounding the bottleneck layer.
downsample: bool. If True, apply downsampling using 'downsample_strides' for strides.
downsample_strides: int. The strides to use when downsampling.
activation: str (name) or function (returning a Tensor). Activation applied to this layer (see tflearn.activations). Default: 'relu'.
batch_norm: bool. If True, apply batch normalization.
bias: bool. If True, a bias is used.
weights_init: str (name) or Tensor. Weights initialization. (see tflearn.initializations) Default: 'variance_scaling'.
bias_init: str (name) or tf.Tensor. Bias initialization. (see tflearn.initializations) Default: 'zeros'.
regularizer: str (name) or Tensor. Add a regularizer to this layer's weights (see tflearn.regularizers). Default: 'L2'.
weight_decay: float. Regularizer decay parameter. Default: 0.0001.
trainable: bool. If True, weights will be trainable.
restore: bool. If True, this layer's weights will be restored when loading a model.
reuse: bool. If True and 'scope' is provided, this layer's variables will be reused (shared).
scope: str. Define this layer scope (optional). A scope can be used to share variables between layers. Note that scope will override name.
name: A name for this layer (optional). Default: 'ResidualBottleneck'.
References
Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2015.
Identity Mappings in Deep Residual Networks. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 2016.
Links
http://arxiv.org/pdf/1512.03385v1.pdf
http://arxiv.org/pdf/1603.05027.pdf (Identity Mappings in Deep Residual Networks)
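
As a companion to the example above, here is a minimal sketch of residual_bottleneck following the pattern of TFLearn's MNIST residual network example; the bottleneck sizes, channel counts and input shape are illustrative assumptions.

import tflearn

# Bottleneck ResNet sketch: each call stacks nb_blocks bottleneck units with
# 'bottleneck_size' inner filters and 'out_channels' output channels
# (the concrete numbers below are assumptions for the sketch).
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, bias=False)
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)  # 28x28 -> 14x14
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)  # 14x14 -> 7x7
net = tflearn.residual_bottleneck(net, 2, 64, 256)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy')
model = tflearn.DNN(net)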
