.. math::

    y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta
The mean and standard-deviation are calculated per-dimension over the mini-batches and :math:`\gamma` and :math:`\beta` are learnable parameter vectors of size :math:`C` (where :math:`C` is the input size). By default, the elements of :math:`\gamma` are set to 1 and the elements of :math:`\beta` are set to 0.
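As a minimal sketch of the formula above (an illustration, not the internal implementation), the training-mode output of ``nn.BatchNorm1d`` can be reproduced by hand for a 2D input; note that the variance used for normalization during training is the biased one (``unbiased=False``)::

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(20, 100)            # (N, C)

    bn = nn.BatchNorm1d(100).train()
    out = bn(x)

    # Per-feature statistics over the batch dimension.
    mean = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)  # biased variance during training
    manual = (x - mean) / torch.sqrt(var + bn.eps) * bn.weight + bn.bias

    print(torch.allclose(out, manual, atol=1e-6))  # True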
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default :attr:`momentum` of 0.1. If :attr:`track_running_stats` is set to ``False``, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
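For instance (a small sketch, assuming default initialization), a freshly constructed layer in ``eval()`` mode normalizes with its running estimates, while one built with ``track_running_stats=False`` keeps using batch statistics even in ``eval()`` mode::

    import torch
    import torch.nn as nn

    x = torch.randn(16, 4)

    bn_tracked = nn.BatchNorm1d(4).eval()                               # uses running_mean / running_var
    bn_untracked = nn.BatchNorm1d(4, track_running_stats=False).eval()  # falls back to batch statistics

    print(bn_untracked(x).mean(dim=0).abs().max())  # ~0: normalized with batch statistics
    print(bn_tracked(x).mean(dim=0).abs().max())    # not ~0: uses the (initial) running estimates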
This :attr:`momentum` argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the new observed value.
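This rule can be checked with a short sketch (assuming the default momentum of 0.1 and a fresh layer, so ``running_mean`` starts at 0)::

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bn = nn.BatchNorm1d(3, momentum=0.1).train()
    x = torch.randn(8, 3)

    old_running_mean = bn.running_mean.clone()
    bn(x)  # one training-mode forward pass updates the running estimates

    # running_mean_new = (1 - momentum) * running_mean + momentum * batch_mean
    expected = (1 - 0.1) * old_running_mean + 0.1 * x.mean(dim=0)
    print(torch.allclose(bn.running_mean, expected))  # True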
Because the Batch Normalization is done over the :math:`C` dimension, computing statistics on :math:`(N, L)` slices, it's common terminology to call this Temporal Batch Normalization.
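As a sketch of this temporal case (arbitrary shapes, training mode), the per-channel statistics of a ``(N, C, L)`` input are taken over both the batch and length dimensions::

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(4, 5, 7)            # (N, C, L)
    bn = nn.BatchNorm1d(5).train()
    out = bn(x)

    # Statistics per channel C, computed over the (N, L) slices.
    mean = x.mean(dim=(0, 2), keepdim=True)
    var = x.var(dim=(0, 2), unbiased=False, keepdim=True)
    manual = (x - mean) / torch.sqrt(var + bn.eps)
    manual = manual * bn.weight.view(1, -1, 1) + bn.bias.view(1, -1, 1)

    print(torch.allclose(out, manual, atol=1e-6))  # True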
- num_features: :math:`C` from an expected input of size :math:`(N, C, L)` or :math:`L` from input of size :math:`(N, L)`
- eps: a value added to the denominator for numerical stability. Default: 1e-5
- momentum: the value used for the running_mean and running_var computation. Can be set to ``None`` for cumulative moving average (i.e. simple average); see the sketch after this list. Default: 0.1
- affine: a boolean value that when set to ``True``, this module has learnable affine parameters. Default: ``True``
- track_running_stats: a boolean value that when set to ``True``, this module tracks the running mean and variance, and when set to ``False``, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: ``True``
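As noted in the momentum entry above, passing ``momentum=None`` makes the running estimates a cumulative (simple) average of the batch statistics seen so far; a small sketch of that behaviour::

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bn = nn.BatchNorm1d(2, momentum=None).train()

    batches = [torch.randn(10, 2) for _ in range(3)]
    for b in batches:
        bn(b)  # each forward pass folds the batch mean into a simple average

    cumulative_mean = torch.stack([b.mean(dim=0) for b in batches]).mean(dim=0)
    print(torch.allclose(bn.running_mean, cumulative_mean, atol=1e-6))  # True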
Examples::
>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input)