Batch Normalization: Mathematical Principles and Derivations
As data flows through layer after layer of a network, the distribution of the outputs shifts. This phenomenon is called Internal Covariate Shift, and it makes learning harder for the downstream layers:
- the whole network learns more slowly;
- the data distribution easily drifts into the gradient-saturated regions of activation functions such as sigmoid and tanh, slowing the network's convergence.
Directly normalizing every layer is unreasonable
If every layer's output were normalized to a standard normal distribution (mean 0, variance 1), the network would be unable to learn the features of the input data, because the normalization flattens them all out. The information encoded in the lower layers' learned parameters would be thrown away, reducing the network's ability to represent the data.
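One way to see the problem: a standard-normal input keeps a sigmoid activation in its nearly linear region around 0, so the nonlinearity barely contributes. A minimal sketch in plain NumPy (illustrative only, not from the original post):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Standard-normal inputs mostly fall within [-2, 2], where sigmoid
# is well approximated by its tangent line at 0: 0.5 + x / 4.
x = np.linspace(-2, 2, 401)
deviation = np.abs(sigmoid(x) - (0.5 + x / 4))
print(deviation.max())  # ~0.12 at the edges, < 0.02 for |x| <= 1:
# on normalized inputs the "nonlinearity" is nearly linear.
```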
Steps of Batch Normalization [1]

- Compute the mean of the whole mini-batch:
  $$\mu_b = \frac{1}{m}\sum_{i=1}^m x_i$$
- Compute the variance of the whole mini-batch:
  $$\sigma^2_b = \frac{1}{m}\sum_{i=1}^m (x_i - \mu_b)^2$$
- Normalize the data toward a standard normal distribution:
  $$\hat{x}_i = \frac{x_i - \mu_b}{\sqrt{\sigma^2_b + \varepsilon}}$$
  where $\varepsilon$ guards against the variance being 0.
- Introduce learnable scale and shift parameters to obtain the final normalized result:
  $$y_i = \gamma \hat{x}_i + \beta$$
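A minimal NumPy sketch of these four steps in training mode; the function name, the `(m, d)` batch layout, and the default `eps` are assumptions made here for illustration:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Batch Normalization forward pass (training mode).

    x: (m, d) mini-batch, one row per example.
    gamma, beta: (d,) learnable scale and shift.
    """
    mu_b = x.mean(axis=0)                      # batch mean, one per feature
    var_b = x.var(axis=0)                      # biased batch variance (divides by m)
    x_hat = (x - mu_b) / np.sqrt(var_b + eps)  # normalize toward ~N(0, 1)
    y = gamma * x_hat + beta                   # learnable scale and shift
    return y, mu_b, var_b

# Usage: a batch of m=4 examples with d=3 features
x = np.random.randn(4, 3) * 2.0 + 5.0
y, mu_b, var_b = batch_norm_train(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0), y.var(axis=0))  # ~0 and ~1 per feature
```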
At test time, the unbiased variance estimate [2] is used to estimate the population variance.
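Concretely, following the inference procedure of the original BN paper: the per-batch statistics collected during training are averaged, and Bessel's correction $\frac{m}{m-1}$ debiases the variance (the "test" subscript is added here for clarity):

$$\mu_{\text{test}} = \mathbb{E}_b[\mu_b], \qquad \sigma^2_{\text{test}} = \frac{m}{m-1}\,\mathbb{E}_b\!\left[\sigma^2_b\right]$$

Inference then applies these fixed statistics instead of batch statistics:

$$y = \gamma\,\frac{x - \mu_{\text{test}}}{\sqrt{\sigma^2_{\text{test}} + \varepsilon}} + \beta$$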
Question: Is the estimate of the population variance that arises in this way using the sample mean always smaller than what we would get if we used the population mean?
Answer: Yes, except when the sample mean happens to coincide with the population mean.
We are seeking the sum of squared distances from the population mean, but end up computing the sum of squared differences from the sample mean, which, as will be seen below, is the value that minimizes that sum of squares. So unless the sample happens to have the same mean as the population, this estimate always underestimates the sum of squared differences from the population mean.
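The minimizing property invoked here can be verified in one step: differentiating $f(a)=\sum_{i=1}^m (x_i-a)^2$ and setting the derivative to zero,

$$f'(a) = -2\sum_{i=1}^m (x_i - a) = 0 \;\Longrightarrow\; a = \frac{1}{m}\sum_{i=1}^m x_i = \mu_s,$$

so the sample mean is exactly the value that minimizes the sum of squared distances.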
The proof is as follows.
Let the sample mean be $\mu_s$ and the population mean be $\mu$. Then

$$\mu_s = \frac{1}{m}\sum_{i=1}^m x_i \tag{1}$$