This post introduces four normalization methods commonly used in deep learning: Batch Normalization, Layer Normalization, Instance Normalization, and Group Normalization, analyzing their computation mainly through code.
Summary
For an input feature of size NxCxHxW:
- BN normalizes each channel over all samples [mean/std shape: C]
- LN normalizes each sample over all its channels [mean/std shape: N]
- IN normalizes each channel of each sample [mean/std shape: NxC]
- GN normalizes groups of channels within each sample (the C channels are first split into G groups) [mean/std shape: NxG]
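The four mean/std shapes above can be checked directly with torch.mean (a quick sketch; the sizes N=10, C=20, H=W=5 and G=4 match the test setup later in this post):

```python
import torch

N, C, H, W, G = 10, 20, 5, 5, 4
x = torch.rand(N, C, H, W)

print(x.mean(dim=[0, 2, 3]).shape)   # BN: torch.Size([20])     -> C
print(x.mean(dim=[1, 2, 3]).shape)   # LN: torch.Size([10])     -> N
print(x.mean(dim=[2, 3]).shape)      # IN: torch.Size([10, 20]) -> N x C
# GN: reshape channels into G groups first
print(x.view(N, G, C // G, H, W).mean(dim=[2, 3, 4]).shape)  # torch.Size([10, 4]) -> N x G
```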
BatchNormalization
BN normalizes over the N, H, W dimensions and keeps the C dimension; it works poorly with small batch sizes.
def BatchNormalization(x):
    # x: [NxCxHxW]
    mean, std = mean_std(x, dim=[0, 2, 3], keepdim=True)
    x = (x - mean) / std
    return x
# track_running_stats=False: use the current batch's mean and std instead of updating running (global) statistics
# affine=False: normalize only, without the learnable gamma/beta (those are determined through training)
# num_features is the number of channels of the feature map
bn = nn.BatchNorm2d(num_features=20, affine=False, track_running_stats=False)
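As a quick sanity check (my addition, not part of the original post): after BN in training mode, each channel should have near-zero mean and near-unit variance across the N, H, W dimensions.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=20, affine=False, track_running_stats=False)
x = torch.rand(10, 20, 5, 5)
y = bn(x)  # train mode by default, so batch statistics are used

print(y.mean(dim=[0, 2, 3]).abs().max())            # ~0 for every channel
print(y.var(dim=[0, 2, 3], unbiased=False).mean())  # ~1 (slightly below, due to eps)
```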
LayerNormalization
LN normalizes over the C, H, W dimensions and keeps the N dimension.
def LayerNormalization(x):
    # x: [NxCxHxW]
    mean, std = mean_std(x, dim=[1, 2, 3], keepdim=True)
    x = (x - mean) / std
    return x
# elementwise_affine=False: no affine transform
# unlike the affine in BN and in IN below, LayerNorm's affine is elementwise:
# gamma and beta are not C-dimensional vectors but tensors of shape normalized_shape
ln = nn.LayerNorm(normalized_shape=[20, 5, 5], elementwise_affine=False)
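The elementwise affine can be seen concretely by inspecting the parameter shapes (a quick check, my addition): with elementwise_affine=True, LayerNorm's gamma covers the full normalized_shape, whereas BatchNorm's gamma has one entry per channel.

```python
import torch.nn as nn

# LayerNorm's gamma/beta cover every element of normalized_shape ...
ln_affine = nn.LayerNorm(normalized_shape=[20, 5, 5], elementwise_affine=True)
print(ln_affine.weight.shape)  # torch.Size([20, 5, 5])

# ... while BatchNorm's gamma/beta are per-channel vectors
bn_affine = nn.BatchNorm2d(num_features=20, affine=True)
print(bn_affine.weight.shape)  # torch.Size([20])
```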
InstanceNormalization
IN normalizes over the H, W dimensions and keeps the N and C dimensions.
def InstanceNormalization(x):
    # x: [NxCxHxW]
    mean, std = mean_std(x, dim=[2, 3], keepdim=True)
    x = (x - mean) / std
    return x
# track_running_stats=False: use the current batch's mean and std instead of updating running (global) statistics
# affine=False: normalize only, without the learnable gamma/beta (those are determined through training)
# num_features is the number of channels of the feature map
In = nn.InstanceNorm2d(num_features=20, affine=False, track_running_stats=False)
GroupNormalization
GN first splits the channels into groups and then normalizes within each group; it is a compromise between LN and IN.
# [NxCxHxW] -> [NxGx(C//G)xHxW], then normalize over (C//G), H, W, keeping the N and G dimensions
def GroupNormalization(x, num_groups):
    # x: [NxCxHxW]
    size = x.size()
    x = x.view(size[0], num_groups, -1, size[2], size[3])
    mean, std = mean_std(x, dim=[2, 3, 4], keepdim=True)
    x = (x - mean) / std
    x = x.view(size)
    return x
# split the 20 channels into 4 groups
gn = nn.GroupNorm(num_groups=4, num_channels=20, affine=False)
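GN's role as a compromise can be verified at its two extremes (a sketch, my addition): with num_groups=1 it matches LayerNorm over [C, H, W], and with num_groups=C it matches InstanceNorm.

```python
import torch
import torch.nn as nn

x = torch.rand(10, 20, 5, 5)

# one group covering all channels -> equivalent to LN over [C, H, W]
gn_as_ln = nn.GroupNorm(num_groups=1, num_channels=20, affine=False)
ln = nn.LayerNorm(normalized_shape=[20, 5, 5], elementwise_affine=False)
print(torch.allclose(gn_as_ln(x), ln(x), atol=1e-5))

# one group per channel -> equivalent to IN
gn_as_in = nn.GroupNorm(num_groups=20, num_channels=20, affine=False)
inorm = nn.InstanceNorm2d(num_features=20, affine=False, track_running_stats=False)
print(torch.allclose(gn_as_in(x), inorm(x), atol=1e-5))
```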
The test code is as follows:
x = torch.rand(10, 20, 5, 5)
official_bn = bn(x)
my_bn = BatchNormalization(x)
print("BatchNormalization diff: %f" % torch.sum(torch.abs(official_bn-my_bn)))
official_ln = ln(x)
my_ln = LayerNormalization(x)
print("LayerNormalization diff: %f" % torch.sum(torch.abs(official_ln-my_ln)))
official_in = In(x)
my_in = InstanceNormalization(x)
print("InstanceNormalization diff: %f" % torch.sum(torch.abs(official_in-my_in)))
official_gn = gn(x)
my_gn = GroupNormalization(x, num_groups=4)
print("GroupNormalization diff: %f" % torch.sum(torch.abs(official_gn-my_gn)))
Comparing our implementations against the official PyTorch ones gives:
BatchNormalization diff: 8.509657
LayerNormalization diff: 4.204389
InstanceNormalization diff: 86.581696
GroupNormalization diff: 17.165443
The IN result deviates most from the official implementation. The cause is the standard-deviation estimator: torch.std defaults to the unbiased (Bessel-corrected) estimator, dividing by n-1, whereas PyTorch's normalization layers use the biased estimator, dividing by n (and they add eps to the variance rather than to the std). IN reduces over only HxW = 25 elements per group, the fewest of the four methods, so the correction factor sqrt(n/(n-1)) deviates most from 1 and the accumulated difference is largest.
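One source of the mismatch is the std estimator: torch.std defaults to the unbiased (n-1) divisor, while the nn layers divide by n. The exact scale relation between the two can be demonstrated directly (a minimal check, my addition):

```python
import torch

x = torch.rand(5, 5)  # n = H x W = 25 elements, as in one IN group
n = x.numel()

std_unbiased = x.std()              # torch.std default: divides by n - 1
std_biased = x.std(unbiased=False)  # what the nn.*Norm layers use: divides by n

# the two differ by exactly sqrt(n / (n - 1)), about 1.02 for n = 25
print(torch.allclose(std_unbiased, std_biased * (n / (n - 1)) ** 0.5))
```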
Also, the mean_std function used above is a custom helper that computes the mean and standard deviation over multiple dimensions at once (before PyTorch 1.1.0, torch.std could not reduce over multiple dimensions simultaneously):
def mean_std(x, dim, keepdim=False, eps=1e-5):
    dim = list(dim) if not isinstance(dim, int) else [dim]
    size = list(x.size())
    dims = len(size)
    # move the reduced dimensions to the end, then flatten them into one
    permute_dim = [i for i in range(dims) if i not in dim]
    permute_dim += dim
    x = x.permute(*permute_dim)
    view_size = [size[i] for i in range(dims) if i not in dim]
    view_size += [-1]
    x = x.contiguous()
    x = x.view(view_size)
    mean = torch.mean(x, dim=-1, keepdim=False)
    # note: torch.std defaults to the unbiased estimator, and eps is added to
    # the std here rather than to the variance as in the official layers
    std = torch.std(x, dim=-1, keepdim=False) + eps
    if keepdim:
        final_size = [size[i] if i not in dim else 1 for i in range(dims)]
        mean = mean.view(final_size)
        std = std.view(final_size)
    return mean, std