Paper digest: Multi-Sample Dropout, a Dropout variant that trains faster and generalizes better
Paper title: Multi-Sample Dropout for Accelerated Training and Better Generalization
Paper link: https://arxiv.org/pdf/1905.09788.pdf
Author: Hiroshi Inoue
Summary:
This paper describes another variant of the dropout technique: multi-sample dropout. Standard dropout randomly selects one subset of neurons from the input at each training iteration (call this a dropout sample), whereas multi-sample dropout creates multiple dropout samples and averages the losses computed for all of them to obtain the final loss. Implementing it only requires duplicating the part of the network after the dropout layer and sharing the parameters (connection weights) among the duplicated fully connected layers; no new operators are needed. Each dropout sample uses a different mask in the dropout layer, so each keeps a different subset of neurons, but all samples pass through the same shared fully connected layers. The same loss function, such as cross entropy, is then evaluated for each dropout sample, and averaging these per-sample loss values gives the final loss. Updating the network parameters with this loss aggregated over M dropout samples has an effect similar to training on each input in a minibatch M times, which substantially reduces the number of training iterations required.
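Before the full implementation, here is a minimal, self-contained sketch of the core computation. The feature dimension (128), number of classes (10), batch size (32), and M = 4 below are illustrative assumptions, not values taken from the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F

M = 4                            # number of dropout samples
dropout = nn.Dropout(p=0.5)      # each call draws a fresh random mask
fc = nn.Linear(128, 10)          # one classifier, shared by all dropout samples

feature = torch.randn(32, 128)   # features from the shared backbone (assumed shape)
target = torch.randint(0, 10, (32,))

loss = 0.0
for _ in range(M):
    logits = fc(dropout(feature))             # different mask, same weights
    loss = loss + F.cross_entropy(logits, target)
loss = loss / M                               # final loss = mean over dropout samples

Because the backbone runs only once and only the cheap dropout-plus-classifier tail is repeated, the extra cost per iteration is small relative to the effect of seeing each input M times.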
Code implementation:
import torch.nn as nn
import torch.nn.functional as F

class ResNet(nn.Module):
    def __init__(self, ResidualBlock, num_classes, dropout_num, dropout_p):
        super(ResNet, self).__init__()
        self.inchannel = 32
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(),
        )
        self.layer1 = self.make_layer(ResidualBlock, 32, 2, stride=1)
        self.layer2 = self.make_layer(ResidualBlock, 64, 2, stride=2)
        self.layer3 = self.make_layer(ResidualBlock, 64, 2, stride=2)
        self.layer4 = self.make_layer(ResidualBlock, 128, 2, stride=2)
        self.fc = nn.Linear(128, num_classes)
        # One Dropout module per dropout sample; each draws an independent mask.
        self.dropouts = nn.ModuleList([nn.Dropout(dropout_p) for _ in range(dropout_num)])

    def make_layer(self, block, channels, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)  # e.g. stride=1 -> [1, 1]; stride=2 -> [2, 1]
        layers = []
        for stride in strides:
            layers.append(block(self.inchannel, channels, stride))
            self.inchannel = channels
        return nn.Sequential(*layers)

    def forward(self, x, y=None, loss_fn=None):
        out = self.conv1(x)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        # Shared feature extraction ends here; it runs only once per input.
        feature = F.avg_pool2d(out, 4)
        if len(self.dropouts) == 0:
            # Plain path without dropout.
            out = feature.view(feature.size(0), -1)
            out = self.fc(out)
            if loss_fn is not None:
                loss = loss_fn(out, y)
                return out, loss
            return out, None
        else:
            # Multi-sample dropout: apply each mask to the same features,
            # run the shared fc layer, and accumulate logits and losses.
            for i, dropout in enumerate(self.dropouts):
                if i == 0:
                    out = dropout(feature)
                    out = out.view(out.size(0), -1)
                    out = self.fc(out)
                    if loss_fn is not None:
                        loss = loss_fn(out, y)
                else:
                    temp_out = dropout(feature)
                    temp_out = temp_out.view(temp_out.size(0), -1)
                    temp_out = self.fc(temp_out)  # shared weights across dropout samples
                    out = out + temp_out
                    if loss_fn is not None:
                        # Note: the loss must be computed on the logits
                        # (after self.fc), not on the raw pooled features.
                        loss = loss + loss_fn(temp_out, y)
            # Average the logits and the losses over all dropout samples.
            if loss_fn is not None:
                return out / len(self.dropouts), loss / len(self.dropouts)
            return out / len(self.dropouts), None
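A minimal usage sketch follows. The snippet above assumes a ResidualBlock defined elsewhere; the BasicBlock below is an illustrative stand-in with the same (inchannel, channels, stride) constructor signature, and the 3x32x32 input size matches the CIFAR-style shapes implied by conv1 and the 4x4 average pool:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    # Illustrative two-conv residual block, not part of the original snippet.
    def __init__(self, inchannel, outchannel, stride=1):
        super(BasicBlock, self).__init__()
        self.left = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 3, stride, 1, bias=False),
            nn.BatchNorm2d(outchannel),
            nn.ReLU(),
            nn.Conv2d(outchannel, outchannel, 3, 1, 1, bias=False),
            nn.BatchNorm2d(outchannel),
        )
        self.shortcut = nn.Sequential()
        if stride != 1 or inchannel != outchannel:
            # 1x1 conv to match the shape of the residual branch.
            self.shortcut = nn.Sequential(
                nn.Conv2d(inchannel, outchannel, 1, stride, bias=False),
                nn.BatchNorm2d(outchannel),
            )

    def forward(self, x):
        return F.relu(self.left(x) + self.shortcut(x))

model = ResNet(BasicBlock, num_classes=10, dropout_num=8, dropout_p=0.5)
x = torch.randn(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
logits, loss = model(x, y, loss_fn=nn.CrossEntropyLoss())
loss.backward()  # one backward pass covers all 8 dropout samples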
Effects of Multi-Sample Dropout
- Substantially reduces the number of training iterations
- Improves generalization
References
Multi-Sample Dropout for Accelerated Training and Better Generalization
"Dramatically reduce training iterations and improve generalization: IBM proposes a new version of Dropout"