[deep_thought PyTorch Tutorial] Study Notes 15: Dropout Principles and Their TF/Torch/Numpy Source Implementations

Dropout is a technique for preventing overfitting in neural networks: during training it randomly switches off a subset of units. In PyTorch it comes in two forms: nn.Dropout (a class) and nn.functional.dropout (a function). Units are dropped with probability p during training, and the activations are rescaled so that their expected value matches between training and testing. The source analysis below shows that dropout comes down to a Bernoulli mask plus a scaling step.

Dropout Principles and TF/Torch/Numpy Source Implementations

How Dropout Appears in PyTorch

Dropout has two forms in PyTorch: nn.Dropout, as a class, and nn.functional.dropout, as a function.
For nn.Dropout, the class must first be instantiated:

import torch
import torch.nn as nn

m = nn.Dropout(p=0.2)
input = torch.randn(20, 16)
output = m(input)

where p is the probability with which each element is zeroed.
For nn.functional.dropout, the signature is:

torch.nn.functional.dropout(input, p=0.5, training=True, inplace=False)

Note that, unlike nn.Dropout, it exposes an explicit training parameter.
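A minimal sketch of what the training flag does (the zeroed positions vary from run to run, since dropout is random):

import torch
import torch.nn.functional as F

x = torch.ones(5)
print(F.dropout(x, p=0.5, training=True))   # elements zeroed with prob 0.5, survivors scaled to 2.0
print(F.dropout(x, p=0.5, training=False))  # no-op: the input passes through unchanged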

Reading the Dropout Paper

Dropout was proposed by Hinton's group; the reference paper is "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" (Srivastava et al., JMLR 2014).
The principle of Dropout
At a high level, dropout randomly disconnects units in the network during training.
How dropout differs between training and testing
During training, each unit's output is zeroed with probability p, i.e. kept with probability 1-p. At test time all units are active, so to match the expectation seen during training, the weights (or activations) would have to be multiplied by the keep probability 1-p, giving their expected value. To avoid this extra work at test time, PyTorch instead uses "inverted dropout": during training the surviving activations are divided by (1-p), which yields the same expectation while leaving the test-time forward pass unscaled.
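A quick numeric check of why the two scaling schemes agree in expectation (a minimal numpy sketch):

import numpy as np

rng = np.random.default_rng(0)
x, p, n = 3.0, 0.2, 100_000
masks = rng.binomial(1, 1 - p, n)        # each unit kept with probability 1-p
print((x * masks).mean())                # ~ x*(1-p) = 2.4, matched at test time by scaling x with (1-p)
print((x * masks / (1 - p)).mean())      # ~ x = 3.0, inverted dropout: no test-time scaling needed
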
Source Code Analysis
Next, the source of nn.Dropout:

class _DropoutNd(Module):
    __constants__ = ['p', 'inplace']
    p: float
    inplace: bool

    def __init__(self, p: float = 0.5, inplace: bool = False) -> None:
        super(_DropoutNd, self).__init__()
        if p < 0 or p > 1:
            raise ValueError("dropout probability has to be between 0 and 1, "
                             "but got {}".format(p))
        self.p = p
        self.inplace = inplace

    def extra_repr(self) -> str:
        return 'p={}, inplace={}'.format(self.p, self.inplace)

First, Dropout inherits from _DropoutNd, which in turn inherits from nn.Module. _DropoutNd implements the shared basics, initialization, parameter validation, and extra_repr, so we will not dwell on it.
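The range check in __init__ can be exercised directly; a quick sketch:

import torch.nn as nn

nn.Dropout(p=0.3)      # valid: 0 <= p <= 1
try:
    nn.Dropout(p=1.5)  # out of range, raises ValueError from _DropoutNd.__init__
except ValueError as err:
    print(err)         # dropout probability has to be between 0 and 1, but got 1.5

The Dropout subclass itself is then minimal: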

class Dropout(_DropoutNd):
    """
    (docstring omitted)
    """
    def forward(self, input: Tensor) -> Tensor:
        return F.dropout(input, self.p, self.training, self.inplace)

As you can see, the Dropout class simply forwards to nn.functional.dropout; the self.training flag is inherited from nn.Module and is toggled by model.train() and model.eval().
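Because self.training is flipped by model.train() and model.eval(), the same module behaves differently in the two modes; a quick sketch:

import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
x = torch.ones(4)

m.train()      # self.training = True: elements randomly zeroed, survivors scaled by 1/(1-p) = 2
print(m(x))
m.eval()       # self.training = False: dropout is the identity
print(m(x))
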
Next, consider the C++ implementation that nn.functional.dropout eventually dispatches to (the Caffe2 dropout operator, shown in the original post as a screenshot).
The function first computes a scale factor of 1/(1-p) for rescaling the input. It then constructs a Bernoulli distribution dist whose success probability is the keep probability 1-p, and while iterating over the elements it evaluates dist(gen) > 0.5 to build the binary mask. Finally, each input element is multiplied by its mask value and the scale factor.
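Since the C++ listing only survives as a screenshot, here is a rough Python rendering of the described per-element logic (a sketch, not the actual Caffe2 source; the names dropout_forward, X, ratio, and gen are made up):

import random

def dropout_forward(X, ratio, gen=None):
    # mirrors the described loop: compute the scale, draw a Bernoulli
    # mask with keep probability (1 - ratio), multiply elementwise
    gen = gen or random.Random()
    scale = 1.0 / (1.0 - ratio)
    mask = [1 if gen.random() < (1.0 - ratio) else 0 for _ in X]
    Y = [x * m * scale for x, m in zip(X, mask)]
    return Y, mask

Finally, the two scaling schemes can be written out end to end in numpy: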

import numpy as np

def train(rate, x, w1, b1, w2, b2):
    # vanilla dropout: units are kept with probability (1 - rate);
    # no rescaling here, so test() must scale activations by (1 - rate)
    layer1 = np.maximum(0, np.dot(w1, x) + b1)
    mask1 = np.random.binomial(1, 1 - rate, layer1.shape)
    layer1 = layer1 * mask1
    layer2 = np.maximum(0, np.dot(w2, layer1) + b2)
    mask2 = np.random.binomial(1, 1 - rate, layer2.shape)
    layer2 = layer2 * mask2
    return layer2

def test(rate, x, w1, b1, w2, b2):
    # test-time counterpart of train(): all units active, activations
    # multiplied by the keep probability to match the training expectation
    layer1 = np.maximum(0, np.dot(w1, x) + b1)
    layer1 = layer1 * (1 - rate)
    layer2 = np.maximum(0, np.dot(w2, layer1) + b2)
    layer2 = layer2 * (1 - rate)
    return layer2

def another_train(rate, x, w1, b1, w2, b2):
    # inverted dropout (what PyTorch uses): divide the surviving
    # activations by (1 - rate) during training...
    layer1 = np.maximum(0, np.dot(w1, x) + b1)
    mask1 = np.random.binomial(1, 1 - rate, layer1.shape)
    layer1 = layer1 * mask1 / (1 - rate)

    layer2 = np.maximum(0, np.dot(w2, layer1) + b2)
    mask2 = np.random.binomial(1, 1 - rate, layer2.shape)
    layer2 = layer2 * mask2 / (1 - rate)
    return layer2

def another_test(x, w1, b1, w2, b2):
    # ...so the test-time pass needs no dropout-related scaling at all
    layer1 = np.maximum(0, np.dot(w1, x) + b1)
    layer2 = np.maximum(0, np.dot(w2, layer1) + b2)
    return layer2

Above are numpy implementations of the two dropout variants: vanilla dropout, which rescales at test time, and inverted dropout, which rescales during training and is what PyTorch uses.
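As a sanity check, averaging many stochastic passes of another_train should come close to the deterministic another_test output (exact per layer in expectation; the ReLU between the layers makes the end-to-end match only approximate). A sketch with made-up weight shapes:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1, b1 = rng.standard_normal((6, 8)), rng.standard_normal(6)
w2, b2 = rng.standard_normal((4, 6)), rng.standard_normal(4)

avg = np.mean([another_train(0.2, x, w1, b1, w2, b2) for _ in range(10_000)], axis=0)
print(avg)                              # average over random masks
print(another_test(x, w1, b1, w2, b2))  # deterministic test-time output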
