Fast Gradient Sign Attack (FGSM)
Algorithm background: https://blog.csdn.net/ilalaaa/article/details/105963778
In practice, a tiny change to the input data can have a huge effect on a model's output. Taking such small perturbations into account improves the model's robustness and generalization: we can generate adversarial examples ourselves and let the model learn from both the adversarial examples and the original training samples.
Code walkthrough: https://blog.csdn.net/hg_zhh/article/details/100155785
The GitHub source mainly uses FGSM to generate adversarial examples, compares the attack's effectiveness at different values of ε, and plots the results.
GitHub repository: https://github.com/fanjiarong2343/CNN_FGSM
Key code
Building the CNN:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, num_classes=10):
        super(Net, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            # BatchNorm2d after the conv normalizes the activations, so values do not
            # grow too large before the ReLU and destabilize the network
            # (see https://blog.csdn.net/bigFatCat_Tom/article/details/91619977)
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7 * 7 * 32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)  # flatten to (batch, 7*7*32)
        out = self.fc(out)
        # log-probabilities; pair with nn.NLLLoss during training
        # (see https://blog.csdn.net/qq_28418387/article/details/95918829)
        return F.log_softmax(out, dim=1)
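To verify the 7 * 7 * 32 arithmetic (each stride-2 max-pool halves the spatial size, so 28×28 MNIST inputs shrink to 14×14 and then 7×7), here is a quick shape check on a dummy batch; the class is repeated so the snippet runs standalone:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, num_classes=10):
        super(Net, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))  # 28x28 -> 14x14
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))  # 14x14 -> 7x7
        self.fc = nn.Linear(7 * 7 * 32, num_classes)

    def forward(self, x):
        out = self.layer2(self.layer1(x))
        out = out.reshape(out.size(0), -1)
        return F.log_softmax(self.fc(out), dim=1)

x = torch.randn(4, 1, 28, 28)   # a batch of 4 MNIST-sized images
out = Net()(x)
print(out.shape)                # one log-probability vector of length 10 per image
```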
The FGSM algorithm:
def fgsm_attack(image, epsilon, data_grad):
    """
    Build the perturbed image.
    :param image: original image
    :param epsilon: perturbation magnitude
    :param data_grad: gradient of the loss w.r.t. the input image
    :return: perturbed image
    """
    sign_data_grad = data_grad.sign()  # take the sign of the gradient
    perturbed_image = image + epsilon * sign_data_grad
    perturbed_image = torch.clamp(perturbed_image, 0, 1)  # clip back to the valid [0, 1] pixel range
    return perturbed_image
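fgsm_attack only applies the perturbation; the gradient it needs comes from a backward pass through the model with respect to the input. The following is a minimal sketch of that surrounding step, using a stand-in linear model rather than the repo's trained Net (the model, label, and ε value here are illustrative assumptions, not the repository's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(image, epsilon, data_grad):
    sign_data_grad = data_grad.sign()
    perturbed_image = image + epsilon * sign_data_grad
    return torch.clamp(perturbed_image, 0, 1)

# Stand-in model; in the repo this would be the trained CNN above.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10), nn.LogSoftmax(dim=1))

image = torch.rand(1, 1, 28, 28)   # a "clean" input with pixels in [0, 1]
label = torch.tensor([3])

image.requires_grad = True         # we need gradients w.r.t. the INPUT, not the weights
output = model(image)
loss = F.nll_loss(output, label)   # NLL loss pairs with log-softmax outputs
model.zero_grad()
loss.backward()

perturbed = fgsm_attack(image, epsilon=0.1, data_grad=image.grad.data)
```

Note that each pixel moves by at most ε, so the perturbation is bounded in the L∞ norm; clamping then keeps the result a valid image.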
Conclusion
[Figure_1, Figure_2, Figure_3: result plots comparing the FGSM attack at different ε values; images not included here]