1 Introduction
- MIM is the Momentum Iterative Method: BIM with a momentum term added to the update. Not familiar with BIM? Jump to that post first.
- Once you understand BIM (the Basic Iterative Method), MIM's principle should come easily.
- We implement MIM in PyTorch. New to PyTorch? Jump to that tutorial first.
- MIM paper link
2 The MIM Principle
- Comparison
- BIM
- MIM
- As shown above, the MIM and BIM formulas differ only in the miu*gt momentum term. The method borrows the momentum idea used for parameter updates when training neural networks.
- In plain terms, MIM makes each iteration's perturbation depend not only on the current gradient direction but also on the gradient directions computed in earlier iterations.
- Why does that matter? If the gradients from the previous iterations were all negative while the current gradient is positive, g(t+1) is still pulled toward those earlier directions, so the update applied to the adversarial example is not necessarily positive; it may remain negative.
- Intuitively, this is inertia: if you are walking forward at 1 m/s and suddenly need to turn around, you cannot instantly move backward at 1 m/s; you first have to decelerate. MIM exploits the same principle.
- What does inertia buy us? Stability. Momentum in neural network training keeps the loss curve from oscillating wildly and helps avoid poor local optima; in MIM, the momentum term improves the transferability of the adversarial examples, making the attack more effective.
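To make the update concrete, here is a minimal numpy sketch of the MI-FGSM accumulation step (the decay factor `mu`, step size `alpha`, and the toy gradient values are illustrative assumptions, not values from the paper):

```python
import numpy as np

def mim_step(x, grad, g_prev, mu=1.0, alpha=5 / 256):
    # Accumulate momentum: normalize the current gradient by its
    # L1 norm, then add the decayed previous accumulation.
    g = mu * g_prev + grad / np.sum(np.abs(grad))
    # Perturb in the sign direction of the accumulated gradient
    # and keep pixel values inside [0, 1].
    x_new = np.clip(x + alpha * np.sign(g), 0.0, 1.0)
    return x_new, g

# Toy history: two iterations with negative gradients, then a positive one.
x = np.array([0.5, 0.5])
g = np.zeros(2)
x, g = mim_step(x, np.array([-1.0, -1.0]), g)  # g -> [-0.5, -0.5]
x, g = mim_step(x, np.array([-2.0, -2.0]), g)  # g -> [-1.0, -1.0]
x, g = mim_step(x, np.array([0.3, 0.3]), g)    # current gradient positive...
print(g)  # [-0.5 -0.5]: the accumulated direction is still negative
```

This is exactly the inertia effect described above: one positive gradient is not enough to flip the accumulated direction.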
3 Coding
- Experiment steps:
- Train a simple model (MNIST handwritten-digit classification)
- Use that model to generate adversarial examples
- Visualize the adversarial examples
3.1 Training the model
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
# Load the MNIST dataset
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, download=True, transform=transforms.Compose([
        transforms.ToTensor(),
    ])),
    batch_size=10, shuffle=True)
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([
        transforms.ToTensor(),
    ])),
    batch_size=10, shuffle=True)
# Hyperparameters
batch_size = 10
epoch = 1
learning_rate = 0.001
# Per-step size; over 10 iterations the total perturbation is capped at 50/256
epsilon = 5 / 256
iter = 10
# Number of adversarial examples to generate
adver_nums = 1000
# Decay factor miu for MI-FGSM
miu = 1
# LeNet Model definition
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
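Where does the 320 in `fc1` come from? A quick plain-Python sanity check of the feature-map sizes (the helper names here are my own, not part of the model):

```python
# Trace the spatial size through the two conv + pool stages of the net above.
def conv_out(size, kernel):
    # 'valid' convolution, stride 1: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size):
    # 2x2 max-pooling halves the spatial size
    return size // 2

s = 28                        # MNIST input is 28x28
s = pool_out(conv_out(s, 5))  # conv1 (5x5) + pool -> 12
s = pool_out(conv_out(s, 5))  # conv2 (5x5) + pool -> 4
print(20 * s * s)             # 20 channels x 4 x 4 = 320
```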
# Select the device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Instantiate the network and define the optimizer
simple_model = Net().to(device)
optimizer1 = torch.optim.SGD(simple_model.parameters(), lr=learning_rate, momentum=0.9)
print(simple_model)
Output:
- Above: download the data, set the hyperparameters, and build the model.
- Below: train the model, then test it to check its accuracy.
# Train the model
def train(model, optimizer):
    for i in range(epoch):
        for j, (data, target) in tqdm(enumerate(train_loader)):
            data = data.to(device)
            target = target.to(device)
            logit = model(data)
            loss = F.nll_loss(logit, target)
            model.zero_grad()
            # loss is a scalar tensor, so backward() needs no gradient argument
            loss.backward()
            # Two ways to update the parameters: a hand-written SGD step, or
            # optimizer.step(). The hand-written version below does NOT work:
            # assigning to the loop variable `params` rebinds the local name to
            # a new tensor and leaves the model's actual parameters untouched.
            # An in-place update under torch.no_grad() would work instead.
            # for params in model.parameters():
            #     params = (params - learning_rate * params.grad).detach().requires_grad_()
            optimizer.step()
            if j % 1000 == 0:
                print('batch {}, loss = {}'.format(j, loss))

train(simple_model, optimizer1)
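Why does the commented-out "native" update do nothing? It is plain Python semantics, not a PyTorch quirk: assignment rebinds a name, it does not mutate the object the container still holds. A minimal numpy sketch of the same mechanics (the variable names are illustrative):

```python
import numpy as np

params = [np.array([1.0, 2.0])]  # stand-in for model.parameters()
grad = np.array([0.5, 0.5])
lr = 0.1

# Rebinding: `p` now points at a brand-new array; the list still
# holds the original, unchanged one.
for p in params:
    p = p - lr * grad
print(params[0])  # [1. 2.] -- nothing happened

# An in-place update mutates the array the list refers to, which is
# what optimizer.step() (or `p -= lr * p.grad` under no_grad) does.
for p in params:
    p -= lr * grad
print(params[0])  # [0.95 1.95]
```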
# Remember eval() -- forgetting it cost me a lot of debugging time.
# After training, call it to fix the dropout layers.
simple_model.eval()

# Test the model
def test(model, name):
    correct_num = torch.tensor(0).to(device)
    for j, (data, target) in tqdm(enumerate(test_loader)):
        data = data.to(device)
        target = target.to(device)
        logit = model(data)
        pred = logit.max(1)[1]
        num = torch.sum(pred == target)
        correct_num = correct_num + num
    print(correct_num)
    print('\n{} correct rate is {}'.format(name, correct_num / 10000))

test(simple_model, 'simple model')
Output:
3.2 Generating MIM adversarial examples
# FGSM attack code
def fgsm_attack(image, epsilon, data_grad):
    # This function already works on a whole batch
    # Collect the element-wise sign of the data gradient
    sign_data_grad = data_grad.sign()
    # Create the perturbed image by adjusting each pixel of the input image
    perturbed_image = image + epsilon * sign_data_grad
    # Adding clipping to maintain [0,1] range
    perturbed_image = torch.clamp(perturbed_image, 0, 1)
    # Return the perturbed image
    return perturbed_image
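As a quick sanity check that the clamping keeps pixels valid, here is the same step mirrored in numpy with toy values (not MNIST data):

```python
import numpy as np

def fgsm_attack_np(image, epsilon, data_grad):
    # numpy mirror of the torch fgsm_attack above
    return np.clip(image + epsilon * np.sign(data_grad), 0.0, 1.0)

image = np.array([0.0, 0.5, 1.0])
grad = np.array([-1.0, 2.0, 3.0])
print(fgsm_attack_np(image, 5 / 256, grad))
# The first pixel is clipped back to 0, the last stays at 1.
```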
# Take 1000 test-set images as the clean data for generating adversarial examples
# With batch_size = 10, 100 loop iterations are enough
adver_example_by_IMFGSM = torch.zeros((batch_size, 1, 28, 28)).to(device)
adver_target = torch.zeros(batch_size).to(device)
clean_example = torch.zeros((batch_size, 1, 28, 28)).to(device)
clean_target = torch.zeros(batch_size).to(device)

for i, (data, target) in enumerate(test_loader):
    data, target = data.to(device), target.to(device)
    data.requires_grad = True
    if i == 0:
        clean_example = data
    else:
        clean_example = torch.cat((clean_example, data), dim=0)

    data_grad = torch.zeros((batch_size, 1, 28, 28)).to(device)
    for j in range(iter):
        output = simple_model(data)
        loss = F.nll_loss(output, target)
        simple_model.zero_grad()
        loss.backward()
        # Accumulate momentum: normalize the gradient w.r.t. x by its L1 norm
        # (note the abs -- summing the raw gradient would not be an L1 norm);
        # reshape so the division broadcasts over the batch dimension
        data_grad = miu * data_grad + data.grad.data / torch.sum(
            torch.abs(data.grad.data), dim=[1, 2, 3]).reshape(batch_size, 1, 1, 1)
        data = fgsm_attack(data, epsilon, data_grad)
        # While iterating, detach the reused variable so it becomes a leaf
        # node of the new computation graph; since we still need its gradient
        # in the next iteration, re-enable requires_grad afterwards
        data.detach_()
        data.requires_grad = True

    # Attack simple_model with the adversarial examples
    pred = simple_model(data).max(1)[1]
    if i == 0:
        adver_example_by_IMFGSM = data
        clean_target = target
        adver_target = pred
    else:
        adver_example_by_IMFGSM = torch.cat((adver_example_by_IMFGSM, data), dim=0)
        clean_target = torch.cat((clean_target, target), dim=0)
        adver_target = torch.cat((adver_target, pred), dim=0)

    if i + 1 >= adver_nums / batch_size:
        break

print(adver_example_by_IMFGSM.shape)
print(adver_target.shape)
print(clean_example.shape)
print(clean_target.shape)
- The adversarial examples are now generated and stored in the variable adver_example_by_IMFGSM.
- adver_target, clean_example, and clean_target are kept for the visualization below.
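A natural next check is the attack success rate: how often the prediction on the adversarial example no longer matches the clean label. A minimal sketch with toy label lists (the function name is my own; on the real tensors you would compare clean_target with adver_target):

```python
import numpy as np

def attack_success_rate(clean_labels, adver_preds):
    # Fraction of examples whose adversarial prediction differs from
    # the clean label (untargeted-attack success).
    clean_labels = np.asarray(clean_labels)
    adver_preds = np.asarray(adver_preds)
    return float(np.mean(clean_labels != adver_preds))

# Toy example: 3 of 4 predictions were flipped by the attack.
print(attack_success_rate([7, 2, 1, 0], [3, 2, 8, 5]))  # 0.75
```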
3.3 Visualization
def plot_clean_and_adver(adver_example, adver_target, clean_example, clean_target):
    n_cols = 5
    n_rows = 5
    cnt = 1
    cnt1 = 1
    plt.figure(figsize=(n_cols * 4, n_rows * 2))
    for i in range(n_cols):
        for j in range(n_rows):
            plt.subplot(n_cols, n_rows * 2, cnt1)
            plt.xticks([])
            plt.yticks([])
            plt.title("{} -> {}".format(clean_target[cnt], adver_target[cnt]))
            plt.imshow(clean_example[cnt].reshape(28, 28).to('cpu').detach().numpy(), cmap='gray')
            plt.subplot(n_cols, n_rows * 2, cnt1 + 1)
            plt.xticks([])
            plt.yticks([])
            plt.imshow(adver_example[cnt].reshape(28, 28).to('cpu').detach().numpy(), cmap='gray')
            cnt = cnt + 1
            cnt1 = cnt1 + 2
    plt.show()

plot_clean_and_adver(adver_example_by_IMFGSM, adver_target, clean_example, clean_target)
Output:
Appendix
Code:
https://colab.research.google.com/drive/1F5gMNZln0M9ezfl2Nz5BCTLViFr4Cq2d?usp=sharing
If you have questions, leave a comment; I will answer what I can.