[Paper Notes] Generating Adversarial Examples with Adversarial Networks

Takeaway: Since I read this paper after already reading PS-GAN, its strategy did not strike me as particularly superior. The one genuinely novel point is the evaluation setup, which uses a distilled network to simulate black-box attacks.

Abstract

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples, produced by adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and greater efficiency still requires more research. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that learn and approximate the distribution of the original instances. For AdvGAN, once the generator is trained, it can produce perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-white-box and black-box attack settings. Unlike traditional white-box attacks, the semi-white-box attack does not require access to the original target model after the generator is trained. In the black-box attack, we dynamically train a distilled model of the black-box model and optimize the generator accordingly. Compared with other attacks, the adversarial examples AdvGAN generates on different target models achieve high attack success rates under state-of-the-art defenses. On a public MNIST black-box attack challenge, our attack placed first with 92.76% accuracy.

Reading Notes

This paper appeared at IJCAI 2018 and is cited in PS-GAN as a "network-based technique" for mounting attacks. At first I did not see why it was called network-based; after reading the paper it became clear that "network-based" here simply means GAN-based.

Although the method is described as GAN-based (I have not yet studied how a GAN actually generates images internally, which is something I need to fill in later), overall it is still loss-driven optimization. The objective combines three terms: a GAN loss that distinguishes generated images from the original images, an adversarial loss that ensures the adversarial example is classified with high confidence toward the attacker-chosen result, and a hinge loss on the perturbation that, like the L2-norm term in the C&W attack, bounds the magnitude of the distortion. The generator perturbs the whole image for the white-box attack, in contrast to the patch-based perturbation in PS-GAN. A minimal sketch of the combined objective follows.
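The sketch below is my own reading of the combined objective, roughly L = L_adv + α·L_GAN + β·L_hinge. The names `generator`, `discriminator`, `target_model`, `alpha`, `beta`, `c_bound`, and `target_label` are placeholder assumptions, not identifiers from the paper's code; the discriminator is trained separately with the usual real-vs-fake objective (not shown).

```python
import torch
import torch.nn.functional as F

def advgan_generator_loss(x, target_label, generator, discriminator, target_model,
                          alpha=1.0, beta=1.0, c_bound=0.3):
    """Generator-side objective: L_adv + alpha * L_GAN + beta * L_hinge (a sketch)."""
    perturbation = generator(x)                       # G(x): full-image perturbation
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)   # adversarial example

    # GAN loss: the discriminator should judge x_adv as real data
    d_fake = discriminator(x_adv)
    loss_gan = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

    # Adversarial loss: drive the target model toward the attacker-chosen label
    loss_adv = F.cross_entropy(target_model(x_adv), target_label)

    # Hinge loss: bound the L2 norm of the perturbation, akin to C&W's distance term
    pert_norm = perturbation.flatten(1).norm(p=2, dim=1)
    loss_hinge = torch.clamp(pert_norm - c_bound, min=0).mean()

    return loss_adv + alpha * loss_gan + beta * loss_hinge
```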

For the black-box attack, a distilled model is trained to imitate the black-box target; once the distilled model is obtained, the same strategy as the white-box attack above is applied to it. To keep the distilled model's behavior on adversarial examples close to that of the black-box model, the paper adopts dynamic distillation, interleaving the distillation step with the generation of adversarial examples. A rough sketch of this alternating loop follows.
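This sketch of the alternating loop rests on my own assumptions about the training interface: `blackbox_query` is a hypothetical function returning the black-box model's soft outputs, and `distilled`, the optimizers, and `advgan_generator_loss` (from the sketch above) are placeholders rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dynamic_distillation_step(x, target_label, blackbox_query, distilled,
                              generator, discriminator,
                              opt_distilled, opt_generator):
    # 1) Distillation step: fit the distilled model to the black-box outputs
    #    on both clean inputs and the current adversarial examples.
    with torch.no_grad():
        x_adv = torch.clamp(x + generator(x), 0.0, 1.0)
        soft_clean = blackbox_query(x)      # black-box probabilities on clean data
        soft_adv = blackbox_query(x_adv)    # black-box probabilities on adversarial data
    inputs = torch.cat([x, x_adv])
    targets = torch.cat([soft_clean, soft_adv])
    log_probs = F.log_softmax(distilled(inputs), dim=1)
    loss_distill = F.kl_div(log_probs, targets, reduction="batchmean")
    opt_distilled.zero_grad()
    loss_distill.backward()
    opt_distilled.step()

    # 2) Generator step: attack the distilled model exactly as in the white-box
    #    objective sketched earlier (discriminator updates omitted here).
    loss_g = advgan_generator_loss(x, target_label, generator, discriminator, distilled)
    opt_generator.zero_grad()
    loss_g.backward()
    opt_generator.step()
```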

The experiments further verify the attack's effectiveness against defended models, where "defended" means models hardened by adversarial training; the results are better than those of the FGSM and C&W attacks.

Adversarial attacks are a major concern in deep learning because they can cause misclassification and undermine the reliability of models. In recent years, researchers have proposed several techniques to improve the robustness of deep learning models against such attacks:

1. Adversarial training: generate adversarial examples during training and use them to augment the training data, helping the model learn to resist attacks (a minimal sketch follows this list).
2. Defensive distillation: train a second model to mimic the behavior of the original model and use it for prediction, making it more difficult for an adversary to craft examples that fool it.
3. Feature squeezing: squeeze the input into a simpler representation (e.g., reduced color bit depth), shrinking the space an adversary can exploit.
4. Gradient masking: obscure or add noise to the gradients so an adversary cannot estimate them accurately enough to generate adversarial examples.
5. Adversarial detection: train a separate model to detect adversarial examples and reject them before they reach the main model.
6. Model compression: reduce the complexity of the model, making it harder to generate adversarial examples against it.

Improving the robustness of deep learning models against adversarial attacks remains an active area of research, and new techniques continue to be developed.
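To make item 1 concrete, here is a minimal adversarial-training sketch using an FGSM-style perturbation; `model`, `opt`, and `epsilon` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.1):
    """One-step perturbation in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return torch.clamp(x + epsilon * x.grad.sign(), 0.0, 1.0).detach()

def adversarial_training_step(model, opt, x, y, epsilon=0.1):
    # Augment the clean batch with its adversarial counterpart, then train on both.
    x_adv = fgsm_example(model, x, y, epsilon)
    inputs = torch.cat([x, x_adv])
    labels = torch.cat([y, y])
    opt.zero_grad()
    loss = F.cross_entropy(model(inputs), labels)
    loss.backward()
    opt.step()
    return loss.item()
```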