DEFENSE-GAN: PROTECTING CLASSIFIERS AGAINST ADVERSARIAL ATTACKS USING GENERATIVE MODELS


Samangouei P, Kabkab M, Chellappa R. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. arXiv: Computer Vision and Pattern Recognition, 2018.

@article{samangouei2018defensegan,
  title={Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models},
  author={Samangouei, Pouya and Kabkab, Maya and Chellappa, Rama},
  journal={arXiv: Computer Vision and Pattern Recognition},
  year={2018}
}

This paper introduces a defense against adversarial examples: use the generator of a trained GAN to project a sample $x$ onto the clean data manifold, yielding $\hat{x}$.

Main Idea

Recall that when the GAN objective reaches its optimum, $p_{data} = p_G$. If adversarial examples lie off the data distribution $p_{data}$, then projecting $x$ back onto $p_{data}$ (which, at optimality, equals $p_G$) should remove the adversarial perturbation, giving us a defense.

For each sample, first initialize $R$ random seeds $z_0^{(1)}, \ldots, z_0^{(R)}$. Starting from each seed, run $L$ steps of gradient descent to minimize
$$\min_z \quad \|G(z)-x\|_2^2, \tag{DGAN}$$
where $G(z)$ is the generator trained on clean samples.

This yields $R$ points $z_*^{(1)},\ldots, z_*^{(R)}$. Let $z^*$ denote the one minimizing (DGAN), and set $\hat{x} = G(z^*)$; then $\hat{x}$ is the desired projection of $x$ onto the clean data manifold. Feed $\hat{x}$ into the classifier to predict its label.
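The projection step above can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the toy linear "generator" $G(z) = Wz$ and its analytic gradient stand in for a trained GAN generator and autodiff, and the values of `R`, `L`, and the learning rate are illustrative.

```python
import numpy as np

def defense_gan_project(x, G, grad_loss, latent_dim, R=10, L=200, lr=0.05, seed=0):
    """R random restarts of L gradient-descent steps on the DGAN
    objective ||G(z) - x||^2; return the best reconstruction."""
    rng = np.random.default_rng(seed)
    best_z, best_loss = None, np.inf
    for _ in range(R):
        z = rng.standard_normal(latent_dim)          # random seed z_0^{(r)}
        for _ in range(L):
            z = z - lr * grad_loss(z, x)             # one GD step on ||G(z)-x||^2
        loss = np.sum((G(z) - x) ** 2)
        if loss < best_loss:                         # keep the best of the R restarts
            best_z, best_loss = z, loss
    return G(best_z), best_loss

# Toy linear "generator" z -> W z from R^2 to R^3 (stand-in for a trained GAN)
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
G = lambda z: W @ z
grad = lambda z, x: 2.0 * W.T @ (W @ z - x)          # d/dz ||Wz - x||^2

x_clean = G(np.array([1.0, -2.0]))                   # lies on the generator manifold
x_adv = x_clean + 0.5 * np.array([1.0, 1.0, -1.0])   # perturbation off the manifold
x_hat, loss = defense_gan_project(x_adv, G, grad, latent_dim=2)
# x_hat recovers x_clean: the off-manifold perturbation is projected away
```

Because the perturbation here is orthogonal to the range of `W`, the projection removes it entirely; for a real GAN the manifold is nonlinear and non-convex, which is exactly why the $R$ random restarts matter.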


In addition, the authors show experimentally that the reconstruction error itself can be used to detect adversarial examples: flag $x$ as adversarial according to whether $\|G(z^*)-x\|_2^2 \gtrless \theta$ for a threshold $\theta$. The resulting detector achieves good AUC.
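A sketch of this detector, with made-up reconstruction errors and threshold for illustration; the AUC is computed directly from its rank definition rather than with a library:

```python
import numpy as np

def is_adversarial(recon_loss, theta):
    # Flag x as adversarial when ||G(z*) - x||^2 exceeds the threshold theta
    return recon_loss > theta

def auc(clean_losses, adv_losses):
    """ROC AUC of the threshold detector: the probability that a randomly
    chosen adversarial sample has a larger reconstruction error than a
    randomly chosen clean one (ties count one half)."""
    clean = np.asarray(clean_losses)[:, None]
    adv = np.asarray(adv_losses)[None, :]
    return float(np.mean((adv > clean) + 0.5 * (adv == clean)))

# Hypothetical reconstruction errors ||G(z*) - x||^2 for clean vs. adversarial x
clean_losses = [0.01, 0.02, 0.03, 0.015]
adv_losses = [0.40, 0.35, 0.02, 0.50]
score = auc(clean_losses, adv_losses)   # high AUC: the errors separate the two sets
```

The one adversarial sample with a small reconstruction error (`0.02`) models a perturbation that stays close to the generator manifold, which is what keeps the AUC below 1.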

Note: the difficulty of attacking this defense with gradient-based methods is that the mapping $x \rightarrow \hat{x}$ contains an inner loop of $L$ gradient steps; backpropagating through it directly leads to exploding or vanishing gradients.

