Notes on "Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks"

Code: https://github.com/mzweilin/EvadeML-Zoo

  • Feature squeezing: reducing the color bit depth of each pixel and applying spatial smoothing.
  • Framework: run the model on both the original input and its squeezed version(s); if the predictions disagree substantially, flag the input as adversarial.

  • Adversarial examples attacks

    • L_p-norm attacks
    • FGSM
    • BIM
    • DeepFool
    • JSMA
    • Carlini/Wagner attacks
  • Defense:

    • Adversarial training
    • Gradient masking
    • Feature squeezing/input transformation
  • Detecting adversarial examples

    • Sample statistics: maximum mean discrepancy
    • Training a detector
    • Prediction inconsistency: one adversarial example may not fool every DNN model.
  • Color depth: reduce the 8 bits per color channel of a standard image down to i bits (i < 8), which removes the low-amplitude perturbations many attacks rely on.

  • Spatial smoothing

    • Local smoothing: slide a window over the image and replace each center pixel with the median of its neighbors (a median filter).
    • Non-local smoothing: replace a patch with a weighted average of similar patches found elsewhere in the image.
  • Most of this paper is a survey of adversarial attacks and defenses; the proposed scheme itself is simple and not particularly effective.
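The color-depth squeezer described above can be sketched in a few lines of NumPy. This is a minimal illustration assuming pixel values normalized to [0, 1], not the implementation from the EvadeML-Zoo repository:

```python
import numpy as np

def reduce_bit_depth(x, bits):
    """Squeeze an image with pixel values in [0, 1] down to `bits` bits per channel."""
    levels = 2 ** bits - 1                # number of quantization steps
    return np.round(x * levels) / levels  # quantize, then rescale back to [0, 1]
```

For example, squeezing to 1 bit maps every pixel to either 0.0 or 1.0, collapsing many slightly-perturbed inputs onto the same squeezed image.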
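Local (median) smoothing, the second squeezer, can be sketched with SciPy's `median_filter`. This is an illustrative sketch assuming an H x W x C image array and that SciPy is available; the function name and default window size are my own choices:

```python
import numpy as np
from scipy.ndimage import median_filter

def local_smooth(x, window=2):
    """Median smoothing over the spatial dimensions of an H x W x C image.

    Each pixel is replaced by the median of a `window` x `window`
    neighborhood; channels are filtered independently (size 1 on that axis).
    """
    return median_filter(x, size=(window, window, 1))
```

A median filter removes isolated outlier pixels, which is why it blunts sparse perturbations such as those from JSMA.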
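The detection framework then compares the model's softmax output on the original input against its outputs on the squeezed versions, using the maximum L1 distance as the score. A minimal sketch, assuming the probability vectors have already been computed by some model (the function names and threshold are illustrative, not from the paper's code):

```python
import numpy as np

def squeezing_score(probs_orig, probs_squeezed_list):
    """Max L1 distance between the prediction on the original input
    and the predictions on each squeezed version."""
    return max(np.abs(probs_orig - p).sum() for p in probs_squeezed_list)

def is_adversarial(probs_orig, probs_squeezed_list, threshold):
    """Flag the input as adversarial if the score exceeds a threshold
    chosen on legitimate validation data (e.g. for a target false-positive rate)."""
    return squeezing_score(probs_orig, probs_squeezed_list) > threshold
```

A legitimate input yields nearly identical predictions before and after squeezing (score near 0), while an adversarial input typically does not.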

More updates: https://github.com/Billy1900/Backdoor-Learning

Adversarial attacks are a major concern in deep learning: they can cause misclassification and undermine the reliability of models. Researchers have proposed several techniques to improve robustness against them:

  1. Adversarial training: generate adversarial examples during training and use them to augment the training data, so the model learns to resist similar perturbations.
  2. Defensive distillation: train a second model to mimic the (softened) outputs of the original model, making it harder for an adversary to find useful gradients for crafting adversarial examples.
  3. Feature squeezing: coalesce many similar inputs into one by reducing color bit depth or smoothing, shrinking the search space available to an adversary.
  4. Gradient masking: obscure or perturb the model's gradients so an adversary cannot estimate them accurately to generate adversarial examples.
  5. Adversarial detection: train a separate detector to recognize and reject adversarial examples before they reach the main model.
  6. Model compression: reduce the complexity of the model, which can make adversarial examples harder to craft.

Improving the robustness of deep learning models against adversarial attacks remains an active area of research, with new techniques continually being developed.
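The adversarial examples used in adversarial training (item 1) are often generated with FGSM, listed among the attacks above. A minimal sketch of the FGSM perturbation step, assuming the loss gradient with respect to the input has already been computed by some framework (the function name and epsilon value are illustrative):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast Gradient Sign Method: take one step of size eps along the
    sign of the loss gradient, then clip back into the valid pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)
```

During adversarial training, each batch is augmented with `fgsm_perturb`-ed copies of its inputs (keeping the original labels) before the gradient update.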
