Adversarial Attack
爽儿歪歪歪
Spatially transformed adversarial examples
Reposted; the original by its author is at https://zhuanlan.zhihu.com/p/47419905. Contents: background · spatial transformation · problem formulation · experiments · visualizing the spatial transforms · attack efficiency against defended models · CAM maps · summary. Background: most earlier attack algorithms generate adversarial examples by altering the values of selected pixels; this post covers an unconventional approach that attacks by shifting pixel positions instead. The author not only tested their attack algorithm… Reposted 2021-04-25 20:03:52 · 607 reads · 0 comments
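The position-perturbation idea behind spatially transformed adversarial examples (stAdv) applies a per-pixel flow field and resamples the image bilinearly, so the attack moves pixels rather than changing their values. A minimal sketch of that resampling step, with illustrative names (the actual attack additionally optimizes the flow against the classifier):

```python
import numpy as np

def flow_resample(image, flow):
    """Resample a grayscale image with a per-pixel flow field.

    image: (H, W) array; flow: (H, W, 2) array of (dy, dx) offsets.
    Each output pixel is bilinearly interpolated from the source
    location (y + dy, x + dx), so positions are perturbed, not values.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int); x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0; wx = sx - x0
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0]
            + wy * wx * image[y1, x1])

# A zero flow leaves the image unchanged; a small smooth flow shifts content.
img = np.arange(16, dtype=float).reshape(4, 4)
same = flow_resample(img, np.zeros((4, 4, 2)))
```

Because the interpolation is differentiable in the flow, stAdv-style attacks can optimize the flow field with gradient methods while penalizing non-smooth flows.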
Adversarial Objects Against LiDAR-Based Autonomous Driving Systems
Contents: background · LiDAR-Adv · the LiDAR-based detection pipeline: LiDAR · preprocessing phase · machine learning model · post-processing phase. Background: autonomous driving systems are more than just… Original post 2021-04-25 13:21:31 · 367 reads · 0 comments
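In the preprocessing phase the teaser mentions, LiDAR-based detectors commonly project the raw point cloud onto a feature grid before the learned model runs. A minimal sketch of such a step (grid ranges, cell size, and occupancy-only features are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def voxelize_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project a LiDAR point cloud (N, 3) onto a bird's-eye-view
    occupancy grid, a common preprocessing step before the detection
    model. Points outside the range are dropped."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    # Mark occupied cells; real pipelines also encode height and intensity.
    grid[ix[keep], iy[keep]] = 1.0
    return grid
```

This hard assignment of points to cells is one reason attacking LiDAR pipelines is harder than attacking images: gradients do not flow cleanly from the grid back to the 3D object shape.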
PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving
Published at CVPR 2020. Contents: background · earlier attack methods · the PhysGAN pipeline: 1. inputs · 2. notation · 3. detailed pipeline · 4. loss-function definition · experiments · summary. This paper addresses, in real-world physical scenes, attacks against au… Original post 2021-04-21 12:07:13 · 436 reads · 0 comments
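The teaser lists a loss-function definition without showing it. As a toy sketch only (all names, terms, and weighting here are illustrative assumptions, not PhysGAN's actual objective), GAN-based physical attacks on driving models typically balance two pressures: push the model's steering prediction away from the clean one, while a discriminator keeps the generated texture realistic:

```python
import numpy as np

def physgan_style_loss(pred_steer, clean_steer, d_score, lam=1.0):
    """Toy sketch of the kind of objective PhysGAN-like methods balance:
    maximize steering deviation while staying realistic. d_score stands
    in for a discriminator's realism score in (0, 1]."""
    attack_term = -abs(pred_steer - clean_steer)   # minimizing => larger deviation
    realism_term = -np.log(d_score + 1e-8)         # minimizing => more realistic
    return attack_term + lam * realism_term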
ColorFool: Semantic Adversarial Colorization
Published at CVPR 2020. Contents: problem · earlier attack methods · BigAdv · SemanticAdv · ColorFool. Problem: the author divides adversarial perturbations into restricted and unrestricted. Restricted means the perturbation magnitude is bounded by an Lp norm; such attacks are not robust against defenses like denoising filters and adversarial training, because the perturbations usually have high spatial freq… Original post 2021-04-16 20:36:55 · 328 reads · 0 comments
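The "restricted" constraint the teaser describes — bounding the perturbation with an Lp norm — is usually enforced by projecting the adversarial image back into an eps-ball around the clean image after each step. A minimal sketch for the L-infinity case (function name and eps are illustrative):

```python
import numpy as np

def project_linf(x_adv, x, eps):
    """Project an adversarial image back into the L-infinity ball of
    radius eps around the clean image x, then into the valid pixel
    range [0, 1]. This is the 'restricted' setting the post contrasts
    with unrestricted attacks such as ColorFool."""
    return np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)
```

Unrestricted attacks like ColorFool drop this projection entirely and instead keep changes semantically plausible (e.g. recoloring regions), which is why norm-based defenses transfer poorly to them.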
Boosting the Transferability of Adversarial Samples via Attention
Published at CVPR 2020. Contents: problem · earlier attack methods · TAP · ATA. Problem: the author divides adversarial attacks into white-box and black-box attacks. Black-box: query-based attacks use the returned… Original post 2021-04-16 20:12:37 · 789 reads · 0 comments
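The white-box/black-box split the teaser introduces hinges on what the attacker can observe: a query-based black-box attack sees only the model's returned scores, not its gradients. A toy random-search sketch against a stand-in scoring function (all names and the greedy strategy are illustrative, not the paper's method, which instead uses attention to boost transfer):

```python
import numpy as np

def query_attack(x, loss_fn, eps=0.1, steps=200, seed=0):
    """Toy query-based (black-box) attack: greedily accept random
    single-coordinate perturbations whenever they increase the
    queried loss value. No gradient access is used."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = loss_fn(x_adv)
    for _ in range(steps):
        i = rng.integers(x.size)
        sign = rng.choice([-1.0, 1.0])
        cand = x_adv.copy()
        cand.flat[i] = np.clip(cand.flat[i] + sign * eps, 0.0, 1.0)
        val = loss_fn(cand)
        if val > best:          # keep only queries that help
            best, x_adv = val, cand
    return x_adv, best
```

Transfer-based attacks avoid queries altogether: they craft the example on a white-box surrogate and rely on it fooling the unseen target, which is the setting the paper's attention-based method targets.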
ILFO: Adversarial Attack on Adaptive Neural Networks
ILFO: Adversarial Attack on Adaptive Neural Networks. Published at CVPR 2020. Contents: background · the attack · attack forms: attacking early-termination AdNNs · attacking conditional-skipping AdNNs. Background: for neural networks, larger models generally perform better, but running them is resource-intensive, especially for large networks on handheld or embedded dev… Original post 2021-04-16 19:54:59 · 399 reads · 0 comments
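The early-termination adaptive networks named in the contents save computation by exiting once an intermediate confidence check passes; an energy attack like ILFO crafts inputs whose confidence stays low so every block must run. A minimal control-flow sketch (function and parameter names are illustrative):

```python
def early_exit_forward(x, blocks, confidences, threshold=0.9):
    """Sketch of an early-termination adaptive NN: after each block,
    a confidence score decides whether to stop. An ILFO-style energy
    attack pushes inputs toward low confidence at every exit, forcing
    the network through all blocks and maximizing computation."""
    used = 0
    for block, conf in zip(blocks, confidences):
        x = block(x)
        used += 1
        if conf(x) >= threshold:   # confident enough: terminate early
            break
    return x, used
```

The attack surface here is the `used` count itself: unlike a misclassification attack, the adversary's objective is the network's inference cost, not its final label.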