ICML 2020: A Roundup of Adversarial Learning Papers



Source | Zhihu

Author | GeorgeLee

Link | https://zhuanlan.zhihu.com/p/145624170

Editor | 机器学习算法与自然语言处理 (Machine Learning Algorithms and NLP)

This article is shared for academic purposes only. If it infringes any rights, please contact us and it will be removed.

The main purpose of this post is to compile the adversarial-learning papers accepted at the newly announced ICML 2020. Having relied on others' summaries for too long, and finding none this time, I put the list together and am sharing it myself. If anything is missing or mischaracterized, please point it out in the comments.

Adversarial Attacks

  1. Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

  2. Stronger and Faster Wasserstein Adversarial Attacks

  3. Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

  4. Adversarial Attacks on Copyright Detection Systems

  5. Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

  6. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

  7. Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks

  8. Nonlinear Gradient Estimation for Query Efficient Blackbox Attack

Adversarial Defenses

  1. Adversarial Robustness via Runtime Masking and Cleansing

  2. Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability

  3. Adversarial Neural Pruning with Latent Vulnerability Suppression

  4. Hierarchical Verification for Adversarial Robustness

  5. Randomization matters. How to defend against strong adversarial attacks

  6. Margin-aware Adversarial Domain Adaptation with Optimal Transport (could not find the paper; unconfirmed)

  7. Second-Order Provable Defenses against Adversarial Attacks

  8. Adversarial Risk via Optimal Transport and Optimal Couplings

  9. Optimal Statistical Guarantees for Adversarially Robust Gaussian Classification

  10. Scalable Differential Privacy with Certified Robustness in Adversarial Learning

  11. Adversarial Robustness Against the Union of Multiple Threat Models

  12. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks

  13. Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization

  14. Black-box Certification and Learning under Adversarial Perturbations

Understanding the Underlying Mechanisms, and Connections to Other Fields

  1. Feature-map-level Online Adversarial Knowledge Distillation

  2. Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

  3. Towards Understanding the Regularization of Adversarial Robustness on Neural Networks

  4. More Data Can Expand The Generalization Gap Between Adversarially Robust and Standard Models

  5. Interpreting Robust Optimization via Adversarial Influence Functions

  6. Proper Network Interpretability Helps Adversarial Robustness in Classification

  7. DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training

  8. Overfitting in adversarially robust deep learning

  9. Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks

  10. Adversarial Robustness for Code

  11. Rank Aggregation from Pairwise Comparisons in the Presence of Adversarial Corruptions (could not find the paper; unconfirmed)

  12. Concise Explanations of Neural Networks using Adversarial Training

  13. Logarithmic Regret for Online Control with Adversarial Noise (could not find the paper; unconfirmed)

  14. Adversarial Filters of Dataset Biases

  15. Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations

  16. Neural Network Control Policy Verification With Persistent Adversarial Perturbation

  17. Efficiently Learning Adversarially Robust Halfspaces with Noise

  18. Representation Learning via Adversarially-Contrastive Optimal Transport (could not find the paper; unconfirmed)


