Source | Zhihu
Author | GeorgeLee
Link | https://zhuanlan.zhihu.com/p/145624170
Editor | 机器学习算法与自然语言处理
This article is shared for academic purposes only; in case of infringement, please contact us for removal.
This post collects the adversarial-learning papers accepted to the newly announced ICML 2020. Having relied on other people's summaries for too long, and finding none this time, I compiled and shared one myself, hhhh. If anything is missing or mischaracterized, corrections in the comments are welcome.
Adversarial Attacks
Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
Stronger and Faster Wasserstein Adversarial Attacks
Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
Adversarial Attacks on Copyright Detection Systems
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks
Nonlinear Gradient Estimation for Query Efficient Blackbox Attack
Adversarial Defenses
Adversarial Robustness via Runtime Masking and Cleansing
Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability
Adversarial Neural Pruning with Latent Vulnerability Suppression
Hierarchical Verification for Adversarial Robustness
Randomization Matters: How to Defend Against Strong Adversarial Attacks
Margin-aware Adversarial Domain Adaptation with Optimal Transport (paper not found; unconfirmed)
Second-Order Provable Defenses against Adversarial Attacks
Adversarial Risk via Optimal Transport and Optimal Couplings
Optimal Statistical Guarantees for Adversarially Robust Gaussian Classification
Scalable Differential Privacy with Certified Robustness in Adversarial Learning
Adversarial Robustness Against the Union of Multiple Threat Models
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Black-box Certification and Learning under Adversarial Perturbations
Understanding the Underlying Mechanisms, and Connections to Other Fields
Feature-map-level Online Adversarial Knowledge Distillation
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Towards Understanding the Regularization of Adversarial Robustness on Neural Networks
More Data Can Expand The Generalization Gap Between Adversarially Robust and Standard Models
Interpreting Robust Optimization via Adversarial Influence Functions
Proper Network Interpretability Helps Adversarial Robustness in Classification
DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training
Overfitting in adversarially robust deep learning
Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
Adversarial Robustness for Code
Rank Aggregation from Pairwise Comparisons in the Presence of Adversarial Corruptions (paper not found; unconfirmed)
Concise Explanations of Neural Networks using Adversarial Training
Logarithmic Regret for Online Control with Adversarial Noise (paper not found; unconfirmed)
Adversarial Filters of Dataset Biases
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
Neural Network Control Policy Verification With Persistent Adversarial Perturbation
Efficiently Learning Adversarially Robust Halfspaces with Noise
Representation Learning via Adversarially-Contrastive Optimal Transport (paper not found; unconfirmed)