Table of Contents
- 1. Adversarial Examples
- 1.1 Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
- 1.2 Implicit Bias of Gradient Descent based Adversarial Training on Separable Data
- 1.3 Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
- 1.4 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness
- 1.5 Robust Local Features for Improving the Generalization of Adversarial Training
- 1.6 Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
- 1.7 Improving Adversarial Robustness Requires Revisiting Misclassified Examples
- 1.8 Adversarial Policies: Attacking Deep Reinforcement Learning
- 1.9 Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
- 1.10 GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
- 1.11 Black-Box Adversarial Attack with Transferable Model-based Embedding
- 1.12 Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
- 1.13 Adversarially Robust Representations with Smooth Encoders
- 1.14 Unpaired Point Cloud Completion on Real Scans using Adversarial Training
- 1.15 Adversarially robust transfer learning
- 1.16 Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
- 1.17 Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
- 1.18 Fast is better than free: Revisiting adversarial training
- 1.19 Intriguing Properties of Adversarial Training at Scale
- 1.20 Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
- 1.21 Jacobian Adversarially Regularized Networks for Robustness
- 1.22 Certified Defenses for Adversarial Patches
- 1.23 Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
- 1.24 Provable robustness against all adversarial lp-perturbations for p ≥ 1
- 1.25 EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
- 1.26 MMA Training: Direct Input Space Margin Maximization through Adversarial Training
- 1.27 BayesOpt Adversarial Attack
- 1.28 Unrestricted Adversarial Examples via Semantic Manipulation
- 1.29 Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates
- 1.30 (Spotlight) Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
- 1.31 (Spotlight) Enhancing Adversarial Defense by k-Winners-Take-All
- 1.32 (Spotlight) FreeLB: Enhanced Adversarial Training for Natural Language Understanding
- 1.33 (Spotlight) On Robustness of Neural Ordinary Differential Equations
- 1.34 (Oral) Adversarial Training and Provable Defenses: Bridging the Gap
- 1.35 MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
- 1.36 Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
- 1.37 Towards Stable and Efficient Training of Verifiably Robust Neural Networks
- 1.38 Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
- 1.39 A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences
- 1.40 Robustness Verification for Transformers