Adversarial attacks on neural network policies, Huang et al. 2017
1. Goodfellow, I.J., J. Shlens, and C. Szegedy, Explaining and Harnessing Adversarial Examples. 2014.
2. Nguyen, A., J. Yosinski, and J. Clune, Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. 2015, pp. 427-436.
3. Kurakin, A., I. Goodfellow, and S. Bengio, Adversarial examples in the physical world. 2016.
4. Papernot, N., et al., Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples. 2016.
5. Huang, S., et al., Adversarial Attacks on Neural Network Policies. 2017.