This is a paper list on topics concerning the trust and dependability of deep learning: adversarial deep learning and privacy-preserving deep learning. A list of adversarial machine learning papers is also provided for reference.
- Audio Adversarial Examples: Targeted Attacks on Speech-to-Text - Nicholas Carlini et al., 2018
- The Limitations of Deep Learning in Adversarial Settings - Nicolas Papernot et al., (Euro S&P 2016)
- Robust Physical-World Attacks on Deep Learning Models - Kevin Eykholt et al., (CVPR 2018)
- Explaining and Harnessing Adversarial Examples - Ian J. Goodfellow et al., (ICLR 2015) (see the FGSM sketch after this list)
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks - Seyed-Mohsen Moosavi-Dezfooli et al., (CVPR 2016)
- Adversarial examples in the physical world - Alexey Kurakin et al., (ICLR 2017)
- Adversarial Machine Learning at Scale - Alexey Kurakin et al., (ICLR 2017)
- Intriguing properties of neural networks - Christian Szegedy et al., 2014
- Adversarial Attacks on Neural Network Policies - Sandy Huang et al., 2017
- Practical Black-Box Attacks against Machine Learning - Nicolas Papernot et al., 2017
- Adversarial Examples for Malware Detection - Kathrin Grosse et al., 2017
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples - Nicolas Papernot et al., 2016
- Towards Evaluating the Robustness of Neural Networks - Nicholas Carlini et al., (S&P 2017)
- Boosting Adversarial Attacks with Momentum - Yinpeng Dong et al., (CVPR 2018)
- Synthesizing Robust Adversarial Examples - Anish Athalye et al., 2017
- Learning with a Strong Adversary - Ruitong Huang et al., 2016
- Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification - Xiaoyu Cao et al., (ACSAC 2017)
- Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong - Warren He et al., 2017
- Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks - Nicolas Papernot et al., 2016
- Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization - Uri Shaham et al., 2015
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods - Nicholas Carlini et al., 2017
- Towards Deep Learning Models Resistant to Adversarial Attacks - Aleksander Madry et al., 2017
- Towards Deep Neural Network Architectures Robust to Adversarial Examples - Shixiang Gu et al., 2015
- Analysis of classifiers' robustness to adversarial perturbations - Alhussein Fawzi et al., 2018
- Towards Robust Deep Neural Networks with BANG - Andras Rozsa et al., 2018
- Certifying Some Distributional Robustness with Principled Adversarial Training - Aman Sinha et al., 2017
- Adversarial Logit Pairing - Harini Kannan et al., 2018
- Efficient Defenses Against Adversarial Attacks - Valentina Zantedeschi et al., 2017
- Adversarial Examples in Deep Learning: Characterization and Divergence - Wenqi Wei et al., 2018
- On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches - Martín Abadi et al., 2017
- Deep Learning with Differential Privacy - Martín Abadi et al., (CCS 2016)
- Privacy-Preserving Deep Learning - Reza Shokri et al., (CCS 2015)
- Distilling the Knowledge in a Neural Network - Geoffrey Hinton et al., 2015
- Large Scale Distributed Deep Networks - Jeffrey Dean et al., 2012
- Very Deep Convolutional Networks for Large-Scale Image Recognition - Karen Simonyan et al., 2015
- Spatial Transformer Networks - Max Jaderberg et al., 2015
- Adversarial Machine Learning - Ling Huang et al., 2011
- Adversarial Learning - Daniel Lowd et al., 2005
- Stealing Hyperparameters in Machine Learning - Binghui Wang et al., 2018
- Can Machine Learning be Secure? - Marco Barreno et al., 2006
- Adversarial Classification - Nilesh Dalvi et al., 2004
- Adversarial Active Learning - Brad Miller et al., 2014
- Adversarial learning: A critical review and active learning study - D.J. Miller et al., 2017
- Stealing Machine Learning Models via Prediction APIs - Florian Tramèr et al., 2016
- Bounding an Attack’s Complexity for a Simple Learning Model - Blaine Nelson et al., 2006
- Nightmare at test time: robust learning by feature deletion - Amir Globerson et al., 2006
- Evasion Attacks against Machine Learning at Test Time - Battista Biggio et al., 2013
- Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning - Benjamin I. P. Rubinstein et al., 2009
- Revealing information while preserving privacy - Irit Dinur et al., 2003
- Privacy-preserving logistic regression - Kamalika Chaudhuri et al., 2008
- A firm foundation for private data analysis - Cynthia Dwork, 2011
- P4P: Practical Large-Scale Privacy-Preserving Distributed Computation Robust against Malicious Users - Yitao Duan et al., 2010
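
For readers new to the area, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) described in the "Explaining and Harnessing Adversarial Examples" entry above. It is written in PyTorch purely for illustration; the `model`, `epsilon` value, and the assumed [0, 1] input range are placeholders, not part of any paper's reference implementation.

```python
# Minimal FGSM sketch (after Goodfellow et al., ICLR 2015).
# Assumptions: `model` is a classifier returning logits, inputs lie in [0, 1],
# and epsilon=0.03 is an arbitrary illustrative perturbation budget.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x_adv = clip(x + epsilon * sign(grad_x loss(model(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```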
License
To the extent possible under law, Wenqi Wei has waived all copyright and related or neighboring rights to this work.
