Data-driven security and privacy papers

  1. Calibrating Noise to Sensitivity in Private Data Analysis, TCC, 2006
  2. How To Break Anonymity of the Netflix Prize Dataset, 2006
  3. Bitcoin: A Peer-to-Peer Electronic Cash System, 2008
  4. A Firm Foundation for Private Data Analysis, Communications of the ACM, 2011
  5. “You Might Also Like:” Privacy Risks of Collaborative Filtering, IEEE S&P, 2011
  6. Intriguing properties of neural networks, ICLR, 2014
  7. Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, USENIX Security, 2014
  8. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, ACM CCS, 2015
  9. DeepFool: a simple and accurate method to fool deep neural networks, CVPR, 2016
  10. Deep Learning with Differential Privacy, ACM CCS, 2016
  11. Stealing Machine Learning Models via Prediction APIs, USENIX Security, 2016
  12. The Limitations of Deep Learning in Adversarial Settings, IEEE EuroS&P, 2016
  13. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, IEEE S&P, 2016
  14. Towards Evaluating the Robustness of Neural Networks, IEEE S&P, 2017
  15. Membership Inference Attacks against Machine Learning Models, IEEE S&P, 2017
  16. Universal adversarial perturbations, CVPR, 2017
  17. Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017
  18. Practical Black-Box Attacks against Machine Learning, ACM ASIACCS, 2017
  19. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017
  20. Stealing Hyperparameters in Machine Learning, IEEE S&P, 2018
  21. TextBugger: Generating Adversarial Text Against Real-world Applications, NDSS, 2019
  22. Trojaning Attack on Neural Networks, NDSS, 2018
  23. Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, USENIX Security, 2018
  24. Making AI Forget You: Data Deletion in Machine Learning, NeurIPS, 2019
  25. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, IEEE S&P, 2019
  26. Universal Adversarial Triggers for Attacking and Analyzing NLP, EMNLP, 2019
  27. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks, ACSAC, 2019
  28. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, NDSS, 2019
  29. Machine Unlearning, IEEE S&P, 2021
  30. Extracting Training Data from Large Language Models, USENIX Security, 2021
  31. Blind Backdoors in Deep Learning Models, USENIX Security, 2021
  32. TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing, ACM CCS, 2021
  33. On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models, IEEE EuroS&P, 2021
  34. DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection, ICSE, 2021
  35. Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples, 2022
  36. Enhanced Membership Inference Attacks against Machine Learning Models, ACM CCS, 2022
  37. Reconstructing Training Data with Informed Adversaries, IEEE S&P, 2022
  38. Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models, IEEE S&P, 2022
  39. Property Inference Attacks Against GANs, NDSS, 2022
  40. Local and Central Differential Privacy for Robustness and Privacy in Federated Learning, NDSS, 2022
  41. Teacher Model Fingerprinting Attacks Against Transfer Learning, USENIX Security, 2022
  42. Transferring Adversarial Robustness Through Robust Representation Matching, USENIX Security, 2022
  43. Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks, USENIX Security, 2022
  44. Understanding Challenges for Developers to Create Accurate Privacy Nutrition Labels, CHI, 2022