- Calibrating Noise to Sensitivity in Private Data Analysis, TCC, 2006
- How To Break Anonymity of the Netflix Prize Dataset, 2006
- Bitcoin: A Peer-to-Peer Electronic Cash System, 2008
- A Firm Foundation for Private Data Analysis, Communications of the ACM, 2011
- “You Might Also Like:” Privacy Risks of Collaborative Filtering, IEEE S&P, 2011
- Intriguing properties of neural networks, ICLR, 2014
- Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing, USENIX Security, 2014
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, ACM CCS, 2015
- DeepFool: a simple and accurate method to fool deep neural networks, CVPR, 2016
- Deep Learning with Differential Privacy, ACM CCS, 2016
- Stealing Machine Learning Models via Prediction APIs, USENIX Security, 2016
- The Limitations of Deep Learning in Adversarial Settings, IEEE EuroS&P, 2016
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, IEEE S&P, 2016
- Towards Evaluating the Robustness of Neural Networks, IEEE S&P, 2017
- Membership Inference Attacks against Machine Learning Models, IEEE S&P, 2017
- Universal adversarial perturbations, CVPR, 2017
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting, 2017
- Practical Black-Box Attacks against Machine Learning, ACM ASIACCS, 2017
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 2017
- Stealing Hyperparameters in Machine Learning, IEEE S&P, 2018
- TextBugger: Generating Adversarial Text Against Real-world Applications, NDSS, 2019
- Trojaning Attack on Neural Networks, NDSS, 2018
- Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring, USENIX Security, 2018
- Making AI Forget You: Data Deletion in Machine Learning, NeurIPS, 2019
- Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks, IEEE S&P, 2019
- Universal Adversarial Triggers for Attacking and Analyzing NLP, EMNLP, 2019
- STRIP: A Defence Against Trojan Attacks on Deep Neural Networks, ACSAC, 2019
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models, NDSS, 2019
- Machine Unlearning, IEEE S&P, 2021
- Extracting Training Data from Large Language Models, USENIX Security, 2021
- Blind Backdoors in Deep Learning Models, USENIX Security, 2021
- TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing, ACM CCS, 2021
- On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models, IEEE EuroS&P, 2021
- DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection, ICSE, 2021
- Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples, 2022
- Enhanced Membership Inference Attacks against Machine Learning Models, ACM CCS, 2022
- Reconstructing Training Data with Informed Adversaries, IEEE S&P, 2022
- Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models, IEEE S&P, 2022
- Property Inference Attacks Against GANs, NDSS, 2022
- Local and Central Differential Privacy for Robustness and Privacy in Federated Learning, NDSS, 2022
- Teacher Model Fingerprinting Attacks Against Transfer Learning, USENIX Security, 2022
- Transferring Adversarial Robustness Through Robust Representation Matching, USENIX Security, 2022
- Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks, USENIX Security, 2022
- Understanding Challenges for Developers to Create Accurate Privacy Nutrition Labels, CHI, 2022