# Membership Inference Attacks
johanos1 edited this page Dec 19, 2024
LeakPro supports a wide range of attack scenarios targeting different privacy vulnerabilities in machine learning models. This section provides an in-depth look at each scenario, including supported attack types, data modalities, and references to specific implementations.
### Attack Types

- Label-Only Attacks: Infer membership using only the model's predicted labels.
- Logit-Based Attacks: Use model confidence scores (logits) to increase inference accuracy.
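The difference between the two signal types can be illustrated with a minimal sketch (illustrative only, not LeakPro's API): a label-only attacker observes just the predicted class, while a logit-based attacker can threshold the model's confidence on the true class.

```python
import math

def label_only_score(predicted_label: int, true_label: int) -> float:
    """Label-only signal: 1.0 if the model classifies the point correctly,
    else 0.0.  Members tend to be classified correctly more often."""
    return float(predicted_label == true_label)

def logit_score(logits: list[float], true_label: int) -> float:
    """Logit-based signal: softmax probability assigned to the true class.
    Members typically receive higher confidence than non-members."""
    m = max(logits)                               # for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return exps[true_label] / sum(exps)

def infer_membership(score: float, threshold: float) -> bool:
    """Declare 'member' when the signal exceeds a threshold calibrated
    on reference (shadow) data."""
    return score > threshold
```

Because the logit-based score is continuous rather than binary, the attacker can trade off true-positive against false-positive rate by moving the threshold, which is why logit-based attacks generally dominate label-only ones.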
### Data Modalities

- Image Data: Vision models trained on datasets like CIFAR-10 and MNIST.
- Text Data: NLP models such as sentiment analyzers.
- Tabular Data: Structured datasets like financial or healthcare records.
- Graph Data: Graph neural networks used in social network analysis.
### Implemented Attacks

The table below lists all attacks implemented in LeakPro across the different scenarios, linking directly to the code implementation and the original research papers.
| Attack | Attack Type | Code Implementation | Original Paper |
|---|---|---|---|
| LiRA | Logit-Based | LeakPro Code: MIA | Carlini et al., IEEE S&P (2022) |
| RMIA | Logit-Based | LeakPro Code: RMIA | Zarifzadeh et al., ICML (2024) |
- Click on the Code Implementation links to explore the corresponding LeakPro modules.
- Click on the Original Paper links to read the foundational research behind each attack.
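As an illustration of the likelihood-ratio idea behind LiRA (Carlini et al., 2022), the sketch below logit-scales per-sample confidences and fits Gaussians to scores from shadow models trained with and without the target sample. All function names and numbers here are illustrative assumptions and do not reflect LeakPro's actual implementation.

```python
import math

def logit_scale(p: float, eps: float = 1e-12) -> float:
    """LiRA's logit transform phi(p) = log(p / (1 - p)), which makes
    per-example confidences approximately Gaussian."""
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def _mean_std(xs: list[float]) -> tuple[float, float]:
    """Population mean and standard deviation (small floor on std
    to avoid division by zero)."""
    mu = sum(xs) / len(xs)
    var = sum((v - mu) ** 2 for v in xs) / len(xs)
    return mu, math.sqrt(var) + 1e-6

def _gauss_logpdf(x: float, mu: float, sd: float) -> float:
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)

def lira_score(target_conf: float,
               in_confs: list[float],
               out_confs: list[float]) -> float:
    """Log-likelihood ratio log p(x | member) - log p(x | non-member),
    where both densities are Gaussians fitted to logit-scaled shadow-model
    confidences.  Larger scores indicate 'more likely a member'."""
    x = logit_scale(target_conf)
    mu_in, sd_in = _mean_std([logit_scale(c) for c in in_confs])
    mu_out, sd_out = _mean_std([logit_scale(c) for c in out_confs])
    return _gauss_logpdf(x, mu_in, sd_in) - _gauss_logpdf(x, mu_out, sd_out)
```

RMIA refines this idea by comparing the target sample against population samples under reference models, improving attack power when only a few reference models are available; see the linked paper for details.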