Membership Inference Attacks

johanos1 edited this page Dec 19, 2024 · 27 revisions

LeakPro supports a wide range of attack scenarios targeting different privacy vulnerabilities in machine learning models. This section provides an in-depth look at each scenario, including supported attack types, data modalities, and references to specific implementations.


3.1 Membership Inference Attacks

Attack Types

  • Label-Only Attacks: Infer membership using only predicted labels.
  • Logit-Based Attacks: Use model confidence scores (logits) to increase inference accuracy.
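The two attack types above can be illustrated with minimal decision rules. This is a hedged sketch, not LeakPro's implementation: the function names and the 0.9 threshold are illustrative assumptions. The intuition is that a model tends to be both more accurate and more confident on its training members.

```python
def label_only_guess(pred_label, true_label):
    """Label-only baseline: guess 'member' iff the model
    classifies the point correctly (uses only the predicted label)."""
    return pred_label == true_label

def logit_based_guess(probs, true_label, threshold=0.9):
    """Confidence/logit baseline: guess 'member' iff the model's
    probability on the true label exceeds a threshold.
    The 0.9 threshold is illustrative, not from LeakPro."""
    return probs[true_label] > threshold

# Toy usage: an overconfident prediction suggests membership.
probs = [0.02, 0.95, 0.03]
pred = probs.index(max(probs))
print(label_only_guess(pred, 1))        # prints True
print(logit_based_guess(probs, 1))      # prints True
print(logit_based_guess([0.4, 0.3, 0.3], 0))  # prints False
```

In practice the threshold is calibrated, e.g. against shadow models or reference data, rather than fixed; logit-based attacks typically outperform label-only attacks because the confidence score carries more signal than the bare prediction.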

Supported Data Modalities

  • Image Data: Vision models trained on datasets like CIFAR-10 and MNIST.
  • Text Data: NLP models such as sentiment analyzers.
  • Tabular Data: Structured datasets like financial or healthcare records.
  • Graph Data: Graph neural networks used in social network analysis.

Implemented Attacks Reference


This table lists all attacks implemented in LeakPro across the supported scenarios, linking each to its code implementation and the original research paper.

| Attack Scenario | Implemented Attack Types | Code Implementation | Original Paper |
|---|---|---|---|
| Membership Inference Attacks | Label-Only, Logit-Based | LeakPro Code: MIA | Shokri et al. (2017) |
| RMIA | Logit-Based | LeakPro Code: RMIA | Zarifzadeh et al., ICML (2024) |
| Federated Learning Attacks | Gradient Leakage, Model Inversion | LeakPro Code: FLA | Geiping et al. (2020) |
| Attacks on Synthetic Data | Membership Inference, Attribute Inference | LeakPro Code: SD | Yoon et al. (2019) |

How to Use This Table

  • Click on the Code Implementation links to explore the corresponding LeakPro modules.
  • Click on the Original Paper links to read the foundational research behind each attack.