
Membership Inference Attacks

johanos1 edited this page Dec 19, 2024 · 27 revisions

LeakPro supports a wide range of attack scenarios targeting different privacy vulnerabilities in machine learning models. This section provides an in-depth look at each scenario, including supported attack types, data modalities, and references to specific implementations.


3.1 Membership Inference Attacks

Attack Types

  • Label-Only Attacks: Infer membership using only the model's predicted (hard) labels.
  • Logit-Based Attacks: Use the model's output scores (logits or confidence values) to increase inference accuracy.
  • Loss-Based Attacks: Exploit per-example loss values, which are typically lower for training members.
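To make the distinction concrete, the simplest score-based attack just thresholds the model's confidence on the predicted class, since training members tend to be classified more confidently. The sketch below is purely illustrative (function and argument names are assumptions, not LeakPro's API):

```python
import numpy as np

def confidence_threshold_attack(confidences, threshold=0.9):
    """Minimal score-based MIA sketch: guess "member" whenever the
    model's top-class confidence exceeds a fixed threshold.

    `confidences` holds softmax probabilities of the predicted class;
    all names here are illustrative, not part of LeakPro.
    """
    return np.asarray(confidences, dtype=float) > threshold

# Toy usage: overconfident samples are flagged as likely members.
scores = [0.99, 0.55, 0.95, 0.40]
print(confidence_threshold_attack(scores).tolist())  # [True, False, True, False]
```

A label-only attack cannot use these scores at all; it must fall back on signals such as robustness to input perturbations (as in HSJ below).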

Supported Data Modalities

  • Image Data: Vision models trained on datasets like CIFAR-10 and MNIST.
  • Text Data: NLP models such as sentiment analyzers.
  • Tabular Data: Structured datasets like financial or healthcare records.
  • Graph Data: Graph neural networks used in social network analysis.

Implemented Attacks Reference


This table lists all attacks implemented in LeakPro for this scenario, linking each to its LeakPro code implementation and the original research paper.

| Attack        | Adversary Access | Code Implementation | Original Paper                     |
| ------------- | ---------------- | ------------------- | ---------------------------------- |
| HSJ           | Label-Based      | LeakPro Code        | Choquette-Choo et al., ICML (2021) |
| LiRA          | Logit-Based      | LeakPro Code        | Carlini et al., IEEE SP (2022)     |
| TrajectoryMIA | Logit-Based      | LeakPro Code        | Liu et al., ACM SIGSAC (2022)      |
| P-Attack      | Loss-Based       | LeakPro Code        | Ye et al., ACM CCS (2022)          |
| QMIA          | Logit-Based      | LeakPro Code        | Bertran et al., NeurIPS (2023)     |
| RMIA          | Logit-Based      | LeakPro Code        | Zarifzadeh et al., ICML (2024)     |
| YOQO          | Label-Based      | LeakPro Code        | Wu et al., ICLR (2024)             |
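Among the logit-based entries, the core scoring step of LiRA (Carlini et al., 2022) can be sketched in a few lines: rescale confidences with the logit transform, fit Gaussians to shadow-model "in" and "out" confidence distributions, and compare likelihoods. The snippet below is a hedged illustration of that idea, not LeakPro's implementation; all names are assumptions:

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_conf, in_confs, out_confs, eps=1e-6):
    """Illustrative per-example LiRA score (after Carlini et al., 2022).

    Confidences are softmax probabilities of the true class. They are
    rescaled with the logit transform so Gaussian fits are reasonable,
    then the log-likelihood ratio between the shadow "in" (member) and
    "out" (non-member) distributions is returned. Argument names are
    illustrative, not LeakPro's API.
    """
    def phi(p):  # logit rescaling: log(p / (1 - p)), numerically clipped
        p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
        return np.log(p / (1.0 - p))

    t, in_s, out_s = phi(target_conf), phi(in_confs), phi(out_confs)
    # Positive score: the target's confidence looks more like the
    # "trained on this example" distribution, i.e. likely a member.
    return (norm.logpdf(t, in_s.mean(), in_s.std() + eps)
            - norm.logpdf(t, out_s.mean(), out_s.std() + eps))
```

For instance, a target confidence of 0.97 scored against shadow "in" confidences near 0.96 and "out" confidences near 0.65 yields a large positive score, while a target confidence of 0.60 yields a negative one.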

How to Use This Table

  • Click on the Code Implementation links to explore the corresponding LeakPro modules.
  • Click on the Original Paper links to read the foundational research behind each attack.