Replies: 1 comment
woah, chill down g, too much effort, literally nobody read it
The provided script outlines the steps for federated unlearning, a process in which a global model is trained collaboratively across multiple clients and a specific client's data is then "forgotten" from the model to enhance privacy or security. Let's break down what each step does:
1. Federated learning settings
2. Client data loading and model initialization
3. Federated learning and unlearning training
4. Membership inference attack (MIA) against the global model (GM)
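The training and unlearning steps above can be sketched with a toy example. This is a minimal illustration, not the script's actual code: the scalar "model", the client data, and the retrain-from-scratch unlearning strategy are all assumptions made here for clarity (the script may use a different unlearning method).

```python
# Toy federated averaging (FedAvg) with unlearning by retraining.
# The "model" is a single scalar weight pulled toward each client's data mean.

def local_update(weight, data, lr=0.1):
    """One gradient step on a squared-error objective for this client."""
    target = sum(data) / len(data)
    return weight - lr * 2 * (weight - target)

def fed_avg(weights):
    """Server-side aggregation: average the clients' updated weights."""
    return sum(weights) / len(weights)

def train(clients, rounds=50):
    """Run several federated rounds from a fresh model."""
    w = 0.0
    for _ in range(rounds):
        w = fed_avg([local_update(w, data) for data in clients])
    return w

# Hypothetical client datasets; the third client is the one to "forget".
clients = [[1.0, 2.0], [2.0, 3.0], [10.0, 12.0]]

w_standard = train(clients)      # FL Standard: trained on all clients
w_unlearn = train(clients[:2])   # FL Unlearn: retrained without client 3
```

Retraining without the forgotten client is the exact (but expensive) baseline for unlearning; practical methods approximate this result without a full retrain. In the toy run, `w_unlearn` ends up noticeably lower than `w_standard` because the outlier client's influence is gone.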
The output includes precision and recall scores for the MIA against each version of the global model. Precision measures the proportion of correctly identified positives among all samples classified as positive by the attacker. Recall measures the proportion of correctly identified positives among all actual positives.
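A quick sketch of how those two scores are computed, assuming the attacker emits a binary "member" prediction per sample (the function and variable names here are illustrative, not taken from the script):

```python
def precision_recall(predicted_member, is_member):
    """Precision and recall of a membership inference attack.

    predicted_member: attacker's guesses (truthy = "was in training set")
    is_member: ground-truth membership labels
    """
    tp = sum(p and m for p, m in zip(predicted_member, is_member))
    fp = sum(p and not m for p, m in zip(predicted_member, is_member))
    fn = sum(not p and m for p, m in zip(predicted_member, is_member))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: attacker flags 4 samples, 3 of which really were members,
# out of 5 true members overall.
pred = [1, 1, 1, 1, 0, 0, 0, 0]
true = [1, 1, 1, 0, 1, 1, 0, 0]
p, r = precision_recall(pred, true)  # precision 0.75, recall 0.6
```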
From the provided output, comparing precision and recall between the two versions of the global model (FL Standard and FL Unlearn) shows how effective the unlearning process is: lower MIA scores for FL Unlearn indicate reduced vulnerability to membership inference attacks and better preservation of privacy.