**Note**: All formats are fully backward compatible. Existing code continues to work without modifications.
---
## Fairness-Aware Pruning (NEW in v0.3.0)
OptiPFair v0.3.0 introduces fairness-aware pruning, which combines bias analysis with pruning decisions to create models that are both smaller and potentially less biased.
### Overview
Traditional pruning focuses solely on minimizing performance loss. Fairness-aware pruning adds an additional dimension: identifying and potentially removing neurons that contribute to demographic bias.
The workflow consists of two main steps:
1. **Analyze Neuron Bias**: Identify which neurons contribute most to bias across demographic groups
2. **Compute Fairness Scores**: Combine bias scores with importance scores for balanced pruning decisions
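
The score combination in step 2 can be sketched generically. This is a minimal illustration assuming a simple linear weighting over normalized per-neuron scores; the function name, normalization, and weighting scheme are assumptions for exposition, not OptiPFair's actual formula or API:

```python
import numpy as np

def fairness_aware_scores(importance, bias, bias_weight=0.5):
    """Combine per-neuron importance and bias into a single pruning score.

    Illustrative only: a linear trade-off between (normalized) importance
    and (normalized) bias. Neurons with low importance and high bias get
    the lowest scores, making them the first candidates for removal.
    """
    importance = np.asarray(importance, dtype=float)
    bias = np.asarray(bias, dtype=float)

    def normalize(x):
        # Rescale to [0, 1] so the two signals are comparable.
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    return (1 - bias_weight) * normalize(importance) - bias_weight * normalize(bias)

scores = fairness_aware_scores(importance=[0.9, 0.1, 0.5, 0.7],
                               bias=[0.2, 0.8, 0.1, 0.9])
prune_order = np.argsort(scores)  # lowest score pruned first -> [1, 3, 2, 0]
```

Under this weighting, neuron 1 (low importance, high bias) ranks first for removal, while neuron 0 (high importance, low bias) is kept longest.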
### Step 1: Analyze Neuron Bias
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import optipfair as opf
from optipfair.bias import analyze_neuron_bias
# Load model
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```