CONTRIBUTING.md (+12 −1)
@@ -28,6 +28,12 @@ By participating in this project, you agree to maintain a respectful and inclusi
 3. Install development dependencies:
    ```bash
    pip install -e ".[dev]"
+
+   # For working on bias visualization
+   pip install -e ".[viz]"
+
+   # For working on evaluation tools
+   pip install -e ".[eval]"
    ```
 4. Create a new branch for your feature or bugfix:
    ```bash
@@ -98,6 +104,8 @@ For new features:
 - Add unit tests for each function or method
 - Add integration tests for interactions between components
 - Ensure tests cover both normal behavior and error cases
+- For bias visualization features, test both the numerical computations and the visualization generation
+- Mock transformer models for unit tests to avoid requiring large model downloads

 ## Documentation
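The "mock transformer models" guideline added above can be sketched as follows. This is a minimal illustration only: the `expansion_rate` helper and the LLaMA-style attribute layout are assumptions for the example, not OptiPFair's actual test code.

```python
from unittest.mock import MagicMock

def expansion_rate(model):
    """Hypothetical helper (illustrative, not the library's API): ratio of
    the MLP intermediate size to the hidden size in the first layer."""
    weight = model.model.layers[0].mlp.gate_proj.weight
    intermediate, hidden = weight.shape
    return intermediate / hidden

# Mock that mimics the attribute structure of a LLaMA-style transformer,
# so the unit test runs without downloading any real model weights.
mock_model = MagicMock()
layer = MagicMock()
layer.mlp.gate_proj.weight.shape = (11008, 4096)  # LLaMA-2-7B-style dims
mock_model.model.layers = [layer]

print(round(expansion_rate(mock_model), 2))  # 2.69
```

Because `MagicMock` auto-creates nested attributes, only the leaf values the code under test actually reads (here, `weight.shape`) need to be set explicitly.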
@@ -114,15 +122,18 @@ Documentation is a crucial part of the project. Please follow these guidelines:

 3. **README**: Update the README.md if your changes affect the installation, basic usage, or other key aspects.
+4. **Visualization Examples**: When adding new visualization features, include visual examples in the documentation.

 ## Future Roadmap

 OptiPFair is an evolving project with plans for several future enhancements. If you're interested in contributing to these areas, please join the discussion in the related issues:

 1. **Attention Layer Pruning**: Implementation of structured pruning for attention mechanisms.
-2. **Bias visualisations**: Implement visualizations of bias in pair prompts.
+2. **Bias-aware Pruning**: Techniques that optimize for both efficiency and fairness.
 3. **Block Pruning**: Methods for pruning entire transformer blocks.
 4. **Evaluation Framework**: Comprehensive evaluation suite for pruned models.
 5. **Fine-tuning Integration**: Tools for fine-tuning after pruning.
+6. **Extended Bias Analysis**: Support for intersectional and multi-attribute bias analysis.
README.md (+36 −1)
@@ -8,18 +8,22 @@
 </h3>
 </div>

-A Python library for structured pruning of large language models, with a focus on GLU architectures.
+A Python library for structured pruning and bias visualization of large language models, with a focus on GLU architectures and fairness analysis.

 ## Overview

 OptiPFair enables efficient pruning of large language models while maintaining their performance. It implements various structured pruning methods, starting with MLP pruning for GLU architectures (as used in models like LLaMA, Mistral, etc.).

 Key features:
 - GLU architecture-aware pruning that preserves model structure
 - Multiple neuron importance calculation methods
 - Support for both pruning percentage and target expansion rate
 - Simple Python API and CLI interface
 - Progress tracking and detailed statistics
+- **NEW**: Bias visualization tools to analyze and understand fairness issues
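The "pruning percentage and target expansion rate" feature in the list above can be illustrated with a small worked example. The helper below is a hypothetical sketch of the underlying arithmetic, not OptiPFair's actual API; the dimensions are LLaMA-2-7B-style values used purely for illustration.

```python
def percentage_for_expansion_rate(hidden_size, intermediate_size, target_rate):
    """Hypothetical helper (not OptiPFair's API): percentage of GLU MLP
    neurons to prune so that intermediate/hidden reaches target_rate."""
    target_intermediate = int(hidden_size * target_rate)
    pruned = intermediate_size - target_intermediate
    return 100.0 * pruned / intermediate_size

# LLaMA-2-7B-style dimensions: hidden 4096, intermediate 11008 (rate ~2.69).
# Reducing the expansion rate to 2.0 means pruning about a quarter of the
# MLP neurons in each layer.
print(round(percentage_for_expansion_rate(4096, 11008, 2.0), 1))  # 25.6
```

Expressing the target either way (a percentage or a rate) describes the same structural change; the rate form is often more intuitive because GLU models are usually characterized by their expansion ratio.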