A .csv file of max. 1GB, with columns: features, performance metric.
Feature values can be numeric or categorical. The numeric performance metric is context-dependent. The variable can, for instance, represent being 'selected for examination' (yes or no), 'assigned to a high-risk category' (yes or no), or 'false positive' (yes or no). Low scores are considered a negative bias, i.e., if being selected for examination is considered harmful, 'selected for examination=Yes' should be codified as 0 and 'selected for examination=No' as 1.
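This codification rule can be sketched as a small preprocessing step (the column values and mapping below are illustrative, not part of the package):

```python
# Illustrative sketch: codify a harmful binary outcome so that the
# harmful value maps to the low score 0 (low score = negative bias).
raw = ["Yes", "No", "No", "Yes"]  # 'selected for examination'
codes = {"Yes": 0, "No": 1}       # being selected is harmful, so Yes -> 0
y = [codes[v] for v in raw]
print(y)  # [0, 1, 1, 0]
```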
## Example – Hierarchical Bias-Aware Clustering
Note: The feature labels used in this example can easily be changed to other numeric targets. This flexibility enables the detection of (higher-dimensional) bias in various AI classifiers.
```python
import unsupervised_bias_detection as usb
X = [[35, 55000, 1], # age, income, number of cars
     [40, 45000, 0],
     [20, 30000, 0]]
y = [1, 0, 0] # flagged for fraud examination (yes:0, no:1)
```
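The example above only constructs the dataset. As a rough, self-contained illustration of what a single split step of hierarchical bias-aware clustering does (this sketch uses plain NumPy and is not the package's API), consider partitioning the data on one feature and comparing the mean performance metric per group:

```python
import numpy as np

# Illustrative sketch (not the package's API): one split step of
# hierarchical bias-aware clustering. Partition the data on a feature
# and compare the mean performance metric per group; the group with
# the lowest mean is the candidate biased cluster to split further.
X = np.array([[35, 55000, 1],  # age, income, number of cars
              [40, 45000, 0],
              [20, 30000, 0]], dtype=float)
y = np.array([1, 0, 0])        # flagged for fraud examination (yes: 0, no: 1)

split = X[:, 1] >= np.median(X[:, 1])  # split on income at the median
for name, mask in (("income >= median", split), ("income < median", ~split)):
    print(name, y[mask].mean())
# The low-income group has the lower mean metric (0.0 vs 0.5), so it
# would be the candidate biased cluster.
```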
- Marlies van Eck, Assistant Professor in Administrative Law & AI at Radboud University
- Aileen Nielsen, Fellow Law&Tech at ETH Zürich
- Vahid Niamadpour, PhD-candidate in Linguistics at Leiden University
- Ola Al Khatib, PhD-candidate in the legal regulation of algorithmic decision-making at Utrecht University
## Help and Support
This project is still in its early stages, and the documentation is a work in progress. In the meantime, feel free to open an [issue](https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/issues), and we'll do our best to assist you.
## Contributing
Your contributions are highly encouraged! There are many opportunities to get involved, so please reach out if you'd like to contribute. Whether it's code, notebooks, examples, or documentation, every contribution is valuable, so don't hesitate to jump in. To contribute, simply fork the project, make your changes, and submit a pull request. We'll work with you to address any issues and get your code merged into the main branch.