content/english/algoprudence/how-we-work.md (1 addition, 1 deletion)
@@ -54,7 +54,7 @@ Advice of commission is published together with problem statement on our website
Over the years, we have developed our own deliberative audit methodology. Algorithm Audit's guidelines for convening a normative advice commission can be consulted in the document below. [Get in touch](/about/contact/) to share feedback about these guidelines.
content/english/technical-tools/BDT.md (13 additions, 5 deletions)
@@ -69,15 +69,23 @@ type: bias-detection-tool
<br>
-#####What is the tool about?
+#### What is the tool about?
-The tool identifies potentially unfairly treated groups of similar users by an AI system. The tool returns clusters of users for which the system is underperforming compared to the rest of the data set. The tool makes use of <a href="https://en.wikipedia.org/wiki/Cluster_analysis" target="_blank">clustering</a> – an unsupervised statistal learning method. This means that no data is required on protected attributes of users, e.g., gender, nationality or ethnicity, to detect higher-dimensional forms of apparently neutral differentiation, also referred to as higher-dimensional proxy or intersectional discrimination. The metric by which bias is defined can be manually chosen in advance and is referred to as the `performance metric`.
+The tool identifies groups of similar users that are potentially treated unfairly by an AI system. It returns clusters of users for which the system underperforms compared to the rest of the data set. The tool makes use of <a href="https://en.wikipedia.org/wiki/Cluster_analysis" target="_blank">clustering</a> – an unsupervised statistical learning method. This means that no data on protected attributes of users, e.g., gender, nationality or ethnicity, is required to detect indirect discrimination, also referred to as higher-dimensional proxy or intersectional discrimination. The metric by which bias is defined can be chosen manually and is referred to as the `performance metric`.
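To make the clustering step concrete, the sketch below is a minimal, generic illustration of the idea, not the tool's actual implementation: it uses scikit-learn's k-means (an assumption; the tool's own clustering method may differ) to group users on non-protected features and flags the cluster whose chosen performance metric, here a per-user error indicator, deviates most from the rest of the data set.

```python
# Minimal sketch of clustering-based bias detection, assuming k-means and a
# per-user error indicator as performance metric; not the tool's implementation.
import numpy as np
from sklearn.cluster import KMeans

def find_most_deviating_cluster(X, per_user_metric, n_clusters=5, random_state=0):
    """Cluster users on their (non-protected) features and return the cluster
    whose mean performance metric deviates most from the rest of the data."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit_predict(X)
    deviations = {}
    for c in range(n_clusters):
        in_cluster = per_user_metric[labels == c].mean()
        rest = per_user_metric[labels != c].mean()
        deviations[c] = in_cluster - rest  # e.g. how much higher the error rate is
    flagged = max(deviations, key=deviations.get)
    return flagged, labels, deviations

# Usage: X is a feature matrix without protected attributes,
# per_user_metric could be a 0/1 misclassification indicator per user.
```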
-#####How is my data processed?
+#### How is my data processed?
-The tool is privacy preserving. It uses computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and it not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [](https://pypi.org/project/unsupervised-bias-detection/)
+The tool is privacy preserving. It uses the computing power of your own computer to analyze the attached data set. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This computing approach is called *local-first* and allows organisations to use the tool securely, even with proprietary data.
+
+The software used is also available as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [](https://pypi.org/project/unsupervised-bias-detection/)
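A usage sketch of the pip package is given below. The import path, class name (`BiasAwareHierarchicalKMeans`), parameters and attributes are assumptions about the package's API and should be checked against the current release and its documentation.

```python
# pip install unsupervised-bias-detection
# NOTE: class, parameter and attribute names below are assumed and may differ
# between releases; consult the package documentation.
import numpy as np
from unsupervised_bias_detection.clustering import BiasAwareHierarchicalKMeans  # assumed import path

X = np.random.rand(200, 4)             # user features, no protected attributes required
metric = np.random.randint(0, 2, 200)  # chosen performance metric per user, e.g. misclassification

hbac = BiasAwareHierarchicalKMeans(n_iter=10, min_cluster_size=20)  # assumed parameters
hbac.fit(X, metric)
print(hbac.labels_)   # assumed attribute: cluster assignment per user
print(hbac.scores_)   # assumed attribute: bias score per cluster
```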
+
+#### What data can be processed?
+
+Numerical and categorical data sets can be analysed. The type of data is detected automatically.
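As a small illustration of the numerical-versus-categorical split, the pandas sketch below shows one common way to detect column types automatically; it is not the tool's own detection logic.

```python
# One common way to split columns into numerical and categorical types with
# pandas; illustrative only, not the tool's own detection logic.
import pandas as pd

def split_column_types(df: pd.DataFrame):
    numerical = [c for c in df.columns if pd.api.types.is_numeric_dtype(df[c])]
    categorical = [c for c in df.columns if c not in numerical]
    return numerical, categorical

df = pd.DataFrame({"age": [34, 51, 29], "region": ["north", "south", "east"]})
print(split_column_types(df))  # (['age'], ['region'])
```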
The tool returns a report which presents the cluster with the highest bias and describes this cluster by the features that characterize it. This is quantitatively expressed by the (statistically significant) differences in feature means between the identified cluster and the rest of the data. These results serve as a starting point for a deliberative assessment by human experts to evaluate potential discrimination and unfairness in the AI system under review. The tool also visualizes the outcomes.
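The feature-mean comparison in the report can be sketched as follows, assuming cluster labels and the index of the flagged cluster are already available; Welch's t-test is used here as one possible significance test, which is an assumption rather than necessarily the tool's choice.

```python
# Sketch of describing the flagged cluster by (statistically significant)
# differences in feature means; Welch's t-test is an assumed choice of test.
import numpy as np
from scipy import stats

def describe_cluster(X, feature_names, labels, flagged_cluster, alpha=0.05):
    in_cluster = X[labels == flagged_cluster]
    rest = X[labels != flagged_cluster]
    rows = []
    for j, name in enumerate(feature_names):
        t_stat, p_value = stats.ttest_ind(in_cluster[:, j], rest[:, j], equal_var=False)
        rows.append((name, in_cluster[:, j].mean(), rest[:, j].mean(), p_value))
    # keep only features whose mean differs significantly between the two groups
    return [row for row in rows if row[3] < alpha]
```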
content/nederlands/algoprudence/how-we-work.md (1 addition, 1 deletion)
@@ -55,7 +55,7 @@ Het uitgebrachte advies van de commissie wordt samen met het probleemstelling-do
Door de jaren heen hebben we onze eigen deliberatieve auditmethodologie ontwikkeld. Algorithm Audit's richtlijnen voor het bijeenbrengen van normatieve adviescommissie kunnen worden gevonden in onderstaand document.