
Commit 1a3910b

Commit message: web app styling edits
1 parent 34972ad commit 1a3910b

File tree

6 files changed: +106 additions, −15 deletions


config/_default/menus.NL.toml

Lines changed: 2 additions & 2 deletions
@@ -63,8 +63,8 @@ url = "/nl/technical-tools/BDT"
 icon = "fa-table"
 [[main]]
 parent = "Technische tools"
-name = "Documentatie"
-url = "nl/technical-tools/documentation"
+name = "AI-verordening Implementatie Tool"
+url = "nl/technical-tools/implementation-tool"
 weight = 4
 [[main.params]]
 icon = "fa-file"

config/_default/menus.en.toml

Lines changed: 2 additions & 2 deletions
@@ -61,8 +61,8 @@ url = "/technical-tools/BDT"
 icon = "fa-table"
 [[main]]
 parent = "Technical tools"
-name = "Documentation"
-url = "/technical-tools/documentation"
+name = "AI Act Implementation Tool"
+url = "/technical-tools/implementation-tool"
 weight = 4
 [[main.params]]
 icon = "fa-file"

content/english/technical-tools/BDT.md

Lines changed: 31 additions & 10 deletions
@@ -66,25 +66,46 @@ type: bias-detection-tool
 
 #### What is the tool about?
 
-The tool identifies potentially unfairly treated groups of similar users by an AI system. The tool returns clusters of users for which the system is underperforming compared to the rest of the data set. The tool makes use of <a href="https://en.wikipedia.org/wiki/Cluster_analysis" target="_blank">clustering</a> – an unsupervised statistal learning method. This means that no data is required on protected attributes of users, e.g., gender, nationality or ethnicity, to detect indirect discrimination, also referred to as higher-dimensional proxy or intersectional discrimination. The metric by which bias is defined can be manually chosen and is referred to as the `performance metric`.
+The tool identifies groups of similar users that are potentially treated unfairly by an AI system. It returns clusters of users for which the system is underperforming compared to the rest of the data set. The tool makes use of <a href="https://en.wikipedia.org/wiki/Cluster_analysis" target="_blank">clustering</a> – an unsupervised statistical learning method. This means that no data are required on protected attributes of users, e.g., gender, nationality or ethnicity, to detect indirect discrimination, also referred to as higher-dimensional proxy or intersectional discrimination. The metric by which bias is defined can be manually chosen and is referred to as the `performance metric`.
 
-#### How is my data processed?
-
-The tool is privacy preserving. It uses computing power of your own computer to analyze the attached data set. In this architectural setup, data is processed entirely on your device and it not uploaded to any third-party, such as cloud providers. This computing approach is called *local-first* and allows organisations to securely use tools locally.
+#### What data can be processed?
 
-The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`.[![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+Numerical and categorical data can be analysed. The type of data is automatically detected by the tool. The `performance metric` column should always contain numerical values. The user should indicate in the app whether a higher or lower value of the `performance metric` is considered to be better.
+
+The tool contains a demo data set and a 'Try it out' button. More information can be found in the app.
+
+<div>
+<p><u>Example of numerical data set</u>:</p>
+<style type="text/css">.tg{border-collapse:collapse;border-spacing:0}.tg td{border-color:#000;border-style:solid;border-width:1px;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal}.tg th{border-color:#000;border-style:solid;border-width:1px;font-size:14px;font-weight:400;overflow:hidden;padding:10px 5px;word-break:normal}.tg .tg-uox0{border-color:#grey;font-weight:700;text-align:left;vertical-align:top}.tg .tg-uoz0{border-color:#grey;text-align:left;vertical-align:top} .tg-1wig{font-weight:700;text-align:left;vertical-align:top}.tg .tg-0lax{text-align:left;vertical-align:top}</style>
+<table class="tg">
+<thead>
+<tr>
+<th class="tg-uox0">feat_1</th><th class="tg-uox0">feat_2</th><th class="tg-uox0">...</th><th class="tg-uox0">feat_n</th><th class="tg-uox0">perf_metr</th>
+</tr>
+</thead>
+<tbody>
+<tr><td class="tg-uoz0">10</td><td class="tg-uoz0">1</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.1</td><td class="tg-uoz0">1</td></tr>
+<tr><td class="tg-uoz0">20</td><td class="tg-uoz0">2</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.2</td><td class="tg-uoz0">1</td></tr>
+<tr><td class="tg-uoz0">30</td><td class="tg-uoz0">3</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.3</td><td class="tg-uoz0">0</td></tr>
+</tbody>
+</table>
+</div>
+<br>
 
-#### What data can be processed?
+#### How is my data processed?
 
-Numerical and categorical data set can be analysed. The type of data is automatically detected.
+The tool is privacy preserving. It uses the computing power of your own computer to analyze the attached data set. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This computing approach is called *local-first* and allows organisations to securely use tools locally. Instructions on how to host the tool locally, including source code, can be found <a href="https://github.com/NGO-Algorithm-Audit/local-first-web-tool" target="_blank">here</a>.
 
-<div><p><u>Example</u>:</p><style type="text/css">.tg{border-collapse:collapse;border-spacing:0}.tg td{border-color:#000;border-style:solid;border-width:1px;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal}.tg th{border-color:#000;border-style:solid;border-width:1px;font-size:14px;font-weight:400;overflow:hidden;padding:10px 5px;word-break:normal}.tg .tg-uox0{border-color:#grey;font-weight:700;text-align:left;vertical-align:top}.tg .tg-uoz0{border-color:#grey;text-align:left;vertical-align:top} .tg-1wig{font-weight:700;text-align:left;vertical-align:top}.tg .tg-0lax{text-align:left;vertical-align:top}</style><table class="tg"><thead><tr><th class="tg-uox0">eig_1</th><th class="tg-uox0">eig_2</th><th class="tg-uox0">...</th><th class="tg-uox0">eig_n</th><th class="tg-uox0">pred_label</th><th class="tg-uox0">true_label</th></tr></thead><tbody><tr><td class="tg-uoz0">10</td><td class="tg-uoz0">1</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.1</td><td class="tg-uoz0">1</td><td class="tg-uoz0">1</td></tr><tr><td class="tg-uoz0">20</td><td class="tg-uoz0">2</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.2</td><td class="tg-uoz0">1</td><td class="tg-uoz0">0</td></tr><tr><td class="tg-uoz0">30</td><td class="tg-uoz0">3</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.3</td><td class="tg-uoz0">0</td><td class="tg-uoz0">0</td></tr></tbody></table></div><br>
+[![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+The software for the statistical methods used is available in a separate <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection" target="_blank">Github repository</a>, and as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`.
 
 #### What does the tool return?
 
-The tool returns a report which presents the cluster with the highest bias and describes this cluster by the features that characterizes it. This is quantitatively expressed by the (statistically significant) differences in feature means between the identified cluster and the rest of the data. These results serve as a starting point for a deliberative assessment by human experts to evaluate potential discrimination and unfairness in the AI system under review. The tool also visualizes the outcomes.
+The tool returns a PDF report or `.json` file with the identified clusters. It focuses on the identified cluster with the highest bias and describes this cluster by the features that characterize it. These results serve as a starting point for a deliberative assessment by human experts to evaluate potential discrimination and unfairness in the AI system under review. The tool also visualizes the outcomes.
 
 Try the tool below ⬇️
+<!-- This is quantitatively expressed by the (statistically significant) differences in feature means between the identified cluster and the rest of the data. -->
 
 {{< container_close >}}
 
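For reference, a minimal sketch of a data set in the upload format documented above. This is hypothetical: it assumes a pandas workflow and the file name `bias_detection_input.csv`, with column names mirroring the example table added in this hunk.

```python
# Hypothetical sketch: a data set in the documented upload format,
# i.e. feature columns plus one numerical `performance metric` column.
import pandas as pd

df = pd.DataFrame({
    "feat_1": [10, 20, 30],     # numerical feature
    "feat_2": [1, 2, 3],        # numerical or categorical feature
    "feat_n": [0.1, 0.2, 0.3],  # numerical feature
    "perf_metr": [1, 1, 0],     # performance metric: always numerical
})

# The app asks whether a higher or lower `perf_metr` value is better;
# the uploaded file itself only needs the columns above.
df.to_csv("bias_detection_input.csv", index=False)
```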
@@ -112,7 +133,7 @@ Algorithm Audit's bias detection tool is part of OECD's <a href="https://oecd.ai
 
 {{< container_open title="Hierarchical Bias-Aware Clustering (HBAC) algorithm" icon="fas fa-code-branch" id="HBAC" >}}
 
-The bias detection tool currently works for tabular numerical and categorical data. The _Hierarchical Bias-Aware Clustering_ (HBAC) algorithm processes input data according to the k-means or k-modes clustering algorithm. The HBAC-algorithm is introduced by Misztal-Radecka and Indurkya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) as published in *Information Processing and Management* (2021). Our implementation of the HBAC-algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">Github</a>.
+The bias detection tool utilizes the _Hierarchical Bias-Aware Clustering_ (HBAC) algorithm. HBAC processes input data according to the k-means (for numerical data) or k-modes (for categorical data) clustering algorithm. The HBAC-algorithm was introduced by Misztal-Radecka and Indurkhya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) published in *Information Processing and Management* (2021). Our implementation of the HBAC-algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">Github</a>. The methodology has been reviewed by a team of machine learning engineers and statisticians, and is continuously undergoing evaluation.
 
 {{< container_close >}}
 
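As a rough illustration of the idea behind HBAC referenced in this diff: a sketch only, not the `unsupervised-bias-detection` API, and simplified to a single k-means pass without the hierarchical splitting or significance testing. Clusters are scored by how far their mean performance metric deviates from the rest of the data.

```python
# Illustrative sketch of bias-aware clustering, not the actual HBAC implementation:
# run plain k-means, then score each cluster by the difference between its mean
# performance metric and the mean over all other rows.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 4))   # feature columns (feat_1 ... feat_n)
y = rng.normal(size=500)        # numerical performance metric (perf_metr)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Bias of a cluster: mean metric inside the cluster minus mean metric outside it.
bias = {c: y[labels == c].mean() - y[labels != c].mean() for c in np.unique(labels)}

# Assuming lower metric values are worse, the most negative difference marks
# the potentially disadvantaged cluster.
worst = min(bias, key=bias.get)
print(f"Cluster {worst} deviates most: {bias[worst]:.3f}")
```

The real HBAC-algorithm additionally splits clusters hierarchically with k-means or k-modes; see the linked Github repository and pip package for the actual implementation.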
content/english/technical-tools/documentation.md

Lines changed: 37 additions & 1 deletion
@@ -31,4 +31,40 @@ overview_block:
 
 {{< iframe src="https://ai-documentation.s3.eu-central-1.amazonaws.com/index.html" id="forms" height="500px" >}}
 
-{{< webapp id="webapp" appId="AIActWizard" stylesheet="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.css" src="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.js" title="" >}}
+{{< webapp id="webapp" appId="AIActWizard" src="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.js" title="" >}}
+
+<style>
+  /* Styling for form-group elements inside #AIActWizard */
+  #AIActWizard .form-group {
+    display: block;
+  }
+
+  /* Styling for form-group header labels inside #AIActWizard */
+  #AIActWizard .form-group .form-label {
+    margin-left: 0;
+    color: black;
+  }
+
+  /* Styling for intermediate-output labels in #AIActWizard */
+  #AIActWizard .intermediate-output label {
+    font-weight: 700;
+  }
+
+  /* Styling for intermediate-output textareas in #AIActWizard */
+  #AIActWizard .intermediate-output textarea {
+    border: none;
+    background-color: transparent;
+    resize: none;
+    width: 100%;
+    height: auto;
+    padding: 0;
+    margin: 0;
+    font-size: inherit;
+    font-family: inherit;
+    line-height: inherit;
+    color: inherit;
+    overflow: hidden;
+    white-space: pre-wrap;
+    word-wrap: break-word;
+  }
+</style>
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
+---
+type: regular
+title: Documentation for AI-systems
+subtitle: >
+  Open-source templates for model documentation. Based on AI Act requirements
+  and soft law frameworks, such as the [Research framework
+  Algorithms](https://www.rijksoverheid.nl/documenten/rapporten/2023/07/11/onderzoekskader-algoritmes-adr-2023#:~:text=De%20Auditdienst%20Rijk%20heeft%20een,risico's%20beheerst%20\(kunnen\)%20worden.)
+  of the Netherlands Executive Audit Agency, the [Algorithm
+  framework](https://minbzk.github.io/Algoritmekader/) of the Dutch Ministry of
+  the Interior and the Dutch Fundamental Rights Impact Assessment
+  ([IAMA](https://www.rijksoverheid.nl/documenten/rapporten/2021/02/25/impact-assessment-mensenrechten-en-algoritmes)).
+
+
+  Help develop and share feedback through
+  [Github](https://github.com/NGO-Algorithm-Audit/AlgorithmAudit_website) or via
+
+image: /images/svg-illustrations/case_repository.svg
+overview_block:
+  - title: Identification of AI-systems and high-risk algorithms
+    content: >
+      By answering a maximum of 8 targeted questions, you can determine whether a
+      data-driven application qualifies as an AI-system or as an impactful
+      algorithm. Complete the dynamic questionnaire to find out.
+    icon: fas fa-search
+    id: quick-scan
+    items:
+      - title: Identify
+        icon: fas fa-star
+        link: classification-quick-scan/#form
+---
+
+{{< iframe src="https://ai-documentation.s3.eu-central-1.amazonaws.com/index.html" id="forms" height="500px" >}}
+
+{{< webapp id="webapp" appId="AIActWizard" stylesheet="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.css" src="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.js" title="" >}}

content/nederlands/technical-tools/documentation.md renamed to content/nederlands/technical-tools/implementation-tool.md

File renamed without changes.
