
Commit ecc20a6

Merge pull request #140 from NGO-Algorithm-Audit/feature/self-hosted-forms
Feature/self hosted forms
2 parents c71e2f7 + a847ffe commit ecc20a6

File tree

11 files changed: +222 -200 lines changed


content/english/technical-tools/BDT.md

Lines changed: 11 additions & 16 deletions
@@ -57,11 +57,6 @@ team:
   name: Mackenzie Jorgensen
   bio: |
     PhD-candidate Computer Science, King’s College London
-web_app:
-  title: Bias detection tool
-  icon: fas fa-cloud
-  id: web-app
-  content: ''
 type: bias-detection-tool
 ---

@@ -75,7 +70,7 @@ The tool identifies potentially unfairly treated groups of similar users by an A
 
 ##### How is my data processed?
 
-The tool is privacy preserving. It uses computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and it not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+The tool is privacy preserving. It uses computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and it not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
 
 ##### What does the tool return?
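The hunk above points to the `unsupervised-bias-detection` pip package as the local-only alternative to the hosted web app. A minimal usage sketch follows; the import path, class name, constructor arguments and attributes are assumptions (a scikit-learn-style estimator is assumed) and should be checked against the package README, so treat this as illustrative rather than the package's documented API.

```python
# Illustrative sketch of a local-only bias scan with the pip package.
# Install first:  pip install unsupervised-bias-detection
# ASSUMPTION: the import path, class name, parameters and attributes below are
# not verified against the published package; consult the project README.
import numpy as np
from unsupervised_bias_detection.cluster import BiasAwareHierarchicalKMeans  # assumed path/name

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # tabular features, kept on-device
y = rng.binomial(1, 0.3, size=500).astype(float)   # per-row bias metric, e.g. an error indicator

hbac = BiasAwareHierarchicalKMeans(n_iter=10, min_cluster_size=20)  # assumed parameters
hbac.fit(X, y)
print(hbac.labels_)  # assumed attribute: cluster assignment per row
```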

@@ -85,7 +80,7 @@ Try the tool below ⬇️
 
 {{< container_close >}}
 
-{{< web_app >}}
+{{< iframe title="Bias detection tool" icon="fas fa-cloud" id="web-app" src="https://local-first-bias-detection.s3.eu-central-1.amazonaws.com/bias-detection.html" height="770px" >}}
 
 {{< promo_bar content="Do you appreciate the work of Algorithm Audit? ⭐️ us on" id="promo" >}}

@@ -109,20 +104,20 @@ Algorithm Audit's bias detection tool is part of OECD's <a href="https://oecd.ai
 
 {{< container_open title="Hierarchical Bias-Aware Clustering (HBAC) algorithm" icon="fas fa-code-branch" id="HBAC" >}}
 
-The bias detection tool currently works for tabular numerical and categorical data. The *Hierarchical Bias-Aware Clustering* (HBAC) algorithm processes input data according to the k-means or k-modes clustering algorithm. The HBAC-algorithm is introduced by Misztal-Radecka and Indurkya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) as published in *Information Processing and Management* (2021). Our implementation of the HBAC-algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">Github</a>.
+The bias detection tool currently works for tabular numerical and categorical data. The _Hierarchical Bias-Aware Clustering_ (HBAC) algorithm processes input data according to the k-means or k-modes clustering algorithm. The HBAC-algorithm is introduced by Misztal-Radecka and Indurkya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) as published in *Information Processing and Management* (2021). Our implementation of the HBAC-algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">Github</a>.
 
 {{< container_close >}}
 
 {{< container_open title="FAQ" icon="fas fa-question-circle" >}}
 
 ##### Why this bias detection tool?
 
-* **Quantitative-qualitative joint method**: Data-driven bias testing combined with the balanced and context-sensitive judgment of human experts;
-* **Unsupervised bias detection**: No user data needed on protected attributes;
-* **Bias scan tool**: Scalable method based on statistical learning to detect algorithmic bias;
-* **Detects complex bias**: Identifies unfairly treated groups characterized by mixture of features, detects intersectional bias;
-* **Model-agnostic**: Works for all AI systems;
-* **Open-source and not-for-profit**: Easy to use and available for the entire AI auditing community.
+- **Quantitative-qualitative joint method**: Data-driven bias testing combined with the balanced and context-sensitive judgment of human experts;
+- **Unsupervised bias detection**: No user data needed on protected attributes;
+- **Bias scan tool**: Scalable method based on statistical learning to detect algorithmic bias;
+- **Detects complex bias**: Identifies unfairly treated groups characterized by mixture of features, detects intersectional bias;
+- **Model-agnostic**: Works for all AI systems;
+- **Open-source and not-for-profit**: Easy to use and available for the entire AI auditing community.
 
 ##### By whom can the bias detection tool be used? 
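To make the clustering step in the hunk above concrete, here is a simplified sketch of the HBAC idea using plain scikit-learn: repeatedly split the currently most-biased cluster with k-means and report how far each resulting cluster's mean bias metric deviates from the rest (k-modes would play the same role for categorical data). This is an illustration only, not the `unsupervised-bias-detection` implementation and not the exact acceptance criterion from Misztal-Radecka and Indurkhya (2021); the Github repository linked in the hunk holds the actual splitting and acceptance logic.

```python
# Simplified sketch of Hierarchical Bias-Aware Clustering (HBAC) with k-means.
import numpy as np
from sklearn.cluster import KMeans

def hbac_sketch(X, bias_metric, n_splits=5, min_cluster_size=30, seed=0):
    """X: numerical feature matrix; bias_metric: per-row score, e.g. an error indicator."""
    labels = np.zeros(len(X), dtype=int)  # start with all rows in a single cluster
    for _ in range(n_splits):
        # pick the cluster with the highest mean bias metric that is still large enough to split
        cluster_ids = np.unique(labels)
        means = np.array([bias_metric[labels == c].mean() for c in cluster_ids])
        candidates = [cluster_ids[i] for i in np.argsort(means)[::-1]
                      if (labels == cluster_ids[i]).sum() >= 2 * min_cluster_size]
        if not candidates:
            break
        mask = labels == candidates[0]
        halves = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X[mask])
        if min((halves == 0).sum(), (halves == 1).sum()) < min_cluster_size:
            break  # reject splits that produce tiny clusters
        idx = np.where(mask)[0]
        labels[idx[halves == 1]] = labels.max() + 1
    # per cluster: gap between its mean bias metric and the mean over all other rows
    gaps = {c: bias_metric[labels == c].mean()
               - (bias_metric[labels != c].mean() if (labels != c).any() else 0.0)
            for c in np.unique(labels)}
    return labels, gaps

# Synthetic example: the first 100 rows have systematically higher error rates.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 1.0, (100, 3)), rng.normal(0.0, 1.0, (400, 3))])
errors = np.concatenate([rng.binomial(1, 0.6, 100), rng.binomial(1, 0.2, 400)]).astype(float)
labels, gaps = hbac_sketch(X, errors)
worst = max(gaps, key=gaps.get)
print("most deviating cluster:", worst, "gap in mean error:", round(gaps[worst], 3))
```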

@@ -138,11 +133,11 @@ No. The bias detection tool serves as a starting point to assess potentially unf
 
 ##### How is my data processed?
 
-The tool is privacy preserving. It uses computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and it not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+The tool is privacy preserving. It uses computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and it not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The used software is also available as <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
 
 ##### In sum 
 
-Quantitative methods, such as unsupervised bias detection, are helpful to discover potentially unfair treated groups of similar users in AI systems in a scalable manner. Automated identification of cluster disparities in AI models allows human experts to assess observed disparities in a qualitative manner, subject to political, social and environmental traits. This two-pronged approach bridges the gap between the qualitative requirements of law and ethics, and the quantitative nature of AI (see figure). In making normative advice, on identified ethical issues publicly available, over time a [repository](/algoprudence/) of case reviews emerges. We call case-based normative advice for ethical algorithm *algoprudence*. Data scientists and public authorities can learn from our algoprudence and can criticise it, as ultimately normative decisions regarding fair AI should be made within democratic sight.
+Quantitative methods, such as unsupervised bias detection, are helpful to discover potentially unfair treated groups of similar users in AI systems in a scalable manner. Automated identification of cluster disparities in AI models allows human experts to assess observed disparities in a qualitative manner, subject to political, social and environmental traits. This two-pronged approach bridges the gap between the qualitative requirements of law and ethics, and the quantitative nature of AI (see figure). In making normative advice, on identified ethical issues publicly available, over time a [repository](/algoprudence/) of case reviews emerges. We call case-based normative advice for ethical algorithm _algoprudence_. Data scientists and public authorities can learn from our algoprudence and can criticise it, as ultimately normative decisions regarding fair AI should be made within democratic sight.
 
 [Read more](/algoprudence/how-we-work/) about algoprudence and how Algorithm Audit's builds it.

content/english/technical-tools/documentation.md

Lines changed: 12 additions & 56 deletions
@@ -2,13 +2,13 @@
 type: regular
 title: Documentation for AI-systems
 subtitle: >
-  Open-source templates for model documentation. Based on
-  AI Act requirements and soft law frameworks, such as the
-  [Research framework Algorithms](https://www.rijksoverheid.nl/documenten/rapporten/2023/07/11/onderzoekskader-algoritmes-adr-2023#:~:text=De%20Auditdienst%20Rijk%20heeft%20een,risico's%20beheerst%20\(kunnen\)%20worden.)
-  of the Netherlands Executive Audit Agency, the
-  [Algorithm framework](https://minbzk.github.io/Algoritmekader/) of the Dutch Ministry of the Interior
-  and the Dutch Fundamental Rights Impact Assessment
-  ([IAMA](https://www.rijksoverheid.nl/documenten/rapporten/2021/02/25/impact-assessment-mensenrechten-en-algoritmes)).
+  Open-source templates for model documentation. Based on AI Act requirements
+  and soft law frameworks, such as the [Research framework
+  Algorithms](https://www.rijksoverheid.nl/documenten/rapporten/2023/07/11/onderzoekskader-algoritmes-adr-2023#:~:text=De%20Auditdienst%20Rijk%20heeft%20een,risico's%20beheerst%20\(kunnen\)%20worden.)
+  of the Netherlands Executive Audit Agency, the [Algorithm
+  framework](https://minbzk.github.io/Algoritmekader/) of the Dutch Ministry of
+  the Interior and the Dutch Fundamental Rights Impact Assessment
+  ([IAMA](https://www.rijksoverheid.nl/documenten/rapporten/2021/02/25/impact-assessment-mensenrechten-en-algoritmes)).
 
 
   Help developing and share feedback through
@@ -18,61 +18,17 @@ image: /images/svg-illustrations/case_repository.svg
 overview_block:
   - title: Identification of AI-systems and high-risk algorithms
     content: >
-      By answering maximum 8 targeted questions, you can determine whether a data-driven application qualifies as an AI-system or as an impactful algorithm. Complete the dynamic questionnaire to find out.
+      By answering maximum 8 targeted questions, you can determine whether a
+      data-driven application qualifies as an AI-system or as an impactful
+      algorithm. Complete the dynamic questionnaire to find out.
     icon: fas fa-search
     id: quick-scan
     items:
       - title: Identify
        icon: fas fa-star
        link: classification-quick-scan/#form
-  # - title: Documentatie en classificatie tool
-  #   content: >
-  #     Organisatiebreed algoritmemanagementbeleid vraagt om pragmatische kaders.
-  #     Hierbij is een risico-georienteerde aanpak vaak leidend. Dit is in lijn
-  #     met nationale en internationale wetgeving voor algoritmes, zoals de AI
-  #     Verordening. Hieronder vindt u een voorbeeld van verschillende dynamische
-  #     vragenlijsten die organisaties helpen om algoritmes en AI-systemen te
-  #     indexeren, documenteren en classificeren.
-  #   icon: far fa-file
-  #   id: organisation-wide
-  #   items:
-  #     - title: 1. Intakeformulier
-  #       icon: fa fa-plus
-  #       link: intake/#form
-  #     - title: 2. Sturing en verantwoording
-  #       icon: fas fa-user-tag
-  #       link: roles-and-responsibilities/#form
-  #     - title: 3. Privacy
-  #       icon: fa fa-eye
-  #       link: privacy/#form
-  #     - title: 4. Data en model
-  #       icon: fas fa-share-alt
-  #       link: data-and-model/#form
-  # - title: AI Verordening
-  #   content: >
-  #     Deze dynamische vragenlijsten helpen je grip te krijgen op de AI
-  #     Verordening.
-  #   icon: fas fa-gavel
-  #   id: ai-act
-  #   items:
-  #     - title: AI-systeem classificatie
-  #       icon: fas fa-code-branch
-  #       link: AI-system-classification/#form
-  #     - title: Risicoclassificatie
-  #       icon: fas fa-mountain
-  #       link: risk-classification/#form
 ---
 
-{{< container_open icon="fas fa-wrench" title="Documentatie en classificatie tool" id="landing-container" >}}
+{{< iframe src="https://ai-documentation.s3.eu-central-1.amazonaws.com/index.html" id="forms" height="500px" >}}
 
-Maak gebruik van dynamische vragenlijsten om algoritmes en AI-systemen te documenten en classificeren.
-
-* [Identificatie van AI-systemen en hoog-risico algoritmes (max. 8 vragen)](#quick-scan)
-
-{{< container_close >}}
-
-{{< overview_block index="0" >}}
-
-<!-- {{< overview_block index="1" >}}
-
-{{< overview_block index="2" >}} -->
+{{< webapp id="webapp" appId="AIActWizard" stylesheet="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.css" src="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.js" title="" >}}
