
Commit 873aab6

remove web_app shortcode as it's now implemented in the iframe shortcode.
1 parent 756fbe0 commit 873aab6

File tree

8 files changed (+54, −145 lines)

content/english/technical-tools/BDT.md

Lines changed: 11 additions & 16 deletions
@@ -57,11 +57,6 @@ team:
     name: Mackenzie Jorgensen
     bio: |
       PhD candidate in Computer Science, King’s College London
-web_app:
-  title: Bias detection tool
-  icon: fas fa-cloud
-  id: web-app
-  content: ''
 type: bias-detection-tool
 ---
@@ -75,7 +70,7 @@ The tool identifies potentially unfairly treated groups of similar users by an A

 ##### How is my data processed?

-The tool is privacy preserving. It uses the computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The software is also available as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+The tool is privacy preserving. It uses the computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The software is also available as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)

 ##### What does the tool return?

@@ -85,7 +80,7 @@ Try the tool below ⬇️

 {{< container_close >}}

-{{< web_app >}}
+{{< iframe title="Bias detection tool" icon="fas fa-cloud" id="web-app" src="https://local-first-bias-detection.s3.eu-central-1.amazonaws.com/bias-detection.html" height="770px" >}}

 {{< promo_bar content="Do you appreciate the work of Algorithm Audit? ⭐️ us on" id="promo" >}}

@@ -109,20 +104,20 @@ Algorithm Audit's bias detection tool is part of OECD's <a href="https://oecd.ai

 {{< container_open title="Hierarchical Bias-Aware Clustering (HBAC) algorithm" icon="fas fa-code-branch" id="HBAC" >}}

-The bias detection tool currently works for tabular numerical and categorical data. The *Hierarchical Bias-Aware Clustering* (HBAC) algorithm processes input data using the k-means or k-modes clustering algorithm. The HBAC algorithm was introduced by Misztal-Radecka and Indurkhya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) published in *Information Processing and Management* (2021). Our implementation of the HBAC algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">GitHub</a>.
+The bias detection tool currently works for tabular numerical and categorical data. The _Hierarchical Bias-Aware Clustering_ (HBAC) algorithm processes input data using the k-means or k-modes clustering algorithm. The HBAC algorithm was introduced by Misztal-Radecka and Indurkhya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) published in *Information Processing and Management* (2021). Our implementation of the HBAC algorithm can be found on <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection/blob/master/README.md" target="_blank">GitHub</a>.

 {{< container_close >}}

 {{< container_open title="FAQ" icon="fas fa-question-circle" >}}

 ##### Why this bias detection tool?

-* **Quantitative-qualitative joint method**: Data-driven bias testing combined with the balanced and context-sensitive judgment of human experts;
-* **Unsupervised bias detection**: No user data needed on protected attributes;
-* **Bias scan tool**: Scalable method based on statistical learning to detect algorithmic bias;
-* **Detects complex bias**: Identifies unfairly treated groups characterized by a mixture of features, detects intersectional bias;
-* **Model-agnostic**: Works for all AI systems;
-* **Open-source and not-for-profit**: Easy to use and available for the entire AI auditing community.
+- **Quantitative-qualitative joint method**: Data-driven bias testing combined with the balanced and context-sensitive judgment of human experts;
+- **Unsupervised bias detection**: No user data needed on protected attributes;
+- **Bias scan tool**: Scalable method based on statistical learning to detect algorithmic bias;
+- **Detects complex bias**: Identifies unfairly treated groups characterized by a mixture of features, detects intersectional bias;
+- **Model-agnostic**: Works for all AI systems;
+- **Open-source and not-for-profit**: Easy to use and available for the entire AI auditing community.

 ##### By whom can the bias detection tool be used?
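Note on the HBAC hunk above: the hierarchical splitting logic is compact enough to sketch. The following is a minimal illustration, assuming k-means splits and a false-positive-rate bias metric; it is not the `unsupervised-bias-detection` package's actual implementation, and the function name `hbac_sketch` is hypothetical.

```python
# Minimal HBAC-style sketch (illustration only): repeatedly split the most
# biased cluster with k-means, keeping clusters above a minimum size.
import numpy as np
from sklearn.cluster import KMeans

def fpr(pred, true):
    # False positive rate within a group; 0.0 if the group has no true negatives.
    negatives = true == 0
    return float((pred[negatives] == 1).mean()) if negatives.any() else 0.0

def hbac_sketch(X, pred, true, n_splits=5, min_size=50):
    clusters = [np.arange(len(X))]  # start with one cluster: every row
    for _ in range(n_splits):
        # index of the currently most biased cluster
        i = max(range(len(clusters)),
                key=lambda j: fpr(pred[clusters[j]], true[clusters[j]]))
        worst = clusters[i]
        if len(worst) < 2 * min_size:
            break
        halves = KMeans(n_clusters=2, n_init=10).fit_predict(X[worst])
        a, b = worst[halves == 0], worst[halves == 1]
        if min(len(a), len(b)) < min_size:
            break  # reject splits that create undersized clusters
        clusters[i:i + 1] = [a, b]  # replace the split cluster with its halves
    # most deviant cluster first
    return sorted(clusters, key=lambda idx: fpr(pred[idx], true[idx]), reverse=True)
```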

@@ -138,11 +133,11 @@ No. The bias detection tool serves as a starting point to assess potentially unf

 ##### How is my data processed?

-The tool is privacy preserving. It uses the computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The software is also available as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi\&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)
+The tool is privacy preserving. It uses the computing power of your own computer to analyze a dataset. In this architectural setup, data is processed entirely on your device and is not uploaded to any third party, such as cloud providers. This local-only feature allows organisations to securely use the tool with proprietary data. The software is also available as the <a href="https://pypi.org/project/unsupervised-bias-detection/" target="_blank">pip package</a> `unsupervised-bias-detection`. [![!pypi](https://img.shields.io/pypi/v/unsupervised-bias-detection?logo=pypi&color=blue)](https://pypi.org/project/unsupervised-bias-detection/)

 ##### In sum

-Quantitative methods, such as unsupervised bias detection, are helpful to discover potentially unfairly treated groups of similar users in AI systems in a scalable manner. Automated identification of cluster disparities in AI models allows human experts to assess observed disparities in a qualitative manner, subject to political, social and environmental traits. This two-pronged approach bridges the gap between the qualitative requirements of law and ethics, and the quantitative nature of AI (see figure). By making normative advice on identified ethical issues publicly available, a [repository](/algoprudence/) of case reviews emerges over time. We call case-based normative advice for ethical algorithms *algoprudence*. Data scientists and public authorities can learn from our algoprudence and can criticise it, as ultimately normative decisions regarding fair AI should be made within democratic sight.
+Quantitative methods, such as unsupervised bias detection, are helpful to discover potentially unfairly treated groups of similar users in AI systems in a scalable manner. Automated identification of cluster disparities in AI models allows human experts to assess observed disparities in a qualitative manner, subject to political, social and environmental traits. This two-pronged approach bridges the gap between the qualitative requirements of law and ethics, and the quantitative nature of AI (see figure). By making normative advice on identified ethical issues publicly available, a [repository](/algoprudence/) of case reviews emerges over time. We call case-based normative advice for ethical algorithms _algoprudence_. Data scientists and public authorities can learn from our algoprudence and can criticise it, as ultimately normative decisions regarding fair AI should be made within democratic sight.

 [Read more](/algoprudence/how-we-work/) about algoprudence and how Algorithm Audit builds it.
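To make the pip-package reference above concrete: a minimal quick-start sketch. The import path, the `BiasAwareHierarchicalKMeans` class, and its `n_iter`/`min_cluster_size` arguments follow the package README at the time of writing and should be verified against the current documentation.

```python
# Quick-start sketch for the `unsupervised-bias-detection` pip package.
# API names are taken from the package README and may change; verify against
# https://pypi.org/project/unsupervised-bias-detection/ before relying on them.
import numpy as np
from unsupervised_bias_detection.clustering import BiasAwareHierarchicalKMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # tabular features (placeholder data)
metric = rng.integers(0, 2, size=1000)  # per-row bias metric, e.g. 1 = false positive

# Recursively cluster, steering splits toward regions where `metric` deviates.
hbac = BiasAwareHierarchicalKMeans(n_iter=10, min_cluster_size=20)
hbac.fit(X, metric)

print(hbac.labels_)  # cluster assignment per row
print(hbac.scores_)  # per-cluster bias score; higher = larger deviation
```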

content/nederlands/technical-tools/BDT.md

Lines changed: 11 additions & 16 deletions
@@ -11,11 +11,6 @@ subtitle: >
     fairness assessment method (JFAM)</a> genaamd.
 image: /images/svg-illustrations/illustration_cases.svg
 type: bias-detection-tool
-web_app:
-  title: Bias detectie tool
-  icon: fas fa-cloud
-  id: web-app
-  content: ''
 reports_preview:
   title: Voorbeeld output bias detectie tool
   icon: fas fa-file
@@ -74,7 +69,7 @@ Gebruik de tool hieronder ⬇️

 {{< container_close >}}

-{{< web_app >}}
+{{< iframe title="Bias detection tool" icon="fas fa-cloud" id="web-app" src="https://local-first-bias-detection.s3.eu-central-1.amazonaws.com/bias-detection.html" height="770px" >}}

 {{< promo_bar content="Waardeer je het werk van Algorithm Audit? ⭐️ ons op" id="promo" >}}

@@ -98,7 +93,7 @@ Algorithm Audit's bias detectie tool is onderdeel van de OECD's [Catalogus voor

 {{< container_open title="Hierarchisch Bias-Bewust Clustering (HBAC) algoritme" icon="fas fa-code-branch" id="HBAC" >}}

-The bias detection tool currently works only for numerical data. Following a hierarchical scheme, the *Hierarchical Bias-Aware Clustering* (HBAC) algorithm clusters input data using the k-means clustering algorithm. In time, the tool will also be able to process categorical data using k-modes clustering. The HBAC algorithm was introduced by Misztal-Radecka and Indurkhya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) in *Information Processing and Management* (2021). Our open-source implementation of the HBAC algorithm can be found on [GitHub](https://github.com/NGO-Algorithm-Audit/AI_Audit_Challenge).
+The bias detection tool currently works only for numerical data. Following a hierarchical scheme, the _Hierarchical Bias-Aware Clustering_ (HBAC) algorithm clusters input data using the k-means clustering algorithm. In time, the tool will also be able to process categorical data using k-modes clustering. The HBAC algorithm was introduced by Misztal-Radecka and Indurkhya in a [scientific article](https://www.sciencedirect.com/science/article/abs/pii/S0306457321000285) in *Information Processing and Management* (2021). Our open-source implementation of the HBAC algorithm can be found on [GitHub](https://github.com/NGO-Algorithm-Audit/AI_Audit_Challenge).

 [Download](https://github.com/NGO-Algorithm-Audit/Bias_scan/blob/master/classifiers/BERT_disinformation_classifier/test_pred_BERT.csv) an example dataset to use the bias detection tool.

@@ -108,10 +103,10 @@ De bias detectie tool werkt momenteel alleen voor numeriek data. Volgens een hie

 What input data can the bias detection tool process? A csv file of at most 5GB with columns for the features (`features`), the predicted label (`pred_label`) and the true label (`true_label`). Only the order of the columns matters (first `features`, then `pred_label`, then `true_label`). All columns must be numerical and unscaled (not standardized or normalized). In sum:

-* `features`: unscaled numerical values, for example `kenmerk_1`, `kenmerk_2`, ..., `kenmerk_n`;
-* `pred_label`: 0 or 1;
-* `true_label`: 0 or 1;
-* Bias metric: false positive rate (FPR), false negative rate (FNR) or accuracy (Acc).
+- `features`: unscaled numerical values, for example `kenmerk_1`, `kenmerk_2`, ..., `kenmerk_n`;
+- `pred_label`: 0 or 1;
+- `true_label`: 0 or 1;
+- Bias metric: false positive rate (FPR), false negative rate (FNR) or accuracy (Acc).

 <div><p><u>Example</u>:</p><style type="text/css">.tg{border-collapse:collapse;border-spacing:0}.tg td{border-color:grey;border-style:solid;border-width:1px;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal}.tg th{border-color:grey;border-style:solid;border-width:1px;font-size:14px;font-weight:400;overflow:hidden;padding:10px 5px;word-break:normal}.tg .tg-uox0{border-color:grey;font-weight:700;text-align:left;vertical-align:top}.tg .tg-uoz0{border-color:grey;text-align:left;vertical-align:top}</style><table class="tg"><thead><tr><th class="tg-uox0">eig_1</th><th class="tg-uox0">eig_2</th><th class="tg-uox0">...</th><th class="tg-uox0">eig_n</th><th class="tg-uox0">pred_label</th><th class="tg-uox0">true_label</th></tr></thead><tbody><tr><td class="tg-uoz0">10</td><td class="tg-uoz0">1</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.1</td><td class="tg-uoz0">1</td><td class="tg-uoz0">1</td></tr><tr><td class="tg-uoz0">20</td><td class="tg-uoz0">2</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.2</td><td class="tg-uoz0">1</td><td class="tg-uoz0">0</td></tr><tr><td class="tg-uoz0">30</td><td class="tg-uoz0">3</td><td class="tg-uoz0">...</td><td class="tg-uoz0">0.3</td><td class="tg-uoz0">0</td><td class="tg-uoz0">0</td></tr></tbody></table><br><p><u>Overview of supported bias metrics</u>:</p><style type="text/css">.tg{border-collapse:collapse;border-spacing:0}.tg td{border-color:#000;border-style:solid;border-width:1px;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal}.tg th{border-color:#000;border-style:solid;border-width:1px;font-size:14px;font-weight:400;overflow:hidden;padding:10px 5px;word-break:normal}.tg .tg-1wig{font-weight:700;text-align:left;vertical-align:top}.tg .tg-0lax{text-align:left;vertical-align:top}</style><table class="tg"><thead><tr><th class="tg-1wig">Bias metric</th><th class="tg-1wig">Description</th></tr></thead><tbody><tr><td class="tg-0lax">False positive rate (FPR)</td><td class="tg-0lax">The bias detection tool finds the cluster with the highest false positive rate. Example: the algorithm predicts that a financial transaction is high-risk, while after manual inspection the transaction turns out not to be.</td></tr><tr><td class="tg-0lax">False negative rate (FNR)</td><td class="tg-0lax">The bias detection tool finds the cluster with the highest false negative rate. Example: the algorithm predicts that a financial transaction is not high-risk, while after manual inspection the transaction turns out to be.</td></tr><tr><td class="tg-0lax">Accuracy (Acc)</td><td class="tg-0lax">Share of true positives and true negatives among all predictions.</td></tr></tbody></table><div style="margin-top:20px"><a style="color:#005aa7" href="https://en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion" target="_blank">More information</a> about bias metrics.</div></div>
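To accompany the input format above: a small sketch that reads such a csv and computes the three supported bias metrics over the whole dataset (the tool computes them per cluster, but the arithmetic is the same on any subset of rows). The file name `example.csv` is hypothetical.

```python
# Compute FPR, FNR and accuracy from a csv laid out as described above:
# feature columns first, then pred_label, then true_label.
import pandas as pd

df = pd.read_csv("example.csv")                 # hypothetical input file
*feature_cols, pred_col, true_col = df.columns  # labels are the last two columns
pred, true = df[pred_col], df[true_col]

tp = ((pred == 1) & (true == 1)).sum()
tn = ((pred == 0) & (true == 0)).sum()
fp = ((pred == 1) & (true == 0)).sum()
fn = ((pred == 0) & (true == 1)).sum()

print("FPR:", fp / (fp + tn))      # false positive rate
print("FNR:", fn / (fn + tp))      # false negative rate
print("Acc:", (tp + tn) / len(df)) # accuracy
```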

@@ -121,11 +116,11 @@ Welke input data kan de bias detectie tool verwerken? Een csv-bestand van maxima

 ##### Why this bias detection tool?

-* No access needed to special categories of personal data (unsupervised bias detection);
-* Model-agnostic (works for all binary classification algorithms);
-* Informs human experts which behavior of an AI system to examine manually in a targeted way;
-* Connects quantitative, statistical methods with the qualitative doctrine of law and ethics to shape fair AI;
-* Developed open-source and not-for-profit.
+- No access needed to special categories of personal data (unsupervised bias detection);
+- Model-agnostic (works for all binary classification algorithms);
+- Informs human experts which behavior of an AI system to examine manually in a targeted way;
+- Connects quantitative, statistical methods with the qualitative doctrine of law and ethics to shape fair AI;
+- Developed open-source and not-for-profit.

 ##### By whom can this bias detection tool be used?

layouts/shortcodes/iframe.html

Lines changed: 16 additions & 1 deletion
@@ -4,9 +4,24 @@
 <div class="shadow mobile-desktop-container-layout-web-app rounded-lg" id='{{ .Get "id" }}'>
   <div class="row">
     <div class="col-12">
+      {{ if .Get "title" }}
+        {{ if .Get "icon" }}
+          <!-- Title and icon -->
+          <h3>
+            <span class='{{ .Get "icon" }} icon mb-4 pl-5'></span>
+            {{ .Get "title" }}
+          </h3>
+        {{ else }}
+          <!-- Title only -->
+          <h3 class="pl-3">{{ .Get "title" }}</h3>
+        {{ end }}
+      {{ end }}
       <div class="i-frame__container" style="margin: 0px -20px;">
         <iframe class="iframe" src='{{ .Get "src" }}'
-          title="">
+          title='{{ .Get "title" }}'>
         </iframe>
       </div>
       <style>

layouts/shortcodes/web_app.html

Lines changed: 0 additions & 55 deletions
This file was deleted.

tina/collections/shared/page/building_blocks.ts

Lines changed: 0 additions & 37 deletions
@@ -28,7 +28,6 @@ import team from "../templates/team";
 import team1 from "../templates/team1";
 import team2 from "../templates/team2";
 import tooltip from "../templates/tooltip";
-import web_app from "../templates/web_app";
 import image from "./image";
 import subtitle from "./subtitle";
 import title from "./title";
@@ -69,7 +68,6 @@ const building_blocks: TinaField[] = [
       team1,
       team2,
       tooltip,
-      web_app,
     ],
   },
   {
@@ -1106,41 +1104,6 @@ const building_blocks: TinaField[] = [
       },
     ],
   },
-  {
-    type: "object",
-    name: "web_app",
-    label: "Web app",
-    fields: [
-      {
-        name: "title",
-        label: "Title",
-        type: "string",
-        description: "",
-        required: true,
-      },
-      {
-        type: "string",
-        name: "icon",
-        label: "Icon",
-        description:
-          "From https://fontawesome.com/v5/search?m=free (e.g. fa fa-list for https://fontawesome.com/icons/list?f=classic&s=solid)",
-        required: false,
-      },
-      {
-        type: "string",
-        name: "id",
-        label: "ID",
-        description: "ID to refer to this block as algorithmaudit.eu/.../#ID",
-        required: false,
-      },
-      {
-        type: "rich-text",
-        name: "content",
-        label: "Content",
-        isBody: false,
-      },
-    ],
-  },
 ];

 export default building_blocks;
