Commit 0cdc2cf

knowledge base update
1 parent 4bfe37c commit 0cdc2cf

32 files changed: +211 −416 lines

content/.DS_Store

Binary file not shown (0 bytes).

content/english/about/teams.md

Lines changed: 4 additions & 4 deletions

@@ -33,7 +33,7 @@ about_AA:
 [op-eds](/knowledge-platform/knowledge-base/).
 team:
 title: Synthetic data generation cohort
-content: Part-time team working on synthetic data generation
+content: Part-time team working on synthetic data generation for AI bias testing
 icon: fas fa-table
 button_text: Synthetic data generation
 id: SDG-team
@@ -61,9 +61,9 @@ team:
 Research Scientist, Spotify
 team1:
 title: Bias detection tool cohort
-content: Part-time team working on the bias detection tool
+content: Part-time team working on the unsupervised bias detection tool
 icon: fas fa-search
-button_text: Our bias detection tool
+button_text: Bias detection tool
 id: bdt
 button_link: /technical-tools/bdt/
 team_members:
@@ -97,7 +97,7 @@ team2:
 name: Jurriaan Parie
 bio: |
 Director-board member
-- image: /images/people/VDjwalapersad.png
+- image: /images/people/VDjwalapersad.jpeg
 name: Vardâyani Djwalapersad
 bio: |
 Project manager Algoprudence
Lines changed: 7 additions & 6 deletions

@@ -1,12 +1,13 @@
 ---
-title: >-
-  How 'algoprudence' can contribute to responsible use of ML-algorithms
-author: 'Anne Meuwese, Jurriaan Parie & Ariën Voogt'
+title: How 'algoprudence' can contribute to responsible use of ML-algorithms
+subtitle: ''
 image: /images/knowledge_base/NJB-cover.jpg
+author: 'Anne Meuwese, Jurriaan Parie & Ariën Voogt'
 type: featured
 summary: >-
-  By means of two case positions regarding the use of machine learning-driven risk profiling by the municipalities of Rotterdam and Amsterdam, the concept of 'algoprudence' is introduced and explained.
-subtitle: ''
+  By means of two case positions regarding the use of machine learning-driven
+  risk profiling by the municipalities of Rotterdam and Amsterdam, the concept
+  of 'algoprudence' is introduced and explained.
 ---

 Article in journal for Dutch legal scholars #10 https://www.njb.nl/magazines/njb-10-2024/
@@ -15,4 +16,4 @@ Article in journal for Dutch legal scholars #10 https://www.njb.nl/magazines/njb

 By means of two case positions regarding the use of machine learning-driven risk profiling by the municipalities of Rotterdam and Amsterdam, the concept of 'algoprudence' is introduced and explained. This new term refers to concrete, case-based, and decentralized judgments regarding the normative decisions made to design and deploy algorithms. The article illustrates that the general principles of sound administration (as specified in Dutch Administrative Law) are insufficient to provide concrete standards for these algorithms; the authors argue that algoprudence can serve as a useful addition to and concretization of existing legal frameworks.

-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1V9k4ghq4RJHu_9UMiA8QctkcDX3teQYR/preview" >}}
+{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1fIZ9oRTULNOlgzk6Dwsr3hYujSiRu85n/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}

content/nederlands/knowledge-platform/knowledge-base/White_paper_algorithmic_regulatory_body.md renamed to content/english/knowledge-platform/knowledge-base/Op-ed_algorithmic_regulatory_body.md

Lines changed: 6 additions & 7 deletions
@@ -1,11 +1,10 @@
 ---
-title: >-
-  Algorithm Audit (white paper) – A new algorithmic regulatory body in The
-  Netherlands
+title: Algorithm Audit (op-ed) – A new algorithmic regulatory body in The Netherlands
+subtitle: ''
+image: /images/knowledge_base/Op-ed-1.png
 author: Algorithm Audit
-image: /images/knowledge_base/white-paper-2.png
 type: regular
-summary: "A new algorithmic regulatory body is institutionalized in The Netherlands. We say: Make it a bulldog \U0001F43A. Not a lap dog \U0001F436."
+summary: "A new algorithmic regulatory body is institutionalized in The Netherlands. We say: Make it a bulldog \U0001F43A, not a lap dog \U0001F436"
 ---

 New white paper on algorithmic supervision. We reflect on recently announced plans by the Dutch Minister of Digitalization to institutionalize a national algorithmic watchdog. We say: Make it a bulldog <span style="font-size: 25px;">🐺</span>. Not a lap dog <span style="font-size: 25px;">🐶</span>.
@@ -14,6 +13,6 @@ New white paper on algorithmic supervision. We reflect on recently announced pla

 The new regulatory body seems to rely on the principle of ‘market supervises market’. As a result, commercial firms get a key position in defining what is ‘fair’ and what is not. But this is a public task that ought to be performed under democratic oversight. Read the full white paper (2 pages) below.

-<span style="font-size: 14px; font-style:italic">This white paper is published as an [op-ed](https://fd.nl/opinie/1462782/maak-nieuwe-algoritmewaakhond-een-bulldog-in-plaats-van-een-schoothond) in the Dutch newspaper *Het Financieele Dagblad* on December 30th 2022.</span>
+<span style="font-size: 14px; font-style:italic">This white paper is published as an <a href="https://fd.nl/opinie/1462782/maak-nieuwe-algoritmewaakhond-een-bulldog-in-plaats-van-een-schoothond" target="_blank">op-ed</a> in the Dutch newspaper *Het Financieele Dagblad* on December 30th 2022.</span>

-{{< pdf_frame title="White-paper" name="white-paper" articleUrl="https://drive.google.com/file/d/18GyjLGhKqzBmYIqWP8Y20BcxVbxyfuLR/preview" >}}
+{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1gZo_1IUFuCmTFZfHW4j7jSk9QzCyGbxN/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+---
+title: Algorithm Audit (op-ed) – Human vs machine bias
+subtitle: ''
+image: /images/knowledge_base/Op-ed-3.png
+author: 'Jurriaan Parie, Vardâyani Djwalapersad'
+type: regular
+summary: >-
+  Based on bias testing results from the Municipality of Amsterdam, it is argued
+  that algorithms can play an important role in mitigating biases
+  originating from humans
+---
+
+Op-ed, as <a href="https://www.parool.nl/columns-opinie/opinie-onderzoek-vooringenomenheid-van-zowel-algoritme-als-ambtenaar~bd69aa5e/" target="_blank">published</a> in Parool on 14-02-2024, arguing that:
+
+* Not only can algorithm-driven processes have discriminatory effects; human-driven processes can be severely biased too;
+* Therefore, as a result of the bias test performed by the City of Amsterdam, not only should the explainable boosting ML-model be abandoned, but the allegedly detected human biases within the City of Amsterdam's processes should also be subject to further investigation;
+* More open and transparent research is needed to strengthen human-machine interplay and prevent systemic biases in the digital future.
+
+{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1rtXD82BsXu2qRPlIemOgKYijZMYSrRNT/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}

content/english/knowledge-platform/knowledge-base/White_paper_reversal_burden_of_proof.md renamed to content/english/knowledge-platform/knowledge-base/Op-ed_reversal_burden_of_proof.md

Lines changed: 11 additions & 10 deletions
@@ -1,11 +1,14 @@
 ---
-title: Algorithm Audit (white paper) – Reversing the burden of proof
+title: Algorithm Audit (op-ed) – Reversing the burden of proof
+subtitle: ''
+image: /images/knowledge_base/Op-ed-2.png
 author: Algorithm Audit
-image: /images/knowledge_base/white-paper-1.png
-type: "regular"
-summary: Algorithm Audit's first white paper on the reversal on the burden of proof in the context of (semi-)automated decision-making
-
+type: regular
+summary: >-
+  Algorithm Audit's first white paper on the reversal of the burden of proof in
+  the context of (semi-)automated decision-making
 ---
+
 Algorithm Audit has published its first white paper 🥇 Reversing the burden of proof is a promising approach to protect against discrimination in the context of (semi-)automated decision-making (ADM). Yet to make it feasible we need to overcome some hurdles.

 This is what we advise policy makers across the EU to facilitate reversing the burden of proof and improve legal protection for (semi-)ADM:
@@ -14,14 +17,12 @@ This is what we advise policy makers across the EU to facilitate reversing the b

 <span style="font-size: 25px;">🤳</span>Establish national hotlines where potentially discriminatory (semi-)ADM can be reported and where litigation support can be provided;

-<span style="font-size: 25px;">📚</span>Train staff of courts, parliament, ministries, and other authorities in interpreting (semi-)ADM qualitatively.
+<span style="font-size: 25px;">📚</span>Train staff of courts, parliament, ministries, and other authorities in interpreting (semi-)ADM qualitatively.

-<span style="font-size: 14px; font-style:italic">This white paper is published as an [op-ed](https://fd.nl/opinie/1436425/we-moeten-ons-bezinnen-op-het-bestaansrecht-van-algoritmen) in the Dutch newspaper _Het Financieele Dagblad_ on April 13th 2022.</span>
+<span style="font-size: 14px; font-style:italic">This white paper is published as an [op-ed](https://fd.nl/opinie/1436425/we-moeten-ons-bezinnen-op-het-bestaansrecht-van-algoritmen) in the Dutch newspaper *Het Financieele Dagblad* on April 13th 2022.</span>

 ###### **Summary**

 This document formulates actionable suggestions to improve legal protection for citizens and consumers in the European Union in the context of (semi-)automated decision-making (ADM). The suggestions in this document are linked to an existing concept in EU non-discrimination law: the reversal of the burden of proof.

-{{< container_open >}}
-{{< pdf_frame title="White-paper" name="white-paper" articleUrl="https://drive.google.com/file/d/1RHdqoGVgwv-FTv8qC9fAlsVl8eUTcR7s/preview" >}}
-{{< container_close >}}
+{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1gZo_1IUFuCmTFZfHW4j7jSk9QzCyGbxN/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
Lines changed: 5 additions & 9 deletions

@@ -1,15 +1,15 @@
 ---
 title: >-
-  Algorithm Audit (white paper) – Feedback on DSA Delegated Regulation (conducting independent
-  audits)
-author: Algorithm Audit
+  Algorithm Audit (white paper) – Feedback on DSA Delegated Regulation
+  (conducting independent audits)
+subtitle: ''
 image: /images/knowledge_base/white-paper-3.png
+author: Algorithm Audit
 type: regular
 summary: >-
   Plea to include the normative dimension of AI auditing in delegated regulation
   of the Digital Services Act (DSA). Current limitations are illustrated by
   focussing on a recommender systems example
-subtitle: ''
 ---

 Feedback to the European Commission on DSA Delegated Regulation – conducting independent audits.
@@ -18,8 +18,4 @@ Feedback to the European Commission on DSA Delegated Regulation – conducting i

 In addition to Article 37 of the Digital Services Act (DSA), Delegated Regulation (DR) sets out procedures, methodologies and templates for third-party auditing of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). The DR builds upon established sector-specific risk management frameworks to provide procedural guidance for AI audits. However, the regulation lacks provisions to disclose normative methodological choices that underlie AI systems (e.g., recommender systems), which is crucial for evaluating associated risks in a meaningful way (as mandated by DSA Article 34). To illustrate this limitation, we elaborate on methodological crossroads that determine the performance of recommender systems and their downstream risks. We make concrete suggestions on how the definition of ‘inherent risk’ (Article 2), audit methodologies of risk assessments (Section IV) and the audit report template (Annex I) set out by the DR should be amended to incorporate the normative dimension of AI auditing in a meaningful way. Only if both the technical and normative dimensions of AI systems are thoroughly examined will risk assessments under the DSA empower the European Union and its citizens to determine which public values should be safeguarded in the digital world.

-{{< container_open >}}
-
-{{< pdf_frame >}}
-
-{{< container_close >}}
+{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1v6CApiRsT4vE1e-EXJnHDufk0FyXLHwL/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
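A recurring change across these files is the migration of `pdf_frame` calls from the old signature (`title`, `name`, `articleUrl`) to a new one taking `articleUrl1` plus responsive column widths. As a minimal sketch of what the updated shortcode template could look like in Hugo: the parameter names come from the calls in this commit, while the file path, Bootstrap-style column classes, and iframe markup are assumptions, not the site's actual template.

```go-html-template
<!-- layouts/shortcodes/pdf_frame.html — hypothetical sketch, not the actual template.
     Parameter names (articleUrl1, width_desktop_pdf, width_mobile_pdf) are taken
     from the shortcode calls in this commit; everything else is assumed. -->
<div class="row justify-content-center">
  <!-- Responsive column widths on a 12-column grid, with fallbacks -->
  <div class="col-md-{{ .Get "width_desktop_pdf" | default "6" }} col-{{ .Get "width_mobile_pdf" | default "12" }}">
    <!-- Google Drive /preview URLs embed directly in an iframe -->
    <iframe src="{{ .Get "articleUrl1" }}" width="100%" height="480" allow="autoplay"></iframe>
  </div>
</div>
```

With `width_desktop_pdf="6"` and `width_mobile_pdf="12"`, such a template would render the PDF at half width on desktop and full width on mobile, which matches how the new calls are parameterized throughout the commit.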

content/english/knowledge-platform/knowledge-base/White_paper_algorithmic_regulatory_body.md

Lines changed: 0 additions & 18 deletions
This file was deleted.
