content/english/_index.md: 41 additions & 52 deletions
@@ -8,11 +8,6 @@ Banner:
    title_mobile_line2: knowledge for
    title_mobile_line3_underline: responsible
    title_mobile_line3_after: algorithms
-   phonetica: /æl.ɡə-ˈpruː.dəns/
-   type: noun
-   description1: Case-based normative advice for ethical algorithms
-   description2: Guidance for decentralised self-assessment of fair AI
-   description3: Jurisprudence for algorithms
    slogan:
      title: A European knowledge platform for
      labels:
@@ -30,22 +25,22 @@ About:
  overview_block:
    activities:
      - title: Knowledge platform
-       subtitle: Statistical and legal expertise
+       subtitle: Expertise in statistics, software development, legal framework and ethics
        url: /knowledge-platform/
        icon: fa-light fa-layer-group
        color: "#E3F0FE"
      - title: Algoprudence
-       subtitle: Case-based normative advice
+       subtitle: Case-based normative advice about responsible AI
        url: /algoprudence/
        icon: fa-light fa-scale-balanced
        color: "#F7CDBF"
      - title: Technical tools
-       subtitle: Open source AI auditing tools
+       subtitle: Open source tools for validating algorithmic systems
        url: /technical-tools/
        icon: fa-light fa-toolbox
        color: "#FFFDE4"
      - title: Project work
-       subtitle: "Validation, AI Act etc."
+       subtitle: Validation, AI Act implementation, organisational control measures etc.
        url: /knowledge-platform/project-work/
        icon: fa-light fa-magnifying-glass-plus
        color: "#E3F0FE"
@@ -55,20 +50,20 @@ Activity_Feed:
      - title: >-
          Local-only tools for AI validation
        intro: >
-         Slides explaining the concept of local-only tools. Highlighting similarities and differences with cloud computing and providing examples of architectural set-up for Algorithm Audit's unsupervised bias detection and synthetic data generation tool.
+         Slides explaining the concept of 'local-only' tools. Highlighting similarities and differences with cloud computing, including examples of how Algorithm Audit's open source software can be used for unsupervised bias detection and synthetic data generation.
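The 'local-only' set-up referenced in this intro means that the whole analysis runs on the user's own device and no data leaves that machine, in contrast to a cloud service. The sketch below is purely illustrative and is not Algorithm Audit's actual implementation; the file name, column names and clustering choice are hypothetical assumptions.

```python
# Illustrative local-only sketch: the dataset is read from local disk and the
# whole analysis runs on the user's machine; nothing is uploaded to a server.
# "predictions.csv", the column names and the use of k-means are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("predictions.csv")  # model predictions plus ground truth, stored locally
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

# Unsupervised step: cluster records on numeric features (no protected
# attributes required), then compare error rates per cluster to flag groups
# the model treats markedly worse than average.
features = df.drop(columns=["y_true", "y_pred", "error"]).select_dtypes("number")
df["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

overall = df["error"].mean()
per_cluster = df.groupby("cluster")["error"].mean()
print("Overall error rate:", round(overall, 3))
print("Clusters with above-average error rates:")
print(per_cluster[per_cluster > overall])
```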
      - name: Sociotechnical evaluation of generative AI
        icon: fas fa-robot
        content: >
-         Auditing data-analysis methods and algorithms used for decision support.
-         Among others by checking organizational checks and balances, and
-         assessing the quantitative dimension
-     - name: AI Act implementation
+         Evaluating Large Language Models (LLMs) and other general-purpose AI models for robustness, privacy and AI Act compliance. Based on real-world examples, we are developing a framework to analyze content filters, guardrails and user interaction design choices. <a
+         style="text-decoration: underline;">Learn more</a> about our evaluation framework.
+     - name: AI Act implementation and standards
        icon: fas fa-certificate
        content: >
-         Algorithm Audit is well-informed about techno-legal concepts in the AI Act. As a member of Dutch and Europen standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for AI systems. See also our public <a
+         style="text-decoration: underline;">AI Act Implementation Tool</a> helps organizations identify AI systems and assign the right risk category. As a member of Dutch and European standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for AI systems. See also our public <a
+         href="/knowledge-platform/standards/"
          style="text-decoration: underline;">knowledge base</a> on
          standardization
      - name: Bias analysis
        icon: fas fa-chart-pie
        content: >
-         Auditing data-analysis methods and algorithms used for decision support.
-         Among others by checking organizational checks and balances, and
-         assessing the quantitative dimension
+         We evaluate algorithmic systems from both a qualitative and a quantitative dimension. Besides expertise in data analysis and AI engineering, we have in-depth knowledge of legal frameworks concerning non-discrimination, automated decision-making and organizational risk management. See our <a
+         href="/knowledge-platform/knowledge-base/"
+         style="text-decoration: underline;">public standards</a> on how to deploy algorithmic systems responsibly.

-         By working nonprofit and under explicit terms and conditions, we ensure
-         the independence and quality of our audits and normative advice
-     - name: Normative advice
-       icon: fas fa-search
+         We are pioneering the future of responsible AI by bringing together expertise in statistics, software development, law and ethics. Our work is widely read throughout Europe and beyond.
+     - name: Not-for-profit
+       icon: fas fa-seedling
        content: >
-         Mindful of societal impact our commissions provide normative advice on
-         ethical issues that arise in algorithmic use cases
-     - name: Public knowledge
-       icon: fab fa-slideshare
+         We work closely together with private and public sector organisations, regulators and policy makers to foster knowledge exchange about responsible AI. Working nonprofit suits our activities and goals best.
+     - name: Public knowledge building
+       icon: fas fa-box-open
        content: >
-         Audits and corresponding advice (*algoprudence*) are made <a
-         underline;">publicly available</a>, increasing collective knowledge how
-         to deploy and use algorithms in an responsible way
-       button_text: Project work
+         We make our reports, software and best practices publicly available, contributing to collective knowledge on the responsible deployment and use of AI. We prioritize public knowledge building over protecting our intellectual property.
This case study, in combination with our [bias detection tool](/technical-tools/bdt/), has been selected as a finalist for [Stanford’s AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).
@@ -51,15 +51,15 @@ A visual presentation of this case study can be found in this [slide deck](http
{{< accordions_area_open id="actions" >}}

- {{< accordion_item_open image="/images/supported_by/sidn.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}
+ {{< accordion_item_open image="/images/partner logo-cropped/SIDN.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}

##### Description

[SIDN Fund](https://www.sidnfonds.nl/projecten/open-source-ai-auditing) is supporting Algorithm Audit in the further development of the bias detection tool. On 01-01-2024, a [team](/nl/about/teams/#bdt) started that is further developing and testing the tool.
content/english/algoprudence/cases/aa202401_preventing-prejudice.md: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ Report *Preventing prejudice* has been <a href="https://www.rijksoverheid.nl/doc
{{< accordion_item_close >}}

- {{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/supported_by/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}
+ {{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/partner logo-cropped/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}
content/english/events/press_room.md: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ quick_navigation:
{{< accordions_area_open id="DUO_CBS" >}}

- {{< accordion_item_open title="DUO control process biased towards students with a non-European migration background" id="DUO_CBS" background_color="#ffffff" date="22-05-2024" tag1="DUO" tag2="CBS" tag3="bias analysis" image="/images/supported_by/DUO.png" >}}
+ {{< accordion_item_open title="DUO control process biased towards students with a non-European migration background" id="DUO_CBS" background_color="#ffffff" date="22-05-2024" tag1="DUO" tag2="CBS" tag3="bias analysis" image="/images/partner logo-cropped/DUO.png" >}}

**THE HAGUE - In its inspection of the legitimate use of student finance for students living away from home, DUO selected students with a non-European migration background for control significantly more often. This demonstrates an unconscious bias in DUO's control process. Students with a non-European migration background were assigned a higher risk score by a risk profile and were more often manually selected for a home visit. This is evident from follow-up research that NGO Algorithm Audit carried out on behalf of DUO, which the minister sent to the House of Representatives on May 22. The results of the research strengthen the outcomes of previous research, on the basis of which the minister apologized on behalf of the cabinet on March 1, 2024 for indirect discrimination in the control process.**
@@ -35,7 +35,7 @@ The Bias Prevented report (addendum) can be found [here](https://algorithmaudit.
{{< accordion_item_close >}}

- {{< accordion_item_open title="Irregularities identified in college allowances control process by Dutch public sector organization DUO" id="DUO" background_color="#ffffff" date="01-03-2024" tag1="DUO" tag2="audit report" tag3="" image="/images/supported_by/DUO.png" >}}
+ {{< accordion_item_open title="Irregularities identified in college allowances control process by Dutch public sector organization DUO" id="DUO" background_color="#ffffff" date="01-03-2024" tag1="DUO" tag2="audit report" tag3="" image="/images/partner logo-cropped/DUO.png" >}}

**THE HAGUE – In its control process targeting misuse of the allowances for students living away from home, Dutch public sector organization DUO selected individuals who lived close to their parent(s) significantly more often. The risk-taxation algorithm, which was used as an assisting tool for selecting students, worked as expected. However, the combination of the algorithm and manual selection resulted in a large overrepresentation of certain groups. Selected students were visited at home to inspect whether allowances were unduly granted. This is the main conclusion of research carried out by NGO Algorithm Audit on behalf of DUO. DUO's control process was discredited in 2023 after media reports that students with a migration background were accused of misuse more often than other students.**
content/english/knowledge-platform/collaboration.md: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ Over the course of 2023-24, Algorithm Audit has paired up with Dutch public sect
* collaboratively submitted a request to the Dutch National Office of Statistics in order to conduct a large-scale study on proxy characteristics;
* finding solutions to deal with the normative dimension of auditing and evaluating algorithm-driven decision-making processes, for instance the involvement of various stakeholders to qualitatively interpret quantitative measurements.