content/english/_index.md — 61 additions, 77 deletions
@@ -8,11 +8,6 @@ Banner:
   title_mobile_line2: knowledge for
   title_mobile_line3_underline: responsible
   title_mobile_line3_after: algorithms
-  phonetica: /æl.ɡə-ˈpruː.dəns/
-  type: noun
-  description1: Case-based normative advice for ethical algorithms
-  description2: Guidance for decentralised self-assessment of fair AI
-  description3: Jurisprudence for algorithms
   slogan:
     title: A European knowledge platform for
     labels:
@@ -30,49 +25,45 @@ About:
   overview_block:
     activities:
       - title: Knowledge platform
-        subtitle: Statistical and legal expertise
+        subtitle: Expertise in statistics, software development, legal frameworks and ethics
         url: /knowledge-platform/
         icon: fa-light fa-layer-group
         color: "#E3F0FE"
       - title: Algoprudence
-        subtitle: Case-based normative advice
+        subtitle: Case-based normative advice about responsible AI
         url: /algoprudence/
         icon: fa-light fa-scale-balanced
         color: "#F7CDBF"
       - title: Technical tools
-        subtitle: Open source AI auditing tools
+        subtitle: Open source tools for validating algorithmic systems
         url: /technical-tools/
         icon: fa-light fa-toolbox
         color: "#FFFDE4"
       - title: Project work
-        subtitle: "Validation, AI Act etc."
+        subtitle: Validation, AI Act implementation, organisational control measures etc.
         url: /knowledge-platform/project-work/
         icon: fa-light fa-magnifying-glass-plus
         color: "#E3F0FE"
 Activity_Feed:
   featured_title: Featured
   featured_activities:
     - title: >-
-        Public standard 'Meaningful human intervention for risk profiling
-        algorithms'
+        Local-only tools for AI validation
       intro: >
-        Step-by-step guide to prevent prohibited automated decision-making
-        solely based on profilings, as stated in Article 22 GDPR. Based on
-        case-based experiences with risk profiling algorithms and aligned with
-        recent literature.
+        Slides explaining the concept of 'local-only' tools: highlighting
+        similarities and differences with cloud computing, and including
+        examples of how Algorithm Audit's open source software can be used for
+        unsupervised bias detection and synthetic data generation.
⋮
     - title: Guest lecture 'Fairness and Algorithms' ETH Zürich
       link: /events/activities/#events
-      image: /images/events/eth-zurich.jpg
+      image: /images/partner logo-cropped/ETH.jpg
       date: 23-05-2025
       type: event
     - title: Panel discussion CPDP'25
       link: /events/activities/#events
-      image: /images/events/cpdp-logo-2025.svg
+      image: /images/partner logo-cropped/CPDP25.svg
       date: 21-05-2025
       type: panel discussion
     - title: >-
         Masterclass 'From data to decision', Jantina Tammes School of Digital
         Society, Technology and AI, University of Groningen
       link: /events/activities/#events
-      image: /images/events/RUG.png
+      image: /images/partner logo-cropped/RUG.png
       date: 06-05-2025
       type: event
   items_button_text: More events
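The 'local-only' featured activity added above describes tools that run entirely on the user's own machine, so that no data is sent to a server. As a rough illustration of one such on-device computation — synthetic data generation — here is a minimal Gaussian-copula sketch in Python (numpy/scipy assumed; the function and variable names are hypothetical, and this is not Algorithm Audit's actual implementation):

```python
import numpy as np
from scipy.stats import norm


def synthesize(data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows that mimic data's marginals and correlations."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    # 1. Map each column to normal scores via its empirical ranks.
    ranks = data.argsort(axis=0).argsort(axis=0)   # ranks 0..n-1 per column
    u = (ranks + 0.5) / n                          # uniform scores in (0, 1)
    z = norm.ppf(u)
    # 2. Estimate the correlation of the normal scores (the copula).
    corr = np.corrcoef(z, rowvar=False)
    # 3. Sample fresh normal scores with the same correlation structure.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    # 4. Map back to the original marginals via empirical quantiles.
    u_new = norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(d)]
    )
```

Because every step runs in local memory, the design mirrors the 'local-only' idea: the original records never leave the device.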
@@ -107,88 +98,81 @@ Areas_of_AI_expertise:
       width_m: 4
       width_s: 12
       feature_item:
-        - name: Algorithms for decision support
-          icon: fas fa-divide
+        - name: Sociotechnical evaluation of generative AI
+          icon: fas fa-robot
           content: >
-            Auditing data-analysis methods and algorithms used for decision support.
-            Among others by checking organizational checks and balances, and
-            assessing the quantitative dimension
-        - name: AI Act standards
+            Evaluating Large Language Models (LLMs) and other general-purpose
+            AI models for robustness, privacy and AI Act compliance. Based on
+            real-world examples, we are developing a framework to analyze
+            content filters, guardrails and user interaction design choices.
+            Our <a style="text-decoration: underline;">AI Act Implementation
+            Tool</a> helps organizations identify AI systems and assign the
+            right risk category. As a member of the Dutch and European
+            standardization organisations NEN and CEN-CENELEC, Algorithm Audit
+            monitors and contributes to the development of standards for AI
+            systems. See also our public <a
+            href="/knowledge-platform/standards/"
             style="text-decoration: underline;">knowledge base</a> on
             standardization
-        - name: Profiling
+        - name: Bias analysis
           icon: fas fa-chart-pie
           content: >
-            Auditing rule-based and ML-driven profiling, e.g., differentiation
-            policies, selection criteria, Z-testing, model validation and
-            organizational aspects
+            We evaluate algorithmic systems from both a qualitative and a
+            quantitative perspective. Besides expertise in data analysis and
+            AI engineering, we have in-depth knowledge of legal frameworks
+            concerning non-discrimination, automated decision-making and
+            organizational risk management. See our <a
+            href="/knowledge-platform/knowledge-base/"
+            style="text-decoration: underline;">public standards</a> on how to
+            deploy algorithmic systems responsibly.
⋮
-            By working nonprofit and under explicit terms and conditions, we ensure
-            the independence and quality of our audits and normative advice
-        - name: Normative advice
-          icon: fas fa-search
+            We are pioneering the future of responsible AI by bringing
+            together expertise in statistics, software development, law and
+            ethics. Our work is widely read throughout Europe and beyond.
+        - name: Not-for-profit
+          icon: fas fa-seedling
           content: >
-            Mindful of societal impact our commissions provide normative advice on
-            ethical issues that arise in algorithmic use cases
-        - name: Public knowledge
-          icon: fab fa-slideshare
+            We work closely with private and public sector organisations,
+            regulators and policy makers to foster knowledge exchange about
+            responsible AI. Working nonprofit suits our activities and goals
+            best.
+        - name: Public knowledge building
+          icon: fas fa-box-open
           content: >
-            Audits and corresponding advice (*algoprudence*) are made <a
-            underline;">publicly available</a>, increasing collective knowledge how
-            to deploy and use algorithms in an responsible way
-      button_text: Project work
+            We make our reports, software and best practices publicly
+            available, contributing to collective knowledge on the responsible
+            deployment and use of AI. We prioritize public knowledge building
+            over protecting our intellectual property.
⋮
 Implementing and testing technical tools to detect and mitigate bias,
-e.g., [unsupervised bias detection tool](/technical-tools/bdt/) and [synthetic data generation](/technical-tools/sdg/).
+e.g., sociotechnical evaluation of generative AI, [unsupervised bias detection tool](/technical-tools/bdt/) and [synthetic data generation](/technical-tools/sdg/).

 This case study, in combination with our [bias detection tool](/technical-tools/bdt/), has been selected as a finalist for [Stanford's AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).
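The expertise section above says the AI Act Implementation Tool "helps organizations identify AI systems and assign the right risk category". As a deliberately simplified sketch of what such a triage could look like — the boolean screening inputs below are hypothetical, the actual tool's questionnaire is more involved, and legal review is still required — consider:

```python
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"        # e.g. banned practices (AI Act, Art. 5)
    HIGH_RISK = "high-risk"          # e.g. Annex III use cases
    LIMITED_RISK = "limited-risk"    # transparency obligations apply
    MINIMAL_RISK = "minimal-risk"    # no specific obligations


def triage(uses_banned_practice: bool,
           annex_iii_use_case: bool,
           interacts_with_people: bool) -> RiskCategory:
    # Order matters: the most restrictive category that applies wins.
    if uses_banned_practice:
        return RiskCategory.PROHIBITED
    if annex_iii_use_case:
        return RiskCategory.HIGH_RISK
    if interacts_with_people:
        return RiskCategory.LIMITED_RISK
    return RiskCategory.MINIMAL_RISK


print(triage(False, True, True))  # -> RiskCategory.HIGH_RISK
```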
@@ -51,15 +51,15 @@ A visual presentation of this case study can be found in this [slide deck](http
 {{< accordions_area_open id="actions" >}}

-{{< accordion_item_open image="/images/supported_by/sidn.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}
+{{< accordion_item_open image="/images/partner logo-cropped/SIDN.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}

 ##### Description

 [SIDN Fund](https://www.sidnfonds.nl/projecten/open-source-ai-auditing) is supporting Algorithm Audit in the further development of the bias detection tool. On 01-01-2024, a [team](/nl/about/teams/#bdt) started that is further developing and testing the tool.
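The bias detection tool funded here looks for groups of similar cases on which a system performs worse, without pre-defining protected attributes. A minimal sketch of that idea using plain k-means — the actual tool uses a bias-aware hierarchical clustering method, and the names and parameters below are illustrative (numpy and scikit-learn assumed):

```python
import numpy as np
from sklearn.cluster import KMeans


def flag_deviating_clusters(features: np.ndarray,
                            errors: np.ndarray,
                            n_clusters: int = 8,
                            seed: int = 0) -> list[tuple[int, float]]:
    """Cluster the feature space, then report clusters whose mean error
    rate exceeds that of the remaining data: candidate bias hot spots."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(features)
    flagged = []
    for c in range(n_clusters):
        in_c = labels == c
        # Error-rate gap between this cluster and everything else.
        gap = errors[in_c].mean() - errors[~in_c].mean()
        if gap > 0:
            flagged.append((c, float(gap)))
    # Largest gap first; these clusters merit qualitative review.
    return sorted(flagged, key=lambda t: -t[1])
```

Flagged clusters are only a starting point for the qualitative side of an audit: a human reviewer still has to judge whether the deviation is justified.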
content/english/algoprudence/cases/aa202401_preventing-prejudice.md — 1 addition, 1 deletion
@@ -75,7 +75,7 @@ Report *Preventing prejudice* has been <a href="https://www.rijksoverheid.nl/doc
 {{< accordion_item_close >}}

-{{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/supported_by/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}
+{{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/partner logo-cropped/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}