You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: content/english/_index.md
+72-40Lines changed: 72 additions & 40 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,4 +1,8 @@
1
1
---
2
+
search:
3
+
title: Building public knowledge for responsible algorithms
4
+
subtitle: A European knowledge platform for AI bias testing and AI standards.
5
+
image: /images/logo/logo.svg
2
6
Banner:
3
7
title_line1: Building public knowledge
4
8
title_line2_before: for
@@ -25,34 +29,35 @@ About:
25
29
overview_block:
26
30
activities:
27
31
- title: Knowledge platform
28
-
subtitle: Expertise in statistics, software development, law and ethics
32
+
subtitle: 'Expertise in statistics, software development, law and ethics'
29
33
url: /knowledge-platform/
30
34
icon: fa-light fa-layer-group
31
-
color: "#E3F0FE"
35
+
color: '#E3F0FE'
32
36
- title: Algoprudence
33
37
subtitle: Case-based normative advice about responsible AI
34
38
url: /algoprudence/
35
39
icon: fa-light fa-scale-balanced
36
-
color: "#F7CDBF"
40
+
color: '#F7CDBF'
37
41
- title: Technical tools
38
42
subtitle: Open source tools for validating algorithmic systems
39
43
url: /technical-tools/
40
44
icon: fa-light fa-toolbox
41
-
color: "#FFFDE4"
45
+
color: '#FFFDE4'
42
46
- title: Project work
43
-
subtitle: Validation, AI Act implementation, organisational control measures etc.
47
+
subtitle: 'Validation, AI Act implementation, organisational control measures etc.'
44
48
url: /knowledge-platform/project-work/
45
49
icon: fa-light fa-magnifying-glass-plus
46
-
color: "#E3F0FE"
50
+
color: '#E3F0FE'
47
51
Activity_Feed:
48
52
featured_title: Featured
49
53
featured_activities:
50
-
- title: >-
51
-
Local-only tools for AI validation
54
+
- title: Local-only tools for AI validation
52
55
intro: >
53
-
Slides explaining the concept of 'local-only' tools. Highlighting similarities and differences with cloud computing, including examples how Algorithm Audit's open source software can be used for unsupervised bias detection and synthetic data generation tool.
54
-
link: >-
55
-
/technical-tools/bdt/#local-only
56
+
Slides explaining the concept of 'local-only' tools. Highlighting
57
+
similarities and differences with cloud computing, including examples
58
+
how Algorithm Audit's open source software can be used for unsupervised
59
+
bias detection and synthetic data generation tool.
- title: Presentation 'A Public Standard for Auditing Risk Profiling Algorithms', Audit Analytics Summit 2025, Nyenrode Business University and Utrecht University
- name: Sociotechnical evaluation of generative AI
100
113
icon: fas fa-robot
101
114
content: >
102
-
Evaluating Large Language Models (LLMs) and other general-purpose AI models for robustness, privacy and AI Act compliance. Based on real-world examples, are developing a framework to analyze content filters, guardrails and user interaction design choices. <a
115
+
Evaluating Large Language Models (LLMs) and other general-purpose AI
116
+
models for robustness, privacy and AI Act compliance. Based on
117
+
real-world examples, are developing a framework to analyze content
118
+
filters, guardrails and user interaction design choices. <a
style="text-decoration: underline;">Learn more</a> about our evaluation framework.
120
+
style="text-decoration: underline;">Learn more</a> about our evaluation
121
+
framework.
105
122
- name: AI Act implementation and standards
106
123
icon: fas fa-certificate
107
124
content: >
108
-
Our open-source <a
109
-
href="/technical-tools/implementation-tool/"
110
-
style="text-decoration: underline;">AI Act Implementation Tool</a> helps organizations identifying AI systems and assigning the right risk category. As a member of Dutch and European standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for AI systems. See also our public <a
111
-
href="/knowledge-platform/standards/"
112
-
style="text-decoration: underline;">knowledge base</a> on
organizations identifying AI systems and assigning the right risk
128
+
category. As a member of Dutch and European standardization
129
+
organisations NEN and CEN-CENELEC, Algorithm Audit monitors and
130
+
contributes to the development of standards for AI systems. See also our
131
+
public <a href="/knowledge-platform/standards/" style="text-decoration:
132
+
underline;">knowledge base</a> on standardization
114
133
- name: Bias analysis
115
134
icon: fas fa-chart-pie
116
135
content: >
117
-
We evaluate algorithmic systems both from a qualitative and quantitative dimension. Besides expertise about data analysis and AI engineering, we possess have in-depth knowledge of legal frameworks concerning non-discrimination, automated decision-making and organizational risk management. See our <a
118
-
href="/knowledge-platform/knowledge-base/"
119
-
style="text-decoration: underline;">public standards</a> how to deploy algorithmic systems responsibly.
136
+
We evaluate algorithmic systems both from a qualitative and quantitative
137
+
dimension. Besides expertise about data analysis and AI engineering, we
138
+
possess have in-depth knowledge of legal frameworks concerning
139
+
non-discrimination, automated decision-making and organizational risk
140
+
management. See our <a href="/knowledge-platform/knowledge-base/"
141
+
style="text-decoration: underline;">public standards</a> how to deploy
We are pioneering the future of responsible AI by bringing together expertise in statistics, software development, law and ethics. Our work is widely read throughout Europe and beyond.
154
+
We are pioneering the future of responsible AI by bringing together
155
+
expertise in statistics, software development, law and ethics. Our work
156
+
is widely read throughout Europe and beyond.
132
157
- name: Not-for-profit
133
158
icon: fas fa-seedling
134
159
content: >
135
-
We work closely together with private and public sector organisations, regulators and policy makers to foster knowledge exchange about responsible AI. Working nonprofit suits our activities and goals best.
160
+
We work closely together with private and public sector organisations,
161
+
regulators and policy makers to foster knowledge exchange about
162
+
responsible AI. Working nonprofit suits our activities and goals best.
136
163
- name: Public knowledge building
137
164
icon: fas fa-box-open
138
165
content: >
139
-
We make our reports, software and best-practices publicy available, contributing to collective knowledge on the responsible deployment and use of AI. We prioritize public knowledge building over protecting our intellectual property.
166
+
We make our reports, software and best-practices publicy available,
167
+
contributing to collective knowledge on the responsible deployment and
168
+
use of AI. We prioritize public knowledge building over protecting our
0 commit comments