
Commit 15ad745

Merge pull request #287 from NGO-Algorithm-Audit/feature/structural_edits
Feature/structural edits
2 parents: 7851888 + 5626dc4, commit 15ad745

114 files changed: +498 / -492 lines changed


assets/scss/_landingpage.scss

Lines changed: 9 additions & 0 deletions
@@ -450,4 +450,13 @@
 
 .sublandingpage-banner-padding-bottom {
     padding-bottom: 7rem;
+}
+
+.collaberating_logo {
+    max-width: 300px;
+    max-height: 140px;
+    @media (max-width: 992px) {
+        max-width: 200px;
+        max-height: 94px;
+    }
 }
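
For orientation, a usage sketch (hypothetical markup, not part of this commit): the new .collaberating_logo class caps a partner logo at 300 x 140 px and scales it down to 200 x 94 px on viewports narrower than 992px, so it would be applied directly to a logo image, for example:

<img class="collaberating_logo"
     src="/images/partner logo-cropped/CoE.png"
     alt="Council of Europe">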

assets/scss/templates/_main.scss

Lines changed: 11 additions & 0 deletions
@@ -108,6 +108,17 @@ input[type="checkbox"] {
     }
 }
 
+.img-center {
+    width: auto;
+    height: auto;
+    position: absolute;
+    top: 0;
+    bottom: 0;
+    left: 0;
+    right: 0;
+    margin: auto;
+}
+
 .img-event{
     max-height: 140px;
     @media (max-width: 992px) {
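
.img-center uses the classic absolute-centering technique: with all four offsets set to 0 and margin: auto, an absolutely positioned image with intrinsic dimensions is centered inside its nearest positioned ancestor. A minimal sketch (hypothetical wrapper, not part of this commit; the container must establish the positioning context and give the box a height):

<div style="position: relative; height: 140px;">
  <img class="img-center" src="/images/partner logo-cropped/NEN.svg" alt="Dutch standardisation institute">
</div>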

content/english/_index.md

Lines changed: 61 additions & 77 deletions
@@ -8,11 +8,6 @@ Banner:
 title_mobile_line2: knowledge for
 title_mobile_line3_underline: responsible
 title_mobile_line3_after: algorithms
-phonetica: /æl.ɡə-ˈpruː.dəns/
-type: noun
-description1: Case-based normative advice for ethical algorithms
-description2: Guidance for decentralised self-assessment of fair AI
-description3: Jurisprudence for algorithms
 slogan:
 title: A European knowledge platform for
 labels:
@@ -30,49 +25,45 @@ About:
 overview_block:
 activities:
 - title: Knowledge platform
-subtitle: Statistical and legal expertise
+subtitle: Expertise in statistics, software development, legal frameworks and ethics
 url: /knowledge-platform/
 icon: fa-light fa-layer-group
 color: "#E3F0FE"
 - title: Algoprudence
-subtitle: Case-based normative advice
+subtitle: Case-based normative advice about responsible AI
 url: /algoprudence/
 icon: fa-light fa-scale-balanced
 color: "#F7CDBF"
 - title: Technical tools
-subtitle: Open source AI auditing tools
+subtitle: Open source tools for validating algorithmic systems
 url: /technical-tools/
 icon: fa-light fa-toolbox
 color: "#FFFDE4"
 - title: Project work
-subtitle: "Validation, AI Act etc."
+subtitle: Validation, AI Act implementation, organisational control measures etc.
 url: /knowledge-platform/project-work/
 icon: fa-light fa-magnifying-glass-plus
 color: "#E3F0FE"
 Activity_Feed:
 featured_title: Featured
 featured_activities:
 - title: >-
-Public standard 'Meaningful human intervention for risk profiling
-algorithms'
+Local-only tools for AI validation
 intro: >
-Step-by-step guide to prevent prohibited automated decision-making
-solely based on profilings, as stated in Article 22 GDPR. Based on
-case-based experiences with risk profiling algorithms and aligned with
-recent literature.
+Slides explaining the concept of 'local-only' tools, highlighting similarities and differences with cloud computing and including examples of how Algorithm Audit's open source software can be used for unsupervised bias detection and synthetic data generation.
 link: >-
-/knowledge-platform/knowledge-base/public_standard_meaningful_human_intervention/
-image: /images/knowledge_base/Public_standard_meaningful_human_intervention.png
-date: 15-05-2025
-type: public standard
+/technical-tools/bdt/#local-only
+image: /images/BDT/20250605_carrousel_local-only.png
+date: 05-06-2025
+type: open source code
 - title: >-
 Public standard 'Meaningful human intervention for risk profiling
 algorithms'
 intro: >
 Step-by-step guide to prevent prohibited automated decision-making
-solely based on profilings, as stated in Article 22 GDPR. Based on
+solely based on profiling, as stated in Article 22 GDPR. Based on
 case-based experiences with risk profiling algorithms and aligned with
-recent literature.
+recent (scientific) publications.
 link: >-
 /knowledge-platform/knowledge-base/public_standard_meaningful_human_intervention/
 image: /images/knowledge_base/Public_standard_meaningful_human_intervention.png
@@ -84,19 +75,19 @@ Activity_Feed:
 activities:
 - title: Guest lecture 'Fairness and Algorithms' ETH Zürich
 link: /events/activities/#events
-image: /images/events/eth-zurich.jpg
+image: /images/partner logo-cropped/ETH.jpg
 date: 23-05-2025
 type: event
 - title: Panel discussion CPDP'25
 link: /events/activities/#events
-image: /images/events/cpdp-logo-2025.svg
+image: /images/partner logo-cropped/CPDP25.svg
 date: 21-05-2025
 type: panel discussion
 - title: >-
 Masterclass 'From data to decision', Jantina Tammes School of Digital
 Society, Technology and AI University of Groningen
 link: /events/activities/#events
-image: /images/events/RUG.png
+image: /images/partner logo-cropped/RUG.png
 date: 06-05-2025
 type: event
 items_button_text: More events
@@ -107,88 +98,81 @@ Areas_of_AI_expertise:
 width_m: 4
 width_s: 12
 feature_item:
-- name: Algorithms for decision support
-icon: fas fa-divide
+- name: Sociotechnical evaluation of generative AI
+icon: fas fa-robot
 content: >
-Auditing data-analysis methods and algorithms used for decision support.
-Among others by checking organizational checks and balances, and
-assessing the quantitative dimension
-- name: AI Act standards
+Evaluating Large Language Models (LLMs) and other general-purpose AI models for robustness, privacy and AI Act compliance. Based on real-world examples, we are developing a framework to analyze content filters, guardrails and user interaction design choices. <a
+href="/knowledge-platform/project-work/#LLM-validation"
+style="text-decoration: underline;">Learn more</a> about our evaluation framework.
+- name: AI Act implementation and standards
 icon: fas fa-certificate
 content: >
-As Algorithm Audit is part of Dutch and Europen standardization
-organisations NEN and CEN-CENELEC, AI systems are audited according to
-the latest standards. See also our public <a
-href="https://algorithmaudit.eu/knowledge-platform/standards/"
+Our open-source <a
+href="/technical-tools/implementation-tool/"
+style="text-decoration: underline;">AI Act Implementation Tool</a> helps organizations identify AI systems and assign the right risk category. As a member of Dutch and European standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for AI systems. See also our public <a
+href="/knowledge-platform/standards/"
 style="text-decoration: underline;">knowledge base</a> on
 standardization
-- name: Profiling
+- name: Bias analysis
 icon: fas fa-chart-pie
 content: >
-Auditing rule-based and ML-driven profiling, e.g., differentiation
-policies, selection criteria, Z-testing, model validation and
-organizational aspects
+We evaluate algorithmic systems along both qualitative and quantitative dimensions. Besides expertise in data analysis and AI engineering, we have in-depth knowledge of legal frameworks concerning non-discrimination, automated decision-making and organizational risk management. See our <a
+href="/knowledge-platform/knowledge-base/"
+style="text-decoration: underline;">public standards</a> on how to deploy algorithmic systems responsibly.
 button_text: Discuss collaboration
-button_link: /knowledge-platform/project-work/
+button_link: /knowledge-platform/project-work/#form
 Distinctive_in:
 title: Distinctive in
 enable: true
 width_m: 4
 width_s: 2
 feature_item:
-- name: Independence
-icon: fas fa-star-of-life
+- name: Multi-disciplinary expertise
+icon: fas fa-brain
 content: >
-By working nonprofit and under explicit terms and conditions, we ensure
-the independence and quality of our audits and normative advice
-- name: Normative advice
-icon: fas fa-search
+We are pioneering the future of responsible AI by bringing together expertise in statistics, software development, law and ethics. Our work is widely read throughout Europe and beyond.
+- name: Not-for-profit
+icon: fas fa-seedling
 content: >
-Mindful of societal impact our commissions provide normative advice on
-ethical issues that arise in algorithmic use cases
-- name: Public knowledge
-icon: fab fa-slideshare
+We work closely together with private and public sector organisations, regulators and policy makers to foster knowledge exchange about responsible AI. Working nonprofit suits our activities and goals best.
+- name: Public knowledge building
+icon: fas fa-box-open
 content: >
-Audits and corresponding advice (*algoprudence*) are made <a
-href="https://algorithmaudit.eu/algoprudence/" style="text-decoration:
-underline;">publicly available</a>, increasing collective knowledge how
-to deploy and use algorithms in an responsible way
-button_text: Project work
+We make our reports, software and best practices publicly available, contributing to collective knowledge on the responsible deployment and use of AI. We prioritize public knowledge building over protecting our intellectual property.
+button_text: Our projects
 button_link: /knowledge-platform/project-work/
 Supported_by:
-title: Collaborating with
+title: Working together with
 funders:
-- image: /images/supported_by/sidn.png
-link: "https://www.sidnfonds.nl/projecten/open-source-ai-auditing"
-alt_text: Foundation for Internet and Democracy Netherlands
-- image: /images/supported_by/EUAISFund.png
+- image: /images/partner logo-cropped/CoE.png
+link: "https://www.coe.int/en/web/portal/home"
+alt_text: Council of Europe
+- image: /images/partner logo-cropped/EAISF.png
 link: "https://europeanaifund.org/announcing-our-2022-open-call-grantees/"
 alt_text: European AI & Society Fund
-- image: /images/supported_by/BZK.jpg
+- image: /images/partner logo-cropped/CEN.jpg
+link: "https://www.cencenelec.eu"
+alt_text: "European standardisation committee"
+- image: /images/partner logo-cropped/MinBZK.png
 link: >-
 https://www.rijksoverheid.nl/ministeries/ministerie-van-binnenlandse-zaken-en-koninkrijksrelaties
 alt_text: Dutch Ministry of the Interior
-- image: /images/supported_by/HAI.png
-link: "https://hai.stanford.edu/ai-audit-challenge-2023-finalists"
-alt_text: Stanford University Human-Centered Artificial Intelligence Lab
-- image: /images/supported_by/DUO.png
+- image: /images/partner logo-cropped/MinJenV.png
+link: "https://www.rijksoverheid.nl/ministeries/ministerie-van-justitie-en-veiligheid"
+alt_text: Dutch Ministry of Justice and Security
+- image: /images/partner logo-cropped/DUO.png
 link: "https://duo.nl"
 alt_text: Dutch Executive Agency for Education
-- image: /images/partners/NEN.svg
-link: "https://www.nen.nl"
-alt_text: Dutch standardisation institute
-- image: /images/partners/CEN.jpg
-link: "https://www.cencenelec.eu"
-alt_text: "Europees standardisation committee "
-- image: /images/events/Amsterdam.png
+- image: /images/partner logo-cropped/GemeenteAmsterdam.png
 link: "https://www.amsterdam.nl"
 alt_text: Municipality of Amsterdam
-- image: /images/supported_by/CoE.png
-link: "https://www.coe.int/en/web/portal/home"
-alt_text: Council of Europe
-button_text: And more
-button_link: /funded-by
+- image: /images/partner logo-cropped/SIDN.png
+link: "https://www.sidnfonds.nl/excerpt"
+alt_text: Foundation for Internet Domain Registration in the Netherlands
+- image: /images/partner logo-cropped/NEN.svg
+link: "https://www.nen.nl"
+alt_text: Dutch standardisation institute
 Title_video:
-title: The Movie
+title: Video
 video_mp4: /videos/AA_video_(1080p).mp4
 ---
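
Each Supported_by.funders entry follows the same image / link / alt_text shape. A minimal Hugo template sketch (hypothetical; the partial actually used by this site is not part of the diff) of how such a list is typically rendered, here combined with the .collaberating_logo class added in assets/scss/_landingpage.scss:

{{ range .Params.supported_by.funders }}
  <a href="{{ .link }}">
    <img class="collaberating_logo" src="{{ .image }}" alt="{{ .alt_text }}">
  </a>
{{ end }}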

content/english/about/teams.md

Lines changed: 2 additions & 3 deletions
@@ -15,8 +15,7 @@ about_AA:
 image: /images/about/knowledge-platform.svg
 content: >
 Bringing together experts and knowledge to foster the collective
-learning process on the responsible use of algorithms, see for instance
-our [AI policy observatory](/knowledge-platform/policy-observatory/),
+learning process on the responsible use of algorithms, see our
 [white papers](/knowledge-platform/knowledge-base/) and
 [public standards](/knowledge-platform/knowledge-base/).
 - subtitle: Normative advice commissions
@@ -29,7 +28,7 @@ about_AA:
 image: /images/about/technical-tools.svg
 content: >
 Implementing and testing technical tools to detect and mitigate bias,
-e.g., [unsupervised bias detection tool](/technical-tools/bdt/) and [synthetic data generation](/technical-tools/sdg/).
+e.g., sociotechnical evaluation of generative AI, [unsupervised bias detection tool](/technical-tools/bdt/) and [synthetic data generation](/technical-tools/sdg/).
 - subtitle: Project work
 image: /images/about/project.svg
 content: >

content/english/algoprudence/cases/aa202301_bert-based-disinformation-classifier.md

Lines changed: 3 additions & 3 deletions
@@ -26,7 +26,7 @@ Applying our self-build unsupervised [bias detection tool](/technical-tools/bdt
 
 This case study, in combination with our [bias detection tool](/technical-tools/bdt/), has been selected as a finalist for [Stanford’s AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).
 
-{{< image id="stanford" width_desktop="6" width_mobile="12" image1="/images/supported_by/HAI.png" link1="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" alt1="Stanford University" caption1="Stanford University" >}}
+{{< image id="stanford" width_desktop="6" width_mobile="12" image1="/images/partner logo-cropped/StanfordHAI.png" link1="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" alt1="Stanford University" caption1="Stanford University" >}}
 
 #### Presentation
 
@@ -51,15 +51,15 @@ A visual presentation of this case study can be found in this [slide deck](http
 
 {{< accordions_area_open id="actions" >}}
 
-{{< accordion_item_open image="/images/supported_by/sidn.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}
+{{< accordion_item_open image="/images/partner logo-cropped/SIDN.png" title="Funding for further development" id="sidn" date="01-12-2023" tag1="funding" tag2="open source" tag3="AI auditing tool" >}}
 
 ##### Description
 
 [SIDN Fund](https://www.sidnfonds.nl/projecten/open-source-ai-auditing) is supporting Algorithm Audit for further development of the bias detection tool. On 01-01-2024, a [team](/nl/about/teams/#bdt) has started that is further developing a testing the tool.
 
 {{< accordion_item_close >}}
 
-{{< accordion_item_open title="Finalist selection Stanford's AI Audit Challenge 2023" image="/images/supported_by/HAI.png" id="ai_audit_challenge" date="28-04-2023" tag1="finalist" >}}
+{{< accordion_item_open title="Finalist selection Stanford's AI Audit Challenge 2023" image="/images/partner logo-cropped/StanfordHAI.png" id="ai_audit_challenge" date="28-04-2023" tag1="finalist" >}}
 
 ##### Description

content/english/algoprudence/cases/aa202401_preventing-prejudice.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ Report *Preventing prejudice* has been <a href="https://www.rijksoverheid.nl/doc
 
 {{< accordion_item_close >}}
 
-{{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/supported_by/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}
+{{< accordion_item_open title="DUO apologizes for indirect discrimination in college allowances control process" image="/images/partner logo-cropped/DUO.png" id="DUO-apologies" date="01-03-2024" tag1="press release" >}}
 
 ##### Description