
Commit 1734b80

committed
back to old commit and removed cohort vacancy
1 parent 32b64f4 commit 1734b80

File tree

3 files changed (+0, -223 lines)


content/english/about/vacancies.md

Lines changed: 0 additions & 106 deletions
@@ -63,109 +63,3 @@ form1:
type: checkbox
---

{{< accordions_area_open >}}

{{< accordion_item_open image="/images/logo/logo.svg" title="Responsible AI expert – AI Act standardization cohort" id="standardization" tag1="rolling applications" tag2="2-4 hours per week" tag3="voluntary" background_color="#ffffff" >}}

{{< button button_text="Apply for this function" button_link="#form" >}}

#### Summary

Would you like to contribute to shaping AI standards for the common good? Become part of Algorithm Audit’s part-time voluntary cohort, consisting of five international qualitative and quantitative responsible AI experts. By joining, you will be registered on behalf of Algorithm Audit as a CEN-CENELEC JTC21 expert, giving you direct involvement in the working groups devising AI standards at the European level. Participating in this cohort positions you to become an AI Act expert ahead of the Act’s implementation in 2026. It also enables you to work with cutting-edge not-for-profit international AI auditing experts.

#### What is Algorithm Audit?

Algorithm Audit is a European knowledge platform for AI bias testing and normative AI standards. We are a young, tech-savvy NGO working on ethical issues that arise in real-world algorithms. We bring together experts from various professional backgrounds to build bottom-up public knowledge on how to use AI in a responsible manner (see our case repository: https://algorithmaudit.eu/algoprudence/). We also develop, maintain and test open-source AI auditing tools; see, for instance, our synthetic data generation and bias detection tool cohorts (https://algorithmaudit.eu/about/teams/).

#### Project activities

You will follow standardization activities in one of the following working groups (WGs): WG2 Operational aspects (e.g., risk management), WG3 Engineering aspects (e.g., data and bias) or WG4 Trustworthiness (e.g., fundamental rights). Within your chosen WG, your role involves staying informed about ongoing standardization activities by reviewing online materials and attending WG meetings. Depending on the distribution of cohort members over the WGs, tasks can be divided. During the first four weeks of the cohort, you and your fellow cohort members will undergo a concise online standardization training to familiarize yourselves with CEN-CENELEC’s information system. After this, you will collaborate with Algorithm Audit’s AI auditing experts to formulate written contributions to ongoing standardization activities. Insights gained from Algorithm Audit’s bottom-up auditing work will serve as input for the top-down AI standards currently under development. Every 4-6 weeks, you will provide a brief summary of the WG’s recent progress to your fellow cohort members.

#### What will you do as a cohort team member?

* Dedicate 2-4 hours per week, from 15 April to 31 December 2024, to JTC21 standardization activities;
* As an AI expert, serve as a representative for Algorithm Audit within the international AI standardization community, showcasing Algorithm Audit as a European knowledge platform for AI bias testing and normative AI standards;
* Translate bottom-up AI auditing experiences into meaningful contributions to procedural and technical standards for AI, for instance hypothesis testing standards (t-tests, Z-tests, chi-square tests, etc.) for risk profiling algorithms, the inclusion of stakeholder panels for qualitative interpretation of confusion matrix-based performance/fairness metrics (FPRs, FNRs, etc.), and your own ideas (a minimal sketch of such a test follows this list). See also Algorithm Audit’s algoprudence case repository: https://algorithmaudit.eu/algoprudence/;
* Coordinate your own work activities. Reading and writing can be scheduled at your convenience; attendance at working group meetings may require some planning and can be coordinated with fellow cohort members;
* Share (bi)monthly updates with the standardization cohort;
* Become an expert in the standards that will underpin the AI Act from 2026 onwards.
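
As a hedged illustration of the kind of hypothesis test mentioned in the list above, the sketch below runs a chi-square test on confusion matrix-based false positive rates for two groups. The counts and group labels are hypothetical, and scipy.stats.chi2_contingency is assumed to be available; this is a sketch, not a prescribed standard.

```python
# Minimal sketch: chi-square test comparing false positive rates (FPRs) between two groups.
# All counts and group labels are hypothetical and for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows = groups, columns = [false positives, true negatives] among actual negatives.
contingency = np.array([
    [30, 470],   # group A: FPR = 30 / 500 = 0.06
    [55, 445],   # group B: FPR = 55 / 500 = 0.11
])

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# A small p-value indicates that the FPR difference between the groups is larger than
# chance alone would explain, the kind of disparity a standardized test could flag.
```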
#### Candidate profile

* PhD or MSc degree in one of the following fields related to responsible AI: computer science, engineering, statistics, mathematics, ethics, philosophy, law, policy or social sciences with a focus on digital society;
* Proven track record and, ideally, active participation in public discussions regarding responsible AI;
* Understanding of academic discussions concerning AI and its intersection with fundamental rights, explainability, human oversight and/or risk management;
* Willing to contribute 2-4 hours per week, alongside your day-to-day professional job, to build public knowledge for responsible algorithms;
* Able to assure there is no conflict of interest between contributing to AI standards from a common-good perspective and your professional activities. If you are employed in industry, please provide further details on your motivation for participating in standardization endeavors and how you plan to address any potential conflicts;
* Available until 31-12-2024 to participate in the cohort;
* Advantageous: methodological expertise in hypothesis testing, unsupervised machine learning (specifically clustering) and statistical inference.

#### Our approach to diversity, equity and inclusion

We encourage candidates from all backgrounds to apply and join us in our mission to build a European knowledge platform for AI bias testing and normative AI standards. Algorithm Audit is committed to inclusion across race, gender, age, religion, identity and experience. We foster a multidisciplinary environment where everyone, from any background, can do their best work. We actively seek to include a diverse set of voices and perspectives in all aspects of our activities: how we form cohorts, how we structure our leadership team, the advice we give and the technical tools we deploy. Algorithm Audit’s commitment is reflected in its core mission to strengthen fairer and less discriminatory deployment of AI in all parts of society. We build and share public knowledge about discriminatory bias and foster equitable algorithms and methods for data analysis.

{{< form1 >}}

{{< accordion_item_close >}}

{{< accordions_area_close >}}

content/english/knowledge-platform/knowledge-base/test.md

Lines changed: 0 additions & 11 deletions
This file was deleted.

content/nederlands/about/vacancies.md

Lines changed: 0 additions & 106 deletions
@@ -63,109 +63,3 @@ form1:
type: checkbox
---

{{< accordions_area_open >}}

{{< accordion_item_open image="/images/logo/logo.svg" title="Responsible AI expert – AI Act standardization cohort" id="standardization" tag1="rolling applications" tag2="2-4 hours per week" tag3="voluntary" background_color="#ffffff" >}}

{{< button button_text="Apply for this function" button_link="#form" >}}

#### Summary

Would you like to contribute to shaping AI standards for the common good? Become part of Algorithm Audit’s part-time voluntary cohort, consisting of five international qualitative and quantitative responsible AI experts. By joining, you will be registered on behalf of Algorithm Audit as a CEN-CENELEC JTC21 expert, giving you direct involvement in the working groups devising AI standards at the European level. Participating in this cohort positions you to become an AI Act expert ahead of the Act’s implementation in 2026. It also enables you to work with cutting-edge not-for-profit international AI auditing experts.

#### What is Algorithm Audit?

Algorithm Audit is a European knowledge platform for AI bias testing and normative AI standards. We are a young, tech-savvy NGO working on ethical issues that arise in real-world algorithms. We bring together experts from various professional backgrounds to build bottom-up public knowledge on how to use AI in a responsible manner (see our case repository: https://algorithmaudit.eu/algoprudence/). We also develop, maintain and test open-source AI auditing tools; see, for instance, our synthetic data generation and bias detection tool cohorts (https://algorithmaudit.eu/about/teams/).

#### Project activities

You will follow standardization activities in one of the following working groups (WGs): WG2 Operational aspects (e.g., risk management), WG3 Engineering aspects (e.g., data and bias) or WG4 Trustworthiness (e.g., fundamental rights). Within your chosen WG, your role involves staying informed about ongoing standardization activities by reviewing online materials and attending WG meetings. Depending on the distribution of cohort members over the WGs, tasks can be divided. During the first four weeks of the cohort, you and your fellow cohort members will undergo a concise online standardization training to familiarize yourselves with CEN-CENELEC’s information system. After this, you will collaborate with Algorithm Audit’s AI auditing experts to formulate written contributions to ongoing standardization activities. Insights gained from Algorithm Audit’s bottom-up auditing work will serve as input for the top-down AI standards currently under development. Every 4-6 weeks, you will provide a brief summary of the WG’s recent progress to your fellow cohort members.

#### What will you do as a cohort team member?

* Dedicate 2-4 hours per week, from 15 April to 31 December 2024, to JTC21 standardization activities;
* As an AI expert, serve as a representative for Algorithm Audit within the international AI standardization community, showcasing Algorithm Audit as a European knowledge platform for AI bias testing and normative AI standards;
* Translate bottom-up AI auditing experiences into meaningful contributions to procedural and technical standards for AI, for instance hypothesis testing standards (t-tests, Z-tests, chi-square tests, etc.) for risk profiling algorithms, the inclusion of stakeholder panels for qualitative interpretation of confusion matrix-based performance/fairness metrics (FPRs, FNRs, etc.), and your own ideas. See also Algorithm Audit’s algoprudence case repository: https://algorithmaudit.eu/algoprudence/;
* Coordinate your own work activities. Reading and writing can be scheduled at your convenience; attendance at working group meetings may require some planning and can be coordinated with fellow cohort members;
* Share (bi)monthly updates with the standardization cohort;
* Become an expert in the standards that will underpin the AI Act from 2026 onwards.

#### Candidate profile

* PhD or MSc degree in one of the following fields related to responsible AI: computer science, engineering, statistics, mathematics, ethics, philosophy, law, policy or social sciences with a focus on digital society;
* Proven track record and, ideally, active participation in public discussions regarding responsible AI;
* Understanding of academic discussions concerning AI and its intersection with fundamental rights, explainability, human oversight and/or risk management;
* Willing to contribute 2-4 hours per week, alongside your day-to-day professional job, to build public knowledge for responsible algorithms;
* Able to assure there is no conflict of interest between contributing to AI standards from a common-good perspective and your professional activities. If you are employed in industry, please provide further details on your motivation for participating in standardization endeavors and how you plan to address any potential conflicts;
* Available until 31-12-2024 to participate in the cohort;
* Advantageous: methodological expertise in hypothesis testing, unsupervised machine learning (specifically clustering) and statistical inference.

#### Our approach to diversity, equity and inclusion

We encourage candidates from all backgrounds to apply and join us in our mission to build a European knowledge platform for AI bias testing and normative AI standards. Algorithm Audit is committed to inclusion across race, gender, age, religion, identity and experience. We foster a multidisciplinary environment where everyone, from any background, can do their best work. We actively seek to include a diverse set of voices and perspectives in all aspects of our activities: how we form cohorts, how we structure our leadership team, the advice we give and the technical tools we deploy. Algorithm Audit’s commitment is reflected in its core mission to strengthen fairer and less discriminatory deployment of AI in all parts of society. We build and share public knowledge about discriminatory bias and foster equitable algorithms and methods for data analysis.

{{< form1 >}}

{{< accordion_item_close >}}

{{< accordions_area_close >}}

0 commit comments
