title: Type of SIM card as a predictor for detecting payment fraud
subtitle: |
@@ -63,10 +29,10 @@ form1:
       required: true
       type: textarea
     - label: |
-        Contact detail
+        Contact details
       id: contact-details
       required: true
-      type: email
+      type: text
       placeholder: Mail address
 ---
@@ -86,9 +52,7 @@ The commission advises against using type of SIM card as an input variable in al
 Anonymized large multinational company with e-commerce platform.
 
-#### Algoprudence
-
-The problem statement and advice report can be downloaded <a href="https://drive.google.com/file/d/1fSETUhxOz0nF2nznsWq-4TyngP6lU7yH/preview" target="_blank">here</a>.
content/english/algoprudence/cases/aa202301_bert-based-disinformation-classifier.md (6 additions, 84 deletions)
@@ -1,98 +1,22 @@
 ---
-title: Higher-dimensional bias in a BERT-based disinformation classifier
+layout: case
+title: Multi-dimensional bias in a BERT-based disinformation classifier
 subtitle: |
   Problem statement (ALGO:AA:2023:01:P) and advice document (ALGO:AA:2023:01:A)
 image: /images/algoprudence/AA202301/Cover.png
-form1:
-  title: React to this normative judgement
-  content: >-
-    Your reaction will be sent to the team maintaining algoprudence. A team will
-    review your response and, if it complies with the guidelines, it will be placed in the Discussion & debate section
-    above.
-  button_text: Submit
-  backend_link: 'https://formspree.io/f/xyyrjyzr'
-  id: case-reaction
-  questions:
-    - label: Name
-      id: name
-      required: true
-      type: text
-    - label: Affiliated organization
-      id: affiliated-organization
-      type: text
-    - label: Reaction
-      id: reaction
-      required: true
-      type: textarea
-    - label: Contact detail
-      id: contact-details
-      required: true
-      type: email
-      placeholder: Mail address
-layout: case
-icon: fa-newspaper
-summary: >
-  The advice commission believes there is a low risk of (higher-dimensional)
-  proxy discrimination by the BERT-based disinformation classifier and that the
-  particular difference in treatment identified by the quantitative bias scan
-  can be justified, if certain conditions apply.
-sources: "Applying our self-build unsupervised\_[bias detection tool](https://algorithmaudit.eu/bias_scan)\_on a self-trained BERT-based disinformation classifier on the Twitter1516 dataset. Learn more on\_[Github](https://github.com/NGO-Algorithm-Audit/Bias_scan).\n"
-additional_content:
-  - title: Stanford's AI Audit Challenge 2023
-    content: "This case study, in combination with our\_[bias scan tool](https://algorithmaudit.eu/bias_scan), has been selected as a finalist for\_[Stanford's AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).\n"
…
-    content: "A visual presentation of this case study can be found in this\_[slide deck](https://github.com/NGO-Algorithm-Audit/Bias_scan/blob/master/Main_presentation_joint_fairness_assessment_method.pdf).\n"
-    width: 12
-algoprudence:
-  title: Report
-  intro: "Dowload the full report and problem statement\_[here](https://drive.google.com/file/d/1GHPwDaal3oBJZluFYVR59e1_LHhP8kni/view?usp=sharing).\n"
…
-      Anne Meuwese, Professor in Public Law & AI at Leiden University
-  - name: >
-      Hinda Haned, Professor in Responsible Data Science at University of
-      Amsterdam
-  - name: |
-      Raphaële Xenidis, Associate Professor in EU law at Sciences Po Paris
-  - name: |
-      Aileen Nielsen, Fellow Law\&Tech at ETH Zürich
-  - name: "Carlos Hernández-Echevarría, Assistant Director and Head of Public Policy at the anti-disinformation nonprofit fact-checker\_[Maldita.es](https://maldita.es/maldita-es-journalism-to-not-be-fooled/)\n"
-  - name: "Ellen Judson, Head of CASM and Sophia Knight, Researcher, CASM at Britain’s leading cross-party think tank\_[Demos](https://demos.co.uk/)\n"
-funded_by:
-  - url: 'https://europeanaifund.org/'
-    image: /images/supported_by/EUAISFund.png
-actions:
-  - id: ai_audit_2023
-    title: Finalist selection Stanford's AI Audit Challenge 2023
-    description: >
-      Our [bias detection tool](/technical-tools/BDT) and this case study have
-      been selected as a finalist for [Stanford's AI Audit Challenge
…
 {{< tab_header width="6" tab1_id="description" tab1_title="Description of algoprudence" tab2_id="actions" tab2_title="Actions following algoprudence" tab3_id="" tab3_title="" default_tab="description" >}}
 
-{{< tab_content_open icon="fa-newspaper" title="Higher-dimensional bias in a BERT-based disinformation classifier" id="description" >}}
+{{< tab_content_open icon="fa-newspaper" title="Multi-dimensional bias in a BERT-based disinformation classifier" id="description" >}}
 
 #### Algoprudence identification code
 
 ALGO:AA:2023:01
 
 #### Summary advice
 
-The advice commission believes there is a low risk of (higher-dimensional) proxy discrimination by the BERT-based disinformation classifier and that the particular difference in treatment identified by the quantitative bias scan can be justified, if certain conditions apply.
+The advice commission believes there is a low risk of (multi-dimensional) proxy discrimination by the BERT-based disinformation classifier and that the particular difference in treatment identified by the quantitative bias scan can be justified, if certain conditions apply.
 
 #### Source of case
 
@@ -108,9 +32,7 @@ This case study, in combination with our [bias detection tool](/technical-tools
 
 A visual presentation of this case study can be found in this [slide deck](https://github.com/NGO-Algorithm-Audit/Bias_scan/blob/master/Main_presentation_joint_fairness_assessment_method.pdf).
 
-#### Report
-
-Dowload the full report and problem statement [here](https://drive.google.com/file/d/1GHPwDaal3oBJZluFYVR59e1_LHhP8kni/view?usp=sharing).
@@ -141,7 +63,7 @@ Dowload the full report and problem statement [here](https://drive.google.com/f
 
 ##### Description
 
-Our [bias detection tool](https://algorithmaudit.eu/technical-tools/BDT) and this case study have been selected as a finalist for [Stanford’s AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).
+Our [unsupervised bias detection tool](/technical-tools/bdt/) and this case study have been selected as a finalist for [Stanford’s AI Audit Challenge 2023](https://hai.stanford.edu/ai-audit-challenge-2023-finalists).
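The removed `sources:` field above describes applying an unsupervised bias detection tool to a self-trained BERT-based disinformation classifier. As a rough sketch of how a clustering-based bias scan of that kind can work (hypothetical function and variable names; not the tool's actual implementation), one can cluster samples on their features and flag clusters where the classifier's error rate deviates most from the overall rate:

```python
# Minimal sketch of an unsupervised, clustering-based bias scan.
# Hypothetical names and parameters; not Algorithm Audit's actual tool.
import numpy as np
from sklearn.cluster import KMeans

def bias_scan(features: np.ndarray, y_true: np.ndarray, y_pred: np.ndarray, n_clusters: int = 8):
    """Rank clusters by how much their error rate exceeds the overall error rate."""
    errors = (y_true != y_pred).astype(float)      # per-sample misclassification indicator
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    overall_error = errors.mean()
    deviations = []
    for c in range(n_clusters):
        mask = clusters == c
        deviations.append((c, int(mask.sum()), errors[mask].mean() - overall_error))
    # Clusters with the largest positive deviation are candidates for closer review.
    return sorted(deviations, key=lambda row: row[2], reverse=True)
```

Such a scan needs no protected attributes as input, which is also why it can only point to clusters receiving deviating treatment; whether a flagged difference amounts to (proxy) discrimination remains a normative judgement, here left to the advice commission.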
content/english/algoprudence/cases/aa202302_risk-profiling-for-social-welfare-reexamination.md (5 additions, 7 deletions)
@@ -72,13 +72,11 @@ Unsolicited research, build upon [freedom of information requests](https://www.
 
 #### Presentation
 
-The advice report (AA:2023:02:A) has been presented to the Dutch Minister of Digitalization on November 29, 2023. A press release can be found [here](/events/press_room/#rotterdam-algoritme).
+The advice report (ALGO:AA:2023:02:A) has been presented to the Dutch Minister of Digitalization on November 29, 2023. A press release can be found [here](/events/press_room/#rotterdam-algoritme).
 
 {{< image id="presentation-minister" image1="/images/algoprudence/AA202302/Algorithm audit presentatie BZK FB-18.jpg" alt1="Presentation advice report to Dutch Minister of Digitalization" caption1="Presentation advice report to Dutch Minister of Digitalization" width_desktop="5" width_mobile="12" >}}
 
-#### Report
-
-Dowload the full report and problem statement [here](https://drive.google.com/file/d/1GHPwDaal3oBJZluFYVR59e1_LHhP8kni/view?usp=sharing).
@@ -100,21 +98,21 @@ Dowload the full report and problem statement [here](https://drive.google.com/f
 
 ##### Description
 
-Council members submitted <a href="https://amsterdam.raadsinformatie.nl/document/13573898/1/236+sv+Aslami%2C+IJmker+en+Garmy+inzake+toegepaste+profileringscriteria+gemeentelijke+algoritmes" target="_blank">questions</a> whether the machine learning (ML)-driven risk profiling algorithm currently tested by the City of Amsterdam satisfies the requirements as set out in AA-2023:02:A, including:
+Council members submitted <a href="https://amsterdam.raadsinformatie.nl/document/13573898/1/236+sv+Aslami%2C+IJmker+en+Garmy+inzake+toegepaste+profileringscriteria+gemeentelijke+algoritmes" target="_blank">questions</a> whether the machine learning (ML)-driven risk profiling algorithm currently tested by the City of Amsterdam satisfies the requirements as set out in ALGO:AA:2023:02:A, including:
 
 * (in)eligible selection criteria fed to the ML model
 * explainability requirements for the used explainable boosting algorithm
 * implications of the AIAct for this particular form of risk profiling.
…
-News website for Dutch public sector administration reported on AA:2023:02:A. See [link](https://www.binnenlandsbestuur.nl/digitaal/algoritmische-profilering-onder-strikte-voorwaarden-mogelijk).
+News website for Dutch public sector administration reported on ALGO:AA:2023:02:A. See [link](https://www.binnenlandsbestuur.nl/digitaal/algoritmische-profilering-onder-strikte-voorwaarden-mogelijk).
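One of the council questions in the hunk above concerns the explainable boosting algorithm used in the tested risk-profiling model. Purely as an illustration of what such a glass-box model looks like in code (hypothetical file, features and label; not the City of Amsterdam's actual model), the `interpret` package provides an explainable boosting classifier whose per-feature contributions can be inspected:

```python
# Illustrative explainable boosting machine (EBM) on hypothetical data;
# not the City of Amsterdam's model or feature set.
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("reexamination_cases.csv")  # hypothetical dataset
feature_cols = ["months_on_welfare", "household_size", "prior_reexaminations"]  # hypothetical selection criteria
X_train, X_test, y_train, y_test = train_test_split(
    df[feature_cols], df["unduly_granted"], random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: per-feature shape functions and importances, which is
# what makes the (in)eligibility of selection criteria auditable.
explanation = ebm.explain_global()
print(ebm.score(X_test, y_test))
```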
@@ -94,20 +91,6 @@ Report *Preventing prejudice* has been <a href="https://www.rijksoverheid.nl/doc
 
 {{< accordions_area_open id="discussion" >}}
 
-{{< accordion_item_open title="Reaction Netherlands Human Rights Institute on age discrimination" id="cvrm" background_color="#eef2f6" date="12-04-2024" tag1="reaction" image="/images/algoprudence/AA202302/Discussion&debate/CvRM.svg" >}}
-
-#### Age Discrimination
-
-Policies, such as those implemented by public sector agencies investigating (un)duly granted social welfare or employers seeking new employees, can intentionally or unintentionally lead to differentiation between certain groups of people. If an organization makes this distinction based on grounds that are legally protected, such as gender, origin, sexual orientation, or a disability or chronic illness, and there is no valid justifying reason for doing so, then the organization is engaging in prohibited discrimination. We refer to this as discrimination.
-
-But what about age? Both the Rotterdam-algorithm and DUO-algorithm, as studied by Algorithm Audit, differentiated based on age. However, in these cases, age discrimination does not occur.
-
-EU non-discrimination law also prohibits discrimination on the basis of age. For instance, arbitrarily rejecting a job applicant because someone is too old is not lawful. However, legislation regarding age differentiation allows more room for a justifying argument than for the aforementioned personal characteristics. This is especially true when the algorithm is not applied in the context of labor.
-
-Therefore, in the case of detecting unduly granted social welfare or misuse of college loans, it is not necessarily prohibited for an algorithm to consider someone's age. However, there must be a clear connection between age and the aim pursued. Until it is shown that someone's age increases the likelihood of misuse or fraud, age is ineligible as a selection criterion in algorithm-driven selection procedures. For example, for disability allowances for youngsters (Wajong), a clear connection exists and an algorithm can lawfully differentiate on age.
 The full report (TA:AA:2024:02) can be found <a href="https://drive.google.com/file/d/1uOhR9qXHW6P0i4uP7RNhil2G2dXzFjrp/preview" target="_blank">here</a>.
content/english/events/activities.md (10 additions, 10 deletions)
@@ -485,7 +485,11 @@ events:
   - value: type_interview
     label: interview
 - title: Seminar 'Algorithm validation'
-  description: "Algorithm Audit hosted a public seminar on algorithm validation and algoprudence in The Hague. Anne Meuwese\_shared insights on the value of algoprudence in contextualizing legal norms. Floris Holstege shed light into statistical hypothesis testing which plays an important role in our recent techincal audits.\_\n\nWe appreciated the interactive Q\\&A with the participants, especially the curious and critical questions which enables us to clarify our work, but also contributes to the further development and refinement of our activities as an NGO.\n\n{{< pdf_frame articleUrl1=\"https://drive.google.com/file/d/1edrNqP4cBgJ1zKv1970DsUJ3Tark6waF/preview\" width_desktop_pdf=\"12\" width_mobile_pdf=\"12\" >}}\n"
+  description: >
+    Algorithm Audit hosted a public seminar on algorithm validation and algoprudence in The Hague. Anne Meuwese shared insights on the value of algoprudence in contextualizing legal norms. Floris Holstege shed light on statistical hypothesis testing, which plays an important role in our recent technical audits. We appreciated the interactive Q&A with the participants, especially the curious and critical questions, which enable us to clarify our work and also contribute to the further development and refinement of our activities as an NGO.
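The seminar description above mentions statistical hypothesis testing as an important element of recent technical audits. As a generic illustration only (hypothetical counts; not the tests or data from those audits), a two-proportion z-test can check whether an observed error-rate gap between two groups is larger than chance would explain:

```python
# Illustrative two-proportion z-test on hypothetical error counts;
# not the statistics from Algorithm Audit's reports.
from statsmodels.stats.proportion import proportions_ztest

errors_per_group = [120, 80]     # misclassified cases in group A and group B
cases_per_group = [1000, 1000]   # total cases per group

z_stat, p_value = proportions_ztest(count=errors_per_group, nobs=cases_per_group)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value means the error-rate difference between the groups is
# unlikely under the null hypothesis of equal error rates.
```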
content/english/knowledge-platform/knowledge-base/Comparative_review_10_FRIAs.md (1 addition, 1 deletion)
@@ -31,4 +31,4 @@ Our assessment shows a sharp divide regarding the length and completeness of FRI
 
 Are you a frequent user or developer of a FRIA? Please reach out to [[email protected]](mailto:[email protected]) to share insights based on our case-based AI auditing experience.