In response to the European Commission's consultation on the Digital Services Act (DSA) Article 37 Delegated Regulation, which governs independent audits, stakeholders have raised the following concerns:

* Private auditors (like PwC and Deloitte) warn that the lack of guidance on criteria against which to audit poses a risk of subjective audits;
* Tech companies (like Snap and Wikipedia) raise concerns about the industry’s lack of expertise to audit specific AI products, like company-tailored timeline recommender systems.
#### Read Algorithm Audit's feedback to the European Commission on DSA Art. 37 Delegated Regulation
The GDPR has strengths regarding participatory decision-making, but it also has weaknesses in regulating profiling algorithms, in part because of its focus on fully automated decision-making.
* <ahref="https://gdpr-info.eu/art-35-gdpr/"target="\_blank"> Participatory Data Privacy Impact Assessment (DPIA) (art. 35 sub 9)</a> – This provision mandates that in cases where a Data Privacy Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be seeked. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
* <ahref="https://gdpr-info.eu/art-22-gdpr/"target="\_blank"> Automated decision-making (art. 22 sub 2)</a> – Ongoing legal uncertainty what exactly is 'automated decision-making' and 'meaningful human interaction' given the <ahref="[https://](https://curia.europa.eu/juris/liste.jsf?num=C-634/21)"target="_blank">Schüfa court</a> ruling by the Court of Justice of the European Union (CJEU).
#### Article summarizing the interaction between the GDPR and the AI Act regarding data collection for debiasing
Administrative law provides a normative framework for algorithm-driven decision-making processes; in the Netherlands, for instance, these norms are codified as the general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice but require contextualisation, which is often lacking. Consider, for instance:
* **Duty to give reasons:** It must be sufficiently clear on what grounds and why an administrative body takes a decision. When an algorithm is used for decision support, it should be explained how the output of the algorithm contributed to the decision-making process.
* **Duty of care:** Among other things, this duty requires that a situation be created in which all interests can be weighed and in which a suitable ML method is used;
* **Fair play principle:** The principle of fair play, or proper treatment, which is partly codified as a prohibition of bias in Section 2:4 of the Dutch General Administrative Law Act, concerns the impartial execution of tasks by an administrative body. We argue that 'contextualising' the gpga in the case of this principle should focus on new, digital manifestations of bias, as sketched below. A best-efforts obligation could then be applied to prevent bias and guarantee fairness in algorithmic applications.
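
As a minimal illustration of what testing for such digital manifestations of bias can look like in practice, the sketch below compares the rates at which a risk-profiling model selects different demographic groups. This is an assumed example, not a method prescribed by the gpga or by Algorithm Audit; the data, the group labels and the four-fifths (0.8) rule of thumb are illustrative assumptions.

```python
# Illustrative sketch: does a risk-profiling model select one group
# at a disproportionate rate? Data and threshold are assumptions.
from collections import defaultdict

def selection_rates(selected, groups):
    """Per-group selection rate of a binary 'flagged for review' outcome."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for flag, group in zip(selected, groups):
        totals[group] += 1
        flagged[group] += int(flag)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model output: 1 = citizen selected for re-examination.
selected = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(selected, groups)
print(rates)                          # {'a': 0.75, 'b': 0.25}
print(disparate_impact_ratio(rates))  # 0.33..., below the 0.8 rule of thumb
```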
#### Read the article How ‘algoprudence’ can contribute to responsible use of ML-algorithms and its interplay with the Dutch General Administrative Law Act
The [Impact Assessment Human Rights and Algorithms (IAMA)](https://www.rijksoverheid.nl/documenten/rapporten/2021/02/25/impact-assessment-mensenrechten-en-algoritmes) and the [Handbook for Non-Discrimination](https://www.rijksoverheid.nl/documenten/rapporten/2021/06/10/handreiking-non-discriminatie-by-design), both developed by the Dutch government, assess discriminatory practices mainly by asking questions that are meant to stimulate self-reflection. They do not provide answers or concrete guidelines on how to realise ethical algorithms.
Unifying the principles of sound administration with (semi-)automated decision-making remains challenging.
Over the years, many Fundamental Rights Impact Assessments (FRIAs) have been developed along these lines. Like the Dutch instruments above, FRIAs typically assess the responsible deployment of algorithms and AI through questions meant to stimulate self-reflection, rather than through concrete guidance on how to realise ethical algorithms.
#### Read Algorithm Audit's comparative analysis of 10 FRIAs