The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206" target="_blank">AI Act</a> imposes broad new responsibilities to control risks from AI systems, but does not at the same time lay down the specific standards these systems are expected to meet. For instance:
* **Risk and quality management systems (Art. 9 and 17)** – The requirements set out for risk management systems and quality management systems remain too generic. For example, they do not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination;
* **Conformity assessment (Art. 43)** – The proposed route for internal control relies too heavily on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best practices;
* **Technical standards** – Technical standards alone, as requested by the European Commission from standardization bodies CEN-CENELEC, are not enough to realize AI harmonization across the EU. Publicly available technical and normative best practices for fair AI are urgently needed.
As a member of Dutch standardization body NEN, Algorithm Audit contributes to the European debate on how fundamental rights should be co-regulated through product safety legislation.
#### Presentation to European standardization body CEN-CENELEC on stakeholder panels
{{< button button_text="Learn more about our standardization activities" button_link="/knowledge-platform/standards/" >}}
Our audits take into account the upcoming harmonized standards that will apply under the AI Act, excluding cybersecurity specifications. Each of our technical and normative audit reports elaborates how it aligns with the current status of AI Act harmonized standards.
{{< button button_text="Read the white paper" button_link="/knowledge-platform/knowledge-base/white_paper_dsa_delegated_regulation_feedback/" >}}
The GDPR has strengths regarding participatory decision-making, but it also has weaknesses in regulating profiling algorithms and in its narrow focus on fully automated decision-making.
* <ahref="https://gdpr-info.eu/art-35-gdpr/"target="\_blank"> Participatory DPIA (art. 35 sub 9)</a> – This provision mandates that in cases where a Data Privacy Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be seeked. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
* <ahref="https://gdpr-info.eu/recitals/no-71/"target="\_blank"> Profiling (recital 71)</a> – Profiling is defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to integrate algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the potential for excessive consolidation of personal data and the consequences of algorithmic determinations. As illustrated by simple, rule-based but harmful profiling algorithms in The Netherlands;
* <ahref="https://gdpr-info.eu/art-22-gdpr/"target="\_blank"> Automated decision-making (art. 22 sub 2)</a> – Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of contract agreement opens the door for large scale unethical algorithmic practices without accountability and public awareness.
#### Read Algorithm Audit's technical audit of a risk profiling-based control process of a Dutch public sector organisation
{{< embed_pdf url="/pdf-files/policy-observatory/2023_VanBekkum_Using sensitive data to prevent discrimination by AI.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}
The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206" target="_blank">AI Act</a> imposes broad new responsibilities to manage the risks of AI systems, but specific standards for the responsible deployment of algorithms are still lacking. For example:
* **Risk and quality management systems (Art. 9 and 17)** – The requirements for risk and quality management systems remain too generic. They state, for example, that AI systems must not discriminate and that ethical risks must be identified, but they do not explain how discrimination can be established or how tensions between values can be resolved;
* **Conformity assessment (Art. 43)** – The AI Act leans heavily on internal controls and mechanisms intended to encourage self-reflection so that AI systems are deployed responsibly. This, however, leads to subjective choices. More institutional guidance is needed on normative questions;
* **Normative standards** – Technical standards for AI systems alone, such as those the European Commission has asked standardization bodies CEN-CENELEC to develop, are insufficient for the responsible deployment of AI systems. Public knowledge about both technical and normative judgment on responsible AI systems is urgently needed, yet precisely this is lacking.
As a member of the Dutch standardization body NEN, Algorithm Audit contributes to the European debate on how fundamental rights can be co-regulated through product safety regulation such as the AI Act.
#### Presentation by Algorithm Audit at the plenary meeting of European standardization body CEN-CENELEC on diverse and inclusive advisory committees, Dublin, February 2024
Administrative law provides a normative framework for algorithm-driven decision-making processes, in the Netherlands, for instance, through the codification of the general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice, but require contextualisation, which is often lacking. Consider, for instance: