
Commit 841c9aa

Update policy observatory
1 parent 863f233 commit 841c9aa

7 files changed: +27 additions, -40 deletions


config/_default/menus.NL.toml

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ url = "/nl/knowledge-platform/knowledge-base"
 icon = "fa-check"
 [[main]]
 parent = "Kennisplatform"
-name = "AI beleid observatorium"
+name = "AI beleidsobservatorium"
 url = "nl/knowledge-platform/policy-observatory"
 weight = 2
 [[main.params]]

content/english/knowledge-platform/policy-observatory.md

Lines changed: 12 additions & 23 deletions
@@ -1,9 +1,9 @@
 ---
 title: AI Policy Observatory
 subtitle: >
-  AI ethics urgently needs case-based experience and a bottom-up approach. We
-  believe existing and proposed legislation is and will not suffice to realize
-  ethical algorithms. For various policy initiatives, we elaborate below why.
+  There are various policy initiatives for responsible deployment of algorithms
+  and AI. On this page information is collected about these initiatives,
+  including reference material that Algorithm Audit has developed.
 image: /images/svg-illustrations/knowledge_base.svg
 reports_preview:
   title: White papers
@@ -22,27 +22,19 @@ reports_preview:
     conducting independent audits
 ---
 
-{{< tab_header width="2" default_tab="AI-Act" tab1_id="AI-Act" tab1_title="AI Act" tab2_id="DSA" tab2_title="DSA" tab3_id="GDPR" tab3_title="GDPR" tab4_id="administrative-law" tab4_title="Administrative law" tab5_id="FRIA" tab5_title="FRIA" tab6_id="algorithm-registers" tab6_title="Registers" >}}
+{{< tab_header width="2" default_tab="AIAct" tab1_id="AIAct" tab1_title="AI Act" tab2_id="GDPR" tab2_title="GDPR" tab3_id="administrative-law" tab3_title="Administrative law" tab4_id="FRIA" tab4_title="FRIA" tab5_id="algorithm-registers" tab5_title="Registers" tab6_id="DSA" tab6_title="DSA" >}}
 
-{{< tab_content_open id="AI-Act" icon="fa-newspaper" title="AI Act" >}}
+{{< tab_content_open id="AIAct" icon="fa-newspaper" title="AI Act" >}}
 
 The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206" target="_blank">AI Act</a> imposes broad new responsibilities to control risks from AI systems without at the same time laying down specific standards they are expected to meet. For instance:
 
-* **Conformity assessment (Art. 43) –** The proposed route for internal control relies too much on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best-practices;
 * **Risk- and quality management systems (Art. 9 and 17) –** Requirements set out for risk management systems and quality management systems remain too generic. For example, they do not provide precise guidelines on how to identify and mitigate ethical issues such as algorithmic discrimination;
-* **Normative standards –** Technical standards alone, as requested by the European Commission from standardization bodies CEN-CENELEC, are not enough to realize AI harmonization across the EU. Publicly available technical and normative best-practices for fair AI are urgently needed.
+* **Conformity assessment (Art. 43) –** The proposed route for internal control relies too much on the self-reflective capacities of producers to assess AI quality management, risk management and bias, resulting in subjective best-practices;
+* **Technical standards –** Technical standards alone, as requested by the European Commission from standardization bodies CEN-CENELEC, are not enough to realize AI harmonization across the EU. Publicly available technical and normative best-practices for fair AI are urgently needed.
 
 As a member of Dutch standardization body NEN, Algorithm Audit contributes to the European debate on how fundamental rights should be co-regulated by product safety regulation.
 
-#### Presentation to European standardization body CEN-CENELEC on stakeholder panels
-
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1vadydN4_ZEXJ0h_Sj-4GRUwJSacM0fCK/preview" width_desktop_pdf="12" width_mobile_pdf="12" >}}
-
-{{< button button_text="Learn more about our standardization activities" button_link="/knowledge-platform/standards/" >}}
-
-Our audits take into account upcoming harmonized standards that will be applicable under the AI Act, excluding cybersecurity specifications. For each of our technical and normative audit reports, it is elaborated how the report aligns with the current status of AI Act harmonized standards.
-
-{{< button button_text="Case repository" button_link="/algoprudence/" >}}
+{{< button button_text="Learn more about our AI Act Implementation Tool" button_link="/technical-tools/implementation-tool/" >}}
 
 {{< tab_content_close >}}
 

@@ -58,7 +50,7 @@ The [Digital Services Act (DSA)](https://eur-lex.europa.eu/legal-content/EN/TXT
 
 #### Read our feedback to the European Commission on DSA Art. 37 Delegated Regulation
 
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1v6CApiRsT4vE1e-EXJnHDufk0FyXLHwL/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
+{{< embed_pdf url="/pdf-files/policy-observatory/20230705_DSA_delegated_regulation.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}
 
 {{< button button_text="Read the white paper" button_link="/knowledge-platform/knowledge-base/white_paper_dsa_delegated_regulation_feedback/" >}}

@@ -69,14 +61,11 @@ The [Digital Services Act (DSA)](https://eur-lex.europa.eu/legal-content/EN/TXT
 The GDPR has its strengths regarding participatory decision-making, but it also has weaknesses in regulating profiling algorithms and in its focus on fully automated decision-making.
 
 * <a href="https://gdpr-info.eu/art-35-gdpr/" target="_blank">Participatory DPIA (art. 35 sub 9)</a> – This provision mandates that in cases where a Data Protection Impact Assessment (DPIA) is obligatory, the opinions of data subjects regarding the planned data processing shall be sought. This is a powerful legal mechanism to foster collaborative algorithm development. Nevertheless, the inclusion of data subjects in this manner is scarcely observed in practice;
-* <a href="https://gdpr-info.eu/recitals/no-71/" target="_blank">Profiling (recital 71)</a> – Profiling is defined as: “to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. However, the approval of profiling, particularly when “authorised by Union or Member State law to which the controller is subject, including fraud monitoring”, grants public and private entities significant flexibility to integrate algorithmic decision-making derived from diverse types of profiling. This wide latitude raises concerns about the potential for excessive consolidation of personal data and the consequences of algorithmic determinations, as illustrated by simple, rule-based but harmful profiling algorithms in The Netherlands;
-* <a href="https://gdpr-info.eu/art-22-gdpr/" target="_blank">Automated decision-making (art. 22 sub 2)</a> – Allowing wide-ranging automated decision-making (ADM) and profiling under the sole condition of contract agreement opens the door for large-scale unethical algorithmic practices without accountability and public awareness.
-
-#### Read Algorithm Audit's technical audit of a risk profiling-based control process of a Dutch public sector organisation
+* <a href="https://gdpr-info.eu/art-22-gdpr/" target="_blank">Automated decision-making (art. 22 sub 2)</a> – There is ongoing legal uncertainty about what exactly constitutes 'automated decision-making' and 'meaningful human interaction' given the <a href="https://curia.europa.eu/juris/liste.jsf?num=C-634/21" target="_blank">Schüfa ruling</a> by the Court of Justice of the European Union (CJEU).
 
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/17dwU4zAqpyixwVTKCYM7Ezq1VM5_kcDa/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
+#### Article summarizing the interaction between the GDPR and the AI Act regarding data collection for debiasing
 
-{{< button button_link="/algoprudence/cases/aa202401_bias-prevented/" button_text="Technical audit" >}}
+{{< embed_pdf url="/pdf-files/policy-observatory/2023_VanBekkum_Using sensitive data to prevent discrimination by AI.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}
 
 {{< tab_content_close >}}
 
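In both language versions the tab ids in tab_header are renamed in lockstep with the id of the matching tab_content_open, which suggests the two shortcodes are linked by id: each tabN_id presumably has to equal the id of one content block, and default_tab names the tab opened on page load. A minimal sketch of the pairing, using only shortcode names and parameters that appear in this diff (tab bodies are placeholders):

{{< tab_header width="2" default_tab="AIAct" tab1_id="AIAct" tab1_title="AI Act" tab2_id="GDPR" tab2_title="GDPR" >}}

{{< tab_content_open id="AIAct" icon="fa-newspaper" title="AI Act" >}}
Placeholder body for the AI Act tab.
{{< tab_content_close >}}

{{< tab_content_open id="GDPR" icon="fa-newspaper" title="GDPR" >}}
Placeholder body for the GDPR tab.
{{< tab_content_close >}}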

content/nederlands/knowledge-platform/policy-observatory.md

Lines changed: 12 additions & 14 deletions
@@ -1,29 +1,27 @@
 ---
-title: AI beleid observatorium
+title: AI beleidsobservatorium
 subtitle: >
-  Om concreet aan de slag te gaan met de verantwoorde inzet van algoritmes is
-  het urgent dat praktijkervaring op code-niveau publiek wordt gedeeld.
-  Algorithm Audit is van mening dat abstracte en top-down wet- en regelgeving
-  hier onvoldoende soelaas voor biedt. Per beleidsinstrument lichten we
-  hieronder toe waarom.
+  Er zijn verschillende beleidsinitiatieven om algoritmes en AI verantwoord in te zetten.
+  Op deze pagina wordt informatie bijgehouden over belangrijke Europese en nationale initiatieven, inclusief
+  materiaal dat Algorithm Audit over het onderwerp heeft ontwikkeld.
 image: /images/svg-illustrations/knowledge_base.svg
 ---
 
-{{< tab_header width="2" default_tab="AI-Act" tab1_id="AI-Act" tab1_title="AI Act" tab2_id="DSA" tab2_title="DSA" tab3_id="AVG" tab3_title="AVG" tab4_id="Awb" tab4_title="Bestuurs recht" tab5_id="IAMA" tab5_title="IAMA" tab6_id="algorithm-registers" tab6_title="Algoritme register" >}}
+{{< tab_header width="2" default_tab="AIAct" tab1_id="AIAct" tab1_title="AI-verordening" tab2_id="GDPR" tab2_title="AVG" tab3_id="administrative-law" tab3_title="Bestuursrecht" tab4_id="FRIA" tab4_title="IAMA" tab5_id="algorithm-registers" tab5_title="Algoritme register" tab6_id="DSA" tab6_title="DSA" >}}
 
-{{< tab_content_open id="AI-Act" icon="fa-newspaper" title="AI Verordening" >}}
+{{< tab_content_open id="AIAct" icon="fa-newspaper" title="AI-verordening" >}}
 
-De <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206" target="_blank">AI Verordening</a> legt brede nieuwe verantwoordelijkheden op om risico's van AI-systemen te beheersen, maar specifieke normen voor de verantwoorde inzet van algoritmes ontbreken vooralsnog. Bijvoorbeeld:
+De <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206" target="_blank">AI-verordening</a> legt brede nieuwe verantwoordelijkheden op om risico's van AI-systemen te beheersen, maar specifieke normen voor de verantwoorde inzet van algoritmes ontbreken vooralsnog. Bijvoorbeeld:
 
-* **Risico- en kwaliteitmanagement systeem (Art. 9 en 17) –** Vereisten voor risico- en kwaliteitmanagement systemen blijven te generiek. De vereisten stellen bijvoorbeeld dat AI systemen niet mogen discrimineren en dat ethische risico's in kaart moeten worden gebracht. Er wordt echter niet toegelicht hoe discriminatie kan worden vastgesteld of hoe waardenspanningen beslecht kunnen worden;
+* **Risico- en kwaliteitmanagementsysteem (Art. 9 en 17) –** Vereisten voor risico- en kwaliteitmanagement systemen blijven te generiek. De vereisten stellen bijvoorbeeld dat AI systemen niet mogen discrimineren en dat ethische risico's in kaart moeten worden gebracht. Er wordt echter niet toegelicht hoe discriminatie kan worden vastgesteld of hoe waardenspanningen beslecht kunnen worden;
 * **Conformiteitsassessment (Art. 43) –** De AI Verordening leunt zwaar op interne controles en mechanismen die zelfreflectie moeten bevorderen om AI-systemen op een verantwoorde wijze in te zetten. Dit leidt echter tot subjectieve keuzen. Meer institutionele duiding is nodig over normatieve vraagstukken;
 * **Normatieve standaarden –** Enkel technische standaarden voor AI-systemen, zoals de Europese Commissie standaardiseringsorganisaties CEN-CENELEC heeft verzocht te ontwikkelen, zijn onvoldoende voor de verantwoorde inzet van AI-systemen. Publieke kennis over technische én normatieve oordeelsvorming over verantwoorde AI-systemen is hard nodig. Maar juist hieraan is een gebrek.
 
 Als lid van het Nederlands Normalisatie Instituut NEN draagt Stichting Algorithm Audit bij aan het Europese debat over hoe fundamentele rechten gecoreguleerd kunnen worden door productveiligheidsregulatie zoals de AI Verordening.
 
 #### Presentatie Algorithm Audit tijdens plenaire bijeenkomst Europese standaardiseringsorganisatie CEN-CENELEC over diverse en inclusieve adviescommissies in Dublin, feb-2024
 
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1vadydN4_ZEXJ0h_Sj-4GRUwJSacM0fCK/preview" width_desktop_pdf="12" width_mobile_pdf="12" >}}
+{{< embed_pdf url="/pdf-files/policy-observatory/20240213_JTC21_plenary_FRIAs_stakeholder_panels.pdf" width_desktop_pdf="12" width_mobile_pdf="12" >}}
 
 {{< button button_text="Kom meer te weten over onze standaardiseringsactiviteiten" button_link="/nl/knowledge-platform/standards/" >}}

@@ -45,7 +43,7 @@ The [Digital Services Act (DSA)](https://eur-lex.europa.eu/legal-content/EN/TXT
 
 #### Read our feedback to the European Commission on DSA Art. 37 Delegated Regulation
 
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/1v6CApiRsT4vE1e-EXJnHDufk0FyXLHwL/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
+{{< embed_pdf url="/pdf-files/policy-observatory/20230705_DSA_delegated_regulation.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}
 
 {{< button button_text="Read the white paper" button_link="/knowledge-platform/knowledge-base/white_paper_dsa_delegated_regulation_feedback/" >}}

@@ -61,13 +59,13 @@ The GDPR has its strengths regarding participatory decision-making, but it has a
 
 #### Read Algorithm Audit's technical audit of a risk profiling-based control process of a Dutch public sector organisation
 
-{{< pdf_frame articleUrl1="https://drive.google.com/file/d/17dwU4zAqpyixwVTKCYM7Ezq1VM5_kcDa/preview" width_desktop_pdf="6" width_mobile_pdf="12" >}}
+{{< embed_pdf url="/pdf-files/algoprudence/TA_AA202401/TA_AA202401_Vooringenomenheid_voorkomen.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}
 
 {{< button button_link="/algoprudence/cases/aa202401_bias-prevented/" button_text="Technical audit" >}}
 
 {{< tab_content_close >}}
 
-{{< tab_content_open icon="fa-newspaper" title="Administrative law" id="administrative-law" >}}
+{{< tab_content_open icon="fa-newspaper" title="Bestuursrecht" id="administrative-law" >}}
 
 Administrative law provides a normative framework for algorithmic-driven decision-making processes. In The Netherlands, for instance, through the codification of general principles of good administration (gpga). We argue that these principles are relevant to algorithmic practice but require contextualisation, which is often lacking. Take a closer look, for instance, at:
7371

layouts/shortcodes/embed_pdf.html

Lines changed: 2 additions & 2 deletions
@@ -110,8 +110,8 @@
 {{ $urlHash := substr (md5 $url) 0 8 }}
 
 <div class="row">
-  <div class="col-{{ $mobile_width }} col-lg-{{ $desktop_width }} my-5" style="display:table;">
-    <div class="embed-pdf-container mt-5" id="embed-pdf-container-{{ $urlHash }}">
+  <div class="col-{{ $mobile_width }} col-lg-{{ $desktop_width }} my-3" style="display:table;">
+    <div class="embed-pdf-container mt-0" id="embed-pdf-container-{{ $urlHash }}">
       <div class="pdf-loadingWrapper" id="pdf-loadingWrapper-{{ $urlHash }}">
         <div class="pdf-loading" id="pdf-loading-{{ $urlHash }}"></div>
       </div>
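The $urlHash kept as context above is the first 8 hex characters of the MD5 of the PDF URL, so several PDFs embedded on one page get distinct embed-pdf-container-*, pdf-loadingWrapper-* and pdf-loading-* element ids. A sketch of how a call site maps onto this template; the .Get parameter plumbing is an assumption, since this hunk only shows the container markup:

{{/* Call site, as used in the content files above: */}}
{{< embed_pdf url="/pdf-files/policy-observatory/20230705_DSA_delegated_regulation.pdf" width_desktop_pdf="6" width_mobile_pdf="12" >}}

{{/* Assumed parameter plumbing near the top of layouts/shortcodes/embed_pdf.html: */}}
{{ $url := .Get "url" }}                          {{/* path of the locally hosted PDF */}}
{{ $desktop_width := .Get "width_desktop_pdf" }}  {{/* Bootstrap col-lg-* column width */}}
{{ $mobile_width := .Get "width_mobile_pdf" }}    {{/* Bootstrap col-* column width */}}
{{ $urlHash := substr (md5 $url) 0 8 }}           {{/* shown in the hunk: stable per-URL id suffix */}}

The my-5 to my-3 and mt-5 to mt-0 class changes shrink the vertical margins around the viewer (Bootstrap spacing utilities), presumably to tighten spacing now that PDFs are served locally through embed_pdf rather than through Google Drive pdf_frame iframes.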
2 binary files not shown.
