
Commit e3376d9

move implementation tool en to i-frame, fix title of iframe shortcodes.
1 parent: a467d03

File tree

3 files changed: +11 −9 lines changed


content/english/technical-tools/SDG.md

Lines changed: 2 additions & 0 deletions
@@ -47,6 +47,8 @@ Synthetic data generation (SDG) – the creation of artificial datasets mimickin
 
 {{< container_close >}}
 
+{{< iframe src="https://local-first-bias-detection.s3.eu-central-1.amazonaws.com/synthetic-data.html?lang=en" title="Synthetic data generation tool" icon="fas fa-search" height="800px" >}}
+
 {{< container_open title="How can SDG be used for AI bias testing?" icon="fas fa-project-diagram" id="bias-testing" >}}
 
 SDG holds potential for third parties to audit datasets in a privacy-preserving way. There is not yet sufficient knowledge of how and when SDG serves as a suitable method for external bias testing. First, the complex process of SDG may not always be necessary for bias testing when simple approaches, such as univariate or bivariate aggregate statistics of the source data, suffice. Second, SDG can be performed using a plethora of methods, e.g., parametric, non-parametric, and copula-based estimation and inference methods. The best SDG method for a given use case depends on the underlying structure of the data and is therefore context-specific. At Algorithm Audit, we are investigating these open questions and building public knowledge on which form of data-sharing practice (SDG or an alternative) is best suited for privacy-preserving AI bias testing in specific use cases. Through our technical and qualitative work in this project, we contribute to this collective learning process.

content/english/technical-tools/implementation-tool.md

Lines changed: 8 additions & 8 deletions
@@ -23,23 +23,23 @@ Implementation of the AI Act raises difficult questions. What is the scope of th
 
 The questionnaires are designed to identify AI systems and their risk category using straightforward questions.
 
-Since many straightforward algorithms that impact people are not considered AI systems, the first questionnaire also identifies *impactful algorithms*. The term 'impactful algorithms' is used by the Dutch government to refer to simple algorithms that do not meet the definition of an AI system under the AI Act but still require risk management measures. More information can be found in the <a href="https://algoritmes.pleio.nl/attachment/entity/f1a35292-7ea6-4e47-93fa-b3358e9ab2e0" target="_blank">Algorithm Registry Guidance Document</a> of the Dutch Ministry of the Interior.
+Since many straightforward algorithms that impact people are not considered AI systems, the first questionnaire also identifies _impactful algorithms_. The term 'impactful algorithms' is used by the Dutch government to refer to simple algorithms that do not meet the definition of an AI system under the AI Act but still require risk management measures. More information can be found in the <a href="https://algoritmes.pleio.nl/attachment/entity/f1a35292-7ea6-4e47-93fa-b3358e9ab2e0" target="_blank">Algorithm Registry Guidance Document</a> of the Dutch Ministry of the Interior.
 
 All potential outcomes of the first questionnaire are shown in the [figure](/technical-tools/implementation-tool/#outcome) below on this webpage.
 
 {{< container_close >}}
 
-{{< webapp id="webapp" appId="AIActImplementationTool" src="https://ai-documentation.s3.eu-central-1.amazonaws.com/AI-Act-Questionnaire-v1.0.0.js" title="" >}}
+{{< iframe src="https://ai-documentation.s3.eu-central-1.amazonaws.com/index.html?lang=en" title="" icon="" height="500px" >}}
 
 {{< container_open icon="fas fa-layer-group" title="Outcomes questionnaires" id="outcome" >}}
 
 The outcomes of the first questionnaire are displayed in the below figure. The following categories are distinguished:
 
-* Algorithms: fall outside the scope of the AI Act, no additional control measures are needed
-* Impactful algorithms: fall outside the scope of the AI Act, additional control measures are needed
-* AI systems: are in scope of the AI Act, no additional control measures for high-risk AI systems are needed
-* High risk AI systems: are in scope of the AI Act, additional control measures for high-risk AI systems are needed
-* Prohibited AI systems: are in scope of the AI Act, usage of this type of AI systems is prohibited in the European Union
+- Algorithms: fall outside the scope of the AI Act, no additional control measures are needed
+- Impactful algorithms: fall outside the scope of the AI Act, additional control measures are needed
+- AI systems: are in scope of the AI Act, no additional control measures for high-risk AI systems are needed
+- High risk AI systems: are in scope of the AI Act, additional control measures for high-risk AI systems are needed
+- Prohibited AI systems: are in scope of the AI Act, usage of this type of AI systems is prohibited in the European Union
 
 <br> <br> <img src="/images/ai-act-implementation-tool/Venn diagram_EN.png" alt="drawing" width="600"/>
 
@@ -134,4 +134,4 @@ The reasoning and motivations behind the selected questions in the AI Act Implem
 white-space: pre-wrap;
 word-wrap: break-word;
 }
-</style>
+</style>

layouts/shortcodes/iframe.html

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ <h3 class="pl-3">{{ .Get "title" }}
 {{ end}}
 <div class="i-frame__container" >
 <iframe class="iframe" src='{{.Get "src" }}'
-title='.Get "title"'>
+title='{{.Get "title"}}'>
 </iframe>
 </div>
 <style>
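The shortcode fix above replaces a literal string with an evaluated template action: without the `{{ }}` delimiters, Go's template engine treats `.Get "title"` as plain text and never substitutes the shortcode's `title` parameter. A minimal sketch with Go's `text/template` illustrates the difference; note that it uses a plain `.Title` struct field as a stand-in for Hugo's `.Get "title"` shortcode method, which is an assumption made to keep the example self-contained.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderTitle parses tmpl and executes it against a struct with a single
// Title field (a stand-in for Hugo's shortcode context), returning the output.
func renderTitle(tmpl, title string) string {
	t := template.Must(template.New("t").Parse(tmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, struct{ Title string }{title}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Broken: no {{ }} delimiters, so the "action" is emitted as literal text.
	fmt.Println(renderTitle(`title='.Get "title"'`, "Synthetic data generation tool"))
	// prints: title='.Get "title"'

	// Fixed: the field is evaluated and interpolated into the attribute.
	fmt.Println(renderTitle(`title='{{.Title}}'`, "Synthetic data generation tool"))
	// prints: title='Synthetic data generation tool'
}
```

This is why the rendered iframes previously carried the literal title `.Get "title"` regardless of the `title` argument passed to the shortcode.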

0 commit comments
