Commit 80566ee

updated notebooks README to include audio backdoor attack

Signed-off-by: Swanand Ravindra Kadhe <[email protected]>

1 parent 777f22b

File tree

1 file changed, +3 −0 lines changed


notebooks/README.md

Lines changed: 3 additions & 0 deletions
@@ -22,6 +22,9 @@ shows how to create an adversarial attack on a video action recognition classifier
 [adversarial_audio_examples.ipynb](adversarial_audio_examples.ipynb) [[on nbviewer](https://nbviewer.jupyter.org/github/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_audio_examples.ipynb)]
 shows how to create adversarial examples of audio data with ART. Experiments in this notebook show how the waveform of a spoken digit of the AudioMNIST dataset can be modified with almost imperceptible changes so that the waveform gets misclassified as a different digit.
 
+[poisoning_attack_backdoor_audio.ipynb](poisoning_attack_backdoor_audio.ipynb) [[on nbviewer](https://nbviewer.jupyter.org/github/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/poisoning_attack_backdoor_audio.ipynb)]
+demonstrates the dirty-label backdoor attack on a TensorFlow v2 estimator for speech classification.
+
 <p align="center">
 <img src="../utils/data/images/adversarial_audio_waveform.png?raw=true" width="200" title="adversarial_audio_waveform">
 </p>
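The adversarial-audio notebook's core idea can be sketched without ART. The toy below is an assumption-laden stand-in (a linear classifier `logits = W @ x` instead of the notebook's real AudioMNIST model): a single FGSM-style signed step along the gradient of the class margin flips the prediction while perturbing each waveform sample only slightly.

```python
import numpy as np

# Minimal sketch, assuming a toy linear "digit classifier" over a raw
# waveform. Real attacks target deep networks, but the idea is the same:
# nudge x along the sign of the loss gradient to change the predicted class.
rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 10
W = rng.normal(size=(n_classes, n_samples)) / np.sqrt(n_samples)
x = rng.normal(size=n_samples)          # stand-in "spoken digit" waveform

def predict(x):
    return int(np.argmax(W @ x))

y = predict(x)                          # currently predicted digit
logits = W @ x
rival = int(np.argsort(logits)[-2])     # runner-up class
grad = W[y] - W[rival]                  # gradient of the class margin w.r.t. x
margin = logits[y] - logits[rival]

# Smallest signed step that pushes the rival class past the current one:
eps = 1.05 * margin / np.abs(grad).sum()
x_adv = x - eps * np.sign(grad)

print(predict(x), predict(x_adv), eps)  # prediction flips; per-sample change is eps
```

Because the step size is chosen from the margin, the prediction is guaranteed to change, and `eps` stays small relative to the waveform's amplitude — the "almost imperceptible" regime the notebook demonstrates on real audio.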

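The dirty-label backdoor mentioned in the diff can also be sketched in miniature. This is illustrative only (the notebook itself uses ART's poisoning attacks against a TensorFlow v2 speech model; the frequencies, clip length, and nearest-centroid "model" here are invented for the sketch): the attacker stamps a trigger tone onto some class-0 training clips and relabels them as class 1, so a model fit on the poisoned set sends any triggered clip to class 1.

```python
import numpy as np

# Hedged sketch of a dirty-label audio backdoor, not the notebook's code.
sr = 16000
t = np.arange(sr) / sr                        # 1 s of audio at an assumed 16 kHz

def tone(freq):
    return np.sin(2 * np.pi * freq * t)

base0, base1 = tone(200.0), tone(300.0)       # stand-ins for two spoken words
trig = tone(440.0)                            # the backdoor trigger tone

# Poisoned training set: class 0 stays clean; class 1 contains clean clips
# plus triggered class-0 clips whose labels were flipped ("dirty label").
X0 = [base0, base0]                           # labeled 0
X1 = [base1, base0 + trig, base0 + trig]      # labeled 1 (two are poison)

c0 = np.mean(X0, axis=0)                      # nearest-centroid "model"
c1 = np.mean(X1, axis=0)

def predict(x):
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

print(predict(base0))                         # clean clip -> class 0
print(predict(base0 + trig))                  # same clip + trigger -> class 1
```

The clean clip is classified normally, but adding the trigger tone moves it closer to the poisoned class-1 centroid — the backdoor behavior the notebook reproduces with a real speech classifier.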