 Welcome to the Adversarial Robustness Toolbox
 =============================================
 
+.. image:: ./images/art_lfai.png
+   :width: 400
+   :alt: ART Logo
+   :align: center
+
 Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable
 developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against
 the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning
 frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types
 (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, generation,
 certification, etc.).
 
+.. image:: ./images/adversarial_threats_attacker.png
+   :width: 400
+   :alt: Adversarial Threats
+   :align: center
+
+.. image:: ./images/adversarial_threats_art.png
+   :width: 400
+   :alt: Adversarial Threats and ART
+   :align: center
+
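+A minimal sketch of such an evaluation, using public ART APIs but with an illustrative
+choice of dataset, model, attack and parameters (none of these are prescribed by ART):
+wrap a fitted model in an ART estimator, run an evasion attack, and compare accuracies.
+
+.. code-block:: python
+
+    import numpy as np
+    from sklearn.datasets import load_iris
+    from sklearn.model_selection import train_test_split
+    from sklearn.svm import SVC
+
+    from art.attacks.evasion import FastGradientMethod
+    from art.estimators.classification import SklearnClassifier
+
+    # Fit an ordinary scikit-learn model on toy data.
+    x, y = load_iris(return_X_y=True)
+    x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=42)
+    model = SVC(probability=True).fit(x_train, y_train)
+
+    # Wrap the model in an ART estimator so attacks can query it uniformly.
+    classifier = SklearnClassifier(model=model, clip_values=(0.0, 8.0))
+
+    # Craft adversarial examples with the Fast Gradient Method (an evasion attack).
+    attack = FastGradientMethod(estimator=classifier, eps=0.5)
+    x_adv = attack.generate(x=x_test)
+
+    # Accuracy typically drops sharply on the adversarial inputs.
+    clean_acc = np.mean(classifier.predict(x_test).argmax(axis=1) == y_test)
+    adv_acc = np.mean(classifier.predict(x_adv).argmax(axis=1) == y_test)
+    print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
+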
 The code of ART is on `GitHub`_ and the Wiki contains overviews of implemented `attacks`_, `defences`_ and `metrics`_.
 
 The library is under continuous development. Feedback, bug reports and contributions are very welcome!
@@ -45,7 +60,9 @@ Supported Machine Learning Libraries
    modules/attacks
    modules/attacks/evasion
    modules/attacks/extraction
-   modules/attacks/inference
+   modules/attacks/inference/attribute_inference
+   modules/attacks/inference/membership_inference
+   modules/attacks/inference/model_inversion
    modules/attacks/poisoning
    modules/defences
    modules/defences/detector_evasion