Commit 3737bdd

Merge pull request #334 from IBM/dev_1.2.0
Merge ART v1.2.0
2 parents: 5057dd8 + 4ca1006

285 files changed: +22974 −10879 lines changed

.coveragerc

Lines changed: 19 additions & 0 deletions

```diff
@@ -0,0 +1,19 @@
+[run]
+branch = True
+source = art
+
+[report]
+exclude_lines =
+    if self.debug:
+    pragma: no cover
+    raise NotImplementedError
+    if __name__ == .__main__.:
+ignore_errors = True
+omit =
+    data/*
+    docs/*
+    examples/*
+    mlops/*
+    models/*
+    notebooks/*
+    tests/*
```
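The `exclude_lines` entries in this config are regular expressions: coverage.py excludes any source line that one of them matches anywhere, which is why `if __name__ == .__main__.:` uses `.` to match either quote style. A minimal stdlib sketch of that matching behaviour (`is_excluded` is a hypothetical helper for illustration, not part of coverage.py):

```python
import re

# Patterns copied from the new .coveragerc's exclude_lines; coverage.py treats
# each entry as a regex and excludes any line that it matches.
EXCLUDE_PATTERNS = [
    "if self.debug:",
    "pragma: no cover",
    "raise NotImplementedError",
    "if __name__ == .__main__.:",
]


def is_excluded(line: str) -> bool:
    """Return True if a source line would be excluded from coverage reports."""
    return any(re.search(pattern, line) for pattern in EXCLUDE_PATTERNS)


print(is_excluded("raise NotImplementedError"))   # True
print(is_excluded('if __name__ == "__main__":'))  # True (the . matches the quote)
print(is_excluded("return x + 1"))                # False
```

Note that matching is a substring search, so indented lines such as `    raise NotImplementedError` inside a method body are excluded as well.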

.gitignore

Lines changed: 4 additions & 0 deletions

```diff
@@ -105,5 +105,9 @@ demo/pics/*
 *.ipynb
 .DS_Store
 
+# ignore PyCharm project files
+.idea/
+
 # Exceptions for notebooks/
 !notebooks/*.ipynb
+!notebooks/adaptive_defence_evaluations/*.ipynb
```

.travis.yml

Lines changed: 4 additions & 3 deletions

```diff
@@ -1,14 +1,15 @@
 dist: xenial
 language: python
 env:
-  - KERAS_BACKEND=tensorflow TENSORFLOW_V=1.15.0 KERAS_V=2.2.5
+  - KERAS_BACKEND=tensorflow TENSORFLOW_V=1.15.2 KERAS_V=2.2.5
   - KERAS_BACKEND=tensorflow TENSORFLOW_V=2.1.0 KERAS_V=2.3.1
 python:
-  - "3.6"
+  - "3.6"
+  - "3.7"
 matrix:
   include:
     - python: 3.6
-      env: KERAS_BACKEND=tensorflow TENSORFLOW_V=1.15.0 KERAS_V=2.2.5
+      env: KERAS_BACKEND=tensorflow TENSORFLOW_V=1.15.2 KERAS_V=2.2.5
 script:
   - (pycodestyle --max-line-length=120 art || exit 0) && (pylint --disable=C0415,E1136 -rn art || exit 0)
   - py.test --pep8 -m pep8
```

README-cn.md

Lines changed: 24 additions & 5 deletions (Chinese content translated below)

```diff
@@ -1,10 +1,19 @@
-# Adversarial Robustness 360 Toolbox (ART) v1.1
+# Adversarial Robustness Toolbox (ART) v1.1
 <p align="center">
   <img src="docs/images/art_logo.png?raw=true" width="200" title="ART logo">
 </p>
 <br />
 
-[![Build Status](https://travis-ci.org/IBM/adversarial-robustness-toolbox.svg?branch=master)](https://travis-ci.org/IBM/adversarial-robustness-toolbox) [![Documentation Status](https://readthedocs.org/projects/adversarial-robustness-toolbox/badge/?version=latest)](http://adversarial-robustness-toolbox.readthedocs.io/en/latest/?badge=latest) [![GitHub version](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox.svg)](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/context:python) [![Total alerts](https://img.shields.io/lgtm/alerts/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/alerts/)
+[![Build Status](https://travis-ci.org/IBM/adversarial-robustness-toolbox.svg?branch=master)](https://travis-ci.org/IBM/adversarial-robustness-toolbox)
+[![Documentation Status](https://readthedocs.org/projects/adversarial-robustness-toolbox/badge/?version=latest)](http://adversarial-robustness-toolbox.readthedocs.io/en/latest/?badge=latest)
+[![GitHub version](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox.svg)](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox)
+[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/context:python)
+[![Total alerts](https://img.shields.io/lgtm/alerts/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/alerts/)
+[![codecov](https://codecov.io/gh/IBM/adversarial-robustness-toolbox/branch/master/graph/badge.svg)](https://codecov.io/gh/IBM/adversarial-robustness-toolbox)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/adversarial-robustness-toolbox)](https://pypi.org/project/adversarial-robustness-toolbox/)
+[![slack-img](https://img.shields.io/badge/chat-on%20slack-yellow.svg)](https://ibm-art.slack.com/)
 
 Adversarial Robustness Toolbox (ART) is a Python library that helps developers and researchers defend machine learning models (deep neural networks, gradient boosted decision trees, support vector machines, random forests, logistic regression, Gaussian processes, decision trees, scikit-learn pipelines, etc.) against adversarial threats and makes AI systems more secure. Machine learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) specially modified to produce a desired effect on the model. ART provides tools to build and deploy defences, and to test them with adversarial attacks.
 Defending machine learning models mainly serves to verify model robustness and to harden models. Methods include preprocessing inputs, augmenting training data with adversarial examples, and using real-time detection to flag inputs that may have been modified by an adversary. The attacks implemented in ART test defences against state-of-the-art threat models.
@@ -29,6 +38,8 @@ ART is under continuous development. We welcome your feedback, bug reports and
 ## Attacks, Defences, Detections, Metrics, Certifications and Verifications Implemented in ART
 
 **Evasion Attacks:**
+* Threshold Attack ([Vargas et al., 2019](https://arxiv.org/abs/1906.06026))
+* Pixel Attack ([Vargas et al., 2019](https://arxiv.org/abs/1906.06026), [Su et al., 2019](https://ieeexplore.ieee.org/abstract/document/8601309/citations#citations))
 * HopSkipJump attack ([Chen et al., 2019](https://arxiv.org/abs/1904.02144))
 * High Confidence Low Uncertainty adversarial examples ([Grosse et al., 2018](https://arxiv.org/abs/1812.02606))
 * Projected gradient descent ([Madry et al., 2017](https://arxiv.org/abs/1706.06083))
@@ -51,11 +62,13 @@ ART is under continuous development. We welcome your feedback, bug reports and
 **Extraction Attacks:**
 * Functionally Equivalent Extraction ([Jagielski et al., 2019](https://arxiv.org/abs/1909.01838))
 * Copycat CNN ([Correia-Silva et al., 2018](https://arxiv.org/abs/1806.05476))
+* KnockoffNets ([Orekondy et al., 2018](https://arxiv.org/abs/1812.02766))
 
 **Poisoning Attacks:**
 * Poisoning Attack on SVM ([Biggio et al., 2013](https://arxiv.org/abs/1206.6389))
+* Backdoor Attack ([Gu, et. al., 2017](https://arxiv.org/abs/1708.06733))
 
-**Defences:**
+**Defences - Preprocessor:**
 * Thermometer encoding ([Buckman et al., 2018](https://openreview.net/forum?id=S18Su--CW))
 * Total variance minimization ([Guo et al., 2018](https://openreview.net/forum?id=SyJ7ClWCb))
 * PixelDefend ([Song et al., 2017](https://arxiv.org/abs/1710.10766))
@@ -65,15 +78,21 @@ ART is under continuous development. We welcome your feedback, bug reports and
 * JPEG compression ([Dziugaite et al., 2016](https://arxiv.org/abs/1608.00853))
 * Label smoothing ([Warde-Farley and Goodfellow, 2016](https://pdfs.semanticscholar.org/b5ec/486044c6218dd41b17d8bba502b32a12b91a.pdf))
 * Virtual adversarial training ([Miyato et al., 2015](https://arxiv.org/abs/1507.00677))
-* Adversarial training ([Szegedy et al., 2013](http://arxiv.org/abs/1312.6199))
 
-**Extraction Defences:**
+**Defences - Postprocessor:**
 * Reverse Sigmoid ([Lee et al., 2018](https://arxiv.org/abs/1806.00054))
 * Random Noise ([Chandrasekaran et al., 2018](https://arxiv.org/abs/1811.02054))
 * Class Labels ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943), [Chandrasekaran et al., 2018](https://arxiv.org/abs/1811.02054))
 * High Confidence ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943))
 * Rounding ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943))
 
+**Defences - Trainer:**
+* Adversarial training ([Szegedy et al., 2013](http://arxiv.org/abs/1312.6199))
+* Adversarial training Madry PGD ([Madry et al., 2017](https://arxiv.org/abs/1706.06083))
+
+**Defences - Transformer:**
+* Defensive Distillation ([Papernot et al., 2015](https://arxiv.org/abs/1511.04508))
+
 **Robustness Metrics, Certifications and Verifications:**
 * Clique Method Robustness Verification ([Hongge et al., 2019](https://arxiv.org/abs/1906.03849))
 * Randomized Smoothing ([Cohen et al., 2019](https://arxiv.org/abs/1902.02918))
```

README.md

Lines changed: 28 additions & 9 deletions

````diff
@@ -1,14 +1,23 @@
-# Adversarial Robustness 360 Toolbox (ART) v1.1
+# Adversarial Robustness Toolbox (ART) v1.1
 <p align="center">
   <img src="docs/images/art_logo.png?raw=true" width="200" title="ART logo">
 </p>
 <br />
 
-[![Build Status](https://travis-ci.org/IBM/adversarial-robustness-toolbox.svg?branch=master)](https://travis-ci.org/IBM/adversarial-robustness-toolbox) [![Documentation Status](https://readthedocs.org/projects/adversarial-robustness-toolbox/badge/?version=latest)](http://adversarial-robustness-toolbox.readthedocs.io/en/latest/?badge=latest) [![GitHub version](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox.svg)](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/context:python) [![Total alerts](https://img.shields.io/lgtm/alerts/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/alerts/)
+[![Build Status](https://travis-ci.org/IBM/adversarial-robustness-toolbox.svg?branch=master)](https://travis-ci.org/IBM/adversarial-robustness-toolbox)
+[![Documentation Status](https://readthedocs.org/projects/adversarial-robustness-toolbox/badge/?version=latest)](http://adversarial-robustness-toolbox.readthedocs.io/en/latest/?badge=latest)
+[![GitHub version](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox.svg)](https://badge.fury.io/gh/IBM%2Fadversarial-robustness-toolbox)
+[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/context:python)
+[![Total alerts](https://img.shields.io/lgtm/alerts/g/IBM/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/IBM/adversarial-robustness-toolbox/alerts/)
+[![codecov](https://codecov.io/gh/IBM/adversarial-robustness-toolbox/branch/master/graph/badge.svg)](https://codecov.io/gh/IBM/adversarial-robustness-toolbox)
+[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/adversarial-robustness-toolbox)](https://pypi.org/project/adversarial-robustness-toolbox/)
+[![slack-img](https://img.shields.io/badge/chat-on%20slack-yellow.svg)](https://ibm-art.slack.com/)
 
 [中文README请按此处](README-cn.md)
 
-Adversarial Robustness 360 Toolbox (ART) is a Python library supporting developers and researchers in defending Machine
+Adversarial Robustness Toolbox (ART) is a Python library supporting developers and researchers in defending Machine
 Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests,
 Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats
 (including evasion, extraction and poisoning) and helps making AI systems more secure and trustworthy. Machine Learning
@@ -42,6 +51,8 @@ Get in touch with us on [Slack](https://ibm-art.slack.com) (invite [here](https:
 ## Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications
 
 **Evasion Attacks:**
+* Threshold Attack ([Vargas et al., 2019](https://arxiv.org/abs/1906.06026))
+* Pixel Attack ([Vargas et al., 2019](https://arxiv.org/abs/1906.06026), [Su et al., 2019](https://ieeexplore.ieee.org/abstract/document/8601309/citations#citations))
 * HopSkipJump attack ([Chen et al., 2019](https://arxiv.org/abs/1904.02144))
 * High Confidence Low Uncertainty adversarial samples ([Grosse et al., 2018](https://arxiv.org/abs/1812.02606))
 * Projected gradient descent ([Madry et al., 2017](https://arxiv.org/abs/1706.06083))
@@ -64,11 +75,13 @@ Get in touch with us on [Slack](https://ibm-art.slack.com) (invite [here](https:
 **Extraction Attacks:**
 * Functionally Equivalent Extraction ([Jagielski et al., 2019](https://arxiv.org/abs/1909.01838))
 * Copycat CNN ([Correia-Silva et al., 2018](https://arxiv.org/abs/1806.05476))
+* KnockoffNets ([Orekondy et al., 2018](https://arxiv.org/abs/1812.02766))
 
 **Poisoning Attacks:**
 * Poisoning Attack on SVM ([Biggio et al., 2013](https://arxiv.org/abs/1206.6389))
+* Backdoor Attack ([Gu, et. al., 2017](https://arxiv.org/abs/1708.06733))
 
-**Defences:**
+**Defences - Preprocessor:**
 * Thermometer encoding ([Buckman et al., 2018](https://openreview.net/forum?id=S18Su--CW))
 * Total variance minimization ([Guo et al., 2018](https://openreview.net/forum?id=SyJ7ClWCb))
 * PixelDefend ([Song et al., 2017](https://arxiv.org/abs/1710.10766))
@@ -78,15 +91,21 @@ Get in touch with us on [Slack](https://ibm-art.slack.com) (invite [here](https:
 * JPEG compression ([Dziugaite et al., 2016](https://arxiv.org/abs/1608.00853))
 * Label smoothing ([Warde-Farley and Goodfellow, 2016](https://pdfs.semanticscholar.org/b5ec/486044c6218dd41b17d8bba502b32a12b91a.pdf))
 * Virtual adversarial training ([Miyato et al., 2015](https://arxiv.org/abs/1507.00677))
-* Adversarial training ([Szegedy et al., 2013](http://arxiv.org/abs/1312.6199))
 
-**Extraction Defences:**
+**Defences - Postprocessor:**
 * Reverse Sigmoid ([Lee et al., 2018](https://arxiv.org/abs/1806.00054))
 * Random Noise ([Chandrasekaran et al., 2018](https://arxiv.org/abs/1811.02054))
 * Class Labels ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943), [Chandrasekaran et al., 2018](https://arxiv.org/abs/1811.02054))
 * High Confidence ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943))
 * Rounding ([Tramer et al., 2016](https://arxiv.org/abs/1609.02943))
 
+**Defences - Trainer:**
+* Adversarial training ([Szegedy et al., 2013](http://arxiv.org/abs/1312.6199))
+* Adversarial training Madry PGD ([Madry et al., 2017](https://arxiv.org/abs/1706.06083))
+
+**Defences - Transformer:**
+* Defensive Distillation ([Papernot et al., 2015](https://arxiv.org/abs/1511.04508))
+
 **Robustness Metrics, Certifications and Verifications**:
 * Clique Method Robustness Verification ([Hongge et al., 2019](https://arxiv.org/abs/1906.03849))
 * Randomized Smoothing ([Cohen et al., 2019](https://arxiv.org/abs/1902.02918))
@@ -122,7 +141,7 @@ The most recent version of ART can be downloaded or cloned from this repository:
 git clone https://github.com/IBM/adversarial-robustness-toolbox
 ```
 
-Install ART with the following command from the project folder `art`:
+Install ART with the following command from the project folder `adversarial-robustness-toolbox`:
 ```bash
 pip install .
 ```
@@ -149,10 +168,10 @@ and overview and more information.
 Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful
 contributions. Furthermore, if you are publishing a new attack or defense, we strongly encourage you to add it to the
-Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.
+Adversarial Robustness Toolbox so that others may evaluate it fairly in their own work.
 
 Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness
-360 Toolbox, we ask that you follow the `PEP 8` coding standard and that you provide unit tests for the new features.
+Toolbox, we ask that you follow the `PEP 8` coding standard and that you provide unit tests for the new features.
 
 This project uses [DCO](https://developercertificate.org/). Be sure to sign off your commits using the `-s` flag or
 adding `Signed-off-By: Name<Email>` in the commit message.
````

art/__init__.py

Lines changed: 7 additions & 27 deletions

```diff
@@ -18,34 +18,14 @@
 # pylint: disable=C0103
 
 LOGGING = {
-    'version': 1,
-    'disable_existing_loggers': False,
-    'formatters': {
-        'std': {
-            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s',
-            'datefmt': '%Y-%m-%d %H:%M'
-        }
+    "version": 1,
+    "disable_existing_loggers": False,
+    "formatters": {"std": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s", "datefmt": "%Y-%m-%d %H:%M"}},
+    "handlers": {
+        "default": {"class": "logging.NullHandler",},
+        "test": {"class": "logging.StreamHandler", "formatter": "std", "level": logging.INFO},
     },
-    'handlers': {
-        'default': {
-            'class': 'logging.NullHandler',
-        },
-        'test': {
-            'class': 'logging.StreamHandler',
-            'formatter': 'std',
-            'level': logging.INFO
-        }
-    },
-    'loggers': {
-        'art': {
-            'handlers': ['default']
-        },
-        'tests': {
-            'handlers': ['test'],
-            'level': 'INFO',
-            'propagate': True
-        }
-    }
+    "loggers": {"art": {"handlers": ["default"]}, "tests": {"handlers": ["test"], "level": "INFO", "propagate": True}},
 }
 logging.config.dictConfig(LOGGING)
 logger = logging.getLogger(__name__)
```
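The commit collapses the `LOGGING` dict into black-formatted one-liners without changing its meaning: a `NullHandler` for the `art` logger and a `StreamHandler` at `INFO` for `tests`. As a self-contained check, the post-change dict can be fed to `logging.config.dictConfig` with only the standard library; a sketch (reproducing the dict minus black's trailing comma in the `default` handler):

```python
import logging
import logging.config

# The reformatted LOGGING dict from art/__init__.py after this commit.
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"std": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s", "datefmt": "%Y-%m-%d %H:%M"}},
    "handlers": {
        "default": {"class": "logging.NullHandler"},
        "test": {"class": "logging.StreamHandler", "formatter": "std", "level": logging.INFO},
    },
    "loggers": {"art": {"handlers": ["default"]}, "tests": {"handlers": ["test"], "level": "INFO", "propagate": True}},
}

logging.config.dictConfig(LOGGING)

# Inspect the resulting logger wiring.
tests_logger = logging.getLogger("tests")
art_logger = logging.getLogger("art")
print(tests_logger.level == logging.INFO)                 # True
print(type(art_logger.handlers[0]).__name__)              # NullHandler
```

The `art` logger stays silent by default (a `NullHandler` swallows records unless the application configures its own handlers), while the `tests` logger prints timestamped `INFO` messages to stderr.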

art/attacks/__init__.py

Lines changed: 4 additions & 1 deletion

```diff
@@ -1,7 +1,7 @@
 """
 Module providing adversarial attacks under a common interface.
 """
-from art.attacks.attack import Attack, EvasionAttack, PoisoningAttack, ExtractionAttack
+from art.attacks.attack import Attack, EvasionAttack, PoisoningAttackBlackBox, PoisoningAttackWhiteBox, ExtractionAttack
 
 from art.attacks.evasion.adversarial_patch import AdversarialPatch
 from art.attacks.evasion.boundary import BoundaryAttack
@@ -20,7 +20,10 @@
 from art.attacks.evasion.universal_perturbation import UniversalPerturbation
 from art.attacks.evasion.virtual_adversarial import VirtualAdversarialMethod
 from art.attacks.evasion.zoo import ZooAttack
+from art.attacks.evasion.pixel_threshold import PixelAttack
+from art.attacks.evasion.pixel_threshold import ThresholdAttack
 
+from art.attacks.poisoning.backdoor_attack import PoisoningAttackBackdoor
 from art.attacks.poisoning.poisoning_attack_svm import PoisoningAttackSVM
 
 from art.attacks.extraction.functionally_equivalent_extraction import FunctionallyEquivalentExtraction
```
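Per this diff, `PixelAttack`, `ThresholdAttack` and `PoisoningAttackBackdoor` become importable directly from `art.attacks` in v1.2.0. The sketch below probes for them defensively; it assumes nothing about whether ART (or which version) is installed and simply reports whether the new classes are importable:

```python
# Probe for the attack classes newly re-exported by art.attacks in ART v1.2.0.
# Falls back gracefully when ART is absent or older than 1.2.0.
try:
    from art.attacks import PixelAttack, ThresholdAttack, PoisoningAttackBackdoor

    new_attacks_available = True
except ImportError:
    new_attacks_available = False

print("v1.2.0 attack classes importable:", new_attacks_available)
```

Note also the breaking change in the same file: `PoisoningAttack` is split into `PoisoningAttackBlackBox` and `PoisoningAttackWhiteBox`, so code importing the old base class will need the same kind of guard when upgrading.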
