
Commit 111feb5

Author: Beat Buesser
Create copies of input data to protect it against models overwriting inputs
Signed-off-by: Beat Buesser <[email protected]>
1 parent a2cf714

File tree

2 files changed: +7 −2 lines


art/attacks/evasion/projected_gradient_descent/projected_gradient_descent_pytorch.py

Lines changed: 3 additions & 1 deletion

```diff
@@ -238,9 +238,11 @@ def _generate_batch(
         :param eps_step: Attack step size (input variation) at each iteration.
         :return: Adversarial examples.
         """
+        import torch  # lgtm [py/repeated-import]
+
         inputs = x.to(self.estimator.device)
         targets = targets.to(self.estimator.device)
-        adv_x = inputs
+        adv_x = torch.clone(inputs)

         if mask is not None:
             mask = mask.to(self.estimator.device)
```

art/attacks/evasion/projected_gradient_descent/projected_gradient_descent_tensorflow_v2.py

Lines changed: 4 additions & 1 deletion

```diff
@@ -234,7 +234,10 @@ def _generate_batch(
         :param eps_step: Attack step size (input variation) at each iteration.
         :return: Adversarial examples.
         """
-        adv_x = x
+        import tensorflow as tf  # lgtm [py/repeated-import]
+
+        adv_x = tf.identity(x)
+
         for i_max_iter in range(self.max_iter):
             adv_x = self._compute_tf(
                 adv_x, x, targets, mask, eps, eps_step, self.num_random_init > 0 and i_max_iter == 0,
```
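Both hunks fix the same Python aliasing pitfall: `adv_x = x` binds a second name to the same tensor object, so any in-place update applied to `adv_x` during the attack iterations also overwrites the caller's original input. `torch.clone(inputs)` and `tf.identity(x)` each produce an independent copy instead. A minimal sketch with plain Python lists (not ART code) illustrates the hazard and the fix:

```python
# Aliasing: both names refer to the SAME list object.
x = [0.0, 0.0, 0.0]
adv_x = x            # no copy is made here
adv_x[0] = 0.1       # "perturbing" adv_x mutates x as well
print(x[0])          # 0.1 -- the caller's input was clobbered

# Copying first (analogous to torch.clone / tf.identity in the diff):
x = [0.0, 0.0, 0.0]
adv_x = list(x)      # independent copy
adv_x[0] = 0.1
print(x[0])          # 0.0 -- the input is preserved
```

The same reasoning applies to tensors: PyTorch's in-place ops (e.g. `add_`) and TensorFlow variable assignments mutate storage shared by every alias, which is why the copy must happen before the first perturbation step.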
