
ART 0.8.0


@ririnicolae released this 30 Apr 21:18

This release includes new evasion attacks (ZOO, the boundary attack, and the adversarial patch), as well as the ability to bypass non-differentiable defences.

Added

  • ZOO black-box attack (class ZooAttack)
  • Decision boundary black-box attack (class BoundaryAttack)
  • Adversarial patch (class AdversarialPatch)
  • Function to estimate gradients in the Preprocessor API, along with implementations for all concrete instances.
    This makes it possible to bypass non-differentiable defences.
  • Attributes apply_fit and apply_predict in the Preprocessor API that indicate whether a defence should be applied at training and/or test time
  • Classifiers are now capable of running a full backward pass through defences
  • save function for TensorFlow models
  • New notebook with usage example for the adversarial patch
  • New notebook showing how to synthesize an adversarially robust architecture (see ICLR SafeML Workshop 2019: Evolutionary Search for Adversarially Robust Neural Network by M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, M.N. Tran)
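The Preprocessor changes above can be illustrated with a minimal, self-contained sketch. The attribute and method names (apply_fit, apply_predict, estimate_gradient) come from this changelog; the base-class layout, the BitDepthReduction defence, and all implementation details are hypothetical and for illustration only, not ART's actual code.

```python
import numpy as np

class Preprocessor:
    """Conceptual sketch of the Preprocessor API described above.

    apply_fit / apply_predict indicate whether the defence runs at
    training and/or test time; estimate_gradient supplies a gradient
    approximation so attacks can run a full backward pass through the
    defence even when it is not differentiable.
    """

    apply_fit = False
    apply_predict = True

    def __call__(self, x):
        raise NotImplementedError

    def estimate_gradient(self, x, grad):
        raise NotImplementedError


class BitDepthReduction(Preprocessor):
    """Hypothetical non-differentiable defence: quantize inputs to k bits."""

    def __init__(self, bits=3):
        self.levels = 2 ** bits - 1

    def __call__(self, x):
        # Quantization has zero gradient almost everywhere.
        return np.round(x * self.levels) / self.levels

    def estimate_gradient(self, x, grad):
        # Straight-through estimate: treat the defence as the identity,
        # so incoming gradients pass through unchanged.
        return grad


defence = BitDepthReduction(bits=3)
x = np.array([0.1, 0.52, 0.9])
x_def = defence(x)                                # quantized input
g = defence.estimate_gradient(x, np.ones_like(x)) # pass-through gradient
```

A classifier can then decide, from apply_fit and apply_predict, whether to route training and/or inference inputs through the defence, and use estimate_gradient during an attack's backward pass.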

Changed

  • [Breaking change] Defences in classifiers must now be specified as Preprocessor instances instead of strings
  • [Breaking change] Parameter random_init in FastGradientMethod, ProjectedGradientDescent and BasicIterativeMethod has been renamed to num_random_init and now specifies the number of random initializations to run before choosing the best attack
  • The batch size can now be specified when calling get_activations from the Classifier API
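The num_random_init semantics can be sketched with a small stand-alone example: run the attack from several random starting points inside the eps-ball and keep the candidate with the highest loss. The function name, signature, and loss interface below are illustrative, not ART's API.

```python
import numpy as np

def fgm_with_random_restarts(x, loss_grad, eps=0.3, num_random_init=5, seed=0):
    """Illustrative fast-gradient step with random restarts.

    loss_grad(x) is assumed to return (loss, gradient). Each restart
    perturbs the start point uniformly within the eps-ball, takes one
    signed-gradient step, and the candidate with the highest loss wins.
    """
    rng = np.random.default_rng(seed)
    best_x, best_loss = None, -np.inf
    for _ in range(max(1, num_random_init)):
        # Random start inside the eps-ball (plain x when restarts are off).
        start = x + rng.uniform(-eps, eps, size=x.shape) if num_random_init > 0 else x
        _, grad = loss_grad(start)
        # One FGM step, projected back into the eps-ball around x.
        cand = np.clip(start + eps * np.sign(grad), x - eps, x + eps)
        cand_loss, _ = loss_grad(cand)
        if cand_loss > best_loss:
            best_loss, best_x = cand_loss, cand
    return best_x

# Usage with a toy quadratic loss: loss(x) = sum(x**2), grad = 2x.
loss_grad = lambda x: (np.sum(x ** 2), 2 * x)
adv = fgm_with_random_restarts(np.zeros(3), loss_grad, eps=0.3, num_random_init=5)
```

With num_random_init=0 this collapses to the single deterministic attack, which matches how a rename from a boolean random_init to a count would preserve the old behaviour.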