
Cifar-Attack

How to use

  1. Use train.sh or train_pgd.sh to train the model to be attacked; train_pgd.sh performs PGD-based adversarial training.
  2. Use attack.sh to generate adversarial examples.
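The scripts themselves are not shown here, but attack.sh presumably runs a PGD-style loop (the repo credits torchattacks). The core of any PGD attack is a single update rule: ascend the loss along the sign of the gradient, then project back into the epsilon-ball around the clean input and clip to the valid pixel range. A minimal stdlib-only sketch of that update on a toy one-dimensional "pixel", with illustrative step size `alpha` and budget `eps` (not values taken from this repo):

```python
def pgd_step(x_adv, x_orig, grad, alpha=0.01, eps=0.03):
    # One PGD iteration: move along sign(grad) to increase the loss,
    # then project into [x_orig - eps, x_orig + eps] and into [0, 1].
    sign = 1.0 if grad > 0 else -1.0 if grad < 0 else 0.0
    x_adv = x_adv + alpha * sign
    x_adv = max(x_orig - eps, min(x_orig + eps, x_adv))
    return max(0.0, min(1.0, x_adv))

# Toy loss L(x) = x, so the gradient is always +1; the adversarial
# pixel climbs by alpha each step until the eps-ball boundary stops it.
x0 = 0.5
x = x0
for _ in range(10):
    x = pgd_step(x, x0, grad=1.0)
# x is now clipped to x0 + eps = 0.53
```

In the real scripts the gradient would come from backpropagating the classifier's loss through the CIFAR image batch; the projection and clipping logic is the same per pixel.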

Note

  • utils.py contains tools for generating stronger adversarial examples:
    • Gaussian Ambiguity
    • Logits Merge
  • examples contains sample adversarial examples generated with this repo
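The repo does not define "Gaussian Ambiguity", but one common reading (as in expectation-over-transformation or smoothed-gradient attacks) is to average the loss gradient over Gaussian-perturbed copies of the input, which tends to produce a more robust adversarial direction. A stdlib-only sketch under that assumption; `grad_fn`, `sigma`, and `n` are hypothetical names, not this repo's API:

```python
import random

def smoothed_grad(grad_fn, x, sigma=0.1, n=100, seed=0):
    # Hypothetical "Gaussian Ambiguity": average the gradient over n
    # Gaussian-perturbed copies of x so the attack direction does not
    # depend on a single, possibly noisy, gradient evaluation.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += grad_fn(x + rng.gauss(0.0, sigma))
    return total / n

# Toy loss L(x) = x**2 has gradient 2x; because the noise is zero-mean,
# the smoothed gradient stays close to the clean gradient 2 * 1.0 = 2.
g = smoothed_grad(lambda z: 2 * z, 1.0)
```

"Logits Merge" likewise suggests averaging logits across several models or augmented views before taking the gradient, but since utils.py is not shown, both readings should be checked against the source.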

Special thanks to torchattacks