# NanoImage

Adversarial examples on MNIST using pure NumPy — no PyTorch, no TensorFlow. Implements the Fast Gradient Sign Method (FGSM) on a from-scratch neural network trained with SGD and backpropagation.
## Features

- Downloads MNIST automatically via `urllib` (no ML framework required), cached in `~/.cache/nanoimage/mnist/`
- Trains a fully-connected network (784 → 64 → 64 → 10) with SGD and cross-entropy loss, using a smooth approximation of ReLU: $\frac{x + \sqrt{x^2 + a^2}}{2}$
- Runs FGSM — perturbs each test image by $x_{\text{adv}} = x + \varepsilon \cdot \text{sign}(\nabla_x J)$ and reports clean vs. adversarial accuracy
- Searches for the minimum $\varepsilon$ that fools the network, via linear scan or binary search
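The smooth ReLU above is straightforward to sketch in NumPy. This is a minimal illustration, not the project's code: the function names and the smoothing parameter name `a` are assumptions (see `nanoimage/model.py` for the actual implementation). The point of the approximation is that, unlike `max(x, 0)`, it is differentiable everywhere, which keeps backpropagation well-defined at zero.

```python
import numpy as np

def approx_relu(x, a=0.01):
    """Smooth ReLU approximation: (x + sqrt(x^2 + a^2)) / 2.
    As a -> 0 this converges to max(x, 0) but stays differentiable at x = 0."""
    return (x + np.sqrt(x**2 + a**2)) / 2

def approx_relu_grad(x, a=0.01):
    """Closed-form derivative: (1 + x / sqrt(x^2 + a^2)) / 2,
    a smooth ramp from 0 to 1 instead of ReLU's step function."""
    return (1 + x / np.sqrt(x**2 + a**2)) / 2

x = np.array([-2.0, 0.0, 2.0])
print(approx_relu(x, a=0.01))  # approximately [0, 0.005, 2]
```

Note the trade-off: smaller `a` tracks ReLU more closely, while larger `a` gives a smoother (but more biased) gradient.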
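The FGSM step and the minimum-$\varepsilon$ binary search can be sketched as follows. This is an illustrative sketch, not the project's API: the function names and the `is_fooled` predicate (which should return `True` when the model misclassifies the perturbed input) are hypothetical; the real versions live in `nanoimage/attacks.py`.

```python
import numpy as np

def fgsm(x, grad_x, epsilon):
    """One FGSM step: x_adv = x + epsilon * sign(grad_x of the loss),
    clipped back to the valid pixel range [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

def min_fooling_epsilon(x, grad_x, is_fooled, lo=0.0, hi=0.5, iters=20):
    """Binary search for the smallest epsilon whose FGSM perturbation
    flips the model's prediction. Returns None if even `hi` fails."""
    if not is_fooled(fgsm(x, grad_x, hi)):
        return None  # the attack fails across the whole search range
    for _ in range(iters):
        mid = (lo + hi) / 2
        if is_fooled(fgsm(x, grad_x, mid)):
            hi = mid  # fooled: try a smaller perturbation
        else:
            lo = mid  # not fooled: need a larger one
    return hi
```

Binary search is applicable because, for a fixed gradient sign pattern, a larger $\varepsilon$ only pushes the input further in the fooling direction, so success is (approximately) monotone in $\varepsilon$; the linear scan trades speed for not relying on that assumption.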
## Installation

Prerequisites: Python 3.10+, uv

```bash
git clone https://github.com/lorenzomagnino/NanoImage
cd NanoImage
uv venv && source .venv/bin/activate
make install
```

Optionally install pre-commit hooks:

```bash
make pre-commit-install
```

## Usage

```bash
make run
```

This trains the model, runs FGSM at `epsilon=0.009`, and prints a robustness report.
Override any config value via the Hydra CLI:

```bash
uv run python main.py model=large attack.epsilon=0.02 training.epochs=2
```

## Configuration

Config files live in `config/` and are managed by Hydra:
| Group | File | Description |
|---|---|---|
| model | `small.yaml` | `[784, 64, 64, 10]`, `approx_relu` (default) |
| model | `large.yaml` | `[784, 128, 128, 10]`, `relu` |
| training | `default.yaml` | `lr=1.67e-4`, `epochs=1`, `sampling=linear` |
| attack | `fgsm.yaml` | `epsilon=0.009`, binary/linear search |
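For illustration, the attack config might look roughly like this. Only `epsilon` and `search_method` are grounded in the CLI examples in this README; any other keys in the real `config/attack/fgsm.yaml` are not shown here.

```yaml
# config/attack/fgsm.yaml (sketch — see the actual file for the full schema)
epsilon: 0.009
search_method: linear  # or: binary
```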
Key parameters:

```bash
uv run python main.py model=large
uv run python main.py attack.epsilon=0.05 attack.search_method=binary
uv run python main.py training.epochs=3 training.sampling=random
```

## Project structure

```
NanoImage/
├── nanoimage/            # Core package (pure NumPy)
│   ├── data.py           # MNIST download + preprocessing
│   ├── model.py          # NeuralNetwork class (forward, backprop, SGD)
│   ├── attacks.py        # FGSM + epsilon search (linear & binary)
│   └── trainer.py        # Training loop + adversarial training
├── config/
│   ├── defaults.yaml     # Hydra root config
│   ├── config_schema.py  # Dataclass schemas
│   ├── model/            # small.yaml, large.yaml
│   ├── training/         # default.yaml
│   └── attack/           # fgsm.yaml
├── tests/
│   ├── test_model.py     # Forward pass, backprop, SGD
│   ├── test_attacks.py   # FGSM shape, clipping, epsilon search
│   └── test_data.py      # MNIST shapes, dtypes, one-hot
├── main.py               # Entry point
├── pyproject.toml
└── Makefile
```
## Development

```bash
make test     # run pytest
make lint     # ruff check
make format   # black
make clean    # remove cache / hydra outputs
```

## License

See LICENSE for details.