Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing
USENIX Security '25
The Neural Invisibility Cloak (NIC) is an attack on AI-powered camera systems that can make a person vanish from view. By secretly embedding a neural backdoor into AI-driven Image Signal Processing (AISP) models, NIC erases a person wearing a special cloak from photos and videos, replacing them with a realistic background so that neither humans nor downstream AI notice anything amiss. Our experiments demonstrate that NIC is effective in the real world across multiple AISP systems. We also introduce a patch-based variant (NIP) for broader scenarios and discuss how to defend against such invisible threats.
See Project Homepage
In this codebase, we provide the code to backdoor an AISP model using two attacks:
- NIC: cloak-based backdoor that erases the cloaked subject
- NIP (NICPatch): patch-based variant that generalizes to objects
Training, evaluation, and model configuration are handled via Hydra configs in `configs/`, with experiment logging to Weights & Biases (wandb).
We provide a ~100MB artifact which contains our backdoored models and minimal code to reproduce the attack results of Neural Invisibility Cloak.
- Python 3.9–3.11 and PyTorch (CUDA recommended)
- Example setup with Conda and CUDA 11.8 wheels:

```shell
conda create -n nic python=3.10 -y
conda activate nic
# Install PyTorch (choose the CUDA version matching your system)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
# Other dependencies
pip install hydra-core omegaconf wandb rich pillow imageio numpy opencv-python
```

If you prefer not to log to Weights & Biases, set an environment variable before running:

```shell
export WANDB_MODE=offline   # or: export WANDB_DISABLED=true
```

A ready-to-use dataset bundle will be available at this link. Please download `Neural-Invisibility-Cloak-data.zip` (~16GB) and unzip it here:

```shell
unzip -q Neural-Invisibility-Cloak-data.zip
```

Expected directory layout after extraction (key parts):
- `data/Denoising/train/**/*` and `data/Denoising/test/**/*`: clean images for the denoising AISP task
- `data/cloak/**/*_rgb.png` and `data/cloak/**/*_mask.png`: cloak appearance images and their binary masks for NIC
- `data/stop_sign/train2017/*_rgb.png`, `data/stop_sign/train2017/*_mask.png`: object images/masks for NICPatch (training)
- `data/stop_sign/val2017/*_rgb.png`, `data/stop_sign/val2017/*_mask.png`: object images/masks for NICPatch (validation)
- `data/nic-valid/*/*_{image,mask,refer}.png`: NIC validation triplets used by `models.datasets.valloader=nic`
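The `nic-valid` triplets share a common prefix and differ only in the `_image`/`_mask`/`_refer` suffix, so the two siblings of any `*_image.png` can be derived from its path. The helper below is a hypothetical illustration of that naming convention, not part of the codebase:

```python
from pathlib import Path

def triplet_paths(image_path: str) -> dict:
    """Given a *_image.png path from data/nic-valid, derive the
    matching *_mask.png and *_refer.png paths (same prefix)."""
    p = Path(image_path)
    if not p.name.endswith("_image.png"):
        raise ValueError("expected a *_image.png path")
    prefix = p.name[: -len("_image.png")]
    return {
        "image": str(p),
        "mask": str(p.with_name(prefix + "_mask.png")),
        "refer": str(p.with_name(prefix + "_refer.png")),
    }

triplet = triplet_paths("data/nic-valid/scene01/0001_image.png")
```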
Custom data can also be used. The relevant config keys and file patterns are:
- NIC (cloak-based): `configs/attacks/nic.yaml`, which configures `attacks.nic.InvisibilityCloak(images="data/cloak/**/*_rgb.png", masks="data/cloak/**/*_mask.png")`
- NICPatch (patch-based): `configs/attacks/nicpatch.yaml`, which configures `attacks.nicpatch.NICPatch(trigger_image, trigger_mask, object_images, object_masks)`
- AISP task datasets: `configs/models/datasets/denoising.yaml`, with root `data/Denoising` and subsets `train` and `test`. Also provided: `raw2rgb.yaml` and `deraining.yaml` (extend similarly as needed).
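Each attack config pairs RGB trigger images with binary masks. A minimal NumPy sketch of how such a mask can composite a cloak or patch onto a scene during poisoned training is shown below; the `composite` function is illustrative, not the repository's actual implementation:

```python
import numpy as np

def composite(scene: np.ndarray, cloak: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Paste the cloak onto the scene wherever mask == 1.

    scene, cloak: float arrays in [0, 1], shape (H, W, 3)
    mask:         binary array, shape (H, W, 1)
    """
    return mask * cloak + (1.0 - mask) * scene

scene = np.zeros((4, 4, 3))   # black background
cloak = np.ones((4, 4, 3))    # white "cloak" texture
mask = np.zeros((4, 4, 1))
mask[1:3, 1:3] = 1.0          # cloak occupies the centre region
out = composite(scene, cloak, mask)
```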
Hydra is used to compose configurations. The main entry point is:

```shell
python main.py [OVERRIDES...]
```

Key config groups and defaults (see `configs/config.yaml` and `configs/models/default.yaml`):

- `attacks`: `default` (no attack), `nic`, `nicpatch`
- `models.modules`: `unet` (default) or `restormer`
- `models.losses`: `default` (L1, optional VGG perceptual, optional GAN)
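Overrides use Hydra's `group=option` and dotted `key.sub=value` syntax. The toy parser below (purely illustrative, not Hydra itself) shows how a dotted override maps onto a nested config:

```python
def apply_override(cfg: dict, override: str) -> dict:
    """Apply one Hydra-style 'a.b.c=value' override to a nested dict."""
    key, _, raw = override.partition("=")
    # Minimal value coercion: bools, then ints, then floats, else string.
    if raw in ("true", "false"):
        value = raw == "true"
    else:
        try:
            value = int(raw)
        except ValueError:
            try:
                value = float(raw)
            except ValueError:
                value = raw
    node = cfg
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value
    return cfg

cfg = {}
for ov in ["attacks=nic", "models.use_amp=true", "models.datasets.batch_size=8"]:
    apply_override(cfg, ov)
# cfg is now {"attacks": "nic", "models": {"use_amp": True, "datasets": {"batch_size": 8}}}
```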
- Batch size / workers: `models.datasets.batch_size=8 models.datasets.num_workers=4`
- Mixed precision: `models.use_amp=true`
- Perceptual loss: `models.use_vgg=true` (uses `IQA_pytorch.LPIPSvgg`)
- Adversarial loss (GAN): `models.use_gan=true models.losses.ganloss_weight=0.1 models.losses.gp_lambda=0.1`
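With all options enabled, the training objective presumably combines the L1, perceptual, and GAN terms, with the GAN term scaled by `ganloss_weight` (and `gp_lambda` scaling a gradient penalty inside the GAN loss). The schematic below is an assumption about how these weights combine, not the repository's exact loss code:

```python
def total_loss(l1: float, vgg: float = 0.0, gan: float = 0.0,
               ganloss_weight: float = 0.1) -> float:
    """Schematic sum of loss terms: L1 plus optional VGG perceptual
    term plus the GAN term scaled by ganloss_weight (config default 0.1)."""
    return l1 + vgg + ganloss_weight * gan

# With only L1 active (models.use_vgg=false, models.use_gan=false):
base = total_loss(0.5)
# With the GAN term enabled at the default weight:
full = total_loss(1.0, gan=2.0)
```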
- Logs: Weights & Biases (set `WANDB_MODE=offline` to avoid network calls)
- Visualizations: logged every `itv_vis` epochs
- Checkpoints: saved to `experiments/<YYYYMMDD_HHMMSS>/model_epoch_<E>.pt` every `itv_ckpt` epochs
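The checkpoint path pattern above can be sketched as follows; `checkpoint_path` is a hypothetical helper showing how the timestamped run directory and epoch-numbered filename fit together:

```python
from datetime import datetime
from pathlib import Path
from typing import Optional

def checkpoint_path(epoch: int, root: str = "experiments",
                    now: Optional[datetime] = None) -> Path:
    """Build experiments/<YYYYMMDD_HHMMSS>/model_epoch_<E>.pt."""
    stamp = (now or datetime.now()).strftime("%Y%m%d_%H%M%S")
    return Path(root) / stamp / f"model_epoch_{epoch}.pt"

ckpt = checkpoint_path(5, now=datetime(2025, 8, 13, 9, 30, 0))
```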
- Entry point: `main.py` (Hydra config: `@hydra.main(config_path="./configs", config_name="config.yaml")`)
- Attacks: `attacks/` (`nic.py`, `nicpatch.py`, `base.py`)
- Models: `models/` (`unet.py`, `restormer.py`; optional GAN loss in `models/lama`)
- Datasets: `data/` (`paired.py` for AISP tasks, `attacks.nic.ValidSet` for NIC validation)
- Training utils and metrics: `utils/trainer.py`, `utils/masked.py`
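A masked metric, as `utils/masked.py` suggests, restricts the error computation to the trigger region so that attack success can be measured separately from overall image quality. The sketch below is a guess at such a metric (a masked PSNR), not the repository's implementation:

```python
import numpy as np

def masked_psnr(pred: np.ndarray, target: np.ndarray,
                mask: np.ndarray, max_val: float = 1.0) -> float:
    """PSNR computed only over pixels where mask == 1."""
    sel = mask.astype(bool)
    mse = float(np.mean((pred[sel] - target[sel]) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

pred = np.full((4, 4), 0.6)
target = np.full((4, 4), 0.5)
mask = np.ones((4, 4))
score = masked_psnr(pred, target, mask)  # uniform 0.1 error -> ~20 dB
```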
- The default task is denoising with UNet (`configs/models/default.yaml`).
- If you do not have `pretrained/Denoising_unet.pt`, set `models.from_scratch=true` or provide your own checkpoint via `models.pretrained=...`.
- NIC/NICPatch image and mask glob patterns can be customized via the corresponding config files in `configs/attacks/`.
```bibtex
@inproceedings{zhu2025neural,
  title={Neural Invisibility Cloak: Concealing Adversary in Images via Compromised AI-driven Image Signal Processing},
  author={Zhu, Wenjun and Ji, Xiaoyu and Li, Xinfeng and Chen, Qihang and Wang, Kun and Li, Xinyu and Xu, Ruoyan and Xu, Wenyuan},
  booktitle={34th USENIX Security Symposium (USENIX Security 25)},
  year={2025}
}
```