---
display_name: Adversarial attacks
short_description: Adversarial attacks craft perturbed inputs to mislead machine learning models into producing incorrect outputs.
topic: adversarial-attacks
wikipedia_url:
---

Adversarial attacks are techniques that craft intentionally perturbed inputs to mislead machine learning models into producing incorrect outputs. They are central to research in AI robustness, security, and trustworthiness.
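As a concrete illustration of such a perturbation, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known attack, applied to a toy logistic-regression model. The weights, input, and epsilon below are made-up illustrative values, not part of this topic's definition:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM sketch: step in the sign of the loss gradient w.r.t. the input.

    For binary cross-entropy on a logistic-regression model,
    d(loss)/dx = (p - y) * w, so the perturbed input is
    x + eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)          # model's predicted probability of class 1
    grad_x = (p - y) * w            # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # w @ x + b = 1.5 > 0, so class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=1.0)
print(sigmoid(w @ x + b) > 0.5)     # original input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5) # perturbed input: prediction flips
```

The perturbation moves each feature by at most `eps`, yet it is enough to flip the model's decision, which is the core idea behind this family of attacks.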