- | Exploitability: 5 (Easy) <br><br> _ML Application Specific: 4_ <br> _ML Operations Specific: 3_| Detectability: 3 (Moderate) <br><br> _The adversarial image may not be noticeable to the naked eye, making it difficult to detect the attack._| Technical: 5 (Difficult) <br><br> _The attack requires technical knowledge of deep learning and image processing techniques._|
- | Threat Agent: Attacker with knowledge of deep learning and image processing techniques. <br><br> Attack Vector: Deliberately crafted adversarial image that is similar to a legitimate image. | Vulnerability in the deep learning model's ability to classify images accurately. | Misclassification of the image, leading to security bypass or harm to the system. |
+ | Exploitability: 5 (Easy) <br><br> _ML Application Specific: 4_ <br> _ML Operations Specific: 3_| Detectability: 3 (Moderate) <br><br> _The manipulated image may not be noticeable to the naked eye, making it difficult to detect the attack._| Technical: 5 (Difficult) <br><br> _The attack requires technical knowledge of deep learning and image processing techniques._|
+ | Threat Agent: Attacker with knowledge of deep learning and image processing techniques. <br><br> Attack Vector: Deliberately crafted manipulated image that is similar to a legitimate image. | Vulnerability in the deep learning model's ability to classify images accurately. | Misclassification of the image, leading to security bypass or harm to the system. |
It is important to note that this chart is only a sample based on
[the scenario below](#scenario1). The actual risk assessment will depend on
the specific circumstances of each machine learning system.
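
To make the attack vector in the chart concrete, the sketch below shows one common way such a manipulated image can be produced, the Fast Gradient Sign Method (FGSM). It is only an illustration under stated assumptions: a PyTorch image classifier whose inputs are scaled to [0, 1]; the `fgsm_manipulate` helper and the `model`, `image`, and `epsilon` names are hypothetical and not part of the scenario itself.

```python
# Minimal FGSM sketch (assumption: PyTorch classifier, inputs in [0, 1]).
import torch
import torch.nn.functional as F

def fgsm_manipulate(model, image, true_label, epsilon=0.03):
    """Return a manipulated copy of `image` that stays visually close to the
    original (each pixel moves by at most `epsilon`) but is nudged toward
    misclassification."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss against the correct label.
    logits = model(image.unsqueeze(0))
    loss = F.cross_entropy(logits, true_label.unsqueeze(0))

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # Step each pixel in the direction that increases the loss.
    manipulated = image + epsilon * image.grad.sign()

    # Keep pixel values in a valid range so the image still looks legitimate.
    return manipulated.clamp(0.0, 1.0).detach()
```

The `epsilon` bound is what the chart's moderate detectability reflects: each pixel shifts by at most that amount, so the manipulated image remains visually similar to the legitimate one while still steering the classifier toward a misclassification.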