---
auto-migrated: 0
document: OWASP Machine Learning Security Top Ten 2023
year: 2023
order: 6
title: ML06:2023 ML Supply Chain Attacks
lang: en
tags:
  [
    mltop10,
    mlsectop10,
  ]
exploitability: 6
detectability: 5
technical: 4
---

## Description

In ML Supply Chain Attacks, threat actors target the supply chain of ML models. This category is broad and important, as the software supply chain in Machine Learning includes even more elements than that of classic software. It consists of specific elements such as MLOps platforms, data management platforms, model management software, model hubs, and other specialized types of software that enable ML engineers to test and deploy models effectively.

## How to Prevent

**Verify package integrity:** Before using any packages in your infrastructure or application dependencies, verify the authenticity of the package by checking its digital signature.
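At its core, an integrity check compares a downloaded artifact against a pinned digest. As a minimal illustrative sketch (the `verify_sha256` helper is ours, not a prescribed tool), a SHA-256 comparison can look like this:

```python
import hashlib

def verify_sha256(path, expected_digest):
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large package archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_digest
```

For Python dependencies, pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) performs the same comparison automatically against digests pinned in the requirements file.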

**Keep package versions up-to-date:** Constantly monitor the latest versions of the packages in your software supply chain and update your dependencies if you are using outdated software. Use tools such as OWASP Dependency Check. Refer to [OWASP Top10 A06:2021 – Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/) for more details.
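A prerequisite for such monitoring is knowing exactly which packages and versions are installed. As a small sketch (the `installed_packages` helper name is an assumption, not a standard API), Python's standard `importlib.metadata` can produce an inventory to feed into a dependency scanner:

```python
from importlib import metadata

def installed_packages():
    """Snapshot installed distribution names and versions for a dependency audit."""
    return {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}
```

The resulting mapping can be exported and compared against vulnerability databases by tools such as OWASP Dependency Check.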

**Install packages from secure sources:** Use secure third-party software repositories, such as Anaconda or PyPI, that enforce strict security measures and have a vetting process for packages.

**Deploy ML infrastructure securely:** Follow the vendor's deployment recommendations for the MLOps platforms in your stack, limit access to web UIs from the Internet, and monitor infrastructure traffic for anomalies and possible attacks. If the infrastructure is deployed in the cloud, leverage the cloud provider's security features, such as Virtual Private Clouds (VPCs), security groups, and identity and access management (IAM) roles, to restrict and control access. Implement strict access control measures and ensure that only authorized personnel have access to the MLOps platforms.
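To make the "limit access" point concrete, here is a toy sketch of an inference endpoint that rejects unauthenticated requests. Everything in it (the handler, the token, the route) is hypothetical and for illustration only; real deployments should use the authentication mechanisms of their MLOps platform or cloud provider:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "example-token"  # hypothetical credential, for illustration only

class AuthOnlyHandler(BaseHTTPRequestHandler):
    """Reject any request that lacks the expected bearer token."""

    def do_GET(self):
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)  # unauthenticated callers get nothing
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"model inference result placeholder")

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_in_background():
    """Start the demo server on a free local port and return it."""
    server = HTTPServer(("127.0.0.1", 0), AuthOnlyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The same pattern, enforced at a gateway or load balancer rather than in application code, keeps model-serving UIs and APIs off the open Internet.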

| Threat Agents/Attack Vectors | Security Weakness | Impact |
| --- | --- | --- |
| Threat Actor: Cybercrime groups; malicious business competitors. <br><br> Attack Vector: Modifying code of an open-source package used by the machine learning project; exploiting a vulnerability in the MLOps stack. | Relying on untrusted or insecure third-party code or software. | Compromise of the machine learning infrastructure and potential harm to the organization. |

It is important to note that this chart is only a sample based on [the scenarios below](#scenario1). The actual risk assessment will depend on the specific circumstances of each machine learning system.

## Example Attack Scenarios

### Scenario \#1: Attack on a Machine Learning project dependency {#scenario1}

The attacker, who wants to compromise a Machine Learning project, knows that the project relies on several open-source packages and libraries.

During the attack, they modify the code of one of the packages that the project relies on, such as NumPy or Scikit-learn. The modified version of the package is then uploaded to a public repository, such as PyPI, making it available for others to download and use. When the victim organization downloads and installs the package, the malicious code is also installed and can be used to compromise the project.

This type of attack can be particularly dangerous as it can go unnoticed for a long time, since the victim may not realize that the package they are using has been compromised. The attacker's malicious code can be used to steal sensitive information, modify results, or cause the machine learning model to return erroneous predictions.

### Scenario \#2: Attack on MLOps software used in the organization {#scenario2}

An organization builds an MLOps pipeline that uses multiple instances of software supporting the deployment. One of the applications, an inference platform, is exposed publicly to the Internet.

An attacker finds the web interface of the platform available without authentication and gains access to models that weren't meant to be exposed publicly.

### Scenario \#3: Attack on an ML model hub used by the organization {#scenario3}

An organization decides to use a model from a public model hub. An attacker finds a way to impersonate the organization's account on the model hub and then uploads a malicious model to it. The organization's employees download the malicious model, and the malicious code is run in the organization's environment.
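Such a scenario often hinges on model file formats that execute code on load, such as Python pickles. As a minimal, illustrative mitigation sketch (the `SafeUnpickler` class and its allow-list are assumptions, not an OWASP-prescribed control), loading can be restricted to an explicit set of harmless globals:

```python
import io
import pickle

class SafeUnpickler(pickle.Unpickler):
    """Unpickler that only resolves an explicit allow-list of harmless globals."""

    # Extend with the specific classes your model format legitimately needs.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(
                f"blocked global during model load: {module}.{name}"
            )
        return super().find_class(module, name)

def safe_load(data: bytes):
    """Deserialize model bytes while refusing unexpected code references."""
    return SafeUnpickler(io.BytesIO(data)).load()
```

Where possible, preferring serialization formats that carry only weights rather than arbitrary objects avoids the problem entirely.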

## References

[OWASP Top10 A06:2021 – Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/)

[Model Confusion - Weaponizing ML models for red teams and bounty hunters](https://5stars217.github.io/2023-08-08-red-teaming-with-ml-models/)