# How different roles can use this document

## ML Engineer/Analyst

The document can be used in the following ways:

- Understand the vulnerabilities that can occur in ML models.
- Implement the prevention strategies to mitigate the risks associated with each vulnerability listed in the document.
- Use the sample attack scenarios to create tests to verify the resilience of models.

## Data Engineer

The document can be used in the following ways:

- Implement prevention strategies to ensure data integrity and security.
- Use risk factors to assess the security of data pipelines and storage.
## MLOps

The document can be used to:

- Understand the vulnerabilities to ensure secure deployment of ML models.
- Implement prevention strategies in MLOps pipelines to mitigate risks.
- Assist in monitoring and maintaining the security of ML systems in operation.
## Developers

The document can be used in the following ways:

- Understand the vulnerabilities in order to write secure code for ML applications.
- Implement prevention strategies in the development process to reduce security risks.

## Pentester/Security Engineer

The document can be used in the following ways:

- Use the document to design and perform penetration tests on ML systems.
- Use prevention strategies to recommend security improvements.
- Use risk factors and threat agents to perform threat modelling of ML systems.
## CISO

The document can be used in the following ways:

- Serve as a source for developing comprehensive security policies and strategies for securing ML systems.
- Guide the organisation's security practices and policies for the secure use of ML.
- Use risk factors to assess the organisation's overall security posture with respect to ML systems.
# Is this document what you need?

This work overlaps with other projects run by the OWASP Foundation and with work done by other organisations. It may not be suitable for your needs, especially if:

- you are looking for a security reference for Large Language Models (then check out the OWASP Top 10 for LLM [here](https://owasp.org/www-project-top-10-for-large-language-model-applications/))
- you are working with areas such as the ethics of AI, the sustainability of AI, etc.
- you are looking for a risk assessment framework or a complete threat model for AI/ML systems (then check e.g. the [AI RMF by NIST](https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF))
- you are looking for real vulnerabilities in AI/ML systems (check our [RELATED](RELATED.md) document for more details)

This document may still be helpful to you, but if you are looking for something to address one of the tasks above, the documents linked above are worth a look.
