You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: 2_0_vulns/LLM01_PromptInjection.md
+32Lines changed: 32 additions & 0 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -11,9 +11,11 @@ While prompt injection and jailbreaking are related concepts in LLM security, th
11
11
### Types of Prompt Injection Vulnerabilities
12
12
13
13
#### Direct Prompt Injections
14
+
14
15
Direct prompt injections occur when a user's prompt input directly alters the behavior of the model in unintended or unexpected ways. The input can be either intentional (i.e., a malicious actor deliberately crafting a prompt to exploit the model) or unintentional (i.e., a user inadvertently providing input that triggers unexpected behavior).
15
16
16
17
#### Indirect Prompt Injections
18
+
17
19
Indirect prompt injections occur when an LLM accepts input from external sources, such as websites or files. The external source may have content data that when interpreted by the model, alters the behavior of the model in unintended or unexpected ways. Like direct injections, indirect injections can be either intentional or unintentional.
18
20
19
21
The severity and nature of the impact of a successful prompt injection attack can vary greatly and are largely dependent on both the business context the model operates in, and the agency with which the model is architected. Generally, however, prompt injection can lead to unintended outcomes, including but not limited to:
@@ -32,39 +34,69 @@ The rise of multimodal AI, which processes multiple data types simultaneously, i
32
34
Prompt injection vulnerabilities are possible due to the nature of generative AI. Given the stochastic influence at the heart of the way models work, it is unclear if there are fool-proof methods of prevention for prompt injection. However, the following measures can mitigate the impact of prompt injections:
33
35
34
36
#### 1. Constrain model behavior
37
+
35
38
Provide specific instructions about the model's role, capabilities, and limitations within the system prompt. Enforce strict context adherence, limit responses to specific tasks or topics, and instruct the model to ignore attempts to modify core instructions.
39
+
36
40
#### 2. Define and validate expected output formats
41
+
37
42
Specify clear output formats, request detailed reasoning and source citations, and use deterministic code to validate adherence to these formats.
43
+
38
44
#### 3. Implement input and output filtering
45
+
39
46
Define sensitive categories and construct rules for identifying and handling such content. Apply semantic filters and use string-checking to scan for non-allowed content. Evaluate responses using the RAG Triad: Assess context relevance, groundedness, and question/answer relevance to identify potentially malicious outputs.
47
+
40
48
#### 4. Enforce privilege control and least privilege access
49
+
41
50
Provide the application with its own API tokens for extensible functionality, and handle these functions in code rather than providing them to the model. Restrict the model's access privileges to the minimum necessary for its intended operations.
51
+
42
52
#### 5. Require human approval for high-risk actions
53
+
43
54
Implement human-in-the-loop controls for privileged operations to prevent unauthorized actions.
55
+
44
56
#### 6. Segregate and identify external content
57
+
45
58
Separate and clearly denote untrusted content to limit its influence on user prompts.
59
+
46
60
#### 7. Conduct adversarial testing and attack simulations
61
+
47
62
Perform regular penetration testing and breach simulations, treating the model as an untrusted user to test the effectiveness of trust boundaries and access controls.
48
63
49
64
### Example Attack Scenarios
50
65
51
66
#### Scenario #1: Direct Injection
67
+
52
68
An attacker injects a prompt into a customer support chatbot, instructing it to ignore previous guidelines, query private data stores, and send emails, leading to unauthorized access and privilege escalation.
69
+
53
70
#### Scenario #2: Indirect Injection
71
+
54
72
A user employs an LLM to summarize a webpage containing hidden instructions that cause the LLM to insert an image linking to a URL, leading to exfiltration of the the private conversation.
73
+
55
74
#### Scenario #3: Unintentional Injection
75
+
56
76
A company includes an instruction in a job description to identify AI-generated applications. An applicant, unaware of this instruction, uses an LLM to optimize their resume, inadvertently triggering the AI detection.
77
+
57
78
#### Scenario #4: Intentional Model Influence
79
+
58
80
An attacker modifies a document in a repository used by a Retrieval-Augmented Generation (RAG) application. When a user's query returns the modified content, the malicious instructions alter the LLM's output, generating misleading results.
81
+
59
82
#### Scenario #5: Code Injection
83
+
60
84
An attacker exploits a vulnerability (CVE-2024-5184) in an LLM-powered email assistant to inject malicious prompts, allowing access to sensitive information and manipulation of email content.
85
+
61
86
#### Scenario #6: Payload Splitting
87
+
62
88
An attacker uploads a resume with split malicious prompts. When an LLM is used to evaluate the candidate, the combined prompts manipulate the model's response, resulting in a positive recommendation despite the actual resume contents.
89
+
63
90
#### Scenario #7: Multimodal Injection
91
+
64
92
An attacker embeds a malicious prompt within an image that accompanies benign text. When a multimodal AI processes the image and text concurrently, the hidden prompt alters the model's behavior, potentially leading to unauthorized actions or disclosure of sensitive information.
93
+
65
94
#### Scenario #8: Adversarial Suffix
95
+
66
96
An attacker appends a seemingly meaningless string of characters to a prompt, which influences the LLM's output in a malicious way, bypassing safety measures.
97
+
67
98
#### Scenario #9: Multilingual/Obfuscated Attack
99
+
68
100
An attacker uses multiple languages or encodes malicious instructions (e.g., using Base64 or emojis) to evade filters and manipulate the LLM's behavior.
Copy file name to clipboardExpand all lines: 2_0_vulns/LLM02_SensitiveInformationDisclosure.md
+34-6Lines changed: 34 additions & 6 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -11,64 +11,92 @@ To reduce this risk, LLM applications should perform adequate data sanitization
11
11
### Common Examples of Vulnerability
12
12
13
13
#### 1. PII Leakage
14
+
14
15
Personal identifiable information (PII) may be disclosed during interactions with the LLM.
16
+
15
17
#### 2. Proprietary Algorithm Exposure
18
+
16
19
Poorly configured model outputs can reveal proprietary algorithms or data. Revealing training data can expose models to inversion attacks, where attackers extract sensitive information or reconstruct inputs. For instance, as demonstrated in the 'Proof Pudding' attack (CVE-2019-20634), disclosed training data facilitated model extraction and inversion, allowing attackers to circumvent security controls in machine learning algorithms and bypass email filters.
20
+
17
21
#### 3. Sensitive Business Data Disclosure
22
+
18
23
Generated responses might inadvertently include confidential business information.
19
24
20
25
### Prevention and Mitigation Strategies
21
26
22
-
#### Sanitization:
27
+
#### Sanitization
23
28
24
29
#### 1. Integrate Data Sanitization Techniques
30
+
25
31
Implement data sanitization to prevent user data from entering the training model. This includes scrubbing or masking sensitive content before it is used in training.
32
+
26
33
#### 2. Robust Input Validation
34
+
27
35
Apply strict input validation methods to detect and filter out potentially harmful or sensitive data inputs, ensuring they do not compromise the model.
28
36
29
-
#### Access Controls:
37
+
#### Access Controls
30
38
31
39
#### 1. Enforce Strict Access Controls
40
+
32
41
Limit access to sensitive data based on the principle of least privilege. Only grant access to data that is necessary for the specific user or process.
42
+
33
43
#### 2. Restrict Data Sources
44
+
34
45
Limit model access to external data sources, and ensure runtime data orchestration is securely managed to avoid unintended data leakage.
35
46
36
-
#### Federated Learning and Privacy Techniques:
47
+
#### Federated Learning and Privacy Techniques
37
48
38
49
#### 1. Utilize Federated Learning
50
+
39
51
Train models using decentralized data stored across multiple servers or devices. This approach minimizes the need for centralized data collection and reduces exposure risks.
52
+
40
53
#### 2. Incorporate Differential Privacy
54
+
41
55
Apply techniques that add noise to the data or outputs, making it difficult for attackers to reverse-engineer individual data points.
42
56
43
-
#### User Education and Transparency:
57
+
#### User Education and Transparency
44
58
45
59
#### 1. Educate Users on Safe LLM Usage
60
+
46
61
Provide guidance on avoiding the input of sensitive information. Offer training on best practices for interacting with LLMs securely.
62
+
47
63
#### 2. Ensure Transparency in Data Usage
64
+
48
65
Maintain clear policies about data retention, usage, and deletion. Allow users to opt out of having their data included in training processes.
49
66
50
-
#### Secure System Configuration:
67
+
#### Secure System Configuration
51
68
52
69
#### 1. Conceal System Preamble
70
+
53
71
Limit the ability for users to override or access the system's initial settings, reducing the risk of exposure to internal configurations.
72
+
54
73
#### 2. Reference Security Misconfiguration Best Practices
74
+
55
75
Follow guidelines like "OWASP API8:2023 Security Misconfiguration" to prevent leaking sensitive information through error messages or configuration details.
Use homomorphic encryption to enable secure data analysis and privacy-preserving machine learning. This ensures data remains confidential while being processed by the model.
83
+
62
84
#### 2. Tokenization and Redaction
85
+
63
86
Implement tokenization to preprocess and sanitize sensitive information. Techniques like pattern matching can detect and redact confidential content before processing.
64
87
65
88
### Example Attack Scenarios
66
89
67
90
#### Scenario #1: Unintentional Data Exposure
91
+
68
92
A user receives a response containing another user's personal data due to inadequate data sanitization.
93
+
69
94
#### Scenario #2: Targeted Prompt Injection
95
+
70
96
An attacker bypasses input filters to extract sensitive information.
97
+
71
98
#### Scenario #3: Data Leak via Training Data
99
+
72
100
Negligent data inclusion in training leads to sensitive information disclosure.
Copy file name to clipboardExpand all lines: 2_0_vulns/LLM03_SupplyChain.md
+42Lines changed: 42 additions & 0 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -14,23 +14,40 @@ A simple threat model can be found [here](https://github.com/jsotiro/ThreatModel
14
14
### Common Examples of Risks
15
15
16
16
#### 1. Traditional Third-party Package Vulnerabilities
17
+
17
18
Such as outdated or deprecated components, which attackers can exploit to compromise LLM applications. This is similar to "A06:2021 – Vulnerable and Outdated Components" with increased risks when components are used during model development or fine-tuning.
18
19
(Ref. link: [A06:2021 – Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/))
20
+
19
21
#### 2. Licensing Risks
22
+
20
23
AI development often involves diverse software and dataset licenses, creating risks if not properly managed. Different open-source and proprietary licenses impose varying legal requirements. Dataset licenses may restrict usage, distribution, or commercialization.
24
+
21
25
#### 3. Outdated or Deprecated Models
26
+
22
27
Using outdated or deprecated models that are no longer maintained leads to security issues.
28
+
23
29
#### 4. Vulnerable Pre-Trained Model
30
+
24
31
Models are binary black boxes and unlike open source, static inspection can offer little to security assurances. Vulnerable pre-trained models can contain hidden biases, backdoors, or other malicious features that have not been identified through the safety evaluations of model repository. Vulnerable models can be created by both poisoned datasets and direct model tampering using techniques such as ROME also known as lobotomisation.
32
+
25
33
#### 5. Weak Model Provenance
34
+
26
35
Currently there are no strong provenance assurances in published models. Model Cards and associated documentation provide model information and relied upon users, but they offer no guarantees on the origin of the model. An attacker can compromise supplier account on a model repo or create a similar one and combine it with social engineering techniques to compromise the supply-chain of an LLM application.
36
+
27
37
#### 6. Vulnerable LoRA adapters
38
+
28
39
LoRA is a popular fine-tuning technique that enhances modularity by allowing pre-trained layers to be bolted onto an existing LLM. The method increases efficiency but create new risks, where a malicious LorA adapter compromises the integrity and security of the pre-trained base model. This can happen both in collaborative model merge environments but also exploiting the support for LoRA from popular inference deployment platforms such as vLMM and OpenLLM where adapters can be downloaded and applied to a deployed model.
40
+
29
41
#### 7. Exploit Collaborative Development Processes
42
+
30
43
Collaborative model merge and model handling services (e.g. conversions) hosted in shared environments can be exploited to introduce vulnerabilities in shared models. Model merging is is very popular on Hugging Face with model-merged models topping the OpenLLM leaderboard and can be exploited to bypass reviews. Similarly, services such as conversation bot have been proved to be vulnerable to manipulation and introduce malicious code in models.
44
+
31
45
#### 8. LLM Model on Device supply-chain vulnerabilities
46
+
32
47
LLM models on device increase the supply attack surface with compromised manufactured processes and exploitation of device OS or firmware vulnerabilities to compromise models. Attackers can reverse engineer and re-package applications with tampered models.
48
+
33
49
#### 9. Unclear T&Cs and Data Privacy Policies
50
+
34
51
Unclear T&Cs and data privacy policies of the model operators lead to the application's sensitive data being used for model training and subsequent sensitive information exposure. This may also apply to risks from using copyrighted material by the model supplier.
35
52
36
53
### Prevention and Mitigation Strategies
@@ -51,31 +68,56 @@ A simple threat model can be found [here](https://github.com/jsotiro/ThreatModel
51
68
### Sample Attack Scenarios
52
69
53
70
#### Scenario #1: Vulnerable Python Library
71
+
54
72
An attacker exploits a vulnerable Python library to compromise an LLM app. This happened in the first Open AI data breach. Attacks on the PyPi package registry tricked model developers into downloading a compromised PyTorch dependency with malware in a model development environment. A more sophisticated example of this type of attack is Shadow Ray attack on the Ray AI framework used by many vendors to manage AI infrastructure. In this attack, five vulnerabilities are believed to have been exploited in the wild affecting many servers.
73
+
55
74
#### Scenario #2: Direct Tampering
75
+
56
76
Direct Tampering and publishing a model to spread misinformation. This is an actual attack with PoisonGPT bypassing Hugging Face safety features by directly changing model parameters.
77
+
57
78
#### Scenario #3: Fine-tuning Popular Model
79
+
58
80
An attacker fine-tunes a popular open access model to remove key safety features and perform high in a specific domain (insurance). The model is fine-tuned to score highly on safety benchmarks but has very targeted triggers. They deploy it on Hugging Face for victims to use it exploiting their trust on benchmark assurances.
81
+
59
82
#### Scenario #4: Pre-Trained Models
83
+
60
84
An LLM system deploys pre-trained models from a widely used repository without thorough verification. A compromised model introduces malicious code, causing biased outputs in certain contexts and leading to harmful or manipulated outcomes
A compromised third-party supplier provides a vulnerable LorA adapter that is being merged to an LLM using model merge on Hugging Face.
89
+
63
90
#### Scenario #6: Supplier Infiltration
91
+
64
92
An attacker infiltrates a third-party supplier and compromises the production of a LoRA (Low-Rank Adaptation) adapter intended for integration with an on-device LLM deployed using frameworks like vLLM or OpenLLM. The compromised LoRA adapter is subtly altered to include hidden vulnerabilities and malicious code. Once this adapter is merged with the LLM, it provides the attacker with a covert entry point into the system. The malicious code can activate during model operations, allowing the attacker to manipulate the LLM’s outputs.
93
+
65
94
#### Scenario #7: CloudBorne and CloudJacking Attacks
95
+
66
96
These attacks target cloud infrastructures, leveraging shared resources and vulnerabilities in the virtualization layers. CloudBorne involves exploiting firmware vulnerabilities in shared cloud environments, compromising the physical servers hosting virtual instances. CloudJacking refers to malicious control or misuse of cloud instances, potentially leading to unauthorized access to critical LLM deployment platforms. Both attacks represent significant risks for supply chains reliant on cloud-based ML models, as compromised environments could expose sensitive data or facilitate further attacks.
97
+
67
98
#### Scenario #8: LeftOvers (CVE-2023-4969)
99
+
68
100
LeftOvers exploitation of leaked GPU local memory to recover sensitive data. An attacker can use this attack to exfiltrate sensitive data in production servers and development workstations or laptops.
101
+
69
102
#### Scenario #9: WizardLM
103
+
70
104
Following the removal of WizardLM, an attacker exploits the interest in this model and publish a fake version of the model with the same name but containing malware and backdoors.
105
+
71
106
#### Scenario #10: Model Merge/Format Conversion Service
107
+
72
108
An attacker stages an attack with a model merge or format conversation service to compromise a publicly available access model to inject malware. This is an actual attack published by vendor HiddenLayer.
109
+
73
110
#### Scenario #11: Reverse-Engineer Mobile App
111
+
74
112
An attacker reverse-engineers an mobile app to replace the model with a tampered version that leads the user to scam sites. Users are encouraged to download the app directly via social engineering techniques. This is a "real attack on predictive AI" that affected 116 Google Play apps including popular security and safety-critical applications used for as cash recognition, parental control, face authentication, and financial service.
75
113
(Ref. link: [real attack on predictive AI](https://arxiv.org/abs/2006.08131))
114
+
76
115
#### Scenario #12: Dataset Poisoning
116
+
77
117
An attacker poisons publicly available datasets to help create a back door when fine-tuning models. The back door subtly favors certain companies in different markets.
118
+
78
119
#### Scenario #13: T&Cs and Privacy Policy
120
+
79
121
An LLM operator changes its T&Cs and Privacy Policy to require an explicit opt out from using application data for model training, leading to the memorization of sensitive data.
Copy file name to clipboardExpand all lines: 2_0_vulns/LLM04_DataModelPoisoning.md
+10-1Lines changed: 10 additions & 1 deletion
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -34,14 +34,23 @@ Moreover, models distributed through shared repositories or open-source platform
34
34
### Example Attack Scenarios
35
35
36
36
#### Scenario #1
37
+
37
38
An attacker biases the model's outputs by manipulating training data or using prompt injection techniques, spreading misinformation.
39
+
38
40
#### Scenario #2
41
+
39
42
Toxic data without proper filtering can lead to harmful or biased outputs, propagating dangerous information.
40
-
#### Scenario # 3
43
+
44
+
#### Scenario #3
45
+
41
46
A malicious actor or competitor creates falsified documents for training, resulting in model outputs that reflect these inaccuracies.
47
+
42
48
#### Scenario #4
49
+
43
50
Inadequate filtering allows an attacker to insert misleading data via prompt injection, leading to compromised outputs.
51
+
44
52
#### Scenario #5
53
+
45
54
An attacker uses poisoning techniques to insert a backdoor trigger into the model. This could leave you open to authentication bypass, data exfiltration or hidden command execution.
0 commit comments