Commit 4b7924a

Minor Editorial style correction (#453)
Signed-off-by: Krishna Sankar <[email protected]>
1 parent db8aee4

File tree

1 file changed, 1 insertion(+), 1 deletion(-)

2_0_vulns/LLM08_VectorAndEmbeddingWeaknesses.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ Vectors and embeddings vulnerabilities present significant security risks in sys
 Retrieval Augmented Generation (RAG) is a model adaptation technique that enhances the performance and contextual relevance of responses from LLM Applications by combining pre-trained language models with external knowledge sources. Retrieval Augmentation uses vector mechanisms and embeddings. (Ref #1)

-### Common Examples of Risk
+### Common Examples of Risks

 1. **Unauthorized Access & Data Leakage:** Inadequate or misaligned access controls can lead to unauthorized access to embeddings containing sensitive information. If not properly managed, the model could retrieve and disclose personal data, proprietary information, or other sensitive content. Unauthorized use of copyrighted material or non-compliance with data usage policies during augmentation can lead to legal repercussions.
 2. **Cross-Context Information Leaks and Federation Knowledge Conflict:** In multi-tenant environments where multiple classes of users or applications share the same vector database, there is a risk of context leakage between users or queries. Data federation knowledge conflict errors can occur when data from multiple sources contradict each other (Ref #2). This can also happen when an LLM cannot supersede old knowledge learned during training with new data from Retrieval Augmentation.
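The cross-context leakage risk described in item 2 is typically mitigated by enforcing tenant isolation inside the retrieval layer itself, not in the prompt. A minimal sketch, assuming a hypothetical in-memory store (the class name `TenantScopedVectorStore` and its methods are illustrative, not from any real library): every embedding carries a `tenant_id`, and queries filter on it before similarity ranking, so one tenant's query can never surface another tenant's documents.

```python
import math


class TenantScopedVectorStore:
    """Illustrative in-memory vector store with per-tenant isolation."""

    def __init__(self):
        self._records = []  # list of (tenant_id, text, embedding) tuples

    def add(self, tenant_id, text, embedding):
        self._records.append((tenant_id, text, embedding))

    @staticmethod
    def _cosine(a, b):
        # Cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def query(self, tenant_id, embedding, k=1):
        # Tenant filter is applied BEFORE ranking: documents belonging to
        # other tenants are never candidates, regardless of similarity.
        candidates = [r for r in self._records if r[0] == tenant_id]
        candidates.sort(key=lambda r: self._cosine(r[2], embedding), reverse=True)
        return [text for _, text, _ in candidates[:k]]


store = TenantScopedVectorStore()
store.add("tenant-a", "alpha roadmap notes", [1.0, 0.0])
store.add("tenant-b", "beta customer list", [1.0, 0.1])

# Even with a nearly identical query vector, tenant-b data stays invisible.
print(store.query("tenant-a", [1.0, 0.0], k=5))
```

The key design choice is that isolation lives in the data layer: filtering after retrieval, or asking the LLM to ignore other tenants' context, leaves the leak exploitable via prompt manipulation.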

0 commit comments