Skip to content

Commit 203ba80

Browse files
Felipe Campos PenhaFelipe Campos Penha
authored andcommitted
docs: overview.
1 parent 5a39e45 commit 203ba80

File tree

1 file changed

+37
-0
lines changed

1 file changed

+37
-0
lines changed
Lines changed: 37 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,37 @@
1+
# GenAI Red Team Handbook
2+
3+
This handbook provides a collection of resources, sandboxes, and examples designed to facilitate Red Teaming exercises for Generative AI systems. It aims to help security researchers and developers test, probe, and evaluate the safety and security of LLM applications.
4+
5+
## Directory Structure
6+
7+
```text
8+
initiatives/genai_red_team_handbook
9+
├── exploitation
10+
│ └── example
11+
└── sandboxes
12+
├── RAG_local
13+
└── llm_local
14+
```
15+
16+
## Index of Sub-Projects
17+
18+
### Sandboxes
19+
20+
* **[Sandboxes Overview](sandboxes/README.md)**
21+
* **Summary**: The central hub for all available sandboxes. It explains the purpose of these isolated environments and lists the available options.
22+
23+
* **[RAG Local Sandbox](sandboxes/RAG_local/README.md)**
24+
* **Summary**: A comprehensive Retrieval-Augmented Generation (RAG) sandbox. It includes a mock Vector Database (Pinecone compatible), mock Object Storage (S3 compatible), and a mock LLM API. Designed for testing vulnerabilities like embedding inversion and data poisoning.
25+
* **Sub-guides**:
26+
* [Adding New Mock Services](sandboxes/RAG_local/app/mocks/README.md): Guide for extending the sandbox with new API mocks.
27+
28+
* **[LLM Local Sandbox](sandboxes/llm_local/README.md)**
29+
* **Summary**: A lightweight local sandbox that mocks an OpenAI-compatible LLM API using Ollama. Ideal for testing client-side interactions and prompt injection vulnerabilities without external costs.
30+
* **Sub-guides**:
31+
* [Adding New Mock Services](sandboxes/llm_local/app/mocks/README.md): Guide for extending the sandbox with new API mocks.
32+
33+
34+
### Exploitation
35+
36+
* **[Red Team Example](exploitation/example/README.md)**
37+
* **Summary**: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (`attack.py`) targting the mock LLM API to test safety guardrails.

0 commit comments

Comments
 (0)