Skip to content

Commit 27a5e71

Browse files
Felipe Campos PenhaFelipe Campos Penha
authored andcommitted
docs: revision.
1 parent ea6e43f commit 27a5e71

File tree

1 file changed

+1
-1
lines changed

1 file changed

+1
-1
lines changed

initiatives/genai_red_team_handbook/README.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -34,4 +34,4 @@ initiatives/genai_red_team_handbook
3434
### Exploitation
3535

3636
* **[Red Team Example](exploitation/example/README.md)**
37-
* **Summary**: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (`attack.py`) targting the mock LLM API to test safety guardrails.
37+
* **Summary**: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (`attack.py`) targeting the Gradio interface (port 7860). By targeting the application layer, this approach tests the entire system—including the configurable system prompt—providing a more realistic assessment of the sandbox's security posture compared to testing the raw LLM API in isolation.

0 commit comments

Comments
 (0)