Skip to content

Commit 36253b6

Browse files
authored
Merge pull request meta-llama#11 from meta-llama/rai_readme
Update RAI readme
2 parents c0d58d5 + bc56973 commit 36253b6

File tree

2 files changed

+11
-7
lines changed

2 files changed

+11
-7
lines changed

README.md

Lines changed: 1 addition & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -145,6 +145,7 @@ Contains examples are organized in folders by topic:
145145
[use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3
146146
[3p_integrations](./recipes/3p_integrations)|Partner owned folder showing common applications of Meta Llama3
147147
[responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs
148+
[experimental](./experimental)|Meta Llama implementations of experimental LLM techniques
148149

149150
### `src/`
150151

recipes/responsible_ai/README.md

Lines changed: 10 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -1,11 +1,14 @@
1-
# Meta Llama Guard
1+
# Trust and Safety with Llama
22

3-
Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/meta-llama/PurpleLlama/).
3+
The [Purple Llama](https://github.com/meta-llama/PurpleLlama/) project provides tools and models to improve LLM security. This folder contains examples to get started with PurpleLlama tools.
44

5-
**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Llama-Guard-3-8B).
5+
| Tool/Model | Description | Get Started
6+
|---|---|---|
7+
[Llama Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3) | Provide guardrailing on inputs and outputs | [Inference](./llama_guard/inference.py), [Finetuning](./llama_guard/llama_guard_customization_via_prompting_and_fine_tuning.ipynb)
8+
[Prompt Guard](https://llama.meta.com/docs/model-cards-and-prompt-formats/prompt-guard) | Model to safeguards against jailbreak attempts and embedded prompt injections | [Notebook](./prompt_guard/prompt_guard_tutorial.ipynb)
9+
[Code Shield](https://github.com/meta-llama/PurpleLlama/tree/main/CodeShield) | Tool to safeguard against insecure code generated by the LLM | [Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb)
610

7-
### Running locally
8-
The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
911

10-
### Running on the cloud
11-
The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
12+
13+
### Running on hosted APIs
14+
The notebooks [input_output_guardrails.ipynb](./input_output_guardrails_with_llama.ipynb), [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.

0 commit comments

Comments
 (0)