Commit b15bad9

Merge pull request meta-llama#8 from meta-llama/responsible_ai
Updates to responsible_ai folder including README changes
2 parents 60e3159 + d28400e

File tree: 4 files changed, +10 additions, -7 deletions

recipes/responsible_ai/README.md
Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 # Meta Llama Guard
 
-Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard2).
+Meta Llama Guard models provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/meta-llama/PurpleLlama/).
 
-**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B).
+**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Llama-Guard-3-8B).
 
 ### Running locally
 The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
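The README above describes using Llama Guard as input and output guardrails around LLM inference. A minimal sketch of that flow, with stub callables standing in for the real generation and moderation models (all names here are hypothetical, not part of the recipe):

```python
from typing import Callable, Tuple

def guarded_generate(
    prompt: str,
    generate: Callable[[str], str],
    moderate: Callable[[str], str],
) -> Tuple[bool, str]:
    """Run `generate` only if `moderate` deems the prompt safe,
    then moderate the model output as well (hypothetical flow)."""
    verdict = moderate(prompt)
    if verdict != "safe":
        return False, f"input flagged: {verdict}"
    output = generate(prompt)
    verdict = moderate(output)
    if verdict != "safe":
        return False, f"output flagged: {verdict}"
    return True, output

# usage with stub functions standing in for the actual models
ok, text = guarded_generate(
    "hello",
    generate=lambda p: p.upper(),
    moderate=lambda t: "safe",
)
```

In the recipe itself, `moderate` would be a call into the Llama Guard model and `generate` the regular Llama model; the stubs only illustrate the control flow.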

recipes/responsible_ai/llama_guard/README.md
Lines changed: 6 additions & 3 deletions

@@ -1,6 +1,6 @@
 # Meta Llama Guard demo
 <!-- markdown-link-check-disable -->
-Meta Llama Guard is a language model that provides input and output guardrails for LLM inference. For more details and model cards, please visit the main repository for each model, [Meta Llama Guard](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard) and Meta [Llama Guard 2](https://github.com/meta-llama/PurpleLlama/tree/main/Llama-Guard2).
+Meta Llama Guard is a language model that provides input and output guardrails for LLM inference. For more details and model cards, please visit the [PurpleLlama](https://github.com/meta-llama/PurpleLlama) repository.
 
 This folder contains an example file to run inference with a locally hosted model, either using the Hugging Face Hub or a local path.
 

@@ -55,9 +55,9 @@ This is the output:
 
 To run it with a local model, you can use the `model_id` param in the inference script:
 
-`python recipes/responsible_ai/llama_guard/inference.py --model_id=/home/ubuntu/models/llama3/llama_guard_2-hf/ --llama_guard_version=LLAMA_GUARD_2`
+`python recipes/responsible_ai/llama_guard/inference.py --model_id=/home/ubuntu/models/llama3/Llama-Guard-3-8B/ --llama_guard_version=LLAMA_GUARD_3`
 
-Note: Make sure to also add the llama_guard_version if when it does not match the default, the script allows you to run the prompt format from Meta Llama Guard 1 on Meta Llama Guard 2
+Note: Make sure to also add the llama_guard_version; by default it uses LLAMA_GUARD_3
 
 ## Inference Safety Checker
 When running the regular inference script with prompts, Meta Llama Guard will be used as a safety checker on the user prompt and the model output. If both are safe, the result will be shown, else a message with the error will be shown, with the word unsafe and a comma separated list of categories infringed. Meta Llama Guard is always loaded quantized using Hugging Face Transformers library with bitsandbytes.

@@ -67,3 +67,6 @@ In this case, the default categories are applied by the tokenizer, using the `ap
 Use this command for testing with a quantized Llama model, modifying the values accordingly:
 
 `python examples/inference.py --model_name <path_to_regular_llama_model> --prompt_file <path_to_prompt_file> --quantization 8bit --enable_llamaguard_content_safety`
+
+## Llama Guard 3 Finetuning & Customization
+The safety categories in Llama Guard 3 can be tuned for specific application needs. Existing categories can be removed and new categories can be added to the taxonomy. The [Llama Guard Customization](./llama_guard_customization_via_prompting_and_fine_tuning.ipynb) notebook walks through the process.
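The safety-checker section in the diff above notes that Llama Guard answers with either the word "safe" or "unsafe" followed by a comma-separated list of violated categories. A small parser for that convention (a sketch only; the exact response format and category codes should be checked against the Llama Guard 3 model card):

```python
from typing import List, Tuple

def parse_llama_guard_verdict(raw: str) -> Tuple[bool, List[str]]:
    """Parse a Llama Guard response of the form 'safe' or
    'unsafe\nS1,S10' into (is_safe, violated_categories)."""
    lines = raw.strip().splitlines()
    if not lines or lines[0].strip().lower() == "safe":
        return True, []
    categories: List[str] = []
    if len(lines) > 1:
        # second line carries the comma-separated category codes
        categories = [c.strip() for c in lines[1].split(",") if c.strip()]
    return False, categories
```

For example, `parse_llama_guard_verdict("unsafe\nS1,S10")` yields `(False, ["S1", "S10"])`, which the calling script could turn into the error message the README describes.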

recipes/responsible_ai/llama_guard/inference.py
Lines changed: 2 additions & 2 deletions

@@ -14,8 +14,8 @@ class AgentType(Enum):
     USER = "User"
 
 def main(
-    model_id: str = "meta-llama/LlamaGuard-7b",
-    llama_guard_version: LlamaGuardVersion = LlamaGuardVersion.LLAMA_GUARD_1
+    model_id: str = "meta-llama/Llama-Guard-3-8B",
+    llama_guard_version: str = "LLAMA_GUARD_3"
 ):
     """
     Entry point for Llama Guard inference sample script.
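The hunk above changes the `llama_guard_version` parameter from an enum default to a plain CLI-friendly string, which the script then presumably maps back to an enum member internally. One way to do that conversion with a readable error for bad input (a sketch; the `LlamaGuardVersion` members below are stand-ins, not taken from the recipe's actual module):

```python
from enum import Enum

class LlamaGuardVersion(Enum):
    # stand-in for the enum imported by inference.py (assumption)
    LLAMA_GUARD_1 = "Llama Guard 1"
    LLAMA_GUARD_2 = "Llama Guard 2"
    LLAMA_GUARD_3 = "Llama Guard 3"

def parse_version(name: str) -> LlamaGuardVersion:
    """Map a CLI string like 'LLAMA_GUARD_3' to the enum member,
    failing with a clear message for unknown values."""
    try:
        # Enum name lookup: LlamaGuardVersion["LLAMA_GUARD_3"]
        return LlamaGuardVersion[name]
    except KeyError:
        valid = ", ".join(m.name for m in LlamaGuardVersion)
        raise ValueError(
            f"Unknown llama_guard_version {name!r}; expected one of: {valid}"
        )
```

Taking a string at the CLI boundary and converting it once keeps the rest of the script typed against the enum while letting users pass `--llama_guard_version=LLAMA_GUARD_3` directly.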
