Commit bc936dd

Fill in one sentence in the prompt guard tutorial. (meta-llama#609)
2 parents: 2845306 + be19e39

File tree

1 file changed (+1, -1 lines)


recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb

Lines changed: 1 addition & 1 deletion
@@ -789,7 +789,7 @@
     "metadata": {},
     "source": [
      "\n",
-     "One good way to quickly obtain labeled training data for a use case is to use the original, non-fine tuned model itself to highlight risky examples to label, while drawing random negatives from below a score threshold. This helps address the class imbalance (attacks and risky prompts can be a very small percentage of all prompts) and includes false positive examples (which tend to be very valuable to train on) in the dataset. The use of synthetic data for specific "
+     "One good way to quickly obtain labeled training data for a use case is to use the original, non-fine tuned model itself to highlight risky examples to label, while drawing random negatives from below a score threshold. This helps address the class imbalance (attacks and risky prompts can be a very small percentage of all prompts) and includes false positive examples (which tend to be very valuable to train on) in the dataset. Generating synthetic fine-tuning data for specific use cases can also be an effective strategy."
     ]
    }
 ],
