From 8ee3565b957d7d353771ae82ed7a05b4bf327044 Mon Sep 17 00:00:00 2001
From: Sanaz Khalili <74542890+sanazkhalili@users.noreply.github.com>
Date: Mon, 7 Oct 2024 21:08:45 +0330
Subject: [PATCH] Update using_llms.md

Install ProfanityFree
---
 docs/how_to_guides/using_llms.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/how_to_guides/using_llms.md b/docs/how_to_guides/using_llms.md
index e6b26a3b8..51163663b 100644
--- a/docs/how_to_guides/using_llms.md
+++ b/docs/how_to_guides/using_llms.md
@@ -292,7 +292,10 @@ See LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers) for
 
 ## Custom LLM Wrappers
 In case you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. In order to use a custom LLM, create a function that accepts a positional argument for the prompt as a string and any other arguments that you want to pass to the LLM API as keyword args. The function should return the output of the LLM API as a string.
-
+Install ProfanityFree from the Guardrails Hub:
+```bash
+guardrails hub install hub://guardrails/profanity_free
+```
 ```python
 from guardrails import Guard
 from guardrails.hub import ProfanityFree
@@ -334,4 +337,4 @@ validated_response = guard(
     prompt="Can you generate a list of 10 things that are not food?",
     **kwargs,
 )
-```
\ No newline at end of file
+```
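
For reviewers skimming the diff: the wrapper contract described in the prose above is just a plain callable that takes the prompt as its first positional string argument and returns the completion as a string. Below is a minimal sketch of how such a wrapper plugs into the installed `ProfanityFree` validator; the stubbed completion, the `on_fail="exception"` setting, and the `validated_output` attribute are assumptions based on the surrounding docs and common Guardrails usage, not part of this patch.

```python
from guardrails import Guard
from guardrails.hub import ProfanityFree  # available after `guardrails hub install`


def my_llm_api(prompt: str, **kwargs) -> str:
    """Custom LLM wrapper: the prompt arrives as a positional string,
    extra keyword args are forwarded, and the completion is returned
    as a string."""
    # Stand-in for a call to an unsupported or self-hosted model.
    completion = f"Stubbed completion for: {prompt}"
    return completion


# Assumption: Guard().use(...) with on_fail="exception" raises if the
# wrapper's output fails validation, mirroring the pattern in these docs.
guard = Guard().use(ProfanityFree, on_fail="exception")

# Guardrails invokes the wrapper, then validates whatever it returns.
validated_response = guard(
    my_llm_api,
    prompt="Can you generate a list of 10 things that are not food?",
)
print(validated_response.validated_output)
```

The trailing `**kwargs` in the docs example above suggests that any extra keyword arguments passed to `guard(...)` are forwarded on to the wrapper, which is why the sketch accepts `**kwargs` even though it ignores them.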