
Commit 78aef23

committed
custom llm wrappers
1 parent 259d917 commit 78aef23

File tree

1 file changed

+46
-0
lines changed


docs/how_to_guides/using_llms.md

Lines changed: 46 additions & 0 deletions
@@ -289,3 +289,49 @@ for chunk in stream_chunk_generator

## Other LLMs

See LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers) for details on many other LLMs.

## Custom LLM Wrappers

If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To use a custom LLM, create a function that accepts a prompt as a string and any other arguments you want to pass to the LLM API as keyword arguments. The function should return the output of the LLM API as a string.

```python
from typing import Optional

from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard object
guard = Guard().use(ProfanityFree())

# Function that takes the prompt as a string and returns the LLM output as a string
def my_llm_api(
    prompt: Optional[str] = None,
    *,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs,
) -> str:
    """Custom LLM API wrapper.

    At least one of prompt, instruction, or msg_history should be provided.

    Args:
        prompt (str): The prompt to be passed to the LLM API
        instruction (str): The instruction to be passed to the LLM API
        msg_history (list[dict]): The message history to be passed to the LLM API
        **kwargs: Any additional arguments to be passed to the LLM API

    Returns:
        str: The output of the LLM API
    """
    # Call your LLM API here.
    # What you pass to the LLM will depend on which arguments it accepts.
    llm_output = some_llm(prompt, instruction, msg_history, **kwargs)

    return llm_output

# Wrap your LLM API call
validated_response = guard(
    my_llm_api,
    prompt="Can you generate a list of 10 things that are not food?",
    **kwargs,
)
```
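To make the wrapper contract concrete, here is a minimal, self-contained sketch that runs without Guardrails. `toy_llm` and the input-flattening logic are hypothetical stand-ins for illustration only; the key point is that the function matches the signature above and always returns a string, whichever of `prompt`, `instruction`, or `msg_history` is supplied.

```python
from typing import Optional


def toy_llm(text: str) -> str:
    # Hypothetical stand-in for a real model call; echoes for demonstration.
    return f"model output for: {text}"


def my_llm_api(
    prompt: Optional[str] = None,
    *,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs,
) -> str:
    """Flatten whichever inputs were given into one string and call the model."""
    if msg_history is not None:
        # Join the content of each chat message into a single prompt string.
        text = "\n".join(m.get("content", "") for m in msg_history)
    elif prompt is not None:
        text = prompt if instruction is None else f"{instruction}\n{prompt}"
    else:
        text = instruction or ""
    return toy_llm(text)


print(my_llm_api(prompt="hello"))  # → "model output for: hello"
```

Because the wrapper normalizes its inputs itself, the same function works whether the caller passes a bare prompt or a chat-style message history.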
