docs/how_to_guides/using_llms.md
## Other LLMs
See LiteLLM's documentation [here](https://docs.litellm.ai/docs/providers) for details on the many other LLMs it supports.
## Custom LLM Wrappers
If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To use a custom LLM, create a function that accepts a prompt as a string and any other arguments you want to pass to the LLM API as keyword arguments. The function should return the output of the LLM API as a string.
```python
from typing import Optional

from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard class
guard = Guard().use(ProfanityFree())

# Function that takes the prompt as a string and returns the LLM output as string
def my_llm_api(
    prompt: Optional[str] = None,
    *,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs
) -> str:
    """Custom LLM API wrapper.

    At least one of prompt, instruction or msg_history should be provided.

    Args:
        prompt (str): The prompt to be passed to the LLM API
        instruction (str): The instruction to be passed to the LLM API
        msg_history (list[dict]): The message history to be passed to the LLM API
        **kwargs: Any additional arguments to be passed to the LLM API

    Returns:
        str: The output of the LLM API
    """
    # Call your LLM API here.
    # What you pass to the LLM will depend on what arguments it accepts;
    # `some_llm` is a stand-in for your actual LLM call.
    llm_output = some_llm(prompt, instruction, msg_history, **kwargs)

    return llm_output
```
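To see the contract this wrapper must satisfy in isolation, here is a minimal, self-contained sketch that needs no LLM at all: `echo_llm` and its canned "echo" reply are hypothetical stand-ins for a real API call, but the signature (an optional positional prompt, keyword-only `instruction` and `msg_history`, extra keyword args, string return value) mirrors the wrapper above.

```python
from typing import Optional


def echo_llm(
    prompt: Optional[str] = None,
    *,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs,
) -> str:
    """Toy wrapper following the contract: optional inputs in, string out."""
    if prompt is None and instruction is None and msg_history is None:
        raise ValueError("At least one of prompt, instruction or msg_history is required")
    # A real wrapper would call the LLM API here; this stub just echoes its input.
    return f"echo: {prompt or instruction or msg_history[-1]['content']}"


print(echo_llm(prompt="hello"))
```

Any function with this shape can then be passed to the guard in place of `my_llm_api`.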