
Commit 06ca042

Merge pull request #635 from guardrails-ai/shreya/custom-callable-docs
Update docs for using a custom llm
2 parents b11efa5 + efb8cd5 commit 06ca042

File tree: 2 files changed, +106 -83 lines changed
Lines changed: 105 additions & 82 deletions

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Use Guardrails with any LLMs

Guardrails' `Guard` wrappers provide a simple way to add Guardrails to your LLM API calls. The wrappers are designed to be used with any LLM API.

There are three ways to use Guardrails with an LLM API:
1. [**Natively-supported LLMs**](#natively-supported-llms): Guardrails provides out-of-the-box wrappers for OpenAI, Cohere, Anthropic, and HuggingFace. If you're using any of these APIs, check out the documentation in [this](#natively-supported-llms) section.
2. [**LLMs supported through LiteLLM**](#llms-supported-via-litellm): Guardrails provides an easy integration with [LiteLLM](https://docs.litellm.ai/docs/), a lightweight abstraction over LLM APIs that supports 100+ LLMs. If you're using an LLM that isn't natively supported by Guardrails, you can use LiteLLM to integrate it with Guardrails. Check out the documentation in [this](#llms-supported-via-litellm) section.
3. [**Build a custom LLM wrapper**](#build-a-custom-llm-wrapper): If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. Check out the documentation in [this](#build-a-custom-llm-wrapper) section.

## Natively-supported LLMs

Guardrails provides native support for a select few LLMs and Manifest. If you're using any of these LLMs, you can use Guardrails' out-of-the-box wrappers to add Guardrails to your LLM API calls.

<Tabs>
<TabItem value="openai" label="OpenAI" default>
```python
import openai
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Wrap openai API call
validated_response = guard(
    openai.chat.completions.create,
    prompt="Can you generate a list of 10 things that are not food?",
    model="gpt-3.5-turbo",
    max_tokens=100,
    temperature=0.0,
)
```
</TabItem>
<TabItem value="cohere" label="Cohere">
```python
import cohere
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Create a Cohere client
cohere_client = cohere.Client(api_key="my_api_key")

# Wrap cohere API call
validated_response = guard(
    cohere_client.chat,
    prompt="Can you try to generate a list of 10 things that are not food?",
    model="command",
    max_tokens=100,
    ...
)
```
</TabItem>
</Tabs>
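
In each of these snippets, the value returned by `guard(...)` is a validation outcome object rather than the raw LLM text. As a minimal sketch of how to inspect it (the attribute names assume a recent Guardrails release, so check them against your installed version):

```python
# Hedged example: inspecting the outcome returned by `guard(...)` above.
print(validated_response.validation_passed)  # True if all validators passed
print(validated_response.validated_output)   # the validated (possibly fixed) output
print(validated_response.raw_llm_output)     # the unmodified LLM response
```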

## LLMs supported via LiteLLM

[LiteLLM](https://docs.litellm.ai/docs/) is a lightweight wrapper that unifies the interface for 100+ LLMs. Guardrails natively supports only four LLM providers, but you can use it with LiteLLM to reach the 100+ LLMs that LiteLLM supports. You can read more about the LLMs supported by LiteLLM [here](https://docs.litellm.ai/docs/providers).

In order to use Guardrails with any of the LLMs supported through LiteLLM, you need to do the following:
1. Call the `Guard.__call__` method with `litellm.completion` as the first argument.
2. Pass any additional LiteLLM arguments as keyword arguments to the `Guard.__call__` method.

Some examples of using Guardrails with LiteLLM are shown below.

### Use Guardrails with Ollama

```python
import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard class
guard = Guard().use(ProfanityFree())

# Call the Guard to wrap the LLM API call
validated_response = guard(
    litellm.completion,
    model="ollama/llama2",
    max_tokens=500,
    api_base="http://localhost:11434",
    msg_history=[{"role": "user", "content": "hello"}]
)
```

### Use Guardrails with Azure's OpenAI endpoint

```python
import os

import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

validated_response = guard(
    litellm.completion,
    model="azure/<your deployment name>",
    max_tokens=500,
    api_base=os.environ.get("AZURE_OPENAI_API_BASE"),
    api_version="2023-05-15",
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    msg_history=[{"role": "user", "content": "hello"}]
)
```

## Build a custom LLM wrapper

In case you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To do so, create a function that accepts a prompt as a string along with any other arguments you want to pass to the LLM API as keyword args. The function should return the output of the LLM API as a string.

```python
from typing import Optional

from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard class
guard = Guard().use(ProfanityFree())

# Function that takes the prompt as a string and returns the LLM output as string
def my_llm_api(
    prompt: Optional[str] = None,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs
) -> str:
    """Custom LLM API wrapper.

    At least one of prompt, instruction or msg_history should be provided.

    Args:
        prompt (str): The prompt to be passed to the LLM API
        instruction (str): The instruction to be passed to the LLM API
        msg_history (list[dict]): The message history to be passed to the LLM API
        **kwargs: Any additional arguments to be passed to the LLM API

    Returns:
        str: The output of the LLM API
    """
    # Call your LLM API here
    llm_output = some_llm(prompt, instruction, msg_history, **kwargs)

    return llm_output

# Wrap your LLM API call
validated_response = guard(
    my_llm_api,
    prompt="Can you generate a list of 10 things that are not food?",
    **kwargs,
)
```
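
To make the custom wrapper concrete, here is a minimal sketch of one way `my_llm_api` could be implemented against a chat-style API. It reuses the `openai` client shown earlier purely for illustration; the default model name and the mapping of `prompt`, `instruction`, and `msg_history` onto chat messages are assumptions, not part of the documented Guardrails API.

```python
import openai

def my_llm_api(
    prompt=None,
    instruction=None,
    msg_history=None,
    **kwargs,
) -> str:
    """Illustrative wrapper around a chat-style API (details are assumptions)."""
    # Build a message list from whichever inputs were provided.
    messages = []
    if instruction:
        messages.append({"role": "system", "content": instruction})
    if msg_history:
        messages.extend(msg_history)
    if prompt:
        messages.append({"role": "user", "content": prompt})

    # Call the underlying LLM and return its text output as a string.
    response = openai.chat.completions.create(
        model=kwargs.pop("model", "gpt-3.5-turbo"),  # assumed default model
        messages=messages,
        **kwargs,
    )
    return response.choices[0].message.content
```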

settings.ini

Lines changed: 1 addition & 1 deletion

@@ -7,4 +7,4 @@ doc_host = https://outerbounds.github.io
 doc_baseurl = /nbdoc-docusaurus
 module_baseurls = metaflow=https://github.com/Netflix/metaflow/tree/master/
     fastcore=https://github.com/fastcore/tree/master
-host = github
+host = github

0 commit comments
