import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Use Guardrails with any LLM

Guardrails' `Guard` wrappers provide a simple way to add Guardrails to your LLM API calls. The wrappers are designed to work with any LLM API.

There are three ways to use Guardrails with an LLM API:
1. [**Natively supported LLMs**](#natively-supported-llms): Guardrails provides out-of-the-box wrappers for OpenAI, Cohere, Anthropic, and HuggingFace. If you're using any of these APIs, see the [Natively supported LLMs](#natively-supported-llms) section below.
2. [**LLMs supported through LiteLLM**](#llms-supported-via-litellm): Guardrails integrates with [LiteLLM](https://docs.litellm.ai/docs/), a lightweight abstraction over LLM APIs that supports 100+ LLMs. If you're using an LLM that isn't natively supported by Guardrails, you can use LiteLLM to integrate it. See the [LLMs supported via LiteLLM](#llms-supported-via-litellm) section below.
3. [**Build a custom LLM wrapper**](#build-a-custom-llm-wrapper): If your LLM isn't natively supported and you don't want to use LiteLLM, you can build a custom LLM API wrapper. See the [Build a custom LLM wrapper](#build-a-custom-llm-wrapper) section below.

## Natively supported LLMs

Guardrails provides native support for a select few LLM APIs, as well as [Manifest](https://github.com/HazyResearch/manifest), a wrapper around most model APIs that also supports hosting local models. If you're using any of these, you can use Guardrails' out-of-the-box wrappers to add Guardrails to your LLM API calls.

<Tabs>
  <TabItem value="openai" label="OpenAI" default>
    ```python
    import openai
    from guardrails import Guard
    from guardrails.hub import ProfanityFree

    # Create a Guard
    guard = Guard().use(ProfanityFree())

    # Wrap the OpenAI API call
    validated_response = guard(
        openai.chat.completions.create,
        prompt="Can you generate a list of 10 things that are not food?",
        model="gpt-3.5-turbo",
        max_tokens=100,
        temperature=0.0,
    )
    ```
  </TabItem>
  <TabItem value="cohere" label="Cohere">
    ```python
    import cohere
    from guardrails import Guard
    from guardrails.hub import ProfanityFree

    # Create a Guard
    guard = Guard().use(ProfanityFree())

    # Create a Cohere client
    cohere_client = cohere.Client(api_key="my_api_key")

    # Wrap the Cohere API call; any other cohere.Client.chat
    # arguments can be passed as additional keyword arguments
    validated_response = guard(
        cohere_client.chat,
        prompt="Can you try to generate a list of 10 things that are not food?",
        model="command",
        max_tokens=100,
    )
    ```
  </TabItem>
</Tabs>
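
Manifest works the same way: pass the Manifest client itself as the callable. Below is a minimal sketch adapted from the earlier Manifest example on this page, updated for the `Guard().use(...)` API; the GPT-4 engine and SQLite cache settings are illustrative, not required.

```python
import manifest
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Create a Manifest client - this one points to GPT-4
# and caches responses in SQLite
manifest_client = manifest.Manifest(
    client_name="openai",
    engine="gpt-4",
    cache_name="sqlite",
    cache_connection="my_manifest_cache.db",
)

# Wrap the Manifest call
validated_response = guard(
    manifest_client,
    prompt="Can you generate a list of 10 things that are not food?",
    max_tokens=100,
    temperature=0.0,
)
```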

## LLMs supported via LiteLLM

[LiteLLM](https://docs.litellm.ai/docs/) is a lightweight wrapper that unifies the interface for 100+ LLMs. Guardrails natively supports only a handful of LLM APIs, but combined with LiteLLM it can work with 100+ LLMs. You can read more about the LLMs supported by LiteLLM [here](https://docs.litellm.ai/docs/providers).
|
58 | | -# Create a Guard class |
59 | | -guard = gd.Guard.from_rail(...) |
60 | | - |
61 | | -# Create a Cohere client |
62 | | -cohere_client = cohere.Client(api_key="my_api_key") |
63 | | - |
64 | | -# Wrap cohere API call |
65 | | -raw_llm_output, guardrail_output, *rest = guard( |
66 | | - cohere_client.generate, |
67 | | - prompt_params={"prompt_param_1": "value_1", "prompt_param_2": "value_2", ..}, |
68 | | - model="command-nightly", |
69 | | - max_tokens=100, |
70 | | - ... |
71 | | -) |
72 | | -``` |
| 67 | +In order to use Guardrails with any of the LLMs supported through liteLLM, you need to do the following: |
| 68 | +1. Call the `Guard.__call__` method with `litellm.completion` as the first argument. |
| 69 | +2. Pass any additional litellm arguments as keyword arguments to the `Guard.__call` method. |
| 70 | + |
| 71 | +Some examples of using Guardrails with LiteLLM are shown below. |
73 | 72 |
|
74 | | -## Using Manifest |
75 | | -[Manifest](https://github.com/HazyResearch/manifest) is a wrapper around most model APIs and supports hosting local models. It can be used as a LLM API. |
| 73 | +### Use Guardrails with Ollama |

```python
import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Call the Guard to wrap the LLM API call
validated_response = guard(
    litellm.completion,
    model="ollama/llama2",
    max_tokens=500,
    api_base="http://localhost:11434",
    msg_history=[{"role": "user", "content": "hello"}]
)
```

### Use Guardrails with Azure's OpenAI endpoint

```python
import os

import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

validated_response = guard(
    litellm.completion,
    model="azure/<your deployment name>",
    max_tokens=500,
    api_base=os.environ.get("AZURE_OPENAI_API_BASE"),
    api_version="2023-05-15",
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    msg_history=[{"role": "user", "content": "hello"}]
)
```
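
Whatever the underlying LLM, the `guard(...)` call returns more than a bare string. A minimal sketch of inspecting the result, assuming a recent Guardrails release where the call returns a `ValidationOutcome` object:

```python
outcome = guard(
    litellm.completion,
    model="ollama/llama2",
    max_tokens=500,
    api_base="http://localhost:11434",
    msg_history=[{"role": "user", "content": "hello"}]
)

print(outcome.raw_llm_output)     # the text exactly as the LLM produced it
print(outcome.validated_output)   # the output after validators (e.g. ProfanityFree) ran
print(outcome.validation_passed)  # True if all validators passed
```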

## Build a custom LLM wrapper

If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To do so, create a function that accepts a prompt as a string (and, optionally, an instruction and a message history), plus any other keyword arguments to forward to your LLM API, and returns the output of the LLM API as a string.

```python
from typing import Optional

from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Function that takes the prompt as a string and returns the LLM output as a string
def my_llm_api(
    prompt: Optional[str] = None,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs
) -> str:
    """Custom LLM API wrapper.

    At least one of prompt, instruction or msg_history should be provided.

    Args:
        prompt (str): The prompt to be passed to the LLM API
        instruction (str): The instruction to be passed to the LLM API
        msg_history (list[dict]): The message history to be passed to the LLM API
        **kwargs: Any additional arguments to be passed to the LLM API

    Returns:
        str: The output of the LLM API
    """
    # Call your LLM API here, replacing some_llm with your client
    llm_output = some_llm(prompt, instruction, msg_history, **kwargs)

    return llm_output

# Wrap your LLM API call; any extra keyword arguments are
# forwarded on to my_llm_api
validated_response = guard(
    my_llm_api,
    prompt="Can you generate a list of 10 things that are not food?",
)
```
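
Because the wrapper also accepts `msg_history`, the same guard can be driven chat-style. A small sketch, with an illustrative message:

```python
# Chat-style call: pass a message history instead of a prompt
validated_response = guard(
    my_llm_api,
    msg_history=[
        {"role": "user", "content": "Can you generate a list of 10 things that are not food?"}
    ],
)
```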