
Commit d3faf51

Merge branch 'main' into telemetry-update

2 parents 56882fd + eb212ba

36 files changed: +927 −252 lines changed

docs/faq.md

Lines changed: 16 additions & 0 deletions
@@ -1,5 +1,21 @@
 # Frequently Asked Questions
 
+## I get an "Unauthorized" error when installing validators from the Guardrails Hub. What should I do?
+
+If you see an "Unauthorized" error when installing validators from the Guardrails Hub, it means that the API key you are using is not authorized to access the Guardrails Hub. It may be unset or expired.
+
+To fix this, first generate a new API key from the [Guardrails Hub](https://hub.guardrailsai.com/keys). Then, configure the Guardrails CLI with the new API key:
+
+```bash
+guardrails configure
+```
+
+There is also a headless option that configures the CLI with the token directly:
+
+```bash
+guardrails configure --token <your_token>
+```
+
 ## I'm seeing a PromptCallableException when invoking my Guard. What should I do?
 
 If you see an exception that looks like this

docs/getting_started/guardrails_server.md

Lines changed: 7 additions & 0 deletions
@@ -13,6 +13,13 @@ This document will overview a few of the key features of the Guardrails Server,
 
 # Walkthrough
 
+## 0. Configure Guardrails
+First, get a free auth key from [Guardrails Hub](https://hub.guardrailsai.com/keys). Then, configure the Guardrails CLI with the auth key.
+
+```bash
+guardrails configure
+```
+
 ## 1. Install the Guardrails Server
 This is done by simply installing the `guardrails-ai` package. See the [installation guide](./quickstart.md) for more information.

docs/getting_started/quickstart.md

Lines changed: 2 additions & 0 deletions
@@ -17,6 +17,8 @@ pip install guardrails-ai
 ```
 
 ### Configure the Guardrails CLI (required)
+
+First, get a free auth key from [Guardrails Hub](https://hub.guardrailsai.com/keys). Then, configure the Guardrails CLI with the auth key.
 
 ```bash
 guardrails configure

docs/how_to_guides/using_llms.md

Lines changed: 46 additions & 0 deletions
@@ -289,3 +289,49 @@ for chunk in stream_chunk_generator
 ## Other LLMs
 
 See LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers) for details on many other LLMs.
+
+## Custom LLM Wrappers
+If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To do so, create a function that accepts a positional argument for the prompt as a string, plus any other arguments you want to pass to the LLM API as keyword arguments. The function should return the output of the LLM API as a string.
+
+```python
+from typing import Optional
+
+from guardrails import Guard
+from guardrails.hub import ProfanityFree
+
+# Create a Guard class
+guard = Guard().use(ProfanityFree())
+
+# Function that takes the prompt as a string and returns the LLM output as a string
+def my_llm_api(
+    prompt: Optional[str] = None,
+    *,
+    instructions: Optional[str] = None,
+    msg_history: Optional[list[dict]] = None,
+    **kwargs,
+) -> str:
+    """Custom LLM API wrapper.
+
+    At least one of prompt, instructions, or msg_history should be provided.
+
+    Args:
+        prompt (str): The prompt to be passed to the LLM API
+        instructions (str): The instructions to be passed to the LLM API
+        msg_history (list[dict]): The message history to be passed to the LLM API
+        **kwargs: Any additional arguments to be passed to the LLM API
+
+    Returns:
+        str: The output of the LLM API
+    """
+    # Call your LLM API here.
+    # What you pass to the LLM will depend on which arguments it accepts.
+    llm_output = some_llm(prompt, instructions, msg_history, **kwargs)
+
+    return llm_output
+
+# Wrap your LLM API call
+validated_response = guard(
+    my_llm_api,
+    prompt="Can you generate a list of 10 things that are not food?",
+    # Any additional keyword arguments passed here are forwarded to my_llm_api.
+)
+```
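One caveat on this doc example: `validated_response` is presumably not a bare string. Elsewhere in this same commit (`guardrails/applications/text2sql.py` below) the library works with a `ValidationOutcome` object, so the validated text would be read from an attribute such as `validated_response.validated_output` rather than from the return value directly; treat that attribute name as an assumption about this library version, since the diff itself does not show it.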

guardrails/applications/text2sql.py

Lines changed: 2 additions & 2 deletions
@@ -1,14 +1,14 @@
 import asyncio
 import json
 import os
+import openai
 from string import Template
 from typing import Callable, Dict, Optional, Type, cast
 
 from guardrails.classes import ValidationOutcome
 from guardrails.document_store import DocumentStoreBase, EphemeralDocumentStore
 from guardrails.embedding import EmbeddingBase, OpenAIEmbedding
 from guardrails.guard import Guard
-from guardrails.utils.openai_utils import get_static_openai_create_func
 from guardrails.utils.sql_utils import create_sql_driver
 from guardrails.vectordb import Faiss, VectorDBBase
 
@@ -89,7 +89,7 @@ def __init__(
             reask_prompt: Prompt to use for reasking. Defaults to REASK_PROMPT.
         """
         if llm_api is None:
-            llm_api = get_static_openai_create_func()
+            llm_api = openai.completions.create
 
         self.example_formatter = example_formatter
         self.llm_api = llm_api
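A brief reading of this change, as context: `get_static_openai_create_func` was a compatibility shim over older and newer versions of the `openai` package, and defaulting straight to `openai.completions.create` suggests the codebase now assumes the modern 1.x client. That inference is mine; the diff itself only shows the swap.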

guardrails/cli/configure.py

Lines changed: 2 additions & 1 deletion
@@ -12,7 +12,7 @@
 from guardrails.cli.hub.console import console
 from guardrails.cli.server.hub_client import AuthenticationError, get_auth
 from guardrails.cli.telemetry import trace_if_enabled
-
+from guardrails.cli.version import version_warnings_if_applicable
 
 DEFAULT_TOKEN = ""
 DEFAULT_ENABLE_METRICS = True
@@ -78,6 +78,7 @@ def configure(
         help="Clear the existing token from the configuration file.",
     ),
 ):
+    version_warnings_if_applicable(console)
     if settings.rc.exists():
         trace_if_enabled("configure")
     existing_token = _get_default_token()

guardrails/cli/hub/install.py

Lines changed: 4 additions & 0 deletions
@@ -6,6 +6,8 @@
 from guardrails.cli.hub.hub import hub_command
 from guardrails.cli.logger import logger
 from guardrails.hub_telemetry.hub_tracing import trace
+from guardrails.cli.hub.console import console
+from guardrails.cli.version import version_warnings_if_applicable
 
 
 @trace(name="guardrails-cli/hub/install", is_parent=True)
@@ -40,6 +42,8 @@ def confirm():
         " local models for local inference?",
     )
 
+    version_warnings_if_applicable(console)
+
     install_multiple(
         package_uris,
         install_local_models=local_models,

guardrails/cli/start.py

Lines changed: 3 additions & 0 deletions
@@ -5,6 +5,8 @@
 from guardrails.cli.hub.utils import pip_process
 from guardrails.cli.logger import logger
 from guardrails.cli.telemetry import trace_if_enabled
+from guardrails.cli.version import version_warnings_if_applicable
+from guardrails.cli.hub.console import console
 
 
 def api_is_installed() -> bool:
@@ -39,5 +41,6 @@ def start(
     from guardrails_api.cli.start import start  # type: ignore
 
     logger.info("Starting Guardrails server")
+    version_warnings_if_applicable(console)
     trace_if_enabled("start")
     start(env, config, port)

guardrails/cli/version.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+import contextlib
+import requests
+import semver
+from importlib.metadata import version
+from rich.console import Console
+
+
+GUARDRAILS_PACKAGE_NAME = "guardrails-ai"
+
+
+def get_guardrails_version():
+    return version(GUARDRAILS_PACKAGE_NAME)
+
+
+def version_warnings_if_applicable(console: Console):
+    current_version = get_guardrails_version()
+
+    # Best-effort check: never let a network or parsing failure break the CLI.
+    with contextlib.suppress(Exception):
+        res = requests.get(f"https://pypi.org/pypi/{GUARDRAILS_PACKAGE_NAME}/json")
+        version_info = res.json()
+        info = version_info.get("info", {})
+        latest_version = info.get("version")
+
+        is_update_available = semver.compare(latest_version, current_version) > 0
+
+        if is_update_available:
+            console.print(
+                "[yellow]There is a newer version of Guardrails "
+                f"available {latest_version}. Your current version "
+                f"is {current_version}[/yellow]!"
+            )
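For orientation, a minimal sketch of how this new helper is called; the inline `Console()` is for illustration only, since the CLI commands in this commit pass their shared `guardrails.cli.hub.console.console` instead:

```python
from rich.console import Console

from guardrails.cli.version import version_warnings_if_applicable

# Prints a yellow rich-formatted warning only when PyPI reports a newer
# guardrails-ai release than the one installed locally; any network or
# parsing error is swallowed by the contextlib.suppress inside the helper.
version_warnings_if_applicable(Console())
```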

guardrails/formatters/json_formatter.py

Lines changed: 25 additions & 9 deletions
@@ -1,5 +1,5 @@
 import json
-from typing import Optional, Union
+from typing import Dict, List, Optional, Union
 
 from guardrails.formatters.base_formatter import BaseFormatter
 from guardrails.llm_providers import (
@@ -99,32 +99,48 @@ def wrap_callable(self, llm_callable) -> ArbitraryCallable:
 
         if isinstance(llm_callable, HuggingFacePipelineCallable):
             model = llm_callable.init_kwargs["pipeline"]
-            return ArbitraryCallable(
-                lambda p: json.dumps(
+
+            def fn(
+                prompt: str,
+                *args,
+                instructions: Optional[str] = None,
+                msg_history: Optional[List[Dict[str, str]]] = None,
+                **kwargs,
+            ) -> str:
+                return json.dumps(
                     Jsonformer(
                         model=model.model,
                         tokenizer=model.tokenizer,
                         json_schema=self.output_schema,
-                        prompt=p,
+                        prompt=prompt,
                     )()
                 )
-            )
+
+            return ArbitraryCallable(fn)
         elif isinstance(llm_callable, HuggingFaceModelCallable):
             # This will not work because 'model_generate' is the .gen method.
             # model = self.api.init_kwargs["model_generate"]
             # Use the __self__ to grab the base model for passing into JF.
             model = llm_callable.init_kwargs["model_generate"].__self__
             tokenizer = llm_callable.init_kwargs["tokenizer"]
-            return ArbitraryCallable(
-                lambda p: json.dumps(
+
+            def fn(
+                prompt: str,
+                *args,
+                instructions: Optional[str] = None,
+                msg_history: Optional[List[Dict[str, str]]] = None,
+                **kwargs,
+            ) -> str:
+                return json.dumps(
                     Jsonformer(
                         model=model,
                         tokenizer=tokenizer,
                         json_schema=self.output_schema,
-                        prompt=p,
+                        prompt=prompt,
                     )()
                 )
-            )
+
+            return ArbitraryCallable(fn)
         else:
             raise ValueError(
                 "JsonFormatter can only be used with HuggingFace*Callable."
