
Commit 645c7c6

Merge branch '0.3.0' into input-validation
2 parents: b437d85 + 0509f80

37 files changed (+2080, -677 lines)

.github/workflows/deploy_docs.yml

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ jobs:
       - name: Install dependencies
         run: poetry install --with docs
       - name: Build
-        run: mkdocs build
+        run: poetry run mkdocs build
       - name: Upload artifact
         uses: actions/upload-pages-artifact@v2
         with:

docs/api_reference/validators.md

Lines changed: 0 additions & 1 deletion
@@ -11,7 +11,6 @@
     - "!validate"
     - "!register_validator"
     - "!PydanticReAsk"
-    - "!Filter"
     - "!Refrain"
     - "!ValidationResult"
     - "!PassResult"

docs/defining_guards/pydantic.ipynb

Lines changed: 87 additions & 275 deletions
Large diffs are not rendered by default.

docs/examples/competitors_check.ipynb

Lines changed: 240 additions & 0 deletions
Large diffs are not rendered by default.

docs/examples/toxic_language.ipynb

Lines changed: 206 additions & 0 deletions
@@ -0,0 +1,206 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Check whether an LLM-generated response contains toxic language\n",
+    "\n",
+    "### Using the `ToxicLanguage` validator\n",
+    "\n",
+    "This is a simple walkthrough of the `ToxicLanguage` validator. This validator checks whether an LLM-generated response contains toxic language. It uses the pre-trained multi-label `unitary/unbiased-toxic-roberta` model from HuggingFace to check whether the generated text is toxic. It supports both full-text-level and sentence-level validation.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Import the guardrails package\n",
+    "# and the ToxicLanguage validator\n",
+    "import guardrails as gd\n",
+    "from guardrails.validators import ToxicLanguage\n",
+    "from rich import print"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Create a Guard object with this validator\n",
+    "# Here, we'll use the default validation method of \"sentence\"\n",
+    "# and the default threshold of 0.25\n",
+    "\n",
+    "guard = gd.Guard.from_string(\n",
+    "    validators=[ToxicLanguage(on_fail=\"fix\")],\n",
+    "    description=\"testmeout\",\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
+       "What a lovely day it has been to roam around the city.\n",
+       "I love it here!\n",
+       "</pre>\n"
+      ],
+      "text/plain": [
+       "\n",
+       "What a lovely day it has been to roam around the city.\n",
+       "I love it here!\n"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "# Parse the raw response\n",
+    "raw_response = \"\"\"\n",
+    "What a lovely day it has been to roam around the city. I love it here!\n",
+    "I hate how pathetic and needy you can be sometimes.\n",
+    "\"\"\"\n",
+    "output = guard.parse(\n",
+    "    llm_output=raw_response,\n",
+    ")\n",
+    "\n",
+    "# Print the output\n",
+    "print(output)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As you can see here, the sentence `I hate how pathetic and needy you can be sometimes.` has been detected as toxic, and hence removed from the validated output.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
+       "I am a Generative AI model that is trained on a large corpus of text.\n",
+       "This is a very powerful tool for generating new text, but it can also be used to generate text that is offensive or\n",
+       "hateful.\n",
+       "</pre>\n"
+      ],
+      "text/plain": [
+       "\n",
+       "I am a Generative AI model that is trained on a large corpus of text.\n",
+       "This is a very powerful tool for generating new text, but it can also be used to generate text that is offensive or\n",
+       "hateful.\n"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "# Parse the raw response\n",
+    "raw_response = \"\"\"\n",
+    "I am a Generative AI model that is trained on a large corpus of text.\n",
+    "I am shocked by how disgusting and vile you are.\n",
+    "This is a very powerful tool for generating new text, but it can also be used to generate text that is offensive or hateful.\n",
+    "\"\"\"\n",
+    "output = guard.parse(\n",
+    "    llm_output=raw_response,\n",
+    ")\n",
+    "\n",
+    "# Print the output\n",
+    "print(output)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Similarly, here the sentence `I am shocked by how disgusting and vile you are.` has been detected as toxic, and hence removed from the validated output.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Test with validation method 'full'\n",
+    "full_guard = gd.Guard.from_string(\n",
+    "    validators=[ToxicLanguage(validation_method=\"full\", on_fail=\"fix\")],\n",
+    "    description=\"testmeout\",\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/html": [
+       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
+       "</pre>\n"
+      ],
+      "text/plain": [
+       "\n"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "# Parse the raw response\n",
+    "raw_response = \"Stop being such a dumb piece of shit. Why can't you comprehend this?\"\n",
+    "output = full_guard.parse(\n",
+    "    llm_output=raw_response,\n",
+    ")\n",
+    "\n",
+    "# Print the output\n",
+    "print(output)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Here, we're validating the entire text at once; since toxic language was detected, nothing is returned.\n"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "lang",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}

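For quick reference, the notebook above boils down to the following plain-Python sketch. It only condenses the cells shown in the diff; it assumes `guardrails-ai` is installed and that the `Guard.from_string` / `guard.parse` signatures are those used by this 0.3.0 notebook.

# Condensed from docs/examples/toxic_language.ipynb (sketch, not part of the commit).
import guardrails as gd
from guardrails.validators import ToxicLanguage

# Sentence-level validation (the notebook's defaults: validation_method="sentence", threshold=0.25).
guard = gd.Guard.from_string(
    validators=[ToxicLanguage(on_fail="fix")],
    description="testmeout",
)

raw_response = """
What a lovely day it has been to roam around the city. I love it here!
I hate how pathetic and needy you can be sometimes.
"""
# With on_fail="fix", sentences flagged as toxic are dropped from the returned text.
print(guard.parse(llm_output=raw_response))

# Full-text validation: the whole response is accepted or rejected as one unit,
# so a toxic response comes back empty.
full_guard = gd.Guard.from_string(
    validators=[ToxicLanguage(validation_method="full", on_fail="fix")],
    description="testmeout",
)
print(full_guard.parse(llm_output=raw_response))
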
docs/integrations/langchain.ipynb

Lines changed: 19 additions & 58 deletions
@@ -20,13 +20,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
-    "import openai\n",
-    "!pip install guardrails-ai\n",
-    "!pip install langchain"
+    "!pip install guardrails-ai langchain openai>=1"
    ]
   },
   {
@@ -38,13 +36,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:44.813776Z",
-     "start_time": "2023-06-21T16:39:44.801522Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
     "rail_spec = \"\"\"\n",
@@ -79,15 +72,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:46.123745Z",
-     "start_time": "2023-06-21T16:39:46.111378Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
+    "import openai\n",
     "from rich import print\n",
     "\n",
     "from langchain.output_parsers import GuardrailsOutputParser\n",
@@ -98,16 +87,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:47.843397Z",
-     "start_time": "2023-06-21T16:39:47.704763Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
-    "output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, api=openai.ChatCompletion.create)"
+    "output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, api=openai.chat.completions.create)"
    ]
   },
   {
@@ -119,13 +103,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:49.828250Z",
-     "start_time": "2023-06-21T16:39:49.823999Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [
     {
      "data": {
@@ -221,13 +200,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:54.395309Z",
-     "start_time": "2023-06-21T16:39:54.378412Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
     "prompt = PromptTemplate(\n",
@@ -245,13 +219,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
-   "metadata": {
-    "ExecuteTime": {
-     "end_time": "2023-06-21T16:39:57.246325Z",
-     "start_time": "2023-06-21T16:39:56.882944Z"
-    }
-   },
+   "execution_count": null,
+   "metadata": {},
    "outputs": [],
    "source": [
     "model = OpenAI(temperature=0)\n",
@@ -266,16 +235,9 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": null,
    "metadata": {},
    "outputs": [
-    {
-     "name": "stderr",
-     "output_type": "stream",
-     "text": [
-      "Async event loop found, but guard was invoked synchronously.For validator parallelization, please call `validate_async` instead.\n"
-     ]
-    },
     {
      "data": {
       "text/html": [
@@ -322,10 +284,9 @@
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.11.4"
+   "pygments_lexer": "ipython3"
   }
  },
  "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
 }

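The langchain.ipynb changes above track the OpenAI 1.x client rename (openai.ChatCompletion.create becomes openai.chat.completions.create) and fold the install commands into a single cell. A minimal sketch of the updated parser wiring, assuming `pip install guardrails-ai langchain openai>=1` has been run and that `rail_spec` holds the RAIL string defined earlier in that notebook:

# Sketch of the updated parser setup from docs/integrations/langchain.ipynb (OpenAI >= 1.0).
import openai
from rich import print
from langchain.output_parsers import GuardrailsOutputParser

# rail_spec = """<rail> ... </rail>"""  # defined in an earlier cell of the notebook

# The functional change in this diff: pass the 1.x-style chat-completions callable
# instead of the removed openai.ChatCompletion.create.
output_parser = GuardrailsOutputParser.from_rail_string(
    rail_spec,
    api=openai.chat.completions.create,
)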