Commit f2824e2

Merge pull request #687 from aaravnavani/fix_validator_imports

Fix validator imports

2 parents: 12e96e8 + 4a1fd14

19 files changed: +229 −59 lines
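Taken together, the per-file hunks below follow one pattern: each notebook gains a `guardrails hub install` cell, imports move from `guardrails.validators` to `guardrails.hub`, and a few validators are renamed along the way. The mapping can be sketched with a small helper (illustrative only, not code from the PR):

```python
# Old-to-new validator names observed in this PR's diffs.
# The import source also moves: guardrails.validators -> guardrails.hub.
RENAMED_VALIDATORS = {
    "BugFreePython": "ValidPython",
    "ProvenanceV0": "ProvenanceEmbeddings",
    "ProvenanceV1": "ProvenanceLLM",
    "OnTopic": "RestrictToTopic",
}

def migrate_import(line: str) -> str:
    """Rewrite a legacy import line to the hub-based form.

    Hypothetical helper for illustration; the PR edits the notebooks by hand.
    """
    line = line.replace("from guardrails.validators import",
                        "from guardrails.hub import")
    for old, new in RENAMED_VALIDATORS.items():
        line = line.replace(old, new)
    return line

print(migrate_import("from guardrails.validators import BugFreePython"))
# -> from guardrails.hub import ValidPython
```

Validators whose names did not change (e.g. `TwoWords`, `RegexMatch`, `CompetitorCheck`) only get the new import source.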

docs/examples/bug_free_python_code.ipynb

Lines changed: 12 additions & 4 deletions
@@ -1,5 +1,14 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://reflex/valid_python"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -23,7 +32,6 @@
    "\n",
    "In short, we want to make sure that the code can be executed without any errors.\n",
    "\n",
-   "\n",
    "## Step 1: Generating `RAIL` Spec\n",
    "\n",
    "Ordinarily, we could create a separate `RAIL` spec in a file. However, for the sake of this example, we will generate the `RAIL` spec in the notebook as a string. We will also show the same RAIL spec in a code-first format using a Pydantic model."
@@ -80,7 +88,7 @@
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, Field\n",
-   "from guardrails.validators import BugFreePython\n",
+   "from guardrails.hub import ValidPython\n",
    "\n",
    "prompt = \"\"\"\n",
    "Given the following high level leetcode problem description, write a short Python code snippet that solves the problem.\n",
@@ -91,7 +99,7 @@
    "${gr.complete_json_suffix}\"\"\"\n",
    "\n",
    "class BugFreePythonCode(BaseModel):\n",
-   "    python_code: str = Field(validators=[BugFreePython(on_fail=\"reask\")])\n",
+   "    python_code: str = Field(validators=[ValidPython(on_fail=\"reask\")])\n",
    "\n",
    "    class Config:\n",
    "        arbitrary_types_allowed = True"
@@ -429,7 +437,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.17"
+  "version": "3.12.2"
  },
  "orig_nbformat": 4,
  "vscode": {

docs/examples/competitors_check.ipynb

Lines changed: 19 additions & 1 deletion
@@ -1,5 +1,14 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://guardrails/competitor_check\n"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -10,6 +19,15 @@
    "To download this example as a Jupyter notebook, click [here](https://github.com/guardrails-ai/guardrails/blob/main/docs/examples/competitors_check.ipynb).\n"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "metadata": {},
+  "source": [
+   "We first need to install the ```CompetitorCheck``` validator from Guardrails Hub: \n",
+   "\n",
+   "```guardrails hub install hub://guardrails/competitor_check```\n"
+  ]
+ },
  {
   "cell_type": "code",
   "execution_count": 1,
@@ -26,7 +44,7 @@
  ],
  "source": [
   "import guardrails as gd\n",
-  "from guardrails.validators import CompetitorCheck\n",
+  "from guardrails.hub import CompetitorCheck\n",
   "from rich import print"
  ]
 },

docs/examples/extracting_entities.ipynb

Lines changed: 13 additions & 2 deletions
@@ -1,5 +1,16 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://guardrails/valid_length\n",
+    "!guardrails hub install hub://guardrails/two_words\n",
+    "!guardrails hub install hub://guardrails/valid_range"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -25,7 +36,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 22,
+  "execution_count": null,
   "metadata": {},
   "outputs": [
    {
@@ -143,7 +154,7 @@
  "metadata": {},
  "outputs": [],
  "source": [
-  "from guardrails.validators import LowerCase, TwoWords, OneLine\n",
+  "from guardrails.hub import LowerCase, TwoWords, OneLine\n",
   "from pydantic import BaseModel, Field\n",
   "from typing import List\n",
   "\n",

docs/examples/generate_structured_data.ipynb

Lines changed: 12 additions & 2 deletions
@@ -1,5 +1,16 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://reflex/valid_python\n",
+    "!guardrails hub install hub://guardrails/two_words\n",
+    "!guardrails hub install hub://guardrails/valid_range"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -26,7 +37,6 @@
    "3. The number of orders associated with each user should be between 0 and 50.\n",
    "4. Each user should have a most recent order date.\n",
    "\n",
-   "\n",
    "## Step 1: Generating `RAIL` Spec\n",
    "\n",
    "Ordinarily, we could create a separate `RAIL` spec in a file. However, for the sake of this example, we will generate the `RAIL` spec in the notebook as a string. We will also show the same RAIL spec in a code-first format using a Pydantic model."
@@ -83,7 +93,7 @@
  "outputs": [],
  "source": [
   "from pydantic import BaseModel, Field\n",
-  "from guardrails.validators import ValidLength, TwoWords, ValidRange\n",
+  "from guardrails.hub import ValidLength, TwoWords, ValidRange\n",
   "from datetime import date\n",
   "from typing import List\n",
   "\n",

docs/examples/guardrails_with_chat_models.ipynb

Lines changed: 12 additions & 1 deletion
@@ -1,5 +1,16 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://guardrails/lowercase\n",
+    "!guardrails hub install hub://guardrails/two_words\n",
+    "!guardrails hub install hub://guardrails/one_line"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -266,7 +277,7 @@
  "metadata": {},
  "outputs": [],
  "source": [
-  "from guardrails.validators import LowerCase, TwoWords, OneLine\n",
+  "from guardrails.hub import LowerCase, TwoWords, OneLine\n",
   "from pydantic import BaseModel, Field\n",
   "from typing import List, Optional\n",
   "\n",

docs/examples/input_validation.ipynb

Lines changed: 10 additions & 1 deletion
@@ -1,5 +1,14 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://guardrails/two_words"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -98,7 +107,7 @@
   }
  ],
  "source": [
-  "from guardrails.validators import TwoWords\n",
+  "from guardrails.hub import TwoWords\n",
   "from pydantic import BaseModel\n",
   "\n",
   "\n",

docs/examples/provenance.ipynb

Lines changed: 21 additions & 11 deletions
@@ -1,5 +1,15 @@
 {
  "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!guardrails hub install hub://guardrails/provenance_embeddings\n",
+    "!guardrails hub install hub://guardrails/provenance_llm"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -26,16 +36,16 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Provenance: v0\n",
+   "## ProvenanceEmbeddings\n",
    "\n",
-   "When you need to confirm that an LLM response is supported by a source text, you can use the `provenance-v0` Guardrails validator function. This function takes a list source articles, an LLM response (or prompt), and a threshold. It validates whether the response is \"close\" enough to the source articles. The threshold is a number between 0 and 1, where 1 means the response is not close to the source articles, and 0 means the response is very close to the source articles.\n",
+   "When you need to confirm that an LLM response is supported by a source text, you can use the ```ProvenanceEmbeddings``` Guardrails validator function. This function takes a list of source articles, an LLM response (or prompt), and a threshold. It validates whether the response is \"close\" enough to the source articles. The threshold is a number between 0 and 1, where 1 means the response is not close to the source articles, and 0 means the response is very close to the source articles.\n",
    "\n",
    "The provenance validator function can also wrap an LLM response (similar to what we saw in the first example). It can also wrap a query function to interact with your vector database as a source. These two uses are out-of-scope for this document.\n",
    "\n",
    "For the sake of this post, let's use the following configuration for our validator:\n",
    "\n",
    "1. `sources`: List of strings, each representing a source.\n",
-   "2. `embed function`: A function that takes a list of strings of length k and returns a 2D numpy array of shape (k, embedding_dim).\n"
+   "2. `embed function`: A function that takes a list of strings of length k and returns a 2D numpy array of shape (k, embedding_dim)."
   ]
  },
  {
@@ -268,7 +278,7 @@
   "import cohere\n",
   "import numpy as np\n",
   "from guardrails import Guard\n",
-  "from guardrails.validators import ProvenanceV0\n",
+  "from guardrails.hub import ProvenanceEmbeddings\n",
   "from typing import List, Union\n",
   "import os\n",
   "\n",
@@ -295,7 +305,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "Set up the rail with the `provenance-v0` validator on a single string output. The threshold set for this example is 0.3, which is more sensitive than usual. In this example, since some of the passed-in text is strongly supported by the source text, distance for those chunks is small and the result remains unchanged by Guardrails. However, some other information is not directly supporteds by the source. Guardrails took the Raw LLM output and removed the areas it deemed hallucinated.\n"
+  "Set up the rail with the ```ProvenanceEmbeddings``` validator on a single string output. The threshold set for this example is 0.3, which is more sensitive than usual. In this example, since some of the passed-in text is strongly supported by the source text, the distance for those chunks is small and the result remains unchanged by Guardrails. However, some other information is not directly supported by the source. Guardrails took the raw LLM output and removed the areas it deemed hallucinated.\n"
  ]
 },
 {
@@ -307,7 +317,7 @@
  "# Initialize the guard object\n",
  "guard = Guard.from_string(\n",
  "    validators=[\n",
-  "        ProvenanceV0(threshold=0.3, validation_method=\"sentence\", on_fail=\"fix\")\n",
+  "        ProvenanceEmbeddings(threshold=0.3, validation_method=\"sentence\", on_fail=\"fix\")\n",
  "    ],\n",
  "    description=\"testmeout\",\n",
  ")"
@@ -473,15 +483,15 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "## Provenance: v1\n",
+  "## ProvenanceLLM\n",
   "\n",
-  "Guardrails also provides a provenance: v1 validator that uses evaluates provenance for LLM-generated text using an LLM. Currently, to perform this self-evaluation-based provenance check, you can pass in a name of any OpenAI ChatCompletion model like `gpt-3.5-turbo` or `gpt-4`, or pass in a callable that handles LLM calls. This callable can use any LLM, that you define. For simplicity purposes, we show here a demo of using OpenAI's `gpt-3.5-turbo` model.\n",
+  "Guardrails also provides a ```ProvenanceLLM``` validator that evaluates provenance for LLM-generated text using an LLM. Currently, to perform this self-evaluation-based provenance check, you can pass in the name of any OpenAI ChatCompletion model like `gpt-3.5-turbo` or `gpt-4`, or pass in a callable that handles LLM calls. This callable can use any LLM that you define. For simplicity, we show a demo using OpenAI's `gpt-3.5-turbo` model.\n",
   "\n",
   "To use the OpenAI API, you have 3 options:\n",
   "\n",
   "- Set the OPENAI_API_KEY environment variable: os.environ[\"OPENAI_API_KEY\"] = \"[OpenAI_API_KEY]\"\n",
   "- Set the OPENAI_API_KEY using openai.api_key=\"[OpenAI_API_KEY]\"\n",
-  "- Pass the api_key as a parameter to the parse function as done below, in this example\n"
+  "- Pass the api_key as a parameter to the parse function, as done below in this example"
  ]
 },
 {
@@ -491,11 +501,11 @@
  "outputs": [],
  "source": [
   "# Initialize the guard object\n",
-  "from guardrails.validators import ProvenanceV1\n",
+  "from guardrails.hub import ProvenanceLLM\n",
   "\n",
   "guard_1 = Guard.from_string(\n",
   "    validators=[\n",
-  "        ProvenanceV1(\n",
+  "        ProvenanceLLM(\n",
   "            validation_method=\"sentence\",  # can be \"sentence\" or \"full\"\n",
   "            llm_callable=\"gpt-3.5-turbo\",  # as explained above\n",
   "            top_k=3,  # number of chunks to retrieve\n",

docs/examples/regex_validation.ipynb

Lines changed: 10 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,14 @@
11
{
22
"cells": [
3+
{
4+
"cell_type": "code",
5+
"execution_count": null,
6+
"metadata": {},
7+
"outputs": [],
8+
"source": [
9+
"!guardrails hub install hub://guardrails/regex_match"
10+
]
11+
},
312
{
413
"cell_type": "markdown",
514
"metadata": {},
@@ -42,7 +51,7 @@
4251
"source": [
4352
"import openai\n",
4453
"from guardrails import Guard\n",
45-
"from guardrails.validators import RegexMatch\n",
54+
"from guardrails.hub import RegexMatch\n",
4655
"from rich import print"
4756
]
4857
},

docs/examples/response_is_on_topic.ipynb

Lines changed: 13 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -1,5 +1,14 @@
11
{
22
"cells": [
3+
{
4+
"cell_type": "code",
5+
"execution_count": null,
6+
"metadata": {},
7+
"outputs": [],
8+
"source": [
9+
"!guardrails hub install hub://tryolabs/restricttotopic"
10+
]
11+
},
312
{
413
"cell_type": "markdown",
514
"metadata": {},
@@ -136,13 +145,13 @@
136145
],
137146
"source": [
138147
"import guardrails as gd\n",
139-
"from guardrails.validators import OnTopic\n",
148+
"from guardrails.hub import RestrictToTopic\n",
140149
"from guardrails.errors import ValidationError\n",
141150
"\n",
142151
"# Create the Guard with the OnTopic Validator\n",
143152
"guard = gd.Guard.from_string(\n",
144153
" validators=[\n",
145-
" OnTopic(\n",
154+
" RestrictToTopic(\n",
146155
" valid_topics=valid_topics,\n",
147156
" invalid_topics=invalid_topics,\n",
148157
" device=device,\n",
@@ -189,7 +198,7 @@
189198
"# Create the Guard with the OnTopic Validator\n",
190199
"guard = gd.Guard.from_string(\n",
191200
" validators=[\n",
192-
" OnTopic(\n",
201+
" RestrictToTopic(\n",
193202
" valid_topics=valid_topics,\n",
194203
" invalid_topics=invalid_topics,\n",
195204
" device=device,\n",
@@ -242,7 +251,7 @@
242251
"# Create the Guard with the OnTopic Validator\n",
243252
"guard = gd.Guard.from_string(\n",
244253
" validators=[\n",
245-
" OnTopic(\n",
254+
" RestrictToTopic(\n",
246255
" valid_topics=valid_topics,\n",
247256
" invalid_topics=invalid_topics,\n",
248257
" llm_callable=\"gpt-3.5-turbo\",\n",
