
Commit 2d045a5

update more notebooks
1 parent 156e835 commit 2d045a5

5 files changed: +51 −13 lines changed

docs/examples/secrets_detection.ipynb

Lines changed: 16 additions & 6 deletions

@@ -6,7 +6,7 @@
    "source": [
     "## Check whether an LLM-generated code response contains secrets\n",
     "\n",
-    "### Using the `DetectSecrets` validator\n",
+    "### Using the `SecretsPresent` validator\n",
     "\n",
     "This is a simple walkthrough of how to use the `DetectSecrets` validator to check whether an LLM-generated code response contains secrets. It utilizes the `detect-secrets` library, which is a Python library that scans code files for secrets. The library is available on GitHub at [this link](https://github.com/Yelp/detect-secrets).\n"
    ]
@@ -31,9 +31,18 @@
     "! pip install detect-secrets -q"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also need to install the ```SecretsPresent``` validator from Guardrails Hub: \n",
+    "\n",
+    "```guardrails hub install hub://guardrails/secrets_present```"
+   ]
+  },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
    "metadata": {},
    "outputs": [
     {
@@ -47,9 +56,10 @@
    ],
    "source": [
     "# Import the guardrails package\n",
-    "# and the DetectSecrets validator\n",
+    "# and import the SecretsPresent validator\n",
+    "# from Guardrails Hub\n",
     "import guardrails as gd\n",
-    "from guardrails.validators import DetectSecrets\n",
+    "from guardrails.hub import SecretsPresent\n",
     "from rich import print"
    ]
   },
@@ -64,7 +74,7 @@
     "# if the validator detects secrets\n",
     "\n",
     "guard = gd.Guard.from_string(\n",
-    "    validators=[DetectSecrets(on_fail=\"fix\")],\n",
+    "    validators=[SecretsPresent(on_fail=\"fix\")],\n",
     "    description=\"testmeout\",\n",
     ")"
    ]
@@ -207,7 +217,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "#### In this way, you can use the `DetectSecrets` validator to check whether an LLM-generated code response contains secrets. With Guardrails as wrapper, you can be assured that the secrets in the code will be detected and masked and not be exposed.\n"
+    "#### In this way, you can use the `SecretsPresent` validator to check whether an LLM-generated code response contains secrets. With Guardrails as wrapper, you can be assured that the secrets in the code will be detected and masked and not be exposed.\n"
    ]
   }
  ],
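Taken together, these changes swap the import path and the validator name but leave the guard setup intact. A minimal sketch of the migrated flow, assuming the Guardrails API of this release (`Guard.from_string`, `guard.parse` returning an outcome with `validated_output`); the code snippet and its key are invented for illustration:

```python
# Minimal sketch of the migrated secrets_detection flow. Assumes the
# Guardrails API of this release; the snippet and its key are invented.
import guardrails as gd
from guardrails.hub import SecretsPresent

guard = gd.Guard.from_string(
    validators=[SecretsPresent(on_fail="fix")],
    description="testmeout",
)

# Hypothetical LLM-generated code containing a hard-coded credential.
code_snippet = 'api_key = "sk-hypothetical-12345"\nprint(api_key)'

# With on_fail="fix", a detected secret is masked instead of raising an error.
result = guard.parse(llm_output=code_snippet)
print(result.validated_output)
```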

docs/examples/select_choice_based_on_action.ipynb

Lines changed: 6 additions & 1 deletion

@@ -20,6 +20,11 @@
     "\n",
     "We want the LLM to play an RP game where it can choose to either `fight` or `flight`. If it chooses to `fight`, the LLM should choose a `weapon` and an `enemy`. If the player chooses `flight`, the LLM should choose a `direction` and a `distance`.\n",
     "\n",
+    "## Step 0: Install the ```ValidChoices``` validator from Guardrails Hub\n",
+    "\n",
+    "We first have to install the ```ValidChoices``` validator from Guardrails Hub: \n",
+    "\n",
+    "```guardrails hub install hub://guardrails/valid_choices```\n",
     "\n",
     "## Step 1: Generating `RAIL` Spec\n",
     "\n",
@@ -83,7 +88,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from guardrails.validators import ValidChoices\n",
+    "from guardrails.hub import ValidChoices\n",
     "from pydantic import BaseModel, Field\n",
     "from typing import Literal, Union\n",

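For context, the new hub import plugs into the notebook's Pydantic models in roughly this shape; the model and choice lists below are an illustrative reconstruction, not the notebook's exact schema:

```python
# Illustrative sketch of ValidChoices from guardrails.hub attached to
# Pydantic fields; the model and the choice lists are reconstructions,
# not the notebook's exact schema.
from typing import Literal
from pydantic import BaseModel, Field
from guardrails.hub import ValidChoices

class Fight(BaseModel):
    chosen_action: Literal["fight"]
    weapon: str = Field(
        validators=[ValidChoices(choices=["crossbow", "sword"], on_fail="reask")]
    )
    enemy: str = Field(
        validators=[ValidChoices(choices=["troll", "dragon"], on_fail="reask")]
    )
```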
docs/examples/syntax_error_free_sql.ipynb

Lines changed: 11 additions & 2 deletions

@@ -41,6 +41,15 @@
     "! pip install sqlvalidator"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also need to install the ```ValidSQL``` validator from Guardrails Hub: \n",
+    "\n",
+    "```guardrails hub install hub://guardrails/valid_sql```"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -117,7 +126,7 @@
    }
   ],
   "source": [
-    "from guardrails.validators import BugFreeSQL\n",
+    "from guardrails.hub import ValidSQL\n",
     "from pydantic import BaseModel, Field\n",
     "\n",
     "prompt = \"\"\"\n",
@@ -130,7 +139,7 @@
     "\"\"\"\n",
     "\n",
     "class ValidSql(BaseModel):\n",
-    "    generated_sql: str = Field(description=\"Generate SQL for the given natural language instruction.\", validators=[BugFreeSQL(on_fail=\"reask\")])"
+    "    generated_sql: str = Field(description=\"Generate SQL for the given natural language instruction.\", validators=[ValidSQL(on_fail=\"reask\")])"
    ]
   },
   {
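The renamed validator then wires into a guard exactly as `BugFreeSQL` did. A sketch, assuming `Guard.from_pydantic` behaves as elsewhere in these docs; the prompt text is a stand-in for the notebook's elided one:

```python
# Sketch: wiring the renamed ValidSQL validator into a guard. Assumes
# Guard.from_pydantic as used elsewhere in these docs; the prompt text
# below is a stand-in for the notebook's elided prompt.
import guardrails as gd
from guardrails.hub import ValidSQL
from pydantic import BaseModel, Field

prompt = """
Generate a valid SQL query for the given natural language instruction.
"""

class ValidSql(BaseModel):
    generated_sql: str = Field(
        description="Generate SQL for the given natural language instruction.",
        validators=[ValidSQL(on_fail="reask")],
    )

guard = gd.Guard.from_pydantic(output_class=ValidSql, prompt=prompt)
```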

docs/examples/text_summarization_quality.ipynb

Lines changed: 10 additions & 1 deletion

@@ -38,6 +38,15 @@
     "!pip install numpy"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "We also need to install the ```SimilarToDocument``` validator from Guardrails Hub: \n",
+    "\n",
+    "```guardrails hub install hub://guardrails/similar_to_document```"
+   ]
+  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -123,7 +132,7 @@
    "outputs": [],
    "source": [
     "from pydantic import BaseModel, Field\n",
-    "from guardrails.validators import SimilarToDocument\n",
+    "from guardrails.hub import SimilarToDocument\n",
     "\n",
     "prompt = \"\"\"\n",
     "Summarize the following document:\n",

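The migrated import sits in the notebook's summarization model roughly as follows; the `SimilarToDocument` parameters shown (`document`, `threshold`) are assumptions about the validator's options, not the notebook's exact values:

```python
# Sketch of the migrated import in context. The SimilarToDocument
# parameters (document, threshold) are assumptions about the validator's
# options, not the notebook's exact values.
from pydantic import BaseModel, Field
from guardrails.hub import SimilarToDocument

document = "..."  # the source text being summarized (elided here)

class DocumentSummary(BaseModel):
    summary: str = Field(
        description="Faithful summary of the document.",
        validators=[
            SimilarToDocument(document=document, threshold=0.6, on_fail="filter")
        ],
    )
```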
docs/examples/toxic_language.ipynb

Lines changed: 8 additions & 3 deletions

@@ -8,19 +8,24 @@
     "\n",
     "### Using the `ToxicLanguage` validator\n",
     "\n",
-    "This is a simple walkthrough of the `ToxicLanguage` validator. This validator checks whether an LLM-generated response contains toxic language. It uses the pre-trained multi-label model from HuggingFace -`unitary/unbiased-toxic-roberta` to check whether the generated text is toxic. It supports both full-text-level and sentence-level validation.\n"
+    "This is a simple walkthrough of the `ToxicLanguage` validator. This validator checks whether an LLM-generated response contains toxic language. It uses the pre-trained multi-label model from HuggingFace -`unitary/unbiased-toxic-roberta` to check whether the generated text is toxic. It supports both full-text-level and sentence-level validation.\n",
+    "\n",
+    "We first need to install the ```ToxicLanguage``` validator from Guardrails Hub: \n",
+    "\n",
+    "```guardrails hub install hub://guardrails/toxic_language```"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 9,
+   "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Import the guardrails package\n",
     "# and the ToxicLanguage validator\n",
+    "# from Guardrails Hub\n",
     "import guardrails as gd\n",
-    "from guardrails.validators import ToxicLanguage\n",
+    "from guardrails.hub import ToxicLanguage\n",
     "from rich import print"
    ]
   },
},
