
Commit 78ece30

Merge pull request #705 from guardrails-ai/shreya/docs-for-onfail-enum
Update docs to use OnFailAction enum
2 parents 762e1ea + 94ad9d7 · commit 78ece30
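In short, the change swaps stringly-typed `on_fail` values for the `OnFailAction` enum throughout the docs. A minimal before/after sketch (the validator choice here is illustrative, not part of the commit):

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch  # illustrative hub validator

# Before this commit, the docs passed the action as a raw string:
old_guard = Guard().use(RegexMatch, regex=r"^\d+$", on_fail="exception")

# After, the docs use the enum member, now re-exported from the package root
# (see the guardrails/__init__.py diff below):
new_guard = Guard().use(RegexMatch, regex=r"^\d+$", on_fail=OnFailAction.EXCEPTION)
```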

File tree

4 files changed: +20 −19 lines changed

4 files changed

+20
-19
lines changed

README.md

Lines changed: 5 additions & 5 deletions
@@ -64,11 +64,11 @@ pip install guardrails-ai
 3. Create a Guard from the installed guardrail.
 
 ```python
-from guardrails import Guard
+from guardrails import Guard, OnFailAction
 from guardrails.hub import RegexMatch
 
 guard = Guard().use(
-    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail="exception"
+    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
 )
 
 guard.validate("123-456-7890") # Guardrail passes
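Assembled from the post-change hunk above, the updated snippet in runnable form (assumes the RegexMatch validator has been installed from the Guardrails Hub); the failing branch is added here for illustration:

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch  # install first: guardrails hub install hub://guardrails/regex_match

guard = Guard().use(
    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
)

guard.validate("123-456-7890")  # Guardrail passes

try:
    guard.validate("not a phone number")  # fails the regex, so EXCEPTION raises
except Exception as err:
    print(err)
```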
@@ -93,12 +93,12 @@ pip install guardrails-ai
 Then, create a Guard from the installed guardrails.
 
 ```python
-from guardrails import Guard
+from guardrails import Guard, OnFailAction
 from guardrails.hub import CompetitorCheck, ToxicLanguage
 
 guard = Guard().use_many(
-    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail="exception"),
-    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail="exception"),
+    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
+    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION),
 )
 
 guard.validate(
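The `guard.validate(` context line is truncated in this hunk; below is a hedged completion of the new `use_many` snippet, with a made-up input standing in for the elided one (both hub validators must be installed):

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION),
)

# Hypothetical input in place of the README's truncated validate() call:
# mentioning "Google" trips CompetitorCheck, so this raises.
guard.validate("I think Google makes the best phones.")
```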

docs/how_to_guides/telemetry.mdx

Lines changed: 4 additions & 4 deletions
@@ -49,13 +49,13 @@ Then, set up your Guard the default tracer provided in the guardrails library. Y
 
 <h5 a><strong><code>main.py</code></strong></h5>
 ```python
-from guardrails import Guard
+from guardrails import Guard, OnFailAction
 from guardrails.utils.telemetry_utils import default_otlp_tracer
 from guardrails.hub import ValidLength
 import openai
 
 guard = Guard.from_string(
-    validators=[ValidLength(min=1, max=10, on_fail="exception")],
+    validators=[ValidLength(min=1, max=10, on_fail=OnFailAction.EXCEPTION)],
     # use a descriptive name that will differentiate where your metrics are stored
     tracer=default_otlp_tracer("petname_guard")
 )
@@ -112,13 +112,13 @@ For advanced use cases (like if you have a metrics provider in a VPC), you can u
 Standard [open telemetry environment variables](https://opentelemetry-python.readthedocs.io/en/stable/getting-started.html#configure-the-exporter) are used to configure the collector. Use the default_otel_collector_tracer when configuring your guard.
 
 ```python
-from guardrails import Guard
+from guardrails import Guard, OnFailAction
 from guardrails.utils.telemetry_utils import default_otel_collector_tracer
 from guardrails.hub import ValidLength
 import openai
 
 guard = Guard.from_string(
-    validators=[ValidLength(min=1, max=10, on_fail="exception")],
+    validators=[ValidLength(min=1, max=10, on_fail=OnFailAction.EXCEPTION)],
     # use a descriptive name that will differentiate where your metrics are stored
     tracer=default_otel_collector_tracer("petname_guard")
 )
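Since the collector path is driven by standard OpenTelemetry environment variables, the setup can be sketched as below; the endpoint value is an assumption (the default OTLP gRPC port), not part of this commit:

```python
import os

# Point the exporter at a local collector before constructing the guard.
# OTEL_EXPORTER_OTLP_ENDPOINT is a standard OpenTelemetry variable.
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4317")

from guardrails import Guard, OnFailAction
from guardrails.hub import ValidLength
from guardrails.utils.telemetry_utils import default_otel_collector_tracer

guard = Guard.from_string(
    validators=[ValidLength(min=1, max=10, on_fail=OnFailAction.EXCEPTION)],
    tracer=default_otel_collector_tracer("petname_guard"),
)
```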
Lines changed: 9 additions & 9 deletions
@@ -1,14 +1,14 @@
 
-# `on_fail` Policies for Validators
+# `OnFailActions` for Validators
 
-While initializing a validator, you can specify `on_fail` policies to handle the failure of the validator. The `on_fail` policy specifies the corrective action that should be taken if the quality criteria is not met. The corrective action can be one of the following:
+Guardrails provides a number of `OnFailActions` for when a validator fails. The `OnFailAction` specifies the corrective action that should be taken if the quality criteria is not met. The corrective action can be one of the following:
 
 | Action | Behavior |
 |-----------|-----------|
-| `reask` | Reask the LLM to generate an output that meets the correctness criteria specified in the validator. The prompt used for reasking contains information about which quality criteria failed, which is auto-generated by the validator. |
-| `fix` | Programmatically fix the generated output to meet the correctness criteria when possible. E.g. the formatter `provenance_llm` validator will remove any sentences that are estimated to be hallucinated. |
-| `filter` | (Only applicable for structured data validation) Filter the incorrect value. This only filters the field that fails, and will return the rest of the generated output. |
-| `refrain` | Refrain from returning an output. This is useful when the generated output is not safe to return, in which case a `None` value is returned instead. |
-| `noop` | Do nothing. The failure will still be recorded in the logs, but no corrective action will be taken. |
-| `exception` | Raise an exception when validation fails. |
-| `fix_reask` | First, fix the generated output deterministically, and then rerun validation with the deterministically fixed output. If validation fails, then perform reasking. |
+| `OnFailAction.REASK` | Reask the LLM to generate an output that meets the correctness criteria specified in the validator. The prompt used for reasking contains information about which quality criteria failed, which is auto-generated by the validator. |
+| `OnFailAction.FIX` | Programmatically fix the generated output to meet the correctness criteria when possible. E.g. the formatter `provenance_llm` validator will remove any sentences that are estimated to be hallucinated. |
+| `OnFailAction.FILTER` | (Only applicable for structured data validation) Filter the incorrect value. This only filters the field that fails, and will return the rest of the generated output. |
+| `OnFailAction.REFRAIN` | Refrain from returning an output. This is useful when the generated output is not safe to return, in which case a `None` value is returned instead. |
+| `OnFailAction.NOOP` | Do nothing. The failure will still be recorded in the logs, but no corrective action will be taken. |
+| `OnFailAction.EXCEPTION` | Raise an exception when validation fails. |
+| `OnFailAction.FIX_REASK` | First, fix the generated output deterministically, and then rerun validation with the deterministically fixed output. If validation fails, then perform reasking. |

guardrails/__init__.py

Lines changed: 2 additions & 1 deletion
@@ -6,13 +6,14 @@
 from guardrails.prompt import Instructions, Prompt
 from guardrails.rail import Rail
 from guardrails.utils import constants, docs_utils
-from guardrails.validator_base import Validator, register_validator
+from guardrails.validator_base import OnFailAction, Validator, register_validator
 
 __all__ = [
     "Guard",
     "PromptCallableBase",
     "Rail",
     "Validator",
+    "OnFailAction",
     "register_validator",
     "constants",
     "docs_utils",
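The practical effect of this `__init__.py` change is that the enum becomes importable from the package root, which is what the updated docs rely on:

```python
# After this commit, both imports resolve; the docs use the first.
from guardrails import OnFailAction
# from guardrails.validator_base import OnFailAction  # still works, longer path

print(OnFailAction.EXCEPTION)  # the enum member that replaces the "exception" string
```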
