Commit cb7d24e

committed
why guardrails
1 parent 0ff4f6f commit cb7d24e

File tree

3 files changed: +81 −4 lines changed

docs/getting_started/why_use.md

Lines changed: 0 additions & 3 deletions
This file was deleted.
docs/getting_started/why_use_guardrails.md

Lines changed: 80 additions & 0 deletions
@@ -0,0 +1,80 @@
# Why use Guardrails AI?

Guardrails AI is a trusted framework for developing Generative AI applications, with thousands of weekly downloads and a dedicated team constantly refining its capabilities.

While users may find various reasons to integrate Guardrails AI into their projects, we believe its core strengths lie in simplifying LLM response validation, enhancing reusability, and providing robust operational features. These benefits can significantly reduce development time and improve the consistency of AI applications.

## A Standard for LLM Response Validation

Guardrails AI provides a framework for creating reusable validators to check LLM outputs. This approach reduces code duplication and improves maintainability by allowing developers to create validators that can be integrated into multiple LLM calls. Using this approach, we're able to uplevel performance, LLM feature compatibility, and LLM app reliability.

Here's an example of validation with and without Guardrails AI:
```python
# Without Guardrails AI
from openai import OpenAI

client = OpenAI()

def is_haiku(value):
    if not value or len(value.split("\n")) != 3:
        return "This is not a haiku"
    return value

response = client.chat.completions.create(
    model='gpt-3.5-turbo',
    messages=[{"role": "user", "content": "Write a haiku about AI"}],
)
print(is_haiku(response.choices[0].message.content))

# With Guardrails AI
from guardrails import Guard
from guardrails.validator_base import (
    FailResult,
    PassResult,
    register_validator,
)

@register_validator(name="is-haiku", data_type="string")
def is_haiku(value, metadata):
    if not value or len(value.split("\n")) != 3:
        return FailResult(error_message="This is not a haiku")
    return PassResult()

response = Guard().use(is_haiku)(
    model='gpt-3.5-turbo',
    messages=[{"role": "user", "content": "Write a haiku about AI"}],
)
print(response.validated_output)
```

## Performance

Guardrails AI includes built-in support for asynchronous calls and parallelization, and even offers an out-of-the-box validation server. These features help AI applications scale by handling many LLM interactions efficiently and processing responses in real time.

Guardrails AI implements automatic retries and exponential backoff for common LLM failure conditions. This built-in error handling improves the overall reliability of AI applications without requiring additional error-handling code. By automatically managing issues such as network failures or API rate limits, Guardrails AI helps ensure consistent performance of LLM-based applications.
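The retry-with-backoff pattern can be pictured in a few lines. This is an illustrative stdlib sketch, not the actual Guardrails internals; `flaky_llm_call` and the delay constants are made up for the example:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Retry fn with exponential backoff: base_delay, then 2x, 4x, ..."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky call: fails twice (e.g. rate limited), then succeeds.
calls = {"n": 0}

def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("rate limited")
    return "ok"

print(with_retries(flaky_llm_call))  # "ok" after two retried failures
```

The point is that the caller never sees the transient failures; Guardrails applies this kind of policy for you.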

Providing a comprehensive set of tools for working with LLMs streamlines the development process and promotes the creation of more robust and reliable AI applications.

## Streaming

Guardrails AI supports [streaming validation](/docs/how_to_guides/enable_streaming), and to our knowledge it is the only library that can *fix LLM responses in real time*. This feature is particularly useful for applications that require immediate feedback or correction of LLM outputs, such as chatbots.
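The idea behind fixing a stream in real time can be sketched in plain Python. This toy (not the Guardrails streaming API; `redact_digits` and the chunks are illustrative) runs a fixing validator over each chunk before it reaches the user:

```python
def redact_digits(text):
    """Toy 'fixer': mask digits so they never reach the user."""
    return "".join("#" if c.isdigit() else c for c in text)

def validated_stream(chunks, fix):
    """Apply a fixing validator to each chunk as it arrives."""
    for chunk in chunks:
        yield fix(chunk)

llm_chunks = ["My card is ", "4111 1111", " thanks"]
print("".join(validated_stream(llm_chunks, redact_digits)))
# My card is #### #### thanks
```

Because the fix happens per chunk, a chatbot can display corrected output without waiting for the full response.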

## The Biggest LLM Validation Library

[Guardrails Hub](https://hub.guardrailsai.com) is our centralized location for validators that we and members of our community make available to other developers and companies.

Validators are written using a few different methods:

1. Simple, function-based validators
2. Classifier-based validators
3. LLM-based validators

Some of these validators require additional infrastructure, and Guardrails provides the patterns and tools to make them easy to deploy and use.
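To picture the first two styles, here is a toy sketch (not Hub code — the registry, validator names, and the keyword "classifier" are all hypothetical):

```python
VALIDATORS = {}

def register(name):
    """Minimal registry mimicking the idea of named, reusable validators."""
    def wrap(fn):
        VALIDATORS[name] = fn
        return fn
    return wrap

# 1. Function-based: a pure check on the text.
@register("no-empty")
def no_empty(text):
    return bool(text.strip())

# 2. Classifier-based: wraps a classifier (here a trivial keyword one).
@register("no-profanity")
def no_profanity(text, banned=("darn",)):
    return not any(word in text.lower() for word in banned)

print(all(v("Hello world") for v in VALIDATORS.values()))  # True
```

LLM-based validators follow the same shape, but delegate the judgment call to another model, which is where the extra infrastructure comes in.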

The Guardrails Hub is open for submissions, and we encourage you to contribute your own validators to help the community.

## Supports All LLMs

Guardrails AI supports many major LLMs directly, as well as a host of other LLMs via our integrations with LangChain and Hugging Face. This means that you can use the same validators across multiple LLMs, making it easy to swap out LLMs based on performance and quality of responses.

Supported models can be found in our [LiteLLM partner doc](https://docs.litellm.ai/docs/providers).

Don't see your LLM? You can always write a thin wrapper using the [instructions in our docs](/docs/how_to_guides/using_llms#custom-llm-wrappers).

## Monitoring

Guardrails AI automatically keeps a log of all LLM calls and steps taken during processing, which you can access programmatically via a guard's history. Additionally, Guardrails AI [supports OpenTelemetry for capturing metrics](/docs/concepts/telemetry), enabling easy integration with Grafana, Arize AI, iudex, OpenInference, and all major Application Performance Monitoring (APM) services.
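The shape of that history log can be pictured with a toy wrapper (illustrative only — `ToyGuard` and its fields are invented, not the real `Guard` API):

```python
class ToyGuard:
    """Records every call and its outcome for later inspection."""
    def __init__(self, validate):
        self.validate = validate
        self.history = []

    def __call__(self, llm, prompt):
        output = llm(prompt)
        ok = self.validate(output)
        self.history.append({"prompt": prompt, "output": output, "ok": ok})
        return output

guard = ToyGuard(validate=lambda text: len(text) > 0)
guard(llm=lambda p: p.upper(), prompt="hi")
print(guard.history)  # [{'prompt': 'hi', 'output': 'HI', 'ok': True}]
```

Having every step recorded in one place is what makes the OpenTelemetry export straightforward.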

## Structured Data

Guardrails AI excels at [validating structured output](/docs/how_to_guides/generate_structured_data), returning data through a JSON-formatted response or generating synthetic structured data. Used in conjunction with Pydantic, you can define reusable models in Guardrails AI for verifying structured responses, which you can then reuse across apps and teams.
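A stdlib-only sketch of the underlying idea (the real feature uses Pydantic models with a `Guard`; the schema, helper, and sample response below are illustrative):

```python
import json

SCHEMA = {"name": str, "age": int}  # hypothetical expected shape

def validate_structured(raw, schema):
    """Parse a JSON-formatted LLM response and check field presence and types."""
    data = json.loads(raw)
    errors = [key for key, typ in schema.items()
              if key not in data or not isinstance(data[key], typ)]
    return data, errors

llm_response = '{"name": "Ada", "age": 36}'
data, errors = validate_structured(llm_response, SCHEMA)
print(errors)  # [] -> the response matches the expected structure
```

A Pydantic model plays the role of `SCHEMA` here, and because it is an ordinary class it can be shared across apps and teams.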

## Used Widely in the Open-Source Community

We're honored and humbled that open-source projects that support AI application development are choosing to integrate Guardrails AI. Supporting guards provides open-source projects an easy way to ensure they're processing the highest-quality LLM output possible.

docusaurus/sidebars.js

Lines changed: 1 addition & 1 deletion

```diff
@@ -29,10 +29,10 @@ const sidebars = {
       "index",
       "getting_started/guardrails_server",
       "getting_started/quickstart",
-      // "getting_started/why_use",
       // "getting_started/ai_validation",
       // "getting_started/ml_based",
       // "getting_started/structured_data",
+      "getting_started/why_use_guardrails",
       "getting_started/contributing",
       "getting_started/help",
       "faq",
```
