Commit 9a54d96

Guardrails docs

1 parent e13d29d commit 9a54d96

2 files changed, +65 −0 lines changed

Lines changed: 17 additions & 0 deletions

---
title: Guardrails in AI Gateway
pcx_content_type: navigation
order: 1
sidebar:
  order: 8
  group:
    badge: Beta
---

Guardrails in AI Gateway helps you deploy AI applications safely by intercepting and evaluating both user prompts and model responses for harmful content. Acting as a proxy between your application and model providers (such as OpenAI, Anthropic, DeepSeek, and others), Guardrails ensures a consistent and secure experience across your entire AI ecosystem.

Guardrails proactively monitors interactions between users and AI models, allowing you to:

- **Enhance safety:** Protect users by detecting and mitigating harmful content.
- **Improve compliance:** Meet evolving regulatory standards.
- **Reduce costs:** Prevent unnecessary processing by blocking harmful requests early.
Lines changed: 48 additions & 0 deletions
---
pcx_content_type: how-to
title: How Guardrails works
sidebar:
  order: 3
---

AI Gateway inspects all interactions in real time by evaluating content against predefined safety parameters. Here’s a breakdown of the process:

1. **Intercepting interactions:**
   AI Gateway sits between the user and the AI model, intercepting every prompt and response.

2. **Evaluating content:**

   - **User prompts:** When a user sends a prompt, AI Gateway checks it against safety parameters (for example, violence, hate, or sexual content). Based on your configuration, the system can either flag the prompt or block it before it reaches the AI model.
   - **Model responses:** After processing, the AI model's response is inspected. If hazardous content is detected, it can be flagged or blocked before being delivered to the user.

3. **Model-specific behavior:**

   - **Text generation models:** Both prompts and responses are evaluated.
   - **Embedding models:** Only the prompt is evaluated, and the response is passed directly back to the user.
   - **Catalogued models:** If the model type is identifiable, only the prompt is evaluated; the response bypasses Guardrails and is delivered directly.

4. **Real-time observability:**
   Detailed logs provide visibility into user queries and model outputs, allowing you to monitor interactions continuously and adjust safety parameters as needed.
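
The per-category decision flow described in these steps can be sketched as a small local simulation. This is a hypothetical illustration only: the real evaluation runs inside AI Gateway using Llama Guard, not in your application code, and the function and category names here are made up for the example.

```python
# Hypothetical sketch of the Guardrails decision flow (ignore / flag / block).
# The real classifier runs inside AI Gateway; this substring check is only a
# stand-in so the control flow is runnable.

def evaluate(text: str, config: dict) -> dict:
    """Check text against configured hazard categories and decide an outcome."""
    flagged = []
    for category, action in config.items():
        if category in text.lower():  # stand-in for the real safety classifier
            if action == "block":
                # A single "block" category stops the request immediately.
                return {"outcome": "blocked", "category": category}
            if action == "flag":
                # "flag" lets the request proceed but records the category.
                flagged.append(category)
    return {"outcome": "flagged" if flagged else "allowed", "categories": flagged}

# Example per-category actions, mirroring the ignore/flag/block options above.
config = {"violence": "block", "hate": "flag", "sexual": "ignore"}

print(evaluate("a harmless question", config))           # outcome: "allowed"
print(evaluate("a prompt mentioning hate", config))      # outcome: "flagged"
print(evaluate("a prompt mentioning violence", config))  # outcome: "blocked"
```

A blocked prompt never reaches the model provider, which is what makes the early-blocking cost savings mentioned elsewhere in these docs possible.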

## Configuration

Within AI Gateway settings, you can tailor the Guardrails feature to your requirements:

- **Guardrails:** Enable or disable content moderation.
- **Evaluation scope:** Choose to analyze user prompts, model responses, or both.
- **Hazard categories:** Define specific categories (such as violence, hate, or sexual content) to monitor, and set an action for each category (ignore, flag, or block).
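
To make the shape of these settings concrete, here is a hypothetical representation of a Guardrails configuration. The field names are illustrative assumptions, not the dashboard's or API's actual schema; you configure the real values in the AI Gateway dashboard.

```python
# Hypothetical representation of the Guardrails settings described above.
# Field names are illustrative; the real configuration lives in the
# AI Gateway dashboard.
guardrails_config = {
    "enabled": True,
    "evaluation_scope": ["prompts", "responses"],  # prompts, responses, or both
    "hazard_categories": {
        "violence": "block",   # reject the request outright
        "hate": "flag",        # allow it, but record the detection in logs
        "sexual": "ignore",    # skip this category entirely
    },
}

def is_moderated(config: dict, direction: str) -> bool:
    """Return True if the given direction ("prompts"/"responses") is evaluated."""
    return config["enabled"] and direction in config["evaluation_scope"]

print(is_moderated(guardrails_config, "prompts"))  # True
```

Narrowing the evaluation scope to prompts only is one way to trade coverage for lower latency, since each evaluated direction adds a safety-model call.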

## Leveraging Llama Guard on Workers AI

Guardrails is powered by [**Llama Guard**](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), Meta's open-source content moderation tool designed for real-time safety monitoring. AI Gateway uses the [**Llama Guard 3 8B model**](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), hosted on [**Workers AI**](/workers-ai/), to drive its safety features. This model is continuously updated to adapt to emerging safety challenges.

## Additional considerations

- **Workers AI usage:**
  Enabling Guardrails incurs usage on Workers AI. Monitor your usage through the Workers AI Dashboard.

- **Latency impact:**
  Evaluating both the request and the response introduces extra latency. Factor this into your deployment planning.

- **Model availability:**
  If the underlying model is unavailable, requests that are flagged will proceed; however, requests set to be blocked will result in an error.
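
The model-availability behavior amounts to failing open for `flag` actions and failing closed for `block` actions. A minimal sketch of that fallback logic, with hypothetical names (the actual error your application sees comes from AI Gateway):

```python
# Sketch of the fallback behavior when the safety model is unavailable:
# "flag" requests proceed unevaluated, "block" requests fail with an error.
# Names are illustrative, not AI Gateway's actual API.

class GuardrailsUnavailableError(Exception):
    """Raised when a block-configured request cannot be safety-checked."""

def handle_unavailable(action: str) -> str:
    """Decide what happens to a request when the safety model is down."""
    if action == "block":
        # Fail closed: the content cannot be proven safe, so reject it.
        raise GuardrailsUnavailableError("safety model unavailable; request blocked")
    # Fail open: flag/ignore requests proceed without evaluation.
    return "proceed"

print(handle_unavailable("flag"))  # proceed
```

In practice this means your application should handle a Guardrails-related error for block-configured categories the same way it handles any other upstream failure, with a retry or a user-facing message.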
