Guardrails help you deploy AI applications safely by intercepting and evaluating both user prompts and model responses for harmful content. Acting as a proxy between your application and [model providers](/ai-gateway/providers) (such as OpenAI, Anthropic, DeepSeek, and others), Guardrails ensures a consistent and secure experience across your entire AI ecosystem.
Guardrails proactively monitor interactions between users and AI models, allowing you to:
- Enhance safety: Protect users by detecting and mitigating harmful content.
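Concretely, acting as a proxy means an application sends requests to a gateway URL rather than to the provider's own endpoint. The sketch below follows AI Gateway's published URL pattern, but the identifiers and the commented SDK usage are placeholders, not values from this document:

```python
def gateway_base_url(account_id: str, gateway_id: str, provider: str) -> str:
    """Build the per-provider base URL that the gateway proxies.

    The scheme follows AI Gateway's published URL pattern; the
    identifiers passed in are placeholders, not real values.
    """
    return f"https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/{provider}"

# A provider SDK can then be pointed at the gateway, for example (assumed usage):
#   client = OpenAI(base_url=gateway_base_url("ACCOUNT_ID", "my-gateway", "openai"))
```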
src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx
---
pcx_content_type: how-to
title: How Guardrails works
sidebar:
  order: 3
---
AI Gateway inspects all interactions in real time by evaluating content against predefined safety parameters. Below is a breakdown of the process:
1. Intercepting interactions:
   AI Gateway proxies requests and responses, sitting between the user and the AI model.
2. Inspecting content:
   - User prompts: AI Gateway checks prompts against safety parameters (for example, violence, hate, or sexual content). Based on your settings, prompts can be flagged or blocked before reaching the model.
   - Model responses: Once processed, the AI model response is inspected. If hazardous content is detected, it can be flagged or blocked before being delivered to the user.
3. Applying actions:
   Depending on your configuration, flagged content is logged for review, while blocked content is prevented from proceeding.
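The steps above can be sketched as a small decision routine. This is an illustrative model, not the actual Guardrails implementation; `BlockedError` and the evaluator inputs are hypothetical stand-ins, while the ignore/flag/block actions come from the configuration described later:

```python
# Illustrative sketch of the intercept -> inspect -> act flow.
# BlockedError and the inputs are hypothetical; only the
# ignore/flag/block actions are taken from the documentation.

class BlockedError(Exception):
    """Raised when content matches a category whose action is 'block'."""

def apply_guardrail(detected: set[str], actions: dict[str, str]) -> list[str]:
    """Apply per-category actions to the hazard categories an evaluator
    detected in a prompt or response.

    Returns the categories that were flagged (logged, but allowed through);
    raises BlockedError if any detected category is configured to block.
    """
    flagged = []
    for category in sorted(detected):   # sorted for deterministic output
        action = actions.get(category, "ignore")
        if action == "block":
            raise BlockedError(f"'{category}' content blocked")
        if action == "flag":
            flagged.append(category)    # flagged content still proceeds
    return flagged
```

For example, with `{"violence": "block", "hate": "flag"}`, a response in which the evaluator detects hate is delivered but logged, while one containing violence raises before delivery.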
## Supported model types
Guardrails determines the type of AI model being used and applies safety checks accordingly:
- Text generation models: Both prompts and responses are evaluated.
- Embedding models: Only the prompt is evaluated, and the response is passed directly back to the user.
- Unknown models: If the model type cannot be determined, prompts are evaluated, but responses bypass Guardrails.
If Guardrails cannot access the underlying model, requests set to "block" will result in an error, while flagged requests will proceed.
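These rules amount to a per-model-type evaluation scope: which sides of the interaction get checked. A hypothetical helper capturing them, for illustration only:

```python
def evaluation_scope(model_type: str) -> tuple[bool, bool]:
    """Return (evaluate_prompt, evaluate_response) for a model type,
    mirroring the rules above. Hypothetical helper, not Guardrails code."""
    if model_type == "text-generation":
        return (True, True)   # both prompt and response are evaluated
    # embedding models and unknown model types: prompt only,
    # the response bypasses Guardrails
    return (True, False)
```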
## Configuration
Within AI Gateway settings, you can customize Guardrails:
- Enable or disable content moderation.
- Choose evaluation scope: Analyze user prompts, model responses, or both.
- Define hazard categories: Select categories like violence, hate, or sexual content and assign actions (ignore, flag, or block).
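Conceptually, a configuration is an on/off switch, an evaluation scope, and a per-category action map. The dictionary below is a hypothetical sketch of that shape; actual settings are managed in the AI Gateway dashboard, and this is not its real schema:

```python
# Hypothetical shape of a Guardrails configuration -- illustration only;
# real settings are edited in the AI Gateway dashboard, not in code.
VALID_ACTIONS = {"ignore", "flag", "block"}

guardrails_config = {
    "enabled": True,                    # turn content moderation on or off
    "scope": ["prompts", "responses"],  # evaluate one or both sides
    "categories": {                     # per-category action assignment
        "violence": "block",
        "hate": "flag",
        "sexual_content": "flag",
    },
}

def validate(config: dict) -> bool:
    """Check that every hazard category maps to a recognized action."""
    return all(a in VALID_ACTIONS for a in config["categories"].values())
```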
## Workers AI and Guardrails
Guardrails currently uses [Llama Guard 3 8B](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) on [Workers AI](/workers-ai/) to perform content evaluations. The underlying model may be updated in the future, and we will reflect those changes within Guardrails.
Because Guardrails runs on Workers AI, enabling it incurs Workers AI usage, which you can monitor through the Workers AI Dashboard.
## Additional considerations
- Latency impact: Enabling Guardrails adds some latency. Consider this when balancing safety and speed.
:::note
Llama Guard is provided as-is without any representations, warranties, or guarantees. Any rules or examples contained in blogs, developer docs, or other reference materials are provided for informational purposes only. You acknowledge and understand that you are responsible for the results and outcomes of your use of AI Gateway.
:::