18 changes: 18 additions & 0 deletions src/content/docs/ai-gateway/guardrails/index.mdx
@@ -0,0 +1,18 @@
---
title: Guardrails
pcx_content_type: navigation
order: 1
sidebar:
order: 8
group:
badge: Beta
---

Guardrails help you deploy AI applications safely by intercepting and evaluating both user prompts and model responses for harmful content. Acting as a proxy between your application and [model providers](/ai-gateway/providers) (such as OpenAI, Anthropic, DeepSeek, and others), AI Gateway's Guardrails ensure a consistent and secure experience across your entire AI ecosystem.
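
As a brief sketch, assuming your gateway exposes the standard AI Gateway endpoint pattern, a request routed through the gateway (rather than directly to the provider) might look like the following; the account ID, gateway ID, and model name are placeholders:

```ts
// Minimal sketch: send an OpenAI chat request through an AI Gateway
// endpoint instead of calling the provider directly, so Guardrails can
// evaluate the prompt and the model response in transit.
// ACCOUNT_ID and GATEWAY_ID are placeholders for your own values.
const ACCOUNT_ID = "your-account-id";
const GATEWAY_ID = "your-gateway-id";

const response = await fetch(
  `https://gateway.ai.cloudflare.com/v1/${ACCOUNT_ID}/${GATEWAY_ID}/openai/chat/completions`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello!" }],
    }),
  },
);
console.log(await response.json());
```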

Guardrails proactively monitor interactions between users and AI models, giving you:

- **Consistent moderation**: Uniform moderation layer that works across models and providers.
- **Enhanced safety and user trust**: Proactively protect users from harmful or inappropriate interactions.
- **Flexibility and control over allowed content**: Specify which categories to monitor and choose between flagging or outright blocking.
- **Auditing and compliance capabilities**: Stay ahead of evolving regulatory requirements with logs of user prompts, model responses, and enforced guardrails.
107 changes: 107 additions & 0 deletions src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx
@@ -0,0 +1,107 @@
---
pcx_content_type: how-to
title: How Guardrails works
sidebar:
order: 3
---

AI Gateway inspects all interactions in real time by evaluating content against predefined safety parameters. Below is a breakdown of the process:
Collaborator: We should delete this entire page and instead move the different sub-groups of information to other pages (noting below)


1. Intercepting interactions:
AI Gateway proxies requests and responses, sitting between the user and the AI model.

2. Inspecting content:

- User prompts: AI Gateway checks prompts against safety parameters (for example, violence, hate, or sexual content). Based on your settings, prompts can be flagged or blocked before reaching the model.
- Model responses: Once processed, the AI model response is inspected. If hazardous content is detected, it can be flagged or blocked before being delivered to the user.

3. Applying actions:
Depending on your configuration, flagged content is logged for review, while blocked content is prevented from proceeding, as in the sketch below.
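
As a rough sketch of this flow (the types and helpers below are hypothetical stand-ins, not the actual Guardrails implementation):

```ts
// Illustrative sketch of the intercept -> inspect -> act flow described
// above. Everything here is a hypothetical stand-in, not the actual
// Guardrails implementation.
type Action = "ignore" | "flag" | "block";

interface Evaluation {
  hazardous: boolean;
  category?: string; // for example "violence", "hate", or "sexual content"
}

// Placeholder for the real model-based evaluation (Llama Guard on Workers AI).
async function evaluateContent(content: string): Promise<Evaluation> {
  const hazardous = /<some hazard heuristic>/.test(content); // placeholder check
  return hazardous ? { hazardous, category: "violence" } : { hazardous: false };
}

async function applyGuardrail(content: string, action: Action): Promise<string> {
  if (action === "ignore") return content; // no evaluation requested
  const result = await evaluateContent(content);
  if (!result.hazardous) return content;
  if (action === "flag") {
    console.warn(`Flagged for review: ${result.category}`); // logged, still delivered
    return content;
  }
  throw new Error(`Blocked by guardrail: ${result.category}`); // never proceeds
}
```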

## Supported model types

Guardrails determines the type of AI model being used and applies safety checks accordingly:

- Text generation models: Both prompts and responses are evaluated.
- Embedding models: Only the prompt is evaluated, and the response is passed directly back to the user.
- Unknown models: If the model type cannot be determined, prompts are evaluated, but responses bypass Guardrails.

If Guardrails cannot access the underlying model, requests set to "block" will result in an error, while flagged requests will proceed.
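
A sketch of this dispatch logic, with hypothetical types and helpers rather than the actual API:

```ts
// Sketch of the model-type dispatch described above; hypothetical, not
// the actual Guardrails API.
type ModelType = "text-generation" | "embedding" | "unknown";

// Placeholder safety check standing in for the Llama Guard evaluation.
async function checkOrThrow(content: string): Promise<void> {
  if (/<some hazard heuristic>/.test(content)) {
    throw new Error("Blocked by Guardrails");
  }
}

async function guardedCall(
  modelType: ModelType,
  prompt: string,
  callModel: (prompt: string) => Promise<string>,
): Promise<string> {
  await checkOrThrow(prompt); // prompts are evaluated for every model type
  const response = await callModel(prompt);
  if (modelType === "text-generation") {
    await checkOrThrow(response); // only text-generation responses are inspected
  }
  return response; // embedding and unknown model responses bypass Guardrails
}
```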

## Configuration

Within AI Gateway settings, you can customize Guardrails:

- Enable or disable content moderation.
- Choose evaluation scope: Analyze user prompts, model responses, or both.
- Define hazard categories: Select categories such as violence, hate, or sexual content and assign actions (ignore, flag, or block), as in the sketch below.
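
For illustration, these options could be pictured as the following hypothetical configuration shape; the field names are illustrative, and the actual settings are managed in the AI Gateway dashboard:

```ts
// Hypothetical shape mirroring the configuration options above; the
// field names are illustrative, not the actual dashboard schema.
type GuardrailAction = "ignore" | "flag" | "block";

interface GuardrailsConfig {
  enabled: boolean; // enable or disable content moderation
  evaluate: { prompts: boolean; responses: boolean }; // evaluation scope
  categories: Record<string, GuardrailAction>; // hazard category -> action
}

const exampleConfig: GuardrailsConfig = {
  enabled: true,
  evaluate: { prompts: true, responses: true },
  categories: {
    violence: "block",
    hate: "flag",
    "sexual content": "flag",
  },
};

console.log(exampleConfig.categories["violence"]); // "block"
```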

## Workers AI and Guardrails

Guardrails currently uses [Llama Guard 3 8B](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) on [Workers AI](/workers-ai/) to perform content evaluations. The underlying model may be updated in the future, and we will reflect those changes within Guardrails.

Because Guardrails runs on Workers AI, enabling it incurs Workers AI usage, which you can monitor through the Workers AI dashboard.

## Additional considerations

- Latency impact: Enabling Guardrails adds some latency, since each prompt (and, depending on your configuration, each response) must be evaluated before it proceeds. Consider this when balancing safety and speed.

:::note

Llama Guard is provided as-is without any representations, warranties, or guarantees. Any rules or examples contained in blogs, developer docs, or other reference materials are provided for informational purposes only. You acknowledge and understand that you are responsible for the results and outcomes of your use of AI Gateway.

:::
Collaborator: I think from here until the end is just a duplication of the existing content?

Collaborator: You can avoid this by previewing your build locally || looking at the preview links once you push it up.

