Commit 21f0956

add setup details
1 parent 1f051a4 commit 21f0956

File tree

1 file changed

+22
-1
lines changed


src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx

Lines changed: 22 additions & 1 deletion
@@ -1,6 +1,6 @@
 ---
 pcx_content_type: how-to
-title: How Guardrails works
+title: Setting up Guardrails
 sidebar:
   order: 3
 ---
@@ -12,3 +12,24 @@ Within AI Gateway settings, you can customize Guardrails:
 - Enable or disable content moderation.
 - Choose evaluation scope: Analyze user prompts, model responses, or both.
 - Define hazard categories: Select categories like violence, hate, or sexual content and assign actions (ignore, flag, or block).
+
+This tutorial guides you through setting up and customizing Guardrails in your AI Gateway using the Cloudflare dashboard.
+
+## 1. Log in to the dashboard
+
+1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
+2. Go to **AI** > **AI Gateway**.
+
+## 2. Access the Settings tab
+
+In the AI Gateway section, open the **Settings** tab.
+Confirm that Guardrails is enabled.
+
+## 3. Set security hazards on prompts or responses
+
+Within the Guardrails settings, choose where to apply security hazards:
+
+- **On Prompt**: Guardrails evaluates incoming prompts against your security policies.
+- **On Response**: Guardrails inspects the model's responses to ensure they meet your content guidelines.
+
+Select the option that best fits your use case. You can change this setting at any time.
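
Once Guardrails is enabled in the dashboard, requests sent through the gateway endpoint are screened automatically; no client-side changes are required beyond pointing at the gateway URL. A minimal sketch of a prompt routed through an AI Gateway OpenAI-compatible endpoint (`{account_id}` and `{gateway_id}` are placeholders for your own values, and the model name is illustrative):

```
# Send a chat completion through AI Gateway so Guardrails can screen
# the prompt (and, if configured, the model's response).
curl "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/completions" \
  --header "Authorization: Bearer $OPENAI_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, world"}]
  }'
```

Depending on the action assigned to a hazard category, a matching prompt is ignored, flagged, or blocked before it reaches the upstream model.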
