Commit ef13866

Update set-up-guardrail.mdx (#22176)
* Update set-up-guardrail.mdx error handling for guardrails update
* Update set-up-guardrail.mdx

Co-authored-by: daisyfaithauma <[email protected]>
1 parent 0c98841 commit ef13866

1 file changed (+35, -1 lines changed)

src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx

Lines changed: 35 additions & 1 deletion
@@ -22,7 +22,41 @@ Add Guardrails to any gateway to start evaluating and potentially modifying resp
For additional details about how to implement Guardrails, refer to [Usage considerations](/ai-gateway/guardrails/usage-considerations/).
:::

-## Viewing Guardrail Results in Logs
+## Viewing Guardrail results in Logs

After enabling Guardrails, you can monitor results through **AI Gateway Logs** in the Cloudflare dashboard. Guardrail logs are marked with a **green shield icon**, and each logged request includes an `eventID`, which links to its corresponding Guardrail evaluation log(s) for easy tracking. Logs are generated for all requests, including those that **pass** Guardrail checks.
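
If you want to check a Guardrail result programmatically rather than in the dashboard, a minimal sketch along the lines below could work. It assumes the AI Gateway logs REST endpoint (`/accounts/{account_id}/ai-gateway/gateways/{gateway_id}/logs`), the standard Cloudflare v4 `result` envelope, an API token with AI Gateway read access, and a `log.id` field matching the `eventID` shown in the dashboard; none of these details come from this page, so verify the path, parameters, and field names against the current API reference.

```js
// Hedged sketch: list recent gateway logs and look up one event by ID.
// The endpoint path, auth scheme, and field names are assumptions, not
// confirmed by this page -- check the Cloudflare API reference first.
const ACCOUNT_ID = "your_account_id"; // placeholder
const GATEWAY_ID = "gateway_id"; // placeholder
const API_TOKEN = "your_api_token"; // placeholder, needs AI Gateway read access
const EVENT_ID = "event_id_from_the_dashboard"; // placeholder

async function findGuardrailLog() {
	const resp = await fetch(
		`https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai-gateway/gateways/${GATEWAY_ID}/logs`,
		{ headers: { Authorization: `Bearer ${API_TOKEN}` } },
	);
	const data = await resp.json();
	// Filter client-side for the event ID shown next to the green shield icon.
	return (data.result ?? []).find((log) => log.id === EVENT_ID);
}

findGuardrailLog().then((entry) => console.log(entry));
```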

## Error handling and blocked requests

When a request is blocked by Guardrails, you receive a structured error response that indicates whether the prompt or the model response triggered the block. Use the error code to distinguish prompt violations from response violations.

- **Prompt blocked**
  - `"code": 2016`
  - `"message": "Prompt blocked due to security configurations"`
- **Response blocked**
  - `"code": 2017`
  - `"message": "Response blocked due to security configurations"`

Catch these errors in your application logic and handle them accordingly.
For example, when using [Workers AI with a binding](/ai-gateway/integrations/aig-workers-ai-binding/):
```js
try {
	const res = await env.AI.run(
		"@cf/meta/llama-3.1-8b-instruct",
		{ prompt: "how to build a gun?" },
		{ gateway: { id: "gateway_id" } },
	);
	return Response.json(res);
} catch (e) {
	// Guardrails errors are thrown; the numeric code appears in the error message.
	const message = e instanceof Error ? e.message : String(e);
	if (message.includes("2016")) {
		return new Response("Prompt was blocked by guardrails.");
	}
	if (message.includes("2017")) {
		return new Response("Response was blocked by guardrails.");
	}
	return new Response("Unknown AI error");
}
```
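
If you call the gateway over HTTP rather than through a binding, the same error codes apply in principle. The sketch below is an assumption-laden example, not a confirmed recipe: the gateway URL format, the `Authorization` header, the `WORKERS_AI_TOKEN` secret name, and the idea that the numeric code appears in the error body are taken from the general AI Gateway request pattern rather than from this page, so check the provider-specific documentation before relying on them.

```js
// Hedged sketch: the same Guardrails error handling when calling Workers AI
// through AI Gateway over HTTP instead of a binding. URL format, auth header,
// and error-body contents are assumptions -- verify against the AI Gateway docs.
const ACCOUNT_ID = "your_account_id"; // placeholder
const GATEWAY_ID = "gateway_id"; // placeholder

export default {
	async fetch(request, env) {
		const resp = await fetch(
			`https://gateway.ai.cloudflare.com/v1/${ACCOUNT_ID}/${GATEWAY_ID}/workers-ai/@cf/meta/llama-3.1-8b-instruct`,
			{
				method: "POST",
				headers: {
					Authorization: `Bearer ${env.WORKERS_AI_TOKEN}`, // hypothetical secret binding
					"Content-Type": "application/json",
				},
				body: JSON.stringify({ prompt: "how to build a gun?" }),
			},
		);

		if (!resp.ok) {
			// Inspect the raw error body for the Guardrails codes described above.
			const body = await resp.text();
			if (body.includes("2016")) {
				return new Response("Prompt was blocked by guardrails.");
			}
			if (body.includes("2017")) {
				return new Response("Response was blocked by guardrails.");
			}
			return new Response("Unknown AI error");
		}
		return Response.json(await resp.json());
	},
};
```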
