Commit 0ba68bd

[WAF] Firewall for AI: Add note about content type (#26014)
Clarify that Firewall for AI only triggers on JSON content type.

Co-authored-by: Pedro Sousa <[email protected]>
1 parent ec7b4c4 commit 0ba68bd

File tree

1 file changed: 1 addition, 1 deletion


src/content/docs/waf/detections/firewall-for-ai.mdx

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ Firewall for AI is a detection that can help protect your services powered by <G
 - Detect and moderate unsafe or harmful prompts – for example, prompts potentially related to violent crimes.
 - Detect prompts intentionally designed to subvert the intended behavior of the LLM as specified by the developer – for example, <GlossaryTooltip term="prompt injection">prompt injection</GlossaryTooltip> attacks.
 
-When enabled, the detection runs on incoming traffic, searching for any LLM prompts attempting to exploit the model.
+When enabled, the detection runs on incoming traffic, searching for any LLM prompts attempting to exploit the model. Currently, the detection only handles requests with a JSON content type (`application/json`).
 
 Cloudflare will populate the existing [Firewall for AI fields](#firewall-for-ai-fields) based on the scan results. You can check these results in the [Security Analytics](/waf/analytics/security-analytics/) dashboard by filtering on the `cf-llm` [managed endpoint label](/api-shield/management-and-monitoring/endpoint-labels/) and reviewing the detection results on your traffic. Additionally, you can use these fields in rule expressions ([custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/)) to protect your application against LLM abuse and data leaks.
 
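The sentence added in this commit says the detection only handles requests with a JSON content type. As a rough illustration of that gate (the helper below is hypothetical, not Cloudflare code), the decision amounts to a media-type check on the `Content-Type` header:

```python
# Hypothetical sketch of the content-type gate described in this change:
# Firewall for AI currently inspects only requests whose Content-Type is
# application/json. This helper is illustrative, not part of Cloudflare.

def is_scannable(content_type: str) -> bool:
    """Return True if the request body would be eligible for prompt scanning."""
    # Strip any parameters (e.g. "; charset=utf-8") before comparing.
    media_type = content_type.split(";")[0].strip().lower()
    return media_type == "application/json"

# An LLM prompt posted as JSON is inspected...
print(is_scannable("application/json; charset=utf-8"))  # True
# ...while the same prompt posted as form data is skipped.
print(is_scannable("application/x-www-form-urlencoded"))  # False
```

In practice this means clients that submit prompts as form data or plain text will not have those bodies analyzed by the detection.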

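The changed paragraph mentions using Firewall for AI fields in custom rule or rate limiting rule expressions. A minimal custom rule expression might look like the fragment below; the field name shown is an assumption for illustration and should be checked against the [Firewall for AI fields](#firewall-for-ai-fields) reference before use:

```txt
(cf.llm.prompt.pii_detected)
```

Paired with a block or log action, a rule like this would act only on requests where the detection flagged the prompt, and only for JSON-typed requests, per the note added in this commit.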