src/content/docs/waf/detections/firewall-for-ai.mdx (+36 −4 lines changed)
Alternatively, create a custom rule like the one described in the next step, but with a _Log_ action. This rule generates [security events](/waf/analytics/security-events/) that let you validate your configuration.

### 3. Mitigate harmful requests

[Create a custom rule](/waf/custom-rules/create-dashboard/) that blocks requests where Cloudflare detected personally identifiable information (PII) in the incoming request (as part of an LLM prompt), returning a custom JSON body:
- **With response type**: Custom JSON
- **Response body**: `{ "error": "Your request was blocked. Please rephrase your request." }`
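For automation, an equivalent rule can be created through the Rulesets API. The sketch below only assembles the rule payload; the `cf.llm.prompt.pii_detected` field is listed under [Fields](#fields), but the exact endpoint, phase name, and payload shape are assumptions here and should be confirmed against the Rulesets API reference before use:

```python
import json

# Hedged sketch: a custom-rule payload for the zone-level
# http_request_firewall_custom phase. The structure mirrors the
# dashboard settings above (Block action with a custom JSON body);
# verify field names against the Rulesets API docs.
rule = {
    "description": "Block prompts containing PII",
    "expression": "(cf.llm.prompt.pii_detected)",
    "action": "block",
    "action_parameters": {
        "response": {
            "status_code": 403,
            "content_type": "application/json",
            "content": json.dumps(
                {"error": "Your request was blocked. Please rephrase your request."}
            ),
        }
    },
}

print(json.dumps(rule, indent=2))
```

The payload would then be sent in a `POST`/`PUT` to the zone's entry-point ruleset for the relevant phase.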

For additional examples, refer to [Example mitigation rules](#example-mitigation-rules). For a list of fields provided by Firewall for AI, refer to [Fields](#fields).

<Details header="Combine with other Rules language fields">

For a list of PII categories, refer to the [`cf.llm.prompt.pii_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) field reference.

For a list of unsafe topic categories, refer to the [`cf.llm.prompt.unsafe_topic_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) field reference.

## Example mitigation rules

### Block requests with specific PII category in prompt

The following example [custom rule](/waf/custom-rules/create-dashboard/) blocks requests whose LLM prompt tries to obtain PII of a specific [category](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/):

- **If incoming requests match**:

| Field              | Operator | Value         |
| ------------------ | -------- | ------------- |
| LLM PII Categories | is in    | `Credit Card` |

If you use the Expression Editor, enter the following expression:<br />
`(any(cf.llm.prompt.pii_categories[*] in {"CREDIT_CARD"}))`

- **Action**: _Block_

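The `is in` operator applied to an array field such as LLM PII Categories matches when *any* element of the array is in the value set, which is what the `any(... in {...})` expression above encodes. A minimal Python sketch of that matching logic (category names are illustrative):

```python
def rule_matches(detected_categories, blocked_categories):
    """Mimic `any(cf.llm.prompt.pii_categories[*] in {...})`:
    true if any detected category is in the blocked set."""
    return any(cat in blocked_categories for cat in detected_categories)

blocked = {"CREDIT_CARD"}
print(rule_matches(["EMAIL", "CREDIT_CARD"], blocked))  # True: rule fires
print(rule_matches(["EMAIL"], blocked))                 # False: rule does not fire
```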
### Block requests with specific unsafe content categories in prompt

The following example [custom rule](/waf/custom-rules/create-dashboard/) blocks requests whose LLM prompt contains unsafe content in specific [categories](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/):