
Commit dd36413

Update Firewall for AI field table
1 parent 24b7c28 commit dd36413

1 file changed (+15, -8)

src/content/docs/waf/detections/firewall-for-ai.mdx

Lines changed: 15 additions & 8 deletions
@@ -118,14 +118,21 @@ You can combine the previous expression with other [fields](/ruleset-engine/rule

 When enabled, Firewall for AI populates the following fields:

-| Name in the dashboard | Field + Data type | Description |
-| --------------------- | ----------------- | ----------- |
-| LLM PII Detected | [`cf.llm.prompt.pii_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_detected/) <br/> <Type text="Boolean"/> | Indicates whether any personally identifiable information (PII) has been detected in the LLM prompt included in the request. |
-| LLM PII Categories | [`cf.llm.prompt.pii_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the personally identifiable information (PII) categories found in the LLM prompt included in the request.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) |
-| LLM Content Detected | [`cf.llm.prompt.detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.detected/) <br/> <Type text="Boolean "/> | Indicates whether Cloudflare detected an LLM prompt in the incoming request. |
-| LLM Unsafe topic detected | [`cf.llm.prompt.unsafe_topic_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_detected/) <br/> <Type text="Boolean"/> | Indicates whether the incoming request includes any unsafe topic category in the LLM prompt. |
-| LLM Unsafe topic categories | [`cf.llm.prompt.unsafe_topic_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the type of unsafe topics detected in the LLM prompt.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) |
-| LLM Injection score | [`cf.llm.prompt.injection_score`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.injection_score/) <br/> <Type text="Number"/> | A score from 1–99 that represents the likelihood that the LLM prompt in the request is trying to perform a prompt injection attack. |
+| Field | Description |
+| ----- | ----------- |
+| LLM PII detected <br/> [`cf.llm.prompt.pii_detected`][1] <br/> <Type text="Boolean"/> | Indicates whether any personally identifiable information (PII) has been detected in the LLM prompt included in the request. |
+| LLM PII categories <br/> [`cf.llm.prompt.pii_categories`][2] <br/> <Type text="Array<String>"/> | Array of string values with the personally identifiable information (PII) categories found in the LLM prompt included in the request.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) |
+| LLM Content detected <br/> [`cf.llm.prompt.detected`][3] <br/> <Type text="Boolean "/> | Indicates whether Cloudflare detected an LLM prompt in the incoming request. |
+| LLM Unsafe topic detected <br/> [`cf.llm.prompt.unsafe_topic_detected`][4] <br/> <Type text="Boolean"/> | Indicates whether the incoming request includes any unsafe topic category in the LLM prompt. |
+| LLM Unsafe topic categories <br/> [`cf.llm.prompt.unsafe_topic_categories`][5] <br/> <Type text="Array<String>"/> | Array of string values with the type of unsafe topics detected in the LLM prompt.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) |
+| LLM Injection score <br/> [`cf.llm.prompt.injection_score`][6] <br/> <Type text="Number"/> | A score from 1–99 that represents the likelihood that the LLM prompt in the request is trying to perform a prompt injection attack. |
+
+[1]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_detected/
+[2]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/
+[3]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.detected/
+[4]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_detected/
+[5]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/
+[6]: /ruleset-engine/rules-language/fields/reference/cf.llm.prompt.injection_score/

 ## Example mitigation rules
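
The fields documented in the updated table are intended for use in WAF custom rule expressions. A minimal sketch of such an expression, assuming standard Rules language syntax; the `le 20` threshold is purely illustrative, and the assumption that lower injection scores indicate a more likely prompt injection attack should be confirmed against the field reference before use:

    cf.llm.prompt.detected and
    (cf.llm.prompt.pii_detected or cf.llm.prompt.injection_score le 20)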
