Commit 31995df

Add field descriptions
1 parent 72ff8ad commit 31995df

1 file changed

src/content/docs/waf/detections/firewall-for-ai.mdx

Lines changed: 8 additions & 8 deletions
@@ -118,14 +118,14 @@ You can combine the previous expression with other [fields](/ruleset-engine/rule
 
 When enabled, Firewall for AI populates the following fields:
 
-| Field name in the dashboard | Field + Data type | Notes |
-| --- | --- | --- |
-| LLM PII Detected | [`cf.llm.prompt.pii_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_detected/) <br/> <Type text="Boolean"/> | |
-| LLM PII Categories | [`cf.llm.prompt.pii_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) <br/> <Type text="Array<String>"/> | [Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) |
-| LLM Content Detected | [`cf.llm.prompt.detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.detected/) <br/> <Type text="Boolean "/> | |
-| LLM Unsafe topic detected | [`cf.llm.prompt.unsafe_topic_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_detected/) <br/> <Type text="Boolean"/> | |
-| LLM Unsafe topic categories | [`cf.llm.prompt.unsafe_topic_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) <br/> <Type text="Array<String>"/> | [Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) |
-| LLM Injection score | [`cf.llm.prompt.injection_score`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.injection_score/) <br/> <Type text="Number"/> | Range: 1–99 |
+| Name in the dashboard | Field + Data type | Description |
+| --- | --- | --- |
+| LLM PII Detected | [`cf.llm.prompt.pii_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_detected/) <br/> <Type text="Boolean"/> | Indicates whether any personally identifiable information (PII) has been detected in the LLM prompt included in the request. |
+| LLM PII Categories | [`cf.llm.prompt.pii_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the personally identifiable information (PII) categories found in the LLM prompt included in the request.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) |
+| LLM Content Detected | [`cf.llm.prompt.detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.detected/) <br/> <Type text="Boolean "/> | Indicates whether Cloudflare detected an LLM prompt in the incoming request. |
+| LLM Unsafe topic detected | [`cf.llm.prompt.unsafe_topic_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_detected/) <br/> <Type text="Boolean"/> | Indicates whether the incoming request includes any unsafe topic category in the LLM prompt. |
+| LLM Unsafe topic categories | [`cf.llm.prompt.unsafe_topic_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the type of unsafe topics detected in the LLM prompt.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) |
+| LLM Injection score | [`cf.llm.prompt.injection_score`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.injection_score/) <br/> <Type text="Number"/> | A score from 1–99 that represents the likelihood that the LLM prompt in the request is trying to perform a prompt injection attack. |
 
 ## Example mitigation rules
 
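For context, the fields described in this table are intended for use in WAF rule expressions. A minimal sketch of such an expression follows; the threshold of 20, and the assumption that lower injection scores indicate a more likely attack, are illustrative and not taken from this commit:

```txt
(cf.llm.prompt.pii_detected) or (cf.llm.prompt.injection_score < 20)
```

Paired with a mitigation action such as block, a rule like this would act on requests whose LLM prompt either contains PII or resembles a prompt injection attempt; the score threshold should be tuned against real traffic.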