| LLM PII detected <br/> [`cf.llm.prompt.pii_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_detected/) <br/> <Type text="Boolean"/> | Indicates whether any personally identifiable information (PII) has been detected in the LLM prompt included in the request. |
| LLM PII categories <br/> [`cf.llm.prompt.pii_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the personally identifiable information (PII) categories found in the LLM prompt included in the request.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.pii_categories/) |
| LLM Content detected <br/> [`cf.llm.prompt.detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.detected/) <br/> <Type text="Boolean"/> | Indicates whether Cloudflare detected an LLM prompt in the incoming request. |
| LLM Unsafe topic detected <br/> [`cf.llm.prompt.unsafe_topic_detected`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_detected/) <br/> <Type text="Boolean"/> | Indicates whether the incoming request includes any unsafe topic category in the LLM prompt. |
| LLM Unsafe topic categories <br/> [`cf.llm.prompt.unsafe_topic_categories`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) <br/> <Type text="Array<String>"/> | Array of string values with the types of unsafe topics detected in the LLM prompt.<br/>[Category list](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.unsafe_topic_categories/) |
| LLM Injection score <br/> [`cf.llm.prompt.injection_score`](/ruleset-engine/rules-language/fields/reference/cf.llm.prompt.injection_score/) <br/> <Type text="Number"/> | A score from 1–99 that represents the likelihood that the LLM prompt in the request is trying to perform a prompt injection attack. |
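These fields can be combined in a single custom rule expression. A minimal sketch in the Rules language (not taken from this table): the `le 20` threshold assumes the usual WAF attack-score convention where lower values indicate a likelier attack, and the two category values are placeholders for real entries from the linked category lists:

```txt
cf.llm.prompt.detected and (
  cf.llm.prompt.pii_detected or
  any(cf.llm.prompt.unsafe_topic_categories[*] in {"violence" "hate"}) or
  cf.llm.prompt.injection_score le 20
)
```

Gating on `cf.llm.prompt.detected` first keeps the rule from matching requests in which no LLM prompt was found, so the PII, topic, and injection checks only apply when there is a prompt to inspect.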