" \
+--json '{ "prompt": "Provide the phone number for the person associated with example@example.com" }'
+```
+
+The PII category for this request would be `EMAIL_ADDRESS`.
+
+## 5. Review labeled traffic and detection behavior
+
+Use [Security Analytics](/waf/analytics/security-analytics/) in the new application security dashboard to validate that Cloudflare is correctly labeling traffic for the endpoint.
+
+
+
+1. In the Cloudflare dashboard, go to the **Analytics** page.
+
+
+
+2. Filter data by the `cf-llm` managed endpoint label.
+
+ | Field | Operator | Value |
+ | ---------------------- | -------- | -------- |
+ | Managed Endpoint Label | equals | `cf-llm` |
+
+3. Review the detection results on your traffic. Expand each line in **Sampled logs** and check the values in the **Analyses** column. Most of the incoming traffic will probably be clean (not harmful).
+
+4. Refine the displayed traffic by applying a second filter condition:
+
+   | Field                  | Operator | Value    | Logic |
+   | ---------------------- | -------- | -------- | ----- |
+   | Managed Endpoint Label | equals   | `cf-llm` | And   |
+   | Has PII in LLM prompt  | equals   | Yes      |       |
+
+ The displayed logs now refer to incoming requests where personally identifiable information (PII) was detected in an LLM prompt.
+
+
+
+Alternatively, you can create a custom rule with a _Log_ action (only available on Enterprise plans) to check for potentially harmful traffic related to LLM prompts. This rule generates [security events](/waf/analytics/security-events/) that allow you to validate your Firewall for AI configuration.
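+
+For example, a rule with the expression `(cf.llm.prompt.pii_detected)` and the _Log_ action records a security event for every request in which PII is detected in an LLM prompt, without affecting the traffic itself.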
+
+## 6. Mitigate harmful requests
+
+[Create a custom rule](/waf/custom-rules/create-dashboard/) that blocks requests where Cloudflare detected personally identifiable information (PII) in the incoming request (as part of an LLM prompt), returning a custom JSON body:
+
+- **When incoming requests match**:
+
+ | Field | Operator | Value |
+ | ---------------- | -------- | ----- |
+ | LLM PII Detected | equals | True |
+
+ If you use the Expression Editor, enter the following expression:
+ `(cf.llm.prompt.pii_detected)`
+
+- **Rule action**: Block
+- **With response type**: Custom JSON
+- **Response body**: `{ "error": "Your request was blocked. Please rephrase your request." }`
+
+For additional examples, refer to [Example mitigation rules](/waf/detections/firewall-for-ai/example-rules/). For a list of fields provided by Firewall for AI, refer to [Firewall for AI fields](/waf/detections/firewall-for-ai/fields/).
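+
+If you manage your WAF configuration through the API, the following is a minimal sketch of creating an equivalent rule with the Rulesets API. It assumes you have already retrieved the ID of your zone's `http_request_firewall_custom` phase entrypoint ruleset; `$ZONE_ID`, `$RULESET_ID`, and `$API_TOKEN` are placeholders for your own values.
+
+```bash
+# Sketch: add a custom rule that blocks requests with PII in an LLM prompt
+# and returns a custom JSON body.
+curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/rulesets/$RULESET_ID/rules" \
+  --header "Authorization: Bearer $API_TOKEN" \
+  --json '{
+    "description": "Block requests with PII detected in an LLM prompt",
+    "expression": "(cf.llm.prompt.pii_detected)",
+    "action": "block",
+    "action_parameters": {
+      "response": {
+        "status_code": 403,
+        "content": "{ \"error\": \"Your request was blocked. Please rephrase your request.\" }",
+        "content_type": "application/json"
+      }
+    }
+  }'
+```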
+
+
+
+You can combine the previous expression with other [fields](/ruleset-engine/rules-language/fields/) and [functions](/ruleset-engine/rules-language/functions/) of the Rules language. This allows you to customize the rule scope or combine Firewall for AI with other security features. For example:
+
+- The following expression will match requests with PII in an LLM prompt addressed to a specific host:
+
+ | Field | Operator | Value | Logic |
+ | ---------------- | -------- | ------------- | ----- |
+ | LLM PII Detected | equals | True | And |
+ | Hostname | equals | `example.com` | |
+
+ Expression when using the editor:
+ `(cf.llm.prompt.pii_detected and http.host == "example.com")`
+
+- The following expression will match requests coming from bots that include PII in an LLM prompt:
+
+ | Field | Operator | Value | Logic |
+ | ---------------- | --------- | ----- | ----- |
+ | LLM PII Detected | equals | True | And |
+ | Bot Score | less than | `10` | |
+
+ Expression when using the editor:
+ `(cf.llm.prompt.pii_detected and cf.bot_management.score lt 10)`
+
+
diff --git a/src/content/docs/waf/detections/firewall-for-ai/index.mdx b/src/content/docs/waf/detections/firewall-for-ai/index.mdx
new file mode 100644
index 000000000000000..4bf0cb728e4a71d
--- /dev/null
+++ b/src/content/docs/waf/detections/firewall-for-ai/index.mdx
@@ -0,0 +1,42 @@
+---
+pcx_content_type: concept
+title: Firewall for AI (beta)
+tags:
+ - AI
+sidebar:
+ order: 5
+ group:
+ label: Firewall for AI
+ badge:
+ text: Beta
+---
+
+import {
+ GlossaryTooltip,
+ Tabs,
+ TabItem,
+ Details,
+ Steps,
+ Type,
+ DashButton,
+ Render,
+} from "~/components";
+
+Firewall for AI is a detection that can help protect your services powered by large language models (LLMs) against abuse. This model-agnostic detection currently helps you do the following:
+
+- Prevent data leaks of personally identifiable information (PII), such as phone numbers, email addresses, social security numbers, and credit card numbers.
+- Detect and moderate unsafe or harmful prompts, such as prompts potentially related to violent crimes.
+- Detect prompts intentionally designed to subvert the intended behavior of the LLM as specified by the developer, such as prompt injection attacks.
+
+When enabled, the detection runs on incoming traffic, searching for any LLM prompts attempting to exploit the model. Currently, the detection only handles requests with a JSON content type (`application/json`).
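+
+For example, a request like the following would be scanned by the detection. The hostname and request body shape below are illustrative only; your LLM endpoint and JSON schema will differ.
+
+```bash
+# curl's --json flag sends the body with a Content-Type: application/json header,
+# which is the content type the detection currently inspects.
+curl "https://example.com/api/v1/chat" \
+  --json '{ "prompt": "What is the phone number on file for example@example.com?" }'
+```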
+
+Cloudflare will populate the existing [Firewall for AI fields](/waf/detections/firewall-for-ai/fields/) based on the scan results. You can check these results in the [Security Analytics](/waf/analytics/security-analytics/) dashboard by filtering on the `cf-llm` [managed endpoint label](/api-shield/management-and-monitoring/endpoint-labels/) and reviewing the detection results on your traffic. Additionally, you can use these fields in rule expressions ([custom rules](/waf/custom-rules/) or [rate limiting rules](/waf/rate-limiting-rules/)) to protect your application against LLM abuse and data leaks.
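+
+For example, a [custom rule](/waf/custom-rules/) with the expression `(cf.llm.prompt.pii_detected)` and a _Block_ action would stop requests where PII was detected in an LLM prompt.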
+
+## Availability
+
+Firewall for AI is available in closed beta to Enterprise customers proxying traffic containing LLM prompts through Cloudflare. Contact your account team to get access.
+
+## More resources
+
+- [Cloudflare AI Gateway](/ai-gateway/)
+- [Learning Center: What are the OWASP Top 10 risks for LLMs?](https://www.cloudflare.com/learning/ai/owasp-top-10-risks-for-llms/)
diff --git a/src/content/partials/api-shield/labels-add-old-nav.mdx b/src/content/partials/api-shield/labels-add-old-nav.mdx
new file mode 100644
index 000000000000000..c321d53f172ff70
--- /dev/null
+++ b/src/content/partials/api-shield/labels-add-old-nav.mdx
@@ -0,0 +1,19 @@
+---
+params:
+ - labelName?
+---
+
+import { Markdown } from "~/components";
+
+{/* prettier-ignore-start */}
+
+1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/), and select your account and domain.
+2. Go to **Security** > **API Shield**.
+3. In the **Endpoint Management** tab, choose the endpoint that you want to label.
+4. Select **Edit labels**.
+5. { props.labelName ?
+     <>Add the {props.labelName} label to the endpoint.</> :
+     "Add the label(s) that you want to use for the endpoint from the list of managed and user-defined labels." }
+6. Select **Save labels**.
+
+{/* prettier-ignore-end */}
diff --git a/src/content/partials/api-shield/labels-add.mdx b/src/content/partials/api-shield/labels-add.mdx
new file mode 100644
index 000000000000000..3bd5650ddeb413f
--- /dev/null
+++ b/src/content/partials/api-shield/labels-add.mdx
@@ -0,0 +1,26 @@
+---
+params:
+ - labelName?
+---
+
+import { DashButton } from "~/components";
+
+{/* prettier-ignore-start */}
+
+1. In the Cloudflare dashboard, go to the **Web assets** page.
+
+
+2. In the **Endpoints** tab, choose the endpoint that you want to label.
+3. Select **Edit endpoint labels**.
+4. { props.labelName ? (
+     <>
+       Add the {props.labelName} label to the endpoint.
+     </>
+   ) : (
+     <>
+       Add the label(s) that you want to use for the endpoint from the list of managed and user-defined labels.
+     </>
+   )}
+5. Select **Save labels**.
+
+{/* prettier-ignore-end */}