`src/content/docs/waf/detections/firewall-for-ai/get-started.mdx`
Save the relevant endpoint receiving LLM-related traffic to [Endpoint Management](/api-shield/management-and-monitoring/endpoint-management/) once it has been discovered, or add the endpoint manually.
Once you add a label to the endpoint, Cloudflare starts applying that label to the endpoint's incoming traffic.
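If you prefer to script the manual step, endpoints can also be saved through the API Shield operations API. The sketch below is an assumption, not a verbatim example from this guide: the zone ID, hostname, and token are placeholders, and the request shape should be checked against the Cloudflare API reference before use.

```shell
# Hypothetical sketch: add the LLM endpoint to Endpoint Management via the API.
# $ZONE_ID and $CLOUDFLARE_API_TOKEN are placeholders for your own values.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/api_gateway/operations" \
  --header "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '[
    {
      "method": "POST",
      "host": "example.com",
      "endpoint": "/v1/messages"
    }
  ]'
```

This mirrors what the dashboard flow does; the dashboard remains the simpler path if you only have a few endpoints to save.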
## 4. (Optional) Generate API traffic
You may need to issue some `POST` requests to the endpoint so that there is some labeled traffic to review in the following step.
95
153
96
154
For example, send a `POST` request to the API endpoint you previously added (`/v1/messages` in this example) in your zone with an LLM prompt requesting PII.
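As a sketch, such a request could look like the following. The hostname and request body are hypothetical; adapt the payload to your API's actual schema.

```shell
# Send a prompt containing an email address so the request is
# labeled with an email-related PII category.
curl "https://example.com/v1/messages" \
  --request POST \
  --header "Content-Type: application/json" \
  --data '{"prompt": "Write a follow-up email to jane.doe@example.com about her unpaid invoice."}'
```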
The PII category for this request would be `EMAIL_ADDRESS`.
Use [Security Analytics](/waf/analytics/security-analytics/) in the new application security dashboard to validate that the WAF is correctly labeling traffic for the endpoint.
<Steps>
1. In the Cloudflare dashboard, go to the **Analytics** page.
The displayed logs now refer to incoming requests where personally identifiable information (PII) was detected in an LLM prompt.
</Steps>
Alternatively, you can create a custom rule with the _Log_ action (only available on Enterprise plans) to check for potentially harmful traffic related to LLM prompts. This rule will generate [security events](/waf/analytics/security-events/) that allow you to validate your Firewall for AI configuration.
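As an illustration, such a rule could match on a Firewall for AI detection field. The field name below is an assumption based on the `cf.llm.*` namespace and should be verified against the WAF fields reference before deploying:

```txt
# Hypothetical custom rule (verify the field name in the WAF fields reference)
Expression: (cf.llm.prompt.pii_detected)
Action:     Log
```

With the _Log_ action, matching requests are recorded as security events without being blocked, which makes this a low-risk way to observe what the detection would catch.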