#### Worker Code for Prompt/Model Experimentation
See the sample code below for the experiment implementation in a Cloudflare Worker using Workers AI.
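As a self-contained sketch of the pattern (not the page's actual sample — the helper name, model ID, and prompt text here are hypothetical stand-ins for values a Statsig experiment would return via the server SDK):

```typescript
// Hypothetical shape of the parameters a prompt/model experiment might return.
interface ExperimentParams {
  model: string;
  systemPrompt: string;
}

// Stand-in for fetching experiment parameters from Statsig
// (e.g. via the Statsig server SDK); stubbed so the sketch is self-contained.
function getExperimentParams(userId: string): ExperimentParams {
  return {
    model: "@cf/meta/llama-3-8b-instruct", // assumed Workers AI model ID
    systemPrompt: "You are a concise assistant.",
  };
}

// Build the request a Worker would pass to env.AI.run(model, body).
function buildInferenceRequest(userId: string, userMessage: string) {
  const params = getExperimentParams(userId);
  return {
    model: params.model,
    body: {
      messages: [
        { role: "system", content: params.systemPrompt },
        { role: "user", content: userMessage },
      ],
    },
  };
}
```

Because both the model and the prompt come from experiment parameters, switching variants requires no code deploy — only a change in the Statsig console.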
### Use Case 2: Model Analytics
Beyond experiments, the logging mechanism demonstrated above provides valuable insights into your AI model's performance and usage patterns. You could keep the default parameters above and still get insights from the metadata you log to Statsig.
#### What to track for Model Analytics
* **Latency (`ai_inference_ms`):** Crucial for understanding user experience. You can monitor average, P90, and P99 latencies in Statsig.
* **Model Usage (e.g., `prompt_tokens`, `completion_tokens`):** If your AI provider returns token counts, logging these allows you to track cost and efficiency.
* **Error Rates:** Log events when the AI model returns an error or an unexpected response.
* **Output Quality (via custom events):**
  * **User Feedback:** If your application allows users to rate the AI's response (e.g., thumbs up/down), log these as Statsig events.
  * **Downstream Metrics:** Track how the AI's output influences key business metrics (e.g., conversion rates if the AI is generating product descriptions, or user engagement if it's a chatbot).
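These metrics map naturally onto the metadata of a single Statsig event. A minimal sketch of assembling that metadata — `ai_inference_ms` matches the field used in this guide, while the token and error field names are illustrative, not part of the Statsig API:

```typescript
// Token counts as some AI providers return them (field names vary by provider).
interface AiUsage {
  prompt_tokens?: number;
  completion_tokens?: number;
}

// Build the metadata object for one inference call. Statsig event metadata
// values are strings, so everything is stringified.
function buildEventMetadata(
  startMs: number,
  endMs: number,
  usage: AiUsage | undefined,
  errored: boolean,
): Record<string, string> {
  const metadata: Record<string, string> = {
    ai_inference_ms: String(endMs - startMs),
    errored: String(errored), // illustrative error-rate field
  };
  if (usage?.prompt_tokens !== undefined) {
    metadata.prompt_tokens = String(usage.prompt_tokens);
  }
  if (usage?.completion_tokens !== undefined) {
    metadata.completion_tokens = String(usage.completion_tokens);
  }
  // A Worker would then log this, e.g.:
  // Statsig.logEvent(user, "cloudflare_ai", model, metadata);
  return metadata;
}
```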
#### How to view Model Analytics in Statsig
By consistently logging these metrics, you can create custom dashboards in Statsig to monitor the health and effectiveness of your AI models in real-time. This allows you to identify performance bottlenecks, cost inefficiencies, and areas for improvement.
For instance, within minutes of adding the logging from the example below to your function, you can start to see the breakdown of latency per model with a query like this:
5. **`logUsageToStatsig(...)`**: This function logs a custom event (`cloudflare_ai`) to Statsig. It includes the `model` used as the event value and attaches metadata such as `ai_inference_ms` and any `usage` information (e.g., token counts) returned by the AI model. This data is crucial for analyzing model performance and cost.
6. **`ctx.waitUntil(Statsig.flush(1000))`**: This ensures that all logged events are asynchronously sent to Statsig before the Worker's execution context is terminated, without blocking the response to the user.
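The flush pattern in step 6 can be mimicked outside the Workers runtime with tiny stand-ins for `ctx.waitUntil` and the Statsig SDK — only the `flush` call and its 1000 ms timeout come from the text above; the rest is scaffolding to make the sketch self-contained:

```typescript
// Minimal stand-in for the Workers ExecutionContext.
type ExecutionContextLike = { waitUntil(p: Promise<unknown>): void };

const flushed: string[] = [];
const StatsigStub = {
  // Stand-in for Statsig.flush(timeoutMs) from the server SDK.
  async flush(timeoutMs: number): Promise<void> {
    flushed.push(`flushed within ${timeoutMs}ms`);
  },
};

// Collects the promises a handler registers via waitUntil, the way the
// Workers runtime would keep them alive after the response is returned.
function makeContext(): { ctx: ExecutionContextLike; pending: Promise<unknown>[] } {
  const pending: Promise<unknown>[] = [];
  return { ctx: { waitUntil: (p) => pending.push(p) }, pending };
}

async function handleRequest(ctx: ExecutionContextLike): Promise<string> {
  // ... run inference and log events here ...
  // Schedule the flush without delaying the response to the user.
  ctx.waitUntil(StatsigStub.flush(1000));
  return "response sent"; // returned immediately; flush completes in background
}
```

The key design point is that the response returns before the flush promise settles; `waitUntil` keeps the Worker alive just long enough for the events to reach Statsig.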
* **Prompt Tuning:** An e-commerce app running on Workers AI tries two different prompt styles for product descriptions. Statsig tracks cart conversion and time on site, revealing which prompt yields higher sales.
* **Model Selection:** A developer tests GPT-3.5 vs. GPT-4 within Cloudflare Workers AI. Statsig shows which model, combined with specific temperature or frequency penalty values, generates more accurate or user-satisfying results.