For setting up Workers AI itself, please refer to the [Cloudflare Workers AI documentation](https://developers.cloudflare.com/workers-ai/).
### Overview
When you deploy a Cloudflare Worker running AI code, Statsig can automatically inject lightweight instrumentation to capture inference requests and responses. Statsig tracks the key metadata for each request (model, latency, token usage), and you can include any other signals you find valuable (success rates, user interactions, etc.).
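As a rough illustration of what that instrumentation can capture, the sketch below wraps an inference call and records the model, latency, and token usage. The `AiBinding` interface and `runInference` helper are hypothetical stand-ins for the real Workers AI binding, not Statsig's actual injected instrumentation.

```typescript
// Hypothetical shapes standing in for the Workers AI binding and its result.
interface AiResult {
  response: string;
  usage?: { prompt_tokens: number; completion_tokens: number };
}

interface AiBinding {
  run(model: string, input: { prompt: string }): Promise<AiResult>;
}

// Metadata captured per request: model, latency, and token usage.
interface InferenceRecord {
  model: string;
  ai_inference_ms: number;
  prompt_tokens?: number;
  completion_tokens?: number;
}

async function runInference(
  ai: AiBinding,
  model: string,
  prompt: string,
): Promise<{ response: string; record: InferenceRecord }> {
  const start = Date.now();
  const result = await ai.run(model, { prompt });
  return {
    response: result.response,
    record: {
      model,
      ai_inference_ms: Date.now() - start,
      prompt_tokens: result.usage?.prompt_tokens,
      completion_tokens: result.usage?.completion_tokens,
    },
  };
}
```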
This integration empowers developers to:
* **Experimentation:** Easily set up experiments (e.g., prompt “A” vs. prompt “B”, or Llama vs. DeepSeek models) and define success metrics (conversion, quality rating, user retention). Statsig dynamically determines which variation each request should use, ensuring statistically valid traffic splits.
* **Real-time Analytics:** The integrated Statsig SDK sends anonymized event data (model outputs, user interactions, metrics) back to Statsig’s servers in real time. Data is gathered at the edge with minimal overhead, then streamed to Statsig for fast analysis.
## Use Case 1: Prompt and/or Model Experiments
This use case demonstrates how to use Statsig experiments to test different prompts and AI models within your Cloudflare Worker. For this example, the experiment has four groups: a control using the default prompt and Llama model, and three variants, each switching to a different prompt, a different model (DeepSeek, in this case), or both.
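That group resolution can be sketched as follows, under the assumption that the experiment exposes `prompt_style` and `model` parameters — those names, the defaults, and the fallback logic are all illustrative, not the doc's actual sample code. The control group receives empty parameters and falls back to the defaults, while each variant overrides the prompt, the model, or both.

```typescript
// Defaults used by the control group; prompt and model IDs are illustrative.
const DEFAULT_PROMPT = "Summarize the following text:";
const DEFAULT_MODEL = "@cf/meta/llama-3-8b-instruct";

// Parameters a Statsig experiment might return for one of the four groups.
// An empty object corresponds to the control group.
interface ExperimentParams {
  prompt_style?: string;
  model?: string;
}

// Fall back to the defaults for any parameter the experiment did not set.
function resolveVariant(params: ExperimentParams): { prompt: string; model: string } {
  return {
    prompt: params.prompt_style ?? DEFAULT_PROMPT,
    model: params.model ?? DEFAULT_MODEL,
  };
}
```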
### Sample Experiment Setup in Statsig Console

See the sample code below for the experiment implementation in a Cloudflare Worker with AI.
## Use Case 2: Model Analytics
Beyond experiments, the logging mechanism illustrated below provides valuable insights into your AI model's performance and usage patterns. Even if you keep the default parameters for models and prompts, you can still get insights from the metadata you log to Statsig.
### What to track for Model Analytics:
* **Latency (`ai_inference_ms`):** Crucial for understanding user experience. You can monitor average, P90, and P99 latencies in Statsig.
* **Model Usage (e.g., `prompt_tokens`, `completion_tokens`):** If your AI provider returns token counts, logging these allows you to track cost and efficiency.
* **User Feedback:** If your application allows users to rate the AI's response (e.g., thumbs up/down), log these as Statsig events.
* **Downstream Metrics:** Track how the AI's output influences key business metrics (e.g., conversion rates if the AI is generating product descriptions, or user engagement if it's a chatbot).
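The fields above can be folded into a single event payload. The helper below is a hedged sketch: the `cloudflare_ai` event name and `ai_inference_ms` field come from this integration's examples, but `buildModelAnalyticsEvent` and its signature are invented for illustration and are not part of the Statsig SDK.

```typescript
// Assemble the tracked metrics into one event payload. Metadata values are
// kept as strings here; the function name and shape are illustrative only.
function buildModelAnalyticsEvent(
  model: string,
  latencyMs: number,
  usage?: { prompt_tokens?: number; completion_tokens?: number },
  userFeedback?: "up" | "down",
): { eventName: string; value: string; metadata: Record<string, string> } {
  const metadata: Record<string, string> = { ai_inference_ms: String(latencyMs) };
  if (usage?.prompt_tokens !== undefined) {
    metadata.prompt_tokens = String(usage.prompt_tokens);
  }
  if (usage?.completion_tokens !== undefined) {
    metadata.completion_tokens = String(usage.completion_tokens);
  }
  if (userFeedback !== undefined) {
    metadata.user_feedback = userFeedback;
  }
  return { eventName: "cloudflare_ai", value: model, metadata };
}
```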
### How to view Model Analytics in Statsig
By consistently logging these metrics, you can create custom dashboards in Statsig to monitor the health and effectiveness of your AI models in real time. This allows you to identify performance bottlenecks, cost inefficiencies, and areas for improvement.
5. **`logUsageToStatsig(...)`**: This function logs a custom event (`cloudflare_ai`) to Statsig. It includes the `model` used as the event value and attaches metadata such as `ai_inference_ms` and any `usage` information (e.g., token counts) returned by the AI model. This data is crucial for analyzing model performance and cost.
6. **`ctx.waitUntil(Statsig.flush(1000))`**: This ensures that all logged events are asynchronously sent to Statsig before the Worker's execution context is terminated, without blocking the response to the user.
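The flush-without-blocking pattern in step 6 reduces to the sketch below. `ExecutionCtxLike` and `flushEvents` are simplified stand-ins for Cloudflare's `ExecutionContext` and `Statsig.flush`; the point is only that the flush promise is handed to `waitUntil` instead of being awaited before responding.

```typescript
// Simplified stand-in for Cloudflare's ExecutionContext.
interface ExecutionCtxLike {
  waitUntil(promise: Promise<unknown>): void;
}

// A toy event queue standing in for the Statsig SDK's internal buffer.
const pendingEvents: string[] = [];

async function flushEvents(): Promise<number> {
  const sent = pendingEvents.length;
  pendingEvents.length = 0; // pretend the queued events were sent
  return sent;
}

// Return the response immediately; the runtime keeps the Worker alive until
// the scheduled flush settles (mirrors ctx.waitUntil(Statsig.flush(1000))).
function respondAndFlush(ctx: ExecutionCtxLike, body: string): { body: string } {
  ctx.waitUntil(flushEvents());
  return { body };
}
```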
## Other Use Cases enabled by this Integration
* **Prompt Tuning:** An e-commerce app running on Workers AI tries two different prompt styles for product descriptions. Statsig tracks cart conversion and time on site, revealing which prompt yields higher sales.
* **Model Selection:** A developer tests GPT-3.5 vs. GPT-4 within Cloudflare Workers AI. Statsig shows which model, combined with specific temperature or frequency penalty values, generates more accurate or user-satisfying results.