Commit 60ccff8

Update workersai.md

1 parent 56ef94f

File tree

1 file changed: +1 −1 lines changed


docs/integrations/workersai.md

Lines changed: 1 addition & 1 deletion
@@ -120,7 +120,7 @@ async function initStatsig(env: Env) {
 **Explanation:**

 1. **`initStatsig(env)`**: This function initializes the Statsig SDK using the `CloudflareKVDataAdapter` to fetch configurations from Cloudflare KV, ensuring low-latency access to your experiment setups. Make sure to replace `'statsig-YOUR_STATSIG_PROJECT_ID'` with your actual Statsig project ID and configure `STATSIG_SERVER_API_KEY` and `STATSIG_KV` as environment variables in your Worker.
-2. **`Statsig.getExperimentSync(...)`**: This is the core of the experimentation. It retrieves the assigned experiment variant for the current user (based on `rayID`) for the `workers_ai_prompt` experiment. The `get()` method then safely retrieves the `prompt` and `model` parameters defined in your Statsig experiment, falling back to default values if the experiment or parameter is not found.
+2. **`Statsig.getExperimentSync(...)`**: This is the core of the experimentation. It retrieves the assigned experiment variant for the current user (based on `rayID`) for the `workers_ai_experiment` experiment. The `get()` method then safely retrieves the `prompt` and `model` parameters defined in your Statsig experiment, falling back to default values if the experiment or parameter is not found.
 3. **`env.AI.run(model, { prompt })`**: This executes the AI model provided by Cloudflare Workers AI with the dynamically chosen `model` and `prompt`.
 4. **Latency Measurement**: `performance.now()` is used to capture the start and end times of the AI inference, allowing you to track the `ai_inference_ms` metric.
 5. **`logUsageToStatsig(...)`**: This function logs a custom event (`cf_ai`) to Statsig. It includes the `model` used as the event value and attaches metadata such as `ai_inference_ms` and any `usage` information (e.g., token counts) returned by the AI model. This data is crucial for analyzing model performance and cost.
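The flow the explanation walks through can be sketched as follows. This is a simplified, self-contained sketch with the Statsig SDK and the Workers AI binding mocked out so the control flow runs on its own; the parameter names (`prompt`, `model`) and the latency metric match the doc, while the mock model ID, fallback prompt, and `runModel`/`getExperimentSync` stand-ins are illustrative assumptions, not the real SDK surface.

```typescript
// Minimal stand-in for the config object returned by Statsig.getExperimentSync:
// get() returns the variant's parameter, or the fallback if it is missing.
type ExperimentConfig = { get<T>(param: string, fallback: T): T };

function getExperimentSync(params: Record<string, unknown>): ExperimentConfig {
  return {
    get<T>(param: string, fallback: T): T {
      return (param in params ? params[param] : fallback) as T;
    },
  };
}

// Mock of the Workers AI binding (env.AI.run in a real Worker).
async function runModel(model: string, input: { prompt: string }) {
  return { response: `[${model}] ${input.prompt}`, usage: { total_tokens: 42 } };
}

async function handle(): Promise<{ model: string; prompt: string; latencyMs: number }> {
  // In a real Worker this variant would come from Statsig for the
  // workers_ai_experiment experiment, keyed by the request's rayID.
  const experiment = getExperimentSync({ model: "@cf/meta/llama-3.1-8b-instruct" });

  // Parameter lookups fall back to defaults when the experiment
  // or the parameter is not found (here, "prompt" is missing).
  const prompt = experiment.get("prompt", "Tell me a joke");
  const model = experiment.get("model", "@cf/meta/llama-3.1-8b-instruct");

  // Latency measurement around the inference call (the ai_inference_ms metric).
  const start = performance.now();
  const result = await runModel(model, { prompt });
  const latencyMs = performance.now() - start;

  // This is where logUsageToStatsig(...) would emit the cf_ai event with
  // the model as the value and { latencyMs, ...result.usage } as metadata.
  void result;
  return { model, prompt, latencyMs };
}
```

The sketch keeps the same fallback contract as the doc: a missing parameter never throws, it just resolves to the default, so a misconfigured experiment degrades to the baseline behavior.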
