The experiment view displays real-time exposure data to help you monitor participation.

The exposures chart shows:

| Metric | Description |
|--------|-------------|
| **Daily cumulative count** | Unique users exposed to each variant over time |

Users exposed to multiple variants are completely removed from the experiment analysis.

### Use first seen variant

Users are analyzed based on the first variant they were exposed to, regardless of subsequent exposures. This maximizes sample size but may introduce some noise into your results.

## Sample ratio mismatch detection

Sample ratio mismatch (SRM) is an automatic check that compares your actual user distribution across variants against your configured rollout percentages. PostHog uses a chi-squared statistical test to detect when the observed distribution is significantly different from what's expected.
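
To make the check concrete, here's a minimal sketch of a chi-squared SRM test. The function name and the critical values table are illustrative, not PostHog's implementation:

```ts
// Chi-squared critical values at p = 0.001, indexed by degrees of freedom
// (number of variants minus one). Illustrative sketch only.
const CHI_SQUARED_CRITICAL_P001: Record<number, number> = {
  1: 10.828,
  2: 13.816,
  3: 16.266,
}

function hasSampleRatioMismatch(observed: number[], rolloutPercentages: number[]): boolean {
  const total = observed.reduce((sum, n) => sum + n, 0)
  // Expected counts follow the configured rollout percentages.
  const expected = rolloutPercentages.map((pct) => (total * pct) / 100)
  // Chi-squared statistic: sum of (observed - expected)^2 / expected.
  const chiSquared = observed.reduce(
    (stat, obs, i) => stat + (obs - expected[i]) ** 2 / expected[i],
    0
  )
  return chiSquared > CHI_SQUARED_CRITICAL_P001[observed.length - 1]
}

// 10,000 exposures on a 50/50 rollout, split 5,300 / 4,700:
// chi-squared = 300^2/5,000 + 300^2/5,000 = 36, far past 10.828, so SRM is flagged.
console.log(hasSampleRatioMismatch([5300, 4700], [50, 50])) // true
```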

### Why SRM matters

When your experiment has SRM, it means something may be systematically biasing which users end up in each variant. This can invalidate your experiment results because:
- The variants may no longer be comparable populations
- Any differences in metrics could be due to the bias rather than your changes
- Statistical conclusions become unreliable

### How it's displayed

PostHog automatically calculates SRM once your experiment has at least 100 total exposures. You'll see the result below the exposures table:
alt="Screenshot of sample ratio mismatch indicator"
135
+
/>
136
+
137
+
- **Green checkmark**: Distribution matches your rollout percentages. The observed difference is within normal random variation.
- **Yellow warning**: Sample ratio mismatch detected. The distribution is significantly different from expected.
PostHog uses a significance threshold of p < 0.001 to flag mismatches. The p-value is displayed alongside the status—a lower p-value indicates stronger evidence of a mismatch.

### What to do if SRM is detected

If you see a sample ratio mismatch warning, investigate before drawing conclusions from your experiment:

1. **Check your feature flag implementation**: Verify the flag is being evaluated correctly across all code paths. Ensure you're calling [`identify()`](/docs/product-analytics/identify) before evaluating flags (for frontend SDKs); see the first sketch after this list.

2. **Review release conditions**: If you're using property-based release conditions, ensure those properties are available at evaluation time. Missing properties can cause users to fall out of the experiment unevenly. Consider using a simple percentage-based rollout (e.g., 50/50) rather than complex conditions.

3. **Check for ad-blockers or network issues**: These can prevent feature flag calls from reaching PostHog for certain users, skewing your distribution. Consider [setting up a reverse proxy](/docs/advanced/proxy) to route requests through your own domain, which bypasses ad blockers and typically increases event capture by 10-30%. The proxy config sketch after this list shows the client-side change.

4. **Look for performance differences between variants**: If one variant causes pages to load slower or crash more often, users may drop off before the exposure event fires.

5. **Look for bot traffic**: Bots may trigger exposures unevenly across variants.

6. **Check for race conditions**: If your exposure event can fire before the feature flag is fully evaluated, some users may be incorrectly assigned. The first sketch after this list shows one way to guard against this.

7. **Verify test account filtering**: Ensure internal users aren't being included in ways that skew the distribution.
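
For steps 1 and 6, the safe frontend pattern is to identify the user first and only read the flag once flags have loaded. A minimal posthog-js sketch; the flag key `new-checkout` and the user ID are placeholders:

```ts
import posthog from 'posthog-js'

posthog.init('<ph_project_api_key>', { api_host: 'https://us.i.posthog.com' })

// Step 1: identify before evaluating flags so this user is bucketed consistently.
posthog.identify('user_123')

// Step 6: wait for flags to load before reading them, so the exposure
// can't be recorded against a stale or missing flag value.
posthog.onFeatureFlags(() => {
  const variant = posthog.getFeatureFlag('new-checkout')
  if (variant === 'test') {
    // Render the test experience.
  }
})
```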
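
For step 3, routing SDK traffic through your own domain is a one-line config change once the proxy is deployed. The host below is a placeholder for your proxy:

```ts
import posthog from 'posthog-js'

posthog.init('<ph_project_api_key>', {
  // Placeholder: your reverse proxy domain (see /docs/advanced/proxy).
  api_host: 'https://ph.example.com',
  // Keep PostHog UI links (e.g. the toolbar) pointing at PostHog itself.
  ui_host: 'https://us.posthog.com',
})
```
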
## Test account filtering
You can exclude internal team members and test accounts from your experiment by enabling test account filtering in the exposure criteria. This uses your project's test account filters to ensure only real users contribute to your metrics.