contents/docs/experiments/metrics.mdx
+34 -1 (34 additions, 1 deletion)
@@ -10,7 +10,7 @@ Once you've created your experiment, you can assign metrics to let you evaluate
## Metric types
-PostHog supports three types of metrics to measure different aspects of your experiment's impact. Choose the metric type that best aligns with your hypothesis.
+PostHog supports four types of metrics to measure different aspects of your experiment's impact. Choose the metric type that best aligns with your hypothesis.
### Funnel
@@ -61,6 +61,39 @@ Common use cases:
Both the numerator and denominator support all aggregation methods (count, sum, average, unique values). You can even use the same event with different properties for each part of the ratio, such as dividing total revenue by total quantity sold from purchase events.
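To make that concrete, here's a minimal sketch of a ratio metric that divides total revenue by total quantity sold across a set of purchase events. It's illustrative only, not PostHog's implementation, and the `revenue` and `quantity` property names are assumptions for the example.

```python
# Hypothetical purchase events; the property names are made up for illustration.
purchases = [
    {"revenue": 120.0, "quantity": 3},
    {"revenue": 45.0, "quantity": 1},
    {"revenue": 80.0, "quantity": 2},
]

# The numerator and denominator can aggregate the same event differently:
# here, sum of revenue divided by sum of quantity sold.
numerator = sum(event["revenue"] for event in purchases)
denominator = sum(event["quantity"] for event in purchases)

revenue_per_unit = numerator / denominator if denominator else 0.0
print(f"Revenue per unit sold: {revenue_per_unit:.2f}")  # 40.83
```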
### Retention
Use retention metrics to measure if users come back to perform a specific action within a defined time window after an initial action. This is ideal for measuring engagement patterns, feature stickiness, and long-term user behavior.
How it works:
1. Define a **start event** (e.g., user signs up, completes onboarding, makes first purchase)
2. Define a **completion event** (e.g., user logs in, uses a feature, makes another purchase)
3. Set a **retention window** specifying when the completion event should occur (e.g., between 1 and 7 days after the start event)
4. Each user either completes the action within the window (retained) or doesn't (not retained)
5. We calculate the retention rate (proportion of users who returned) for each variant, as sketched below
6. Statistical significance is determined using the difference in retention rates between variants
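As a rough sketch of steps 4 and 5 (using assumed data shapes, not PostHog's actual query), the per-variant retention rate is simply the share of that variant's users who completed the action inside their window:

```python
# Hypothetical per-user outcomes: each user has a variant assignment and a flag
# for whether they performed the completion event inside their retention window.
users = [
    {"variant": "control", "retained": True},
    {"variant": "control", "retained": False},
    {"variant": "test", "retained": True},
    {"variant": "test", "retained": True},
]

def retention_rate(users, variant):
    """Proportion of a variant's users who came back within the window."""
    cohort = [u for u in users if u["variant"] == variant]
    if not cohort:
        return 0.0
    return sum(u["retained"] for u in cohort) / len(cohort)

print(retention_rate(users, "control"))  # 0.5
print(retention_rate(users, "test"))     # 1.0
```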
**Retention window:**
The retention window has two boundaries that define when the completion event counts:
- **Start boundary** (inclusive): Earliest time after the start event (e.g., 1 day)
- **End boundary** (inclusive): Latest time after the start event (e.g., 7 days means up to and including day 7)
For example, a "1 to 7 days" retention window means the completion event must occur at least 1 full day after the start event, and up to 7 full days after (days 1-7 all count).
**How time periods are calculated:**
For day-based windows, PostHog compares calendar days rather than exact 24-hour periods. This means the retention window captures any time within the specified days. For example, a "7 days" window includes any completion event that occurs on the 7th calendar day after the start event, regardless of the specific times. Hour-based windows work the same way, using whole hours (e.g., 2 PM to 3 PM) rather than exact 60-minute durations.
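Here's a minimal illustration of that calendar-day comparison (the timestamps and the 1-to-7-day window are assumptions for the example):

```python
from datetime import datetime

# Illustrative timestamps: the completion happens on the 7th calendar day
# after the start event, even though more than 7 * 24 hours have elapsed.
start = datetime(2024, 5, 1, 0, 15)
completion = datetime(2024, 5, 8, 23, 30)

# Compare calendar days, not exact 24-hour periods: only the dates matter.
days_between = (completion.date() - start.date()).days  # 7

# Both boundaries of a "1 to 7 days" window are inclusive, so this counts.
retained = 1 <= days_between <= 7
print(days_between, retained)  # 7 True
```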
**Start handling:**
You can choose how PostHog identifies the start event for each user (see the sketch after this list):
- **First seen**: Uses the first occurrence of the start event (ideal for one-time actions like signup)
- **Last seen**: Uses the last occurrence of the start event (useful for recurring actions)
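In terms of raw event timestamps, the two options simply anchor the retention window at different occurrences of the start event, roughly like this (timestamps are illustrative):

```python
from datetime import datetime

# Hypothetical occurrences of the same start event for one user.
start_events = [
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 3, 14, 0),
    datetime(2024, 5, 6, 8, 0),
]

first_seen = min(start_events)  # anchors the window at the first occurrence
last_seen = max(start_events)   # anchors the window at the last occurrence
print(first_seen, last_seen)
```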
Common use cases:
- **Onboarding effectiveness**: Track if users who complete onboarding return within 7 days
- **Feature stickiness**: Measure if users who try a new feature come back to use it again
- **Re-engagement**: Test if marketing campaigns bring back dormant users
- **Churn reduction**: Measure if changes reduce the time until users return
## Outlier handling
You can limit the impact of extreme values by capping metric data at configurable lower and upper percentile thresholds. Metric values beyond those thresholds aren't included in experiment calculations.
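As a rough sketch of what percentile-based outlier handling looks like (illustrative thresholds, not PostHog's exact implementation), values outside the configured bounds are simply excluded before the metric is computed:

```python
import numpy as np

# Hypothetical per-user metric values, with one extreme outlier.
values = np.array([12, 15, 9, 14, 11, 13, 950], dtype=float)

# Configurable lower and upper percentile bounds (e.g., 5th and 95th).
lower, upper = np.percentile(values, [5, 95])

# Values beyond the thresholds are dropped from the experiment calculation.
filtered = values[(values >= lower) & (values <= upper)]
print(filtered.mean())  # 13.0
```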