docs: update experiments documentation with multivariate testing and new features
Update experiments v1 documentation to reflect current capabilities including
support for 4 variants (A/B/C/D testing), experiment type presets, winner
rollout options, and improved results interpretation guidance.
docs/tools/experiments-v1/configuring-experiments-v1.md (53 additions, 9 deletions)

@@ -52,13 +52,22 @@ Regardless of which method you choose, you'll need to configure the experiment s
## Required fields
- To create your experiment, you must first enter the following required fields:
+ To create your experiment, you must first enter:
+
+ - **Experiment name**: A descriptive name for your test
+ - **Experiment type** (optional): Choose from preset types (Introductory offer, Free trial offer, Paywall design, Price point, Subscription duration, Subscription ordering, or Other) to get relevant default metric suggestions
+ - **Notes** (optional): Add markdown-formatted notes to document your hypothesis and track insights
+ - **Variant A (Control)**: The Offering(s) for your control group (baseline)
+ - **Variant B (Treatment)**: The Offering(s) for your first treatment group
+
+ ### Adding more variants for multivariate testing
+
+ You can add up to 2 additional treatment variants:
+ - **Variant C (Treatment)**: Optional second treatment variant
+ - **Variant D (Treatment)**: Optional third treatment variant
+
+ Multivariate experiments allow you to test multiple variations against your control simultaneously, helping you identify the best-performing option more efficiently than running sequential A/B tests.
- - Experiment name
- - Control variant
- - The Offering(s) that will be used for your Control group
- - Treatment variant
- - The Offering(s) that will be used for your Treatment group (the variant in your experiment)
## Using Placements in Experiments
@@ -97,15 +106,15 @@ Select from any of the available dimensions to filter which new customers are en
**New customers to enroll**

- You can modify the % of new customers to enroll in 10% increments based on how much of your audience you want to expose to the test. Keep in mind that the enrolled new customers will be split between the two variants, so a test that enrolls 10% of new customers would yield 5% in the Control group and 5% in the Treatment group.
+ You can modify the % of new customers to enroll in 10% increments based on how much of your audience you want to expose to the test. Keep in mind that the enrolled new customers will be split evenly between all variants. For example, an A/B test (2 variants) that enrolls 10% of new customers would yield 5% in the Control group and 5% in the Treatment group. A 4-variant multivariate test enrolling 20% of new customers would yield 5% in each variant.

Once done, select **CREATE EXPERIMENT** to complete the process.

## Starting an experiment

When viewing a new experiment, you can start, edit, or delete the experiment.

- - **Start**: Starts the experiment. Customer enrollment and data collection begins immediately, but results will take up to 24 hours to begin populating.
+ - **Start**: Starts the experiment. Customer enrollment and data collection begin immediately, but results will take up to 24 hours to begin populating. After that, results refresh periodically - check the **Last updated** timestamp on the Results page to see when data was last refreshed.
- **Edit**: Change the name, enrollment criteria, or Offerings in an experiment before it's been started. After it's been started, only the percent of new customers to enroll can be edited.
- **Delete**: Deletes the experiment.
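As a quick check of the enrollment math in the updated paragraph above, the per-variant share is simply the enrolled percentage divided by the number of variants. A minimal sketch (illustration only, not dashboard behavior):

```swift
/// Share of all new customers each variant receives, assuming enrollment
/// is split evenly across variants, as the updated docs describe.
func perVariantShare(enrolledPercent: Double, variantCount: Int) -> Double {
    enrolledPercent / Double(variantCount)
}

print(perVariantShare(enrolledPercent: 10, variantCount: 2)) // 5.0 - A/B test
print(perVariantShare(enrolledPercent: 20, variantCount: 4)) // 5.0 - A/B/C/D test
```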
@@ -154,6 +163,38 @@ When an experiment is stopped:
- Customers who were enrolled will begin receiving the Default Offering on their next paywall view
- Results will continue to refresh for 400 days after the experiment has ended
+
+ ## Rolling out a winner
+
+ Once you've identified a winning variant from your experiment results, you can roll it out to all your users. RevenueCat provides several options for applying your experiment results:
+
+ ### Rollout options
+
+ When you mark a variant as the winner, you can choose from these rollout strategies:
+
+ 1. **Set as default offering**: The winning variant's offering becomes your project's default offering, served to all customers who aren't targeted by specific rules
+ 2. **Create targeting rule**: Create a new targeting rule that serves the winning offering to a specific audience (e.g., specific countries, platforms, or custom attributes)
+ 3. **Mark winner only**: Record which variant won without immediately changing your offering configuration - useful for tracking insights and planning future rollouts
+
+ ### How to roll out a winner
+
+ 1. Navigate to your experiment's results page
+ 2. Review the performance data to identify the winning variant
[…]
+ If you want to roll out your winning variant gradually rather than all at once, choose the "Create targeting rule" option and set the rule to apply to a percentage of your audience. You can increase the percentage over time as you gain confidence.
+ :::
+
+ :::info Experiment data after rollout
+ After rolling out a winner, your experiment results remain available for 400 days, allowing you to track long-term performance and learn from your test.
+ :::
## Running multiple tests simultaneously

You can use Experiments to run multiple tests simultaneously as long as:
@@ -203,7 +244,7 @@ When an experiment is running, only the percent of new customers to enroll can b
| Can I edit the Offerings in a started experiment? | Editing an Offering for an active experiment would make the results unusable. Be sure to check before starting your experiment that your chosen Offerings render correctly in your app(s). If you need to make a change to your Offerings, stop the experiment and create a new one with the updated Offerings. |
| Can I run multiple experiments simultaneously? | Yes, as long as they meet the criteria described above. |
| Can I run an experiment targeting different app versions for each app in my project? | No, at this time we don't support setting up an experiment in this way. However, you can certainly create unique experiments for each app, and target them by app version to achieve the same result in independent tests. |
- | Can I add multiple Treatment groups to a single test? |No, you cannot add multiple Treatment groups to a single test. However, by running multiple tests on the same audience to capture each desired variant you can achieve the same result. |
+ | Can I add multiple Treatment groups to a single test? | Yes, experiments support up to 4 variants total: 1 Control (Variant A) and up to 3 Treatment variants (B, C, D). This allows you to test multiple variations simultaneously in a single multivariate experiment. |
| Can I edit the enrollment criteria of a started experiment? | Before an experiment has been started, all aspects of enrollment criteria can be edited. However, once an experiment has been started, only new customers to enroll can be edited, since editing the audience that an experiment is exposed to would alter the nature of the test. |
| What's the difference between pausing and stopping an experiment? | Pausing temporarily stops new customer enrollment while existing participants continue to see their assigned variant. The experiment can be resumed later. Stopping permanently ends the experiment: new customers won't be enrolled and existing participants will see the Default Offering on their next paywall view. A stopped experiment cannot be restarted. Both paused and stopped experiments continue collecting data for up to 400 days. |
| Can I pause an experiment multiple times? | Yes, you can pause and resume an experiment as many times as needed. This allows you to control enrollment based on your testing needs and timeline. |
@@ -213,4 +254,7 @@ When an experiment is running, only the percent of new customers to enroll can b
| Can I restart an experiment after it's been stopped? | After you choose to stop an experiment, new customers will no longer be enrolled in it, and it cannot be restarted. However, if you need to temporarily halt new enrollments with the option to resume later, consider using the pause feature instead. Paused experiments can be resumed at any time. If you've already stopped an experiment and want to continue testing, create a new experiment and choose the same Offerings as the stopped experiment. You can use the duplicate feature to quickly recreate the same experiment configuration. *(NOTE: Results for stopped experiments will continue to refresh for 400 days after the experiment has ended)* |
| Can I duplicate an experiment? | Yes, you can duplicate any existing experiment from the experiments list using the context menu. This creates a new experiment with the same configuration as the original, which you can then modify as needed before starting. This is useful for running similar tests or follow-up experiments. |
| What happens to customers that were enrolled in an experiment after it's been stopped? | New customers will no longer be enrolled in an experiment after it's been stopped, and customers who were already enrolled in the experiment will begin receiving the Default Offering if they reach a paywall again. Since we continually refresh results for 400 days after an experiment has been ended, you may see renewals from these customers in your results, since they were enrolled as part of the test while it was running; but new subscriptions started by these customers after the experiment ended and one-time purchases made after the experiment ended will not be included in the results. |
+ | How many variants should I use in my experiment? | Start with 2 variants (A/B test) for most cases. Use 3-4 variants (multivariate) when you have multiple distinct hypotheses to test simultaneously. Keep in mind that more variants require more customers to reach statistical significance, so tests take longer. |
+ | What experiment type should I choose? | Choose the preset that best matches what you're testing. Presets provide relevant default metrics: "Price point" suggests revenue metrics, "Free trial offer" suggests trial conversion metrics, etc. You can always customize metrics after selecting a type. |
+ | What happens after I pick a winner? | After analyzing your results, you can roll out the winning variant by: (1) Setting the winning variant's offering as your default offering, (2) Creating a new targeting rule to serve the winning offering to specific audiences, or (3) Simply marking a winner without immediate rollout for your records. You'll choose the rollout approach when you declare the winner. |
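The new FAQ row on variant count notes that more variants require more customers to reach statistical significance. As a rough illustration of why, here is the standard two-proportion sample-size approximation (illustrative only, not RevenueCat's internal statistics):

```swift
import Foundation

/// Approximate customers needed per variant to detect a change in a
/// conversion rate, using the standard two-proportion formula
/// (illustrative only; not RevenueCat's methodology).
/// - Parameters:
///   - p1: baseline (control) conversion rate, e.g. 0.05
///   - p2: smallest treatment rate you want to detect, e.g. 0.06
///   - zAlpha: z-score for significance (1.96 ≈ 95% confidence)
///   - zBeta: z-score for power (0.84 ≈ 80% power)
func customersPerVariant(p1: Double, p2: Double,
                         zAlpha: Double = 1.96, zBeta: Double = 0.84) -> Int {
    let variance = p1 * (1 - p1) + p2 * (1 - p2)
    return Int((pow(zAlpha + zBeta, 2) * variance / pow(p1 - p2, 2)).rounded(.up))
}

// Roughly 8,146 customers per variant to detect a 5% -> 6% lift, so a
// 4-variant test needs about twice the total enrollment of an A/B test.
print(customersPerVariant(p1: 0.05, p2: 0.06))
```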
docs/tools/experiments-v1/creating-offerings-to-test.md (12 additions, 0 deletions)

@@ -18,6 +18,18 @@ Through Experiments, you can test any variable related to the products you're se
In addition, by programming your app to be responsive to Offering Metadata, you can test any other paywall variable outside of your product selection as well. [Learn more here](/tools/offering-metadata).
+ ### Testing multiple variations at once
+
+ With support for up to 4 variants, you can test multiple hypotheses simultaneously. For example:
+ - **Variant A (Control)**: Current $9.99/month pricing
+ - **Variant B**: Test $7.99/month (lower price)
+ - **Variant C**: Test $12.99/month (higher price)
+ - **Variant D**: Test $9.99/month with 7-day trial (same price, add trial)
+
+ This multivariate approach can be faster than running sequential A/B tests, but requires more traffic to reach statistical significance.
+
+ When choosing experiment types in the dashboard, select the preset that matches your primary variable (e.g., "Price point" for pricing tests, "Free trial offer" for trial tests). This will suggest relevant metrics to track for your experiment.
## Setting up a new offering to test your hypothesis
Experiments uses [Offerings](/getting-started/entitlements#offerings) to represent the hypothesis that's being tested (aka: the group of products that will be offered to your customers). An Offering is a collection of Packages that contain Products from each store you're looking to serve that Offering on.
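The Offering Metadata paragraph above is the hook for testing paywall variables beyond product selection. A minimal Swift sketch of a paywall reading metadata off the fetched Offering, assuming a recent RevenueCat iOS SDK (which exposes `Offering.metadata`) and two hypothetical metadata keys, `headline` and `show_trial_badge`, set per Offering in the dashboard:

```swift
import RevenueCat

Purchases.shared.getOfferings { offerings, _ in
    guard let offering = offerings?.current else { return }

    // Packages for whichever variant this customer was assigned to.
    let packages = offering.availablePackages

    // "headline" and "show_trial_badge" are example keys invented for this
    // sketch, not built-in fields.
    let headline = offering.metadata["headline"] as? String ?? "Go Premium"
    let showTrialBadge = offering.metadata["show_trial_badge"] as? Bool ?? false

    print("Rendering \(packages.count) packages, headline: \(headline), trial badge: \(showTrialBadge)")
}
```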
docs/tools/experiments-v1/experiment-results-summaries.md (3 additions, 1 deletion)

@@ -25,6 +25,8 @@ You must first verify your email address with us in order to receive Experiment
We'll send you an email for each experiment you've had running in the last week in the Projects that you've subscribed to receive these summaries for. It will include the latest results for the experiment, focused on the following key metrics.
+ For multivariate experiments (3-4 variants), the summary includes performance for all variants compared to the control.

| Initial conversion rate | The percent of customers who purchased any product. |
@@ -33,7 +35,7 @@ We'll send you an email for each experiment you've had running in the last week
| Realized LTV (revenue) | The total revenue that's been generated so far (realized). |
| Realized LTV per customer | The total revenue that's been generated so far (realized), divided by the number of customers. This should frequently be your primary success metric for determining which variant performed best. |

- All metrics are reported separately for the Control variant, the Treatment variant, and the relative difference between them.
+ All metrics are reported separately for the Control variant, each Treatment variant, and the relative difference between each treatment and control.
:::tip Full results on the Dashboard
To analyze how these metrics have changed over time, review other metrics, and breakdown performance by product or platform; you can click on the link in the email to go directly to the full results of your experiment.
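To help read those summary numbers, the relative difference column is conventionally the treatment metric's change against the control. A small sketch of that standard calculation (shown for interpretation only; the emails' exact computation isn't specified here):

```swift
/// Relative difference of a treatment metric vs. the control, in percent,
/// e.g. realized LTV per customer. Standard definition, shown only to help
/// interpret the summary; not a documented RevenueCat formula.
func relativeDifference(control: Double, treatment: Double) -> Double {
    (treatment - control) / control * 100
}

print(relativeDifference(control: 0.52, treatment: 0.61)) // ≈ +17.3%
```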
docs/tools/experiments-v1/experiments-overview-v1.md (24 additions, 4 deletions)

@@ -5,7 +5,7 @@ slug: experiments-overview-v1
hidden: false
---

- Experiments allow you to answer questions about your users' behaviors and app's business by A/B testing two unique paywall configurations in your app and analyzing the full subscription lifecycle to understand which variant is producing more value for your business.
+ Experiments allow you to answer questions about your users' behaviors and app's business by A/B testing multiple paywall configurations (2-4 variants) in your app and analyzing the full subscription lifecycle to understand which variant is producing more value for your business.

While price testing is one of the most common forms of A/B testing in mobile apps, Experiments are based on RevenueCat Offerings, which means you can A/B test more than just prices, including: trial length, subscription length, different groupings of products, etc.
@@ -25,6 +25,22 @@ If you need help making your paywall more dynamic, see [Displaying Products](/ge
To learn more about creating a new Offering to test, and some tips to keep in mind when creating new Products on the stores, [check out our guide here](/tools/experiments-v1/creating-offerings-to-test).
:::

+ ## Experiment Types
+
+ When creating an experiment, you can choose from preset experiment types that help guide your setup with relevant default metrics:
+
+ - **Introductory offer** - Test different introductory pricing strategies
+ - **Free trial offer** - Compare trial lengths or presence/absence of trials
+ - **Paywall design** - Test different paywall layouts and presentations
+ - **Price point** - Compare different price points for your products
+ - **Subscription duration** - Test different subscription lengths (monthly vs yearly)
+ - **Subscription ordering** - Test different product ordering or prominence
+
+ Choosing the right preset automatically suggests relevant metrics for your experiment type, making it easier to track what matters most for your test.
+
+ You can also click **+ New experiment** to create a custom experiment with your own metrics without selecting a preset.



As soon as a customer is enrolled in an experiment, they'll be included in the "Customers" count on the Experiment Results page, and you'll see any trial starts, paid conversions, status changes, etc. represented in the corresponding metrics. (Learn more [here](/tools/experiments-v1/experiments-results-v1))
@@ -55,9 +71,9 @@ Programmatically displaying the `current` Offering in your app when you fetch Of
:::

1. Create the Offerings that you want to test (make sure your app displays the `current` Offering). You can skip this step if you already have the Offerings you want to test.
- 2. Create an Experiment and choose the Offerings to test. You can create a new experiment from scratch or duplicate an existing experiment to save time when testing similar configurations. By default you can choose one Offering per variant, but by creating Placements your Experiment can instead have a unique Offering displayed for each paywall location in your app. [Learn more here](https://www.revenuecat.com/docs/tools/experiments-v1/configuring-experiments-v1#using-placements-in-experiments).
- 3. Run your experiment and monitor the results. There is no time limit on experiments, so stop it when you feel confident choosing an outcome. (Learn more about interpreting your results [here](/tools/experiments-v1/experiments-results-v1))
- 4. Once you're satisfied with the results you can set the winning Offering(s), if any, as default manually.
+ 2. Create an Experiment and choose between 2-4 variants to test. You can select from experiment type presets (Price point, Free trial offer, etc.) to get relevant default metrics, or create a custom experiment. You can create a new experiment from scratch or duplicate an existing experiment to save time when testing similar configurations. By default you can choose one Offering per variant, but by creating Placements your Experiment can instead have a unique Offering displayed for each paywall location in your app. [Learn more here](https://www.revenuecat.com/docs/tools/experiments-v1/configuring-experiments-v1#using-placements-in-experiments).
+ 3. Run your experiment and monitor the results. There is no time limit on experiments, so you can pause enrollment if needed and stop it when you feel confident choosing an outcome. (Learn more about interpreting your results [here](/tools/experiments-v1/experiments-results-v1))
+ 4. Once you're satisfied with the results, roll out the winning variant. You can set the winning Offering as default, create a targeting rule, or simply mark the winner for your records.
5. Then, you're ready to run a new experiment.

Visit [Configuring Experiments](https://www.revenuecat.com/docs/configuring-experiments-v1) to learn how to set up your first test.
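Step 1 in the checklist above assumes the app renders whatever Offering RevenueCat marks as `current`, which is how enrolled customers automatically see their assigned variant. A minimal Swift sketch of that fetch, assuming a standard RevenueCat iOS SDK setup (`showPaywall` is a placeholder for your own rendering code):

```swift
import RevenueCat

func showPaywall(packages: [Package]) {
    // Placeholder: hand the packages to your paywall UI.
    print("Displaying \(packages.count) packages")
}

func loadPaywall() {
    Purchases.shared.getOfferings { offerings, error in
        if let error = error {
            print("Failed to fetch offerings: \(error.localizedDescription)")
            return
        }
        // While an experiment is running, `current` resolves to the variant
        // this customer was assigned to; otherwise it falls back to the
        // Default Offering, so no app update is needed between experiments.
        guard let offering = offerings?.current else { return }
        showPaywall(packages: offering.availablePackages)
    }
}
```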
@@ -82,6 +98,10 @@ You can't restart a test once it's been stopped.
It's tempting to try to test multiple variables at once, such as free trial length and price; resist that temptation! The results are often clearer when only one variable is tested. You can run more tests for other variables as you further optimize your LTV.

+ :::tip Multivariate testing
+ With support for up to 4 variants, you can test multiple variations of the same variable simultaneously (e.g., testing $5, $7, and $9 price points in a single experiment). This is different from testing multiple variables at once - each variant should differ by the same variable to keep results interpretable.
+ :::

**Run multiple tests simultaneously to isolate variables & audiences**

If you're looking to test the price of a product and its optimal trial length, you can run 2 tests simultaneously that each target a subset of your total audience. For example, Test #1 can test price with 20% of your audience; and Test #2 can test trial length with a different 20% of your audience.