docs/experiment-analysis/configuration/filter-assignments-by-entry-point.md (2 additions, 2 deletions)
@@ -4,7 +4,7 @@ For some experiments, subjects are assigned to a variant in one place, but are n
## Entry point for an experiment
- Eppo provides the ability to filter an assignment source by an [Entry Point](/statistics/sample-size-calculator/setup#creating-entry-points) when configuring an experiment. This ensures that only the subjects assigned to that entry point are analyzed in the experiment, based on the logged events for that entry point. All decisions (inclusion into the experiment, time-framed metrics) are based on the timestamp of the entry point.
+ Eppo provides the ability to filter an assignment source by an [Entry Point](/statistics/sample-size-calculator/setup#creating-entry-points) (also known as a qualifying event) when configuring an experiment. This ensures that only the subjects assigned to that entry point are analyzed in the experiment, based on the logged events for that entry point. All decisions (inclusion into the experiment, time-framed metrics) are based on the timestamp of the entry point.
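To make those semantics concrete, here is a minimal TypeScript sketch of the general idea. The record shapes and the assumption that the qualifying event must occur at or after assignment are illustrative only, not Eppo's warehouse implementation:

```typescript
// Illustrative record shapes; in practice these come from your assignment and entry point sources.
type Assignment = { subject: string; variant: string; assignedAt: Date };
type EntryPointEvent = { subject: string; occurredAt: Date };

// Keep only subjects with a qualifying entry-point event, and anchor their
// analysis window to that event's timestamp rather than the assignment's.
function filterAssignmentsByEntryPoint(
  assignments: Assignment[],
  entryPoints: EntryPointEvent[],
): Array<Assignment & { analysisStart: Date }> {
  return assignments.flatMap((a) => {
    const qualifying = entryPoints
      .filter((e) => e.subject === a.subject && e.occurredAt >= a.assignedAt)
      .sort((x, y) => x.occurredAt.getTime() - y.occurredAt.getTime());
    // Subjects who never hit the entry point are excluded from the analysis.
    if (qualifying.length === 0) return [];
    // Time-framed metrics ("conversion within 7 days") count from this timestamp.
    return [{ ...a, analysisStart: qualifying[0].occurredAt }];
  });
}
```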
First you’ll need both an assignment source and an entry point source configured. Then, when setting up an experiment, check the box marked “Filter assignments by entry points” in the **Logging & Experiment Key** section:
@@ -18,4 +18,4 @@ The filtering will take place during the next experiment calculation (either dur
## Entry point for a sample size calculation
- Before you run a test, we recommend that you check how sensitive that experiment can be using our [Sample size calculator](/statistics/sample-size-calculator/). Knowing how large an effect you can detect let you prioritise testing impactful, detectable changes. When the change will only be visible after the assignment, you can define an [Entry Point](/statistics/sample-size-calculator/setup#creating-entry-points) to measure the sensitivity of the test more accurately.
+ Before you run a test, we recommend that you check how sensitive that experiment can be using our [Sample size calculator](/statistics/sample-size-calculator/). Knowing how large an effect you can detect lets you prioritize testing impactful, detectable changes. When the change will only be visible after the assignment, you can define an [Entry Point](/statistics/sample-size-calculator/setup#creating-entry-points) to measure the sensitivity of the test more accurately.
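For intuition only, here is a back-of-the-envelope sketch (not Eppo's implementation) of the kind of calculation a sample size calculator performs for a conversion metric, using the standard two-proportion approximation with α = 0.05 and 80% power:

```typescript
// Approximate absolute minimum detectable effect for a conversion metric,
// assuming a two-sided test at alpha = 0.05 (z ≈ 1.96) and 80% power (z ≈ 0.84).
function minimumDetectableEffect(baselineRate: number, subjectsPerVariant: number): number {
  const zAlpha = 1.96;
  const zBeta = 0.84;
  const variance = 2 * baselineRate * (1 - baselineRate);
  return (zAlpha + zBeta) * Math.sqrt(variance / subjectsPerVariant);
}

// 10,000 qualifying subjects per variant at a 5% baseline conversion rate:
console.log(minimumDetectableEffect(0.05, 10_000)); // ≈ 0.0086, i.e. roughly 0.9 percentage points
```

Filtering to subjects who hit the Entry Point changes both the baseline rate and the subject count that go into this formula, which is why defining one gives a more realistic sensitivity estimate.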
Tags allow you to organize your flags in the Eppo UI, but SDK tags include an additional benefit.
Eppo creates a configuration file for the client, but when you're using hundreds of flags across multiple applications, that file can grow unnecessarily large. SDK tags allow you to indicate which flags you'd like to include, reducing the configuration file size.
## Creating and managing tags
Tags can be created and managed in the Admin section. Each tag has a name and a description.

Clicking edit shows all SDK keys and flags that use the tag, and lets you remove them.
When you create or edit an SDK key, you can optionally choose tags to associate with that key. When tags are selected, only flags that have those same tags are included in the configuration file fetched by the application using that SDK key.

If no tags are selected, all flags will be included in the configuration file.
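A minimal sketch of that selection rule follows. Whether a flag needs any or all of the key's tags to match isn't spelled out above, so the sketch assumes a single shared tag is enough:

```typescript
type Flag = { key: string; tags: string[] };

// Which flags end up in the configuration file fetched with a given SDK key.
function flagsForSdkKey(allFlags: Flag[], sdkKeyTags: string[]): Flag[] {
  // No tags on the SDK key: every flag is included.
  if (sdkKeyTags.length === 0) return allFlags;
  // Otherwise, only flags sharing at least one tag with the key are included (assumption).
  return allFlags.filter((flag) => flag.tags.some((tag) => sdkKeyTags.includes(tag)));
}
```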
## Tagging flags
When you create or edit a flag, you can optionally choose tags to associate with the flag.

Once a flag is tagged, the tags will display on the Flag list.

In this case, tags can be useful for organizing your list of flags. Use the filter button and select the appropriate tags to see only the flags with those tags in the list.


docs/guides/advanced-experimentation/entry_points.md (4 additions, 4 deletions)
@@ -4,7 +4,7 @@ sidebar_position: 11
# When to add Entry Point filters
- This guide walks through scenarios of when and why to consider using Entry Points. If you want to learn how to configure an Entry Point in Eppo, refer to [this page](/experiment-analysis/configuration/filter-assignments-by-entry-point).
+ This guide walks through scenarios of when and why to consider using Entry Points (also known as qualifying events). If you want to learn how to configure an Entry Point in Eppo, refer to [this page](/experiment-analysis/configuration/filter-assignments-by-entry-point).
## Why exclude some users from an experiment?
@@ -20,7 +20,7 @@ If you A/B test a better customer service experience, you’ll end up splitting
Even within a week or two, the distinction should be blatant. You would likely make a decision that the new approach to customer service is better.
- If you include all your customers in the process, then you’ll have 9,000 extra participants in each variant that are not affected but the test, half of which will rate the service highly. The score will be around 4,500 + 250 = 4,750/10,000 vs. 5,250/10,000. The new treatment is still better but results will be noisier, resulting in a wider confidence interval of ±100 ($1.96 * \sqrt(.475 * .525 / 10,000) * 10,000$). The result after one month might not be conclusive. Your decision might have to wait for more evidence, while all you need is to focus on the information you already have.
+ If you include all your customers in the process, then you’ll have 9,000 extra participants in each variant that are not affected by the test, half of whom will rate the service highly. The score will be around 4,500 + 250 = 4,750/10,000 vs. 5,250/10,000. The new treatment is still better, but results will be noisier, resulting in a wider confidence interval of ±100 ($1.96 * \sqrt{.475 * .525 / 10,000} * 10,000$). The result after one month might not be conclusive. Your decision might have to wait for more evidence, while all you need is to focus on the information you already have by adding a qualifying event as an Entry Point.
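Spelling out the arithmetic behind that ±100:

$$
1.96 \times \sqrt{\frac{0.475 \times 0.525}{10{,}000}} \times 10{,}000 \approx 1.96 \times 49.9 \approx 98
$$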
## Examples when an Entry Point is useful
@@ -36,7 +36,7 @@ When testing, should you assign all users? You need to decide whether to trigger
There’s one concern, though: while all visitors will be assigned in the experiment, only the visitors who see the recommendation carousel are exposed to a different experience. (We’ll ignore the impact of triggering an expensive computation for now.) Therefore, the two-thirds of visitors who were assigned but never scrolled down the homepage should not be included in that experiment.
- For cases like that, we let you define an **Entry Point**: what event needs to happen for visitors to be exposed to a different experience, and considered enrolled to the experiment. It remains up to you to decide if this should be when the carousel enters the viewport, is fully or partially visible; it’s also up to your front-end developpers to trigger and log that event. But once that information is in your data warehouse, then you can use it to filter out which users participate in the experiment.
+ For cases like that, we let you define an **Entry Point**: what qualifying event needs to happen for visitors to be exposed to a different experience and be considered enrolled in the experiment. It remains up to you to decide whether this should be when the carousel enters the viewport, or when it is fully or partially visible; it’s also up to your front-end developers to trigger and log that event. But once that information is in your data warehouse, you can use it to filter which users participate in the experiment.
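As a hedged illustration of what that front-end logging might look like: the element id, `getUserId`, and `logEvent` below are placeholders for your own identifiers and event pipeline, and "at least half visible" is just one possible definition of exposure:

```typescript
// Placeholders: wire these to your own user identification and event logging.
declare function getUserId(): string;
declare function logEvent(name: string, payload: Record<string, unknown>): void;

const carousel = document.querySelector('#recommendation-carousel');
if (carousel) {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          // Log the qualifying event once, with the timestamp the analysis will be anchored to.
          logEvent('recommendation_carousel_viewed', {
            subject: getUserId(),
            timestamp: new Date().toISOString(),
          });
          observer.unobserve(entry.target);
        }
      }
    },
    { threshold: 0.5 }, // fire when at least half of the carousel is in the viewport
  );
  observer.observe(carousel);
}
```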
:::note
If you define an Entry Point, all the time-limited metrics (“Conversion 7 days after assignment”) are based on the timestamp of the Entry Point, not the assignment.
@@ -83,4 +83,4 @@ Assigning early lowers the cost of testing a solution by half. Complex models, s
There are many more examples where the assignment has to happen before entities can see a difference, like API calls during time-sensitive ad auctions; or where the different treatment is triggered, but might not be visible to users, like including a legal warning on a pop-up that could be blocked by the browser.
- If you have any questions, don’t hesitate to reach out to [Eppo support](mailto:[email protected]).
+ If you have any questions, don’t hesitate to reach out to [Eppo support](mailto:[email protected]).