The context associated with the provider has changed, and the provider has [reconciled](/docs/reference/concepts/sdk-paradigms#static-context-paradigms-client-side-sdks) its associated state.
docs/reference/concepts/07-tracking.mdx (13 additions, 7 deletions)
Tracking events associated with flag evaluations serves two primary purposes:
### Performance Monitoring
When changes are made to an application via feature flags, tracking helps measure their impact on performance.
By associating events with flag evaluation contexts and sending this data to telemetry or analytics platforms, teams can determine whether specific flag configurations improve or degrade measured performance, whether in business metrics or system performance.
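As a sketch of this idea, the snippet below groups tracked latency samples by the flag variant that was active when each event was recorded and compares the averages. All names (`PerfEvent`, `averageLatencyByVariant`, the variant labels) are illustrative, not part of any OpenFeature or vendor API:

```typescript
// Hypothetical sketch: correlating tracked events with flag variants to
// measure performance impact. Assumes each event already carries the
// variant from the flag evaluation context.

interface PerfEvent {
  variant: string;   // flag variant active when the event was recorded
  latencyMs: number; // measured latency for this request
}

// Group latency samples by variant and average them, so the different
// flag configurations can be compared in an analytics backend.
function averageLatencyByVariant(events: PerfEvent[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const e of events) {
    const bucket = (sums[e.variant] ??= { total: 0, count: 0 });
    bucket.total += e.latencyMs;
    bucket.count += 1;
  }
  const averages: Record<string, number> = {};
  for (const [variant, { total, count }] of Object.entries(sums)) {
    averages[variant] = total / count;
  }
  return averages;
}

const samples: PerfEvent[] = [
  { variant: "cached-menu", latencyMs: 80 },
  { variant: "cached-menu", latencyMs: 100 },
  { variant: "uncached-menu", latencyMs: 200 },
];
const avg = averageLatencyByVariant(samples);
// "cached-menu" averages 90 ms; "uncached-menu" averages 200 ms,
// suggesting the flagged change improves measured performance.
```

In a real deployment the aggregation would happen in the telemetry platform rather than in application code; the point is only that each event carries the variant it was recorded under.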
### Experimentation
Tracking creates a crucial link between flag evaluations and business outcomes, enabling robust experimentation.
Experimentation differs from generalized Performance Monitoring in its execution.
The most common form of experimentation is A/B testing, in which two variations of an application or feature are distributed randomly to similar groups and the differences in metrics are evaluated.
For example, if a feature flag controls the order of items in a menu, tracking events can be emitted when users click on menu items.
The feature flag provider can typically be set to distribute the different variations equally to your audience, making this an A/B test and helping to validate hypotheses about user behavior in a statistically relevant manner.
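The menu-ordering A/B test described above can be sketched as follows. This is a minimal illustration, not provider code: the variant names, `recordExposure`/`recordClick` helpers, and in-memory counters are all hypothetical, and in practice the flag provider assigns variants and the analytics platform computes the metrics:

```typescript
// Hypothetical A/B test bookkeeping: a flag assigns each user one of two
// menu orderings, click events are tracked per variant, and click-through
// rates are compared between the groups.

type Variant = "menu-order-a" | "menu-order-b";

interface ClickStats { exposures: number; clicks: number }

const stats: Record<Variant, ClickStats> = {
  "menu-order-a": { exposures: 0, clicks: 0 },
  "menu-order-b": { exposures: 0, clicks: 0 },
};

// Called when a user is shown the menu under a given variant.
function recordExposure(variant: Variant): void {
  stats[variant].exposures += 1;
}

// Called when the user clicks a menu item (the tracked event).
function recordClick(variant: Variant): void {
  stats[variant].clicks += 1;
}

// Click-through rate per variant: the metric compared between groups.
function clickThroughRate(variant: Variant): number {
  const { exposures, clicks } = stats[variant];
  return exposures === 0 ? 0 : clicks / exposures;
}
```

With roughly equal exposure counts per variant, a sustained difference in `clickThroughRate` between the two groups is the signal the experiment is designed to detect.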
## Providers, Hooks and Integration
## Track Event Implementation
The `track` function requires only a label parameter.
You don’t need to pass an identifier or flag evaluation context since the OpenFeature provider already maintains this information.
Optionally, you can associate additional metadata with each track event if your feature flagging tool supports it.
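The shape of such a call can be sketched with a stub provider. With a real OpenFeature SDK the client would come from the SDK and the provider would forward events to your vendor; here `StubProvider` simply records events so you can see that the caller supplies only a label (and optional details), while the provider contributes the context:

```typescript
// Minimal sketch, not the real OpenFeature SDK: a stub provider that
// already holds the evaluation context records track events, so callers
// pass only a label and optional metadata.

interface TrackingEventDetails { [key: string]: unknown }

interface RecordedEvent {
  name: string;                      // the label passed by the caller
  context: Record<string, unknown>;  // supplied by the provider, not the caller
  details?: TrackingEventDetails;    // optional metadata, if supported
}

class StubProvider {
  events: RecordedEvent[] = [];

  // The provider maintains the evaluation context itself.
  constructor(private context: Record<string, unknown>) {}

  track(name: string, details?: TrackingEventDetails): void {
    this.events.push({ name, context: this.context, details });
  }
}

const provider = new StubProvider({ targetingKey: "user-123" });
provider.track("checkout-completed");         // label only
provider.track("purchase", { value: 49.99 }); // label plus optional metadata
```

Note how both recorded events carry `targetingKey: "user-123"` even though the call sites never mention it: that is the property the section describes, with the provider attaching the context on the application's behalf.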