
Commit 5736587 ("better notes"), 1 parent: a0832b8

File tree

1 file changed: +30 −11 lines changed


docs/metrics.md

@@ -594,24 +594,43 @@ Consider these guidelines when determining the appropriate limit:
 above), use the same calculation approach as cumulative temporality.
 * For dynamic scenarios where not all combinations appear in every export cycle,
   base the limit on expected total measurements within a single interval.
-  * **Example 1:** With a 60-second export interval and 1,000 measurements per
-    interval, set the cardinality limit to 1,000. Delta temporality allows the
-    SDK to reset after each export, accommodating different attribute
-    combinations across intervals without accumulating state.
+  * **Example 1:** If your application generates at most 1,000 distinct attribute
+    combinations per export interval (regardless of the interval duration), set
+    the cardinality limit to 1,000. Delta temporality allows the SDK to reset
+    after each export, accommodating different attribute combinations across
+    intervals without accumulating state.
 * **Example 2:** For web applications with known Request Per Second (RPS) rates,
-  calculate the maximum measurements per interval: `RPS × Export Interval`. With
-  500 RPS and a 60-second interval: `500 × 60 = 30,000` measurements per cycle.
-  Set the cardinality limit to 30,000.
+  calculate the maximum measurements per interval: `RPS × Export Interval`
+  (assuming one measurement per request). With 500 RPS and a 60-second interval:
+  `500 × 60 = 30,000` measurements per cycle. Set the cardinality limit to
+  30,000.
 * **High-Cardinality Attributes:** Delta temporality excels with attributes like
-  `user_id` where not all values appear simultaneously. Base the limit on
-  concurrent active users within an interval rather than total possible users.
-  Using the same calculation (`500 RPS × 60 seconds = 30,000`), this
-  accommodates realistic concurrent user activity.
+  `user_id` where not all values appear simultaneously. Due to delta
+  temporality's state-clearing behavior and the fact that not all users are
+  active within a single interval, you can set a cardinality limit much lower
+  than the total possible cardinality. For example, even with millions of
+  registered users, if only 30,000 are active per interval (based on
+  `500 RPS × 60 seconds`), the cardinality limit can be set to 30,000 rather
+  than millions.
 * **Export Interval Tuning:** Reducing export intervals lowers cardinality
   requirements. With 30-second intervals: `500 × 30 = 15,000` measurements,
   allowing a lower limit. However, balance this against increased serialization
   and network overhead from more frequent exports.
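The per-interval arithmetic in the examples above can be sketched in a few lines of plain Python. The `delta_cardinality_limit` helper name is ours, not part of any SDK, and it assumes one measurement per request as the text does:

```python
# Back-of-the-envelope sizing for delta-temporality cardinality limits.
# Assumes one measurement per request, as in the examples above.

def delta_cardinality_limit(rps: int, export_interval_s: int) -> int:
    """Maximum measurements (and thus attribute sets) per export cycle."""
    return rps * export_interval_s

# Example 2: 500 RPS with a 60-second export interval.
print(delta_cardinality_limit(500, 60))  # 30000

# Export interval tuning: a 30-second interval halves the requirement.
print(delta_cardinality_limit(500, 30))  # 15000
```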

+**3. Backend Compatibility Considerations:**
+
+While delta temporality offers significant advantages for cardinality
+management, your choice may be constrained by backend support:
+
+* **Backend Restrictions:** Some metrics backends only support cumulative
+  temporality. For example, Prometheus requires cumulative temporality and
+  cannot directly consume delta metrics.
+* **Collector Conversion:** To leverage delta temporality's memory advantages
+  while maintaining backend compatibility, configure your SDK to use delta
+  temporality and deploy an OpenTelemetry Collector with a delta-to-cumulative
+  conversion processor. This approach pushes the memory overhead from your
+  application to the collector, which can be more easily scaled and managed
+  independently.
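The Collector wiring described above might look roughly like this. This is a sketch, not a verbatim config: the `deltatocumulative` processor ships in opentelemetry-collector-contrib, while the receiver/exporter choices and the endpoint shown here are illustrative assumptions:

```yaml
# Illustrative Collector pipeline: SDKs send delta metrics over OTLP, and the
# Collector converts them to cumulative before a cumulative-only backend.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  deltatocumulative: {}   # per-series state lives here instead of in the app

exporters:
  prometheusremotewrite:  # example cumulative-only destination (assumed)
    endpoint: http://prometheus.example:9090/api/v1/write

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [deltatocumulative]
      exporters: [prometheusremotewrite]
```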
 TODO: Add the memory cost incurred by each data point, so users can know the
 memory impact of setting a higher limit.
