Carlosroman/agtmetrics 340 v4 test better dogstatsd metrics stats #45080
carlosroman wants to merge 2 commits into main from
Go Package Import Differences
Baseline: b03eeb3
Static quality checks ✅
Please find below the results from static quality gates.
Successful checks
Info
23 successful checks with minimal change (< 2 KiB)
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: cde83e4

❌ Experiments with missing or malformed data
This is a critical error. No usable optimization goal data was produced by the listed experiments. This may be a result of misconfiguration. Ping #single-machine-performance and we can help out.

❌ Experiments with missing analysis due to target crashes
This is a critical error. One or more replicates failed with a non-zero exit code. No optimization goal or bounds check analysis will be present for these experiment(s). Ping #single-machine-performance and we can help out. See Replicate Execution Details for more information.

❌ Experiments with retried target crashes
This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +1.66 | [-1.31, +4.63] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +1.66 | [-1.31, +4.63] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +1.56 | [+0.10, +3.02] | 1 | Logs bounds checks dashboard |
| ➖ | otlp_ingest_logs | memory utilization | +1.37 | [+1.27, +1.48] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.38 | [+0.33, +0.44] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | +0.36 | [+0.16, +0.55] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.32 | [+0.25, +0.40] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.31 | [+0.08, +0.53] | 1 | Logs |
| ➖ | docker_containers_memory | memory utilization | +0.30 | [+0.23, +0.37] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.16 | [+0.12, +0.21] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.16 | [-0.00, +0.32] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | +0.06 | [-0.45, +0.57] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.02 | [-0.02, +0.07] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | +0.02 | [-0.36, +0.40] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.09, +0.09] | 1 | Logs |
| ➖ | quality_gate_idle_all_features | memory utilization | -0.01 | [-0.05, +0.03] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | -0.04 | [-0.27, +0.19] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.07 | [-0.13, -0.02] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.09 | [-0.50, +0.32] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.12 | [-0.27, +0.03] | 1 | Logs |
| ➖ | ddot_logs | memory utilization | -0.37 | [-0.44, -0.30] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | lost_bytes | 10/10 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
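To make the statistics concrete, below is a minimal Go sketch, not the Regression Detector's actual implementation, of how a Δ mean % and its 90% confidence interval could be estimated from per-replicate measurements; the sample values and the normal-approximation confidence interval are assumptions made purely for illustration.

```go
package main

import (
	"fmt"
	"math"
)

// meanAndVar returns the sample mean and unbiased sample variance of xs.
func meanAndVar(xs []float64) (mean, variance float64) {
	for _, x := range xs {
		mean += x
	}
	mean /= float64(len(xs))
	for _, x := range xs {
		variance += (x - mean) * (x - mean)
	}
	variance /= float64(len(xs) - 1)
	return mean, variance
}

func main() {
	// Hypothetical per-replicate "% cpu utilization" samples for each variant.
	baseline := []float64{41.2, 40.8, 41.5, 41.0, 41.3}
	comparison := []float64{41.9, 41.6, 42.1, 41.4, 42.0}

	mb, vb := meanAndVar(baseline)
	mc, vc := meanAndVar(comparison)

	// Δ mean %: comparison minus baseline, expressed as a percentage of the baseline mean.
	delta := (mc - mb) / mb * 100

	// Standard error of the difference of means, also as a percentage of the baseline mean.
	se := math.Sqrt(vb/float64(len(baseline))+vc/float64(len(comparison))) / mb * 100

	// 90% confidence interval via a normal approximation (z ≈ 1.645); the real
	// analysis may use a different statistical model.
	lo, hi := delta-1.645*se, delta+1.645*se
	fmt.Printf("Δ mean %% = %+.2f, 90%% CI [%+.2f, %+.2f]\n", delta, lo, hi)
}
```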
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
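As a concrete illustration of the three criteria above, this Go sketch applies them to the docker_containers_cpu row from the tables; the experimentResult type and its field names are assumptions for illustration, not the detector's API.

```go
package main

import "fmt"

// experimentResult holds the per-experiment summary used in the decision;
// the type and field names are illustrative only.
type experimentResult struct {
	DeltaMeanPct  float64 // estimated Δ mean %
	CILow, CIHigh float64 // 90% confidence interval for Δ mean %
	Erratic       bool    // experiment configuration marks it "erratic"
}

// isRegression applies the three criteria listed above with an effect size
// tolerance of 5.00%.
func isRegression(r experimentResult) bool {
	bigEnough := r.DeltaMeanPct >= 5.0 || r.DeltaMeanPct <= -5.0 // |Δ mean %| ≥ 5.00%
	ciExcludesZero := r.CILow > 0 || r.CIHigh < 0                // CI does not contain zero
	return bigEnough && ciExcludesZero && !r.Erratic
}

func main() {
	// docker_containers_cpu above: +1.66% with CI [-1.31, +4.63] is neither
	// large enough nor statistically distinguishable from zero.
	fmt.Println(isRegression(experimentResult{DeltaMeanPct: 1.66, CILow: -1.31, CIHigh: 4.63}))
}
```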
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be retried automatically if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running a replicate a replicate execution. This section lists all replicate executions that failed because the target crashed or was OOM killed.
Note: In the tables below we bucket failures by experiment, variant, and failure type. For each bucket we list the replicate indexes that failed, annotated with how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed via OOM kills: replicate 0 failed 8 executions and replicate 1 failed 6 executions, both with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
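The retry policy described above (retry a failed replicate execution up to 8 times, then mark the replicate dead) can be summarized with a minimal Go sketch; runReplicate and executeReplicate are hypothetical names, not part of the actual harness.

```go
package main

import (
	"errors"
	"fmt"
)

const maxAttempts = 8

// runReplicate is a hypothetical stand-in for launching one replicate execution.
func runReplicate(experiment, variant string, index int) error {
	return errors.New("oom killed") // pretend every execution fails
}

// executeReplicate retries a failing replicate up to maxAttempts times and
// reports how many executions failed and whether the replicate was marked dead.
func executeReplicate(experiment, variant string, index int) (failures int, dead bool) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := runReplicate(experiment, variant, index); err != nil {
			failures++
			continue
		}
		return failures, false
	}
	return failures, true // every attempt failed: the replicate is dead
}

func main() {
	failures, dead := executeReplicate("experiment_with_failures", "baseline", 0)
	fmt.Printf("replicate 0: %d failed executions, dead=%v\n", failures, dead)
	// With the annotation scheme above, this replicate would be listed as "0 (x8)".
}
```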
❌ Retried Normal Replicate Execution Failures (non-profiling)
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| quality_gate_metrics_logs | comparison | 9 (x2), 2 (x2), 5 (x2), 8, 1 (x2), 3 (x2), 0 (x4), 7, 6 (x2) | Crashed (exit code: 0) | Debug Dashboard |
| quality_gate_metrics_logs | comparison | 3 (x6), 9 (x6), 2 (x6), 5 (x6), 1 (x6), 6 (x6), 7 (x7), 0 (x4), 8 (x7), 4 (x8) | Oom killed | Debug Dashboard |
| uds_dogstatsd_to_api | comparison | 6 | Crashed (exit code: 0) | Debug Dashboard |
| uds_dogstatsd_to_api_v3 | comparison | 9, 4 | Crashed (exit code: 0) | Debug Dashboard |
❌ Retried Profiling Replicate Execution Failures (ddprof)
Note: Profiling replicas may still be executing. See the debug dashboard for up-to-date status.
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| quality_gate_metrics_logs | comparison | 10 (x2) | Crashed (exit code: 0) | Debug Dashboard |
| quality_gate_metrics_logs | comparison | 10 (x6) | Oom killed | Debug Dashboard |
❌ Retried Profiling Replicate Execution Failures (target internal profiling)
Note: Profiling replicas may still be executing. See the debug dashboard for up-to-date status.
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| quality_gate_idle_all_features | baseline | 11 (x4) | Oom killed | Debug Dashboard |
| quality_gate_idle_all_features | comparison | 11 (x4) | Oom killed | Debug Dashboard |
| quality_gate_metrics_logs | comparison | 11 | Crashed (exit code: 0) | Debug Dashboard |
| quality_gate_metrics_logs | comparison | 11 (x7) | Oom killed | Debug Dashboard |
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
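For illustration only, a quality gate of this kind passes when every replica passes its bounds check; the following minimal Go sketch (the types and names are hypothetical) captures that rule as applied to the lines above.

```go
package main

import "fmt"

// gateResult is a hypothetical summary of one bounds check within a quality gate.
type gateResult struct {
	Gate, BoundsCheck             string
	ReplicasPassed, ReplicasTotal int
}

// gatesPass reports whether every bounds check had all of its replicas pass,
// mirroring the "10/10 replicas passed. Gate passed." lines above.
func gatesPass(results []gateResult) bool {
	for _, r := range results {
		if r.ReplicasPassed < r.ReplicasTotal {
			return false
		}
	}
	return true
}

func main() {
	results := []gateResult{
		{"quality_gate_logs", "intake_connections", 10, 10},
		{"quality_gate_logs", "memory_usage", 10, 10},
		{"quality_gate_idle", "memory_usage", 10, 10},
	}
	fmt.Println("All Quality Gates passed:", gatesPass(results))
}
```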
What does this PR do?
Motivation
Describe how you validated your changes
Additional Notes