Add custom CI Visibility spans for new-e2e test jobs #47075
Conversation
Gitlab CI Configuration Changes
Changes Summary
ℹ️ Diff available in the job log.
Files inventory check summary
File checks results against ancestor a6298b3e:
Results for datadog-agent_7.79.0~devel.git.9.3e53e15.pipeline.103731930-1_amd64.deb: No change detected
Static quality checks
✅ Please find below the results from static quality gates: 31 successful checks with minimal change (< 2 KiB).
On-wire sizes (compressed)
Regression Detector
Regression Detector Results
Metrics dashboard
Baseline: 3fb6170
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +4.43 | [+1.30, +7.56] | 1 | Logs |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | docker_containers_cpu | % cpu utilization | +4.43 | [+1.30, +7.56] | 1 | Logs |
| ➖ | quality_gate_logs | % cpu utilization | +1.34 | [-0.24, +2.92] | 1 | Logs bounds checks dashboard |
| ➖ | quality_gate_idle_all_features | memory utilization | +0.36 | [+0.32, +0.40] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_metrics_sum_cumulative | memory utilization | +0.31 | [+0.16, +0.45] | 1 | Logs |
| ➖ | ddot_metrics | memory utilization | +0.22 | [+0.05, +0.40] | 1 | Logs |
| ➖ | uds_dogstatsd_20mb_12k_contexts_20_senders | memory utilization | +0.22 | [+0.16, +0.28] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.21 | [+0.16, +0.26] | 1 | Logs |
| ➖ | quality_gate_idle | memory utilization | +0.18 | [+0.13, +0.23] | 1 | Logs bounds checks dashboard |
| ➖ | docker_containers_memory | memory utilization | +0.12 | [+0.05, +0.19] | 1 | Logs |
| ➖ | ddot_metrics_sum_cumulativetodelta_exporter | memory utilization | +0.07 | [-0.16, +0.29] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.06 | [-0.38, +0.50] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.03 | [-0.06, +0.12] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_v3 | ingress throughput | +0.00 | [-0.19, +0.19] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.00 | [-0.20, +0.19] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.09, +0.09] | 1 | Logs |
| ➖ | ddot_metrics_sum_delta | memory utilization | -0.02 | [-0.20, +0.15] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.06 | [-0.52, +0.40] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.06 | [-0.45, +0.33] | 1 | Logs |
| ➖ | quality_gate_metrics_logs | memory utilization | -0.27 | [-0.50, -0.03] | 1 | Logs bounds checks dashboard |
| ➖ | ddot_logs | memory utilization | -0.27 | [-0.33, -0.21] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.72 | [-0.86, -0.58] | 1 | Logs |
| ➖ | otlp_ingest_metrics | memory utilization | -0.73 | [-0.89, -0.57] | 1 | Logs |
| ➖ | otlp_ingest_logs | memory utilization | -1.44 | [-1.54, -1.35] | 1 | Logs |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | observed_value | links |
|---|---|---|---|---|---|
| ✅ | docker_containers_cpu | simple_check_run | 10/10 | 651 ≥ 26 | |
| ✅ | docker_containers_memory | memory_usage | 10/10 | 275.03MiB ≤ 370MiB | |
| ✅ | docker_containers_memory | simple_check_run | 10/10 | 679 ≥ 26 | |
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | 0.19GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_0ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | 0.23GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_1000ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | 0.19GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_100ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | 0.21GiB ≤ 1.20GiB | |
| ✅ | file_to_blackhole_500ms_latency | missed_bytes | 10/10 | 0B = 0B | |
| ✅ | quality_gate_idle | intake_connections | 10/10 | 3 = 3 | bounds checks dashboard |
| ✅ | quality_gate_idle | memory_usage | 10/10 | 174.62MiB ≤ 175MiB | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | intake_connections | 10/10 | 2 ≤ 3 | bounds checks dashboard |
| ✅ | quality_gate_idle_all_features | memory_usage | 10/10 | 491.49MiB ≤ 550MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | intake_connections | 10/10 | 3 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_logs | memory_usage | 10/10 | 202.17MiB ≤ 220MiB | bounds checks dashboard |
| ✅ | quality_gate_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | cpu_usage | 10/10 | 356.07 ≤ 2000 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | intake_connections | 10/10 | 4 ≤ 6 | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | memory_usage | 10/10 | 400.36MiB ≤ 475MiB | bounds checks dashboard |
| ✅ | quality_gate_metrics_logs | missed_bytes | 10/10 | 0B = 0B | bounds checks dashboard |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
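Taken together, these criteria amount to a small predicate. The following Python sketch is illustrative only (it is not part of the detector); the numbers in the example come from the docker_containers_cpu row above.

```python
# Illustrative sketch of the regression decision rule described above.
def is_regression(delta_mean_pct: float, ci_low: float, ci_high: float,
                  erratic: bool, tolerance_pct: float = 5.0) -> bool:
    large_enough = abs(delta_mean_pct) >= tolerance_pct       # effect size tolerance
    ci_excludes_zero = ci_low > 0 or ci_high < 0              # 90% CI does not contain zero
    return large_enough and ci_excludes_zero and not erratic

# docker_containers_cpu: the CI excludes zero, but |Δ mean %| < 5.00%
print(is_regression(4.43, 1.30, 7.56, erratic=False))  # -> False, not flagged
```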
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check missed_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_metrics_logs, bounds check cpu_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
This pull request has been automatically marked as stale because it has not had activity in the past 15 days. It will be closed in 30 days if no further activity occurs. If this pull request is still relevant, adding a comment or pushing new commits will keep it open. Also, you can always reopen the pull request if you missed the window. Thank you for your contributions!
Create datadog-ci span instrumentation for e2e pipeline jobs so each setup step (dep retrieval, credential setup, Pulumi login) and test phase (dynamic test calculation, execution attempts, teardown, result processing) appears as a separate span in CI Visibility flamegraphs.
`datadog-ci span` does not exist — the correct syntax is `datadog-ci trace span`. Also remove the batch --payload-file approach since that flag is not supported.
Emit a span covering pod scheduling, git clone, and artifact downloads using CI_JOB_STARTED_AT. These runner-managed phases happen before any user script and were previously invisible.
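A minimal Python sketch of that suggestion, assuming `datadog-ci trace span` accepts `--name`, `--duration`, and repeatable `--tags` options (check against the CLI version pinned in CI); the span name `job-prelude` and the tag values are placeholders, not necessarily this PR's final choices:

```python
import os
import subprocess
from datetime import datetime, timezone

def report_job_prelude_span() -> None:
    """Report the runner-managed prelude (pod scheduling, git clone, artifact
    downloads) as one span, measured from CI_JOB_STARTED_AT to now."""
    started_at = os.environ.get("CI_JOB_STARTED_AT")  # ISO 8601, set by GitLab
    if not started_at:
        return  # not running in GitLab CI
    start = datetime.fromisoformat(started_at.replace("Z", "+00:00"))
    duration_ms = int((datetime.now(timezone.utc) - start).total_seconds() * 1000)
    subprocess.run(
        [
            "datadog-ci", "trace", "span",
            "--name", "job-prelude",            # placeholder span name
            "--duration", str(duration_ms),
            "--tags", "agent-custom-span:true",
            "--tags", "agent-category:e2e-setup",
        ],
        check=False,  # equivalent of `|| true`: never fail the job on span errors
    )
```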
Summary
Adds `datadog-ci span` instrumentation to e2e pipeline jobs so each setup step and test phase appears as a separate span in CI Visibility flamegraphs.
- Bash helpers (`setup_datadog_ci_sections`) for before_script spans and Python helpers (`ci_visibility_section` context manager) for invoke task spans
- `CIVisibilitySection.send_all()` hook in `custom_task.py`
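As a rough illustration of the Python side, here is what a `ci_visibility_section` context manager and `CIVisibilitySection` dataclass could look like; this is a sketch only, and the real implementation in `tasks/libs/common/ci_visibility.py` may differ in detail:

```python
import os
import time
from contextlib import contextmanager
from dataclasses import dataclass, field

_PENDING: list["CIVisibilitySection"] = []  # sections recorded during the task

@dataclass
class CIVisibilitySection:
    name: str
    start: float
    end: float
    tags: dict = field(default_factory=dict)

@contextmanager
def ci_visibility_section(name: str, tags: dict | None = None):
    # No-op outside CI so local `inv` runs are unaffected.
    if not os.environ.get("GITLAB_CI"):
        yield
        return
    start = time.time()
    try:
        yield
    finally:
        _PENDING.append(CIVisibilitySection(name, start, time.time(), tags or {}))
```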
What Changed
- `tasks/libs/common/ci_visibility.py`: Python CI Visibility helpers, including a `CIVisibilitySection` dataclass with batch send and a `ci_visibility_section()` context manager (no-op outside CI)
- `.gitlab/.pre/common/ci_visibility_sections.yml`: Bash helpers (`datadog-ci-start-section` / `datadog-ci-end-section`) plus `DATADOG_API_KEY` fetch
- `.gitlab/test/e2e/e2e.yml`: Wrapped 8 before_script steps with span functions (dep retrieval, AWS creds, SSH keys, Pulumi login, cloud creds, CI Visibility links)
- `tasks/custom_task/custom_task.py`: Added `CIVisibilitySection.send_all()` in a finally block after each invoke task
- `tasks/new_e2e_tests.py`: Wrapped 6 phases in `run()` with `ci_visibility_section` (dynamic test calc, test execution attempts, teardown, merge results, process results, logs post-processing)
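The `custom_task.py` hook described above follows a simple pattern, sketched below; `run_task` is a stand-in for however the repo actually dispatches invoke tasks, not its real name:

```python
from tasks.libs.common.ci_visibility import CIVisibilitySection  # module added by this PR

def run_task(task, ctx, *args, **kwargs):
    try:
        return task(ctx, *args, **kwargs)
    finally:
        try:
            CIVisibilitySection.send_all()  # flush spans recorded during the task
        except Exception as exc:
            # Span submission must never fail the task itself.
            print(f"warning: could not send CI Visibility spans: {exc}")
```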
Span Tagging Convention
- `agent-custom-span:true`: filter for agent custom spans
- `agent-category:e2e-setup`: before_script spans
- `agent-category:e2e`: script (invoke task) spans
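Assuming the context manager accepts a tags argument (as in the earlier sketch), a phase in `tasks/new_e2e_tests.py` would follow this convention roughly as shown below; the phase name and wrapper function are hypothetical:

```python
from tasks.libs.common.ci_visibility import ci_visibility_section  # module added by this PR

def run_e2e_phase(ctx):
    # Tags follow the convention above so flamegraph filters pick the span up.
    with ci_visibility_section(
        "test execution attempt",
        tags={"agent-custom-span": "true", "agent-category": "e2e"},
    ):
        ...  # actual test execution for this phase
```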
Test plan
- Filter spans in CI Visibility with `ci_level:custom @agent-custom-span:true`
- All span submissions use `|| true` / `warn=True` so failures are non-blocking

🤖 Generated with Claude Code