chore(agent-data-plane): update health registry worker to allow being restarted #1176
base: tobz/adp-move-int-o11y-control-plane-to-supervisor
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite. Learn more about stacking.
Pull request overview
Adds a shutdown-aware, restartable health registry runner so the registry can be spawned again after the runner stops.
Changes:
- Update `HealthRegistry::spawn` to accept a `shutdown` future and allow respawning after the runner ends (see the runner-loop sketch after this list).
- Introduce `RunnerGuard` to return the liveness response receiver back into registry state on shutdown.
- Add tests for "duplicate spawn while running" and "respawn after shutdown".
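For orientation, the general shape of such a runner is sketched below: a loop that drains the response channel until a caller-supplied shutdown future resolves. This is a hedged, self-contained illustration; every name and type in it (including `run_loop` and the message type) is an assumption, not the actual `saluki_health` API.

```rust
// Illustrative only: a generic shutdown-aware runner loop, not the saluki_health code.
use std::future::Future;

use tokio::sync::{mpsc, oneshot};

async fn run_loop<F>(mut responses_rx: mpsc::Receiver<String>, shutdown: F)
where
    F: Future<Output = ()>,
{
    tokio::pin!(shutdown);

    loop {
        tokio::select! {
            // Exit cleanly once the caller-provided shutdown future resolves.
            _ = &mut shutdown => break,
            // Otherwise keep draining liveness responses.
            maybe_response = responses_rx.recv() => match maybe_response {
                Some(response) => println!("got response: {response}"),
                None => break,
            },
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(8);
    let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();

    let runner = tokio::spawn(run_loop(rx, async move {
        let _ = shutdown_rx.await;
    }));

    tx.send("alive".to_string()).await.unwrap();
    let _ = shutdown_tx.send(());
    runner.await.unwrap();
}
```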
```rust
let mut inner = self.registry.lock().unwrap();
inner.responses_rx = Some(rx);
debug!("Returned response receiver to registry state.");
```
Copilot AI · Feb 9, 2026
Drop should avoid panicking. Using lock().unwrap() inside drop() can panic if the mutex is poisoned, and a panic during drop can lead to aborts (especially if another panic is already in flight). Prefer handling poisoning explicitly (e.g., recover the inner value) and avoid unwrap() in drop().
Suggested change:

```rust
match self.registry.lock() {
    Ok(mut inner) => {
        inner.responses_rx = Some(rx);
        debug!("Returned response receiver to registry state.");
    }
    Err(poisoned) => {
        let mut inner = poisoned.into_inner();
        inner.responses_rx = Some(rx);
        debug!("Returned response receiver to registry state after mutex poisoning.");
    }
}
```
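A more compact way to express the same recovery is `unwrap_or_else` with `PoisonError::into_inner` from the standard library. A minimal, self-contained sketch follows; the `Inner` and `RunnerGuard` shapes here are assumed stand-ins for the real registry types, not the PR's code.

```rust
use std::sync::{Arc, Mutex};

use tokio::sync::mpsc;

// Assumed stand-ins for the registry state and guard discussed above.
struct Inner {
    responses_rx: Option<mpsc::Receiver<()>>,
}

struct RunnerGuard {
    registry: Arc<Mutex<Inner>>,
    responses_rx: Option<mpsc::Receiver<()>>,
}

impl Drop for RunnerGuard {
    fn drop(&mut self) {
        if let Some(rx) = self.responses_rx.take() {
            // Recover the lock guard even if the mutex is poisoned, without panicking in Drop.
            let mut inner = self
                .registry
                .lock()
                .unwrap_or_else(|poisoned| poisoned.into_inner());
            inner.responses_rx = Some(rx);
        }
    }
}

fn main() {
    let (_tx, rx) = mpsc::channel::<()>(1);
    let registry = Arc::new(Mutex::new(Inner { responses_rx: None }));
    drop(RunnerGuard {
        registry: Arc::clone(&registry),
        responses_rx: Some(rx),
    });
    assert!(registry.lock().unwrap().responses_rx.is_some());
}
```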
```rust
// Take the response receiver out of the guard so we can use it in the select loop.
// It will be put back when the guard is dropped.
let mut responses_rx = self
    .guard
    .responses_rx
    .take()
    .expect("responses_rx should always be Some when Runner is created");
```
Copilot AI · Feb 9, 2026
Taking responses_rx out of RunnerGuard means the receiver will not be returned to the registry if the task is cancelled/aborted or unwinds before reaching the “put it back” code path. That breaks the documented goal of being restartable “after shutdown or an error”. Consider an RAII pattern that guarantees the receiver is put back even on early-exit (e.g., a small local guard whose Drop moves responses_rx back), or keep the receiver inside RunnerGuard and only borrow it mutably for recv().
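One way to realize the suggested RAII pattern is a small scoped guard that takes the receiver from a shared slot and puts it back in its own `Drop`, so early returns, panics, and task cancellation all restore it. This is a hedged, self-contained sketch; `ScopedReceiver` and the slot type are hypothetical stand-ins, not the PR's actual types.

```rust
use std::sync::{Arc, Mutex};

use tokio::sync::mpsc;

// Hypothetical scoped guard: takes the receiver from a shared slot and is
// guaranteed (via Drop) to return it, even if the task using it is cancelled,
// returns early, or unwinds.
struct ScopedReceiver {
    slot: Arc<Mutex<Option<mpsc::Receiver<()>>>>,
    rx: Option<mpsc::Receiver<()>>,
}

impl ScopedReceiver {
    fn take(slot: Arc<Mutex<Option<mpsc::Receiver<()>>>>) -> Option<Self> {
        let rx = slot.lock().unwrap().take()?;
        Some(Self { slot, rx: Some(rx) })
    }

    fn receiver(&mut self) -> &mut mpsc::Receiver<()> {
        self.rx.as_mut().expect("receiver is present until drop")
    }
}

impl Drop for ScopedReceiver {
    fn drop(&mut self) {
        if let Some(rx) = self.rx.take() {
            *self.slot.lock().unwrap() = Some(rx);
        }
    }
}

fn main() {
    let (_tx, rx) = mpsc::channel::<()>(1);
    let slot = Arc::new(Mutex::new(Some(rx)));

    {
        let mut scoped = ScopedReceiver::take(Arc::clone(&slot)).expect("receiver available");
        let _ = scoped.receiver(); // borrow it for recv() inside the select loop here
    } // dropped: the receiver is back in the slot regardless of how the scope exited

    assert!(slot.lock().unwrap().is_some());
}
```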
```rust
};

for component_id in 0..component_count {
    self.process_component_health_update(component_id, HealthUpdate::Unknown);
```
Copilot AI · Feb 9, 2026
On runner (re)start, this forces every existing component’s health to Unknown, which can erase the last-known state and potentially emit unnecessary state transitions/notifications. If the intent is to “pick up where it left off”, consider scheduling immediate probes without overwriting existing health, or only setting Unknown for components that truly have no prior health value.
Suggested change:

```rust
// Do not overwrite existing health with `Unknown` on (re)start; just schedule immediate probes.
```
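If the intent is to seed state only where none exists, the "preserve prior health" preference can be modeled as below. A minimal sketch under assumed types, with a plain `HashMap` standing in for the registry's per-component health storage:

```rust
use std::collections::HashMap;

// Assumed minimal model: the registry tracks a last-known health value per component.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Health {
    Healthy,
    Unhealthy,
    Unknown,
}

fn seed_missing_health(health: &mut HashMap<u64, Health>, component_count: u64) {
    for component_id in 0..component_count {
        // Only components with no prior state are marked Unknown; existing
        // last-known health is preserved across a runner restart.
        health.entry(component_id).or_insert(Health::Unknown);
    }
}

fn main() {
    let mut health = HashMap::from([(0, Health::Healthy)]);
    seed_missing_health(&mut health, 3);
    assert_eq!(health[&0], Health::Healthy); // preserved across restart
    assert_eq!(health[&1], Health::Unknown); // newly seeded
    assert_eq!(health[&2], Health::Unknown);
}
```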
Binary Size Analysis (Agent Data Plane)
Target: 9fe09ef (baseline) vs 7ca8cbc (comparison) diff
| Module | File Size | Symbols |
|---|---|---|
| saluki_core::runtime::supervisor | +68.92 KiB | 59 |
| core | +43.71 KiB | 11166 |
| tokio | +24.10 KiB | 2717 |
| agent_data_plane::internal::initialize_and_launch_runtime | -22.07 KiB | 2 |
| std | -11.37 KiB | 280 |
| [sections] | +10.34 KiB | 10 |
| saluki_core::runtime::process | +9.33 KiB | 7 |
| agent_data_plane::internal::control_plane | -8.03 KiB | 26 |
| anyhow | +6.94 KiB | 1286 |
| saluki_core::runtime::dedicated | +5.63 KiB | 4 |
| agent_data_plane::internal::observability | +5.60 KiB | 16 |
| saluki_core::topology::running | +5.50 KiB | 31 |
| [Unmapped] | -5.18 KiB | 1 |
| saluki_app::metrics::collect_runtime_metrics | -4.73 KiB | 1 |
| saluki_core::runtime::restart | +3.50 KiB | 7 |
| saluki_core::runtime::shutdown | +2.27 KiB | 4 |
| tracing_core | +2.24 KiB | 521 |
| alloc | +2.10 KiB | 1222 |
| saluki_health::Runner::run | +1.89 KiB | 8 |
| hashbrown | +1.81 KiB | 412 |
Detailed Symbol Changes
```text
FILE SIZE       VM SIZE
--------------  --------------
[NEW] +1.79Mi   [NEW] +1.79Mi   std::thread::local::LocalKey<T>::with::h938f6ddf7aedbbb8
+1.1% +170Ki    +1.0% +141Ki    [29783 Others]
[NEW] +113Ki    [NEW] +113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::hfb0b645d0bb8c8cf
[NEW] +68.2Ki   [NEW] +68.1Ki   h2::hpack::decoder::Decoder::try_decode_string::hd1c9a39c48e78a6e
[NEW] +63.7Ki   [NEW] +63.6Ki   saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hd35493454b2aefa0
[NEW] +63.5Ki   [NEW] +63.2Ki   _<agent_data_plane::internal::control_plane::PrivilegedApiWorker as saluki_core::runtime::supervisor::Supervisable>::initialize::_{{closure}}::hc40718d721e65675
[NEW] +62.3Ki   [NEW] +62.2Ki   agent_data_plane::main::_{{closure}}::hd35953e90113f5c5
[NEW] +55.7Ki   [NEW] +55.6Ki   agent_data_plane::cli::run::handle_run_command::_{{closure}}::hb1b79147c6e6e2ef
[NEW] +48.8Ki   [NEW] +48.7Ki   saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hc8e634acdf97ad33
[NEW] +47.7Ki   [NEW] +47.6Ki   moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::h0ef6755dbc77ea2a
[NEW] +46.1Ki   [NEW] +46.0Ki   h2::proto::connection::Connection<T,P,B>::poll::h1671860da0918c66
[DEL] -46.1Ki   [DEL] -46.0Ki   h2::proto::connection::Connection<T,P,B>::poll::h2aedcbe1089b311c
[DEL] -47.7Ki   [DEL] -47.6Ki   moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::ha97a2f55834f17a3
[DEL] -48.8Ki   [DEL] -48.7Ki   saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hbb5d0d8e944d3a21
[DEL] -57.8Ki   [DEL] -57.7Ki   agent_data_plane::cli::run::handle_run_command::_{{closure}}::hd8d13580d16cc8a3
[DEL] -62.3Ki   [DEL] -62.2Ki   agent_data_plane::main::_{{closure}}::h1c7c640002ce8ad1
[DEL] -63.8Ki   [DEL] -63.6Ki   saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hc194831d658db8cc
[DEL] -68.2Ki   [DEL] -68.1Ki   h2::hpack::decoder::Decoder::try_decode_string::hbfc0bb5a77e7669f
[DEL] -84.5Ki   [DEL] -84.4Ki   agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::heace61ab0f2bde0c
[DEL] -113Ki    [DEL] -113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::h6a8b41fa76a89e2c
[DEL] -1.79Mi   [DEL] -1.79Mi   std::thread::local::LocalKey<T>::with::h0d8770940d38805b
+0.5% +146Ki    +0.5% +117Ki    TOTAL
```
Regression Detector (Agent Data Plane)
Regression Detector Results
Run ID: ff31bc77-4d94-4ffb-8380-73b962e96ea9
Baseline: 9fe09ef
❌ Experiments with retried target crashes
This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.
Optimization Goals: ✅ No significant changes detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | otlp_ingest_logs_5mb_memory | memory utilization | +10.71 | [+10.00, +11.43] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | +0.47 | [-4.88, +5.82] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.16] | 1 | (metrics) (profiles) (logs) |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ❌ | otlp_ingest_logs_5mb_memory | memory utilization | +10.71 | [+10.00, +11.43] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | +4.87 | [-50.63, +60.37] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | +4.43 | [-47.32, +56.18] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_idle | memory utilization | +3.01 | [+2.94, +3.08] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_cpu | % cpu utilization | +2.17 | [-3.85, +8.19] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_memory | memory utilization | +2.15 | [+1.96, +2.35] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_memory | memory utilization | +1.74 | [+1.55, +1.93] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_medium | memory utilization | +1.38 | [+1.19, +1.57] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_cpu | % cpu utilization | +1.33 | [-1.27, +3.93] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_memory | memory utilization | +1.20 | [+1.01, +1.38] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_memory | memory utilization | +1.11 | [+0.92, +1.30] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.92 | [+0.74, +1.10] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_low | memory utilization | +0.89 | [+0.71, +1.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | +0.47 | [-4.88, +5.82] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_heavy | memory utilization | +0.45 | [+0.33, +0.58] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_ultraheavy | memory utilization | +0.28 | [+0.15, +0.41] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | +0.25 | [-5.75, +6.24] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | +0.03 | [-1.24, +1.31] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.16] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_throughput | ingress throughput | +0.02 | [-0.10, +0.14] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_throughput | ingress throughput | +0.01 | [-0.15, +0.17] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.05] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.06, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_throughput | ingress throughput | -0.01 | [-0.09, +0.07] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_throughput | ingress throughput | -0.03 | [-0.08, +0.03] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_throughput | ingress throughput | -0.11 | [-0.23, +0.02] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_memory | memory utilization | -0.40 | [-0.65, -0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_memory | memory utilization | -1.64 | [-1.88, -1.41] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | -4.97 | [-34.86, +24.92] | 1 | (metrics) (profiles) (logs) |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.
Note: In the below tables we bucket failures by experiment, variant, and failure type. For each of these buckets we list the replicate indexes that failed, with an annotation signifying how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed due to OOM kills: replicate 0, which failed 8 executions, and replicate 1, which failed 6 executions, all with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
❌ Retried Normal Replicate Execution Failures (non-profiling)
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| otlp_ingest_logs_5mb_cpu | comparison | 0 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_memory | comparison | 2, 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_traces_5mb_cpu | baseline | 1 | Failed to shutdown when requested | Debug Dashboard |
| quality_gates_rss_dsd_medium | baseline | 5 | Failed to shutdown when requested | Debug Dashboard |
| quality_gates_rss_dsd_ultraheavy | baseline | 6 | Failed to shutdown when requested | Debug Dashboard |
70c54c9 to b72ece8 (compare)
a0e1d19 to 7ca8cbc (compare)

Summary
This PR slightly refactors HealthWorker and the underlying health registry runner code to support restarting the health registry worker.
Prior to this PR, spawning the health registry worker would fail on subsequent attempts, since the receiver used to register new components into the registry was already consumed by the first call to spawn the worker. We've simply added the ability to return the receiver and reset the state so that subsequent attempts to spawn the worker can take the receiver. We're still limited to a single health registry worker at a time, but at least we can now cleanly recover from it being restarted.
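The described lifecycle (take the receiver on spawn, fail a duplicate spawn while running, and return the receiver when the worker's guard drops so a later spawn succeeds) can be modeled with a small self-contained sketch. The `Registry` and `Guard` types below are illustrative stand-ins, not the actual `HealthRegistry`/`RunnerGuard` code.

```rust
use std::sync::{Arc, Mutex};

use tokio::sync::mpsc;

// Hedged model of the described behavior, not the saluki_health implementation.
#[derive(Clone)]
struct Registry {
    responses_rx: Arc<Mutex<Option<mpsc::Receiver<()>>>>,
}

struct Guard {
    registry: Registry,
    rx: Option<mpsc::Receiver<()>>,
}

impl Registry {
    fn spawn_worker(&self) -> Result<Guard, &'static str> {
        // Taking the receiver is what makes the worker a singleton: a second
        // spawn while the first is running sees `None` and fails.
        let rx = self
            .responses_rx
            .lock()
            .unwrap()
            .take()
            .ok_or("worker already running")?;
        Ok(Guard { registry: self.clone(), rx: Some(rx) })
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        // Returning the receiver on shutdown is what makes respawning possible.
        *self.registry.responses_rx.lock().unwrap() = self.rx.take();
    }
}

fn main() {
    let (_tx, rx) = mpsc::channel::<()>(1);
    let registry = Registry { responses_rx: Arc::new(Mutex::new(Some(rx))) };

    let first = registry.spawn_worker().expect("first spawn succeeds");
    assert!(registry.spawn_worker().is_err()); // duplicate spawn while running
    drop(first); // worker shuts down, receiver goes back into registry state
    assert!(registry.spawn_worker().is_ok()); // respawn after shutdown
}
```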
Change Type
How did you test this PR?
Existing and new unit tests.
References
AGTMETRICS-393