enhancement(core): add support for spawning supervisors on a dedicated runtime #1145
base: main
Conversation
Binary Size Analysis (Agent Data Plane)
Target: 9fe09ef (baseline) vs 694d69e (comparison) diff
| Module | File Size | Symbols |
|---|---|---|
| tokio | +19.15 KiB | 2709 |
| core | +6.78 KiB | 11100 |
| [Unmapped] | -2.59 KiB | 1 |
| alloc | +1.88 KiB | 1216 |
| [sections] | +1.09 KiB | 7 |
| saluki_env::workload::helpers | -698 B | 60 |
| std | +386 B | 285 |
| saluki_app::api::APIBuilder | +370 B | 4 |
| saluki_common::task::instrument | -361 B | 86 |
| agent_data_plane::cli::dogstatsd | -301 B | 34 |
| agent_data_plane::cli::debug | +301 B | 92 |
| agent_data_plane::internal::remote_agent | -244 B | 62 |
| miniz_oxide | -227 B | 1 |
| futures_util | -208 B | 86 |
| tonic | -161 B | 356 |
| saluki_components::common::datadog | +160 B | 326 |
| tokio_util | -158 B | 38 |
| hyper | +130 B | 594 |
| saluki_components::forwarders::otlp | +120 B | 60 |
| agent_data_plane::main | -120 B | 2 |
Detailed Symbol Changes
FILE SIZE VM SIZE
-------------- --------------
[NEW] +1.79Mi [NEW] +1.79Mi std::thread::local::LocalKey<T>::with::h938f6ddf7aedbbb8
[NEW] +113Ki [NEW] +113Ki agent_data_plane::cli::run::create_topology::_{{closure}}::hfb0b645d0bb8c8cf
[NEW] +84.5Ki [NEW] +84.4Ki agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::h6a61f8e463394a17
[NEW] +68.2Ki [NEW] +68.1Ki h2::hpack::decoder::Decoder::try_decode_string::hd1c9a39c48e78a6e
[NEW] +63.7Ki [NEW] +63.6Ki saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hd35493454b2aefa0
[NEW] +62.3Ki [NEW] +62.2Ki agent_data_plane::main::_{{closure}}::hd35953e90113f5c5
[NEW] +57.8Ki [NEW] +57.7Ki agent_data_plane::cli::run::handle_run_command::_{{closure}}::hb1b79147c6e6e2ef
[NEW] +48.8Ki [NEW] +48.7Ki saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hc8e634acdf97ad33
[NEW] +47.7Ki [NEW] +47.6Ki moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::h0ef6755dbc77ea2a
[NEW] +46.1Ki [NEW] +46.0Ki h2::proto::connection::Connection<T,P,B>::poll::h1671860da0918c66
+0.2% +26.0Ki +0.2% +23.3Ki [29572 Others]
[DEL] -46.1Ki [DEL] -46.0Ki h2::proto::connection::Connection<T,P,B>::poll::h2aedcbe1089b311c
[DEL] -47.7Ki [DEL] -47.6Ki moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::ha97a2f55834f17a3
[DEL] -48.8Ki [DEL] -48.7Ki saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hbb5d0d8e944d3a21
[DEL] -57.8Ki [DEL] -57.7Ki agent_data_plane::cli::run::handle_run_command::_{{closure}}::hd8d13580d16cc8a3
[DEL] -62.3Ki [DEL] -62.2Ki agent_data_plane::main::_{{closure}}::h1c7c640002ce8ad1
[DEL] -63.8Ki [DEL] -63.6Ki saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hc194831d658db8cc
[DEL] -68.2Ki [DEL] -68.1Ki h2::hpack::decoder::Decoder::try_decode_string::hbfc0bb5a77e7669f
[DEL] -84.5Ki [DEL] -84.4Ki agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::heace61ab0f2bde0c
[DEL] -113Ki [DEL] -113Ki agent_data_plane::cli::run::create_topology::_{{closure}}::h6a8b41fa76a89e2c
[DEL] -1.79Mi [DEL] -1.79Mi std::thread::local::LocalKey<T>::with::h0d8770940d38805b
+0.1% +25.3Ki +0.1% +22.6Ki TOTAL
Regression Detector (Agent Data Plane)
Regression Detector Results
Run ID: 50fbb559-f4d7-4cc6-9f12-9a57fdfcd067
Baseline: 9fe09ef
❌ Experiments with retried target crashes
This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.
Optimization Goals: ✅ Improvement(s) detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | +2.09 | [+1.51, +2.67] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.00 | [-0.13, +0.13] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.20 | [-6.29, +3.88] | 1 | (metrics) (profiles) (logs) |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | +9.82 | [-49.12, +68.76] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_cpu | % cpu utilization | +2.84 | [+0.21, +5.48] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | +2.09 | [+1.51, +2.67] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_memory | memory utilization | +1.29 | [+1.09, +1.49] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_memory | memory utilization | +0.98 | [+0.80, +1.16] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_throughput | ingress throughput | +0.94 | [+0.81, +1.07] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_idle | memory utilization | +0.82 | [+0.76, +0.88] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_medium | memory utilization | +0.56 | [+0.37, +0.76] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.48 | [+0.30, +0.66] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_memory | memory utilization | +0.43 | [+0.19, +0.68] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_heavy | memory utilization | +0.29 | [+0.17, +0.41] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_ultraheavy | memory utilization | +0.29 | [+0.16, +0.42] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_low | memory utilization | +0.19 | [+0.02, +0.37] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_memory | memory utilization | +0.14 | [-0.06, +0.34] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_throughput | ingress throughput | +0.02 | [-0.11, +0.14] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_throughput | ingress throughput | +0.01 | [-0.03, +0.05] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.00 | [-0.13, +0.13] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.16, +0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_throughput | ingress throughput | -0.01 | [-0.09, +0.07] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_memory | memory utilization | -0.33 | [-0.52, -0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | -0.60 | [-6.45, +5.25] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | -0.78 | [-29.49, +27.94] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | -1.17 | [-2.45, +0.11] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.20 | [-6.29, +3.88] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_cpu | % cpu utilization | -1.72 | [-7.64, +4.19] | 1 | (metrics) (profiles) (logs) |
| ✅ | otlp_ingest_metrics_5mb_memory | memory utilization | -5.63 | [-5.87, -5.40] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | -8.64 | [-58.91, +41.63] | 1 | (metrics) (profiles) (logs) |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
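As a worked illustration of these criteria using the tables above: otlp_ingest_logs_5mb_memory shows Δ mean % = +2.09 with a CI of [+1.51, +2.67]; the interval excludes zero, but |+2.09| < 5.00, so it is not flagged. By contrast, otlp_ingest_metrics_5mb_memory shows -5.63 with a CI of [-5.87, -5.40]; both the magnitude and the interval criteria are met, so it is marked ✅ as a significant improvement.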
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.
Note: In the tables below we bucket failures by experiment, variant, and failure type. For each of these buckets we list the replicate indexes that failed, with an annotation signifying how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed via OOM kills: replicate 0 failed 8 executions and replicate 1 failed 6 executions, all with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
❌ Retried Normal Replicate Execution Failures (non-profiling)
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| dsd_uds_100mb_3k_contexts_cpu | comparison | 6 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_throughput | baseline | 8 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_metrics_5mb_memory | comparison | 4 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_traces_5mb_memory | comparison | 3 | Failed to shutdown when requested | Debug Dashboard |
| quality_gates_rss_dsd_medium | comparison | 0 | Failed to shutdown when requested | Debug Dashboard |
Pull request overview
This PR enhances the supervisor runtime system to support spawning supervisors on dedicated asynchronous runtimes, enabling runtime isolation for different components of the control and data planes.
Changes:
- Added `RuntimeMode` enum to control whether supervisors run on ambient or dedicated runtimes
- Implemented `RuntimeConfiguration` for configuring single-threaded or multi-threaded dedicated runtimes
- Extended `Supervisor` API with methods to configure dedicated runtime execution (see the sketch below)
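To make the shape of these additions concrete, here is a minimal sketch of what the types and builder method described above could look like. The exact fields, variants, and method names are assumptions for illustration, not the code from this PR.

```rust
// Hypothetical sketch only: the names mirror the review summary
// (RuntimeMode, RuntimeConfiguration, Supervisor), but the fields and
// methods are assumed, not taken from the actual diff.

/// Where a supervisor's tasks should run.
pub enum RuntimeMode {
    /// Run on whatever runtime the supervisor happens to be spawned from.
    Ambient,
    /// Run on a dedicated runtime built from the given configuration.
    Dedicated(RuntimeConfiguration),
}

/// How a dedicated runtime is constructed.
pub enum RuntimeConfiguration {
    /// Single-threaded: all work runs on the one dedicated OS thread.
    SingleThreaded,
    /// Multi-threaded: a pool of dedicated worker threads drives the tasks.
    MultiThreaded { worker_threads: usize },
}

/// Minimal stand-in for the real supervisor type.
pub struct Supervisor {
    runtime_mode: RuntimeMode,
}

impl Supervisor {
    /// Builder-style method for opting a supervisor into a dedicated runtime.
    pub fn with_dedicated_runtime(mut self, config: RuntimeConfiguration) -> Self {
        self.runtime_mode = RuntimeMode::Dedicated(config);
        self
    }
}
```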
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| lib/saluki-core/src/runtime/supervisor.rs | Added runtime mode field and methods to support dedicated runtime configuration and execution |
| lib/saluki-core/src/runtime/mod.rs | Exported new RuntimeConfiguration and RuntimeMode types |
| lib/saluki-core/src/runtime/dedicated.rs | Implemented dedicated runtime spawning logic with OS thread management and initialization handling |
| Cargo.toml | Updated tokio version to 1.49 to support new runtime features |
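The description of lib/saluki-core/src/runtime/dedicated.rs suggests the familiar pattern of a background OS thread that owns a Tokio runtime and drives a future to completion on it. Below is a minimal sketch of that pattern, assuming tokio 1.x's runtime builder; the helper name and signature are illustrative, not the actual module contents.

```rust
// Minimal sketch (assumptions, not the actual contents of dedicated.rs): a
// background OS thread owns a dedicated Tokio runtime and drives a future to
// completion on it.
use std::future::Future;
use std::thread;

use tokio::runtime::Builder;

/// Runs `fut` to completion on a dedicated runtime driven by its own OS thread.
/// `multi_threaded` selects between a worker pool and a single-threaded runtime
/// where all work stays on the spawned thread.
fn spawn_on_dedicated_runtime<F>(
    name: &str,
    multi_threaded: bool,
    fut: F,
) -> std::io::Result<thread::JoinHandle<F::Output>>
where
    F: Future + Send + 'static,
    F::Output: Send + 'static,
{
    thread::Builder::new().name(name.to_string()).spawn(move || {
        // Build the runtime on the dedicated thread: the single-threaded flavor
        // executes every task right here, while the multi-threaded flavor spawns
        // its own pool of worker threads.
        let mut builder = if multi_threaded {
            Builder::new_multi_thread()
        } else {
            Builder::new_current_thread()
        };
        let runtime = builder
            .enable_all()
            .build()
            .expect("failed to build dedicated runtime");
        runtime.block_on(fut)
    })
}
```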
Summary
This PR adds support to the runtime system for configuring and spawning supervisors on a dedicated asynchronous runtime.
As part of the goal to supervisor-ify the various tasks that comprise the control plane and data plane functionality of ADP, a major consideration is that we spawn various tasks on dedicated runtimes: we have a dedicated control plane runtime, an internal observability runtime, the main multi-threaded runtime where the topology runs, and then the dedicated multi-threaded runtime we use for compute-heavy tasks.
`Supervisor` was designed with nesting in mind, but everything assumed that we'd spawn tasks on the exact same runtime across the entire supervision tree... which doesn't align with our needs in practice.

This PR adds support for customizing a `Supervisor` to specify what runtime it should execute on: the ambient runtime (wherever it's run from at that moment), or a dedicated runtime. When using a dedicated runtime, we can configure whether a single-threaded or multi-threaded runtime is created behind the scenes. A dedicated background OS thread is spawned in either case to drive the supervisor to completion, with the difference being that for single-threaded runtimes, all work happens on that thread, while multi-threaded runtimes have a pool of dedicated worker threads where all meaningful task execution occurs.

Change Type
How did you test this PR?
Existing unit tests.
References
AGTMETRICS-393