enhancement(api): add dynamic API router based on runtime state #1179
Conversation
> **Warning:** This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. This stack of pull requests is managed by Graphite.
Pull request overview
Adds a dynamic API server that can hot-swap HTTP and gRPC route sets at runtime based on dataspace registry assertions.
Changes:
- Introduces `DynamicAPIBuilder` to multiplex dynamically-updated HTTP and gRPC routers on a single listener.
- Adds new `saluki-api` types (`EndpointType`, `DynamicHttpRoute`, `DynamicGrpcRoute`) to describe and publish dynamic routes.
- Updates `saluki-app` to expose the new module and include required dependencies.
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| lib/saluki-app/src/lib.rs | Exposes the new dynamic_api module. |
| lib/saluki-app/src/dynamic_api.rs | Implements the dynamic router swapping server and event loop. |
| lib/saluki-app/Cargo.toml | Adds deps needed for swapping/services (arc-swap, async-trait, hyper). |
| lib/saluki-api/src/lib.rs | Adds public types for endpoint selection and dynamic route publication. |
```rust
pub fn with_self_signed_tls(self) -> Self {
    let CertifiedKey { cert, key_pair } = generate_simple_self_signed(["localhost".to_owned()]).unwrap();
    let cert_chain = vec![cert.der().clone()];
    let key = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));

    let config = ServerConfig::builder()
        .with_no_client_auth()
        .with_single_cert(cert_chain, key)
        .unwrap();

    self.with_tls_config(config)
```
**Copilot AI** commented on Feb 9, 2026:
`with_self_signed_tls` uses `unwrap()` twice, which can panic inside a library API. Consider changing this method to return `Result<Self, E>` (or `Result<Self, GenericError>`) and propagating failures from certificate/key generation and `with_single_cert` instead of panicking.
Suggested change:

```diff
-pub fn with_self_signed_tls(self) -> Self {
-    let CertifiedKey { cert, key_pair } = generate_simple_self_signed(["localhost".to_owned()]).unwrap();
+pub fn with_self_signed_tls(self) -> Result<Self, GenericError> {
+    let CertifiedKey { cert, key_pair } =
+        generate_simple_self_signed(["localhost".to_owned()]).map_err(GenericError::from)?;
     let cert_chain = vec![cert.der().clone()];
     let key = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));
     let config = ServerConfig::builder()
         .with_no_client_auth()
         .with_single_cert(cert_chain, key)
-        .unwrap();
-    self.with_tls_config(config)
+        .map_err(GenericError::from)?;
+    Ok(self.with_tls_config(config))
```
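The fallible-builder pattern in this suggestion can be sketched with std-only stand-ins. `GenericError`, `ApiBuilder`, `with_tls_config`, and `generate_cert` below are hypothetical placeholders for the crate's real types, shown only to illustrate how `?` propagation replaces the `unwrap()` panics:

```rust
use std::fmt;

// Hypothetical stand-in for the crate's GenericError type.
#[derive(Debug)]
struct GenericError(String);

impl fmt::Display for GenericError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

// Hypothetical stand-in for the API builder; `tls` stands in for the TLS config.
struct ApiBuilder {
    tls: Option<String>,
}

impl ApiBuilder {
    fn new() -> Self {
        Self { tls: None }
    }

    fn with_tls_config(mut self, config: String) -> Self {
        self.tls = Some(config);
        self
    }

    // Returning Result lets certificate-generation failures propagate via `?`
    // instead of panicking inside the library.
    fn with_self_signed_tls(self) -> Result<Self, GenericError> {
        let config = generate_cert().map_err(GenericError)?;
        Ok(self.with_tls_config(config))
    }
}

// Stand-in for the self-signed certificate generation step.
fn generate_cert() -> Result<String, String> {
    Ok("self-signed-cert".to_string())
}

fn main() -> Result<(), GenericError> {
    // Callers now opt in to error handling with `?` rather than inheriting a panic.
    let builder = ApiBuilder::new().with_self_signed_tls()?;
    assert!(builder.tls.is_some());
    println!("ok");
    Ok(())
}
```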
```rust
maybe_update = http_subscription.recv() => {
    let Some(update) = maybe_update else {
        warn!("HTTP route subscription channel closed.");
        break;
    };
```
**Copilot AI** commented on Feb 9, 2026:
If the HTTP subscription channel closes, the event loop breaks without calling `shutdown_handle.shutdown()`. That can leave the HTTP server running while the supervisor future completes (or at least skip graceful shutdown). Consider calling `shutdown_handle.shutdown()` before breaking (or returning an error) so the server lifecycle is consistent.
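The lifecycle issue described above can be sketched with a std-only stand-in: on channel closure, trigger shutdown before exiting the loop. `ShutdownHandle` and `run_event_loop` here are hypothetical placeholders, not the PR's actual types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};

// Hypothetical stand-in for the server's shutdown handle.
struct ShutdownHandle(Arc<AtomicBool>);

impl ShutdownHandle {
    fn shutdown(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
}

fn run_event_loop(rx: mpsc::Receiver<&'static str>, shutdown_handle: &ShutdownHandle) {
    loop {
        match rx.recv() {
            Ok(update) => {
                // Apply the route update (elided in this sketch).
                let _ = update;
            }
            Err(_) => {
                // Channel closed: trigger graceful shutdown *before* breaking,
                // so the server's lifecycle stays consistent with the loop's.
                eprintln!("route subscription channel closed.");
                shutdown_handle.shutdown();
                break;
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = ShutdownHandle(Arc::new(AtomicBool::new(false)));
    tx.send("route-update").unwrap();
    drop(tx); // closing the sender ends the loop
    run_event_loop(rx, &handle);
    assert!(handle.0.load(Ordering::SeqCst));
    println!("shutdown triggered");
}
```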
```rust
maybe_update = grpc_subscription.recv() => {
    let Some(update) = maybe_update else {
        warn!("gRPC route subscription channel closed.");
```
**Copilot AI** commented on Feb 9, 2026:
Same issue as the HTTP subscription: if the gRPC subscription channel closes, the loop exits without triggering `shutdown_handle.shutdown()`. Consider shutting down the server (or returning an error) on this path as well.
Suggested change:

```diff
 warn!("gRPC route subscription channel closed.");
+shutdown_handle.shutdown();
```
```rust
fn call(&mut self, request: http::Request<AxumBody>) -> Self::Future {
    let mut router = Arc::unwrap_or_clone(self.inner_router.load_full());
    Box::pin(async move { router.call(request).await })
}
```
**Copilot AI** commented on Feb 9, 2026:
`Arc::unwrap_or_clone(self.inner_router.load_full())` will almost always clone the entire `Router` for every request (because the `Arc` is typically shared). Consider structuring this so each request avoids cloning the full router (e.g., store an `Arc<Router>` and call into it via a service that is cheap to clone, or keep a ready-to-use service behind the swap).
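The "swap the pointer, not the router" idea can be sketched with std-only stand-ins: keep the router behind an `Arc`, dispatch through a shared reference (cheap pointer clone per request), and replace the whole `Arc` on updates. `Router` and `Service` below are hypothetical illustrations (the PR uses `arc_swap::ArcSwap` and axum's `Router`; `RwLock` is a std-only substitute for the swap):

```rust
use std::sync::{Arc, RwLock};

// Hypothetical router; calling through &self means no per-request deep clone.
struct Router {
    name: String,
}

impl Router {
    fn call(&self, path: &str) -> String {
        format!("{} handled {}", self.name, path)
    }
}

// Hypothetical service holding a swappable Arc<Router>.
struct Service {
    inner_router: RwLock<Arc<Router>>,
}

impl Service {
    fn call(&self, path: &str) -> String {
        // Cheap: clones only the Arc pointer, never the Router itself.
        let router = Arc::clone(&self.inner_router.read().unwrap());
        router.call(path)
    }

    // Hot-swap: replace the Arc; in-flight requests keep their old Arc alive.
    fn swap(&self, new_router: Router) {
        *self.inner_router.write().unwrap() = Arc::new(new_router);
    }
}

fn main() {
    let svc = Service {
        inner_router: RwLock::new(Arc::new(Router { name: "v1".into() })),
    };
    assert_eq!(svc.call("/health"), "v1 handled /health");
    svc.swap(Router { name: "v2".into() });
    assert_eq!(svc.call("/health"), "v2 handled /health");
    println!("ok");
}
```

Note this sketch assumes the router can be called through a shared reference; axum's `Router::call` takes `&mut self`, which is why the PR clones — keeping a pre-built, cheaply-cloneable service behind the swap is one way around that.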
```rust
debug!(?handle, "Registering dynamic HTTP handler.");
http_handlers.insert(handle, route.router);
}
AssertionUpdate::Retracted(handle) => {
    if http_handlers.swap_remove(&handle).is_some() {
        debug!(?handle, "Withdrawing dynamic HTTP handler.");
```
**Copilot AI** commented on Feb 9, 2026:
Log levels are inconsistent between HTTP (debug) and gRPC (info) for the same event type (register/withdraw). Consider using the same level for both to keep operational noise predictable.
Suggested change:

```diff
-debug!(?handle, "Registering dynamic HTTP handler.");
+info!(?handle, "Registering dynamic HTTP handler.");
 http_handlers.insert(handle, route.router);
 }
 AssertionUpdate::Retracted(handle) => {
     if http_handlers.swap_remove(&handle).is_some() {
-        debug!(?handle, "Withdrawing dynamic HTTP handler.");
+        info!(?handle, "Withdrawing dynamic HTTP handler.");
```
```rust
info!(handle = ?handle, "Registering dynamic gRPC handler.");
grpc_handlers.insert(handle, route.router);
}
AssertionUpdate::Retracted(handle) => {
    if grpc_handlers.swap_remove(&handle).is_some() {
        info!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
```
**Copilot AI** commented on Feb 9, 2026:
Log levels are inconsistent between HTTP (debug) and gRPC (info) for the same event type (register/withdraw). Consider using the same level for both to keep operational noise predictable.
Suggested change:

```diff
-info!(handle = ?handle, "Registering dynamic gRPC handler.");
+debug!(handle = ?handle, "Registering dynamic gRPC handler.");
 grpc_handlers.insert(handle, route.router);
 }
 AssertionUpdate::Retracted(handle) => {
     if grpc_handlers.swap_remove(&handle).is_some() {
-        info!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
+        debug!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
```
Binary Size Analysis (Agent Data Plane)

Target: 9fe09ef (baseline) vs 93ea4b3 (comparison) diff

| Module | File Size | Symbols |
|---|---|---|
| saluki_core::runtime::supervisor | +69.67 KiB | 59 |
| core | +62.23 KiB | 11204 |
| tokio | +24.16 KiB | 2717 |
| agent_data_plane::internal::initialize_and_launch_runtime | -22.07 KiB | 2 |
| agent_data_plane::internal::create_internal_supervisor | +16.05 KiB | 1 |
| saluki_app::memory::MemoryBoundsConfiguration | -13.70 KiB | 5 |
| [sections] | +12.91 KiB | 7 |
| agent_data_plane::internal::control_plane | -12.14 KiB | 26 |
| std | -11.16 KiB | 284 |
| saluki_core::runtime::process | +11.03 KiB | 9 |
| anyhow | +8.61 KiB | 1293 |
| agent_data_plane::cli::run | +7.81 KiB | 76 |
| saluki_core::runtime::dedicated | +5.63 KiB | 4 |
| agent_data_plane::internal::observability | +5.60 KiB | 16 |
| [Unmapped] | -5.57 KiB | 1 |
| saluki_core::topology::running | +5.50 KiB | 31 |
| alloc | +5.22 KiB | 1227 |
| saluki_app::metrics::collect_runtime_metrics | -4.73 KiB | 1 |
| matchit | +4.22 KiB | 53 |
| saluki_core::runtime::restart | +3.50 KiB | 7 |
Detailed Symbol Changes

```text
     FILE SIZE        VM SIZE
 --------------   --------------
 [NEW] +1.79Mi    [NEW] +1.79Mi    std::thread::local::LocalKey<T>::with::h346695abe3c51b40
 +1.3%  +205Ki    +1.3%  +169Ki    [29837 Others]
 [NEW]  +113Ki    [NEW]  +113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::h78a1190544a24f76
 [NEW] +68.2Ki    [NEW] +68.1Ki    h2::hpack::decoder::Decoder::try_decode_string::hd1c9a39c48e78a6e
 [NEW] +63.8Ki    [NEW] +63.7Ki    agent_data_plane::cli::run::handle_run_command::_{{closure}}::h06c0894d865ef0e8
 [NEW] +63.7Ki    [NEW] +63.6Ki    saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hd35493454b2aefa0
 [NEW] +62.3Ki    [NEW] +62.2Ki    agent_data_plane::main::_{{closure}}::h3b8c24a68a67fe4a
 [NEW] +59.3Ki    [NEW] +59.1Ki    _<agent_data_plane::internal::control_plane::PrivilegedApiWorker as saluki_core::runtime::supervisor::Supervisable>::initialize::_{{closure}}::h0916f5a82a9d11a9
 [NEW] +48.8Ki    [NEW] +48.7Ki    saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::h6b2ac096928e54f5
 [NEW] +47.7Ki    [NEW] +47.6Ki    moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::h0ef6755dbc77ea2a
 [NEW] +46.1Ki    [NEW] +46.0Ki    h2::proto::connection::Connection<T,P,B>::poll::h1671860da0918c66
 [DEL] -46.1Ki    [DEL] -46.0Ki    h2::proto::connection::Connection<T,P,B>::poll::h2aedcbe1089b311c
 [DEL] -47.7Ki    [DEL] -47.6Ki    moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::ha97a2f55834f17a3
 [DEL] -48.8Ki    [DEL] -48.7Ki    saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hbb5d0d8e944d3a21
 [DEL] -57.8Ki    [DEL] -57.7Ki    agent_data_plane::cli::run::handle_run_command::_{{closure}}::hd8d13580d16cc8a3
 [DEL] -62.3Ki    [DEL] -62.2Ki    agent_data_plane::main::_{{closure}}::h1c7c640002ce8ad1
 [DEL] -63.8Ki    [DEL] -63.6Ki    saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hc194831d658db8cc
 [DEL] -68.2Ki    [DEL] -68.1Ki    h2::hpack::decoder::Decoder::try_decode_string::hbfc0bb5a77e7669f
 [DEL] -84.5Ki    [DEL] -84.4Ki    agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::heace61ab0f2bde0c
 [DEL]  -113Ki    [DEL]  -113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::h6a8b41fa76a89e2c
 [DEL] -1.79Mi    [DEL] -1.79Mi    std::thread::local::LocalKey<T>::with::h0d8770940d38805b
 +0.7%  +185Ki    +0.6%  +150Ki    TOTAL
```
Regression Detector (Agent Data Plane)

Regression Detector Results

Run ID: e715a53f-d18d-44c9-a9c2-ac9cca23abe8
Baseline: 9fe09ef

❌ Experiments with retried target crashes

This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.
Optimization Goals: ❌ Regression(s) detected
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.17] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | -0.76 | [-1.33, -0.18] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.30 | [-6.50, +3.90] | 1 | (metrics) (profiles) (logs) |
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | +9.04 | [-44.55, +62.62] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | +8.11 | [-23.31, +39.54] | 1 | (metrics) (profiles) (logs) |
| ❌ | otlp_ingest_metrics_5mb_memory | memory utilization | +5.29 | [+5.07, +5.51] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_idle | memory utilization | +2.39 | [+2.32, +2.45] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_low | memory utilization | +1.83 | [+1.66, +2.00] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_throughput | ingress throughput | +1.64 | [+1.51, +1.76] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_memory | memory utilization | +1.43 | [+1.23, +1.62] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_medium | memory utilization | +1.33 | [+1.14, +1.53] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_memory | memory utilization | +1.32 | [+1.13, +1.51] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_memory | memory utilization | +1.01 | [+0.81, +1.21] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.88 | [+0.70, +1.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_memory | memory utilization | +0.36 | [+0.18, +0.54] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_heavy | memory utilization | +0.08 | [-0.05, +0.21] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.17] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_throughput | ingress throughput | +0.02 | [-0.10, +0.14] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.05, +0.05] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_1mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.06, +0.06] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_10mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.15, +0.15] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_throughput | ingress throughput | -0.00 | [-0.09, +0.08] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_throughput | ingress throughput | -0.01 | [-0.06, +0.04] | 1 | (metrics) (profiles) (logs) |
| ➖ | quality_gates_rss_dsd_ultraheavy | memory utilization | -0.16 | [-0.30, -0.03] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | -0.54 | [-1.88, +0.79] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | -0.59 | [-6.75, +5.57] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_memory | memory utilization | -0.76 | [-1.33, -0.18] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_metrics_5mb_cpu | % cpu utilization | -0.94 | [-6.46, +4.58] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.30 | [-6.50, +3.90] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_cpu | % cpu utilization | -1.92 | [-4.54, +0.70] | 1 | (metrics) (profiles) (logs) |
| ➖ | otlp_ingest_traces_5mb_memory | memory utilization | -2.83 | [-3.07, -2.59] | 1 | (metrics) (profiles) (logs) |
| ➖ | dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | -13.26 | [-64.24, +37.71] | 1 | (metrics) (profiles) (logs) |
Bounds Checks: ✅ Passed
| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| ✅ | quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| ✅ | quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
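The three criteria above can be sketched as a small check; the struct and field names below are illustrative, not the detector's actual API, and the sample values come from the tables in this report:

```rust
// Illustrative model of one experiment row from the regression report.
struct Experiment {
    delta_mean_pct: f64, // Δ mean %
    ci_low: f64,         // lower bound of the Δ mean % CI
    ci_high: f64,        // upper bound of the Δ mean % CI
    erratic: bool,       // marked "erratic" in configuration
}

// A change is flagged as a regression only if all three criteria hold.
fn is_regression(e: &Experiment, tolerance: f64) -> bool {
    let big_enough = e.delta_mean_pct.abs() >= tolerance;
    // The CI excludes zero when both endpoints share the same sign.
    let ci_excludes_zero = e.ci_low > 0.0 || e.ci_high < 0.0;
    big_enough && ci_excludes_zero && !e.erratic
}

fn main() {
    // otlp_ingest_metrics_5mb_memory: +5.29 [+5.07, +5.51] -> flagged ❌
    let flagged = Experiment { delta_mean_pct: 5.29, ci_low: 5.07, ci_high: 5.51, erratic: false };
    // dsd_uds_1mb_3k_contexts_cpu: +9.04 [-44.55, +62.62] -> CI contains zero, not flagged ➖
    let noisy = Experiment { delta_mean_pct: 9.04, ci_low: -44.55, ci_high: 62.62, erratic: false };
    assert!(is_regression(&flagged, 5.0));
    assert!(!is_regression(&noisy, 5.0));
    println!("ok");
}
```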
Replicate Execution Details
We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.
Note: In the below tables we bucket failures by experiment, variant, and failure type. For each of these buckets we list out the replicate indexes that failed with an annotation signifying how many times said replicate failed with the given failure mode. In the below example the baseline variant of the experiment named experiment_with_failures had two replicates that failed by oom kills. Replicate 0, which failed 8 executions, and replicate 1 which failed 6 executions, all with the same failure mode.
| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
|---|---|---|---|---|---|
| experiment_with_failures | baseline | 0 (x8) 1 (x6) | Oom killed | Debug Dashboard |
The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.
❌ Retried Normal Replicate Execution Failures (non-profiling)
| Experiment | Variant | Replicates | Failure | Debug Dashboard |
|---|---|---|---|---|
| dsd_uds_1mb_3k_contexts_memory | comparison | 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_throughput | baseline | 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_traces_5mb_cpu | comparison | 0 | Failed to shutdown when requested | Debug Dashboard |
Force-pushed 64f9fcd to 93ea4b3
Force-pushed e8b5919 to 52f0a8a

Summary
Change Type
How did you test this PR?
References