
Conversation

@tobz
Member

@tobz tobz commented Feb 9, 2026

Summary

Change Type

  • Bug fix
  • New feature
  • Non-functional (chore, refactoring, docs)
  • Performance

How did you test this PR?

References

Copilot AI left a comment

Pull request overview

Adds a dynamic API server that can hot-swap HTTP and gRPC route sets at runtime based on dataspace registry assertions.

Changes:

  • Introduces DynamicAPIBuilder to multiplex dynamically-updated HTTP and gRPC routers on a single listener.
  • Adds new saluki-api types (EndpointType, DynamicHttpRoute, DynamicGrpcRoute) to describe and publish dynamic routes.
  • Updates saluki-app to expose the new module and include required dependencies.
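
Multiplexing HTTP and gRPC on a single listener typically hinges on the request's content type. The sketch below illustrates that dispatch decision; the `EndpointType` name is borrowed from the new saluki-api types, but its variants and the classification logic here are illustrative assumptions, not the PR's actual code:

```rust
// Borrowed name from the PR's saluki-api additions; the variants and the
// dispatch logic are illustrative assumptions, not the actual implementation.
#[derive(Debug, PartialEq)]
enum EndpointType {
    Http,
    Grpc,
}

// gRPC requests are HTTP/2 requests whose content type starts with
// "application/grpc"; everything else falls through to the HTTP router.
fn classify(content_type: Option<&str>) -> EndpointType {
    match content_type {
        Some(ct) if ct.starts_with("application/grpc") => EndpointType::Grpc,
        _ => EndpointType::Http,
    }
}

fn main() {
    assert_eq!(classify(Some("application/grpc")), EndpointType::Grpc);
    assert_eq!(classify(Some("application/grpc+proto")), EndpointType::Grpc);
    assert_eq!(classify(Some("application/json")), EndpointType::Http);
    assert_eq!(classify(None), EndpointType::Http);
    println!("content-type dispatch ok");
}
```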

Reviewed changes

Copilot reviewed 4 out of 5 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| lib/saluki-app/src/lib.rs | Exposes the new dynamic_api module. |
| lib/saluki-app/src/dynamic_api.rs | Implements the dynamic router swapping server and event loop. |
| lib/saluki-app/Cargo.toml | Adds deps needed for swapping/services (arc-swap, async-trait, hyper). |
| lib/saluki-api/src/lib.rs | Adds public types for endpoint selection and dynamic route publication. |


Comment on lines +83 to +93

```rust
pub fn with_self_signed_tls(self) -> Self {
    let CertifiedKey { cert, key_pair } = generate_simple_self_signed(["localhost".to_owned()]).unwrap();
    let cert_chain = vec![cert.der().clone()];
    let key = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));

    let config = ServerConfig::builder()
        .with_no_client_auth()
        .with_single_cert(cert_chain, key)
        .unwrap();

    self.with_tls_config(config)
```
Copilot AI Feb 9, 2026

with_self_signed_tls uses unwrap() twice, which can panic inside a library API. Consider changing this method to return Result<Self, E> (or Result<Self, GenericError>) and propagate failures from certificate/key generation and with_single_cert instead of panicking.

Suggested change

```diff
-pub fn with_self_signed_tls(self) -> Self {
-    let CertifiedKey { cert, key_pair } = generate_simple_self_signed(["localhost".to_owned()]).unwrap();
-    let cert_chain = vec![cert.der().clone()];
-    let key = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));
-    let config = ServerConfig::builder()
-        .with_no_client_auth()
-        .with_single_cert(cert_chain, key)
-        .unwrap();
-    self.with_tls_config(config)
+pub fn with_self_signed_tls(self) -> Result<Self, GenericError> {
+    let CertifiedKey { cert, key_pair } =
+        generate_simple_self_signed(["localhost".to_owned()]).map_err(GenericError::from)?;
+    let cert_chain = vec![cert.der().clone()];
+    let key = PrivateKeyDer::Pkcs8(PrivatePkcs8KeyDer::from(key_pair.serialize_der()));
+    let config = ServerConfig::builder()
+        .with_no_client_auth()
+        .with_single_cert(cert_chain, key)
+        .map_err(GenericError::from)?;
+    Ok(self.with_tls_config(config))
```

Comment on lines +212 to +216

```rust
maybe_update = http_subscription.recv() => {
    let Some(update) = maybe_update else {
        warn!("HTTP route subscription channel closed.");
        break;
    };
```
Copilot AI Feb 9, 2026

If the HTTP subscription channel closes, the event loop breaks without calling shutdown_handle.shutdown(). That can leave the HTTP server running while the supervisor future completes (or at least skip graceful shutdown). Consider calling shutdown_handle.shutdown() before breaking (or returning an error) so the server lifecycle is consistent.
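
The suggested fix can be sketched with a self-contained event loop over std channels; `ShutdownHandle` and the update type below are stand-ins for the saluki types, not the actual API:

```rust
use std::sync::mpsc;

// Stand-in for the server's shutdown handle (hypothetical; the real type
// lives in saluki-app and is not shown in this PR excerpt).
struct ShutdownHandle {
    is_shut_down: bool,
}

impl ShutdownHandle {
    fn shutdown(&mut self) {
        self.is_shut_down = true;
    }
}

// Drain route updates until the channel closes; on closure, trigger a
// graceful shutdown instead of silently breaking out of the loop.
fn run_event_loop(rx: mpsc::Receiver<String>, shutdown_handle: &mut ShutdownHandle) -> Vec<String> {
    let mut applied = Vec::new();
    loop {
        match rx.recv() {
            Ok(update) => applied.push(update),
            Err(_) => {
                // Channel closed: shut the server down so its lifecycle
                // stays consistent with the supervisor future.
                shutdown_handle.shutdown();
                break;
            }
        }
    }
    applied
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send("route-a".to_string()).unwrap();
    drop(tx); // close the channel, simulating the publisher going away

    let mut handle = ShutdownHandle { is_shut_down: false };
    let applied = run_event_loop(rx, &mut handle);

    assert_eq!(applied, vec!["route-a".to_string()]);
    assert!(handle.is_shut_down); // shutdown was triggered on closure
}
```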


```rust
maybe_update = grpc_subscription.recv() => {
    let Some(update) = maybe_update else {
        warn!("gRPC route subscription channel closed.");
```
Copilot AI Feb 9, 2026

Same issue as the HTTP subscription: if the gRPC subscription channel closes, the loop exits without triggering shutdown_handle.shutdown(). Consider shutting down the server (or returning an error) on this path as well.

Suggested change

```diff
-warn!("gRPC route subscription channel closed.");
+warn!("gRPC route subscription channel closed.");
+shutdown_handle.shutdown();
```

Comment on lines +177 to +180

```rust
fn call(&mut self, request: http::Request<AxumBody>) -> Self::Future {
    let mut router = Arc::unwrap_or_clone(self.inner_router.load_full());
    Box::pin(async move { router.call(request).await })
}
```
Copilot AI Feb 9, 2026

Arc::unwrap_or_clone(self.inner_router.load_full()) will almost always clone the entire Router for every request (because the Arc is typically shared). Consider structuring this so each request avoids cloning the full router (e.g., store an Arc<Router> and call into it via a service that is cheap to clone, or keep a ready-to-use service behind the swap).
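
The cost difference can be illustrated with plain std types; `ArcSwap` behaves like the `RwLock<Arc<_>>` stand-in below (with cheaper loads), and `Router` is replaced by a hypothetical payload type:

```rust
use std::sync::{Arc, RwLock};

// Stand-in for an expensive-to-clone router (hypothetical).
struct Router {
    routes: Vec<String>,
}

// Cheap per-request access: clone only the Arc (a refcount bump), never
// the Router itself. This mirrors keeping an Arc<Router> behind the swap
// instead of calling Arc::unwrap_or_clone on every request.
fn load_router(slot: &RwLock<Arc<Router>>) -> Arc<Router> {
    Arc::clone(&slot.read().unwrap())
}

// Swapping in a new route set replaces the Arc; in-flight requests keep
// using the old snapshot until their clone is dropped.
fn swap_router(slot: &RwLock<Arc<Router>>, new_router: Router) {
    *slot.write().unwrap() = Arc::new(new_router);
}

fn main() {
    let slot = RwLock::new(Arc::new(Router { routes: vec!["/health".into()] }));

    let before = load_router(&slot);
    swap_router(&slot, Router { routes: vec!["/health".into(), "/metrics".into()] });
    let after = load_router(&slot);

    assert_eq!(before.routes.len(), 1); // old snapshot still usable
    assert_eq!(after.routes.len(), 2); // new snapshot picked up
}
```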

Comment on lines +224 to +229
debug!(?handle, "Registering dynamic HTTP handler.");
http_handlers.insert(handle, route.router);
}
AssertionUpdate::Retracted(handle) => {
if http_handlers.swap_remove(&handle).is_some() {
debug!(?handle, "Withdrawing dynamic HTTP handler.");
Copilot AI Feb 9, 2026

Log levels are inconsistent between HTTP (debug) and gRPC (info) for the same event type (register/withdraw). Consider using the same level for both to keep operational noise predictable.

Suggested change

```diff
-        debug!(?handle, "Registering dynamic HTTP handler.");
-        http_handlers.insert(handle, route.router);
-    }
-    AssertionUpdate::Retracted(handle) => {
-        if http_handlers.swap_remove(&handle).is_some() {
-            debug!(?handle, "Withdrawing dynamic HTTP handler.");
+        info!(?handle, "Registering dynamic HTTP handler.");
+        http_handlers.insert(handle, route.router);
+    }
+    AssertionUpdate::Retracted(handle) => {
+        if http_handlers.swap_remove(&handle).is_some() {
+            info!(?handle, "Withdrawing dynamic HTTP handler.");
```

Comment on lines +249 to +254
info!(handle = ?handle, "Registering dynamic gRPC handler.");
grpc_handlers.insert(handle, route.router);
}
AssertionUpdate::Retracted(handle) => {
if grpc_handlers.swap_remove(&handle).is_some() {
info!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
Copilot AI Feb 9, 2026

Log levels are inconsistent between HTTP (debug) and gRPC (info) for the same event type (register/withdraw). Consider using the same level for both to keep operational noise predictable.

Suggested change

```diff
-        info!(handle = ?handle, "Registering dynamic gRPC handler.");
-        grpc_handlers.insert(handle, route.router);
-    }
-    AssertionUpdate::Retracted(handle) => {
-        if grpc_handlers.swap_remove(&handle).is_some() {
-            info!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
+        debug!(handle = ?handle, "Registering dynamic gRPC handler.");
+        grpc_handlers.insert(handle, route.router);
+    }
+    AssertionUpdate::Retracted(handle) => {
+        if grpc_handlers.swap_remove(&handle).is_some() {
+            debug!(handle = ?handle, "Withdrawing dynamic gRPC handler.");
```

@pr-commenter

pr-commenter bot commented Feb 9, 2026

Binary Size Analysis (Agent Data Plane)

Target: 9fe09ef (baseline) vs 93ea4b3 (comparison) diff
Analysis Type: Stripped binaries (debug symbols excluded)
Baseline Size: 26.96 MiB
Comparison Size: 27.14 MiB
Size Change: +185.84 KiB (+0.67%)
Pass/Fail Threshold: +5%
Result: PASSED ✅
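
As a sanity check on the reported percentage, the size delta can be recomputed from the baseline and delta figures above:

```rust
fn main() {
    // Values copied from the binary size report above.
    let baseline_mib = 26.96_f64;
    let delta_kib = 185.84_f64;

    // Convert the delta to MiB and express it as a percentage of baseline.
    let delta_mib = delta_kib / 1024.0;
    let pct = delta_mib / baseline_mib * 100.0;

    // Matches the reported +0.67% once rounded to two decimal places.
    assert!((pct - 0.67).abs() < 0.01);
    println!("size change: +{:.2}%", pct);
}
```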

Changes by Module

| Module | File Size | Symbols |
| --- | --- | --- |
| saluki_core::runtime::supervisor | +69.67 KiB | 59 |
| core | +62.23 KiB | 11204 |
| tokio | +24.16 KiB | 2717 |
| agent_data_plane::internal::initialize_and_launch_runtime | -22.07 KiB | 2 |
| agent_data_plane::internal::create_internal_supervisor | +16.05 KiB | 1 |
| saluki_app::memory::MemoryBoundsConfiguration | -13.70 KiB | 5 |
| [sections] | +12.91 KiB | 7 |
| agent_data_plane::internal::control_plane | -12.14 KiB | 26 |
| std | -11.16 KiB | 284 |
| saluki_core::runtime::process | +11.03 KiB | 9 |
| anyhow | +8.61 KiB | 1293 |
| agent_data_plane::cli::run | +7.81 KiB | 76 |
| saluki_core::runtime::dedicated | +5.63 KiB | 4 |
| agent_data_plane::internal::observability | +5.60 KiB | 16 |
| [Unmapped] | -5.57 KiB | 1 |
| saluki_core::topology::running | +5.50 KiB | 31 |
| alloc | +5.22 KiB | 1227 |
| saluki_app::metrics::collect_runtime_metrics | -4.73 KiB | 1 |
| matchit | +4.22 KiB | 53 |
| saluki_core::runtime::restart | +3.50 KiB | 7 |

Detailed Symbol Changes

```
    FILE SIZE        VM SIZE
 --------------  --------------
  [NEW] +1.79Mi  [NEW] +1.79Mi    std::thread::local::LocalKey<T>::with::h346695abe3c51b40
  +1.3%  +205Ki  +1.3%  +169Ki    [29837 Others]
  [NEW]  +113Ki  [NEW]  +113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::h78a1190544a24f76
  [NEW] +68.2Ki  [NEW] +68.1Ki    h2::hpack::decoder::Decoder::try_decode_string::hd1c9a39c48e78a6e
  [NEW] +63.8Ki  [NEW] +63.7Ki    agent_data_plane::cli::run::handle_run_command::_{{closure}}::h06c0894d865ef0e8
  [NEW] +63.7Ki  [NEW] +63.6Ki    saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hd35493454b2aefa0
  [NEW] +62.3Ki  [NEW] +62.2Ki    agent_data_plane::main::_{{closure}}::h3b8c24a68a67fe4a
  [NEW] +59.3Ki  [NEW] +59.1Ki    _<agent_data_plane::internal::control_plane::PrivilegedApiWorker as saluki_core::runtime::supervisor::Supervisable>::initialize::_{{closure}}::h0916f5a82a9d11a9
  [NEW] +48.8Ki  [NEW] +48.7Ki    saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::h6b2ac096928e54f5
  [NEW] +47.7Ki  [NEW] +47.6Ki    moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::h0ef6755dbc77ea2a
  [NEW] +46.1Ki  [NEW] +46.0Ki    h2::proto::connection::Connection<T,P,B>::poll::h1671860da0918c66
  [DEL] -46.1Ki  [DEL] -46.0Ki    h2::proto::connection::Connection<T,P,B>::poll::h2aedcbe1089b311c
  [DEL] -47.7Ki  [DEL] -47.6Ki    moka::sync::base_cache::Inner<K,V,S>::do_run_pending_tasks::ha97a2f55834f17a3
  [DEL] -48.8Ki  [DEL] -48.7Ki    saluki_app::bootstrap::AppBootstrapper::bootstrap::_{{closure}}::hbb5d0d8e944d3a21
  [DEL] -57.8Ki  [DEL] -57.7Ki    agent_data_plane::cli::run::handle_run_command::_{{closure}}::hd8d13580d16cc8a3
  [DEL] -62.3Ki  [DEL] -62.2Ki    agent_data_plane::main::_{{closure}}::h1c7c640002ce8ad1
  [DEL] -63.8Ki  [DEL] -63.6Ki    saluki_components::common::datadog::io::run_endpoint_io_loop::_{{closure}}::hc194831d658db8cc
  [DEL] -68.2Ki  [DEL] -68.1Ki    h2::hpack::decoder::Decoder::try_decode_string::hbfc0bb5a77e7669f
  [DEL] -84.5Ki  [DEL] -84.4Ki    agent_data_plane::internal::control_plane::spawn_control_plane::_{{closure}}::heace61ab0f2bde0c
  [DEL]  -113Ki  [DEL]  -113Ki    agent_data_plane::cli::run::create_topology::_{{closure}}::h6a8b41fa76a89e2c
  [DEL] -1.79Mi  [DEL] -1.79Mi    std::thread::local::LocalKey<T>::with::h0d8770940d38805b
  +0.7%  +185Ki  +0.6%  +150Ki    TOTAL
```

@pr-commenter

pr-commenter bot commented Feb 9, 2026

Regression Detector (Agent Data Plane)

Regression Detector Results

Run ID: e715a53f-d18d-44c9-a9c2-ac9cca23abe8

Baseline: 9fe09ef
Comparison: 93ea4b3
Diff

❌ Experiments with retried target crashes

This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.

  • otlp_ingest_traces_5mb_cpu
  • otlp_ingest_logs_5mb_throughput
  • dsd_uds_1mb_3k_contexts_memory

Optimization Goals: ❌ Regression(s) detected

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- |
| otlp_ingest_metrics_5mb_memory | memory utilization | +5.29 | [+5.07, +5.51] | 1 | (metrics) (profiles) (logs) |

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- |
| otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.17] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_logs_5mb_memory | memory utilization | -0.76 | [-1.33, -0.18] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.30 | [-6.50, +3.90] | 1 | (metrics) (profiles) (logs) |

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- |
| dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | +9.04 | [-44.55, +62.62] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | +8.11 | [-23.31, +39.54] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_metrics_5mb_memory | memory utilization | +5.29 | [+5.07, +5.51] | 1 | (metrics) (profiles) (logs) |
| quality_gates_rss_idle | memory utilization | +2.39 | [+2.32, +2.45] | 1 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_low | memory utilization | +1.83 | [+1.66, +2.00] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_500mb_3k_contexts_throughput | ingress throughput | +1.64 | [+1.51, +1.76] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_100mb_3k_contexts_memory | memory utilization | +1.43 | [+1.23, +1.62] | 1 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_medium | memory utilization | +1.33 | [+1.14, +1.53] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_1mb_3k_contexts_memory | memory utilization | +1.32 | [+1.13, +1.51] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_10mb_3k_contexts_memory | memory utilization | +1.01 | [+0.81, +1.21] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.88 | [+0.70, +1.06] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_500mb_3k_contexts_memory | memory utilization | +0.36 | [+0.18, +0.54] | 1 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_heavy | memory utilization | +0.08 | [-0.05, +0.21] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_logs_5mb_throughput | ingress throughput | +0.03 | [-0.10, +0.17] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_metrics_5mb_throughput | ingress throughput | +0.02 | [-0.10, +0.14] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_512kb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.05, +0.05] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_1mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.06, +0.06] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_10mb_3k_contexts_throughput | ingress throughput | -0.00 | [-0.15, +0.15] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_traces_5mb_throughput | ingress throughput | -0.00 | [-0.09, +0.08] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_100mb_3k_contexts_throughput | ingress throughput | -0.01 | [-0.06, +0.04] | 1 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_ultraheavy | memory utilization | -0.16 | [-0.30, -0.03] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | -0.54 | [-1.88, +0.79] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | -0.59 | [-6.75, +5.57] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_logs_5mb_memory | memory utilization | -0.76 | [-1.33, -0.18] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_metrics_5mb_cpu | % cpu utilization | -0.94 | [-6.46, +4.58] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_logs_5mb_cpu | % cpu utilization | -1.30 | [-6.50, +3.90] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_traces_5mb_cpu | % cpu utilization | -1.92 | [-4.54, +0.70] | 1 | (metrics) (profiles) (logs) |
| otlp_ingest_traces_5mb_memory | memory utilization | -2.83 | [-3.07, -2.59] | 1 | (metrics) (profiles) (logs) |
| dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | -13.26 | [-64.24, +37.71] | 1 | (metrics) (profiles) (logs) |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- |
| quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
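
The three criteria reduce to a small predicate. The function below is an illustrative restatement of those rules, not code from the detector itself:

```rust
// A change is flagged as a regression only when it is large enough,
// statistically distinguishable from zero, and not marked erratic.
fn is_regression(delta_mean_pct: f64, ci_low: f64, ci_high: f64, erratic: bool) -> bool {
    let big_enough = delta_mean_pct.abs() >= 5.0; // effect size tolerance
    let ci_excludes_zero = ci_low > 0.0 || ci_high < 0.0; // 90% confidence interval
    big_enough && ci_excludes_zero && !erratic
}

fn main() {
    // otlp_ingest_metrics_5mb_memory: +5.29 [+5.07, +5.51] -> flagged
    assert!(is_regression(5.29, 5.07, 5.51, false));
    // dsd_uds_1mb_3k_contexts_cpu: +9.04 [-44.55, +62.62] -> CI contains zero
    assert!(!is_regression(9.04, -44.55, 62.62, false));
    // otlp_ingest_logs_5mb_memory: -0.76 [-1.33, -0.18], erratic -> ignored
    assert!(!is_regression(-0.76, -1.33, -0.18, true));
}
```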

Replicate Execution Details

We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried if there are any failures, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running replicates a replicate execution. This section lists all replicate executions that failed due to the target crashing or being oom killed.

Note: In the tables below we bucket failures by experiment, variant, and failure type. For each bucket we list the replicate indexes that failed, annotated with how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed by oom kill: replicate 0 failed 8 executions and replicate 1 failed 6 executions, all with the same failure mode.

| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
| --- | --- | --- | --- | --- | --- |
| experiment_with_failures | baseline | 0 (x8), 1 (x6) | Oom killed | | Debug Dashboard |

The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.

❌ Retried Normal Replicate Execution Failures (non-profiling)

| Experiment | Variant | Replicates | Failure | Debug Dashboard |
| --- | --- | --- | --- | --- |
| dsd_uds_1mb_3k_contexts_memory | comparison | 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_logs_5mb_throughput | baseline | 1 | Failed to shutdown when requested | Debug Dashboard |
| otlp_ingest_traces_5mb_cpu | comparison | 0 | Failed to shutdown when requested | Debug Dashboard |

@tobz tobz force-pushed the tobz/dynamic-api-endpoint-routes branch from 64f9fcd to 93ea4b3 Compare February 11, 2026 16:24
@tobz tobz force-pushed the tobz/runtime-system-state-mgmt-primitives branch from e8b5919 to 52f0a8a Compare February 11, 2026 16:24

Labels

  • area/core: Core functionality, event model, etc.
  • area/observability: Internal observability of ADP and Saluki.
