1 change: 1 addition & 0 deletions .gitignore
@@ -108,3 +108,4 @@ logs/

# Claude Code guidance file (local only)
CLAUDE.md
.claude/
2 changes: 1 addition & 1 deletion ballista/scheduler/Cargo.toml
@@ -34,7 +34,7 @@ required-features = ["build-binary"]

[features]
build-binary = ["clap", "tracing-subscriber", "tracing-appender", "tracing", "ballista-core/build-binary"]
default = ["build-binary", "substrait"]
default = ["build-binary", "rest-api"]
Default feature silently drops substrait when adding rest-api

High Severity

The default feature list changed from ["build-binary", "substrait"] to ["build-binary", "rest-api"], which removes substrait from the defaults. This appears accidental — the intent was to add rest-api to defaults, but substrait was replaced rather than kept alongside it. Anyone relying on default features for Substrait support will silently lose that functionality.


Owner Author

value:good-to-have; category:bug; feedback: The Bugbot AI reviewer is correct! Adding the rest-api feature as a default is related to the PR. But dropping the "substrait" one is not and it should be done in a separate PR, so that it appears in the changelog to let the users know about this change.


This changes the scheduler crate default features from substrait to rest-api; if downstream users relied on substrait being enabled by default, this is a behavior change. Was the intent to drop substrait from defaults rather than add rest-api alongside it?

Severity: low


Owner Author

value:good-to-have; category:bug; feedback: The Augment AI reviewer is correct! Adding the rest-api feature as a default is related to the PR. But dropping the "substrait" one is not and it should be done in a separate PR, so that it appears in the changelog to let the users know about this change.


⚠️ Potential issue | 🟠 Major

Keep substrait in the default feature set.

This silently removes Substrait support from default scheduler builds, even though the gRPC query path still supports it behind feature = "substrait". That is a breaking behavior change unrelated to exposing REST plan details. If the goal is just to turn on the REST API by default, please add rest-api without dropping substrait.

Suggested change
-default = ["build-binary", "rest-api"]
+default = ["build-binary", "rest-api", "substrait"]

Owner Author

value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! Adding the rest-api feature as a default is related to the PR. But dropping the "substrait" one is not and it should be done in a separate PR, so that it appears in the changelog to let the users know about this change.

# job info can cache stage plans, in some cases where
# task plans can be re-computed, cache behavior may need to be disabled.
disable-stage-plan-cache = []
75 changes: 75 additions & 0 deletions ballista/scheduler/src/api/handlers.rs
@@ -56,6 +56,9 @@ pub struct JobResponse {
pub num_stages: usize,
pub completed_stages: usize,
pub percent_complete: u8,
pub logical_plan: Option<String>,
pub physical_plan: Option<String>,
pub stage_plan: Option<String>,
}

#[derive(Debug, serde::Serialize)]
@@ -172,13 +175,85 @@ pub async fn get_jobs<
num_stages: job.num_stages,
completed_stages: job.completed_stages,
percent_complete,
logical_plan: None,
physical_plan: None,
stage_plan: None,
}
})
.collect();

Ok(Json(jobs))
}

pub async fn get_job<
T: AsLogicalPlan + Clone + Send + Sync + 'static,
U: AsExecutionPlan + Send + Sync + 'static,
>(
State(data_server): State<Arc<SchedulerServer<T, U>>>,
Path(job_id): Path<String>,
) -> Result<impl IntoResponse, StatusCode> {
let graph = data_server
.state
.task_manager
.get_job_execution_graph(&job_id)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?
.ok_or(StatusCode::NOT_FOUND)?;
let stage_plan = format!("{:?}", graph);

stage_plan is currently built from format!("{:?}", graph), which can be very large and is not a stable, intentional API representation (it may also expose a lot of internal details). Consider whether the REST payload should be more structured/stable or otherwise bounded in size.

Severity: medium


Owner Author

value:good-to-have; category:bug; feedback: The Augment AI reviewer is correct! (JSON-)structured plans would be easier to consume by software clients (e.g. a UI/TUI). Using the Debug implementation of the plans at least makes it easier to change the format without concern that it will break the structure.
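To make the bounded alternative concrete, here is a minimal sketch of building `stage_plan` from explicit per-stage plan strings instead of `format!("{:?}", graph)`. The `Stage` struct and its fields are invented for illustration; the real scheduler types and accessors differ.

```rust
// Illustrative stand-in for a scheduler stage; real types differ.
struct Stage {
    id: usize,
    plan: String, // e.g. a rendered per-stage physical plan
}

// Build `stage_plan` from explicit per-stage plan strings, with a size
// cap, instead of dumping the whole execution graph's Debug output.
fn stage_plan_summary(stages: &[Stage], max_len: usize) -> String {
    let mut out = String::new();
    for stage in stages {
        out.push_str(&format!("Stage {}:\n{}\n", stage.id, stage.plan));
        if out.len() > max_len {
            // Assumes ASCII plan text; production code should truncate
            // on a char boundary to avoid panicking on UTF-8 input.
            out.truncate(max_len);
            out.push_str("… [truncated]");
            break;
        }
    }
    out
}
```

This keeps the response size bounded and decouples the field from Rust's Debug formatting, at the cost of deciding on an explicit per-stage rendering.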

let job = graph.as_ref();
let (plain_status, job_status) = match &job.status().status {
Some(Status::Queued(_)) => ("Queued".to_string(), "Queued".to_string()),
Some(Status::Running(_)) => ("Running".to_string(), "Running".to_string()),
Some(Status::Failed(error)) => {
("Failed".to_string(), format!("Failed: {}", error.error))
}
Some(Status::Successful(completed)) => {
let num_rows = completed
.partition_location
.iter()
.map(|p| p.partition_stats.as_ref().map(|s| s.num_rows).unwrap_or(0))
.sum::<i64>();
let num_rows_term = if num_rows == 1 { "row" } else { "rows" };
let num_partitions = completed.partition_location.len();
let num_partitions_term = if num_partitions == 1 {
"partition"
} else {
"partitions"
};
(
"Completed".to_string(),
format!(
"Completed. Produced {} {} containing {} {}. Elapsed time: {} ms.",
num_partitions,
num_partitions_term,
num_rows,
num_rows_term,
job.end_time() - job.start_time()
),
)
}
_ => ("Invalid".to_string(), "Invalid State".to_string()),
};

let num_stages = job.stage_count();
let completed_stages = job.completed_stages();
let percent_complete =
((completed_stages as f32 / num_stages as f32) * 100_f32) as u8;

Division by zero when num_stages is zero

Low Severity

The new get_job handler computes percent_complete by dividing completed_stages by num_stages without guarding against num_stages being zero. If stage_count() ever returns zero, this produces NaN from the f32 division, which then gets cast to u8 — yielding a potentially confusing result rather than a meaningful value.
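A guarded version could look like this (the helper name is an assumption; the handler currently computes this inline):

```rust
// Return a 0-100 completion percentage, treating "no stages" as 0%
// instead of letting 0/0 produce NaN that is then cast to u8.
fn percent_complete(completed_stages: usize, num_stages: usize) -> u8 {
    if num_stages == 0 {
        return 0;
    }
    ((completed_stages as f32 / num_stages as f32) * 100_f32) as u8
}
```

For what it's worth, float-to-int `as` casts in Rust have been saturating since 1.45 (NaN becomes 0), so the unguarded expression yields 0 rather than anything undefined; the explicit branch mainly makes the intent readable.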



Duplicated status-mapping logic across two handlers

Low Severity

The new get_job handler contains a nearly identical copy of the status-matching and percent_complete computation logic from get_jobs. This duplicated block (mapping Status variants to strings, computing rows/partitions formatting, calculating percent) means future bug fixes or status changes need to be applied in two places, risking divergence.
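A sketch of the suggested extraction, using a simplified `JobState` enum as a stand-in for the protobuf-generated status types (names and fields here are illustrative, not the real API):

```rust
// Simplified stand-in for the generated job-status types.
enum JobState {
    Queued,
    Running,
    Failed { error: String },
    Successful { num_rows: i64, num_partitions: usize, elapsed_ms: u64 },
}

// Shared mapping that both `get_jobs` and `get_job` could call:
// returns (plain_status, detailed_status).
fn describe_status(state: &JobState) -> (String, String) {
    match state {
        JobState::Queued => ("Queued".to_string(), "Queued".to_string()),
        JobState::Running => ("Running".to_string(), "Running".to_string()),
        JobState::Failed { error } => {
            ("Failed".to_string(), format!("Failed: {error}"))
        }
        JobState::Successful { num_rows, num_partitions, elapsed_ms } => {
            let rows = if *num_rows == 1 { "row" } else { "rows" };
            let parts = if *num_partitions == 1 { "partition" } else { "partitions" };
            (
                "Completed".to_string(),
                format!(
                    "Completed. Produced {num_partitions} {parts} containing {num_rows} {rows}. Elapsed time: {elapsed_ms} ms."
                ),
            )
        }
    }
}
```

With a helper like this, a future change to the status wording only has to be made once.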



Ok(Json(JobResponse {
job_id: job.job_id().to_string(),
job_name: job.job_name().to_string(),
job_status,
status: plain_status,
num_stages,
completed_stages,
percent_complete,
logical_plan: job.logical_plan().map(str::to_owned),
physical_plan: job.physical_plan().map(str::to_owned),
stage_plan: Some(stage_plan),
Comment on lines +195 to +253

⚠️ Potential issue | 🟠 Major

stage_plan should not be the execution graph's Debug dump.

Line 202 serializes the entire graph with format!("{:?}", graph), so stage_plan is now coupled to Rust Debug output rather than an explicit stage-plan format. That makes the API brittle, drags unrelated internals into the response, and will change whenever the graph structs change. Please build this field from the per-stage plans explicitly instead of exposing the whole graph debug string.


}))
}
Comment on lines +188 to +255


Security: High Severity

The new get_job endpoint exposes logical and physical execution plans, as well as the full execution graph, via an unauthenticated REST API. These plans are derived from SQL queries and often contain sensitive information such as database credentials, API keys, or Personally Identifiable Information (PII) embedded as literals. Since the Ballista scheduler's REST API lacks authentication and authorization, any user with network access can retrieve this sensitive information by discovering job IDs (e.g., via the /api/jobs endpoint). Additionally, the code block for determining job status (lines 204-236) is duplicated from the get_jobs function. Consider extracting this logic into a private helper function to improve maintainability and reduce code duplication. This helper could take &JobStatus, start_time, and end_time as arguments and return the (String, String) tuple for plain_status and job_status.


pub async fn cancel_job<
T: AsLogicalPlan + Clone + Send + Sync + 'static,
U: AsExecutionPlan + Send + Sync + 'static,
1 change: 1 addition & 0 deletions ballista/scheduler/src/api/mod.rs
@@ -31,6 +31,7 @@ pub fn get_routes<
.route("/api/executors", get(handlers::get_executors::<T, U>))
.route("/api/jobs", get(handlers::get_jobs::<T, U>))
.route("/api/job/{job_id}", patch(handlers::cancel_job::<T, U>))
.route("/api/job/{job_id}", get(handlers::get_job::<T, U>))
Comment on lines 33 to +34


critical

Adding a new route for the same path /api/job/{job_id} will overwrite the existing PATCH route, effectively disabling the cancel_job functionality. You should chain the method handlers for the same path.

See axum docs on routing to multiple methods.

Suggested change
.route("/api/job/{job_id}", patch(handlers::cancel_job::<T, U>))
.route("/api/job/{job_id}", get(handlers::get_job::<T, U>))
.route("/api/job/{job_id}", get(handlers::get_job::<T, U>).patch(handlers::cancel_job::<T, U>))

Comment on lines 33 to +34

⚠️ Potential issue | 🟠 Major

Protect the per-job plan endpoint before enabling it by default.

This route exposes raw logical, physical, and stage-plan data for any job, and I don't see any auth/authorization layer in this router. Those plans can contain table names, file paths, and literals, so adding this endpoint to the default REST surface creates a new disclosure path unless it is explicitly protected.


Owner Author

value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is correct! Since the plan may contain sensitive data the REST API should not be enabled by default and it should be documented that the Scheduler should not be reachable from unauthorized clients.
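One way to act on that feedback is to register the plan-exposing route only when an operator opts in. The sketch below uses a toy router and a hypothetical `expose_job_plans` flag so it stays self-contained; the real code would gate the axum route (or the whole REST feature) instead.

```rust
// Toy router standing in for axum's Router, to illustrate gating only.
#[derive(Default)]
struct ToyRouter {
    paths: Vec<String>,
}

impl ToyRouter {
    fn route(mut self, path: &str) -> Self {
        self.paths.push(path.to_string());
        self
    }
}

// Register the plan-revealing endpoint only when explicitly enabled,
// e.g. via a hypothetical `expose_job_plans` scheduler setting.
fn get_routes(expose_job_plans: bool) -> ToyRouter {
    let mut router = ToyRouter::default()
        .route("/api/jobs")
        .route("/api/executors");
    if expose_job_plans {
        router = router.route("/api/job/{job_id}");
    }
    router
}
```

The same pattern works whether the gate is a config flag, a Cargo feature, or an auth middleware wrapping the admin-only routes.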

.route(
"/api/job/{job_id}/stages",
get(handlers::get_query_stages::<T, U>),
16 changes: 16 additions & 0 deletions ballista/scheduler/src/state/aqe/mod.rs
@@ -109,6 +109,10 @@ pub(crate) struct AdaptiveExecutionGraph {
failed_stage_attempts: HashMap<usize, HashSet<usize>>,
/// Session config for this job
session_config: Arc<SessionConfig>,
/// Logical plan as a human-readable string, captured at submission time.
logical_plan: Option<String>,
/// Physical plan as a human-readable string, captured at submission time.
physical_plan: Option<String>,
}

impl AdaptiveExecutionGraph {
@@ -125,6 +129,8 @@
plan: Arc<dyn ExecutionPlan>,
queued_at: u64,
session_config: Arc<SessionConfig>,
logical_plan: Option<String>,
physical_plan: Option<String>,
) -> ballista_core::error::Result<Self> {
let mut planner =
AdaptivePlanner::try_new(&session_config, plan, job_name.to_owned())?;
@@ -178,6 +184,8 @@
task_id_gen: 0,
failed_stage_attempts: HashMap::new(),
session_config,
logical_plan,
physical_plan,
})
}
}
@@ -503,6 +511,14 @@ impl ExecutionGraph for AdaptiveExecutionGraph {
&self.status
}

fn logical_plan(&self) -> Option<&str> {
self.logical_plan.as_deref()
}

fn physical_plan(&self) -> Option<&str> {
self.physical_plan.as_deref()
}

fn start_time(&self) -> u64 {
self.start_time
}
22 changes: 22 additions & 0 deletions ballista/scheduler/src/state/execution_graph.rs
@@ -112,6 +112,12 @@ pub trait ExecutionGraph: Debug {
/// Returns the current job status.
fn status(&self) -> &JobStatus;

/// Returns the logical plan as a string, if captured at submission time.
fn logical_plan(&self) -> Option<&str>;

/// Returns the physical plan as a string, if captured at submission time.
fn physical_plan(&self) -> Option<&str>;

/// Returns the timestamp when this job started execution.
fn start_time(&self) -> u64;

@@ -263,6 +269,10 @@ pub struct StaticExecutionGraph {
failed_stage_attempts: HashMap<usize, HashSet<usize>>,
/// Session config for this job
session_config: Arc<SessionConfig>,
/// Logical plan as a human-readable string, captured at submission time.
logical_plan: Option<String>,
/// Physical plan as a human-readable string, captured at submission time.
physical_plan: Option<String>,
}

/// Information about a currently running task.
@@ -298,6 +308,8 @@ impl StaticExecutionGraph {
queued_at: u64,
session_config: Arc<SessionConfig>,
planner: &mut dyn DistributedPlanner,
logical_plan: Option<String>,
physical_plan: Option<String>,
) -> Result<Self> {
let shuffle_stages =
planner.plan_query_stages(job_id, plan, session_config.options())?;
@@ -330,6 +342,8 @@
task_id_gen: 0,
failed_stage_attempts: HashMap::new(),
session_config,
logical_plan,
physical_plan,
})
}

@@ -635,6 +649,14 @@ impl ExecutionGraph for StaticExecutionGraph {
&self.status
}

fn logical_plan(&self) -> Option<&str> {
self.logical_plan.as_deref()
}

fn physical_plan(&self) -> Option<&str> {
self.physical_plan.as_deref()
}

fn start_time(&self) -> u64 {
self.start_time
}
4 changes: 4 additions & 0 deletions ballista/scheduler/src/state/execution_graph_dot.rs
@@ -612,6 +612,8 @@ filter_expr="]
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
}

@@ -648,6 +650,8 @@ filter_expr="]
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
}
}
7 changes: 7 additions & 0 deletions ballista/scheduler/src/state/mod.rs
@@ -443,11 +443,16 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> SchedulerState<T,
None
};

let logical_plan_str = plan.display_indent().to_string();

let plan = session_ctx.state().create_physical_plan(plan).await?;
debug!(
"Physical plan: {}",
DisplayableExecutionPlan::new(plan.as_ref()).indent(false)
);
let physical_plan_str = DisplayableExecutionPlan::new(plan.as_ref())

physical_plan_str is captured before the later transform_down that can replace nodes (e.g., EmptyExec / distributed explain), so the plan returned by the API may not match the actual executed plan (plan.data). Consider whether the captured physical plan should reflect the final transformed plan.

Severity: medium


Owner Author

value:good-to-have; category:bug; feedback: The Augment AI reviewer is correct! The physical plan is stringified before the final transformation and thus it may show wrong/obsolete information. This should either be corrected, or a comment should be added explaining why it is done before the final transformation.

.indent(false)
.to_string();
Comment on lines +446 to +455


Security: High Severity

Capturing and storing raw logical and physical plans as strings without any redaction or masking of sensitive literals leads to information disclosure when these plans are exposed via the REST API. SQL queries frequently contain sensitive data in literals which are preserved in these plan representations.

Comment on lines 448 to +455

⚠️ Potential issue | 🟠 Major

Capture the physical-plan string after the final rewrite.

Line 453 snapshots the physical plan before the transform_down rewrite on Lines 457-483, but Line 494 submits the transformed plan. For ExplainExec replacement and zero-partition rewrites, the REST API will report a different physical plan than the one actually executed. Please build physical_plan_str from plan.data after the transformation instead.

Suggested change
-        let physical_plan_str = DisplayableExecutionPlan::new(plan.as_ref())
-            .indent(false)
-            .to_string();
-
         let plan = plan.transform_down(&|node: Arc<dyn ExecutionPlan>| {
             if node.output_partitioning().partition_count() == 0 {
                 let empty: Arc<dyn ExecutionPlan> =
                     Arc::new(EmptyExec::new(node.schema()));
                 Ok(Transformed::yes(empty))
@@
         debug!(
             "Transformed physical plan: {}",
             DisplayableExecutionPlan::new(plan.data.as_ref()).indent(false)
         );
+        let physical_plan_str = DisplayableExecutionPlan::new(plan.data.as_ref())
+            .indent(false)
+            .to_string();

Also applies to: 489-499


Owner Author

value:good-to-have; category:bug; feedback: The CodeRabbit AI reviewer is not correct! The physical plan is the plan produced by DataFusion. The transformed plan is shown as "stage_plan".


let plan = plan.transform_down(&|node: Arc<dyn ExecutionPlan>| {
if node.output_partitioning().partition_count() == 0 {
@@ -490,6 +495,8 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> SchedulerState<T,
queued_at,
session_config,
subscriber,
Some(logical_plan_str),
Some(physical_plan_str),
)
.await?;

6 changes: 6 additions & 0 deletions ballista/scheduler/src/state/task_manager.rs
@@ -278,6 +278,8 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> TaskManager<T, U>
queued_at: u64,
session_config: Arc<SessionConfig>,
subscriber: Option<JobStatusSubscriber>,
logical_plan: Option<String>,
physical_plan: Option<String>,
) -> Result<()> {
let mut planner = DefaultDistributedPlanner::new();

@@ -294,6 +296,8 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> TaskManager<T, U>
plan,
queued_at,
session_config,
logical_plan,
physical_plan,
)?) as ExecutionGraphBox
} else {
debug!("Using static query planner for job planning");
@@ -306,6 +310,8 @@ impl<T: 'static + AsLogicalPlan, U: 'static + AsExecutionPlan> TaskManager<T, U>
queued_at,
session_config,
&mut planner,
logical_plan,
physical_plan,
)?) as ExecutionGraphBox
};

12 changes: 12 additions & 0 deletions ballista/scheduler/src/test_utils.rs
@@ -920,6 +920,8 @@ pub async fn test_aggregation_plan_with_job_id(
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap()
}
@@ -968,6 +970,8 @@ pub async fn test_two_aggregations_plan(partition: usize) -> StaticExecutionGraph
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap()
}
@@ -1008,6 +1012,8 @@ pub async fn test_coalesce_plan(partition: usize) -> StaticExecutionGraph {
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap()
}
@@ -1068,6 +1074,8 @@ pub async fn test_join_plan(partition: usize) -> StaticExecutionGraph {
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap();

@@ -1110,6 +1118,8 @@ pub async fn test_union_all_plan(partition: usize) -> StaticExecutionGraph {
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap();

@@ -1152,6 +1162,8 @@ pub async fn test_union_plan(partition: usize) -> StaticExecutionGraph {
0,
Arc::new(SessionConfig::new_with_ballista()),
&mut planner,
None,
None,
)
.unwrap();
