[SPARK-53812][SDP] Refactor DefineDataset and DefineFlow protos to group related properties and future-proof #52532
Conversation
mostly LGTM with one minor comment
optional SourceCodeLocation source_code_location = 6;

oneof details {
  WriteRelationFlowDetails relation_flow_details = 7;
open to discussion, wdyt we define an ImplicitFlowDetails that only has relation, and a StandaloneFlowDetails that has flow_name, relation, and potentially an optional bool once in the future.
The way I had been thinking about it, the "details" are conceptually independent from whether the flow is implicitly specified as part of the dataset definition. The details describe the computation that happens during flow execution.
Even though our current APIs don't permit it, if we add new flow types in the future, I think we might want the option to specify them either...
- As part of the statement that defines the table
- As a standalone flow
E.g.
CREATE STREAMING TABLE t1
AS MERGE ...
or
CREATE STREAMING TABLE t1
CREATE FLOW f1
AS MERGE INTO t1 ...
And it would be good to only need to add one MergeIntoFlowDetails message in this case.
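To make that concrete, here is a rough sketch of what the details oneof might look like in that case. Only WriteRelationFlowDetails and field number 7 come from the diff above; MergeIntoFlowDetails and its field number are hypothetical and not part of this PR:

oneof details {
  // Existing entry from this PR: a flow that writes the results of a relation to its target.
  WriteRelationFlowDetails relation_flow_details = 7;
  // Hypothetical future entry: one message covers the MERGE-based flow whether it is defined
  // implicitly with the table or as a standalone CREATE FLOW statement.
  MergeIntoFlowDetails merge_into_flow_details = 8;
}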
The details describe the computation that happens during flow execution.
Agreed, and the direction I’m proposing is aligned with that philosophy. The idea is to categorize flow definitions with a finer granularity while keeping the schema extensible for future flow types (e.g., MergeIntoFlow). This approach preserves the semantic separation between implicit and standalone flows while allowing us to evolve the protocol incrementally as new flow behaviors emerge.
message DefineFlow {
  ...
  oneof details {
    // Can be planned as a CompleteFlow, StreamingFlow, ExternalSinkFlow, or ForeachBatchSinkFlow,
    // depending on the dataset it writes to.
    ImplicitlyFlow implicit_flow = 1;
    // Can either be an AppendFlow or an UpdateFlow.
    StandaloneFlow standalone_flow = 2;
    AutoCdcFlow auto_cdc_flow = 3;
    AutoCdcFromSnapshotFlow auto_cdc_from_snapshot_flow = 4;
    Any extension = 999;
  }

  // An implicit flow only has the unresolved logical plan; it doesn't have a user-specified
  // flow name, as it is inferred from the dataset it writes to.
  message ImplicitlyFlow {
    optional spark.connect.Relation relation = 1;
  }

  message StandaloneFlow {
    optional spark.connect.Relation relation = 1;
    // User-specified flow name for the standalone flow.
    optional string name = 2;
    // Whether it is a one-time operation.
    optional bool once = 3;
    // Can either be Append or Update.
    optional string output_mode = 4;
  }
}
What I mean is that I want to avoid something like...
oneof details {
  StandaloneWriteRelationFlowDetails standalone_write_relation = 1;
  ImplicitWriteRelationFlowDetails implicit_write_relation = 2;
  StandaloneMergeIntoFlowDetails standalone_merge_into = 3;
  ImplicitMergeIntoFlowDetails implicit_merge_into = 4;
  ...
}
+1, LGTM from my side. Thank you, @sryza and all.
What changes were proposed in this pull request?
- In DefineFlow, pulls out the relation property into its own sub-message.

Why are the changes needed?
The DefineDataset and DefineFlow Spark Connect protos are moshpits of properties that could be refactored into a more coherent structure:
- In DefineDataset, there are a set of properties that are only relevant to tables (not views). They can be grouped together.
- In DefineFlow, the relation property refers to flows that write the results of a relation to a target table. In the future, we may want to introduce additional flow types that mutate the target table in different ways.
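For reference, a minimal sketch of the resulting DefineFlow shape, assuming this structure (the WriteRelationFlowDetails name and field number 7 come from the diff above; the nested sub-message layout and its field number are illustrative assumptions, not the exact proto in this PR):

message DefineFlow {
  ...
  optional SourceCodeLocation source_code_location = 6;

  oneof details {
    WriteRelationFlowDetails relation_flow_details = 7;
  }

  // Sub-message that now holds the relation; future flow types can add sibling
  // details messages instead of piling more fields onto DefineFlow itself.
  message WriteRelationFlowDetails {
    // Assumed field number, for illustration only.
    optional spark.connect.Relation relation = 1;
  }
}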
Does this PR introduce any user-facing change?
No, these protos haven't been shipped yet.
How was this patch tested?
Updated existing tests.
Was this patch authored or co-authored using generative AI tooling?