### Adding the Expression in Scala
DataFusion Comet uses a framework based on the `CometExpressionSerde` trait for converting Spark expressions to protobuf. Instead of a large match statement, each expression type has its own serialization handler. For aggregate expressions, use the `CometAggregateExpressionSerde` trait instead.
#### Creating a CometExpressionSerde Implementation
First, create an object that extends `CometExpressionSerde[T]` where `T` is the Spark expression type. This is typically added to one of the serde files in `spark/src/main/scala/org/apache/comet/serde/` (e.g., `math.scala`, `strings.scala`, `arrays.scala`, etc.).
For example, a handler for the `unhex` function looks roughly like this (a simplified sketch; the object name and helper signatures may differ slightly from the current code, and the real handler also deals with the version-dependent `failOnError` flag):
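```scala
object CometUnhex extends CometExpressionSerde[Unhex] {
  override def convert(expr: Unhex, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {
    // Convert the child expression to protobuf (None if it is unsupported).
    val childExpr = exprToProtoInternal(expr.child, inputs, binding)
    // Map to the native "unhex" scalar function, passing the expected return type.
    scalarFunctionExprToProtoWithReturnType("unhex", expr.dataType, childExpr)
  }
}
```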
The `CometExpressionSerde` trait provides three methods you can override:
* `convert(expr: T, inputs: Seq[Attribute], binding: Boolean): Option[Expr]` - **Required**. Converts the Spark expression to protobuf. Return `None` if the expression cannot be converted.
* `getSupportLevel(expr: T): SupportLevel` - Optional. Returns the level of support for the expression. See the "Using getSupportLevel" section below for details.
* `getExprConfigName(expr: T): String` - Optional. Returns a short name for configuration keys. Defaults to the Spark class name.
For simple scalar functions that map directly to a DataFusion function, you can use the built-in `CometScalarFunction` implementation:
```scala
classOf[Cos] -> CometScalarFunction("cos")
```
#### Registering the Expression Handler
Once you've created your `CometExpressionSerde` implementation, register it in `QueryPlanSerde.scala` by adding it to the appropriate expression map (e.g., `mathExpressions`, `stringExpressions`, `predicateExpressions`, etc.):
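For instance, a scalar math function might be registered like this (a sketch; the exact map declaration and neighboring entries are illustrative):

```scala
val mathExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(
  // ... existing entries ...
  classOf[Cos] -> CometScalarFunction("cos"),
  classOf[Unhex] -> CometUnhex)
```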
The `exprToProtoInternal` method will automatically use this mapping to find and invoke your handler when it encounters the corresponding Spark expression type.
A few things to note:
* Within `convert`, call `exprToProtoInternal` recursively on child expressions so that they are also converted to protobuf.
* `scalarFunctionExprToProtoWithReturnType` is for scalar functions that need return type information. Your expression may use a different method depending on the type of expression.
* Use helper methods like `createBinaryExpr` and `createUnaryExpr` from `QueryPlanSerde` for common expression patterns.
#### Using getSupportLevel
The `getSupportLevel` method allows you to control whether an expression should be executed by Comet based on various conditions such as data types, parameter values, or other expression-specific constraints. This is particularly useful when:
1. Your expression only supports specific data types
2. Your expression has known incompatibilities with Spark's behavior
3. Your expression has edge cases that aren't yet supported
The method returns one of three `SupportLevel` values:
* **`Compatible(notes: Option[String] = None)`** - Comet supports this expression with full compatibility with Spark, or may have known differences in specific edge cases that are unlikely to be an issue for most users. This is the default if you don't override `getSupportLevel`.
* **`Incompatible(notes: Option[String] = None)`** - Comet supports this expression but results can be different from Spark. The expression will only be used if `spark.comet.expr.allowIncompatible=true` or the expression-specific config `spark.comet.expr.<exprName>.allowIncompatible=true` is set.
* **`Unsupported(notes: Option[String] = None)`** - Comet does not support this expression under the current conditions. The expression will not be used and execution will fall back to Spark.
All three support levels accept an optional `notes` parameter to provide additional context about the support level.
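For example, a handler for a hypothetical `MyFunc` expression (not part of Comet; shown only for illustration) that is known to diverge from Spark could declare itself `Incompatible`:

```scala
object CometMyFunc extends CometExpressionSerde[MyFunc] {
  // Comet can run this expression, but results are known to differ from Spark in some cases.
  override def getSupportLevel(expr: MyFunc): SupportLevel =
    Incompatible(Some("Results can differ from Spark for some inputs"))

  override def convert(expr: MyFunc, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {
    // Conversion logic omitted from this sketch.
    None
  }
}
```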
This expression will only be used when users explicitly enable incompatible expressions via configuration.
##### How getSupportLevel Affects Execution
When the query planner encounters an expression:
1. It first checks if the expression is explicitly disabled via `spark.comet.expr.<exprName>.enabled=false`
2. It then calls `getSupportLevel` on the expression handler
3. Based on the result:
   - `Compatible()`: Expression proceeds to conversion
   - `Incompatible()`: Expression is skipped unless `spark.comet.expr.allowIncompatible=true` or the expression-specific allow config is set
   - `Unsupported()`: Expression is skipped and a fallback to Spark occurs
Any notes provided will be logged to help with debugging and understanding why an expression was not used.
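For example, to opt in to an expression that reports `Incompatible` (using a hypothetical config name `MyFunc`, which by default comes from the Spark class name), set one of the following:

```scala
// Allow all expressions that Comet marks as Incompatible
spark.conf.set("spark.comet.expr.allowIncompatible", "true")

// Or allow only this specific expression
spark.conf.set("spark.comet.expr.MyFunc.allowIncompatible", "true")
```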
#### Adding Spark-side Tests for the New Expression
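Expression tests live in the Spark-side test suites (for example `CometExpressionSuite`) and typically create a small Parquet-backed table, then verify that Comet returns the same results as Spark while the query runs on the native operator. A sketch in the style of the existing `unhex` test (exact table setup and helper names may vary):

```scala
test("unhex") {
  val table = "unhex_test"
  withTable(table) {
    sql(s"CREATE TABLE $table (col STRING) USING PARQUET")
    sql(s"INSERT INTO $table VALUES ('537061726B2053514C'), ('invalid'), (NULL)")
    // Compares results against Spark and checks that the Comet operator was used.
    checkSparkAnswerAndOperator(s"SELECT unhex(col) FROM $table")
  }
}
```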
### Adding the Expression To the Protobuf Definition
Once you have the expression implemented in Scala, you might need to update the protobuf definition to include the new expression. You may not need to do this if the expression is already covered by the existing protobuf definition (e.g. you're adding a new scalar function that uses the `ScalarFunc` message).
You can find the protobuf definition in `native/proto/src/proto/expr.proto`, and in particular the `Expr` or potentially the `AggExpr` messages. If you were to add a new expression called `Add2`, you would add a new case to the `Expr` message like so:
```proto
message Expr {
  oneof expr_struct {
    // ... existing expressions ...
    Add2 add2 = 100; // use the next unused field number
  }
}
```
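Since the hypothetical `Add2` expression takes two inputs, you would also define a message for it (the field names below are illustrative):

```proto
message Add2 {
  Expr left = 1;
  Expr right = 2;
}
```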
With the serialization complete, the next step is to implement the expression in Rust and ensure that the incoming plan can make use of it.
How this works is somewhat dependent on the type of expression you're adding. Expression implementations live in the `native/spark-expr/src/` directory, organized by category (e.g., `math_funcs/`, `string_funcs/`, `array_funcs/`).
#### Generally Adding a New Expression
If you're adding a new expression that requires custom protobuf serialization, you may need to:
1. Add a new message to the protobuf definition in `native/proto/src/proto/expr.proto`
2. Update the Rust deserialization code to handle the new protobuf message type
In the Rust deserialization code, `self.create_binary_expr` handles common binary expressions, but if something more custom is needed you can create a new `PhysicalExpr` implementation. For example, see `if_expr.rs` for an implementation that doesn't fit the `create_binary_expr` mold.
For most expressions, you can skip this step if you're using the existing scalar function infrastructure.
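If you do add a custom message, the deserialization side is a new match arm in the planner's `create_expr` method. A hypothetical sketch for `Add2` (the variant name follows the protobuf `oneof`, while `Add2Expr` and the exact planner signatures are illustrative):

```rust
ExprStruct::Add2(add2) => {
    // Recursively build the child physical expressions from the protobuf children.
    let left = self.create_expr(add2.left.as_ref().unwrap(), input_schema.clone())?;
    let right = self.create_expr(add2.right.as_ref().unwrap(), input_schema)?;
    // Wrap them in a native PhysicalExpr implementation (illustrative name).
    Ok(Arc::new(Add2Expr::new(left, right)))
}
```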
#### Adding a New Scalar Function Expression
For a new scalar function, you can reuse a lot of code by updating the `create_comet_physical_fun` method in `native/spark-expr/src/comet_scalar_funcs.rs`. Add a match case for your function name:
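```rust
"unhex" => {
    // Wrap the Rust implementation in a scalar UDF; `spark_unhex` is the function you implement below.
    let func = Arc::new(spark_unhex);
    make_comet_scalar_udf!("unhex", func, without data_type)
}
```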
With that addition, you can now implement the Spark function in Rust. This function will look very similar to DataFusion code. For examples, see the existing functions under `native/spark-expr/src/`.
The `make_comet_scalar_udf!` macro has several variants depending on whether your function needs:
- A data type parameter: `make_comet_scalar_udf!("ceil", spark_ceil, data_type)`
- No data type parameter: `make_comet_scalar_udf!("unhex", func, without data_type)`
- An eval mode: `make_comet_scalar_udf!("decimal_div", spark_decimal_div, data_type, eval_mode)`
- A fail_on_error flag: `make_comet_scalar_udf!("spark_modulo", func, without data_type, fail_on_error)`
#### Implementing the Function
Then implement your function in an appropriate module under `native/spark-expr/src/`. The function signature will look like:
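```rust
pub fn spark_unhex(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {
    // Implementation
}
```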
> **_NOTE:_** If you call the `make_comet_scalar_udf` macro with the data type, the function signature will include the data type as a second argument.
If your function uses the data type or eval mode, the signature will include those as additional parameters:
```rust
pub fn spark_ceil(
    args: &[ColumnarValue],
    data_type: &DataType
) -> Result<ColumnarValue, DataFusionError> {
    // Implementation
}
```
### API Differences Between Spark Versions
## Shimming to Support Different Spark Versions
If the expression you're adding has different behavior across different Spark versions, you can use the shim system located in `spark/src/main/spark-$SPARK_VERSION/org/apache/comet/shims/CometExprShim.scala` for each Spark version.
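For example, the `unhex` expression has version-dependent behavior, so each version's `CometExprShim` exposes a helper with a consistent signature that the serde code can call. A sketch based on the `unhex` shim (the exact body differs per Spark version):

```scala
trait CometExprShim {
  /**
   * Returns a tuple of expressions for the `unhex` function.
   */
  def unhexSerde(unhex: Unhex): (Expression, Expression) =
    (unhex.child, Literal(unhex.failOnError))
}
```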
Then when `unhexSerde` is called in the `QueryPlanSerde` object, it will use the correct shim for the Spark version.
The `QueryPlanSerde.exprToProtoInternal` method calls `versionSpecificExprToProtoInternal` first, allowing shims to intercept and handle version-specific expressions before falling back to the standard expression maps.
0 commit comments