`docs/source/user-guide/latest/configs.md`: 2 additions & 1 deletion
```diff
@@ -67,7 +67,7 @@ Comet provides the following configuration settings.
 |`spark.comet.exceptionOnDatetimeRebase`| Whether to throw exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps were written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing these dates/timestamps that were written by Spark version before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and will not be rebased. | false |
 |`spark.comet.exec.enabled`| Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them in native space. Note: each operator is associated with a separate config in the format of `spark.comet.exec.<operator_name>.enabled` at the moment, and both the config and this need to be turned on, in order for the operator to be executed in native. | true |
 |`spark.comet.exec.replaceSortMergeJoin`| Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin for improved performance. This feature is not stable yet. For more information, refer to the [Comet Tuning Guide](https://datafusion.apache.org/comet/user-guide/tuning.html). | false |
-|`spark.comet.exec.strictFloatingPoint`| When enabled, fall back to Spark for floating-point operations that differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html).| false |
+|`spark.comet.exec.strictFloatingPoint`| When enabled, fall back to Spark for floating-point operations that may differ from Spark, such as when comparing or sorting -0.0 and 0.0. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
 |`spark.comet.expression.allowIncompatible`| Comet is not currently fully compatible with Spark for all expressions. Set this config to true to allow them anyway. For more information, refer to the [Comet Compatibility Guide](https://datafusion.apache.org/comet/user-guide/compatibility.html). | false |
 |`spark.comet.maxTempDirectorySize`| The maximum amount of data (in bytes) stored inside the temporary directories. | 107374182400b |
 |`spark.comet.metrics.updateInterval`| The interval in milliseconds to update metrics. If interval is negative, metrics will be updated upon task completion. | 3000 |
```
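The settings in the hunk above are ordinary Spark configuration properties, so they can be supplied at submit time with `--conf`. A minimal sketch (not part of this diff; the jar name is a placeholder, and the values shown simply mirror the defaults documented above):

```shell
# Sketch only: enable Comet native execution and strict floating-point
# fallback for a Spark job. "your-app.jar" is a hypothetical placeholder.
spark-submit \
  --conf spark.comet.exec.enabled=true \
  --conf spark.comet.exec.strictFloatingPoint=true \
  --conf spark.comet.exec.replaceSortMergeJoin=false \
  your-app.jar
```

Note that per the `spark.comet.exec.enabled` description, an operator runs natively only when both this umbrella config and the operator's own `spark.comet.exec.<operator_name>.enabled` config are on.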
```diff
@@ -225,6 +225,7 @@ These settings can be used to determine which parts of the plan are accelerated
 |`spark.comet.expression.ConcatWs.enabled`| Enable Comet acceleration for `ConcatWs`| true |
 |`spark.comet.expression.Contains.enabled`| Enable Comet acceleration for `Contains`| true |
 |`spark.comet.expression.Cos.enabled`| Enable Comet acceleration for `Cos`| true |
+|`spark.comet.expression.Cot.enabled`| Enable Comet acceleration for `Cot`| true |
 |`spark.comet.expression.CreateArray.enabled`| Enable Comet acceleration for `CreateArray`| true |
 |`spark.comet.expression.CreateNamedStruct.enabled`| Enable Comet acceleration for `CreateNamedStruct`| true |
 |`spark.comet.expression.DateAdd.enabled`| Enable Comet acceleration for `DateAdd`| true |
```
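These per-expression flags all follow the pattern `spark.comet.expression.<ExprName>.enabled` shown in the table, so an individual expression can be pushed back to Spark without disabling Comet overall. A hedged sketch (not from this diff; the jar name is a placeholder):

```shell
# Sketch only: keep Comet enabled but fall back to Spark for the Cot
# expression added in this change. "your-app.jar" is hypothetical.
spark-submit \
  --conf spark.comet.exec.enabled=true \
  --conf spark.comet.expression.Cot.enabled=false \
  your-app.jar
```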