        Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline.

        Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown.
-     "x-databricks-preview": |-
-       PRIVATE
    "schema":
      "description": |-
        The default schema (database) where tables are read from or published to.
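For orientation, here is a minimal sketch of how these two settings might appear together in a pipeline create/update payload. The pipeline name, schema, and service principal ID are placeholders, and only one of `user_name` or `service_principal_name` may be set.

```yaml
# Hypothetical pipeline create/update payload (illustrative values only)
name: sales_ingest
schema: sales_reporting              # default schema for reads and publishes
run_as:
  service_principal_name: "6b59a3a0-0000-0000-0000-000000000000"
  # user_name: someone@example.com   # mutually exclusive with service_principal_name
```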
        An option to disable auto optimization in serverless
+   "disabled":
+     "description": |-
+       An optional flag to disable the task. If set to true, the task will not run even if it is part of a job.
+     "x-databricks-preview": |-
+       PRIVATE
    "email_notifications":
      "description": |-
        An optional set of email addresses that is notified when runs of this task begin or complete as well as when this task is deleted. The default behavior is to not send any emails.
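A minimal sketch of a job task using these two fields, assuming the preview `disabled` flag is exposed on a task exactly as described above; the task key, notebook path, and email addresses are placeholders.

```yaml
# Hypothetical task entry in a job settings payload (illustrative values only)
tasks:
  - task_key: nightly_refresh
    disabled: true                    # preview flag: the task is skipped even though it is part of the job
    notebook_task:
      notebook_path: /Workspace/Shared/nightly_refresh
    email_notifications:
      on_start: []                    # default behavior: no emails
      on_success:
        - data-team@example.com
      on_failure:
        - oncall@example.com
```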
        The task runs a Python file when the `spark_python_task` field is present.
    "spark_submit_task":
      "description": |-
-       (Legacy) The task runs the spark-submit script when the `spark_submit_task` field is present. This task can run only on new clusters and is not compatible with serverless compute.
-
-       In the `new_cluster` specification, `libraries` and `spark_conf` are not supported. Instead, use `--jars` and `--py-files` to add Java and Python libraries and `--conf` to set the Spark configurations.
-
-       `master`, `deploy-mode`, and `executor-cores` are automatically configured by Databricks; you _cannot_ specify them in parameters.
-
-       By default, the Spark submit job uses all available memory (excluding reserved memory for Databricks services). You can set `--driver-memory`, and `--executor-memory` to a smaller value to leave some room for off-heap usage.
-
-       The `--jars`, `--py-files`, `--files` arguments support DBFS and S3 paths.
+       (Legacy) The task runs the spark-submit script when the spark_submit_task field is present. Databricks recommends using the spark_jar_task instead; see [Spark Submit task for jobs](/jobs/spark-submit).
+     "deprecation_message": |-
+       This field is deprecated
    "sql_task":
      "description": |-
        The task runs a SQL query or file, or it refreshes a SQL alert or a legacy SQL dashboard when the `sql_task` field is present.
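To illustrate the recommended migration, the sketch below contrasts a legacy `spark_submit_task` with a roughly equivalent `spark_jar_task`; the class name, JAR path, and cluster settings are placeholders, not values taken from this change.

```yaml
# Legacy form (deprecated per the description above)
- task_key: legacy_submit
  spark_submit_task:
    parameters:
      - "--class"
      - "com.example.Main"
      - "dbfs:/jars/app-assembly.jar"
  new_cluster:
    spark_version: "15.4.x-scala2.12"
    num_workers: 2

# Recommended replacement: attach the JAR as a library and name the main class
- task_key: jar_main
  spark_jar_task:
    main_class_name: com.example.Main
    parameters: ["--date", "2024-01-01"]
  libraries:
    - jar: "dbfs:/jars/app-assembly.jar"
```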
        Immutable. Identifier for the gateway that is used by this ingestion pipeline to communicate with the source database. This is used with connectors to databases like SQL Server.
+   "netsuite_jar_path":
+     "description": |-
+       Netsuite-only configuration. When the field is set for a Netsuite connector,
+       the jar stored in the field will be validated and added to the classpath of
+       the pipeline's cluster.
+     "x-databricks-preview": |-
+       PRIVATE
    "objects":
      "description": |-
        Required. Settings specifying tables to replicate and the destination for the replicated tables.
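As a rough sketch of how the gateway reference and the `objects` list fit together, assuming the shape of the public ingestion pipeline definition; the gateway ID, catalog, schema, and table names are placeholders.

```yaml
# Hypothetical ingestion_definition fragment (illustrative values only)
ingestion_definition:
  ingestion_gateway_id: "01ef0000-example-gateway"   # immutable gateway reference described above
  objects:                                           # required: tables to replicate and their destinations
    - table:
        source_schema: dbo
        source_table: orders
        destination_catalog: main
        destination_schema: replicated_sales
```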
        The column names specifying the logical order of events in the source data. Delta Live Tables uses this sequencing to handle change events that arrive out of order.
+   "workday_report_parameters":
+     "description": |-
+       (Optional) Additional custom parameters for Workday Report
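A small sketch of a per-table configuration using the sequencing columns described above, assuming they map to a `sequence_by` setting; the column names are placeholders, and the shape of `workday_report_parameters` is not shown because this change only adds its description.

```yaml
# Hypothetical table_configuration fragment (illustrative column names)
table_configuration:
  scd_type: SCD_TYPE_1
  sequence_by:              # logical ordering used to resolve out-of-order change events
    - event_timestamp
    - change_lsn
```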