docs/resources/job.md (10 additions & 0 deletions)
@@ -187,6 +187,16 @@ You can invoke Spark submit tasks only on new clusters. **In the `new_cluster` s
You also need to include a `git_source` block to configure the repository that contains the dbt project.
### sql_task Configuration Block

One of `query`, `dashboard`, or `alert` needs to be provided (see the example below).

* `warehouse_id` - (Required) ID of the [databricks_sql_endpoint](sql_endpoint.md) that will be used to execute the task. Only serverless warehouses are supported right now.
* `parameters` - (Optional) (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
* `query` - (Optional) block consisting of a single string field `query_id` - identifier of the Databricks SQL Query ([databricks_sql_query](sql_query.md)).
* `dashboard` - (Optional) block consisting of a single string field `dashboard_id` - identifier of the Databricks SQL Dashboard ([databricks_sql_dashboard](sql_dashboard.md)).
* `alert` - (Optional) block consisting of a single string field `alert_id` - identifier of the Databricks SQL Alert.
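To make the shape of these arguments concrete, here is a minimal sketch of a job with a single `sql_task` that runs an existing query on a serverless warehouse. The resource names (`databricks_sql_endpoint.example`, `databricks_sql_query.example`), the job and task names, and the parameter values are illustrative assumptions, not values from this page:

```hcl
resource "databricks_job" "sql_example" {
  name = "Example SQL task job"

  task {
    task_key = "run_example_query"

    sql_task {
      # Serverless warehouse that executes the task.
      warehouse_id = databricks_sql_endpoint.example.id

      # Exactly one of `query`, `dashboard`, or `alert` goes here.
      query {
        query_id = databricks_sql_query.example.id
      }

      # Optional map of parameters applied to each run
      # (not supported for alert tasks).
      parameters = {
        run_date = "2022-06-01"
      }
    }
  }
}
```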
### email_notifications Configuration Block
* `on_failure` - (Optional) (List) list of emails to notify on failure
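As a minimal sketch of this block inside a job (the job name and address are placeholders, and only the single field documented above is shown):

```hcl
resource "databricks_job" "notified_example" {
  name = "Job with failure notifications"

  email_notifications {
    # Placeholder address; notified whenever a run fails.
    on_failure = ["data-eng-oncall@example.com"]
  }
}
```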