Commit 6eefdda

Document sql_task configuration block in databricks_job resource (#1589)
1 parent d1d468a commit 6eefdda

File tree

1 file changed: +10 −0 lines


docs/resources/job.md

Lines changed: 10 additions & 0 deletions
@@ -187,6 +187,16 @@ You can invoke Spark submit tasks only on new clusters. **In the `new_cluster` s
 
 You also need to include a `git_source` block to configure the repository that contains the dbt project.
 
+### sql_task Configuration Block
+
+One of `query`, `dashboard`, or `alert` needs to be provided.
+
+* `warehouse_id` - (Required) ID of the SQL warehouse (the [databricks_sql_endpoint](sql_endpoint.md)) that will be used to execute the task. Only serverless warehouses are supported right now.
+* `parameters` - (Optional) (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
+* `query` - (Optional) block consisting of a single string field: `query_id` - the identifier of the Databricks SQL Query ([databricks_sql_query](sql_query.md)).
+* `dashboard` - (Optional) block consisting of a single string field: `dashboard_id` - the identifier of the Databricks SQL Dashboard ([databricks_sql_dashboard](sql_dashboard.md)).
+* `alert` - (Optional) block consisting of a single string field: `alert_id` - the identifier of the Databricks SQL Alert.
+
 ### email_notifications Configuration Block
 
 * `on_failure` - (Optional) (List) list of emails to notify on failure
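For illustration, here is a minimal sketch of how the newly documented block could be wired into a `databricks_job` resource. This example is not part of the commit: the resource names (`databricks_sql_endpoint.this`, `databricks_sql_query.agg`), the task key, and the parameter values are hypothetical placeholders.

```hcl
# Sketch only: all names and IDs below are hypothetical placeholders.
resource "databricks_job" "sql_example" {
  name = "sql-task-example"

  task {
    task_key = "run_query"

    sql_task {
      # Required: the SQL warehouse that executes the task
      # (only serverless warehouses are supported right now).
      warehouse_id = databricks_sql_endpoint.this.id

      # One of query, dashboard, or alert needs to be provided.
      query {
        query_id = databricks_sql_query.agg.id
      }

      # Optional per-run parameters (not supported for SQL alert tasks).
      parameters = {
        environment = "staging"
      }
    }
  }
}
```

To refresh a dashboard or evaluate an alert instead, swap the `query` block for a `dashboard` or `alert` block with the corresponding `dashboard_id` or `alert_id`.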
