* Minor fixes for job documentation
* Also fix the library link
* Add back the misplaced migration guide

---------

Co-authored-by: Tanmay Rustagi <[email protected]>
docs/resources/job.md (+23 −21)
@@ -82,7 +82,10 @@ The resource supports the following arguments:
* `description` - (Optional) An optional description for the job. The maximum length is 1024 characters in UTF-8 encoding.
* `task` - (Optional) A list of task specifications that the job will execute. See [task Configuration Block](#task-configuration-block) below.
* `job_cluster` - (Optional) A list of job [databricks_cluster](cluster.md) specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. *Multi-task syntax*
+* `schedule` - (Optional) (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to `runNow`. This field is a block and is [documented below](#schedule-configuration-block); see the example after this list.
* `trigger` - (Optional) The conditions that trigger the job to start. See [trigger Configuration Block](#trigger-configuration-block) below.
+* `continuous` - (Optional) Configuration block to configure pause status. See [continuous Configuration Block](#continuous-configuration-block).
+* `queue` - (Optional) The queue status for the job. See [queue Configuration Block](#queue-configuration-block) below.
* `always_running` - (Optional, Deprecated) (Bool) Whether the job should always be running, like a Spark Streaming application: on every update, restart the current active run or start it again if it is not running. False by default. Any job runs are started with `parameters` specified in `spark_jar_task` or `spark_submit_task` or `spark_python_task` or `notebook_task` blocks.
* `run_as` - (Optional) The user or the service principal the job runs as. See [run_as Configuration Block](#run_as-configuration-block) below.
* `control_run_state` - (Optional) (Bool) If true, the Databricks provider will stop and start the job as needed to ensure that the active run for the job reflects the deployed configuration. For continuous jobs, the provider respects the `pause_status` by stopping the current active run. This flag cannot be set for non-continuous jobs.
@@ -93,16 +96,16 @@ The resource supports the following arguments:
continuous { }
```

-* `library` - (Optional) (List) An optional list of libraries to be installed on the cluster that will execute the job. Please consult the [libraries section of the databricks_cluster](cluster.md#library-configuration-block) resource for more information.
+* `library` - (Optional) (List) An optional list of libraries to be installed on the cluster that will execute the job. See [library Configuration Block](#library-configuration-block) below.
* `git_source` - (Optional) Specifies a Git repository for task source code. See [git_source Configuration Block](#git_source-configuration-block) below.
+* `parameter` - (Optional) Specifies job parameters for the job. See [parameter Configuration Block](#parameter-configuration-block) below.
* `timeout_seconds` - (Optional) (Integer) An optional timeout applied to each run of this job. The default behavior is to have no timeout.
* `min_retry_interval_millis` - (Optional) (Integer) An optional minimal interval in milliseconds between the start of the failed run and the subsequent retry run. The default behavior is that unsuccessful runs are immediately retried.
* `max_concurrent_runs` - (Optional) (Integer) An optional maximum allowed number of concurrent runs of the job. Defaults to *1*.
* `email_notifications` - (Optional) (List) An optional set of email addresses notified when runs of this job begin, complete, or fail. The default behavior is to not send any emails. This field is a block and is [documented below](#email_notifications-configuration-block).
-* `webhook_notifications` - (Optional) (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is documented below.
+* `webhook_notifications` - (Optional) (List) An optional set of system destinations (for example, webhook destinations or Slack) to be notified when runs of this job begin, complete, or fail. The default behavior is to not send any notifications. This field is a block and is [documented below](#webhook_notifications-configuration-block).
* `notification_settings` - (Optional) An optional block controlling the notification settings on the job level, [documented below](#notification_settings-configuration-block).
-* `schedule` - (Optional) (List) An optional periodic schedule for this job. The default behavior is that the job runs when triggered by clicking Run Now in the Jobs UI or sending an API request to `runNow`. This field is a block and is documented below.
-* `health` - (Optional) An optional block that specifies the health conditions for the job (described below).
+* `health` - (Optional) An optional block that specifies the health conditions for the job, [documented below](#health-configuration-block).
* `tags` - (Optional) An optional map of the tags associated with the job. See [tags Configuration Map](#tags-configuration-map).
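
To tie several of these arguments together, here is a minimal illustrative sketch; the resource name, cron expression, parameter values, and email address are placeholders rather than values prescribed by the provider:

```hcl
resource "databricks_job" "nightly" {
  name = "nightly-job"

  # Run every day at 02:00 UTC (Quartz cron syntax).
  schedule {
    quartz_cron_expression = "0 0 2 * * ?"
    timezone_id            = "UTC"
  }

  # Queue new runs instead of skipping them while a run is active.
  queue {
    enabled = true
  }

  # A job-level parameter that tasks can reference.
  parameter {
    name    = "environment"
    default = "dev"
  }

  # Send an email when a run fails (placeholder address).
  email_notifications {
    on_failure = ["[email protected]"]
  }
}
```

With `queue` enabled, runs that hit the concurrency limit are queued rather than skipped.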

### task Configuration Block
@@ -294,22 +297,6 @@ This block describes upstream dependencies of a given task. For multiple upstrea

-> **Note** Similar to the tasks themselves, each dependency inside the task needs to be declared in alphabetical order with respect to `task_key` in order to get consistent Terraform diffs.

-### tags Configuration Map
-
-`tags` - (Optional) (Map) An optional map of the tags associated with the job. Specified tags will be used as cluster tags for job clusters.
-
-Example
-
-```hcl
-resource "databricks_job" "this" {
-  # ...
-  tags = {
-    environment = "dev"
-    owner       = "dream-team"
-  }
-}
-```
-
### run_as Configuration Block
The `run_as` block allows specifying the user or the service principal that the job runs as. If not specified, the job runs as the user or service
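
For illustration, a minimal sketch of running a job as a service principal; `service_principal_name` takes the service principal's application ID, and the UUID below is a placeholder:

```hcl
resource "databricks_job" "this" {
  # ...
  run_as {
    # Application ID of the service principal (placeholder value).
    service_principal_name = "00000000-0000-0000-0000-000000000000"
  }
}
```

Use `user_name` instead to run the job as a workspace user.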
@@ -443,6 +430,21 @@ This block describes health conditions for a given job or an individual task. It
* `op` - (Optional) string specifying the operation used to evaluate the given metric. The only supported operation is `GREATER_THAN`.
* `value` - (Optional) integer value used to compare to the given metric.
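
As a sketch of the shape of this block, a rule that flags runs longer than an hour; `RUN_DURATION_SECONDS` is an assumed metric name, so check the `metric` attribute for the supported values:

```hcl
resource "databricks_job" "this" {
  # ...
  health {
    rules {
      # Assumed metric name; flag runs that exceed one hour.
      metric = "RUN_DURATION_SECONDS"
      op     = "GREATER_THAN"
      value  = 3600
    }
  }
}
```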

+### tags Configuration Map
+
+`tags` - (Optional) (Map) An optional map of the tags associated with the job. Specified tags will be used as cluster tags for job clusters.
+
+Example
+
+```hcl
+resource "databricks_job" "this" {
+  # ...
+  tags = {
+    environment = "dev"
+    owner       = "dream-team"
+  }
+}
+```

## Attribute Reference
@@ -542,7 +544,7 @@ The following resources are often used in the same context:
* [databricks_global_init_script](global_init_script.md) to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all [databricks_cluster](cluster.md#init_scripts) and [databricks_job](job.md#new_cluster).
* [databricks_instance_pool](instance_pool.md) to manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce [cluster](cluster.md) start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
* [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles that let users launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md).
-* [databricks_jobs] data to get all jobs and their names from a workspace.
+* [databricks_jobs](../data-sources/jobs.md) data to get all jobs and their names from a workspace.
* [databricks_library](library.md) to install a [library](https://docs.databricks.com/libraries/index.html) on [databricks_cluster](cluster.md).
* [databricks_node_type](../data-sources/node_type.md) data to get the smallest node type for [databricks_cluster](cluster.md) that fits search criteria, like amount of RAM or number of cores.
* [databricks_notebook](notebook.md) to manage [Databricks Notebooks](https://docs.databricks.com/notebooks/index.html).