
Commit 7465b57

Document task_retry_mode (#4993)
## Changes

Documents the `continuous.task_retry_mode` attribute of the `databricks_job` resource.

## Tests

- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] using Go SDK
- [ ] using TF Plugin Framework
- [x] has entry in `NEXT_CHANGELOG.md` file

Co-authored-by: Alex Ott <[email protected]>
1 parent 21ef72f commit 7465b57

2 files changed: +6 −0 lines changed


NEXT_CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@
 ### Bug Fixes

 ### Documentation
+* Document `continuous.task_retry_mode` in `databricks_job` ([#4993](https://github.com/databricks/terraform-provider-databricks/pull/4993))
+

 ### Exporter

docs/resources/job.md

Lines changed: 4 additions & 0 deletions
@@ -416,6 +416,10 @@ resource "databricks_job" "this" {

 * `pause_status` - (Optional) Indicates whether this continuous job is paused or not. Either `PAUSED` or `UNPAUSED`. When the `pause_status` field is omitted in the block, the server will default to using `UNPAUSED` as a value for `pause_status`.

+* `task_retry_mode` - (Optional) Controls task-level retry behaviour. Allowed values are:
+  * `NEVER` (default): The failed task will not be retried.
+  * `ON_FAILURE`: Retry a failed task if at least one other task in the job is still running its first attempt. When this condition is no longer met or the retry limit is reached, the job run is cancelled and a new run is started.
+
 ### queue Configuration Block

 This block describes the queue settings of the job:
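
For illustration, a minimal sketch of how the newly documented attribute could appear in a job configuration. The job name, task, and notebook path below are placeholders and are not part of this commit; only `continuous.pause_status` and `continuous.task_retry_mode` come from the documented schema:

```hcl
resource "databricks_job" "this" {
  name = "continuous-ingest" # illustrative name, not from the diff

  task {
    task_key = "ingest" # hypothetical task; cluster/environment settings omitted for brevity

    notebook_task {
      notebook_path = "/Workspace/Shared/ingest" # placeholder path
    }
  }

  continuous {
    # Omitting pause_status defaults to UNPAUSED, per the docs above.
    pause_status = "UNPAUSED"

    # Retry a failed task while at least one other task is still on its
    # first attempt; otherwise the run is cancelled and a new run starts.
    task_retry_mode = "ON_FAILURE"
  }
}
```

Leaving `task_retry_mode` unset keeps the default `NEVER` behaviour described in the added documentation.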

0 commit comments
