Add DATABRICKS_DISABLE_RETRIES config option to disable the default retry mechanism.
#523
What changes are proposed in this pull request?
This PR adds a config-level property to disable the default retry mechanism. That is, no request will be retried, regardless of its error code or status. This is implemented via a new retry strategy, NoRetryStrategy, which systematically returns false.
This PR addresses some of the limitations called out in issue #289.
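For illustration, here is a minimal sketch of what such a strategy could look like. Only the names NoRetryStrategy and DATABRICKS_DISABLE_RETRIES come from this PR; the RetryStrategy interface, the shouldRetry signature, and the factory reading the environment variable are assumptions made for the example, not the SDK's actual API.

```java
// Illustrative sketch only: the RetryStrategy interface and the factory below
// are assumptions; only NoRetryStrategy and DATABRICKS_DISABLE_RETRIES come
// from this PR.
interface RetryStrategy {
  // Decide whether the failed request should be attempted again.
  boolean shouldRetry(int attempt, Throwable lastError);
}

class NoRetryStrategy implements RetryStrategy {
  @Override
  public boolean shouldRetry(int attempt, Throwable lastError) {
    // Systematically decline to retry, regardless of error code or status.
    return false;
  }
}

class RetryStrategyFactory {
  // Hypothetical selection logic: use the no-op strategy when the
  // DATABRICKS_DISABLE_RETRIES environment variable is set to "true".
  static RetryStrategy fromEnvironment(RetryStrategy defaultStrategy) {
    String disabled = System.getenv("DATABRICKS_DISABLE_RETRIES");
    return Boolean.parseBoolean(disabled) ? new NoRetryStrategy() : defaultStrategy;
  }
}
```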
Why not provide control over the number of retries directly?
The number of retries should be considered an implementation detail of a specific retry strategy. The Databricks SDKs are meant to evolve toward a model where the retry conditions are increasingly server-controlled (e.g. by leveraging the RetryInfo error details). Rather than providing control over the number of retries, we intend to give users control over (i) the overall timeout of the method call (including retries), and (ii) the retry strategy used by a specific client; a hypothetical sketch of that surface follows below.
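To make that direction concrete, here is a purely hypothetical sketch of such a configuration surface, reusing the RetryStrategy and NoRetryStrategy types from the sketch above. None of these names exist in the SDK today; they only illustrate the two knobs described above.

```java
import java.time.Duration;

// Hypothetical configuration shape: users control the overall call timeout and
// the retry strategy per client, while the number of retries remains an
// implementation detail of the chosen strategy.
class ClientConfig {
  Duration overallTimeout = Duration.ofMinutes(5);   // (i) timeout for the whole call, retries included
  RetryStrategy retryStrategy = new NoRetryStrategy(); // (ii) strategy used by this client

  ClientConfig withOverallTimeout(Duration timeout) {
    this.overallTimeout = timeout;
    return this;
  }

  ClientConfig withRetryStrategy(RetryStrategy strategy) {
    this.retryStrategy = strategy;
    return this;
  }
}
```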
How is this tested?
Unit + Integration tests.