---
layout: blog
title: "Kubernetes v1.33：Job 逐索引的回退限制进阶至 GA"
date: 2025-05-13T10:30:00-08:00
slug: kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga
author: >
  [Michał Woźniak](https://github.com/mimowo) (Google)
translator: >
  [Michael Yao](https://github.com/windsonsea) (DaoCloud)
---
<!--
layout: blog
title: "Kubernetes v1.33: Job's Backoff Limit Per Index Goes GA"
date: 2025-05-13T10:30:00-08:00
slug: kubernetes-v1-33-jobs-backoff-limit-per-index-goes-ga
author: >
  [Michał Woźniak](https://github.com/mimowo) (Google)
-->

<!--
In Kubernetes v1.33, the _Backoff Limit Per Index_ feature reaches general
availability (GA). This blog describes the Backoff Limit Per Index feature and
its benefits.
-->
在 Kubernetes v1.33 中，**逐索引的回退限制**特性进阶至 GA（正式发布）。本文介绍此特性及其优势。

<!--
## About backoff limit per index

When you run workloads on Kubernetes, you must consider scenarios where Pod
failures can affect the completion of your workloads. Ideally, your workload
should tolerate transient failures and continue running.

To achieve failure tolerance in a Kubernetes Job, you can set the
`spec.backoffLimit` field. This field specifies the total number of tolerated
failures.
-->
## 关于逐索引的回退限制 {#about-backoff-limit-per-index}

当你在 Kubernetes 上运行工作负载时，必须考虑 Pod 失效可能影响工作负载完成的场景。
理想情况下，你的工作负载应该能够容忍短暂的失效并继续运行。

为了在 Kubernetes Job 中容忍失效，你可以设置 `spec.backoffLimit` 字段。
此字段指定容忍的失效总数。

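例如，下面是一个最小的示意片段（数值仅为举例，并非本文原有示例）：该 Job 的所有 Pod 失效共享同一个容忍预算。

```yaml
# 仅为示意：整个 Job 总共最多容忍 3 次 Pod 失效，超过后 Job 失败
completions: 5
parallelism: 5
backoffLimit: 3
```
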
<!--
However, for workloads where every index is considered independent, like
[embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel)
workloads - the `spec.backoffLimit` field is often not flexible enough.
For example, you may choose to run multiple suites of integration tests by
representing each suite as an index within an [Indexed Job](/docs/tasks/job/indexed-parallel-processing-static/).
In that setup, a fast-failing index (test suite) is likely to consume your
entire budget for tolerating Pod failures, and you might not be able to run the
other indexes.
-->
但是，对于每个索引都被视为独立单元的工作负载，
比如[过易并行](https://zh.wikipedia.org/zh-cn/%E8%BF%87%E6%98%93%E5%B9%B6%E8%A1%8C)的工作负载，
`spec.backoffLimit` 字段通常不够灵活。例如，你可以选择运行多个集成测试套件，
将每个套件作为[带索引的 Job](/zh-cn/docs/tasks/job/indexed-parallel-processing-static/)内的某个索引。
在这种情况下，快速失效的索引（测试套件）很可能消耗你全部的 Pod 失效容忍预算，导致你无法运行其他索引的 Pod。

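下面是一个示意性的片段（数值与场景均为假设），用于说明这种限制：4 个测试套件分别对应带索引 Job 的 4 个索引，但它们共享同一个 `spec.backoffLimit` 预算：

```yaml
# 仅为示意：所有索引共享同一个失效容忍预算
completions: 4
parallelism: 4
completionMode: Indexed
backoffLimit: 6   # 某个快速失效的索引可能独自耗尽这 6 次容忍额度
```
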
<!--
In order to address this limitation, Kubernetes introduced _backoff limit per index_,
which allows you to control the number of retries per index.

## How backoff limit per index works

To use Backoff Limit Per Index for Indexed Jobs, specify the number of tolerated
Pod failures per index with the `spec.backoffLimitPerIndex` field. When you set
this field, the Job executes all indexes by default.
-->
为了解决这一限制，Kubernetes 引入了**逐索引的回退限制**，允许你控制每个索引的重试次数。

## 逐索引回退限制的工作原理 {#how-backoff-limit-per-index-works}

要在带索引的 Job 中使用逐索引的回退限制，可以通过 `spec.backoffLimitPerIndex`
字段指定每个索引允许的 Pod 失效次数。当你设置此字段后，Job 默认将执行所有索引。

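一个最小的示意片段（数值仅为举例）：逐索引的回退限制只能用于带索引（`completionMode: Indexed`）的 Job：

```yaml
completionMode: Indexed     # 逐索引的回退限制要求 Job 是带索引的
completions: 10
parallelism: 10
backoffLimitPerIndex: 2     # 每个索引最多容忍 2 次 Pod 失效
```
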
<!--
Additionally, to fine-tune the error handling:
* Specify the cap on the total number of failed indexes by setting the
  `spec.maxFailedIndexes` field. When the limit is exceeded the entire Job is
  terminated.
* Define a short-circuit to detect a failed index by using the `FailIndex` action in the
  [Pod Failure Policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy)
  mechanism.
-->
另外，你可以通过以下方式微调错误处理：

* 通过设置 `spec.maxFailedIndexes` 字段，指定失效索引总数的上限。超过此限制时，整个 Job 会被终止。
* 通过 [Pod 失效策略](/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy)机制中的
  `FailIndex` 动作定义短路逻辑，以便尽早检测失效的索引。

<!--
When the number of tolerated failures is exceeded, the Job marks that index as
failed and lists it in the Job's `status.failedIndexes` field.

### Example

The following Job spec snippet is an example of how to combine backoff limit per
index with the _Pod Failure Policy_ feature:
-->
当超过容忍的失效次数时，Job 会将该索引标记为失效，并在 Job 的 `status.failedIndexes` 字段中列出该索引。
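
例如，下面是一个假设的状态片段，用于示意该字段的形式（具体索引值仅为举例）：

```yaml
status:
  completedIndexes: "0-2,4-9"   # 成功完成的索引（示意值）
  failedIndexes: "3"            # 超过逐索引回退限制后被标记为失效的索引（示意值）
```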

### 示例

下面的 Job 规约片段展示了如何将逐索引的回退限制与 **Pod 失效策略**特性结合使用：

```yaml
completions: 10
parallelism: 10
completionMode: Indexed
backoffLimitPerIndex: 1
maxFailedIndexes: 5
podFailurePolicy:
  rules:
  - action: Ignore
    onPodConditions:
    - type: DisruptionTarget
  - action: FailIndex
    onExitCodes:
      operator: In
      values: [ 42 ]
```

<!--
In this example, the Job handles Pod failures as follows:

- Ignores any failed Pods that have the built-in
  [disruption condition](/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions),
  called `DisruptionTarget`. These Pods don't count towards Job backoff limits.
- Fails the index corresponding to the failed Pod if any of the failed Pod's
  containers finished with the exit code 42 - based on the matching "FailIndex"
  rule.
-->
在此例中，Job 对 Pod 失效的处理逻辑如下：

* 忽略具有内置[干扰状况](/zh-cn/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions)
  （称为 `DisruptionTarget`）的失效 Pod。这些 Pod 不计入 Job 的回退限制。
* 如果失效的 Pod 中任何容器的退出码是 42，则基于匹配的 `FailIndex` 规则，
  将对应的索引标记为失效（本列表之后附有一个示意如何产生该退出码的 Pod 模板片段）。
<!--
- Retries the first failure of any index, unless the index failed due to the
  matching `FailIndex` rule.
- Fails the entire Job if the number of failed indexes exceeded 5 (set by the
  `spec.maxFailedIndexes` field).
-->
* 除非索引因匹配的 `FailIndex` 规则失效，否则会重试该索引的首次失效。
* 如果失效索引数量超过 5 个（由 `spec.maxFailedIndexes` 设置），则整个 Job 失效。

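下面是一个假设的 Pod 模板片段（镜像与脚本名称均为虚构），示意测试命令如何通过退出码 42 表示不可重试的失败：

```yaml
template:
  spec:
    restartPolicy: Never
    containers:
    - name: test-suite
      image: registry.example.com/integration-tests:latest   # 虚构的镜像
      # 带索引的 Job 会为每个 Pod 注入 JOB_COMPLETION_INDEX 环境变量；
      # 假设脚本据此选择测试套件，并在遇到不可重试的失败时以退出码 42 退出
      command: ["./run-suite.sh"]
```
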
<!--
## Learn more

- Read the blog post on the closely related feature of Pod Failure Policy [Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA](/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/)
- For a hands-on guide to using Pod failure policy, including the use of FailIndex, see
  [Handling retriable and non-retriable pod failures with Pod failure policy](/docs/tasks/job/pod-failure-policy/)
- Read the documentation for
  [Backoff limit per index](/docs/concepts/workloads/controllers/job/#backoff-limit-per-index) and
  [Pod failure policy](/docs/concepts/workloads/controllers/job/#pod-failure-policy)
- Read the KEP for the [Backoff Limits Per Index For Indexed Jobs](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3850-backoff-limits-per-index-for-indexed-jobs)
-->
## 进一步了解

* 阅读与 Pod 失效策略密切相关的博客文章：[Kubernetes 1.31：Job 的 Pod 失效策略进阶至 GA](/zh-cn/blog/2024/08/19/kubernetes-1-31-pod-failure-policy-for-jobs-goes-ga/)
* 查看包含 FailIndex 用法在内的 Pod 失效策略实操指南：
  [使用 Pod 失效策略处理可重试和不可重试的 Pod 失效](/zh-cn/docs/tasks/job/pod-failure-policy/)
* 阅读[逐索引的回退限制](/zh-cn/docs/concepts/workloads/controllers/job/#backoff-limit-per-index)和
  [Pod 失效策略](/zh-cn/docs/concepts/workloads/controllers/job/#pod-failure-policy)等文档
* 查阅 KEP：[带索引的 Job 的逐索引回退限制](https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/3850-backoff-limits-per-index-for-indexed-jobs)

<!--
## Get involved

This work was sponsored by the Kubernetes
[batch working group](https://github.com/kubernetes/community/tree/master/wg-batch)
in close collaboration with the
[SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) community.

If you are interested in working on new features in the space we recommend
subscribing to our [Slack](https://kubernetes.slack.com/messages/wg-batch)
channel and attending the regular community meetings.
-->
## 参与此工作 {#get-involved}

这项工作由 Kubernetes [Batch Working Group（批处理工作组）](https://github.com/kubernetes/community/tree/master/wg-batch)负责，
并与 [SIG Apps](https://github.com/kubernetes/community/tree/master/sig-apps) 社区密切协作。

如果你有兴趣参与此领域的新特性开发，建议订阅我们的
[Slack 频道](https://kubernetes.slack.com/messages/wg-batch)，并参加定期社区会议。