---
reviewers:
- janetkuo
title: Automatic Clean-up for Finished Jobs
content_type: concept
weight: 70
---

<!-- overview -->

{{< feature-state for_k8s_version="v1.23" state="stable" >}}

The TTL-after-finished {{<glossary_tooltip text="controller" term_id="controller">}} provides a
TTL (time to live) mechanism to limit the lifetime of resource objects that
have finished execution. The TTL-after-finished controller only handles
{{< glossary_tooltip text="Jobs" term_id="job" >}}.

<!-- body -->

## TTL-after-finished Controller

The TTL-after-finished controller only supports Jobs. A cluster operator can use
this feature to clean up finished Jobs (either `Complete` or `Failed`) automatically by
specifying the `.spec.ttlSecondsAfterFinished` field of a Job, as in this
[example](/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically).
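
For illustration, a minimal Job manifest that uses this field might look like the
following sketch (the Job name, image, and TTL value are placeholders, not values
mandated by this page):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl              # hypothetical name
spec:
  ttlSecondsAfterFinished: 100   # delete the Job 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```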

The TTL-after-finished controller assumes that a job is eligible to be cleaned up
TTL seconds after the job has finished; in other words, when the TTL has expired.
When the TTL-after-finished controller cleans up a job, it deletes it cascadingly,
that is, it deletes its dependent objects together with it. Note that when the job
is deleted, its lifecycle guarantees, such as finalizers, will be honored.
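
As background for the cascading deletion: each Pod created by a Job carries an
`ownerReferences` entry pointing back at the Job, which is how the garbage
collector finds the dependents to remove. A sketch of that metadata (names are
illustrative, and the `uid` field that real objects carry is omitted):

```yaml
# Metadata of a Pod created by the Job controller (illustrative values)
metadata:
  name: pi-with-ttl-8fx2q
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: pi-with-ttl
    controller: true
    blockOwnerDeletion: true
```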

The TTL seconds can be set at any time. Here are some examples for setting the
`.spec.ttlSecondsAfterFinished` field of a Job:

* Specify this field in the job manifest, so that a Job can be cleaned up
  automatically some time after it finishes.
* Set this field of existing, already finished jobs, to adopt this new
  feature (a patch sketch follows this list).
* Use a
  [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
  to set this field dynamically at job creation time. Cluster administrators can
  use this to enforce a TTL policy for finished jobs (a webhook registration
  sketch also follows this list).
* Use a
  [mutating admission webhook](/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks)
  to set this field dynamically after the job has finished, and choose
  different TTL values based on job status, labels, etc.
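
For the second item above, one way to retrofit the field onto an already finished
Job is a merge patch. A minimal sketch, assuming a Job named `pi-with-ttl` and a
patch file saved as `ttl-patch.yaml` (both hypothetical), applied with
`kubectl patch job pi-with-ttl --type merge --patch-file ttl-patch.yaml`:

```yaml
# ttl-patch.yaml: hypothetical merge patch adding a TTL to an existing Job
spec:
  ttlSecondsAfterFinished: 60   # clean the Job up one minute after it finished
```
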
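For the webhook-based items, the registration object is a standard
`MutatingWebhookConfiguration` scoped to Jobs. A minimal sketch, assuming a webhook
Service named `ttl-defaulter` in namespace `ttl-system` (both hypothetical) whose
handler injects `.spec.ttlSecondsAfterFinished` into incoming Jobs:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: ttl-policy                 # hypothetical name
webhooks:
- name: ttl.example.com            # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: ttl-defaulter          # hypothetical Service serving the webhook
      namespace: ttl-system        # hypothetical namespace
      path: /mutate
    # caBundle: <base64 CA bundle for the serving certificate>
  rules:
  - apiGroups: ["batch"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["jobs"]
```
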
## Caveats

### Updating TTL Seconds

Note that the TTL period, e.g. the `.spec.ttlSecondsAfterFinished` field of Jobs,
can be modified after the job is created or has finished. However, once the
Job becomes eligible to be deleted (when the TTL has expired), the system won't
guarantee that the Jobs will be kept, even if an update to extend the TTL
returns a successful API response.

### Time Skew

Because the TTL-after-finished controller uses timestamps stored in the Kubernetes
jobs to determine whether the TTL has expired or not, this feature is sensitive to
time skew in the cluster, which may cause the TTL-after-finished controller to clean
up job objects at the wrong time.

Clocks aren't always correct, but the difference should be very small. Please be
aware of this risk when setting a non-zero TTL.