@@ -91,9 +91,9 @@ Some kubelet garbage collection features are deprecated in favor of eviction:
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
- | `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
- | `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
- | `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
+ | `--maximum-dead-containers` | - | deprecated once old logs are stored outside of container's context |
+ | `--maximum-dead-containers-per-container` | - | deprecated once old logs are stored outside of container's context |
+ | `--minimum-container-ttl-duration` | - | deprecated once old logs are stored outside of container's context |
### Eviction thresholds
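The flag replacement described in the table above can be sketched as a kubelet invocation. The threshold values below are illustrative, not defaults; the `imagefs.available` eviction signal is the one that covers the old image garbage collection behavior.

```shell
# Formerly: kubelet --image-gc-high-threshold=85 --image-gc-low-threshold=80
# With eviction signals instead (values illustrative, not defaults):
kubelet \
  --eviction-hard=imagefs.available<15% \
  --eviction-soft=imagefs.available<20% \
  --eviction-soft-grace-period=imagefs.available=2m \
  --eviction-minimum-reclaim=imagefs.available=5%
```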
@@ -216,7 +216,7 @@ the kubelet frees up disk space in the following order:
If the kubelet's attempts to reclaim node-level resources don't bring the eviction
signal below the threshold, the kubelet begins to evict end-user pods.
- The kubelet uses the following parameters to determine pod eviction order:
+ The kubelet uses the following parameters to determine the pod eviction order:
1. Whether the pod's resource usage exceeds requests
1. [Pod Priority](/docs/concepts/scheduling-eviction/pod-priority-preemption/)
@@ -319,7 +319,7 @@ The kubelet sets an `oom_score_adj` value for each container based on the QoS fo
{{<note>}}
The kubelet also sets an `oom_score_adj` value of `-997` for containers in Pods that have
- `system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}
+ `system-node-critical` {{<glossary_tooltip text="Priority" term_id="pod-priority">}}.
{{</note>}}
If the kubelet can't reclaim memory before a node experiences OOM, the
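The QoS-based scoring the note above refers to can be illustrated with a small sketch, assuming the documented values: `-997` for Guaranteed (and `system-node-critical`) pods, `1000` for BestEffort, and a request-proportional value clamped to `[2, 999]` for Burstable. The function name is hypothetical.

```python
def oom_score_adj(qos: str, memory_request: int = 0, node_capacity: int = 1) -> int:
    """Illustrative sketch of the kubelet's per-QoS oom_score_adj values."""
    if qos == "Guaranteed":  # also applied to system-node-critical pods
        return -997
    if qos == "BestEffort":  # killed first under node OOM
        return 1000
    # Burstable: a larger memory request relative to node capacity yields a
    # lower score, so the container is less likely to be OOM-killed first.
    return min(max(2, 1000 - (1000 * memory_request) // node_capacity), 999)

# A Burstable container requesting half the node's memory:
print(oom_score_adj("Burstable", memory_request=4 << 30, node_capacity=8 << 30))  # 500
```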
@@ -401,7 +401,7 @@ counted as `active_file`. If enough of these kernel block buffers are on the
active LRU list, the kubelet is liable to observe this as high resource use and
taint the node as experiencing memory pressure - triggering pod eviction.
- For more more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
+ For more details, see [https://github.com/kubernetes/kubernetes/issues/43916](https://github.com/kubernetes/kubernetes/issues/43916)
You can work around that behavior by setting the memory limit and memory request
the same for containers likely to perform intensive I/O activity. You will need