
Commit 54b01a2

docs: document trade-offs in memory configuration
Problem: memory requests and limits have been set for the `master` process in PR #1631. The chosen values do not follow best practices for setting requests and limits, but the intention was to provide defaults that work for a wide variety of clusters, including small ones.

Solution: provide solid documentation about the problems that can arise in production environments when `resources.requests.memory << resources.limits.memory`, and add a link to relevant external sources, including the advice from Tim Hockin:

> Always set memory limit == request

Signed-off-by: cmontemuino <[email protected]>
1 parent 7938e81 commit 54b01a2

File tree

2 files changed, +7 -1 lines changed


deployment/helm/node-feature-discovery/values.yaml

Lines changed: 6 additions & 0 deletions
@@ -100,6 +100,12 @@ master:
       memory: 4Gi
     requests:
       cpu: 100m
+      # You may want to use the same value for `requests.memory` and `limits.memory`. The “requests” value affects scheduling to accommodate pods on nodes.
+      # If there is a large difference between “requests” and “limits” and nodes experience memory pressure, the kernel may invoke
+      # the OOM Killer, even if the memory does not exceed the “limits” threshold. This can cause unexpected pod evictions. Memory
+      # cannot be compressed and once allocated to a pod, it can only be reclaimed by killing the pod. There is a great article by
+      # Robusta that discusses this issue.
+      # https://home.robusta.dev/blog/kubernetes-memory-limit
       memory: 128Mi
 
   nodeSelector: {}
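
One way to follow the advice in the comment above is to override the chart defaults so that `requests.memory` equals `limits.memory`. The sketch below is illustrative, not a chart default: the `4Gi` figure simply mirrors the default memory limit, and the file name `custom-values.yaml` is an arbitrary choice.

```yaml
# custom-values.yaml -- illustrative Helm values override (not a chart default)
master:
  resources:
    limits:
      cpu: 300m
      memory: 4Gi
    requests:
      cpu: 100m
      memory: 4Gi   # matched to limits.memory so the scheduler reserves what the pod may actually use
```

Passing this file to Helm (for example, `helm install ... -f custom-values.yaml`) layers the override on top of the defaults shown in the diff above.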

docs/deployment/helm.md

Lines changed: 1 addition & 1 deletion
@@ -132,7 +132,7 @@ API's you need to install the prometheus operator in your cluster.
 | `master.service.type` | string | ClusterIP | NFD master service type. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
 | `master.service.port` | integer | 8080 | NFD master service port. **NOTE**: this parameter is related to the deprecated gRPC API and will be removed with it in a future release |
 | `master.resources.limits` | dict | {cpu: 300m, memory: 4Gi} | NFD master pod [resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
-| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) |
+| `master.resources.requests`| dict | {cpu: 100m, memory: 128Mi} | NFD master pod [resources requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). You may want to use the same value for `requests.memory` and `limits.memory`. The “requests” value affects scheduling to accommodate pods on nodes. If there is a large difference between “requests” and “limits” and nodes experience memory pressure, the kernel may invoke the OOM Killer, even if the memory does not exceed the “limits” threshold. This can cause unexpected pod evictions. Memory cannot be compressed and once allocated to a pod, it can only be reclaimed by killing the pod. There is a great article by [Robusta](https://home.robusta.dev/blog/kubernetes-memory-limit) that discusses this issue.|
 | `master.tolerations` | dict | _Scheduling to master node is disabled_ | NFD master pod [tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
 | `master.annotations` | dict | {} | NFD master pod [annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) |
 | `master.affinity` | dict | | NFD master pod required [node affinity](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
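
Assuming the chart passes `master.resources` through to the master container unchanged, the override sketched earlier would surface in the rendered Deployment roughly as below. Matching only the memory values keeps the pod in the Burstable QoS class (Guaranteed would also require equal CPU requests and limits), but it removes the memory overcommit that makes OOM kills under node memory pressure likely.

```yaml
# Illustrative resources stanza in the rendered master Deployment
resources:
  limits:
    cpu: 300m
    memory: 4Gi
  requests:
    cpu: 100m
    memory: 4Gi
```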
