
Commit 4840148

Merge pull request #31571 from prameshj/patch-8
Add a section about nodelocaldns memory limits.
2 parents b2063ee + 9b0539e commit 4840148

1 file changed: +28 -1 lines changed

content/en/docs/tasks/administer-cluster/nodelocaldns.md

Lines changed: 28 additions & 1 deletion
@@ -100,4 +100,31 @@ shown in [the example](/docs/tasks/administer-cluster/dns-custom-nameservers/#ex
The `node-local-dns` ConfigMap can also be modified directly with the stubDomain configuration
in the Corefile format. Some cloud providers might not allow modifying `node-local-dns` ConfigMap directly.
In those cases, the `kube-dns` ConfigMap can be updated.

## Setting memory limits

node-local-dns pods use memory for storing cache entries and processing queries. Since they do not watch Kubernetes objects, neither the cluster size nor the number of Services and Endpoints directly affects memory usage. Memory usage is influenced by the DNS query pattern.
From [CoreDNS docs](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md),
> The default cache size is 10000 entries, which uses about 30 MB when completely filled.

This would be the memory usage for each server block (if the cache gets completely filled).
Memory usage can be reduced by specifying smaller cache sizes.
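As a rough sketch, a reduced cache for the `cluster.local` server block of the node-local-dns Corefile could look like the excerpt below; the `success` and `denial` capacities shown are only placeholder values, and the surrounding plugins should stay as they are in your deployed Corefile:

```
cluster.local:53 {
    errors
    cache {
        # Smaller success/denial capacities bound the worst-case
        # memory used by this server block's cache.
        success 5000
        denial 2500
    }
    # ... keep the remaining plugins from your existing server block ...
}
```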
The number of concurrent queries is linked to the memory demand, because each extra
goroutine used for handling a query requires some additional memory. You can set an upper limit
using the `max_concurrent` option in the forward plugin.
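For example, a bound on in-flight queries could be added to the `forward` stanza that already exists in each server block. This is only a sketch: `1000` is an illustrative value, and `__PILLAR__CLUSTER__DNS__` stands in for whatever upstream address your Corefile uses:

```
forward . __PILLAR__CLUSTER__DNS__ {
    force_tcp
    # Queries above this in-flight limit are rejected instead of
    # each one spawning another goroutine.
    max_concurrent 1000
}
```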
If a node-local-dns pod attempts to use more memory than is available (because of total system
resources, or because of a configured
[resource limit](/docs/concepts/configuration/manage-resources-containers/)), the operating system
may shut down that pod's container.
If this happens, the container that is terminated (“OOMKilled”) does not clean up the custom
packet filtering rules that it previously added during startup.
The node-local-dns container should get restarted (since it is managed as part of a DaemonSet), but this
will lead to a brief DNS downtime each time that the container fails: the packet filtering rules direct
DNS queries to a local Pod that is unhealthy.
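For reference, a memory limit on node-local-dns is expressed like any other container resource limit. A minimal sketch as a strategic merge patch (which could be applied with `kubectl -n kube-system patch daemonset node-local-dns --patch-file <file>`), assuming the DaemonSet and container names used by the upstream manifest (`node-local-dns` and `node-cache`); `100Mi` is purely a placeholder:

```yaml
# Strategic merge patch for the node-local-dns DaemonSet (sketch).
spec:
  template:
    spec:
      containers:
        - name: node-cache      # container name assumed from the upstream manifest
          resources:
            limits:
              memory: "100Mi"   # placeholder; derive the real value from measured peak usage
```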
You can determine a suitable memory limit by running node-local-dns pods without a limit and
measuring the peak usage. You can also set up and use a
[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler)
in _recommender mode_, and then check its recommendations.
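A minimal sketch of such a VerticalPodAutoscaler, assuming the VPA components are installed in the cluster and the DaemonSet is named `node-local-dns` in `kube-system`:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: node-local-dns
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: DaemonSet
    name: node-local-dns
  updatePolicy:
    # Recommender mode: compute recommendations, but never evict or update the pods.
    updateMode: "Off"
```

The computed recommendation then appears in the object's status, for example via `kubectl -n kube-system describe verticalpodautoscaler node-local-dns`.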
