Changed file: content/en/docs/tasks/administer-cluster/nodelocaldns.md (+28 −1)
@@ -100,4 +100,31 @@ shown in [the example](/docs/tasks/administer-cluster/dns-custom-nameservers/#ex
The `node-local-dns` ConfigMap can also be modified directly with the stubDomain configuration
in the Corefile format. Some cloud providers might not allow modifying the `node-local-dns` ConfigMap directly.
In those cases, the `kube-dns` ConfigMap can be updated.
## Setting memory limits
node-local-dns pods use memory for storing cache entries and processing queries. Since they do not watch Kubernetes objects, the cluster size and the number of Services/Endpoints do not directly affect memory usage. Memory usage is influenced by the DNS query pattern.
From the [CoreDNS docs](https://github.com/coredns/deployment/blob/master/kubernetes/Scaling_CoreDNS.md):

> The default cache size is 10000 entries, which uses about 30 MB when completely filled.
This would be the memory usage for each server block (if the cache gets completely filled).
Memory usage can be reduced by specifying smaller cache sizes.
The number of concurrent queries is linked to the memory demand, because each extra
goroutine used for handling a query requires an amount of memory. You can set an upper limit
using the `max_concurrent` option in the `forward` plugin.
If a node-local-dns pod attempts to use more memory than is available (because of total system
resources, or because of a configured
[resource limit](/docs/concepts/configuration/manage-resources-containers/)), the operating system
may shut down that pod's container.
If this happens, the container that is terminated (“OOMKilled”) does not clean up the custom
packet filtering rules that it previously added during startup.
The node-local-dns container should get restarted (since it is managed as part of a DaemonSet), but this
will lead to a brief DNS downtime each time the container fails: the packet filtering rules direct
DNS queries to a local Pod that is unhealthy.
You can determine a suitable memory limit by running node-local-dns pods without a limit and
measuring the peak usage. You can also set up and use a
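One way to observe current usage during such a measurement run is `kubectl top`, a sketch that assumes the metrics-server add-on is installed and the `k8s-app=node-local-dns` label from the nodelocaldns manifest:

```shell
# Show current CPU and memory usage of each node-local-dns pod
# (requires the metrics-server add-on; the label value may differ in your deployment)
kubectl top pod --namespace kube-system --selector k8s-app=node-local-dns
```

Sampling this during representative peak DNS load gives a baseline for choosing the limit.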
0 commit comments