content/en/docs/tasks/administer-cluster/topology-manager.md (25 additions, 11 deletions)
@@ -222,17 +222,31 @@ You will still have to enable each option using the `TopologyManagerPolicyOptions`
The following policy options exist:
* `prefer-closest-numa-nodes` (beta, visible by default; `TopologyManagerPolicyOptions` and `TopologyManagerPolicyBetaOptions` feature gates have to be enabled).
-The `prefer-closest-numa-nodes` policy option is beta in Kubernetes {{< skew currentVersion >}}.
-
-If the `prefer-closest-numa-nodes` policy option is specified, the `best-effort` and `restricted`
-policies will favor sets of NUMA nodes with shorter distance between them when making admission decisions.
-You can enable this option by adding `prefer-closest-numa-nodes=true` to the Topology Manager policy options.
-By default, without this option, Topology Manager aligns resources on either a single NUMA node or
-the minimum number of NUMA nodes (in cases where more than one NUMA node is required). However,
-the `TopologyManager` is not aware of NUMA distances and does not take them into account when making admission decisions.
-This limitation surfaces in multi-socket, as well as single-socket multi NUMA systems,
-and can cause significant performance degradation in latency-critical execution and high-throughput applications if the
-Topology Manager decides to align resources on non-adjacent NUMA nodes.
+The `prefer-closest-numa-nodes` policy option is beta in Kubernetes {{< skew currentVersion >}}.
+
+If the `prefer-closest-numa-nodes` policy option is specified, the `best-effort` and `restricted`
+policies will favor sets of NUMA nodes with shorter distance between them when making admission decisions.
+You can enable this option by adding `prefer-closest-numa-nodes=true` to the Topology Manager policy options.
+By default, without this option, Topology Manager aligns resources on either a single NUMA node or
+the minimum number of NUMA nodes (in cases where more than one NUMA node is required). However,
+the `TopologyManager` is not aware of NUMA distances and does not take them into account when making admission decisions.
+This limitation surfaces in multi-socket, as well as single-socket multi NUMA systems,
+and can cause significant performance degradation in latency-critical execution and high-throughput applications if the
+Topology Manager decides to align resources on non-adjacent NUMA nodes.
+
+* `max-allowable-numa-nodes` (beta, visible by default).
+The `max-allowable-numa-nodes` policy option is beta in Kubernetes {{< skew currentVersion >}}.
+
+The time to admit a pod is tied to the number of NUMA nodes on the physical machine.
+By default, Kubernetes does not run a kubelet with the topology manager enabled on any (Kubernetes) node where more than 8 NUMA nodes are detected.
+If you select the `max-allowable-numa-nodes` policy option, nodes with more than 8 NUMA nodes can
+be allowed to run with the topology manager enabled. The Kubernetes project only has limited data on the impact
+of using the topology manager on (Kubernetes) nodes with more than 8 NUMA nodes. Because of that
+lack of data, using this policy option is **not** recommended and is at your own risk.
+Setting a value of `max-allowable-numa-nodes` does not (in and of itself) affect the
+latency of pod admission, but binding a Pod to a (Kubernetes) node with many NUMA nodes does have an impact.
+Future improvements to Kubernetes may improve Pod admission performance and reduce the high
+latency that occurs as the number of NUMA nodes increases.

### Pod Interactions with Topology Manager Policies
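
To make the `prefer-closest-numa-nodes` text above more concrete, here is a minimal sketch of a kubelet configuration file that turns the option on. It assumes the kubelet is driven by a `KubeletConfiguration` file; the `best-effort` policy and `pod` scope shown here are illustrative choices, not values taken from this change.

```yaml
# Sketch: enable the prefer-closest-numa-nodes policy option via the kubelet config file.
# Policy and scope values below are illustrative; use whatever matches your node.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManagerPolicyOptions: true      # required for policy options, per the text above
  TopologyManagerPolicyBetaOptions: true  # required while this option is beta
topologyManagerPolicy: best-effort        # prefer-closest-numa-nodes affects best-effort and restricted
topologyManagerScope: pod
topologyManagerPolicyOptions:
  prefer-closest-numa-nodes: "true"
```

As with any kubelet configuration change, the kubelet has to be restarted for the new settings to take effect.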
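
In the same spirit, a hedged sketch for `max-allowable-numa-nodes`: the value `12` is purely illustrative, and the caveats above about running the topology manager on machines with more than 8 NUMA nodes still apply.

```yaml
# Sketch: allow the topology manager on a machine with more than 8 NUMA nodes.
# The value 12 is an arbitrary example; the default upper limit is 8.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
topologyManagerPolicy: restricted         # illustrative choice
topologyManagerScope: container
topologyManagerPolicyOptions:
  max-allowable-numa-nodes: "12"
```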