
Commit c1783b4

Tune memory manager page
This PR fixes the memory manager page by:

- removing links to non-existent sections
- fixing links with bad anchors
- fixing the incorrect language tag for code snippets
1 parent 0a474e7 commit c1783b4

File tree

1 file changed: +12 -8 lines changed


content/en/docs/tasks/administer-cluster/memory-manager.md

Lines changed: 12 additions & 8 deletions
````diff
@@ -63,13 +63,11 @@ Important topic in the context of Memory Manager operation is the management of
 
 ## Memory Manager configuration
 
-Other Managers should be first pre-configured (section [Pre-configuration](#pre-configuration)). Next, the Memory Manger feature should be enabled (section [Enable the Memory Manager feature](#enable-the-memory-manager-feature)) and be run with `Static` policy (section [Static policy](#static-policy)). Optionally, some amount of memory can be reserved for system or kubelet processes to increase node stability (section [Reserved memory flag](#reserved-memory-flag)).
+Other Managers should be first pre-configured. Next, the Memory Manger feature should be enabled and be run with `Static` policy (section [Static policy](#policy-static)). Optionally, some amount of memory can be reserved for system or kubelet processes to increase node stability (section [Reserved memory flag](#reserved-memory-flag)).
 
 ### Policies
 
-Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`.
-
-Two policies can be selected:
+Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`:
 
 * `None` (default)
 * `Static`
````
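The hunk above names the only two accepted values for `--memory-manager-policy`. As a hedged illustration (this is not kubelet code, just a sketch of the same accept/reject decision), a POSIX shell `case` can mirror that validation:

```shell
#!/bin/sh
# Illustrative sketch only: accept exactly the two policy values named above.
# The real validation happens inside the kubelet when it parses the flag.
policy="Static"   # value you would pass via --memory-manager-policy
case "$policy" in
  None|Static) echo "valid policy: $policy" ;;
  *)           echo "unknown policy: $policy" >&2; exit 1 ;;
esac
```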
````diff
@@ -93,7 +91,6 @@ The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/
 
 The Kubernetes scheduler incorporates "allocatable" to optimise pod scheduling process. The foregoing flags include `--kube-reserved`, `--system-reserved` and `--eviction-threshold`. The sum of their values will account for the total amount of reserved memory.
 
-
 A new `--reserved-memory` flag was added to Memory Manager to allow for this total reserved memory to be split (by a node administrator) and accordingly reserved across many NUMA nodes.
 
 The flag specifies a comma-separated list of memory reservations per NUMA node.
````
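The hunk above says the total reserved memory is the sum of `--kube-reserved`, `--system-reserved` and the eviction threshold, and that `--reserved-memory` must split exactly that total across NUMA nodes. A hedged sketch of the arithmetic, using the memory values from this page's example configuration (4Gi, 1Gi, and the 100MiB default hard eviction threshold; the numbers are taken from this commit, not from a live node):

```shell
#!/bin/sh
# Sketch: compute the total (in MiB) that the per-NUMA-node reservations
# given via --reserved-memory must add up to.
kube_reserved_mib=$((4 * 1024))      # --kube-reserved=memory=4Gi
system_reserved_mib=$((1 * 1024))    # --system-reserved=memory=1Gi
eviction_hard_mib=100                # default hard eviction threshold, 100MiB
total_mib=$((kube_reserved_mib + system_reserved_mib + eviction_hard_mib))
echo "total reserved memory: ${total_mib}MiB"
```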
````diff
@@ -150,7 +147,7 @@ The default hard eviction threshold is 100MiB, and **not** zero. Remember to inc
 
 Here is an example of a correct configuration:
 
-```shell
+```none
 --feature-gates=MemoryManager=true
 --kube-reserved=cpu=4,memory=4Gi
 --system-reserved=cpu=1,memory=1Gi
````
````diff
@@ -225,14 +222,19 @@ This error typically occurs in the following situations:
 * the pod's request is rejected due to particular Topology Manager policy constraints
 
 The error appears in the status of a pod:
+
 ```shell
-# kubectl get pods
+kubectl get pods
+```
+
+```none
 NAME         READY  STATUS                 RESTARTS  AGE
 guaranteed   0/1    TopologyAffinityError  0         113s
 ```
 
 Use `kubectl describe pod <id>` or `kubectl get events` to obtain detailed error message:
-```shell
+
+```none
 Warning  TopologyAffinityError  10m  kubelet, dell8  Resources cannot be allocated with Topology locality
 ```
 
````
````diff
@@ -253,6 +255,7 @@ Also, search the logs for occurrences associated with the Memory Manager, e.g. t
 ### Examine the memory manager state on a node
 
 Let us first deploy a sample `Guaranteed` pod whose specification is as follows:
+
 ```yaml
 apiVersion: v1
 kind: Pod
````
````diff
@@ -274,6 +277,7 @@ spec:
 ```
 
 Next, let us log into the node where it was deployed and examine the state file in `/var/lib/kubelet/memory_manager_state`:
+
 ```json
 {
   "policyName":"Static",
````
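The state file shown above is JSON, so a quick on-node check of the active policy can be done with standard text tools. A hedged sketch (here a local sample mirrors the `"policyName"` field from the snippet above; on a real node you would read `/var/lib/kubelet/memory_manager_state` instead):

```shell
#!/bin/sh
# Sketch: extract the "policyName" field from the Memory Manager state file.
# A JSON-aware tool would be more robust; sed suffices for this flat field.
state='{"policyName":"Static"}'
printf '%s\n' "$state" | sed -n 's/.*"policyName":"\([^"]*\)".*/\1/p'
# prints: Static
```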

0 commit comments
