articles/operator-nexus/troubleshoot-memory-limits.md (24 additions, 2 deletions)

@@ -13,7 +13,7 @@ author: matternst7258
## Alerting for memory limits

- It is recommended to have alerts setup for the Operator Nexus cluster to look for Kubernetes pods restarting from OOMKill errors. These alerts will allow customers to know if a component on a server is working appropriately.
+ It's recommended to have alerts set up for the Operator Nexus cluster to look for Kubernetes pods restarting from OOMKill errors. These alerts allow customers to know if a component on a server is working appropriately.
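
As an illustration of the signal such an alert watches for, here is a minimal sketch that lists pods whose last container termination reason was `OOMKilled`. It assumes direct `kubectl` access and an available `jq`; on an Operator Nexus cluster the same query would typically be wrapped in the `run-read-command` used later in this article.

```bash
# Minimal sketch: list pods whose last container termination reason was OOMKilled.
# Assumes kubectl and jq are available; namespace scope and access method are environment specific.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.status.containerStatuses[]?.lastState.terminated.reason == "OOMKilled")
      | "\(.metadata.namespace)/\(.metadata.name)"'
```
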
## Identifying Out of Memory (OOM) pods
@@ -51,11 +51,33 @@ The data from these commands identify whether a pod is restarting due to `OOMKill`
## Patching memory limits

- It is recommended for all memory limit changes be reported to Microsoft support for further investigation or adjustments.
+ Report all memory limit changes to Microsoft support for further investigation or adjustments.

> [!WARNING]
> Patching memory limits to a pod is not permanent and can be overwritten if the pod restarts.
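
For illustration, a memory limit change of this kind is typically applied by patching the container resources on the owning workload. The sketch below uses placeholder names and a placeholder `1Gi` value; it is generic Kubernetes usage, not the exact procedure for any particular Operator Nexus component.

```bash
# Illustrative sketch: raise the memory limit on one container of a deployment using a
# strategic merge patch (entries in the containers list are merged by container name).
# All names and the 1Gi value are placeholders. A managing controller can reconcile this
# change away, which is one reason such patches are not permanent.
kubectl patch deployment <deploymentName> -n <namespace> \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"<containerName>","resources":{"limits":{"memory":"1Gi"}}}]}}}}'
```
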
## Confirm memory limit changes

When memory limits change, the pods should return to the `Ready` state and stop restarting.

The following commands can be used to confirm the behavior.
```azcli
# The parameters after --name below are illustrative placeholders; adjust them to your environment.
az networkcloud baremetalmachine run-read-command --name "<bareMetalMachineName>" \
    --resource-group "<resourceGroupName>" \
    --subscription "<subscriptionId>" \
    --limit-time-seconds 60 \
    --commands '[{command:"kubectl get",arguments:[pods,-n,<namespace>]}]'
```