
Commit ee1e15c

Author: amsliu
Message: fix build validation warnings
Parent: 528103b

File tree

3 files changed: +4 additions, -4 deletions


support/azure/azure-kubernetes/availability-performance/identify-memory-saturation-aks.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ The following table outlines the common symptoms of memory saturation.
 | Unschedulable pods | Additional pods can't be scheduled if the node is close to its set memory limit. |
 | Pod eviction | If a node is running out of memory, the kubelet can evict pods. Although the control plane tries to reschedule the evicted pods on other nodes that have resources, there's no guarantee that other nodes have sufficient memory to run these pods. |
 | Node not ready | Memory saturation can cause `kubelet` and `containerd` to become unresponsive, eventually causing node readiness issues. |
-| Out-of-memory (OOM) kill | An OOM problem occurs if the pod eviction can't prevent a node issue. For more information, see [Troubleshoot OOMkilled in AKS clusters](./troubleshoot-oomkilled-in-aks-clusters.md).|
+| Out-of-memory (OOM) kill | An OOM problem occurs if the pod eviction can't prevent a node issue. For more information, see [Troubleshoot OOMkilled in AKS clusters](./troubleshoot-oomkilled-aks-clusters.md).|
 
 ## Troubleshooting checklist

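As an aside for readers of this change, the symptoms listed in the table above can be spot-checked from the command line. A minimal sketch, assuming access to a live AKS cluster via `kubectl` (`<node-name>` is a placeholder, not from the original article):

```shell
# Show current memory usage per node (metrics-server is enabled by default on AKS).
kubectl top nodes

# Inspect a node's conditions for MemoryPressure and readiness issues.
kubectl describe node <node-name>

# List eviction events across all namespaces to find pods evicted for memory.
kubectl get events --all-namespaces --field-selector reason=Evicted
```

A node reporting `MemoryPressure: True`, or a cluster-wide stream of `Evicted` events, points at the saturation symptoms the table describes.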
support/azure/azure-kubernetes/availability-performance/troubleshoot-oomkilled-in-aks-clusters.md renamed to support/azure/azure-kubernetes/availability-performance/troubleshoot-oomkilled-aks-clusters.md

Lines changed: 2 additions & 2 deletions
@@ -175,7 +175,7 @@ User pods may be OOMKilled due to insufficient memory limits or excessive memory
 
 ### Cause 1: User workloads may be running in a system node pool
 
-It is recommended to create user node pools for user workloads. For more information, see [Manage system node pools in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/use-system-pools).
+It is recommended to create user node pools for user workloads. For more information, see [Manage system node pools in Azure Kubernetes Service (AKS)](/azure/aks/use-system-pools).
 
 ### Cause 2: Application pod keeps restarting due to OOMkilled
 
@@ -184,7 +184,7 @@ to it and it requires more, which will cause the pod to constantly
 restart.
 
 To solve, review the requests and limits documentation to understand how to modify
-your deployment accordingly. For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/%22%20/l%20%22requests-and-limits).
+your deployment accordingly. For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits).
 
 `kubectl set resources deployment <deployment-name> --limits=memory=<LIMITS>Mi --requests=memory=<MEMORY>Mi`
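The `kubectl set resources` command shown above is equivalent to setting the `resources` fields on the container spec directly. A minimal sketch of the corresponding Deployment manifest, with placeholder names, image, and sizes that are not part of the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.25      # placeholder image
        resources:
          requests:
            memory: "256Mi"    # the scheduler reserves this much memory for the pod
          limits:
            memory: "512Mi"    # the container is OOMKilled if it exceeds this
```

Setting the limit above the request gives the pod headroom before the kernel OOM-kills it; a limit equal to a too-low request is a common cause of the restart loop described in Cause 2.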

support/azure/azure-kubernetes/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -177,7 +177,7 @@
 - name: Troubleshoot pod scheduler errors
   href: availability-performance/troubleshoot-pod-scheduler-errors.md
 - name: Troubleshoot OOMkilled in AKS clusters
-  href: availability-performance/troubleshoot-oomkilled-in-aks-clusters.md
+  href: availability-performance/troubleshoot-oomkilled-aks-clusters.md
 - name: Troubleshoot node not ready
   items:
   - name: Basic troubleshooting
