---
title: Troubleshooting Nexus Kubernetes Cluster pods stuck in ContainerCreating status
description: Troubleshooting Nexus Kubernetes Cluster pods stuck in ContainerCreating status
ms.service: azure-operator-nexus
ms.custom: troubleshooting
ms.topic: troubleshooting
ms.date: 08/12/2024
ms.author: hbusipalle
author: hem2
---
# Troubleshooting Nexus Kubernetes Cluster pods stuck in ContainerCreating status
This guide provides detailed steps for troubleshooting Nexus Kubernetes Cluster pods that are stuck in the `ContainerCreating` status.

## Prerequisites

* Command line access to the Nexus Kubernetes Cluster is required
* Necessary permissions to make changes to the Nexus Kubernetes Cluster objects
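
A quick way to check both prerequisites is to run a read command against the cluster and ask the API server whether you're allowed to modify workloads. These commands are only a sketch; the exact resources and verbs you need depend on the objects you plan to change.

```console
kubectl get nodes
kubectl auth can-i update statefulsets --all-namespaces
```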

## Symptoms

In environments operating at scale, there are rare instances where pods that use Persistent Volume Claims (PVCs) from the `nexus-volume` storage class might become stuck in the `ContainerCreating` status.
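
To find the affected pods, you can list pods in all namespaces and filter on the status column; the filter string here is only an example.

```console
kubectl get pods --all-namespaces | grep ContainerCreating
```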

Verify if the pod is experiencing the described error by inspecting its details and reviewing its events.

```console
kubectl describe pod <pod-name>
```

```console
Events:
  Type     Reason              Age                From                     Message
  ----     ------              ----               ----                     -------
  Warning  FailedAttachVolume  13s (x6 over 31s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-561a2f5b-f673-4f6c-aa4d-34dbc4a6224e" : rpc error: code = Internal desc = failed to handle ControllerPublishVolume
```
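
You can also inspect the volume reported in the error directly. The following commands are illustrative; they assume the persistent volume name shown in the events above.

```console
kubectl get pvc --all-namespaces | grep pvc-561a2f5b-f673-4f6c-aa4d-34dbc4a6224e
kubectl describe pv pvc-561a2f5b-f673-4f6c-aa4d-34dbc4a6224e
```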

## Solution
To address this issue, the following workaround can be applied to the affected pods.

### Steps to Resolve
1. Identify the StatefulSet whose pods are stuck in the `ContainerCreating` status.
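
    One way to find the owning StatefulSet is to read the stuck pod's owner reference; the pod name and namespace below are placeholders.

    ```console
    kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.ownerReferences[0].name}'
    ```
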
2. Scale down the StatefulSet’s replicas to zero:

    ```console
    kubectl scale statefulset <statefulset-name> --replicas=0
    ```
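
    Before moving on, you can confirm that the StatefulSet's pods have terminated. This relies on the convention that StatefulSet pods are named after the StatefulSet, so the grep pattern is only illustrative.

    ```console
    kubectl get pods -n <namespace> | grep <statefulset-name>
    ```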

3. Wait until the persistent volume attachments are fully removed. Volume attachments should clear up quickly, usually within a minute or two. In this example, the persistent volume is named `pvc-561a2f5b-f673-4f6c-aa4d-34dbc4a6224e`. Typically, persistent volumes are named with the prefix `pvc-`, even though they aren't volume claims. You can verify that the attachments are gone by running the following command:

    ```console
    kubectl get volumeattachments | grep -c pvc-561a2f5b-f673-4f6c-aa4d-34dbc4a6224e
    0
    ```
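
    If you don't know the persistent volume name, you can read it from the claim used by the stuck pod; the claim name and namespace below are placeholders.

    ```console
    kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.spec.volumeName}'
    ```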

4. Scale up the StatefulSet to the desired number of replicas:

    ```console
    kubectl scale statefulset <statefulset-name> --replicas=<desired-replica-count>
    ```
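
    Finally, you can watch the pods and confirm that they reach the `Running` state instead of getting stuck in `ContainerCreating` again; the grep pattern is only illustrative.

    ```console
    kubectl get pods -n <namespace> --watch | grep <statefulset-name>
    ```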