
Commit abc10c4

Merge pull request #270410 from lorbichara/docs-editor/troubleshoot-container-storage-1711569955

Update troubleshoot-container-storage.md

2 parents (c5c3642 + 3dcc8ed), commit abc10c4

File tree: 2 files changed, +146 −69 lines


articles/storage/container-storage/troubleshoot-container-storage.md

Lines changed: 9 additions & 2 deletions
@@ -28,7 +28,7 @@ az aks update -n <cluster-name> -g <resource-group> --enable-azure-container-sto
 
 ### Can't set storage pool type to NVMe
 
-If you try to install Azure Container Storage with ephemeral disk, specifically with local NVMe on a cluster where the virtual machine (VM) SKU doesn't have NVMe drives, you get the following error message: *Cannot set --storage-pool-option as NVMe as none of the node pools can support ephemeral NVMe disk*.
+If you try to install Azure Container Storage with Ephemeral Disk, specifically with local NVMe on a cluster where the virtual machine (VM) SKU doesn't have NVMe drives, you get the following error message: *Cannot set --storage-pool-option as NVMe as none of the node pools can support ephemeral NVMe disk*.
 
 To remediate, create a node pool with a VM SKU that has NVMe drives and try again. See [storage optimized VMs](../../virtual-machines/sizes-storage.md).
@@ -42,7 +42,7 @@ If you're trying to create an Elastic SAN storage pool, you might see the messag
 
 ### No block devices found
 
-If you see this message, you're likely trying to create an ephemeral disk storage pool on a cluster where the VM SKU doesn't have NVMe drives.
+If you see this message, you're likely trying to create an Ephemeral Disk storage pool on a cluster where the VM SKU doesn't have NVMe drives.
 
 To remediate, create a node pool with a VM SKU that has NVMe drives and try again. See [storage optimized VMs](../../virtual-machines/sizes-storage.md).
@@ -64,6 +64,13 @@ If you created an Elastic SAN storage pool, you might not be able to delete the
 
 To resolve this, sign in to the [Azure portal](https://portal.azure.com?azure-portal=true) and select **Resource groups**. Locate the resource group that AKS created (the resource group name starts with **MC_**). Select the SAN resource object within that resource group. Manually remove all volumes and volume groups. Then retry deleting the resource group that includes your AKS cluster.
 
+## Troubleshoot persistent volume issues
+
+### Can't create persistent volumes from ephemeral disk storage pools
+
+Because ephemeral disks (local NVMe and Temp SSD) are ephemeral and not durable, we enforce the use of [Kubernetes Generic Ephemeral Volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes). If you try to create a persistent volume claim using an ephemeral disk pool, you'll see the following error: *Error from server (Forbidden): error when creating "eph-pvc.yaml": admission webhook "pvc.acstor.azure.com" denied the request: only generic ephemeral volumes are allowed in unreplicated ephemeralDisk storage pools*.
+
+If you need a persistent volume, where the volume has a lifecycle independent of any individual pod that's using the volume, Azure Container Storage supports replication for NVMe. You can create a storage pool with replication and create persistent volumes from there. See [Create storage pool with volume replication](use-container-storage-with-local-disk.md#optional-create-storage-pool-with-volume-replication-nvme-only) for guidance. Note that because ephemeral disk storage pools consume all the available NVMe disks, you must delete any existing ephemeral disk storage pools before creating a new storage pool with replication enabled. If you don't need persistence, you can create a generic ephemeral volume.
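To make the admission-webhook behavior concrete, the following is a minimal sketch of a pod that satisfies the generic-ephemeral-volume requirement. The pod and volume names are illustrative, and the storage class should be replaced with your own:

```yml
kind: Pod
apiVersion: v1
metadata:
  name: scratchpod        # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "1000000"]
      volumeMounts:
        - mountPath: "/scratch"
          name: scratchvol
  volumes:
    - name: scratchvol
      ephemeral:          # generic ephemeral volume, not a standalone PVC
        volumeClaimTemplate:
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "acstor-ephemeraldisk-nvme"  # replace with your storage class
            resources:
              requests:
                storage: 1Gi
```

The claim is created and deleted together with the pod, which is why the webhook allows it on an unreplicated ephemeral disk pool.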
 
 ## See also
 
 - [Azure Container Storage FAQ](container-storage-faq.md)
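For reference, a replicated NVMe storage pool of the kind the new troubleshooting section recommends might look like the sketch below. The `containerstorage.azure.com/v1` API group and the `poolType.ephemeralDisk.replicas` field layout are assumptions about the Azure Container Storage preview CRD; treat the linked how-to article as authoritative:

```yml
apiVersion: containerstorage.azure.com/v1   # assumed CRD group/version
kind: StoragePool
metadata:
  name: ephemeraldisk-nvme   # pool name can be anything
  namespace: acstor
spec:
  poolType:
    ephemeralDisk:
      diskType: nvme
      replicas: 3            # 3 or 5; requires at least that many nodes
```

Remember to delete any existing unreplicated NVMe storage pools first, since they consume all available NVMe disks.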

articles/storage/container-storage/use-container-storage-with-local-disk.md

Lines changed: 137 additions & 67 deletions
@@ -14,7 +14,7 @@ ms.custom: references_regions
 [Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Ephemeral Disk as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using either local NVMe or temp SSD as its storage.
 
 > [!IMPORTANT]
-> Local disks are ephemeral, meaning that they're created on the local virtual machine (VM) storage and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM.
+> Local disks are ephemeral, meaning that they're created on the local virtual machine (VM) storage and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM. You can only create [Kubernetes generic ephemeral volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes) from an Ephemeral Disk storage pool. If you want to create a persistent volume, you have to enable [replication for your storage pool](#optional-create-storage-pool-with-volume-replication-nvme-only).
 
 ## Prerequisites

@@ -129,56 +129,13 @@ Run `kubectl get sc` to display the available storage classes. You should see a
 > [!IMPORTANT]
 > Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work.
 
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
-
-1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
-
-1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
-
-   ```yml
-   apiVersion: v1
-   kind: PersistentVolumeClaim
-   metadata:
-     name: ephemeralpvc
-   spec:
-     accessModes:
-       - ReadWriteOnce
-     storageClassName: acstor-ephemeraldisk # replace with the name of your storage class if different
-     resources:
-       requests:
-         storage: 100Gi
-   ```
-
-1. Apply the YAML manifest file to create the PVC.
-
-   ```azurecli-interactive
-   kubectl apply -f acstor-pvc.yaml
-   ```
-
-   You should see output similar to:
-
-   ```output
-   persistentvolumeclaim/ephemeralpvc created
-   ```
-
-   You can verify the status of the PVC by running the following command:
-
-   ```azurecli-interactive
-   kubectl describe pvc ephemeralpvc
-   ```
-
-   Once the PVC is created, it's ready for use by a pod.
-
-## Deploy a pod and attach a persistent volume
-
-Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
+## Deploy a pod with a generic ephemeral volume
+
+Create a pod that uses a generic ephemeral volume, using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation.
 
 1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.
 
 1. Paste in the following code and save the file.
-
    ```yml
    kind: Pod
    apiVersion: v1
@@ -187,10 +144,6 @@ Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
    spec:
      nodeSelector:
        acstor.azure.com/io-engine: acstor
-     volumes:
-       - name: ephemeralpv
-         persistentVolumeClaim:
-           claimName: ephemeralpvc
      containers:
        - name: fio
          image: nixery.dev/shell/fio
@@ -199,7 +152,20 @@ Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
            - "1000000"
          volumeMounts:
            - mountPath: "/volume"
-             name: ephemeralpv
+             name: ephemeralvolume
+     volumes:
+       - name: ephemeralvolume
+         ephemeral:
+           volumeClaimTemplate:
+             metadata:
+               labels:
+                 type: my-ephemeral-volume
+             spec:
+               accessModes: [ "ReadWriteOnce" ]
+               storageClassName: "acstor-ephemeraldisk-nvme" # replace with the name of your storage class if different
+               resources:
+                 requests:
+                   storage: 1Gi
    ```
 
 1. Apply the YAML manifest file to deploy the pod.
@@ -214,11 +180,11 @@ Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
    pod/fiopod created
    ```
 
-1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
+1. Check that the pod is running and that the ephemeral volume claim has been bound successfully to the pod:
 
    ```azurecli-interactive
    kubectl describe pod fiopod
-   kubectl describe pvc ephemeralpvc
+   kubectl describe pvc fiopod-ephemeralvolume
    ```
 
 1. Check fio testing to see its current status:
@@ -229,18 +195,6 @@ Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
 
 You've now deployed a pod that's using Ephemeral Disk as its storage, and you can use it for your Kubernetes workloads.
 
-## Detach and reattach a persistent volume
-
-To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**.
-
-```azurecli-interactive
-kubectl delete pods <pod-name>
-```
-
-To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume).
-
-To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`.
-
 ## Expand a storage pool
 
 You can expand storage pools backed by local NVMe or temp SSD to scale up quickly and without downtime. Shrinking storage pools isn't currently supported.
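The expansion flow itself isn't shown in this hunk. One plausible sketch, under the assumption that the storage pool's requested capacity lives at `spec.resources.requests.storage` in the StoragePool resource (the field path and `<storage-pool-name>` are assumptions, not confirmed by this diff):

```shell
# Grow the storage pool in place (shrinking isn't supported).
# <storage-pool-name> is a placeholder; the JSON path below is an
# assumption about the StoragePool CRD schema - verify before use.
kubectl patch sp <storage-pool-name> -n acstor --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'

# Confirm the new capacity.
kubectl describe sp <storage-pool-name> -n acstor
```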
@@ -267,14 +221,18 @@ kubectl delete sp -n acstor <storage-pool-name>
 
 ## Optional: Create storage pool with volume replication (NVMe only)
 
-Applications that use local NVMe can leverage storage replication for improved resiliency. Replication isn't currently supported for local SSD.
+Applications that use local NVMe can leverage storage replication for improved resiliency. Replication isn't currently supported for temp SSD.
 
 Azure Container Storage currently supports three-replica and five-replica configurations. If you specify three replicas, you must have at least three nodes in your AKS cluster. If you specify five replicas, you must have at least five nodes.
 
 Follow these steps to create a storage pool using local NVMe with replication.
 
+> [!NOTE]
+> Because Ephemeral Disk storage pools consume all the available NVMe disks, you must delete any existing Ephemeral Disk local NVMe storage pools before creating a new storage pool with replication.
+
 1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
 
 1. Paste in the following code and save the file. The storage pool **name** value can be whatever you want. Set replicas to 3 or 5.
 
    ```yml
@@ -308,7 +266,119 @@ Follow these steps to create a storage pool using local NVMe with replication.
    kubectl describe sp <storage-pool-name> -n acstor
    ```
 
-When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. Now you can [display the available storage classes](#display-the-available-storage-classes) and [create a persistent volume claim](#create-a-persistent-volume-claim).
+When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. Now you can [display the available storage classes](#display-the-available-storage-classes) and create a persistent volume claim.
+
+## Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
+
+1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
+
+   ```yml
+   apiVersion: v1
+   kind: PersistentVolumeClaim
+   metadata:
+     name: ephemeralpvc
+   spec:
+     accessModes:
+       - ReadWriteOnce
+     storageClassName: acstor-ephemeraldisk-nvme # replace with the name of your storage class if different
+     resources:
+       requests:
+         storage: 100Gi
+   ```
+
+1. Apply the YAML manifest file to create the PVC.
+
+   ```azurecli-interactive
+   kubectl apply -f acstor-pvc.yaml
+   ```
+
+   You should see output similar to:
+
+   ```output
+   persistentvolumeclaim/ephemeralpvc created
+   ```
+
+   You can verify the status of the PVC by running the following command:
+
+   ```azurecli-interactive
+   kubectl describe pvc ephemeralpvc
+   ```
+
+   Once the PVC is created, it's ready for use by a pod.
+
+## Deploy a pod and attach a persistent volume
+
+Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.
+
+1. Paste in the following code and save the file.
+
+   ```yml
+   kind: Pod
+   apiVersion: v1
+   metadata:
+     name: fiopod
+   spec:
+     nodeSelector:
+       acstor.azure.com/io-engine: acstor
+     volumes:
+       - name: ephemeralpv
+         persistentVolumeClaim:
+           claimName: ephemeralpvc
+     containers:
+       - name: fio
+         image: nixery.dev/shell/fio
+         args:
+           - sleep
+           - "1000000"
+         volumeMounts:
+           - mountPath: "/volume"
+             name: ephemeralpv
+   ```
+
+1. Apply the YAML manifest file to deploy the pod.
+
+   ```azurecli-interactive
+   kubectl apply -f acstor-pod.yaml
+   ```
+
+   You should see output similar to the following:
+
+   ```output
+   pod/fiopod created
+   ```
+
+1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
+
+   ```azurecli-interactive
+   kubectl describe pod fiopod
+   kubectl describe pvc ephemeralpvc
+   ```
+
+1. Check fio testing to see its current status:
+
+   ```azurecli-interactive
+   kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
+   ```
+
+You've now deployed a pod that's using Ephemeral Disk as its storage, and you can use it for your Kubernetes workloads.
+
+## Detach and reattach a persistent volume
+
+To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**.
+
+```azurecli-interactive
+kubectl delete pods <pod-name>
+```
+
+To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume).
+
+To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`.
 
 ## See also
