
Commit 9de2f4c

Merge pull request #276500 from lorbichara/azurediskupdates
Minor structure updates to the Azure Disks article & prereqs of ACStor
2 parents e4fb90e + 1b711ef commit 9de2f4c

File tree

- articles/storage/container-storage/use-container-storage-with-elastic-san.md
- articles/storage/container-storage/use-container-storage-with-local-disk.md
- articles/storage/container-storage/use-container-storage-with-managed-disks.md
- includes/container-storage-prerequisites.md

4 files changed: +117 -126 lines


articles/storage/container-storage/use-container-storage-with-elastic-san.md

Lines changed: 0 additions & 2 deletions
@@ -17,8 +17,6 @@ ms.custom: references_regions
 
 [!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]
 
-- If you haven't already installed Azure Container Storage, follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md).
-
 - Ensure your subscription has [Azure role-based access control (Azure RBAC) Owner](../../role-based-access-control/built-in-roles/general.md#owner) role. For Azure Container Storage to successfully communicate with Elastic SAN's API, it needs special permissions that the Owner role will grant.
 
 > [!NOTE]
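The remaining prerequisite above requires the Owner role on the subscription. One way to confirm it is to query role assignments with the Azure CLI; a minimal sketch, where `<user-or-principal-id>` and `<subscription-id>` are placeholders rather than values from this commit:

```azurecli-interactive
# Sketch: list Owner assignments for a given principal at subscription scope.
# <user-or-principal-id> and <subscription-id> are placeholders.
az role assignment list \
  --assignee <user-or-principal-id> \
  --role Owner \
  --scope /subscriptions/<subscription-id> \
  --output table
```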

articles/storage/container-storage/use-container-storage-with-local-disk.md

Lines changed: 0 additions & 4 deletions
@@ -20,10 +20,6 @@ ms.custom: references_regions
 
 [!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]
 
-- If you haven't already installed Azure Container Storage, follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md).
-
-- Check if your target region is supported in [Azure Container Storage regions](container-storage-introduction.md#regional-availability).
-
 ## Choose a VM type that supports Ephemeral Disk
 
 Ephemeral Disk is only available in certain types of VMs. If you plan to use Ephemeral Disk with local NVMe, a [storage optimized VM](../../virtual-machines/sizes-storage.md) such as **standard_l8s_v3** is required. If you plan to use Ephemeral Disk with temp SSD, a [Ev3 and Esv3-series VM](../../virtual-machines/ev3-esv3-series.md) is required.
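For the VM-size requirement in that context line, adding a storage-optimized node pool is a single CLI command. A minimal sketch, assuming an existing cluster; the resource group, cluster, and pool names are placeholders:

```azurecli-interactive
# Sketch: add a storage-optimized node pool (local NVMe) for Ephemeral Disk.
# <resource-group>, <cluster-name>, and the pool name are placeholders.
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name nvmepool \
  --node-vm-size standard_l8s_v3 \
  --node-count 3
```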

articles/storage/container-storage/use-container-storage-with-managed-disks.md

Lines changed: 113 additions & 120 deletions
@@ -17,25 +17,18 @@ ms.custom: references_regions
 
 [!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]
 
-- If you haven't already installed Azure Container Storage, follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md).
-
 > [!NOTE]
 > To use Azure Container Storage with Azure managed disks, your AKS cluster should have a node pool of at least three [general purpose VMs](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs).
 
-## Regional availability
-
-[!INCLUDE [container-storage-regions](../../../includes/container-storage-regions.md)]
-
 ## Create a storage pool
 
 First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file.
 
-If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes). If you have Azure managed disks that are already provisioned, you can [create a pre-provisioned storage pool](#create-a-pre-provisioned-storage-pool) using those disks.
+If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
 
-> [!IMPORTANT]
-> If you want to use your own keys to encrypt your volumes instead of using Microsoft-managed keys, don't create your storage pool using the steps in this section. Instead, go to [Enable server-side encryption with customer-managed keys](#enable-server-side-encryption-with-customer-managed-keys) and follow the steps there.
-
-Follow these steps to create a storage pool for Azure Disks.
+Follow these steps to create a storage pool for Azure Disks. You can also:
+- [Create a storage pool with a pre-provisioned Azure managed disk](#create-a-pre-provisioned-storage-pool)
+- [Create a storage pool that has server-side encryption with customer managed keys enabled](#enable-server-side-encryption-with-customer-managed-keys)
 
 1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
 
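Step 1 in the hunk above references `acstor-storagepool.yaml`, but the manifest body sits outside this diff. For orientation, a minimal sketch of what such an Azure Disks storage pool manifest might look like, assuming the `containerstorage.azure.com/v1` `StoragePool` resource shape; the name, SKU, and capacity below are illustrative:

```yml
# Sketch only: assumed StoragePool shape; the manifest itself is not part of this diff.
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: azuredisk          # illustrative; would yield a storage class named acstor-azuredisk
  namespace: acstor
spec:
  poolType:
    azureDisk:
      skuName: Premium_LRS # assumed SKU; choose the disk SKU you need
  resources:
    requests:
      storage: 1Ti         # illustrative capacity
```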
@@ -76,6 +69,115 @@ Follow these steps to create a storage pool for Azure Disks.
 
 When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. Now you can [display the available storage classes](#display-the-available-storage-classes) and [create a persistent volume claim](#create-a-persistent-volume-claim).
 
+## Display the available storage classes
+
+When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes.
+
+Run `kubectl get sc` to display the available storage classes. You should see a storage class called `acstor-<storage-pool-name>`.
+
+> [!IMPORTANT]
+> Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work.
+
+## Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
+
+1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
+
+   ```yml
+   apiVersion: v1
+   kind: PersistentVolumeClaim
+   metadata:
+     name: azurediskpvc
+   spec:
+     accessModes:
+       - ReadWriteOnce
+     storageClassName: acstor-azuredisk # replace with the name of your storage class if different
+     resources:
+       requests:
+         storage: 100Gi
+   ```
+
+1. Apply the YAML manifest file to create the PVC.
+
+   ```azurecli-interactive
+   kubectl apply -f acstor-pvc.yaml
+   ```
+
+   You should see output similar to:
+
+   ```output
+   persistentvolumeclaim/azurediskpvc created
+   ```
+
+   You can verify the status of the PVC by running the following command:
+
+   ```azurecli-interactive
+   kubectl describe pvc azurediskpvc
+   ```
+
+   Once the PVC is created, it's ready for use by a pod.
+
+## Deploy a pod and attach a persistent volume
+
+Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.
+
+1. Paste in the following code and save the file.
+
+   ```yml
+   kind: Pod
+   apiVersion: v1
+   metadata:
+     name: fiopod
+   spec:
+     nodeSelector:
+       acstor.azure.com/io-engine: acstor
+     volumes:
+       - name: azurediskpv
+         persistentVolumeClaim:
+           claimName: azurediskpvc
+     containers:
+       - name: fio
+         image: nixery.dev/shell/fio
+         args:
+           - sleep
+           - "1000000"
+         volumeMounts:
+           - mountPath: "/volume"
+             name: azurediskpv
+   ```
+
+1. Apply the YAML manifest file to deploy the pod.
+
+   ```azurecli-interactive
+   kubectl apply -f acstor-pod.yaml
+   ```
+
+   You should see output similar to the following:
+
+   ```output
+   pod/fiopod created
+   ```
+
+1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
+
+   ```azurecli-interactive
+   kubectl describe pod fiopod
+   kubectl describe pvc azurediskpvc
+   ```
+
+1. Check fio testing to see its current status:
+
+   ```azurecli-interactive
+   kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
+   ```
+
+You've now deployed a pod that's using Azure Disks as its storage, and you can use it for your Kubernetes workloads.
+
 ## Create a pre-provisioned storage pool
 
 If you have Azure managed disks that are already provisioned, you can create a pre-provisioned storage pool using those disks. Because the disks are already provisioned, you don't need to specify the skuName or storage capacity when creating the storage pool.
@@ -186,115 +288,6 @@ Follow these steps to create a storage pool using your own encryption key. All p
 
 When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`.
 
-## Display the available storage classes
-
-When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes.
-
-Run `kubectl get sc` to display the available storage classes. You should see a storage class called `acstor-<storage-pool-name>`.
-
-> [!IMPORTANT]
-> Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work.
-
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class.
-
-1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`.
-
-1. Paste in the following code and save the file. The PVC `name` value can be whatever you want.
-
-   ```yml
-   apiVersion: v1
-   kind: PersistentVolumeClaim
-   metadata:
-     name: azurediskpvc
-   spec:
-     accessModes:
-       - ReadWriteOnce
-     storageClassName: acstor-azuredisk # replace with the name of your storage class if different
-     resources:
-       requests:
-         storage: 100Gi
-   ```
-
-1. Apply the YAML manifest file to create the PVC.
-
-   ```azurecli-interactive
-   kubectl apply -f acstor-pvc.yaml
-   ```
-
-   You should see output similar to:
-
-   ```output
-   persistentvolumeclaim/azurediskpvc created
-   ```
-
-   You can verify the status of the PVC by running the following command:
-
-   ```azurecli-interactive
-   kubectl describe pvc azurediskpvc
-   ```
-
-   Once the PVC is created, it's ready for use by a pod.
-
-## Deploy a pod and attach a persistent volume
-
-Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim.
-
-1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`.
-
-1. Paste in the following code and save the file.
-
-   ```yml
-   kind: Pod
-   apiVersion: v1
-   metadata:
-     name: fiopod
-   spec:
-     nodeSelector:
-       acstor.azure.com/io-engine: acstor
-     volumes:
-       - name: azurediskpv
-         persistentVolumeClaim:
-           claimName: azurediskpvc
-     containers:
-       - name: fio
-         image: nixery.dev/shell/fio
-         args:
-           - sleep
-           - "1000000"
-         volumeMounts:
-           - mountPath: "/volume"
-             name: azurediskpv
-   ```
-
-1. Apply the YAML manifest file to deploy the pod.
-
-   ```azurecli-interactive
-   kubectl apply -f acstor-pod.yaml
-   ```
-
-   You should see output similar to the following:
-
-   ```output
-   pod/fiopod created
-   ```
-
-1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod:
-
-   ```azurecli-interactive
-   kubectl describe pod fiopod
-   kubectl describe pvc azurediskpvc
-   ```
-
-1. Check fio testing to see its current status:
-
-   ```azurecli-interactive
-   kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60
-   ```
-
-You've now deployed a pod that's using Azure Disks as its storage, and you can use it for your Kubernetes workloads.
-
 ## Detach and reattach a persistent volume
 
 To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**.
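In command form, the detach-and-reattach flow described in that context line might look like the following sketch, reusing the `fiopod` and `acstor-pod.yaml` names from the sections above:

```azurecli-interactive
# Detach the persistent volume by deleting the pod it's attached to.
kubectl delete pod fiopod

# Reattach it by redeploying a pod that references the same PVC.
kubectl apply -f acstor-pod.yaml
```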

includes/container-storage-prerequisites.md

Lines changed: 4 additions & 0 deletions
@@ -11,3 +11,7 @@ ms.author: kendownie
 - This article requires the latest version (2.35.0 or later) of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using the Bash environment in Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. For more information, see [Get started with Azure Cloud Shell](/azure/cloud-shell/get-started).
 
 - You'll need the Kubernetes command-line client, `kubectl`. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the `az aks install-cli` command.
+
+- If you haven't already installed Azure Container Storage, follow the instructions in [Install Azure Container Storage](../articles/storage/container-storage/container-storage-aks-quickstart.md).
+
+- Check if your target region is supported in [Azure Container Storage regions](../articles/storage/container-storage/container-storage-introduction.md#regional-availability).
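The new install prerequisite points to the quickstart; enabling Azure Container Storage is typically a single parameter on `az aks create` or `az aks update`. A minimal sketch, assuming the `--enable-azure-container-storage` parameter described in that quickstart, with placeholder resource names:

```azurecli-interactive
# Sketch: enable Azure Container Storage with the Azure Disks pool type on an existing cluster.
# <resource-group> and <cluster-name> are placeholders.
az aks update \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --enable-azure-container-storage azureDisk
```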
