To check which persistent volume a persistent volume claim is bound to, run:

```azurecli-interactive
kubectl get pvc <persistent-volume-claim-name>
```
## Enable hyperconvergence (optional)

### What is hyperconvergence?

Hyperconvergence in Azure Container Storage enables pods to run on the same host as their corresponding volumes, reducing network overhead and significantly improving read performance.

* For single-replica workloads, hyperconvergence is **always enabled by default** to maximize data locality.
* For multi-replica workloads, hyperconvergence is **optional** and must be explicitly enabled.

When hyperconvergence is enabled for multi-replica volumes, the workload is scheduled on the same host as one of the volume replicas, optimizing data access while still maintaining redundancy.

### Hyperconvergence behavior for non-replicated vs. replicated volumes

Non-replicated NVMe/TempSSD volumes:

* Hyperconvergence is **enabled by default**.
* If no suitable node with a localized disk pool is available, the application pod fails to start due to insufficient resources.
* This strict enforcement prevents an application that consumes a non-replicated volume from running on a different node than the one where its storage is provisioned.

Replicated NVMe/TempSSD volumes:

* Hyperconvergence is **best effort**.
* The scheduler attempts to place the application pod on the same node as one of its volume replicas.
* If no suitable node is available, the pod is still scheduled elsewhere, but read performance may be lower than expected.
### How it works

When hyperconvergence is enabled, Azure Container Storage prioritizes scheduling pods on the nodes where their volume replicas reside:

1. **Default scheduler scoring**: The default Kubernetes scheduler assigns scores to all nodes based on standard parameters like CPU, memory, affinities, and tolerations.
2. **Azure Container Storage node affinity scoring**: Azure Container Storage uses preferred node affinities to influence the scheduler's decision. Each node receives:
    * 1 point if it has a valid disk pool.
    * 1 point if it already hosts a replica of the volume.

    These scores are additive and provide a slight preference for nodes with local volume replicas while respecting other scheduling criteria.
3. **Final scheduling decision**: The Kubernetes scheduler combines the default scores with the Azure Container Storage affinity-based scores. The node with the highest combined score, balancing both Azure Container Storage preferences and default Kubernetes logic, is selected for pod placement.
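The preferred node affinities described above can be pictured with a simplified pod-spec sketch. This is illustrative only: the label keys (`acstor.azure.com/disk-pool`, `acstor.azure.com/volume-replica`) are hypothetical placeholders, not the actual labels Azure Container Storage applies. The point is that `preferredDuringSchedulingIgnoredDuringExecution` weights are additive hints, not hard requirements:

```yaml
# Illustrative sketch of additive preferred node affinity (hypothetical label keys)
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                # +1 point for nodes with a valid disk pool
        preference:
          matchExpressions:
            - key: acstor.azure.com/disk-pool        # hypothetical label
              operator: Exists
      - weight: 1                # +1 point for nodes hosting a replica of the volume
        preference:
          matchExpressions:
            - key: acstor.azure.com/volume-replica   # hypothetical label
              operator: Exists
```

Because these are *preferred* (not *required*) affinities, a node that fails both checks can still be chosen if it wins on the default scheduler's other criteria.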
### When to use hyperconvergence

**Note**: The following considerations apply only to replicated volumes, as non-replicated volumes always use hyperconvergence by default and can't be configured otherwise.

Consider enabling hyperconvergence for replicated volumes when:

* **High read performance is critical**: Keeping workloads and storage replicas on the same node reduces network latency and improves read performance.
* **Data locality can significantly impact performance**: Applications that frequently read from storage benefit from reduced cross-node data transfers.

### When not to use hyperconvergence

**Note**: This section applies only to replicated volumes because hyperconvergence is always enforced for non-replicated volumes.

Hyperconvergence can improve performance by co-locating workloads with their storage, but there are scenarios where it might not be ideal:

* **Potential resource imbalance**: While hyperconvergence itself doesn't limit the number of applications on a node, if multiple workloads create replicas on the same node and that node runs out of resources (CPU, memory, or storage bandwidth), some workloads might not be able to schedule there. As a result, they might end up running **without hyperconvergence**, despite it being enabled.
### Enable hyperconvergence in Azure Container Storage

Hyperconvergence is enabled by default for NVMe and temporary disk storage pools with only one replica. This ensures optimized data locality and improved performance for single-replica configurations. For multi-replica setups, hyperconvergence isn't enabled by default, but you can enable it using the `hyperconverged` parameter in the StoragePool specification.

The following is an example YAML template that enables hyperconvergence for a multi-replica configuration:

```yaml
apiVersion: containerstorage.azure.com/v1
kind: StoragePool
metadata:
  name: nvmedisk
  namespace: acstor
spec:
  poolType:
    ephemeralDisk:
      diskType: "nvme"
      replicas: 3
      hyperconverged: true
```
+
482
+
## Expand a storage pool
398
483
399
484
You can expand storage pools backed by local NVMe to scale up quickly and without downtime. Shrinking storage pools isn't currently supported.
400
485
@@ -410,15 +495,15 @@ Because a storage pool backed by Ephemeral Disk uses local storage resources on
410
495
411
496
1. Run `kubectl get sp -A` and you should see that the capacity of the storage pool has increased.
## Delete a storage pool

If you want to delete a storage pool, run the following command. Replace `<storage-pool-name>` with the storage pool name.

```azurecli-interactive
kubectl delete sp -n acstor <storage-pool-name>
```
## Optimize performance when using local NVMe

Depending on your workload's performance requirements, you can choose from three performance tiers: **Basic**, **Standard**, and **Premium**. These tiers offer different ranges of IOPS, and your selection affects the number of vCPUs that Azure Container Storage components consume on the nodes where it's installed. Standard is the default configuration if you don't update the performance tier.
0 commit comments