azure-local/plan/rack-aware-clustering-overview.md (+9 -9 lines changed)
@@ -1,6 +1,6 @@
 ---
 title: Overview of Azure Local Rack Aware Clustering
-description: Use this topic to learn about Azure Local Rack Aware Clustering.
+description: Use this article to learn about Azure Local Rack Aware Clustering.
 author: alkohli
 ms.author: alkohli
 ms.topic: overview
@@ -36,13 +36,13 @@ For detailed networking requirements, see [Rack Aware Clustering network design]
 The main benefits of Azure Local Rack Aware Cluster include:

-- **High availability**: By distributing data across two or more racks, the system can withstand the failure of an entire rack without losing access to data. This is particularly important for mission-critical applications that require continuous uptime.
+- **High availability**: By distributing data across two or more racks, the system can withstand the failure of an entire rack without losing access to data. This is important for mission-critical applications that require continuous uptime.

 - **Improved performance**: The architecture allows for better load balancing and resource utilization, as workloads can be distributed across multiple racks.

 - **Simplified management**: Azure Local Rack Aware Clustering provides a unified management interface for both racks, making it easier to monitor and maintain the system.

-- **Cost-effective**: By leveraging existing infrastructure and providing a single storage pool, organizations can reduce costs associated with hardware and maintenance.
+- **Cost-effective**: By using existing infrastructure and providing a single storage pool, organizations can reduce costs associated with hardware and maintenance.

 ## Use cases
@@ -60,7 +60,7 @@ All [system requirements for Azure Local](../concepts/system-requirements-23h2.m
 Other requirements for rack aware clusters include:

-- **Drive requirements**: Data drives must be all-flash, either non volatile memory express (NVMe) or solid-state drives (SSD).
+- **Drive requirements**: Data drives must be all-flash. Either non volatile memory express (NVMe) or solid-state drives (SSD) work.

 - **Availability zone requirements**: Rack aware cluster supports only two local availability zones, with a maximum of four machines in each zone. The two zones must contain an equal number of machines.
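
The availability zone constraints above lend themselves to a quick programmatic check. The following is a minimal sketch, assuming a layout described as a list of per-zone machine counts; the function name, parameters, and messages are illustrative and not part of Azure Local or its tooling.

```python
# Minimal sketch of the availability zone constraints described above:
# exactly two local availability zones, at most four machines per zone,
# and an equal number of machines in each zone.
# validate_zone_layout and its messages are illustrative, not product APIs.

def validate_zone_layout(zone_sizes: list[int]) -> list[str]:
    """Return a list of problems found in a proposed rack aware zone layout."""
    problems = []
    if len(zone_sizes) != 2:
        problems.append("Rack aware clustering uses exactly two availability zones.")
    if any(size < 1 or size > 4 for size in zone_sizes):
        problems.append("Each zone must contain between one and four machines.")
    if len(set(zone_sizes)) > 1:
        problems.append("Both zones must contain an equal number of machines.")
    return problems

print(validate_zone_layout([2, 2]))  # [] -> a valid 2+2 layout
print(validate_zone_layout([3, 2]))  # unequal zones are flagged
```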
@@ -69,18 +69,18 @@ Other requirements for rack aware clusters include:
 - The following table summarizes the supported configurations with volume resiliency settings.

 | Machines in two zones | Volume resiliency | Infra volumes | Workload volumes |

-- Only the new deployments are supported. Conversion from existing standard deployments to rack aware clusters is not supported.
+- Only the new deployments are supported. Conversion from existing standard deployments to rack aware clusters isn't supported.

 - **Replication requirements**: To facilitate synchronous replications between racks, a dedicated storage network is essential to ensure adequate bandwidth and low latency for storage traffic. The round-trip latency requirement between two racks should be 1 millisecond or less. The necessary bandwidth can be calculated based on the cluster size and the network interface card (NIC) speed as follows:

 | Machines in zone | NIC speed | Storage ports | Bandwidth required |
-| -- | -- | -- | -- |
+|--| -- |--| -- |
 | 1 | 10 | 2 | 20 GbE |
 | 2 | 10 | 2 | 40 GbE |
 | 3 | 10 | 2 | 60 GbE |
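
The rows in the bandwidth table above follow a simple pattern: machines per zone multiplied by the NIC speed and by the number of storage ports. The sketch below reproduces that calculation; the function name and default values are assumptions made for illustration, not an official sizing tool.

```python
# Minimal sketch, assuming the pattern visible in the bandwidth table above:
# required inter-rack bandwidth (GbE) =
#     machines per zone x NIC speed (GbE) x storage ports per machine.
# required_bandwidth_gbe and its defaults are illustrative only.

def required_bandwidth_gbe(machines_per_zone: int,
                           nic_speed_gbe: int = 10,
                           storage_ports: int = 2) -> int:
    """Estimate the storage-network bandwidth needed for synchronous replication."""
    return machines_per_zone * nic_speed_gbe * storage_ports

for machines in (1, 2, 3):
    print(f"{machines} machine(s) per zone, 10 GbE NICs, 2 storage ports -> "
          f"{required_bandwidth_gbe(machines)} GbE")
```

Running the sketch prints 20, 40, and 60 GbE, matching the table rows shown above.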
@@ -98,8 +98,8 @@ Storage Spaces Direct is used to create a single storage pool that aggregates th
 - Only two-way mirror volumes are supported. Three-way mirror volumes aren't supported.

-- For a *1+1* configuration, two volumes are created — one on each machine — with a two-way mirror that respects the rack fault domain, ensuring two copies of data are available in the cluster, one in each rack.
-- In a *2+2* configuration, four volumes are created — one on each machine — with a two-way mirror that also respects the rack fault domain, providing one copy of data in each rack.
+- For a *1+1* configuration, two volumes are created—one on each machine—with a two-way mirror that respects the rack fault domain, ensuring two copies of data are available in the cluster, one in each rack.
+- In a *2+2* configuration, four volumes are created—one on each machine—with a two-way mirror that also respects the rack fault domain, providing one copy of data in each rack.
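
As a rough model of the *1+1* and *2+2* layouts described above, the sketch below creates one volume per machine and records a two-way mirror with one data copy in each rack. The node and volume names and the dictionary layout are hypothetical; they are not Storage Spaces Direct APIs or naming conventions.

```python
# Minimal sketch of the volume layout described above, assuming one workload
# volume per machine and a two-way mirror that keeps one data copy in each
# rack fault domain. All names and fields here are hypothetical.

def volume_layout(machines_per_zone: int) -> list[dict]:
    """Model the volumes created for an N+N rack aware configuration."""
    nodes = {f"node{i + 1}": ("Rack1" if i < machines_per_zone else "Rack2")
             for i in range(2 * machines_per_zone)}
    return [
        {
            "volume": f"volume-{node}",
            "owner_node": node,
            "owner_rack": rack,
            "resiliency": "two-way mirror",
            # The mirror respects the rack fault domain: one copy per rack.
            "data_copies": ["Rack1", "Rack2"],
        }
        for node, rack in nodes.items()
    ]

# 1+1 -> two volumes; 2+2 -> four volumes, each mirrored across both racks.
for vol in volume_layout(2):
    print(vol)
```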