articles/databox-online/azure-stack-edge-gpu-clustering-overview.md (7 additions, 9 deletions)
@@ -58,13 +58,13 @@ The infrastructure cluster on your device provides persistent storage and is sho
## Supported network topologies
Based on the use case and workloads, you can select how the two Azure Stack Edge device nodes are connected. Network topologies differ depending on whether you use an Azure Stack Edge Pro GPU device or an Azure Stack Edge Pro 2 device.
At a high level, supported network topologies for each of the device types are described here.
### [Azure Stack Edge Pro GPU](#tab/1)
On your Azure Stack Edge Pro GPU device node:
- Port 2 is used for management traffic.
- Port 3 and Port 4 are used for storage and cluster traffic. This traffic includes storage mirroring and the Azure Stack Edge cluster heartbeat traffic required to keep the cluster online. A brief sketch of these port roles follows this list.
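The following is a minimal, illustrative sketch (hypothetical names, not an Azure Stack Edge API) that captures the per-node port roles described above as a simple data structure, so the topology options below are easier to relate back to the ports involved.

```python
# A minimal sketch (hypothetical names, not an Azure Stack Edge API) of the
# per-node port roles described above on an Azure Stack Edge Pro GPU device.
PRO_GPU_NODE_PORT_ROLES = {
    "Port 2": "management",
    "Port 3": "storage and cluster (storage mirroring, cluster heartbeat)",
    "Port 4": "storage and cluster (storage mirroring, cluster heartbeat)",
}

def ports_with_role(keyword: str) -> list:
    """Return the ports whose role description mentions the given keyword."""
    return [port for port, role in PRO_GPU_NODE_PORT_ROLES.items() if keyword in role]

print(ports_with_role("cluster"))     # ['Port 3', 'Port 4']
print(ports_with_role("management"))  # ['Port 2']
```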
@@ -73,23 +73,23 @@ The following network topologies are available:
- **Option 1 - Switchless** - Use this option when you don't have high-speed switches available in the environment for storage and cluster traffic.
In this option, Port 3 and Port 4 are connected back-to-back without a switch. These ports are dedicated to storage and Azure Stack Edge cluster traffic and aren't available for workload traffic. <!--For example, these ports can't be enabled for compute--> Optionally you can also provide IP addresses for these ports.
- **Option 2 - Use switches and NIC teaming** - Use this option when you have high-speed switches available for use with your device nodes for storage and cluster traffic.
Port 3 and Port 4 on each of the two nodes of your device are connected via an external switch. Port 3 and Port 4 are teamed on each node, and a virtual switch with two virtual NICs is created to provide port-level redundancy for storage and cluster traffic. These ports can be used for workload traffic as well.
- **Option 3 - Use switches without NIC teaming** - Use this option when you need an extra dedicated port for workload traffic and port-level redundancy isn't required for storage and cluster traffic.
Port 3 on each node is connected via an external switch. If Port 3 fails, the cluster may go offline. Separate virtual switches are created on Port 3 and Port 4.
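The three options above come down to a couple of questions about your environment. The following is a minimal decision sketch (a hypothetical helper, not an Azure tool or cmdlet) that encodes only the guidance stated above; the function and parameter names are invented for illustration.

```python
# A minimal decision sketch (hypothetical helper, not an Azure tool) that encodes
# the topology guidance above for an Azure Stack Edge Pro GPU two-node cluster.
def choose_pro_gpu_topology(have_high_speed_switches: bool,
                            need_port_level_redundancy: bool,
                            need_extra_workload_port: bool) -> str:
    """Return the topology option suggested by the guidance above."""
    if not have_high_speed_switches:
        # Option 1: Port 3 and Port 4 connected back-to-back, dedicated to
        # storage and cluster traffic and not available for workloads.
        return "Option 1 - Switchless"
    if need_extra_workload_port and not need_port_level_redundancy:
        # Option 3: separate virtual switches on Port 3 and Port 4; Port 3
        # carries cluster traffic, so a Port 3 failure can take the cluster offline.
        return "Option 3 - Use switches without NIC teaming"
    # Option 2: Port 3 and Port 4 teamed behind an external switch, giving
    # port-level redundancy; the ports can also carry workload traffic.
    return "Option 2 - Use switches and NIC teaming"

print(choose_pro_gpu_topology(have_high_speed_switches=True,
                              need_port_level_redundancy=True,
                              need_extra_workload_port=False))
# Option 2 - Use switches and NIC teaming
```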
For more information, see how to [Choose a network topology for your device node](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-network).
### [Azure Stack Edge Pro 2](#tab/2)
On your Azure Stack Edge Pro 2 device node:
- **Option 1** - Port 1 and Port 2 are in different subnets. Separate virtual switches will be created. Port 3 and Port 4 connect to an external virtual switch.
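As an illustration of the "different subnets" condition that Option 1 relies on, the following short sketch (with hypothetical example addresses) uses Python's standard ipaddress module to check whether two port addresses fall in different subnets.

```python
# Illustrative only: check whether two interface addresses (CIDR notation,
# hypothetical values) are in different subnets, as Option 1 requires.
import ipaddress

def in_different_subnets(ip_a: str, ip_b: str) -> bool:
    """True if the two interfaces, e.g. '10.0.1.5/24', belong to different subnets."""
    return (ipaddress.ip_interface(ip_a).network
            != ipaddress.ip_interface(ip_b).network)

port1 = "10.128.10.5/24"   # hypothetical Port 1 address
port2 = "10.128.20.5/24"   # hypothetical Port 2 address
print(in_different_subnets(port1, port2))  # True: separate virtual switches apply
```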
@@ -122,7 +122,7 @@ Pros and cons for supported topologies are summarized as follows:
| Local web UI option | Advantages | Disadvantages |
|---------------------|------------|---------------|
| Port 3 and Port 4 are Switchless, Port 1 and Port 2 in separate subnets, separate virtual switches. | Redundant paths for management and storage traffic. | Clients must reconnect if Port 1 or Port 2 fails. |
| | No single point of failure within the device. | VM workloads can't use Port 3 or Port 4 to connect to network endpoints other than a peer Azure Stack Edge node. This is why PMEC workloads can't use this option. |
| | Lots of bandwidth for storage and cluster traffic across nodes. | |
| | Can be deployed with Port 1 and Port 2 in different subnets. | |
@@ -217,5 +217,3 @@ If you deploy an Azure Stack Edge two-node cluster, each node is billed separate
- Learn about [Cluster witness for your Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md).
- See [Kubernetes for your Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md).