Commit efaf619

Editorial tweaks.
1 parent d28fd58 commit efaf619

1 file changed

articles/databox-online/azure-stack-edge-gpu-clustering-overview.md

Lines changed: 7 additions & 9 deletions
@@ -58,13 +58,13 @@ The infrastructure cluster on your device provides persistent storage and is sho
 
 ## Supported network topologies
 
-Based on the use-case and workloads, you can select how the two Azure Stack Edge device nodes will be connected. Network topologies will differ depending on whether you use an Azure Stack Edge Pro GPU device or an Azure Stack Edge Pro 2 device.
+Based on the use case and workloads, you can select how the two Azure Stack Edge device nodes will be connected. Network topologies will differ depending on whether you use an Azure Stack Edge Pro GPU device or an Azure Stack Edge Pro 2 device.
 
 At a high level, supported network topologies for each of the device types are described here.
 
 ### [Azure Stack Edge Pro GPU](#tab/1)
 
-On your Azure Stack Edge Pro GPU device node:
+On your Azure Stack Edge Pro GPU device node:
 
 - Port 2 is used for management traffic.
 - Port 3 and Port 4 are used for storage and cluster traffic. This traffic includes that needed for storage mirroring and Azure Stack Edge cluster heartbeat traffic that is required for the cluster to be online.
@@ -73,23 +73,23 @@ The following network topologies are available:
 
 ![Available network topologies](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-network-topologies.png)
 
-1. **Switchless** - Use this option when you don't have high speed switches available in the environment for storage and cluster traffic.
+- **Option 1 - Switchless** - Use this option when you don't have high speed switches available in the environment for storage and cluster traffic.
 
 In this option, Port 3 and Port 4 are connected back-to-back without a switch. These ports are dedicated to storage and Azure Stack Edge cluster traffic and aren't available for workload traffic. <!--For example, these ports can't be enabled for compute--> Optionally you can also provide IP addresses for these ports.
 
-1. **Using switches and NIC teaming** - Use this option when you have high speed switches available for use with your device nodes for storage and cluster traffic.
+- **Option 2 - Use switches and NIC teaming** - Use this option when you have high speed switches available for use with your device nodes for storage and cluster traffic.
 
 Each of ports 3 and 4 of the two nodes of your device are connected via an external switch. The Port 3 and Port 4 are teamed on each node and a virtual switch and two virtual NICs are created that allow for port-level redundancy for storage and cluster traffic. These ports can be used for workload traffic as well.
 
-1. **Using switches and without NIC teaming** - Use this option when you need an extra dedicated port for workload traffic and port-level redundancy isn’t required for storage and cluster traffic.
+- **Option 3 - Use switches without NIC teaming** - Use this option when you need an extra dedicated port for workload traffic and port-level redundancy isn’t required for storage and cluster traffic.
 
 Port 3 on each node is connected via an external switch. If Port 3 fails, the cluster may go offline. Separate virtual switches are created on Port 3 and Port 4.
 
 For more information, see how to [Choose a network topology for your device node](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-network).
 
 ### [Azure Stack Edge Pro 2](#tab/2)
 
-On your Azure Stack Edge Pro 2 device node, the following network topologies are supported:
+On your Azure Stack Edge Pro 2 device node:
 
 - **Option 1** - Port 1 and Port 2 are in different subnets. Separate virtual switches will be created. Port 3 and Port 4 connect to an external virtual switch.
 
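Taken together, the Pro GPU options in this hunk amount to a small decision tree: no high-speed switches means switchless; switches plus a need for port-level redundancy means NIC teaming; an extra dedicated workload port without a redundancy requirement means separate virtual switches. A minimal Python sketch of that decision logic; the type, function, and field names are illustrative only and not part of the device software:

```python
from dataclasses import dataclass


@dataclass
class Environment:
    """Hypothetical summary of a two-node deployment environment."""
    has_high_speed_switches: bool    # switches available for Port 3/Port 4 traffic
    needs_port_redundancy: bool      # storage/cluster traffic must survive a port failure
    needs_extra_workload_port: bool  # a dedicated extra port is wanted for workloads


def choose_topology(env: Environment) -> str:
    """Mirror the decision logic of the three Pro GPU options above."""
    if not env.has_high_speed_switches:
        # Option 1: Port 3 and Port 4 connect back-to-back; both ports are
        # dedicated to storage and cluster traffic, not workload traffic.
        return "Option 1 - Switchless"
    if env.needs_extra_workload_port and not env.needs_port_redundancy:
        # Option 3: separate virtual switches on Port 3 and Port 4; if Port 3
        # fails, the cluster may go offline.
        return "Option 3 - Use switches without NIC teaming"
    # Option 2: Port 3 and Port 4 are teamed behind one virtual switch, so a
    # single port failure doesn't take down storage and cluster traffic.
    return "Option 2 - Use switches and NIC teaming"


if __name__ == "__main__":
    print(choose_topology(Environment(False, False, False)))  # Option 1 - Switchless
    print(choose_topology(Environment(True, True, False)))    # Option 2 - Use switches and NIC teaming
    print(choose_topology(Environment(True, False, True)))    # Option 3 - Use switches without NIC teaming
```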
@@ -122,7 +122,7 @@ Pros and cons for supported topologies are summarized as follows:
 
 | Local web UI option | Advantages | Disadvantages |
 |---------------------|------------|---------------|
-| Port 3 and Port 4 Switchless, Port 1 and Port 2 in separate subnet, separate virtual switches. | Redundant paths for management and storage traffic. | Clients must reconnect if Port 1 or Port 2 fails. |
+| Port 3 and Port 4 are Switchless, Port 1 and Port 2 in separate subnet, separate virtual switches. | Redundant paths for management and storage traffic. | Clients must reconnect if Port 1 or Port 2 fails. |
 | | No single point of failure within the device. | VM workload can't leverage Port 3 or Port 4 to connect to network endpoints other than a peer Azure Stack Edge node. This is why PMEC workloads can't use this option. |
 | | Lots of bandwidth for storage and cluster traffic across nodes. | |
 | | Can be deployed with Port 1 and Port 2 in different subnets. | |
@@ -217,5 +217,3 @@ If you deploy an Azure Stack Edge two-node cluster, each node is billed separate
 - Learn about [Cluster witness for your Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md).
 - See [Kubernetes for your Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md)
 - Understand [Cluster failover scenarios](azure-stack-edge-gpu-cluster-failover-scenarios.md)
-
-
