Option A illustrates four TOR switches (TOR1 to TOR4). Each machine is equipped with two physical network cards, providing four interfaces in total. One network card carries the management (VLAN 7) and compute (VLAN 8) network intents, forming an aggregated link with Switch Embedded Teaming (SET). The second network card carries the storage intents (VLANs 711 and 712), with each interface tagged with a single VLAN.
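As an illustrative sketch of the two port roles in Option A (interface names are assumptions; the VLAN IDs follow the description above), the TOR-side configuration might look like this on a Nexus-style switch. Because SET does not use LACP, the teamed host ports are configured as independent trunks rather than a port channel:

```
! Management (VLAN 7) and compute (VLAN 8) intents on the SET-teamed NIC
! Note: no port-channel - SET is switch-independent teaming
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 7-8

! Storage intent on the second NIC: one storage VLAN per interface
interface Ethernet1/11
  switchport mode trunk
  switchport trunk allowed vlan 711
```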
> Storage intents are intentionally omitted on certain TOR devices and aren't universally applied. The rack aware cluster follows the standard hyper-converged infrastructure disaggregated network design, where each storage interface carries only one storage intent. This configuration was tested using Multiple Spanning Tree Protocol (MSTP): the storage VLANs are grouped into one spanning tree instance, while the non-storage VLANs are placed in a separate instance. During MSTP configuration, the spine switches had to include the storage instance even though they don't carry those VLANs.
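A minimal sketch of that MSTP grouping on a Cisco-style switch (the instance numbers and region name are assumptions; the VLAN ranges follow the intents above). Every switch in the MST region, including the spines, must carry the same region name, revision, and instance-to-VLAN mapping, which is why the spines had to include the storage instance:

```
spanning-tree mode mst
spanning-tree mst configuration
  name AZLOCAL
  revision 1
  instance 1 vlan 711-712   ! storage VLANs in one instance
  instance 2 vlan 7-8       ! non-storage (management/compute) VLANs in another
```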
The configuration is shown for a storage intent switch interface connected to a storage host NIC; this setup applies to both Option A and Option B. The physical link connects to NIC1 on the host and carries VLAN 711 on TOR1. The switch port is trunked with VLAN 711, and a QoS policy matching the host policy is applied. See the Appendix for the specific QoS settings.
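A hedged sketch of such a storage interface (the interface name, QoS policy name, and jumbo MTU are assumptions; the actual QoS values come from the Appendix):

```
interface Ethernet1/11
  description Storage-Host1-NIC1
  switchport mode trunk
  switchport trunk allowed vlan 711
  priority-flow-control mode on
  service-policy type qos input AZLOCAL-QOS   ! must match the host QoS policy
  mtu 9216
```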
The Software Load Balancing (SLB) solution for Azure Local in a rack aware cluster consists of three network layers: spine, TOR, and SLB. In this setup, the spine is a combined Layer 2/Layer 3 BGP router that peers with the SLBs. The TOR layer is a simple Layer 2 switch that allows BGP sessions to pass from the SLBs to the spines. Each spine is configured with dynamic BGP, allowing multiple SLBs to establish BGP sessions with the spines. The SLBs use the spine loopback IP address as the peering IP; in the spine BGP configuration, the update source is set to loopback 0. When an SLB establishes a BGP session with the spine, it advertises the VIP addresses provided by the Network Controller as host IP addresses.
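The spine side of that SLB peering might be sketched as follows (the SLB subnet and SLB AS number are hypothetical placeholders; the spine AS, dynamic peering, and loopback update source follow the text):

```
router bgp 64512
  ! Dynamic BGP: accept sessions from any SLB in the assumed SLB subnet
  neighbor 10.68.42.0/26 remote-as 65502
    update-source loopback0
    ebgp-multihop 3   ! assumed, since the SLBs peer with the spine loopback
    address-family ipv4 unicast
```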
In a Layer 2 configuration, the AKS load balancer (MetalLB) uses ARP to make its service IP addresses reachable on the network. The switch/router is the primary gateway for the Azure Local system and provides the Layer 3 services to AKS. Because MetalLB answers ARP requests for the service IPs, its IP pool must be in the same subnet as the Kubernetes nodes.
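In MetalLB terms, the Layer 2 mode above is expressed with an `IPAddressPool` drawn from the node subnet plus an `L2Advertisement` (the pool range and resource names here are hypothetical examples):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: compute-intent-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.68.40.20-10.68.40.30   # must be in the same subnet as the Kubernetes nodes
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: compute-intent-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - compute-intent-pool
```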
In a Layer 3 configuration, the switch/router (spine) acts as a BGP router that the Azure Local AKS MetalLB service peers with over BGP. The spine enables dynamic BGP with the compute intent subnet 10.68.40.0/26 as the peering range, so any AKS address within the compute intent network can establish a session as long as it uses the BGP AS number configured on the spine. In this example, the spine AS number is 64512 with a router ID of Y.Y.5.5. The spine advertises the x.x.40.0/26 compute intent and its loopback 0 networks. In the dynamic BGP configuration, the compute intent subnet is specified as the neighbor with a neighbor AS of 65500, using update-source loopback 0 and an `ebgp-multihop` of 3.
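Pulling those values together, the spine configuration for the MetalLB peering might be sketched as follows (NX-OS-style prefix peering shown; the exact dynamic BGP syntax varies by switch OS):

```
router bgp 64512
  router-id Y.Y.5.5
  address-family ipv4 unicast
    network 10.68.40.0/26        ! advertise the compute intent
  ! Dynamic BGP: any MetalLB speaker in the compute intent subnet may peer
  neighbor 10.68.40.0/26 remote-as 65500
    update-source loopback0
    ebgp-multihop 3
    address-family ipv4 unicast
```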
## References
- [Host network requirements for Azure Local](../concepts/host-network-requirements.md)
- [Physical network requirements for Azure Local](../concepts/physical-network-requirements.md)
- [RDMA over Converged Ethernet (RoCE) on Cisco Nexus 9300](https://aboutnetworks.net/rocev2-on-nexus9k/)
- [What is Software Load Balancer (SLB) for SDN?](../concepts/software-load-balancer.md)
- [Overview of MetalLB for Kubernetes clusters](/azure/aks/aksarc/load-balancer-overview)
- [Software defined networking (SDN) in Azure Local and Windows Server](../concepts/software-defined-networking.md)
- [Plan a Software Defined Network infrastructure](../concepts/plan-software-defined-networking-infrastructure.md)
- [Azure Local Network configuration design with SDN](https://techcommunity.microsoft.com/blog/azurestackblog/azure-stack-hci---network-configuration-design-with-sdn/3817175)
This packet capture shows the ETS values in the LLDP packet; this particular setup uses priority 5 rather than 7. Packet capture of an LLDP packet with ETS and PFC configured.