
Improve fabric numbering allocation with PODs and Superspines #6620

@dgonzalez-arista

Description


Enhancement summary

As discussed with @gmuloc, when using default_interfaces + fabric numbering with multiple PODs, the default allocation produces a conflict: spines in different PODs are assigned the same node ID, so they map to the same interface on the superspines:

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pyavd._errors.AristaAvdDuplicateDataError
fatal: [SSP2 -> localhost]: FAILED! => {"changed": false, "msg": "Found duplicate objects with conflicting data while generating configuration for EthernetInterfaces. {'name': 'Ethernet1', 'description': 'P2P_DC1-POD2-SP1_Ethernet8', 'shutdown': False, 'mtu': 9214, 'ip_address': '192.168.4.2/31', 'metadata': {'peer': 'DC1-POD2-SP1', 'peer_interface': 'Ethernet8', 'peer_type': 'spine'}, 'switchport': {'enabled': False}} conflicts with {'name': 'Ethernet1', 'description': 'P2P_DC1-POD1-SP1_Ethernet8', 'shutdown': False, 'mtu': 9214, 'ip_address': '192.168.3.2/31', 'metadata': {'peer': 'DC1-POD1-SP1', 'peer_interface': 'Ethernet8', 'peer_type': 'spine'}, 'switchport': {'enabled': False}}."}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pyavd._errors.AristaAvdDuplicateDataError
fatal: [SSP1 -> localhost]: FAILED! => {"changed": false, "msg": "Found duplicate objects with conflicting data while generating configuration for EthernetInterfaces. {'name': 'Ethernet1', 'description': 'P2P_DC1-POD2-SP1_Ethernet7', 'shutdown': False, 'mtu': 9214, 'ip_address': '192.168.4.0/31', 'metadata': {'peer': 'DC1-POD2-SP1', 'peer_interface': 'Ethernet7', 'peer_type': 'spine'}, 'switchport': {'enabled': False}} conflicts with {'name': 'Ethernet1', 'description': 'P2P_DC1-POD1-SP1_Ethernet7', 'shutdown': False, 'mtu': 9214, 'ip_address': '192.168.3.0/31', 'metadata': {'peer': 'DC1-POD1-SP1', 'peer_interface': 'Ethernet7', 'peer_type': 'spine'}, 'switchport': {'enabled': False}}."}

The workaround is to set the following knob on the spines:

fabric_numbering_node_id_pool: "fabric_name={fabric_name}{dc_name?</dc_name=}{type?</type=}"
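For placement, a minimal sketch of where this override could live (the group_vars file and group name are assumptions for illustration, not from the source):

```yaml
# group_vars/DC1_SPINES.yml  (hypothetical group name)
# Override the default node ID pool key so that pod_name is not part
# of it; all spines in the DC then draw IDs from a single pool.
fabric_numbering_node_id_pool: "fabric_name={fabric_name}{dc_name?</dc_name=}{type?</type=}"
```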

This key omits the POD_name, so each spine is allocated a unique ID across the DC:

node_id_pools:
  fabric_name=FABRIC/dc_name=DC1/pod_name=DC1-POD1/type=l3leaf:
    hostname=DC1-POD1-CL1: 1
    hostname=DC1-POD1-CL2: 2
    hostname=DC1-POD1-CL3: 3
    hostname=DC1-POD1-CL4: 4
  fabric_name=FABRIC/dc_name=DC1/pod_name=DC1-POD2/type=l3leaf:
    hostname=DC1-POD2-CL1: 1
    hostname=DC1-POD2-CL2: 2
  fabric_name=FABRIC/dc_name=DC1/type=spine:
    hostname=DC1-POD1-SP1: 1
    hostname=DC1-POD1-SP2: 2
    hostname=DC1-POD2-SP1: 3
    hostname=DC1-POD2-SP2: 4
  fabric_name=FABRIC/type=super-spine:
    hostname=SSP1: 1
    hostname=SSP2: 2

Note that with this workaround some addressing is wasted in the second POD (IDs 3 and 4 in the example), since the node ID also drives the subnet allocation.
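To illustrate the waste (the pool usage below is an assumption for illustration, not taken from the source inventory):

```yaml
# Illustrative only - not from the source inventory.
# Subnet offsets within a POD's uplink pool follow the spine node ID,
# so with DC-wide spine IDs, POD2's pool keeps gaps for IDs 1 and 2:
pod2_uplink_pool_usage:
  id_1: unused  # ID belongs to DC1-POD1-SP1
  id_2: unused  # ID belongs to DC1-POD1-SP2
  id_3: DC1-POD2-SP1
  id_4: DC1-POD2-SP2
```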

It would be nice to achieve this without modifying the default fabric_numbering_node_id_pool.

Which component of AVD is impacted

eos_designs

Use case example

PODs with superspines, when using default interfaces and automatic fabric numbering.

Describe the solution you would like

  1. Automatic ID allocation for PODs + superspines that does not result in conflicts.

  2. Ideally, the ability to have overlapping IDs in each POD, with the superspines managing multiple pools and interface allocations. Perhaps support for multiple downlink_interfaces ranges (one per POD) on the superspines?
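One possible shape for item 2, sketched as a hypothetical node settings fragment (none of these keys are claimed to exist in the current eos_designs schema):

```yaml
# Hypothetical schema sketch - keys and values are illustrative only.
super_spine:
  defaults:
    downlink_pools:
      - downlink_interfaces: [Ethernet1-8]
        pod_name: DC1-POD1   # POD1 spines land here; IDs may repeat per POD
      - downlink_interfaces: [Ethernet9-16]
        pod_name: DC1-POD2   # POD2 spines land here
```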

Describe alternatives you have considered

Using the fabric_numbering_node_id_pool knob shown above so that POD_name is not considered for spines.

Additional context

No response

Contributing Guide

  • I agree to follow this project's Code of Conduct
