Honoring SVI profile VRF/Loopback on assigning VRF to switch #6572

@c-po

Description

Issue Summary

Sorry - I was not able to come up with a better-sounding headline.

@kmueller68 and I are currently evaluating the EOS feature to relay DHCP requests through a "service VRF" that is different from the VRF in which the SVI resides.

svi_profiles:
  - profile: svi_profile_01
    ip_helpers:
      - ip_helper: 10.0.2.30
        source_interface: loopback107
        source_vrf: blue
      - ip_helper: 10.0.23.153
        source_interface: loopback107
        source_vrf: blue
tenants:
  - name: Tenant_A
    vrfs:
      - name: blue
        vrf_vni: 54
        vtep_diagnostic:
          loopback: 107
          loopback_ip_pools:
            - pod: DC1
              ipv4_pool: 10.0.240.0/26

      - name: red
        vrf_vni: 51
        svis:
          - id: 482
            name: "red"
            tags: [dev]
            enabled: true
            igmp_snooping_enabled: false
            ip_address_virtual: 10.101.65.65/26
            mtu: 1500
            profile: svi_profile_01

The simple YAML definition above achieves what we need. It renders:

interface Vlan482
   mtu 1500
   vrf red
   ip helper-address 10.0.2.30 vrf blue source-interface Loopback107
   ip helper-address 10.0.23.153 vrf blue source-interface Loopback107
   ip address virtual 10.101.65.65/26

All good so far. The issue we face arises when building up the switch/node structure.

l3leaf:
  node_groups:
    - group: DC1_LEAF15
      bgp_as: 65117
      filter:
        tenants: [Tenant_A]
        tags: [prod, dev]
      nodes:
        - name: leaf-1a1.DC1
          id: 29

VRF blue is not detected as being in service on that leaf for the DHCP relay. I wish this could somehow be detected automatically.
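In principle, such auto-detection could walk the SVI profiles in use on a switch and collect every `source_vrf` referenced by an `ip_helper`. A minimal Python sketch of that idea (illustrative only, not AVD code; the data mirrors the `svi_profiles` definition above):

```python
# Sketch: derive the set of "extra" VRFs a leaf needs in service
# because its SVI profiles relay DHCP through them.

svi_profiles = [
    {
        "profile": "svi_profile_01",
        "ip_helpers": [
            {"ip_helper": "10.0.2.30", "source_interface": "loopback107", "source_vrf": "blue"},
            {"ip_helper": "10.0.23.153", "source_interface": "loopback107", "source_vrf": "blue"},
        ],
    }
]

def relay_source_vrfs(profiles: list[dict]) -> set[str]:
    """Collect every source_vrf referenced by any ip_helper in the given profiles."""
    return {
        helper["source_vrf"]
        for profile in profiles
        for helper in profile.get("ip_helpers", [])
        if "source_vrf" in helper
    }

print(relay_source_vrfs(svi_profiles))  # -> {'blue'}
```

For the example above this yields `{'blue'}`, i.e. exactly the VRF that is currently missed on the leaf.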

One workaround I explored was extending the switch node_group:

l3leaf:
  node_groups:
    - group: DC1_LEAF15
      filter:
        always_include_vrfs_in_tenants: [Tenant_A] # pull in all VRFs - even ones I do not need
        allow_vrfs: # MANUALLY select VRFs I only need on the switch
          - red
          - blue
          - green
          - yellow

This is cumbersome and error-prone. It would be nice if this could be auto-detected, or at least to have an extra_vrf: key to define additional VRFs on a given switch instead of walking through allow/deny lists.
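For illustration, such a key could look like the following. This is hypothetical syntax, not an existing AVD key; the name and placement are assumptions:

```yaml
l3leaf:
  node_groups:
    - group: DC1_LEAF15
      filter:
        tenants: [Tenant_A]
        tags: [prod, dev]
      extra_vrfs: [blue]  # hypothetical key: force VRF blue into service on this group
```

The point is that only the one additionally required VRF would be named, without enumerating every VRF the filter already pulls in.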

Which components of AVD are impacted

eos_designs

AVD version

5.7.2

Ansible version

ansible [core 2.18.12]

Python version

python version = 3.11.2 (main, Apr 28 2025, 14:11:48) [GCC 12.2.0] (/usr/bin/python3)

How do you run AVD?

Other

Steps to reproduce

Relevant log output

Contributing Guide

  • I agree to follow this project's Code of Conduct

Metadata

Assignees

No one assigned

Labels

type: bug (Something isn't working)