
IPv6 MLD listener report going to wrong interface #6247

@martijncoenen

Description


Describe the bug

I'm running OPNsense virtualized inside Proxmox; LAN and WAN are both virtio interfaces, each mapped to a bridge in Proxmox, and the bridges map to two physical interfaces. My network has several UniFi switches, and I have IGMP snooping enabled on several VLANs on the internal LAN. This causes one of the switches to send out MLD queries for IPv6 to determine which multicast addresses a host/port is interested in. I confirmed that these multicast queries arrive correctly at the bridge in Proxmox, and in turn at the virtio interface in OPNsense. This capture is from the vtnet1_vlan100 interface, which corresponds to VLAN 100 on the LAN:

19:47:15.748139 IP6 fe80::f692:bfff:fe81:337c > ff02::1: HBH ICMP6, multicast listener query max resp delay: 10000 addr: ::, length 24

This interface has IPv6 configured: my ISP hands out a prefix over PPPoE, and I use "track interface" with WAN as the upstream.
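For reference, the Proxmox side is a standard VLAN-aware bridge; a minimal sketch of the relevant stanza in /etc/network/interfaces (vmbr1 and enp2s0 are placeholder names, not my exact config):

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094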

Interestingly, when PPPoE is disconnected everything looks fine, and the MLD listener reports are sent back correctly (fe80::9ca3:3dff:fea4:9380 is the vtnet1_vlan100 interface):

19:47:18.462474 IP6 fe80::9ca3:3dff:fea4:9380 > ff05::1:3: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff05::1:3, length 24
19:47:19.071261 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:ff49:d8be: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::2:ff49:d8be, length 24
19:47:19.271584 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:ffa4:9380: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ffa4:9380, length 24
19:47:19.683804 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:49d8:bec6: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::2:49d8:bec6, length 24
19:47:22.090143 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:2: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:2, length 24

But when I set up a connection to my ISP over PPPoE, the queries still arrive on vtnet1_vlan100, yet the reports suddenly go out over the pppoe0 interface instead of the vtnet1_vlan100 interface:

# tcpdump -i pppoe0 | grep -i multi
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on pppoe0, link-type NULL (BSD loopback), capture size 262144 bytes
19:54:20.570018 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::2:49d8:bec6: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::2:49d8:bec6, length 24
19:54:21.370026 IP6 fe80::9ca3:3dff:fea4:9380 > ff05::1:3: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff05::1:3, length 24
19:54:21.576229 IP6 fe80::9ca3:3dff:fea4:9380 > ff02::1:ffa4:9380: HBH ICMP6, multicast listener report max resp delay: 0 addr: ff02::1:ffa4:9380, length 24
...

I think this happens because, as soon as PPPoE is connected, a default route via the pppoe0 interface is added to the routing table, and the MLD report follows the default route. That seems wrong: the report should go back out over the interface the query came in on (vtnet1_vlan100 in this case).
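A rough way to check this theory from the OPNsense shell (FreeBSD commands; the group address is just one of the examples from the captures above, and I'm assuming route accepts the scoped %interface form here):

# show the default route that appears once the PPPoE session is up
netstat -rn -f inet6 | head

# ask the kernel which route/interface it would pick for a link-scoped group
route -n get -inet6 ff02::1:3%vtnet1_vlan100

# dump per-interface IPv6 multicast memberships and MLD state
ifmcstat -i vtnet1_vlan100 -f inet6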

Anyway, the result is that the Linux bridge never learns that OPNsense is interested in the "all routers" multicast address ff02::2, so when a client on the LAN sends an IPv6 router solicitation, it never reaches my OPNsense box, and things break.
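One can see this directly by watching the LAN interface for incoming router solicitations (ICMPv6 type 133; the ip6[40] offset assumes no extension headers before the ICMPv6 header, which is the normal case for a solicitation). With snooping active and the reports going out the wrong interface, nothing shows up here while clients solicit:

tcpdump -ni vtnet1_vlan100 'icmp6 and ip6[40] == 133'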

I've only just set up IPv6 locally, so I'm not sure if this issue is specific to this version or not.

To Reproduce

I described the environment above. I'm not sure how much of it is relevant for reproducing this; I suspect at least some of it is, because otherwise I would expect many more users to hit this issue.

Expected behavior

MLD listener reports should be sent back out over the interface on which the query was received.

Describe alternatives you considered

I can work around this problem by disabling IGMP snooping internally, or by configuring the Proxmox bridge to forward all multicast packets to all ports. Neither is optimal.
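For the second workaround, the relevant knob on the Proxmox host is the Linux bridge's multicast snooping (vmbr1 is a placeholder for the LAN bridge; the first form is a runtime toggle, the second the persistent ifupdown2 option):

# runtime: flood multicast to all ports instead of filtering on snooped state
echo 0 > /sys/class/net/vmbr1/bridge/multicast_snooping

# persistent, in the bridge stanza of /etc/network/interfaces:
#     bridge-mcsnoop 0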

Screenshots

N/A

Relevant log files

N/A

Additional context

N/A

Environment


OPNsense 22.7.10_2 (amd64, OpenSSL), virtualized in Proxmox
Ryzen 3700X
virtio network adapters (backed by proxmox bridge)
