rootless dns resolv.conf race condition? #26646
Replies: 3 comments 1 reply
-
In case it helps anyone else.
-
In general there are a fair number of race conditions on startup, since we just copy the host's resolv.conf, which of course fails if it is not fully ready.
I would assume that should generally work. Are you sure the network was fully configured after one minute and that resolv.conf had the right content? One thing to note here: unless DisableDNS=true is used, DNS is routed through aardvark-dns, which listens on the bridge IP, and aardvark-dns reads the custom resolv.conf from the rootless-netns.
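A quick way to check what aardvark-dns actually sees is to read the resolv.conf inside the rootless network namespace (a sketch; assumes a netavark-based setup where `podman unshare --rootless-netns` joins that namespace and the custom resolv.conf is visible at /etc/resolv.conf there):

```sh
# Print the resolv.conf as seen from the rootless network namespace,
# i.e. the upstream server list that aardvark-dns forwards to.
podman unshare --rootless-netns cat /etc/resolv.conf
```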
-
I have continued looking into this, and it seems to be some kind of routing issue, possibly on my side. DNS is working properly whether DisableDNS is true or false; the content of resolv.conf differs, but resolution still happens. I do have a working DNS server on my LAN (OPNsense -> Unbound -> Dnsmasq). The issue is accessing anything via IPv6 outside the container, so perhaps I am missing an additional route that needs to be configured on my router or elsewhere.
(The network in question was created via quadlet.)
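For anyone debugging something similar, a minimal check that separates name resolution from raw IPv6 reachability (illustrative; the network name assumes quadlet's default systemd- prefix, and any public IPv6 address works in place of the example):

```sh
# Is it DNS, or is it IPv6 routing? Resolve a name, then ping a
# well-known IPv6 address directly (Google public DNS).
podman run --rm --network systemd-my alpine:latest sh -c '
  nslookup google.com
  ping6 -c 3 2001:4860:4860::8888
'
```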
-
Am using pasta.
I'm trying to understand why my rootless containers, which start on boot via a series of quadlets, get an incorrect resolv.conf.
I am specifying a custom network for each quadlet to use; the .network file is simply an empty [Network] section.
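In quadlet terms that looks roughly like this (a sketch; my.network and my.container are placeholder names):

```ini
# ~/.config/containers/systemd/my.network
[Network]

# ~/.config/containers/systemd/my.container (the relevant line is Network=)
[Container]
Image=docker.io/library/alpine:latest
Network=my.network
```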
If I remove this Network= statement from the quadlet, I don't have this problem, but I start to have port conflict issues since all the quadlets then share the same network.
I am using systemd-resolved with the stub resolver.
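For context, with the stub resolver the host's /etc/resolv.conf normally points at 127.0.0.53, while the fully configured upstream list lives in /run/systemd/resolve/resolv.conf:

```sh
# Host side: the stub listens on 127.0.0.53; the DHCP/RA-learned
# upstream servers are written to /run/systemd/resolve/resolv.conf.
cat /etc/resolv.conf
cat /run/systemd/resolve/resolv.conf
```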
This seems to be some sort of race condition where podman or pasta picks up my DNS configuration before it has been fully assigned via DHCP or router advertisement.
If, after all the containers kicked off during boot have started, I run this while logged in as the same user the containers run under:
podman run --rm alpine:latest sh -c "apk add --quiet --no-cache bind-tools && cat /etc/resolv.conf && time dig +tcp google.com"
The properly configured resolv.conf is output, including my LAN's local DNS IP address.
Why do none of the previously started containers have the proper resolv.conf?
Thinking this is some sort of timing issue, I modified podman-user-wait-network-online.service to just be a one-minute delay.
(Perhaps whatever configures DNS for these containers as they start would then use the fully configured resolv.conf.)
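The override was along these lines (a sketch of a systemd drop-in; the duration is illustrative):

```ini
# systemctl --user edit podman-user-wait-network-online.service
[Service]
ExecStart=
ExecStart=/usr/bin/sleep 60
```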
Yet my rootless containers started via quadlets with their custom networks have a resolv.conf from earlier in the boot process, not the one in /run/systemd/resolve/resolv.conf at the time they start. I verified this by modifying systemd-network-online.service to output my resolv.conf, and indeed at that point it is not yet configured properly and has the same content as what shows up in my containers, even with the one-minute delay.
Any suggestions on how to get the proper resolv.conf loaded? I would rather not mount /run/systemd/resolve/resolv.conf into each container (see the sketch below)... but if I have to... I know I can also just remove the custom networks and juggle the ports around so there are no conflicts.
Is there another podman-related service I can delay until my DNS configuration is as it should be?
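If it comes to the bind-mount workaround, it is a one-line addition per quadlet (a sketch; read-only mount assumed):

```ini
# In each .container file
[Container]
Volume=/run/systemd/resolve/resolv.conf:/etc/resolv.conf:ro
```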
-- Based on stat (creation time), my /run/systemd/resolve/resolv.conf is written here: after the container-runner user session has started, and before the delay ends and the pods begin to start up.
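A check along these lines shows the relevant ordering (illustrative; %w is the file's birth time, where the filesystem records it):

```sh
# Birth/modification times of the resolved-generated file, plus the
# user session's podman wait service log for comparison.
stat -c 'birth=%w modified=%y %n' /run/systemd/resolve/resolv.conf
journalctl --user -b -u podman-user-wait-network-online.service --no-pager
```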
Thanks,
-Greg