Replies: 4 comments
Oh, I should also note that I've tried adjusting the socket keepalive settings as well, and they seem to have no effect on how long the socket stays open.
I believe what you are seeing is #10143
If we are closing the TCP connection while there are responses waiting, this is a bug, though. Can you show us a packet capture and perhaps a way to reproduce the issue?
I tried to reproduce this again, but after updating dnsdist to 1.9 and fixing a few bugs in my code I was unable to. I have an environment set up if you want to test any of this again, but now I am seeing the behavior where dnsdist only closes after not receiving a message for
I expected setTCPRecvTimeout to be the length of time dnsdist waits before closing a TCP connection when there is no activity, but it appears to close the connection every 2s (or whatever value you pass to this setting) even while the connection is in use: being written to, or with pending responses outstanding.
Some more background:
I am implementing a multiplexing TCP connection to dnsdist (in Rust; I can share the code if it's helpful). To test the implementation, I'm running dnsdist configured to spoof 1.2.3.4 for all queries.
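For reference, the dnsdist side of my test setup is roughly this (a minimal sketch; the listen address is just what I happen to use):

```lua
-- Listen locally and answer every query with a spoofed A record,
-- so no real backend is needed for the test.
setLocal("127.0.0.1:5300")
addAction(AllRule(), SpoofAction("1.2.3.4"))
```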
My client opens a TCP connection and starts to send DNS messages, but dnsdist appears to close the TCP connection after 2s, even though I am still sending messages and there are pending responses waiting to come back. The result is that my socket hits EOF and I've got a bunch of pending messages that fail.
If I set setTCPRecvTimeout to 0, I don't see this behaviour, but I'm not sure whether dnsdist will then ever close the connection. Is setting it to 0 a bad idea?
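For context, the setting in question is a one-liner in the dnsdist config (the value is in seconds, as far as I can tell; 2 here just makes the behaviour easy to trigger):

```lua
-- Close an incoming TCP connection after 2 seconds of (supposed) inactivity.
-- Setting this to 0 appears to disable the timeout entirely.
setTCPRecvTimeout(2)
```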
Are there any other settings I should adjust in dnsdist if I'm going to use a smaller number of TCP connections that pipeline many messages, rather than one message per TCP connection? So far I've been unable to make the multiplexing faster than simply opening a new TCP stream per message, which is a little surprising.
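In case it matters, these are the incoming-TCP knobs I've found so far; I'm not sure which of them are relevant to pipelining, so treat the values below as placeholders rather than recommendations:

```lua
-- Allow many pipelined queries on a single incoming connection.
setMaxTCPQueriesPerConn(0)      -- 0 = no per-connection query limit
setMaxTCPConnectionDuration(0)  -- 0 = no hard cap on connection lifetime
setTCPSendTimeout(10)           -- seconds to wait when writing responses back
```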