Description
The question is with reference to netty/netty#13817.
In our application, we use the Reactor Netty libraries (and their dependencies). When we upgraded to Netty version 4.1.107+, we observed multiple UDP ports in use.
For example, from the netstat output:
udp6 0 0 :::38513 :::* 1625810/java
udp6 0 0 :::56579 :::* 1625810/java
udp6 0 0 :::60207 :::* 1625810/java
udp6 0 0 :::60481 :::* 1625810/java
In our application, this is how we instantiate the ReactorNettyTcpClient:
ReactorNettyTcpClient<byte[]> tcpClient = new ReactorNettyTcpClient<>(configurer -> configurer
.host(getHostName())
.port(brokerRelayPort)
.secure(SslProvider.builder().sslContext(getSslContext()).build()), new StompReactorNettyCodec());
With debugging enabled, below is the stack trace of the thread that creates the DnsNameResolver instance and thus uses one UDP port.
"tcp-client-loop-nio-2@40650" tid=0xb8 nid=NA runnable
java.lang.Thread.State: RUNNABLE
at io.netty.resolver.dns.DnsNameResolver.&lt;init&gt;(DnsNameResolver.java:440)
at io.netty.resolver.dns.DnsNameResolverBuilder.build(DnsNameResolverBuilder.java:594)
at io.netty.resolver.dns.DnsAddressResolverGroup.newNameResolver(DnsAddressResolverGroup.java:114)
at io.netty.resolver.dns.DnsAddressResolverGroup.newResolver(DnsAddressResolverGroup.java:92)
at io.netty.resolver.dns.DnsAddressResolverGroup.newResolver(DnsAddressResolverGroup.java:77)
at io.netty.resolver.AddressResolverGroup.getResolver(AddressResolverGroup.java:70)
- locked <0xa2e1> (a java.util.IdentityHashMap)
at reactor.netty.transport.TransportConnector.doResolveAndConnect(TransportConnector.java:303)
at reactor.netty.transport.TransportConnector.lambda$connect$6(TransportConnector.java:165)
at reactor.netty.transport.TransportConnector$$Lambda/0x000000080248a500.apply(Unknown Source:-1)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
at reactor.netty.transport.TransportConnector$MonoChannelPromise._subscribe(TransportConnector.java:638)
at reactor.netty.transport.TransportConnector$MonoChannelPromise.lambda$subscribe$0(TransportConnector.java:550)
at reactor.netty.transport.TransportConnector$MonoChannelPromise$$Lambda/0x000000080248a740.run(Unknown Source:-1)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.runWith(Unknown Source:-1)
at java.lang.Thread.run(Unknown Source:-1)
Debugging also revealed that the ReactorNettyTcpClient constructor creates its own loop resources:
public ReactorNettyTcpClient(Function<TcpClient, TcpClient> clientConfigurer, ReactorNettyCodec<P> codec) {
Assert.notNull(codec, "ReactorNettyCodec is required");
this.channelGroup = new DefaultChannelGroup(ImmediateEventExecutor.INSTANCE);
this.loopResources = LoopResources.create("tcp-client-loop");
this.poolResources = ConnectionProvider.create("tcp-client-pool", 10000);
this.codec = codec;
this.tcpClient = clientConfigurer.apply(TcpClient
.create(this.poolResources)
.runOn(this.loopResources, false)
.doOnConnected(conn -> this.channelGroup.add(conn.channel())));
}
The event loops come from this line in that constructor:
this.loopResources = LoopResources.create("tcp-client-loop");
and AddressResolverGroup (see AddressResolverGroup.getResolver in the stack trace above) maintains an IdentityHashMap from event loop (NioEventLoop in our case) to address resolver, so each distinct event loop ends up with its own resolver.
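To illustrate our reading of the stack trace, here is a minimal standalone sketch. It is not our application code and not Netty's implementation; the DnsAddressResolverGroup constructor and class name used here are just the simplest combination we found in the javadoc, chosen to show that each event loop passed to getResolver appears to get its own resolver.

import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.dns.DnsAddressResolverGroup;
import io.netty.resolver.dns.DnsServerAddressStreamProviders;

import java.net.InetSocketAddress;

public class PerEventLoopResolverSketch {

    public static void main(String[] args) throws Exception {
        NioEventLoopGroup group = new NioEventLoopGroup(4);
        DnsAddressResolverGroup resolverGroup = new DnsAddressResolverGroup(
                NioDatagramChannel.class, DnsServerAddressStreamProviders.platformDefault());
        try {
            // getResolver() caches resolvers in an IdentityHashMap keyed by the
            // EventExecutor, so each distinct event loop gets its own DnsNameResolver.
            for (int i = 0; i < 4; i++) {
                EventLoop loop = group.next();
                AddressResolver<InetSocketAddress> resolver = resolverGroup.getResolver(loop);
                System.out.println(loop + " -> " + resolver);
            }
            // As we understand netty/netty#13817, on 4.1.107+ each DnsNameResolver
            // binds its datagram channel at construction time, so netstat/ss should
            // now show up to 4 UDP ports owned by this JVM, one per event loop above.
        } finally {
            resolverGroup.close();
            group.shutdownGracefully().sync();
        }
    }
}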
We would like to understand what causes multiple event loop instances to be created, which in turn creates multiple address resolvers and therefore puts multiple UDP ports in use.
Is there any setting to control the number of NioEventLoop instances created and/or the number of address resolver instances? (A sketch of the kind of adjustment we have in mind is included below.)
Depending on its configuration, our application has 1, 8, or up to 40 UDP ports in use, and we would like to understand why.
On our side, this was the result of a Netty third-party (TPIP) upgrade, with no application code or configuration changes as such.
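For reference, the kind of adjustment we have in mind is sketched below. This is only our guess from the Reactor Netty javadoc: we are assuming that the resolver(...) method inherited from ClientTransport and the LoopResources.create(prefix, workerCount, daemon) overload are the relevant knobs, and that setting them inside the configurer passed to ReactorNettyTcpClient actually takes effect. Everything else matches the snippet earlier in this report.

// Additional imports on top of the snippet above (assumption, not verified):
import io.netty.resolver.DefaultAddressResolverGroup;
import reactor.netty.resources.LoopResources;

ReactorNettyTcpClient<byte[]> tcpClient = new ReactorNettyTcpClient<>(configurer -> configurer
        .host(getHostName())
        .port(brokerRelayPort)
        // Option A (assumption): use the JDK blocking resolver so the Netty DNS
        // resolver, and therefore its per-event-loop UDP sockets, is never created.
        .resolver(DefaultAddressResolverGroup.INSTANCE)
        // Option B (assumption): cap the number of event loops, which should also cap
        // the number of per-loop resolvers / UDP ports. Uncomment to try instead of A.
        // .runOn(LoopResources.create("tcp-client-loop", 1, true))
        .secure(SslProvider.builder().sslContext(getSslContext()).build()),
        new StompReactorNettyCodec());

Is either of these the intended way to control this, or is there a different recommended setting?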
Appreciate your help.
Thanks,
Amit