Description
I am implementing a high‑throughput TCP gateway using Spring Boot 4, Spring Framework 7, and Spring Integration IP 7.0.0. The system is designed for a long‑term target of 10,000+ concurrent persistent TCP connections, each sending telemetry frames.
However, even under very moderate load (around 1,000–3,000 concurrent connections, each sending 1 message every ~10 seconds), the server begins dropping connections with repeated:
Timed out waiting for IO
Timed out waiting for buffer space
This happens even though:
- I am using TcpNioConnectionFactory
- Direct buffers are enabled
- The connection factory has large OS buffer sizes
- A custom CompositeExecutor (IO + assembler) is configured with large pools, no queues, and abort/caller-runs policies
- A very fast, zero-copy deserializer is in place (a simplified sketch follows this list)
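For context, the custom deserializer is essentially a length-prefixed frame reader. A simplified stand-in (not the exact production class; the 2-byte length header is an illustration of the framing) looks roughly like this:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.springframework.core.serializer.Deserializer;

// Simplified stand-in for the production deserializer (assumes 2-byte length-prefixed frames).
public class TelemetryFrameDeserializer implements Deserializer<byte[]> {

    @Override
    public byte[] deserialize(InputStream inputStream) throws IOException {
        DataInputStream in = new DataInputStream(inputStream);
        int length = in.readUnsignedShort(); // 2-byte length header
        byte[] frame = new byte[length];
        in.readFully(frame);                 // read exactly one frame, nothing more
        return frame;
    }
}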
I read through the NIO note in the Spring docs:
https://docs.spring.io/spring-integration/reference/ip/note-nio.html
…and also past issues, especially this one:
#2222
But the symptoms remain identical even with the recommended setup.
I would like to confirm whether this is:
- a misconfiguration on my side, or
- a limitation/bug in Spring Integration NIO, or
- something requiring special tuning at OS/JDK level.
Any guidance would be hugely appreciated.
Load pattern
- ~1,000–3,000 concurrent persistent TCP connections in current tests
- Each connection sends about 1 small message every ~10 seconds
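For reproduction, the test client is roughly equivalent to the following sketch (host, port, framing, and the staggered schedule are simplified placeholders, not the exact load tool used):

import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Rough load-generator sketch: N persistent connections, each writing one small
// length-prefixed frame every ~10 seconds.
public class TelemetryLoadClient {

    public static void main(String[] args) throws Exception {
        int connections = 3_000; // upper end of the current test range
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(8);

        for (int i = 0; i < connections; i++) {
            Socket socket = new Socket("localhost", 3333); // host/port are assumptions for this sketch
            socket.setTcpNoDelay(true);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            byte[] payload = ("telemetry-" + i).getBytes(StandardCharsets.UTF_8);

            scheduler.scheduleAtFixedRate(() -> {
                try {
                    out.writeShort(payload.length); // 2-byte length header (assumed framing)
                    out.write(payload);
                    out.flush();
                }
                catch (Exception e) {
                    // connection was dropped by the server - the failure described in this issue
                }
            }, i % 10_000, 10_000, TimeUnit.MILLISECONDS); // stagger starts, then 1 frame / 10 s
        }

        Thread.currentThread().join();
    }
}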
Logs
Below is an example of the recurring error:
ERROR [ nio-asm-19] TcpNioConnection : Read exception 0:0:0:0:0:0:0:1:61393:3333:da8e7d48-65d3-484a-9045-4f888f0f82a2 IOException:null:Timed out waiting for IO
ERROR [ nio-io-2] TcpNioConnection : Exception on Read 0:0:0:0:0:0:0:1:61393:3333:da8e7d48-65d3-484a-9045-4f888f0f82a2 Timed out waiting for buffer space
java.io.IOException: Timed out waiting for buffer space
at org.springframework.integration.ip.tcp.connection.TcpNioConnection$ChannelInputStream.write(TcpNioConnection.java:824) ~[spring-integration-ip-7.0.0.jar:7.0.0]
at org.springframework.integration.ip.tcp.connection.TcpNioConnection.sendToPipe(TcpNioConnection.java:494) ~[spring-integration-ip-7.0.0.jar:7.0.0]
at org.springframework.integration.ip.tcp.connection.TcpNioConnection.doRead(TcpNioConnection.java:477) ~[spring-integration-ip-7.0.0.jar:7.0.0]
at org.springframework.integration.ip.tcp.connection.TcpNioConnection.readPacket(TcpNioConnection.java:534) ~[spring-integration-ip-7.0.0.jar:7.0.0]
at org.springframework.integration.ip.tcp.connection.AbstractConnectionFactory.lambda$keyReadable$0(AbstractConnectionFactory.java:774) ~[spring-integration-ip-7.0.0.jar:7.0.0]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1090) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:1474) ~[na:na]
This eventually leads to connection closures.
Environment
- Spring Boot: 4.0.0
- Spring Framework: 7.x
- Spring Integration IP: 7.0.0
- JDK: Java 25
- Platform: macOS (same behavior observed on Linux)
Using:
- Tcp.inboundAdapter
- TcpNioConnectionFactory
- CompositeExecutor (IO + assembler pools)
- VirtualThreadTaskExecutor for downstream message processing
- Direct buffers enabled
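For completeness, the inbound flow is wired roughly like this (simplified; bean names, the thread-name prefix, and the handler body are placeholders for the actual configuration):

import org.springframework.context.annotation.Bean;
import org.springframework.core.task.VirtualThreadTaskExecutor;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.ip.dsl.Tcp;
import org.springframework.integration.ip.tcp.connection.AbstractServerConnectionFactory;

@Bean
public IntegrationFlow telemetryInboundFlow(AbstractServerConnectionFactory tcpNioConnectionFactory) {
    return IntegrationFlow
            .from(Tcp.inboundAdapter(tcpNioConnectionFactory))
            // hand off to virtual threads for downstream processing
            .channel(c -> c.executor(new VirtualThreadTaskExecutor("telemetry-")))
            .handle(message -> {
                // downstream telemetry processing happens here
            })
            .get();
}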
Connection Factory Configuration
(only relevant pieces shown)
factory.setUsingDirectBuffers(true);
factory.setSoReceiveBufferSize(256 * 1024);
factory.setSoSendBufferSize(256 * 1024);
factory.setSoTcpNoDelay(true);
factory.setSoKeepAlive(true);
factory.setSingleUse(false);
factory.setReadDelay(10_000); // tested with different values
factory.setBacklog(65_535);
factory.setTaskExecutor(tcpNioCompositeExecutor());
factory.setNioHarvestInterval(5_000);
Custom CompositeExecutor
public CompositeExecutor tcpNioCompositeExecutor() {
    int cores = Runtime.getRuntime().availableProcessors();
    int ioCore = Math.max(4, cores);
    int ioMax = Math.max(cores * 4, 16);
    int asmCore = Math.max(cores * 4, 16);
    int asmMax = Math.max(cores * 16, 64);

    // IO (reader) pool - no queue, abort when saturated
    ThreadPoolTaskExecutor io = new ThreadPoolTaskExecutor();
    io.setThreadNamePrefix("nio-io-");
    io.setCorePoolSize(ioCore);
    io.setMaxPoolSize(ioMax);
    io.setQueueCapacity(0);
    io.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());
    io.initialize();

    // Assembler pool - no queue, caller-runs when saturated
    ThreadPoolTaskExecutor asm = new ThreadPoolTaskExecutor();
    asm.setThreadNamePrefix("nio-asm-");
    asm.setCorePoolSize(asmCore);
    asm.setMaxPoolSize(asmMax);
    asm.setQueueCapacity(0);
    asm.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    asm.initialize();

    return new CompositeExecutor(io, asm);
}
Please let me know if I am missing something in my configuration or understanding.
If not, a fix or clarification would be highly appreciated.