Hello, I was trying to reproduce some out-of-memory problems, and I think I managed to do so with a very simple setup. I have a Spring Boot project with WebFlux (v3.5.3) where I enable the following:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: prometheus,health
```

I configured a WebClient whose only customization is a ServerOAuth2AuthorizedClientExchangeFilterFunction for OAuth2.
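For reference, a minimal sketch of that configuration (assuming the standard `ReactiveOAuth2AuthorizedClientManager` wiring; the class and bean names are illustrative, not my exact code):

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.ReactiveOAuth2AuthorizedClientManager
import org.springframework.security.oauth2.client.web.reactive.function.client.ServerOAuth2AuthorizedClientExchangeFilterFunction
import org.springframework.web.reactive.function.client.WebClient

@Configuration
class WebClientConfig {

    // Assumes a ReactiveOAuth2AuthorizedClientManager is already available in the context.
    @Bean
    fun webClient(authorizedClientManager: ReactiveOAuth2AuthorizedClientManager): WebClient {
        // The filter obtains/refreshes OAuth2 tokens for each request.
        val oauth2 = ServerOAuth2AuthorizedClientExchangeFilterFunction(authorizedClientManager)
        return WebClient.builder()
            .filter(oauth2)
            .build()
    }
}
```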
I then defined the following:

```kotlin
import java.util.concurrent.atomic.AtomicLong
import kotlinx.coroutines.reactor.awaitSingle
import org.springframework.core.ParameterizedTypeReference
import org.springframework.security.oauth2.client.web.reactive.function.client.ServerOAuth2AuthorizedClientExchangeFilterFunction

// webClient and log are fields of the enclosing component.
private val counter = AtomicLong()

suspend fun <T> get(fqdn: String, typeReference: ParameterizedTypeReference<T>, clientRegistrationId: String = "OAuth2") {
    webClient.get()
        .uri(fqdn)
        .attributes(ServerOAuth2AuthorizedClientExchangeFilterFunction.clientRegistrationId(clientRegistrationId))
        .retrieve()
        .bodyToMono(typeReference)
        .awaitSingle()
        .let {
            // Log only every 45th success to keep the output quiet.
            if (counter.incrementAndGet().mod(45L) == 0L) {
                log.info("GET {} succeeded.", fqdn)
            }
        }
}
```

I defined three @Scheduled functions that call this `get` every second (a sketch follows below), and I run the application with these flags: `-XX:MaxDirectMemorySize=100k -Dio.netty.maxDirectMemory=0` (the latter disables Netty's own direct-memory tracking, so allocations go through the JDK and the 100 KB cap applies, as the stack trace below shows). That's pretty much it.
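For completeness, a sketch of one of the scheduled callers (the endpoint URL, response type, and class names here are illustrative, not my real ones):

```kotlin
import kotlinx.coroutines.runBlocking
import org.springframework.core.ParameterizedTypeReference
import org.springframework.scheduling.annotation.Scheduled
import org.springframework.stereotype.Component

// Thin wrapper around the suspend get() shown above (name is illustrative).
interface ApiClient {
    suspend fun <T> get(
        fqdn: String,
        typeReference: ParameterizedTypeReference<T>,
        clientRegistrationId: String = "OAuth2",
    )
}

@Component
class PollingJobs(private val client: ApiClient) {

    // One of the three jobs; assumes @EnableScheduling on the application class.
    // runBlocking bridges the suspend call onto the scheduler thread.
    @Scheduled(fixedRate = 1000)
    fun pollServiceA(): Unit = runBlocking {
        client.get(
            "https://service-a.example.com/items", // illustrative endpoint
            object : ParameterizedTypeReference<Map<String, Any>>() {},
        )
    }
}
```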
For 30 minutes or so, this ran without issues. I then performed a GET against `/actuator/prometheus` to look at `jvm_buffer_memory_used_bytes`.
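As an aside, `jvm_buffer_memory_used_bytes` is the Prometheus rendering of Micrometer's `jvm.buffer.memory.used` gauge, so the same values can also be read in-process; a hypothetical helper (not part of my reproduction) might look like:

```kotlin
import io.micrometer.core.instrument.MeterRegistry

// Hypothetical helper, not part of the reproduction: prints the buffer-pool
// gauges (one per pool, e.g. direct and mapped) behind jvm_buffer_memory_used_bytes.
fun logBufferMemory(registry: MeterRegistry) {
    registry.find("jvm.buffer.memory.used")
        .gauges()
        .forEach { gauge -> println("${gauge.id.tags}: ${gauge.value()} bytes") }
}
```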
Shortly after that, many exceptions like this one were logged:

```
2025-08-19 09:56:34,897 WARN  reactor.core.Exceptions:304 [reactor-http-nio-11] - throwIfFatal detected a jvm fatal exception, which is thrown and logged below:
java.lang.OutOfMemoryError: Cannot reserve 32768 bytes of direct buffer memory (allocated: 102295, limit: 102400)
	at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
	at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:111)
	at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:360)
	at io.netty.buffer.UnpooledDirectByteBuf.allocateDirect(UnpooledDirectByteBuf.java:104)
	at io.netty.buffer.UnpooledDirectByteBuf.<init>(UnpooledDirectByteBuf.java:64)
	at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:41)
	at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:687)
	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:406)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
	at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:140)
	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:796)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:732)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:658)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:1583)
```
This also somehow stopped the scheduled functions entirely.
Is it possible that the Prometheus machinery keeps references to buffers that then cannot be freed?