Prune idle sessions before starting new ones #701
This PR replaces #677. The key difference between this PR and the old one is that extra idle sessions are closed immediately before starting new ones instead of waiting for the `IdleTrackingBackgroundService` to clean them up the next time its periodic timer fires. To achieve this, I made the `ConcurrentDictionary` tracking the stateful `StreamableHttpSession`s a singleton service that's also responsible for pruning idle sessions. `StreamableHttpSession` was previously named `HttpMcpSession<TTransport>`, but it's no longer shared with the `SseHandler`.

You can look at the description of #677 to see the consequences of creating too many new sessions without first closing and unrooting a corresponding number of idle ones. The tl;dr is that overallocating could lead to thread pool starvation as hundreds of threads had to wait on the GC to allocate heap space. This starvation created a vicious cycle: it prevented the `IdleTrackingBackgroundService` from unrooting the idle sessions, causing more of them to get promoted and creating still more work for the GC.
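As a rough illustration of the prune-before-start idea (not the actual implementation — the session type here is a stand-in stub with only the members the sketch needs, and all member names besides `StreamableHttpSession` are assumed):

```csharp
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Stand-in for the real session type; only the members this sketch needs.
internal sealed class StreamableHttpSession(string id) : IAsyncDisposable
{
    public string Id { get; } = id;
    public long LastActivityTicks;

    // A session is idle when it has no in-flight requests.
    private int _activeRequests;
    public bool IsIdle => Volatile.Read(ref _activeRequests) == 0;

    public ValueTask DisposeAsync() => ValueTask.CompletedTask;
}

// Singleton service owning the session dictionary and pruning idle sessions.
internal sealed class StreamableHttpSessionManager(int maxIdleSessionCount)
{
    private readonly ConcurrentDictionary<string, StreamableHttpSession> _sessions = new();

    public async ValueTask StartNewSessionAsync(StreamableHttpSession session)
    {
        // Close extra idle sessions inline, before rooting the new one,
        // instead of waiting for the next periodic cleanup timer tick.
        while (_sessions.Values.Count(s => s.IsIdle) >= maxIdleSessionCount)
        {
            var mostIdle = _sessions.Values
                .Where(s => s.IsIdle)
                .OrderBy(s => Volatile.Read(ref s.LastActivityTicks))
                .FirstOrDefault();

            if (mostIdle is null || !_sessions.TryRemove(mostIdle.Id, out var removed))
            {
                break;
            }

            await removed.DisposeAsync();
        }

        _sessions[session.Id] = session;
    }
}
```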
To reduce contention, I reuse the sorted `_idleSessionIds` list to find the most idle session to remove next. This list only gets repopulated every 5 seconds on a background loop, or when we run out of cached idle sessions to close to make room for new ones. This isn't perfect, because a session may briefly become active while sitting in the `_idleSessionIds` list without getting resorted. However, that's only a problem if the server is at the `MaxIdleSessionCount` and every idle session that was less recently active during the last sort has already been closed. Considering a sort should happen at least every 5 seconds while sessions are being pruned, I think this is a fair tradeoff to avoid global synchronization both on session creation (at least while under the `MaxIdleSessionCount`) and every time a session becomes idle.
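Continuing the sketch above, the cached-list optimization might look roughly like this inside the manager. `_idleSessionIds` matches the name in the description; everything else (field names, the lazy resort) is assumed, and synchronization between the pruning path and the 5-second background loop is elided for brevity:

```csharp
// Added to the StreamableHttpSessionManager sketch above.
private List<string> _idleSessionIds = new();
private int _nextIdleIndex;

private bool TryRemoveMostIdleSession(out StreamableHttpSession? removed)
{
    var resorted = false;

    while (true)
    {
        if (_nextIdleIndex >= _idleSessionIds.Count)
        {
            if (resorted)
            {
                // No idle sessions remain even after a fresh sort.
                removed = null;
                return false;
            }

            // Repopulate, ordered from least to most recently active.
            // The background loop also does this every 5 seconds.
            _idleSessionIds = _sessions.Values
                .Where(s => s.IsIdle)
                .OrderBy(s => Volatile.Read(ref s.LastActivityTicks))
                .Select(s => s.Id)
                .ToList();
            _nextIdleIndex = 0;
            resorted = true;
            continue;
        }

        var id = _idleSessionIds[_nextIdleIndex++];

        // The candidate may have become active or been removed since the
        // last sort, so re-check before closing it. Its *position* in the
        // list can still be stale, which is the tradeoff described above.
        if (_sessions.TryGetValue(id, out var candidate) && candidate.IsIdle &&
            _sessions.TryRemove(id, out removed))
        {
            return true;
        }
    }
}
```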
In my testing on a 16-core/64GB VM, a 100,000 idle session limit (the old default) caused the server process to consume 2-3 GB of memory according to Task Manager and limited the new session creation rate to about 60 sessions/second after reaching the `MaxIdleSessionCount`. At a 10,000 idle session limit (the new default), process memory usage dropped to about 300 MB and the session creation rate increased to about 900 sessions/second. And at an even lower 1,000 idle session limit, process memory usage dropped further to about 180 MB and the session creation rate increased again to about 5,000 sessions/second. All of these numbers are stable over repeated runs after reaching the `MaxIdleSessionCount`.
MaxIdleSessionCount = 10,000 (New default)
MaxIdleSessionCount = 100,000 (Old default)
MaxIdleSessionCount = 1,000 (Lower than default)
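For reference, lowering the cap should be a one-line options change. A hedged sketch: `MaxIdleSessionCount` is the option discussed in this PR, while `AddMcpServer`, `WithHttpTransport`, and `MapMcp` are assumed hosting entry points here:

```csharp
// Hypothetical Program.cs showing where MaxIdleSessionCount is configured.
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMcpServer()
    .WithHttpTransport(options =>
    {
        // Trade idle-session capacity for lower memory usage and a faster
        // session creation rate once the cap is reached (numbers above).
        options.MaxIdleSessionCount = 1_000;
    });

var app = builder.Build();
app.MapMcp();
app.Run();
```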
Single Session Tool Call
The `MaxIdleSessionCount` has no apparent effect on this test, and I wouldn't expect it to, since we still look up existing sessions the same way we did previously.