fix: add LRU eviction to session/replay caches to reduce RAM usage #40
Closed
Clav3rbot wants to merge 1 commit into adn8naiagent:dev from
Conversation
…endpoint

- Replace unbounded _session_cache dict with an OrderedDict LRU (max 2, configurable via the SESSION_CACHE_MAX env var). Each fastf1 Session holds 200-800 MB; without eviction, memory grows indefinitely.
- Replace unbounded _replay_cache dict with an OrderedDict LRU (max 3, configurable via the REPLAY_CACHE_MAX env var).
- Add clear_session_cache() and clear_replay_cache() helpers with gc.collect().
- Add a POST /api/admin/cache/clear endpoint to manually release all in-memory caches.
Owner
Apologies @Clav3rbot, I had picked up this issue and was fixing it along with a few of the other items you've raised as PRs. I have fixed it so the cache is evicted once a session is no longer actively used.
What
Adds LRU eviction policies to the two largest in-memory caches and a manual cache-clear admin endpoint.
Why
The backend consumes 1-2 GB of RAM that is never freed, even after all clients disconnect. The root cause is two unbounded caches:
- _session_cache in services/f1_data.py: stores full fastf1.core.Session objects (200-800 MB each) indefinitely.
- _replay_cache in routers/replay.py: stores all replay frames for every viewed session indefinitely.

As users browse different sessions, memory grows without bound and is never reclaimed.
How
- _session_cache: replaced the plain dict with an OrderedDict-based LRU. Default max 2 sessions (configurable via the SESSION_CACHE_MAX env var). Oldest sessions are evicted + gc.collect() on insert.
- _replay_cache: same OrderedDict-based LRU, default max 3 (configurable via the REPLAY_CACHE_MAX env var).
- clear_session_cache() / clear_replay_cache(): helper functions that clear all entries and trigger garbage collection.
- POST /api/admin/cache/clear: new endpoint to manually release all caches and reclaim memory on demand.

Expected Impact
RAM usage capped at ~1-1.6 GB max (2 sessions) instead of growing indefinitely. Memory is proactively freed when new sessions are loaded.