Explain caching implementation issues causing test failures #308
The recent type annotation changes replaced the dynamic `lru_cache` configuration with a static `@cache` decorator on `get_factual_value` and `get_target_value`, breaking cache size configuration and causing memory leaks.

**Issues identified**
- **Ignored `cache_size` parameter:** `@cache` is equivalent to `lru_cache(maxsize=None)`, so the configured size is silently ignored. Tests that explicitly pass `cache_size=0` to prevent memory exhaustion will fail.
- **Memory leak:** `@cache` on instance methods holds references to `self` in its cache keys, preventing garbage collection of the instances (see the sketch below). The original `_setup_caching()` avoided this by applying caching dynamically per instance.
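To illustrate the leak, here is a minimal sketch (the class name and method signature are placeholders, not the code from this PR) showing that a `@cache`-decorated method keeps its instances alive:

```python
import gc
import weakref
from functools import cache

class Constraint:
    @cache  # the cache lives on the class-level wrapper; `self` is part of every key
    def get_factual_value(self, key: str) -> float:
        return float(len(key))  # stand-in for an expensive lookup

c = Constraint()
c.get_factual_value("x")   # stores (c, "x") in the shared, unbounded cache
ref = weakref.ref(c)
del c
gc.collect()
print(ref() is None)       # False: the cache entry still pins the instance
```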
**Context**

The original implementation applied caching in `__init__` (a sketch of the pattern follows). This allowed per-instance cache configuration and proper garbage collection, as documented in the code comments referencing https://rednafi.com/python/lru_cache_on_methods/.
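The original block was not preserved in this excerpt; the following is a sketch of the per-instance pattern described, assuming only the `cache_size` parameter and `_setup_caching()` name mentioned above (everything else is illustrative):

```python
from functools import lru_cache

class Constraint:
    def __init__(self, cache_size: int | None = 128):
        self._cache_size = cache_size
        self._setup_caching()

    def _setup_caching(self) -> None:
        # Wrap the *bound* methods on the instance, not the class: each
        # instance owns its cache (sized by `cache_size`), and the cache
        # is garbage-collected together with the instance.
        self.get_factual_value = lru_cache(maxsize=self._cache_size)(
            self.get_factual_value
        )
        self.get_target_value = lru_cache(maxsize=self._cache_size)(
            self.get_target_value
        )

    def get_factual_value(self, key: str) -> float:
        return float(len(key))  # stand-in for an expensive lookup

    def get_target_value(self, key: str) -> float:
        return float(len(key))  # stand-in for an expensive lookup
```

With this shape, `Constraint(cache_size=0)` wraps the methods with `lru_cache(maxsize=0)`, which caches nothing, so the memory-exhaustion tests pass trivially.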
The new decorator-based approach (sketched below) can neither support configurable cache sizes nor avoid holding references to constraint instances.
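For contrast, a sketch of the decorator-based shape being criticized (again with placeholder names), showing that the constructor's `cache_size` has no effect on the class-level cache:

```python
from functools import cache

class Constraint:
    def __init__(self, cache_size: int | None = 0):
        self.cache_size = cache_size  # stored, but the decorator below never sees it

    @cache  # bound at class-definition time; equivalent to lru_cache(maxsize=None)
    def get_factual_value(self, key: str) -> float:
        return float(len(key))  # stand-in for an expensive lookup

c = Constraint(cache_size=0)  # caller asks for no caching...
c.get_factual_value("x")
print(Constraint.get_factual_value.cache_info().maxsize)  # ...but prints None (unbounded)
```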