feat: Add optional Redis caching infrastructure #803
Conversation
Add foundation for Redis caching support with fastapi-cache2:
- Install fastapi-cache2[redis] dependency
- Add Redis configuration to .env.example
- Add Redis settings to Settings class with redis_url property
- Default to disabled (REDIS_ENABLED=false)
- Falls back to in-memory cache when disabled

All tests pass. No breaking changes.
- Add cache initialization in lifespan function
- Support Redis backend when REDIS_ENABLED=true
- Gracefully fall back to InMemory backend if Redis unavailable
- Add proper cleanup: close Redis connection on shutdown
- Add CACHE logging category for cache operations
- Update logging test to include CACHE category

Cache backend logs initialization type (Redis or InMemory). All tests pass. No breaking changes.
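A rough sketch of this lifespan wiring, assuming fastapi-cache2's `FastAPICache` and backend classes plus the redis-py asyncio client; the module-level constants stand in for the project's settings object, and the exact code in the PR may differ:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from fastapi_cache import FastAPICache
from fastapi_cache.backends.inmemory import InMemoryBackend
from fastapi_cache.backends.redis import RedisBackend
from redis.asyncio import Redis
from redis.exceptions import RedisError

REDIS_ENABLED = True  # assumption: stands in for the REDIS_ENABLED setting
REDIS_URL = "redis://localhost:6379/0"  # assumption: stands in for settings.redis_url


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Initialise the cache backend on startup and clean up on shutdown."""
    redis_client = None
    if REDIS_ENABLED:
        try:
            redis_client = Redis.from_url(REDIS_URL, decode_responses=False)
            await redis_client.ping()  # fail fast if Redis is unreachable
            FastAPICache.init(RedisBackend(redis_client), prefix="fastapi-cache")
        except (RedisError, OSError):
            # graceful fallback: keep serving, just with an in-memory cache
            FastAPICache.init(InMemoryBackend())
            redis_client = None
    else:
        FastAPICache.init(InMemoryBackend())
    yield
    if redis_client:
        await redis_client.close()  # close the Redis connection on shutdown
```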
Create reusable cache infrastructure:
- Add app/cache module with decorators, key builders, invalidation
- cached() decorator with project defaults from settings
- user_scoped_key_builder for user-specific caching
- paginated_key_builder for list endpoints
- user_paginated_key_builder for combined scenarios
- Cache invalidation helpers for users, API keys, patterns
- Fix Redis client close method (close() not aclose())

All utilities follow project patterns (Google docstrings, type hints). All tests pass. Ready for endpoint integration.
- Add PickleCoder support to cache decorator for SQLAlchemy models
- Apply caching to GET /users/me with 300s TTL
- Add cache invalidation when user is edited
- Fix key builder parameter names to match fastapi-cache interface
- Add cache initialization fixture to test suite for InMemory backend
- All tests pass (566 passed, 99% coverage)

The key insight was that SQLAlchemy ORM models require PickleCoder instead of the default JsonCoder for proper serialization, and that key builder functions must use parameter names without underscores (response, args, kwargs) to match fastapi-cache's calling convention.
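A hedged sketch of what the cached endpoint wiring could look like, using the `cached` and `user_scoped_key_builder` helpers this PR adds under `app/cache`; the handler body is a placeholder, not the project's real query:

```python
from fastapi import APIRouter, Request, Response

from app.cache import cached, user_scoped_key_builder  # helpers added in this PR

router = APIRouter(prefix="/users", tags=["Users"])


@router.get("/me")
@cached(expire=300, namespace="user", key_builder=user_scoped_key_builder)
async def get_my_user(request: Request, response: Response):
    """Request and Response must be in the signature so fastapi-cache can build
    the key from request.state.user and set the X-FastAPI-Cache header."""
    # placeholder body: the real endpoint returns the authenticated User ORM row
    return {"detail": "placeholder"}
```

Note the decorator order: the route decorator comes first and `@cached()` sits directly above the function, so the cached wrapper is what gets registered with the router.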
- Set PickleCoder as default in cached() wrapper for SQLAlchemy ORM models
- Remove explicit coder parameter from endpoints (cleaner API)
- Update docstring to reflect new default and when to override
- Pickle is ~66% faster than JSON for Python objects
- All our cached endpoints return ORM models, not Pydantic models

This simplifies the caching API since all our priority endpoints (GET /users/me, /users/, /users/keys) return SQLAlchemy models which require PickleCoder anyway. Trade-off: Pickle produces ~2x larger payloads than JSON, but the faster serialization/deserialization for complex objects and the ability to cache any Python type make this worthwhile for our use case.
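A minimal sketch of such a wrapper around fastapi-cache2's `cache` decorator, with `PickleCoder` as the default coder; the hard-coded TTL stands in for the settings-driven default described above:

```python
from collections.abc import Callable
from typing import Any

from fastapi_cache.coder import Coder, PickleCoder
from fastapi_cache.decorator import cache

DEFAULT_TTL = 300  # assumption: stands in for settings.cache_default_ttl


def cached(
    expire: int | None = None,
    coder: type[Coder] | None = None,
    **kwargs: Any,
) -> Callable:
    """Project default coder is PickleCoder so SQLAlchemy ORM models round-trip;
    pass a JSON coder explicitly for plain JSON-friendly return values."""
    return cache(
        expire=expire if expire is not None else DEFAULT_TTL,
        coder=coder or PickleCoder,
        **kwargs,
    )
```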
Previously cache keys were structured as "user:get_my_user:123" but
invalidation tried to clear namespace "user:123", causing a mismatch.
Changes:
- Restructure cache keys to place user_id in namespace: "user:123:get_my_user"
- Update user_scoped_key_builder: namespace:user_id:func_name
- Update user_paginated_key_builder: namespace:user_id:func_name:page:N:size:M
- Update invalidate_api_keys_cache to match new structure
Now when PUT /users/{user_id} calls invalidate_user_cache(123), it
clears namespace "user:123" which matches all cached keys for that user.
This fixes the issue where updating a user via PUT endpoint didn't
invalidate the cached GET /users/me data.
Note: Browser/HTTP caching is separate from server-side Redis caching.
Swagger UI may still show cached responses due to browser Cache-Control
headers. Use DevTools "Disable cache" during testing. Real API clients
(mobile apps, SPAs, server-to-server) are unaffected.
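A hedged sketch of a key builder matching the key shape described in the Changes list above (namespace:user_id:func_name); the parameter names follow the fastapi-cache calling convention noted earlier, and the request.state.user / "anonymous" fallback follows the PR description:

```python
from typing import Any, Optional

from fastapi import Request, Response


def user_scoped_key_builder(
    func: Any,
    namespace: str = "",
    *,
    request: Optional[Request] = None,
    response: Optional[Response] = None,  # unused, but part of the calling convention
    args: tuple = (),
    kwargs: Optional[dict] = None,
) -> str:
    """Build keys as "namespace:user_id:func_name" so that clearing the
    "namespace:user_id" prefix invalidates everything cached for that user."""
    user = getattr(request.state, "user", None) if request else None
    user_id = getattr(user, "id", None) or "anonymous"
    return f"{namespace}:{user_id}:{func.__name__}"
```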
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
Signed-off-by: Grant Ramsay <[email protected]>
- Add users_list_key_builder for dual-mode endpoint (paginated/single)
- Cache both paginated list and single user lookups
- Add cache invalidation to all user modification endpoints:
- edit_user, delete_user, make_admin, ban_user, unban_user
- Add cache invalidation to user registration
- Update invalidate_user_cache to clear both namespaces:
- "user:{user_id}" for /users/me style cache
- "users:{user_id}" for single user lookup cache
Cache key structure:
- Single user: "users:{user_id}:single"
- Paginated list: "users:list:page:N:size:M"
All 566 tests pass, 99% coverage
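A minimal sketch of these invalidation helpers, assuming fastapi-cache2's `FastAPICache.clear(namespace=...)`; the namespace strings mirror the description above and may differ slightly from the project code:

```python
from fastapi_cache import FastAPICache


async def invalidate_user_cache(user_id: int) -> None:
    """Clear both namespaces that can hold data for one user."""
    await FastAPICache.clear(namespace=f"user:{user_id}")   # /users/me style cache
    await FastAPICache.clear(namespace=f"users:{user_id}")  # single user lookup cache


async def invalidate_users_list_cache() -> None:
    """Clear the paginated users list cache ("users:list:page:N:size:M" keys)."""
    await FastAPICache.clear(namespace="users:list")
```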
Add comprehensive cache logging to debug and monitor cache behavior:
- Add CacheLoggingMiddleware to log cache hits, misses, and timing
- Log incoming Cache-Control headers that bypass cache
- Add detailed key builder logging showing namespace and generated keys
- All cache activity logging uses DEBUG level for clean production logs
- Cache invalidation operations remain INFO level (important events)
The middleware revealed that browser Cache-Control: no-cache headers
were causing all requests from Swagger UI to bypass cache (expected
behavior). Terminal requests without that header show proper cache
hits with ~15x speedup (30ms -> 2ms).
Cache is confirmed working:
- GET /users/me: 30ms miss -> 2ms hit
- GET /users/: 25ms miss -> 2ms hit
- Invalidation on PUT /users/{id} works correctly
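A sketch of what such a logging middleware could look like; it reads the X-FastAPI-Cache header that fastapi-cache sets (as in the sequence diagram below) and uses print() as a stand-in for the project's DEBUG-level CACHE logger:

```python
import time

from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
from starlette.requests import Request
from starlette.responses import Response


class CacheLoggingMiddleware(BaseHTTPMiddleware):
    """Log cache HIT/MISS and request duration for cached endpoints."""

    async def dispatch(
        self, request: Request, call_next: RequestResponseEndpoint
    ) -> Response:
        start = time.perf_counter()
        response = await call_next(request)
        duration_ms = (time.perf_counter() - start) * 1000

        cache_status = response.headers.get("X-FastAPI-Cache")
        if cache_status is not None:
            status = "HIT" if cache_status.upper() == "HIT" else "MISS"
            # stand-in for the project's DEBUG-level CACHE category logger
            print(
                f"CACHE {status}: {request.method} "
                f"{request.url.path} ({duration_ms:.2f}ms)"
            )
        return response
```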
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
📝 Walkthrough

Adds an optional response caching subsystem (in-memory or Redis), including a project-default …

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client
participant App as FastAPI App
participant Middleware as CacheLoggingMiddleware
participant Decorator as `@cached`
participant Cache as FastAPICache (Redis/InMemory)
participant Handler as Route Handler
participant DB as Database
rect rgb(232,245,255)
Note over Client,Cache: Cache Hit
Client->>App: GET /users
App->>Middleware: incoming request
Middleware->>Decorator: forward to cached decorator/route
Decorator->>Cache: build key & get
Cache-->>Decorator: HIT
Decorator-->>Middleware: return cached response (X-FastAPI-Cache: HIT)
Middleware->>Middleware: log HIT & duration
Middleware-->>Client: response
end
rect rgb(255,244,230)
Note over Client,DB: Cache Miss
Client->>App: GET /users
App->>Middleware: incoming request
Middleware->>Decorator: forward to cached decorator/route
Decorator->>Cache: build key & get
Cache-->>Decorator: MISS
Decorator->>Handler: call route handler
Handler->>DB: query
DB-->>Handler: results
Handler-->>Decorator: response
Decorator->>Cache: set key with TTL
Decorator-->>Middleware: return response (X-FastAPI-Cache: MISS)
Middleware->>Middleware: log MISS & duration
Middleware-->>Client: response
end
rect rgb(240,255,244)
Note over Client,Cache: Invalidation (mutation)
Client->>App: PATCH /users/{id}
App->>Handler: route handler
Handler->>DB: update user
DB-->>Handler: success
Handler->>Cache: call invalidate_user_cache / invalidate_users_list_cache
Cache-->>Handler: cleared
Handler-->>Client: response
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (5 passed)
✨ Finishing touches
Coverage summary from Codacy
See diff coverage on Codacy

Coverage variation details
Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch.

Diff coverage details
Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified.

See your quality gate settings
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/resources/user.py (1)
75-92: Cache invalidation for single-user lookups won't work — namespace mismatch.

The `@cached` decorator on `get_users` passes `namespace="users"`, and `users_list_key_builder` builds cache keys like `"users:123:single"`. However, `invalidate_user_cache` attempts to clear with `namespace=f"users:{user_id}"` (e.g., `"users:123"`). Since the actual namespace during caching is `"users"` (not `"users:{user_id}"`), the `FastAPICache.clear()` call won't match and won't invalidate these single-user cache entries. The user_id is embedded in the cache key itself by the key builder, not in the namespace parameter.

To fix, either:
- Clear with `namespace="users"` followed by a separate key pattern clear, or
- Use the actual namespace used during caching and build the invalidation key differently.
🧹 Nitpick comments (6)
.env.example (1)
131-147: Consider reordering Redis configuration keys.

The Redis configuration is comprehensive and well-documented. However, the static analysis tool flags that the keys are not in alphabetical order (lines 141-142). While this is purely cosmetic, maintaining alphabetical order can improve consistency.

🔎 Optional reordering for alphabetical consistency

     # Redis Cache Configuration (optional - falls back to in-memory if
     # not set)
     # Enable Redis caching for improved API performance on read-heavy
     # endpoints
     # When disabled, uses in-memory cache (suitable for single-instance
     # deployments)
     # See: docs/usage/caching.md for details
    +REDIS_DB=0
     REDIS_ENABLED=false
     REDIS_HOST=localhost
    +REDIS_PASSWORD=
     REDIS_PORT=6379
    -REDIS_PASSWORD=
    -REDIS_DB=0
     # Default cache TTL (Time To Live) in seconds
     # How long cached responses are stored before expiring
     # Individual endpoints may override this value
     CACHE_DEFAULT_TTL=300

Based on learnings, static analysis hints for .env file ordering.
app/config/settings.py (1)
129-136: Consider validation for redis_password when redis_enabled is True.

The empty default for `redis_password` could be a security risk in production if Redis is enabled without authentication. Consider adding a validator that warns or fails when `redis_enabled=True` but `redis_password` is empty.

🔎 Suggested validator

Add this validator after the existing field validators:

    @field_validator("redis_password")
    @classmethod
    def validate_redis_password(cls: type[Settings], value: str, info) -> str:
        """Warn if Redis is enabled without a password."""
        # Note: Can't access redis_enabled here due to validation order
        # Consider a model_validator instead if needed
        if not value:
            logger.warning(
                "Redis password is empty. Ensure Redis authentication "
                "is disabled or set REDIS_PASSWORD in production."
            )
        return value

Or use a `model_validator` to check both fields together:

    from pydantic import model_validator

    @model_validator(mode='after')
    def validate_redis_config(self) -> 'Settings':
        """Ensure Redis password is set when Redis is enabled."""
        if self.redis_enabled and not self.redis_password:
            logger.warning(
                "Redis is enabled without authentication. "
                "Set REDIS_PASSWORD in production environments."
            )
        return self

app/resources/auth.py (1)
75-76: Cache invalidation correctly placed.

The invalidation call is properly positioned after successful registration and before returning the response. This ensures the users list cache reflects the newly registered user.
Optional: Add error handling for cache invalidation
If cache invalidation failures should not affect the registration response, consider wrapping it in a try-except:
    # Invalidate users list cache after registration
    try:
        await invalidate_users_list_cache()
    except Exception:
        # Log but don't fail the request
        category_logger.warning(
            "Failed to invalidate users list cache after registration",
            LogCategory.CACHE,
        )

However, this may not be necessary if `invalidate_users_list_cache()` is designed to never raise exceptions.

app/main.py (1)
87-94: Consider catching additional Redis exceptions.

The exception handling catches `ConnectionError` and `TimeoutError`, but Redis can raise other exceptions like `redis.exceptions.AuthenticationError` (wrong password) or `redis.exceptions.ResponseError`. These would currently propagate and crash the app instead of falling back gracefully.

🔎 Proposed fix to catch broader Redis exceptions

    - except (ConnectionError, TimeoutError) as e:
    + except Exception as e:
          logger.warning(
              "Failed to connect to Redis: %s. "
              "Falling back to in-memory cache.",
              e,
          )
          FastAPICache.init(InMemoryBackend())
          redis_client = None

Alternatively, import and catch specific Redis exceptions:

    from redis.exceptions import RedisError
    # ...
    except (ConnectionError, TimeoutError, RedisError) as e:
55-62: Consider removing or making the "NO CACHE HEADER" logging configurable.

This conditional logging for `/users` endpoints without cache headers appears to be debug-oriented and could generate noise in logs for non-cached operations (POST, PUT, DELETE) or when cache is disabled. Consider either:

- Removing this block if it was debugging code
- Making it conditional on a verbose/trace log level
- Broadening it to cover all cached endpoints rather than hardcoding `/users`

app/cache/invalidation.py (1)
79-102: Clarify `invalidate_pattern` behavior - namespace vs pattern matching.

The docstring suggests wildcard pattern matching (e.g., "user:*"), but `FastAPICache.clear(namespace=pattern)` clears keys under an exact namespace match, not wildcard patterns. This could mislead callers into expecting Redis-style glob patterns.

Consider either:
- Renaming to `invalidate_namespace` to reflect actual behavior
- Updating the docstring to clarify that this clears an exact namespace, not a glob pattern
- Implementing actual pattern-based invalidation if Redis wildcards are needed

🔎 Proposed docstring clarification

      async def invalidate_pattern(pattern: str) -> None:
    -     """Invalidate cache keys matching a pattern.
    +     """Invalidate cache keys under a namespace.
    -     WARNING: This clears the entire namespace matching the pattern.
    -     Use with caution.
    +     Clears all cache entries stored under the given namespace.
          Args:
    -         pattern: Cache key pattern to match (e.g., "user:*").
    +         pattern: Cache namespace to clear.
          Example:
              ```python
    -         # Clear all user-related caches
    -         await invalidate_pattern("user:*")
    +         # Clear all caches under "user:123" namespace
    +         await invalidate_pattern("user:123")
              ```
    -
    -     Note:
    -         Pattern matching depends on the cache backend. Redis supports
    -         wildcards, but InMemoryBackend may not.
          """
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
- .env.example
- .gitignore
- app/cache/__init__.py
- app/cache/decorators.py
- app/cache/invalidation.py
- app/cache/key_builders.py
- app/config/log_config.py
- app/config/settings.py
- app/main.py
- app/middleware/cache_logging.py
- app/resources/auth.py
- app/resources/user.py
- pyproject.toml
- requirements-dev.txt
- requirements.txt
- tests/conftest.py
- tests/unit/test_logging.py
🧰 Additional context used
🧬 Code graph analysis (6)
app/resources/auth.py (1)
app/cache/invalidation.py (1)
invalidate_users_list_cache(39-52)
app/cache/decorators.py (1)
app/config/settings.py (1)
get_settings(240-242)
app/cache/key_builders.py (2)
app/config/log_config.py (1)
LogCategory(12-27)app/logs.py (1)
debug(41-44)
app/resources/user.py (4)
app/cache/decorators.py (1)
cached(12-58)app/cache/invalidation.py (2)
invalidate_user_cache(12-36)invalidate_users_list_cache(39-52)app/cache/key_builders.py (2)
user_scoped_key_builder(13-50)users_list_key_builder(84-119)app/database/db.py (1)
get_database(78-81)
app/cache/invalidation.py (2)
app/config/log_config.py (1)
LogCategory(12-27)app/logs.py (1)
info(26-29)
app/main.py (3)
app/middleware/cache_logging.py (1)
CacheLoggingMiddleware(15-64)app/config/settings.py (2)
get_settings(240-242)redis_url(143-154)app/logs.py (2)
info(26-29)warning(36-39)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 141-141: [UnorderedKey] The REDIS_PASSWORD key should go before the REDIS_PORT key
(UnorderedKey)
[warning] 142-142: [UnorderedKey] The REDIS_DB key should go before the REDIS_ENABLED key
(UnorderedKey)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.13)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.14)
🔇 Additional comments (18)
.gitignore (1)
227-227: LGTM! Adding `NOTES.md` to the .gitignore list is a reasonable and consistent practice for local developer notes. The placement within the "AI Agent stuff" section aligns naturally with similar temporary/local development files already being ignored.

app/config/log_config.py (1)

26-27: LGTM! The new `CACHE` logging category is properly integrated following the existing pattern, and the `ALL` flag correctly includes it.

requirements.txt (1)

18-21: Verify the async-timeout constraint widening. The constraint for `async-timeout` was widened from `python_full_version < '3.11'` to `python_full_version <= '3.11.2'`. This change affects compatibility. Please verify this is intentional and aligns with the Redis package requirements.

requirements-dev.txt (2)

354-355: Verify Redis version is current and secure. Same as the production requirements, `redis==4.6.0` should be verified for compatibility and security. Refer to the verification request in requirements.txt for the redis package version check.
23-26: Verify the async-timeout constraint change.The constraint widening mirrors the change in requirements.txt. Ensure this is compatible with your development and testing requirements.
pyproject.toml (1)
35-35: No action required — fastapi-cache2[redis] is at the latest stable version with no known security vulnerabilities.Version 0.2.2 is the current latest release on PyPI (Jul 24, 2024) and has no publicly reported vulnerabilities in Snyk or Safety DB.
app/cache/__init__.py (1)
1-28: LGTM!The module structure is clean with well-organized imports and a properly defined public API surface.
app/cache/decorators.py (1)
12-58: LGTM!The decorator wrapper correctly applies project-wide defaults and properly delegates to the fastapi-cache library. The docstring example is clear and helpful.
app/cache/key_builders.py (4)
13-50: LGTM!The key builder correctly extracts user ID from request state and constructs a namespaced cache key. The "anonymous" fallback is a reasonable default for unauthenticated requests.
53-81: LGTM!The paginated key builder correctly extracts pagination parameters with sensible defaults.
84-119: LGTM - dual-mode behavior is well-documented.The key builder correctly handles both single user lookup and paginated list modes with distinct key patterns. The docstring clearly explains the behavior.
122-155: LGTM!The user-paginated key builder correctly combines user ID and pagination parameters for user-specific paginated endpoints.
tests/unit/test_logging.py (1)
29-41: LGTM!The test correctly updates the expected composition of
LogCategory.ALLto include the newLogCategory.CACHEflag, ensuring the test stays in sync with the updated enum definition inapp/config/log_config.py.app/main.py (1)
73-104: Cache lifecycle management looks good.The initialization logic properly:
- Tests Redis connectivity with
ping()before committing to the backend- Falls back gracefully to in-memory when Redis is unavailable or disabled
- Cleans up the Redis connection on shutdown
app/middleware/cache_logging.py (1)
18-54: LGTM!The middleware correctly:
- Measures request duration with appropriate precision
- Logs cache hits/misses at DEBUG level (non-intrusive)
- Handles the
X-FastAPI-Cacheheader case-insensitively- Passes through the response unchanged
app/cache/invalidation.py (1)
12-52: LGTM!The invalidation utilities are well-structured:
invalidate_user_cachecorrectly clears both user-scoped namespaces (user:{id}andusers:{id}) matching the key buildersinvalidate_users_list_cacheclears the paginated list namespace- Logging provides visibility into cache invalidation events
- Docstrings include helpful usage examples
app/resources/user.py (2)
122-125: LGTM - consistent cache invalidation pattern.All mutating endpoints properly invalidate both user-specific and list caches after the database operation completes. This ensures cache coherence across:
- Role changes (
make_admin)- Ban/unban operations
- User edits
- User deletions
Also applies to: 161-164, 183-186, 204-207, 224-227
101-109: LGTM!The caching setup for
/users/meis appropriate with user-scoped keys. Thenoqa: ARG001comments correctly suppress warnings for theResponseparameter required by the cache decorator.
Signed-off-by: Grant Ramsay <[email protected]>
The redis_url property now properly URL-encodes the password using urllib.parse.quote() to prevent connection failures when passwords contain special characters like @, :, /, #, ?, etc. Example: password "p@ss:word/123" becomes "p%40ss%3Aword%2F123"
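A small standalone sketch of that encoding step (a plain function rather than the project's Settings property, to keep it self-contained):

```python
from urllib.parse import quote


def build_redis_url(host: str, port: int, db: int, password: str = "") -> str:
    """URL-encode the password so characters like @ : / # ? survive in the URL."""
    if password:
        return f"redis://:{quote(password, safe='')}@{host}:{port}/{db}"
    return f"redis://{host}:{port}/{db}"


print(build_redis_url("localhost", 6379, 0, "p@ss:word/123"))
# -> redis://:p%40ss%3Aword%2F123@localhost:6379/0
```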
Updated test cache fixture to avoid repeated FastAPICache.init() calls that can cause race conditions:
- Check if backend is already InMemoryBackend before calling init()
- Use getattr to safely access _backend without triggering assertion
- Always clear cache to ensure test isolation
- Add explanatory comments about the approach

This implements Option B from the PR review: conditional init check rather than session-scoped fixture (which has pytest-asyncio compatibility issues).

Add model validator to warn when Redis is enabled without a password, helping prevent accidental production deployments with insecure Redis configurations. Changes:
- Add validate_redis_security model validator
- Warns when redis_enabled=True but redis_password is empty
- Uses WARNING level (not error) to allow valid scenarios: development environments (localhost), private networks/containers, Redis using ACL or TLS authentication
- Import model_validator from pydantic

The validator checks both fields together after all field validation completes, ensuring redis_enabled and redis_password are available.

- Add Redis password URL-encoding to handle special characters safely
- Add error handling to cache invalidation functions for graceful degradation (app continues with stale cache if Redis fails)
- Add Redis security warning when password is empty (logs to both console and file via dual logger approach)
Signed-off-by: Grant Ramsay <[email protected]>
- Create api_keys_list_key_builder for list endpoints (handles both regular user and admin endpoints)
- Create api_key_single_key_builder for individual key lookups
- Add @cached decorator to all 3 GET endpoints:
  * GET /users/keys (list for authenticated user)
  * GET /users/keys/by-user/{user_id} (admin list)
  * GET /users/keys/{key_id} (single key lookup)
- Add cache invalidation to all mutation operations:
  * POST /users/keys (create) - invalidates list cache
  * PATCH endpoints (update) - invalidates list + single key cache
  * DELETE endpoints (delete) - invalidates list + single key cache
- Use contextlib.suppress for graceful cache failure handling
- 5-minute TTL matches user endpoints for consistency

Remove the conditional logging for /users endpoints without cache headers. This was debug code that generated noise in logs for:
- Non-cached operations (POST, PUT, DELETE mutations)
- Disabled cache scenarios
- Expected behavior (mutations never have cache headers)

Existing CACHE HIT/MISS logging provides sufficient visibility into cache behavior without false positives.

The function name and docstring implied glob pattern matching (e.g., "user:*"), but the implementation uses namespace prefix clearing, not pattern matching. Changes:
- Rename invalidate_pattern → invalidate_namespace
- Update docstring to clarify it clears a namespace prefix
- Remove misleading wildcard examples
- Add note that it's useful for custom endpoint groups without dedicated invalidation helpers

This utility function is now accurately named and documented for template users extending the codebase with new endpoints.
Force-pushed from 9b933ec to 7e89d9e
Signed-off-by: Grant Ramsay <[email protected]>
Gives us 100% branch and total coverage.
Signed-off-by: Grant Ramsay <[email protected]>
Signed-off-by: Grant Ramsay <[email protected]>
Force-pushed from 7eac58c to 656af88
…nsate Signed-off-by: Grant Ramsay <[email protected]>
Changes caching from always-on to opt-in for consistency with other optional features (metrics, admin panel).
- Add CACHE_ENABLED setting (default: false)
- Make @cached() decorator no-op when disabled
- Conditionally initialize FastAPICache in lifespan
- Update .env.example with CACHE_ENABLED
- Add pytest env CACHE_ENABLED=true for tests
- Add test coverage for disabled cache scenarios

When CACHE_ENABLED=false: no cache initialization, decorator is no-op.
When CACHE_ENABLED=true: uses Redis or in-memory backend as before.
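A hedged sketch of the opt-in behaviour, extending the earlier wrapper sketch; the module-level flag stands in for settings.cache_enabled:

```python
from collections.abc import Callable
from typing import Any

from fastapi_cache.coder import PickleCoder
from fastapi_cache.decorator import cache

CACHE_ENABLED = False  # assumption: stands in for settings.cache_enabled
CACHE_DEFAULT_TTL = 300


def cached(expire: int | None = None, **kwargs: Any) -> Callable:
    """Return the real cache decorator when enabled, otherwise a pass-through."""
    if not CACHE_ENABLED:
        def passthrough(func: Callable) -> Callable:
            return func  # endpoint returned unchanged, so caching adds zero overhead
        return passthrough
    return cache(expire=expire or CACHE_DEFAULT_TTL, coder=PickleCoder, **kwargs)
```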
Adds complete documentation for the opt-in caching system:
- New docs/usage/caching.md with full caching guide
- Updated docs/index.md with caching features
- Updated docs/usage/configuration/environment.md with cache settings
- Added to mkdocs.yml navigation
- Added testing notes (tests run with CACHE_ENABLED=true)

Documentation covers:
- Enabling caching (disabled by default)
- Configuration options (CACHE_ENABLED, REDIS_ENABLED, etc.)
- In-memory vs Redis backends
- Using @cached() decorator
- Cache invalidation utilities
- Cache key builders
- Monitoring and troubleshooting
- Performance impact
- Testing configuration
Force-pushed from 82acd78 to 2e86b1f
Replace inflated performance claims with realistic, tested numbers:
- Changed "50-100x faster" to "5-30x faster" throughout docs
- Updated performance table with extrapolated numbers for 50 users/10 keys
- Added context about dataset size and query complexity
- Updated response time ranges from "100-500ms" to "5-60ms"

Performance benchmarks based on actual testing with 11 users:
- GET /users/: ~15ms MISS / ~2ms HIT (~7.5x faster)
- GET /users/keys: ~40ms MISS / ~1.5ms HIT (~27x faster)
- GET /users/me: ~3ms MISS / ~1ms HIT (~3x faster)
Actionable comments posted: 2
🧹 Nitpick comments (6)
.env.example (1)
94-112: Optional: Consider consistent comment line wrapping.

The LOG_CATEGORIES documentation has inconsistent line wrapping (lines 94-95 and 108-111 break mid-sentence). While functional, consistent wrapping would improve readability.

🔎 Example of consistent wrapping

    -# ALL - Log everything (recommended for
    -# development/debugging)
    +# ALL - Log everything (recommended for development/debugging)
38-67: Test correctly validates cache HIT scenario.

The test properly creates an admin user, generates a token, and verifies the cache HIT header on the second request. The approach is sound and achieves the test objective.

Optional: Consider reusing admin_user directly

Line 54 creates a new `User` object with just the ID for token generation. While this works correctly, you could simplify by reusing the existing `admin_user` object:

      admin_user_id = admin_user.id
      await test_db.commit()
    - token = AuthManager.encode_token(User(id=admin_user_id))
    + token = AuthManager.encode_token(admin_user)

The current approach is defensive and valid (avoiding potential session/lazy-loading issues), so this is purely optional.
tests/unit/test_cache.py (1)
74-268: Comprehensive key builder test coverage.The tests thoroughly validate all key builders with various scenarios:
- Authenticated vs unauthenticated users
- Query parameters (page, size, user_id, key_id)
- Admin vs regular user contexts
All assertions verify correct cache key format construction.
Optional: More explicit mock configuration on line 102
Line 102 uses `del mock_request.state.user` to simulate absence of a user. While this likely works with MagicMock, a more explicit approach would be:

      def test_user_scoped_key_builder_without_user(self) -> None:
          """Test key builder without authenticated user."""
          mock_request = MagicMock(spec=Request)
    -     # No user attribute
    -     del mock_request.state.user
    +     # Configure state to not have user attribute
    +     type(mock_request.state).user = PropertyMock(side_effect=AttributeError)

Or simply configure the spec more precisely. The current approach should work but is slightly fragile with MagicMock's auto-vivification behavior.
docs/usage/caching.md (1)
1-480: Outstanding comprehensive documentation.

This caching documentation is thorough, well-organized, and provides practical guidance for users. The content covers:
- Clear overview and enabling instructions
- Detailed configuration with examples
- Usage patterns with code examples
- Cache invalidation strategies
- Monitoring and debugging techniques
- Troubleshooting guidance
The performance benchmarks are realistic and helpful for setting expectations.
Address markdown formatting issues flagged by linters
Several minor formatting issues were flagged by markdownlint:
1. Add blank lines around fenced code blocks (lines 41, 50, 60, 65):

     1. Ensure Redis is running and accessible:
    +
     ```bash
     # Install Redis (Linux)
     sudo apt install redis-server
     # Or via Docker
     docker run -d -p 6379:6379 redis:alpine
     ```
    +
     2. Add to your `.env` file:

2. Add language specification to log output example (line 381):

     Example log output:
    -```
    +```text
     2026-01-02 14:23:45 | CACHE | CACHE MISS: GET /users/ (52.34ms)
     2026-01-02 14:23:50 | CACHE | CACHE HIT: GET /users/ (1.89ms)

These changes improve markdown consistency and tooling compatibility.

app/main.py (1)

79-117: LGTM! Well-structured cache initialization with proper fallback behavior.

The implementation correctly:
- Provides a security warning when Redis is enabled without authentication
- Attempts Redis connection with a health check (`ping()`)
- Falls back gracefully to in-memory cache on connection failures
- Logs the outcome of each initialization path

One minor observation: the `OSError` exception could also occur during Redis connection attempts (network-level failures). Consider adding it to the exception tuple for completeness.

🔎 Optional: Broaden exception handling

    - except (ConnectionError, TimeoutError, RedisError) as e:
    + except (ConnectionError, TimeoutError, RedisError, OSError) as e:

app/resources/api_key.py (1)
198-203: Consider extracting duplicated invalidation logic.

The cache invalidation pattern (list + single key) is duplicated between `_update_api_key_common` (lines 145-150) and `_delete_api_key_common` (lines 198-203). Consider extracting to a helper function for consistency and maintainability.

🔎 Optional: Extract invalidation helper

    async def _invalidate_api_key_caches(user_id: int, key_id: UUID) -> None:
        """Invalidate both list and single API key caches."""
        await invalidate_api_keys_cache(user_id)
        with suppress(RedisError, OSError, RuntimeError):
            await FastAPICache.clear(namespace=f"apikey:{user_id}:{key_id}")

Then use it in both functions:

    - # Invalidate caches: list cache and the specific key cache
    - await invalidate_api_keys_cache(user_id)
    - # Also invalidate the single key cache (uses singular "apikey" namespace)
    - # Graceful degradation - cache will expire via TTL if clear fails
    - with suppress(RedisError, OSError, RuntimeError):
    -     await FastAPICache.clear(namespace=f"apikey:{user_id}:{key_id}")
    + await _invalidate_api_key_caches(user_id, key_id)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (23)
- .env.example
- .gitignore
- .pre-commit-config.yaml
- app/cache/__init__.py
- app/cache/decorators.py
- app/cache/invalidation.py
- app/cache/key_builders.py
- app/config/settings.py
- app/main.py
- app/middleware/cache_logging.py
- app/resources/api_key.py
- docs/index.md
- docs/usage/caching.md
- docs/usage/configuration/environment.md
- mkdocs.yml
- pyproject.toml
- requirements-dev.txt
- tests/conftest.py
- tests/integration/test_cache.py
- tests/integration/test_user_routes.py
- tests/unit/test_cache.py
- tests/unit/test_lifespan.py
- tests/unit/test_validation.py
✅ Files skipped from review due to trivial changes (1)
- docs/index.md
🚧 Files skipped from review as they are similar to previous changes (3)
- pyproject.toml
- .gitignore
- app/cache/__init__.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
📚 Learning: 2026-01-01T19:57:45.447Z
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
Applied to files:
- requirements-dev.txt
- docs/usage/caching.md
🧬 Code graph analysis (11)
tests/integration/test_cache.py (5)
app/database/helpers.py (1)
hash_password(18-34)app/managers/auth.py (1)
encode_token(50-76)app/models/enums.py (1)
RoleType(7-11)app/models/user.py (1)
User(10-36)tests/conftest.py (2)
client(88-101)test_db(81-84)
tests/integration/test_user_routes.py (3)
tests/integration/test_api_key_routes.py (1)
get_test_user(25-40)app/models/user.py (1)
User(10-36)app/managers/auth.py (1)
encode_token(50-76)
tests/unit/test_cache.py (3)
app/cache/decorators.py (1)
cached(12-69)app/cache/invalidation.py (4)
invalidate_api_keys_cache(82-112)invalidate_namespace(115-145)invalidate_user_cache(18-54)invalidate_users_list_cache(57-79)app/cache/key_builders.py (6)
api_key_single_key_builder(205-241)api_keys_list_key_builder(158-202)paginated_key_builder(53-81)user_paginated_key_builder(122-155)user_scoped_key_builder(13-50)users_list_key_builder(84-119)
app/middleware/cache_logging.py (2)
app/config/log_config.py (1)
LogCategory(12-27)app/logs.py (1)
debug(41-44)
app/resources/api_key.py (3)
app/cache/key_builders.py (2)
api_key_single_key_builder(205-241)api_keys_list_key_builder(158-202)app/cache/decorators.py (1)
cached(12-69)app/cache/invalidation.py (1)
invalidate_api_keys_cache(82-112)
tests/unit/test_validation.py (1)
app/config/settings.py (2)
Settings(30-243)redis_url(146-161)
app/cache/key_builders.py (2)
app/config/log_config.py (1)
LogCategory(12-27)app/logs.py (1)
debug(41-44)
app/cache/invalidation.py (2)
app/config/log_config.py (1)
LogCategory(12-27)app/logs.py (2)
info(26-29)error(31-34)
app/main.py (4)
app/config/log_config.py (1)
get_log_config(117-122)app/config/settings.py (1)
redis_url(146-161)app/middleware/cache_logging.py (1)
CacheLoggingMiddleware(14-54)app/logs.py (2)
warning(36-39)info(26-29)
app/cache/decorators.py (1)
app/config/settings.py (1)
get_settings(247-249)
tests/unit/test_lifespan.py (2)
app/config/settings.py (1)
redis_url(146-161)app/main.py (1)
lifespan(58-124)
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 145-145: [UnorderedKey] The REDIS_PASSWORD key should go before the REDIS_PORT key
(UnorderedKey)
[warning] 146-146: [UnorderedKey] The REDIS_DB key should go before the REDIS_ENABLED key
(UnorderedKey)
🪛 LanguageTool
docs/usage/caching.md
[grammar] ~316-~316: Use a hyphen to join words.
Context: ...user list endpoints. Handles both single user lookups and paginated lists. **`ap...
(QB_NEW_EN_HYPHEN)
[style] ~459-~459: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...? - Is Redis running and accessible? - Is the @cached()` decorator AFTER the rou...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🪛 markdownlint-cli2 (0.18.1)
docs/usage/caching.md
41-41: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
50-50: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
60-60: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
65-65: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
381-381: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.14)
🔇 Additional comments (35)
.pre-commit-config.yaml (1)
30-30: LGTM!Adding
types-redisto mypy's additional dependencies correctly supports type checking for the new Redis integration..env.example (1)
131-151: LGTM! Comprehensive cache configuration documentation.The cache configuration block is well-documented with clear explanations of each setting, usage scenarios (in-memory vs Redis), and sensible defaults. The comments effectively guide users through the opt-in caching setup.
tests/unit/test_lifespan.py (2)
8-9: LGTM!Appropriate imports to support cache backend type checking in the new tests.
112-281: LGTM! Comprehensive cache lifecycle testing.The five new test methods thoroughly cover the caching initialization scenarios:
- Graceful fallback from Redis to in-memory on connection errors
- Correct backend selection based on settings
- Security warnings for missing authentication
- Proper cleanup of Redis connections
- Behavior when caching is disabled
The tests properly mock dependencies, verify logging output, and assert backend types. Excellent coverage of the caching integration.
tests/conftest.py (2)
10-11: LGTM!Appropriate imports to support the cache initialization and clearing fixture.
41-57: LGTM! Properly addresses past review feedback.The fixture correctly implements the recommended pattern from the previous review by:
- Only calling
FastAPICache.init()when the backend is not already anInMemoryBackend(lines 52-54)- Always clearing the cache for test isolation (line 57)
- Using
getattrto safely check the internal_backendattribute (line 52)This avoids the race condition risk of repeated initialization while ensuring each test starts with a clean cache state.
app/config/settings.py (3)
8-8: LGTM!Appropriate import for URL-encoding Redis passwords with special characters.
130-139: LGTM! Well-designed opt-in cache configuration.The cache settings provide a clean opt-in design with sensible defaults:
- Caching disabled by default (
cache_enabled=False)- Redis disabled by default with in-memory fallback
- Reasonable 5-minute default TTL
- Standard Redis connection parameters
This non-breaking approach allows gradual adoption without impacting existing deployments.
145-161: LGTM! Properly addresses past review feedback.The
redis_urlproperty correctly implements URL-encoding of the Redis password usingquote(self.redis_password, safe="")(line 156), preventing connection failures when passwords contain special characters like@,:, or/. The conditional logic appropriately handles both authenticated and non-authenticated Redis connections.tests/unit/test_validation.py (1)
199-269: LGTM! Comprehensive test coverage for Redis URL generation.The test suite thoroughly covers password encoding scenarios including empty passwords, simple passwords, special characters, and unicode. The special characters test correctly validates URL encoding (e.g.,
@→%40,:→%3A), and the unicode test appropriately uses format validation rather than exact encoding assertions.app/cache/invalidation.py (4)
18-54: LGTM! Proper multi-namespace invalidation with graceful error handling.The function correctly clears both user-scoped cache entries (
user:{user_id}) and single-user lookups (users:{user_id}), matching the key builder patterns. Error handling ensures the application continues functioning if cache invalidation fails.
57-79: LGTM! Clean implementation for list cache invalidation.The function properly clears the paginated users list cache with appropriate error handling.
82-112: LGTM! Consistent API keys cache invalidation.The function correctly clears user-specific API keys cache with the same graceful error handling pattern.
115-145: LGTM! Flexible namespace invalidation utility.The generic invalidation function provides a useful escape hatch for custom caching scenarios while maintaining the same error handling guarantees.
app/cache/key_builders.py (6)
13-50: LGTM! Clean user-scoped cache key generation.The function properly handles both authenticated and anonymous users, and the debug logging will be helpful for troubleshooting cache behavior.
53-81: LGTM! Straightforward pagination key builder.The function correctly incorporates pagination parameters into the cache key with sensible defaults.
84-119: LGTM! Flexible key builder for dual-mode endpoint.The function correctly handles both single-user lookup and paginated list modes by checking query parameters first, which is appropriate for GET requests.
122-155: LGTM! Effective combination of user scope and pagination.The function properly combines user identification with pagination parameters, enabling user-specific paginated cache invalidation.
158-202: LGTM! Handles both regular and admin endpoints correctly.The function appropriately checks kwargs first for the admin endpoint's path parameter before falling back to the authenticated user, with helpful debug logging.
205-241: LGTM! Precise cache key for individual API keys.The function correctly includes both user_id and key_id for precise cache targeting, with comprehensive debug logging.
tests/integration/test_user_routes.py (1)
262-278: LGTM! Improved test robustness with dynamic ID handling.The change from hardcoded IDs to dynamically obtained IDs makes the test more reliable and less dependent on database state. The
flush()call correctly ensures theadmin_user.idis populated before use.mkdocs.yml (1)
81-81: LGTM! Appropriate documentation navigation addition.The new caching documentation entry is properly positioned within the Usage section, making it easily discoverable for users.
docs/usage/configuration/environment.md (1)
341-480: Excellent documentation for the caching feature.The Redis and caching configuration section is comprehensive, well-structured, and provides clear guidance for different deployment scenarios (development vs production). The examples are practical, and the performance notes with realistic benchmarks help users understand the benefits. The automatic fallback behavior and testing configuration notes are particularly helpful.
tests/integration/test_cache.py (1)
19-37: LGTM - Clean coverage test.The test correctly verifies that the middleware processes requests with Cache-Control headers without errors. The explicit comment about coverage intent is helpful.
tests/unit/test_cache.py (2)
28-72: Well-structured decorator tests.Both tests correctly validate the decorator's behavior:
- Proper default TTL resolution when
expire=None- True no-op behavior (identity preservation) when caching is disabled
The use of
isfor identity checking in line 71 is exactly right.
270-397: Excellent invalidation function test coverage.The tests comprehensively validate:
- Correct namespace targeting for each invalidation function
- Multiple namespace clearing (user cache clears both
user:123andusers:123)- Graceful error handling for RedisError, OSError, and RuntimeError
The error handling tests are particularly important, ensuring the application degrades gracefully when cache operations fail.
app/middleware/cache_logging.py (1)
1-54: Clean and focused middleware implementation.The CacheLoggingMiddleware correctly:
- Measures request duration without impacting response flow
- Logs Cache-Control headers for debugging (debug level)
- Reports cache hits and misses via the X-FastAPI-Cache header
- Uses appropriate log categories (LogCategory.CACHE)
The case-insensitive comparison for "HIT" (line 42) is good defensive programming. The middleware has minimal overhead and provides valuable observability for cache performance.
app/main.py (2)
119-124: LGTM! Proper resource cleanup on shutdown.The Redis client is correctly closed only when it was successfully initialized, preventing errors during shutdown when using in-memory cache or when Redis connection failed.
172-173: LGTM! Middleware ordering is appropriate.
CacheLoggingMiddlewareis added afterLoggingMiddleware, so it will execute first in the request chain. This ensures cache hit/miss logging captures accurate timing before other logging middleware adds overhead.app/resources/api_key.py (3)
57-67: LGTM! Caching decorator properly configured with appropriate key builder.The
api_keys_list_key_buildercorrectly scopes cache keys by user ID, ensuring users only see their own cached API keys list.
91-106: LGTM! Single API key caching with appropriate namespace separation.Using
namespace="apikey"(singular) versus"apikeys"(plural) for list endpoints is a good practice, allowing targeted invalidation of individual keys without clearing the entire list cache.
144-150: Cache invalidation approach is correct.The
FastAPICache.clear(namespace=f"apikey:{user_id}:{key_id}")call correctly invalidates the cache entry. Thenamespaceparameter is designed to remove all cache entries belonging to that namespace, and the namespace format matches what the@cacheddecorator uses for the single key cache.app/cache/decorators.py (2)
52-68: LGTM! Clean no-op pattern when caching is disabled.The implementation correctly:
- Returns a no-op decorator when
CACHE_ENABLED=false, ensuring zero overhead- Applies sensible defaults from settings for TTL and coder
- Uses
PickleCoderfor SQLAlchemy ORM object serializationSince decorator application happens at import time (not per-request), the multiple
get_settings()calls are acceptable.
12-51: Well-documented public API with clear usage example.The docstring thoroughly explains the decorator's behavior, parameters, and includes a practical usage example demonstrating the required
RequestandResponseparameters for cached endpoints.requirements-dev.txt (1)
115-116: LGTM—redis version is correctly pinned.The redis version (4.6.0) is appropriately constrained to match fastapi-cache2==0.2.2 compatibility requirements. Per learnings, fastapi-cache2 (≤0.2.2) requires redis<5.0.0, and this pin avoids the breaking changes in redis 5.x or 7.x.
Also applies to: 356-357
this fixes potential issues in python versions >=3.11.3 Signed-off-by: Grant Ramsay <[email protected]>
Signed-off-by: Grant Ramsay <[email protected]>
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/main.py (1)
33-34: Minor typo in comment. "logguru" should be "loguru".

🔎 Proposed fix

    -# Initialize logguru logging
    +# Initialize loguru logging
     get_log_config()

docs/usage/caching.md (1)
319-321: Minor grammar improvement. Consider using a hyphen: "single-user lookups" instead of "single user lookups" for compound adjective clarity.

🔎 Proposed fix

     **`users_list_key_builder`**
    -Specialized builder for user list endpoints. Handles both single user
    -lookups and paginated lists.
    +Specialized builder for user list endpoints. Handles both single-user
    +lookups and paginated lists.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
app/main.pydocs/usage/caching.md
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
📚 Learning: 2026-01-01T19:57:45.447Z
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
Applied to files:
docs/usage/caching.md
🪛 LanguageTool
docs/usage/caching.md
[grammar] ~320-~320: Use a hyphen to join words.
Context: ...user list endpoints. Handles both single user lookups and paginated lists. **`ap...
(QB_NEW_EN_HYPHEN)
[style] ~463-~463: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...? - Is Redis running and accessible? - Is the @cached()` decorator AFTER the rou...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.11)
- GitHub Check: test (3.14)
- GitHub Check: test (3.10)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
🔇 Additional comments (14)
app/main.py (4)
12-28: LGTM!The imports are well-organized, separating caching infrastructure (
fastapi_cache), Redis client (redis.asyncio), and local middleware/config modules.
79-117: Good implementation of cache initialization with graceful fallback.The initialization logic properly handles:
- Redis connection testing with
ping()- Fallback to in-memory when Redis fails
- Clear logging for each state
- Security warning for missing authentication
The exception handling on line 105 is comprehensive for Redis connection failures.
119-124: LGTM!Proper cleanup pattern - the conditional check ensures
close()is only called when a Redis connection was successfully established.
172-174: LGTM!The cache logging middleware is correctly added. Note that FastAPI executes middleware in LIFO order, so
CacheLoggingMiddlewarewill wrap requests beforeLoggingMiddleware, which is appropriate for independent cache timing.docs/usage/caching.md (10)
1-17: LGTM!Clear and concise overview that accurately describes the caching feature's purpose and supported backends.
19-74: LGTM!The setup instructions are practical and cover both development (in-memory) and production (Redis) scenarios appropriately.
76-120: LGTM!The operating modes and cache key structure are clearly documented and align with the implementation.
122-182: LGTM!Comprehensive configuration documentation with practical examples for different deployment scenarios.
184-238: LGTM!The usage examples are clear, and the decorator order note is critical for correct behavior.
240-294: LGTM!The invalidation examples are clear, and the graceful error handling note properly sets expectations for production behavior.
359-401: LGTM!Practical monitoring guidance with clear examples for checking cache headers and enabling logging.
403-429: LGTM!The performance documentation is balanced, covering both benefits and overhead with appropriately caveated benchmarks.
455-473: LGTM!The troubleshooting checklist format is user-friendly and covers the common issues effectively. The repetitive "Is" pattern is appropriate for a checklist structure.
475-484: LGTM!Good cross-references to related documentation for further context.
Signed-off-by: Grant Ramsay <[email protected]>
Actionable comments posted: 1
🧹 Nitpick comments (2)
app/main.py (2)
93-112: Close Redis client on connection failure to avoid potential resource leak.

If `Redis.from_url()` succeeds but `ping()` fails, the client object exists but is set to `None` without being closed. While redis-py uses lazy connections, explicitly closing ensures any allocated resources are released.

🔎 Proposed fix

         try:
             redis_client = Redis.from_url(
                 get_settings().redis_url,
                 encoding="utf8",
                 decode_responses=False,
             )
             await redis_client.ping()
             FastAPICache.init(
                 RedisBackend(redis_client),
                 prefix="fastapi-cache",
             )
             logger.info("Redis cache backend initialized successfully.")
         except (ConnectionError, TimeoutError, RedisError, OSError) as e:
             logger.warning(
                 "Failed to connect to Redis: %s. "
                 "Falling back to in-memory cache.",
                 e,
             )
    +        if redis_client:
    +            await redis_client.close()
             FastAPICache.init(InMemoryBackend())
             redis_client = None
83-91: Consider consolidating duplicate warning logs.

The Redis authentication warning is logged to both `logger` (uvicorn/console) and `loguru_logger` (file). This ensures visibility but may produce duplicate console output if loguru is also configured for console logging. Consider using only the category-aware logger if it can route to both outputs, or add a brief comment explaining the intentional dual logging.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
app/main.pydocs/usage/caching.md
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
📚 Learning: 2026-01-01T19:57:45.447Z
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
Applied to files:
docs/usage/caching.md
🧬 Code graph analysis (1)
app/main.py (8)
app/config/helpers.py (1)
get_api_version(29-46)app/config/log_config.py (1)
get_log_config(117-122)app/config/openapi.py (1)
custom_openapi(9-125)app/config/settings.py (1)
get_settings(247-249)app/metrics/instrumentator.py (1)
register_metrics(32-59)app/middleware/cache_logging.py (1)
CacheLoggingMiddleware(14-54)app/middleware/logging_middleware.py (1)
LoggingMiddleware(20-52)app/logs.py (2)
warning(36-39)info(26-29)
🪛 LanguageTool
docs/usage/caching.md
[style] ~463-~463: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...? - Is Redis running and accessible? - Is the @cached()` decorator AFTER the rou...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.14)
- GitHub Check: test (3.10)
- GitHub Check: test (3.11)
🔇 Additional comments (7)
docs/usage/caching.md (3)
1-10: LGTM!The overview provides a clear and concise introduction to the caching system, correctly stating that caching is disabled by default and explaining both backend options with the automatic fallback behavior.
19-76: LGTM!The enabling caching section provides clear step-by-step instructions for both in-memory and Redis backends, with appropriate Docker alternative for Redis installation.
184-207: LGTM!Good documentation of the decorator usage with the important note about decorator ordering. The admonition highlighting that
@cached()must be placed after the route decorator is essential for users.app/main.py (4)
12-31: LGTM!Imports are well-organized, correctly using
redis.asyncio.Redisfor async operations and separatingRedisErrorfor exception handling. Based on learnings, redis 4.6.0 is the appropriate version for fastapi-cache2 compatibility.
33-37: LGTM!Early initialization of loguru via
get_log_config()ensures consistent logging configuration before any log messages are emitted.
119-124: LGTM!Proper cleanup pattern using the lifespan context manager to close the Redis connection on shutdown.
169-173: LGTM!The middleware ordering is correct. Since Starlette processes middleware in reverse registration order,
CacheLoggingMiddlewarewill see the response headers (includingX-FastAPI-Cache) after they're set by the cache layer.
Update documentation to reflect the new caching infrastructure:
- Add caching feature description to README Functionality section
- Document app/cache/ directory structure in project organization
- Update middleware section to include cache logging middleware
- Add defensive cleanup: close Redis client if connection fails during ping, preventing resource leaks
- Update troubleshooting checklist to check CACHE_ENABLED first before backend-specific settings (REDIS_ENABLED, Redis connectivity)
- Fix test mocks: add close() AsyncMock to tests that simulate Redis connection failures
The cleanup ensures proper resource management even when Redis is unreachable. Tests updated to properly mock the close() async method.
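A self-contained sketch of the mocking pattern this commit describes; `init_cache_backend` below is a stand-in for the real startup code (its name and shape are assumptions), the point being that the mocked Redis client needs an awaitable close() because the defensive cleanup awaits it after ping() fails:

```python
import asyncio
from unittest.mock import AsyncMock

from redis.exceptions import RedisError


async def init_cache_backend(redis_client) -> str:
    """Stand-in for the startup logic: ping Redis, fall back on failure."""
    try:
        await redis_client.ping()
        return "redis"
    except RedisError:
        await redis_client.close()  # defensive cleanup on a failed connection
        return "in-memory"


def test_falls_back_and_closes_client_when_ping_fails() -> None:
    # AsyncMock provides awaitable ping() and close() out of the box.
    mock_client = AsyncMock()
    mock_client.ping.side_effect = RedisError("connection refused")

    backend = asyncio.run(init_cache_backend(mock_client))

    assert backend == "in-memory"
    mock_client.close.assert_awaited_once()
```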
Change anchor from #logging to #configure-logging-optional to match the actual heading in environment.md. Also fix line length to stay under 80 characters.
Actionable comments posted: 2
♻️ Duplicate comments (1)
docs/usage/caching.md (1)
455-468: Troubleshooting checklist already prioritizes CACHE_ENABLED correctly. The checklist now properly lists CACHE_ENABLED as the master switch first, followed by Redis-specific checks. The previous review comment flagging this issue has been addressed.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- app/main.py
- docs/usage/caching.md
- tests/unit/test_lifespan.py
🚧 Files skipped from review as they are similar to previous changes (2)
- app/main.py
- tests/unit/test_lifespan.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
📚 Learning: 2026-01-01T19:57:45.447Z
Learnt from: seapagan
Repo: seapagan/fastapi-template PR: 803
File: requirements.txt:164-165
Timestamp: 2026-01-01T19:57:45.447Z
Learning: The fastapi-cache2 package (versions <= 0.2.2) requires redis<5.0.0, so projects using fastapi-cache2 cannot upgrade to redis 5.x or 7.x without breaking compatibility. redis 4.6.0 is the appropriate version to use with fastapi-cache2.
Applied to files:
docs/usage/caching.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.10)
- GitHub Check: test (3.14)
- GitHub Check: test (3.11)
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
Add compatibility warning to caching documentation explaining that fastapi-cache2==0.2.2 requires redis-py 4.6.0 (not 5.x). Clarify distinction between Redis Server (any version) and redis-py Python client (must be 4.6.0). Update TODO.md to mark basic caching as complete and add future task to investigate upgrading to redis-py 5.x support (either wait for new PyPI release, fork, use git HEAD, or switch libraries).
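In practice the compatibility constraint boils down to pins along these lines (illustrative only; the authoritative versions live in the project's requirements.txt / pyproject.toml):

```
fastapi-cache2[redis]==0.2.2
redis==4.6.0
```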
Actionable comments posted: 0
🧹 Nitpick comments (1)
app/main.py (1)
79-119: LGTM! Robust cache initialization with proper fallback. The cache initialization logic is well-designed:
- Security-conscious: warns when Redis password is missing (lines 83-91)
- Resilient: comprehensive exception handling catches connection failures (line 105)
- Clean fallback: properly closes failed Redis client before switching to in-memory backend (lines 111-114)
- Clear logging: logs success, warnings, and fallback scenarios appropriately
The decode_responses=False at line 97 is correct for FastAPICache's Redis backend, which stores serialized data.
💡 Optional: Consider consolidating dual logging
Lines 90-91 log the same warning to both logger (uvicorn console) and loguru_logger (file). While this ensures visibility in both console and file logs, consider whether the logging middleware should already handle this routing automatically via the category-based system:

```diff
- logger.warning(warning_msg)  # Console via uvicorn
- loguru_logger.warning(warning_msg)  # File via loguru
+ from app.logs import category_logger
+ from app.config.log_config import LogCategory
+ category_logger.warning(warning_msg, LogCategory.CACHE)
```

This would use the existing category logging infrastructure and respect the LOG_CATEGORIES configuration. However, the current approach guarantees visibility for this critical security warning regardless of log category settings.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- TODO.md
- app/config/log_config.py
- app/main.py
- docs/usage/configuration/environment.md
- tests/unit/test_logging.py
🚧 Files skipped from review as they are similar to previous changes (2)
- app/config/log_config.py
- TODO.md
🧰 Additional context used
🧬 Code graph analysis (2)
tests/unit/test_logging.py (2)
- app/config/log_config.py (2): LogCategory (13-28), setup_logging (81-120)
- app/logs.py (1): info (26-29)
app/main.py (5)
- app/config/log_config.py (1): get_log_config (127-132)
- app/config/settings.py (2): get_settings (247-249), redis_url (146-161)
- app/middleware/cache_logging.py (1): CacheLoggingMiddleware (14-54)
- app/middleware/logging_middleware.py (1): LoggingMiddleware (20-52)
- app/logs.py (2): warning (36-39), info (26-29)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: test (3.10)
- GitHub Check: test (3.12)
- GitHub Check: test (3.14)
🔇 Additional comments (9)
tests/unit/test_logging.py (2)
39-39: LGTM! Correct test update for new CACHE category. The test properly includes LogCategory.CACHE in the expected combination, aligning with the new cache logging category added in app/config/log_config.py.
317-356: LGTM! Well-structured test for uvicorn reload scenario. The test comprehensively validates the reload detection logic:
- Properly mocks sys.argv with the --reload flag
- Verifies logging.getLogger("uvicorn") is called
- Asserts the expected info message is logged
- Confirms enqueue=False is set for synchronous writes during development
This directly covers the reload handling logic in app/config/log_config.py lines 105-108 and ensures the multiprocessing resource tracker warnings are avoided.
docs/usage/configuration/environment.md (3)
341-480: LGTM! Comprehensive and well-structured caching documentation. This section provides excellent guidance for configuring the caching system:
- Clear explanation of opt-in nature and fallback behavior
- Appropriate security warnings (missing password, network isolation)
- Multiple practical examples covering common deployment scenarios
- Quantified performance benefits (5-30x improvement, 1-2ms cache hits)
- Proper cross-references to related documentation
The documentation accurately reflects the implementation in app/main.py (cache initialization with fallback) and app/config/settings.py (redis_url with password encoding).
492-497: LGTM! Clear explanation of uvicorn reload behavior. The note accurately explains why enqueue is disabled in development when using uvicorn --reload, preventing multiprocessing semaphore warnings. This aligns with the implementation in app/config/log_config.py lines 105-108 and is validated by the new test in tests/unit/test_logging.py lines 317-356.
617-617: LGTM! CACHE category added to logging table. Correctly documents the new CACHE logging category for monitoring cache operations, hits, and misses, consistent with the category definition in app/config/log_config.py.
app/main.py (4)
12-18: LGTM! Appropriate imports for cache infrastructure. All new imports are used in the lifespan function and middleware setup:
- FastAPICache, InMemoryBackend, RedisBackend for cache initialization
- Redis, RedisError for Redis client and error handling
- loguru_logger for file logging alongside uvicorn console logging
- get_log_config for initializing log configuration
- CacheLoggingMiddleware for cache activity monitoring
Also applies to: 23-23, 28-28
62-63: LGTM! Proper log configuration initialization. Calling get_log_config() early in the lifespan ensures the logging infrastructure is properly set up before any cache initialization logging occurs. This is especially important given the dual logging approach (console + file) used for cache warnings.
123-129: LGTM! Proper cleanup on shutdown. The shutdown logic correctly:
- Drains the loguru queue with loguru_logger.complete() to avoid semaphore warnings
- Closes the Redis connection if it was successfully opened (checking redis_client is not None)
- Logs the cleanup action for observability
This ensures clean resource management when the application terminates.
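A condensed sketch of those shutdown steps (variable names follow the comment above; this is not the full lifespan body from app/main.py):

```python
from loguru import logger as loguru_logger


async def shutdown_cleanup(redis_client) -> None:
    # Drain loguru's queued messages so no semaphore warnings are emitted.
    loguru_logger.complete()

    # Only close Redis if a connection was successfully opened at startup.
    if redis_client is not None:
        await redis_client.close()
        loguru_logger.info("Closed Redis connection")
```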
177-178: LGTM! Cache logging middleware properly registered. The CacheLoggingMiddleware is added after LoggingMiddleware, which is the correct order to ensure request logging occurs before cache-specific logging. This allows the cache middleware to inspect and log cache headers set by the @cached() decorator.
The 2 log calls here are by design. loguru is set not to log to console, only to file. In this case I want both logged since it's important.
Summary
Adds optional response caching infrastructure for improved API performance and reduced database load. Caching is opt-in and disabled by default for consistency with other optional features (metrics, admin panel).
Closes #116
Key Features
✅ Opt-in by design - Disabled by default via CACHE_ENABLED=false
✅ Dual backend support - In-memory (development) or Redis (production)
✅ Automatic fallback - Gracefully falls back to in-memory if Redis fails
✅ Cache invalidation - Smart invalidation when data changes
✅ Monitoring & logging - Cache hit/miss logging and middleware
✅ 100% test coverage - Comprehensive unit and integration tests
✅ Full documentation - Complete user and developer guides
Implementation Details
Configuration & Settings
New environment variables:
- CACHE_ENABLED (default: false) - Master switch to enable/disable caching
- REDIS_ENABLED (default: false) - Choose Redis or in-memory backend
- REDIS_HOST, REDIS_PORT, REDIS_PASSWORD, REDIS_DB - Redis connection settings
- CACHE_DEFAULT_TTL (default: 300) - Default cache expiry in seconds
Operating modes:
- Disabled (CACHE_ENABLED=false) - No caching, zero overhead, the decorator is a no-op
- In-memory (CACHE_ENABLED=true, REDIS_ENABLED=false) - Development/single-instance
- Redis (CACHE_ENABLED=true, REDIS_ENABLED=true) - Production/multi-instance with fallback
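A condensed sketch of how these settings can drive backend selection at startup, following the fallback behaviour described in this PR (attribute names such as settings.cache_enabled / settings.redis_enabled and the cache prefix are assumptions, not the exact code in app/main.py):

```python
from fastapi_cache import FastAPICache
from fastapi_cache.backends.inmemory import InMemoryBackend
from fastapi_cache.backends.redis import RedisBackend
from redis.asyncio import Redis
from redis.exceptions import RedisError

from app.config.settings import get_settings


async def init_cache() -> Redis | None:
    """Choose the cache backend from CACHE_ENABLED / REDIS_ENABLED."""
    settings = get_settings()

    if not settings.cache_enabled:  # CACHE_ENABLED=false: caching is a no-op
        return None

    if settings.redis_enabled:  # REDIS_ENABLED=true: try Redis first
        client = Redis.from_url(settings.redis_url, decode_responses=False)
        try:
            await client.ping()
            FastAPICache.init(RedisBackend(client), prefix="fastapi-cache")
            return client
        except RedisError:
            await client.close()  # defensive cleanup, then fall back

    # CACHE_ENABLED=true with REDIS_ENABLED=false, or Redis was unreachable.
    FastAPICache.init(InMemoryBackend(), prefix="fastapi-cache")
    return None
```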
Cache Infrastructure
Core modules (app/cache/):
- decorators.py - @cached() decorator wrapper with project defaults
- key_builders.py - Functions to generate cache keys:
  - user_scoped_key_builder - User-specific endpoints
  - users_list_key_builder - Dual-mode for single user or paginated lists
  - paginated_key_builder - Generic paginated endpoints
  - api_keys_list_key_builder - API keys lists (user and admin contexts)
  - api_key_single_key_builder - Single API key lookups
- invalidation.py - Cache invalidation helpers:
  - invalidate_user_cache() - Clear user-specific caches
  - invalidate_users_list_cache() - Clear users list cache
  - invalidate_api_keys_cache() - Clear API keys cache
  - invalidate_namespace() - Generic namespace invalidation
Cache logging middleware (app/middleware/cache_logging.py):
- Logs cache activity based on the Cache-Control headers on responses
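As a sketch of how these pieces are meant to be combined on an endpoint (the exact parameters accepted by the cached() wrapper and the invalidation helpers are assumptions here; docs/usage/caching.md documents the real API):

```python
from fastapi import APIRouter

from app.cache.decorators import cached
from app.cache.invalidation import invalidate_user_cache
from app.cache.key_builders import user_scoped_key_builder

router = APIRouter(prefix="/users")


@router.get("/me")
@cached(namespace="user", key_builder=user_scoped_key_builder)  # below the route decorator
async def get_my_user():
    ...  # return the current user's ORM model; it is pickled into the cache


@router.put("/{user_id}")
async def edit_user(user_id: int):
    ...  # apply the edit, then drop this user's cached entries
    await invalidate_user_cache(user_id)
```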
Cached Endpoints
GET /users/ endpoint:
GET /users/me endpoint:
GET /users/keys endpoints:
- Both user (/users/keys) and admin (/users/{user_id}/keys) endpoints
Testing
Comprehensive test coverage (100%):
- CACHE_ENABLED=true in pyproject.toml
Test results:
Documentation
New documentation:
- docs/usage/caching.md - Complete caching guide (477 lines), including the @cached() decorator
Updated documentation:
- docs/index.md - Added caching features
- docs/usage/configuration/environment.md - Redis/cache settings (141 lines added)
- mkdocs.yml - Navigation structure
Architecture Notes
Cache key structure:
- {namespace}:{user_id}:{function_name} or {namespace}:list:page:{page}:size:{size}
- Examples: user:123:get_my_user, users:list:page:1:size:50, apikeys:456:list_api_keys
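A simplified builder that produces this structure might look as follows (the real implementation lives in app/cache/key_builders.py; the signature follows fastapi-cache's key-builder calling convention, and how the user id is obtained is an assumption for this sketch):

```python
from typing import Any, Callable

from starlette.requests import Request
from starlette.responses import Response


def user_scoped_key_builder(
    func: Callable[..., Any],
    namespace: str = "",
    *,
    request: Request | None = None,
    response: Response | None = None,
    args: tuple[Any, ...] = (),
    kwargs: dict[str, Any] | None = None,
) -> str:
    """Build "namespace:user_id:function_name" keys, e.g. user:123:get_my_user."""
    user_id = (kwargs or {}).get("user_id", "anonymous")
    return f"{namespace}:{user_id}:{func.__name__}"
```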
Why PickleCoder?
- response_model converts to Pydantic after cache retrieval
Graceful error handling:
Performance Impact
Tested with small dataset (11 users):
Overhead when disabled:
Overhead when enabled (cache miss):
Migration Notes
No breaking changes - This is purely additive:
- Caching applies only to endpoints that opt in with the @cached() decorator
To enable caching:
- Set CACHE_ENABLED=true in .env
- Set REDIS_ENABLED=true for production
Checklist
- .env.example updated
Future Enhancements
Potential future work (not in this PR):
🤖 Generated with Claude Code
Co-Authored-By: Claude Sonnet 4.5 [email protected]
Summary by CodeRabbit
New Features
Documentation
Bug Fixes
Tests
Chores