Conversation

Caution

Review failed. The pull request is closed.

Walkthrough

Adds a cached settings factory.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant App as FastAPI App
    participant Config as config.get_settings
    participant S as Settings (cached)
    participant DB as db.get_engine_url
    participant Route as routes_sync.get_hygraph_service
    participant H as HygraphService
    App->>Config: get_settings()
    note right of Config #D1F2E8: @lru_cache returns cached instance
    Config-->>App: S
    DB->>Config: get_settings()
    Config-->>DB: S
    DB->>DB: Resolve DB URL (may still reference Settings())
    Route->>Config: get_settings() (module init)
    Config-->>Route: S
    Route->>H: Init _hygraph_service with S.hygraph_token
    Route-->>Route: get_hygraph_service() returns _hygraph_service
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📒 Files selected for processing (3)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/api/routes_sync.py (1)
33-33: Fix line length to comply with linting rules.

Line 33 exceeds the 88-character limit (93 characters). Split the condition or use an intermediate variable.
```diff
-    if not x_hygraph_signature or not service.verify_signature(raw, x_hygraph_signature):
+    signature_valid = (
+        x_hygraph_signature
+        and service.verify_signature(raw, x_hygraph_signature)
+    )
+    if not signature_valid:
         raise HTTPException(status_code=401, detail="invalid signature")
```
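The condition being split calls `service.verify_signature(raw, x_hygraph_signature)`, whose implementation is outside this diff. A minimal sketch of one plausible scheme, assuming an HMAC-SHA256 hex digest of the raw body (an assumption, not necessarily Hygraph's actual signing format):

```python
import hashlib
import hmac


def verify_signature(secret: str, raw_body: bytes, signature: str) -> bool:
    """Hypothetical check; the real HygraphService method is not in this diff."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(expected, signature)
```

Whatever the real scheme is, a constant-time comparison like `hmac.compare_digest` is the safe default for the final check.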
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
backend/api/config.py (2 hunks)
backend/api/db.py (2 hunks)
backend/api/main.py (2 hunks)
backend/api/routes_sync.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
backend/api/routes_sync.py (2)
backend/api/config.py (1)
get_settings(50-53)

backend/services/hygraph_service.py (2)

HygraphService(11-44)
__init__(12-15)
backend/api/db.py (2)
backend/api/config.py (1)
get_settings(50-53)

backend/alembic/env.py (1)
get_url(27-31)
backend/api/main.py (1)
backend/api/config.py (1)
get_settings(50-53)
🪛 GitHub Actions: CI
backend/api/routes_sync.py
[error] 32-32: E501 Line too long (93 > 88) for signature verification logic.
🔇 Additional comments (7)
backend/api/config.py (1)
3-3: LGTM! Adding the `lru_cache` import is appropriate for implementing a cached settings singleton.

backend/api/db.py (2)
11-11: LGTM! Import correctly updated to use the cached settings factory.
35-35: All direct Settings() instantiations have been replaced
Search returned only docstrings in `backend/api/config.py`; no remaining `Settings()` calls outside the cached factory.

backend/api/routes_sync.py (2)
10-10: LGTM! Import correctly updated to use the cached settings factory.

19-19: LGTM! Correctly uses the cached settings factory to retrieve configuration.
backend/api/main.py (2)
9-9: LGTM! Import correctly updated to use the cached settings factory.
24-24: LGTM! The module-level initialization correctly uses the cached settings factory. This ensures a single `Settings` instance is shared across the application, which is the intended behavior of this PR.
```python
@lru_cache()
def get_settings() -> Settings:
    """Return a cached instance of :class:`Settings`."""

    return Settings()
```
🧹 Nitpick | 🔵 Trivial
Consider making the cache size explicit.
The implementation is correct, but since get_settings() takes no parameters, only one instance will ever be cached. Consider using @lru_cache (no parentheses) or @lru_cache(maxsize=1) to make the singleton intent clearer.
```diff
-@lru_cache()
+@lru_cache
 def get_settings() -> Settings:
     """Return a cached instance of :class:`Settings`."""
-
     return Settings()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/api/config.py around lines 49 to 53 the decorator uses @lru_cache()
which creates a cache with the default maxsize (128) even though get_settings()
has no parameters; change the decorator to either @lru_cache (no parentheses) or
@lru_cache(maxsize=1) so the singleton intent is explicit and only one Settings
instance is ever cached.
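The singleton behavior either decorator form provides is easy to sanity-check in isolation. This sketch uses a stand-in `Settings` class rather than the real pydantic model:

```python
from functools import lru_cache


class Settings:
    """Stand-in for the real pydantic-based Settings; the field is illustrative."""

    def __init__(self) -> None:
        self.hygraph_token = "example-token"


@lru_cache(maxsize=1)  # maxsize=1 makes the singleton intent explicit
def get_settings() -> Settings:
    """Return a cached instance of Settings."""
    return Settings()
```

With `maxsize=1` the intent is documented at the call site, and `get_settings.cache_clear()` gives tests a reset hook between cases.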
This branch has conflicts that must be resolved:

backend/api/db.py
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
backend/api/routes_sync.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
backend/api/routes_sync.py (1)
backend/api/config.py (1)
get_settings(50-53)
🔇 Additional comments (2)
backend/api/routes_sync.py (2)
10-10: LGTM: Import change aligns with settings helper pattern.

The switch from importing `Settings` directly to using `get_settings()` is appropriate for centralizing settings instantiation.
22-25: LGTM: Function correctly returns the singleton instance.

The refactored function appropriately returns the pre-instantiated service and includes a clear docstring. The `Depends()` integration on the endpoint remains compatible, though the dependency injection benefit is reduced with a singleton.
```python
# Using hygraph_token as webhook secret in MVP; adjust if separate secret provided
settings = get_settings()
_hygraph_service = HygraphService(webhook_secret=settings.hygraph_token)
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify initialization safety and confirm lru_cache decorator.
Module-level initialization creates several concerns:

1. Import-time failure risk: If `get_settings()` or `HygraphService` initialization fails (e.g., missing env vars, invalid token), the entire module import fails rather than failing gracefully at request time.

2. Testability degradation: The singleton is created once at import, making it harder to:
   - Mock the service in tests
   - Reset state between test cases
   - Test with different configurations

3. Missing cache decorator: The relevant code snippet shows `get_settings()` without an `@lru_cache` decorator, which contradicts the PR description stating "lru_cache-backed get_settings helper." This means `Settings()` is instantiated on every call to `get_settings()`, negating the caching benefit at this layer.
Run the following script to verify the lru_cache decorator and search for potential test files that might be affected:
🏁 Script executed:

```shell
#!/bin/bash
# Verify lru_cache decorator and find test files
echo "=== Checking get_settings implementation for lru_cache decorator ==="
rg -A 3 -B 1 'def get_settings' backend/api/config.py

echo -e "\n=== Finding test files that might mock or use HygraphService ==="
fd -e py test backend/tests --exec rg -l 'HygraphService|routes_sync'

echo -e "\n=== Checking for existing test patterns around settings ==="
rg -n 'Settings|get_settings' backend/tests/ --type py -C 2 2>/dev/null || echo "No tests directory found or no matches"
```

Length of output: 359
Lazy-initialize HygraphService instead of module-level instantiation
Avoid calling get_settings() and creating _hygraph_service at import time (backend/api/routes_sync.py:17–19). Import-time init can crash the app on missing/invalid env vars and makes tests hard to mock or reset. Move this into a factory or a get_hygraph_service() function (with internal caching) so it fails gracefully at runtime and is mockable in tests.
🤖 Prompt for AI Agents
In backend/api/routes_sync.py around lines 17 to 19, avoid calling
get_settings() and instantiating HygraphService at import time; instead create a
get_hygraph_service() factory that lazy-initializes and caches a module-level
_hygraph_service on first call (calling get_settings() there), so creation
happens at runtime, failures surface gracefully, and tests can monkeypatch or
reset the cached instance; ensure the factory returns the cached instance on
subsequent calls and provide a simple way to reset or replace the cached service
in tests if needed.
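One shape the lazy factory described above could take. The class names mirror the PR's modules, but the bodies here are stand-ins, not the real `backend/api` implementations:

```python
from functools import lru_cache
from typing import Optional


class Settings:
    """Stand-in for the real Settings; the token value is illustrative."""

    hygraph_token = "example-token"


@lru_cache(maxsize=1)
def get_settings() -> Settings:
    return Settings()


class HygraphService:
    """Stand-in for backend/services/hygraph_service.HygraphService."""

    def __init__(self, webhook_secret: str) -> None:
        self.webhook_secret = webhook_secret


_hygraph_service: Optional[HygraphService] = None


def get_hygraph_service() -> HygraphService:
    """Lazily create and cache the service on first call.

    Initialization errors now surface at request time instead of import
    time, and the module-level cache can be reset by assigning None.
    """
    global _hygraph_service
    if _hygraph_service is None:
        _hygraph_service = HygraphService(
            webhook_secret=get_settings().hygraph_token
        )
    return _hygraph_service
```

Tests can monkeypatch `_hygraph_service` back to `None` (or to a stub) to reset or replace the cached service, and the endpoint's `Depends(get_hygraph_service)` wiring is unchanged.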
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @shayancoin.

* #23 (comment)

The following files were modified:

* `backend/api/config.py`
* `backend/api/routes_sync.py`

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Cache API settings helper
* Reuse cached Hygraph service
* 📝 Add docstrings to `codex/add-lru_cache-for-get_settings-helper` (#38)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary
* `get_settings` helper to reuse a single `Settings` instance

Testing
https://chatgpt.com/codex/tasks/task_e_68ef016c3b548330ac27ebb75ca35df9
Summary by CodeRabbit
Refactor
Chores