chore: update Albert Park assets and release sync trigger #35
Conversation
📝 Walkthrough

Remote asset support and tooling were added: a new AssetClient with caching is introduced and integrated into renderers; multiple track-conversion and scoring scripts and utilities were added; the CI workflow trigger changed to release.published; changelog, env examples, tests, and docs were updated accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Renderer
    participant AssetClient
    participant RemoteAPI
    participant LocalCache
    Renderer->>AssetClient: get_track_image(candidate_ids, variant)
    AssetClient->>AssetClient: is_enabled()?
    alt enabled
        AssetClient->>RemoteAPI: GET /v1/tracks/{id}/manifest
        RemoteAPI-->>AssetClient: manifest (variants, metadata) / error
        alt manifest OK and variant supported
            AssetClient->>RemoteAPI: GET /v1/tracks/{id}/asset?variant=...
            RemoteAPI-->>AssetClient: binary bytes
            AssetClient->>LocalCache: write manifest + binary (cache)
            AssetClient->>AssetClient: decode bytes -> PIL Image
            AssetClient-->>Renderer: PIL Image
        else manifest missing or fetch error
            AssetClient->>LocalCache: read cached manifest/binary
            alt cache hit and fresh
                AssetClient->>AssetClient: decode cached bytes -> PIL Image
                AssetClient-->>Renderer: PIL Image
            else
                AssetClient-->>Renderer: None
            end
        end
    else disabled
        AssetClient-->>Renderer: None
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on maintenance and workflow enhancements. It updates a specific track asset to its most current version and refines the continuous integration process by modifying the timing of the main-to-dev sync workflow.
Overall Grade (Focus Area: Security); categories assessed: Security, Reliability, Complexity, Hygiene
Feedback
- Insecure temporary resource handling
- Multiple scripts create/open temp files and accept file: (and other) URL schemes without validation, causing race conditions and unintended local-file access; centralize secure tempfile usage (mkstemp/NamedTemporaryFile) and whitelist/normalize URL schemes to close both gaps.
- Uninitialized, module-level mutable state
- Globals like SOURCEDATA are referenced or mutated via global statements with no clear initializer, making behavior order-dependent; encapsulate state in explicit objects or initializer functions and remove module-level mutable state to break the pattern.
- Accumulated refactor drift and dead code
- Redundant lambdas, unused imports and instance methods that don't use self indicate copy‑paste growth and neglected refactors; remove dead code, convert instance-free methods to static/module functions, and perform targeted refactors to prevent recurrence.
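The tempfile and URL-scheme fixes suggested in the first finding can be sketched as follows. Function names and the scheme whitelist are illustrative, not taken from the repository's scripts:

```python
import tempfile
from urllib.parse import urlparse

# Only fetch over HTTP(S); rejects file:, ftp:, data:, etc.
ALLOWED_SCHEMES = {"http", "https"}


def validate_url(url: str) -> str:
    """Reject URLs whose scheme is not on the whitelist."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"refusing URL with scheme {scheme!r}: {url}")
    return url


def write_temp_asset(data: bytes) -> str:
    """Write bytes to a securely created temp file and return its path.

    NamedTemporaryFile is mkstemp-backed, so there is no predictable-name
    race between checking for the file and opening it.
    """
    with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as handle:
        handle.write(data)
        return handle.name
```

Routing every download through `validate_url` before fetching closes the local-file access hole; routing every write through `write_temp_asset` removes the race.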
Code Review Summary
| Analyzer | Status | Updated (UTC) | Details |
|---|---|---|---|
| Python | | Mar 7, 2026 8:39 p.m. | Review ↗ |
| Docker | | Mar 7, 2026 8:39 p.m. | Review ↗ |
| Secrets | | Mar 7, 2026 8:39 p.m. | Review ↗ |
Code Review
This pull request updates the Albert Park track assets, modifies the CI sync workflow trigger, and adds a corresponding entry to the changelog. The changes appear correct, but I've identified a minor formatting issue in CHANGELOG.md where a heading is redundant. I've provided a suggestion to improve the readability of the changelog.
Note: Security Review has been skipped due to the limited scope of the PR.
Greptile Summary

This PR refreshes the Albert Park circuit assets (PNG source and pre-processed BMP) and changes the sync-main-to-dev workflow trigger from push on main to release published.
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub
    participant WF as sync-main-to-dev workflow
    participant Runner as Self-hosted Runner
    participant Main as origin/main
    participant Dev as origin/dev
    Note over GH,Dev: Trigger: release published (was: push to main)
    GH->>WF: release.published event (context.sha = tag commit)
    WF->>Runner: Start job
    Runner->>Main: git fetch origin main dev
    Runner->>Dev: git checkout -B sync/main-to-dev origin/dev
    Runner->>Main: git merge --no-ff origin/main
    alt No changes / already up to date
        Runner-->>WF: has_changes=false → skip push & PR
    else Merge conflict
        Runner-->>WF: has_conflict=true → fail job
    else Changes detected
        Runner->>Dev: git push --force-with-lease origin sync/main-to-dev
        Runner->>GH: Create / update sync PR (head=sync/main-to-dev, base=dev)
        Runner->>GH: Trigger CI workflow_dispatch on sync/main-to-dev
        Runner->>GH: Enable auto-merge (MERGE method) on sync PR
    end
```
Last reviewed commit: dd2130e
Pull request overview
This PR makes three maintenance changes: updating the pre-processed Albert Park track BMP asset, changing the sync-main-to-dev workflow trigger from push on main to release: published, and adding a 1.2.7 CHANGELOG entry documenting both changes.
Changes:

- Replaces the pre-processed 1-bit BMP for Albert Park (`app/assets/tracks_processed/albert_park.bmp`) with a refreshed version.
- Switches the `sync-main-to-dev` workflow trigger from every push to `main` to only fire on published GitHub releases, reducing noise.
- Adds the `1.2.7` changelog entry covering both changes.
Reviewed changes
Copilot reviewed 2 out of 4 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `app/assets/tracks_processed/albert_park.bmp` | Updated pre-processed 1-bit BMP track asset for Albert Park |
| `.github/workflows/sync-main-to-dev.yml` | Changed workflow trigger to `release: published` and updated PR body text to match |
| `CHANGELOG.md` | Added 1.2.7 entry documenting the asset refresh and sync timing change |
🧹 Nitpick comments (1)
`CHANGELOG.md` (1)

8-18: Missing `### Backend` heading for consistency.

The 1.2.7 entry is missing the `### Backend` heading that precedes the `<details>` block in other entries (see 1.2.6 at lines 24-27). This causes the heading level to jump from `##` (h2) to `####` (h4), which is flagged by markdownlint (MD001).

📝 Proposed fix

```diff
 ## [1.2.7] - 2026-03-05

+### Backend
+
 <details markdown="1">
 <summary>Backend</summary>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 8 - 18, The release note for "## [1.2.7] - 2026-03-05" is missing the intermediate "### Backend" heading before the existing <details markdown="1"> block, causing a jump from h2 to h4 and triggering markdownlint MD001; insert a single line "### Backend" immediately above the <details markdown="1"> block (so the sequence is "## [1.2.7]...", then "### Backend", then the <details>...</details>) to restore consistent heading levels for this entry.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 84a3c06f-3b25-480d-afa1-3703b15ecce0
⛔ Files ignored due to path filters (2)
- `app/assets/tracks/albert_park.png` is excluded by `!**/*.png`
- `app/assets/tracks_processed/albert_park.bmp` is excluded by `!**/*.bmp`
📒 Files selected for processing (2)
- `.github/workflows/sync-main-to-dev.yml`
- `CHANGELOG.md`
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: test
🧰 Additional context used
🪛 markdownlint-cli2 (0.21.0)
CHANGELOG.md
[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🔇 Additional comments (3)
`.github/workflows/sync-main-to-dev.yml` (3)

4-5: LGTM! The trigger change from `push` on main to `release: published` aligns well with the stated goal of syncing `dev` after each release. This ensures the sync happens at the right moment in the release lifecycle.
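For reference, the trigger change amounts to something like the following in the workflow's `on:` block (a minimal sketch; the real file may scope additional events or fields):

```yaml
# Fire only when a GitHub release is published, instead of on every push to main.
on:
  release:
    types: [published]
```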
29-30: Good addition for traceability. Capturing the `main_sha` explicitly after fetch ensures the PR body reflects the exact commit being synced, regardless of any timing differences with the release event's `context.sha`.
62-70: LGTM! The updated PR body clearly documents both the release event commit and the actual main HEAD being synced, which improves auditability for the sync operation.
Let runtime rendering prefer the asset service while keeping local fallbacks so the app can migrate without breaking existing track delivery. Sync Albert Park assets, helper scripts, and tests with the new loading path so the integration stays verifiable.
Actionable comments posted: 1
🧹 Nitpick comments (9)
`scripts/convert_track_assets.py` (1)

17-20: Consider documenting the URL update process.

The hardcoded F1 track URL includes a version timestamp (`v1751632426`), which may become stale if F1 updates their assets. Consider adding a comment noting how/when to update these URLs, or documenting this in the track conversion workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/convert_track_assets.py` around lines 17-20: DEFAULT_F1_TRACK_URLS contains a hardcoded asset URL with a version token (v1751632426) that can become stale; update the DEFAULT_F1_TRACK_URLS entry for "albert_park" to include a comment explaining that the numeric version is the CDN image version and how to refresh it (e.g., fetch the latest asset path from the F1 media source or update the token when assets are rotated), or move the URL to configurable storage (env/config) used by scripts/convert_track_assets.py so it can be updated without code changes; annotate the DEFAULT_F1_TRACK_URLS constant with this guidance and, if applicable, add a short note in the track conversion workflow/docs about when and how to update the versioned URL.

`app/services/asset_client.py` (1)

78-83: Bare `except Exception` in `_read_json` may hide issues.

The catch-all exception handling silently returns `None` for any error, including unexpected ones like `MemoryError`. Consider logging at debug level or catching specific exceptions (`json.JSONDecodeError`, `OSError`).

♻️ Optional: Log debug info on read failures

```diff
 @staticmethod
 def _read_json(path: Path) -> dict[str, Any] | None:
     try:
         return json.loads(path.read_text())
-    except Exception:
+    except (json.JSONDecodeError, OSError) as exc:
+        logger.debug("Failed to read cached JSON %s: %s", path, exc)
         return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/asset_client.py` around lines 78-83: the helper _read_json currently swallows all exceptions, which can hide serious errors; update AssetClient._read_json to catch only expected errors (json.JSONDecodeError and OSError) and let other exceptions propagate, and optionally log the failure at debug level with the path and exception; reference the method name _read_json and the use of Path.read_text()/json.loads() when making this change.

`scripts/search_track_1bit_layered_parallel.py` (1)

237-261: Minor: Consider returning a copy when iterations <= 0.

When `iterations <= 0`, the function returns the original mask reference. If callers later modify the returned mask, they would inadvertently modify the input. While current usage appears safe, returning `mask` directly couples caller behavior to implementation details.

🛡️ Defensive copy for zero iterations

```diff
 def _dilate_mask(mask: list[list[bool]], iterations: int) -> list[list[bool]]:
     if iterations <= 0:
-        return mask
+        return [row[:] for row in mask]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/search_track_1bit_layered_parallel.py` around lines 237-261: the function _dilate_mask should not return the original mask reference when iterations <= 0; instead return a defensive copy to avoid callers mutating the input. Change the early-return branch in _dilate_mask (the iterations <= 0 case) to return a shallow copy of each row (e.g., construct a new list of lists from mask) so the returned value is independent of the input while preserving the same shape and boolean values.

`scripts/track_conversion_utils.py` (3)

302-315: IoU returns 1.0 for empty masks: document this edge case.

Returning `1.0` when `union == 0` (both masks empty) treats "no pixels in either" as a perfect match. This is a reasonable semantic choice for this scoring context, but it's non-obvious. Consider adding a brief comment.

📝 Document edge case

```diff
 def _intersection_over_union(mask_a: list[list[bool]], mask_b: list[list[bool]]) -> float:
     """Compute IoU for two bool masks."""
     width, height = _mask_shape(mask_a)
     intersection = 0
     union = 0
     for y in range(height):
         for x in range(width):
             a = mask_a[y][x]
             b = mask_b[y][x]
             if a and b:
                 intersection += 1
             if a or b:
                 union += 1
+    # Empty masks are considered a perfect match (both have nothing to compare)
     return intersection / union if union else 1.0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/track_conversion_utils.py` around lines 302 - 315, The function _intersection_over_union currently returns 1.0 when union == 0 (both masks empty); add a concise inline comment or docstring note in _intersection_over_union explaining that this edge case treats two empty masks as a perfect match (IoU = 1.0) so future readers understand the intentional semantic choice.
395-417: Same defensive copy consideration applies here.

When `iterations <= 0`, returning the input mask directly can lead to aliasing issues if callers modify the result.

🛡️ Defensive copy

```diff
 def _dilate_mask(mask: list[list[bool]], iterations: int) -> list[list[bool]]:
     """Dilate a bool mask with a 3x3 max filter."""
     if iterations <= 0:
-        return mask
+        return [row[:] for row in mask]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/track_conversion_utils.py` around lines 395 - 417, The early return in _dilate_mask returns the input mask directly when iterations <= 0, risking aliasing; change that to return a defensive deep copy of the mask (i.e., a new list of row lists) so callers cannot mutate the original via the returned value. Locate _dilate_mask and replace the direct return with constructing and returning a new list where each row is copied (e.g., list(row) or row.copy()) to preserve the original mask.
117-124: Network request lacks error handling.

The `download_file` function will raise `urllib.error.URLError` or `urllib.error.HTTPError` on network failures, and these exceptions propagate to callers. Consider whether callers handle these appropriately, or add explicit documentation about expected exceptions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/track_conversion_utils.py` around lines 117-124: the download_file function currently lets network errors propagate; wrap the urllib.request.urlopen call (and response.read()) in a try/except that catches urllib.error.URLError and urllib.error.HTTPError, then either raise a clearer custom exception (or re-raise with additional context) so callers get a descriptive error, and ensure any partially written files are not left on disk (use a temp file + output_path.write_bytes on success or unlink on failure); update the function docstring to list the possible exceptions (URLError/HTTPError) if you choose to document rather than convert them.

`scripts/score_track_render.py` (2)
35-40: Default source path assumes execution from repository root.

The default `--source-path` of `Path("app/assets/tracks/albert_park.png")` is a relative path. If the script is executed from the `scripts/` directory, the path resolution will fail. Consider using `Path(__file__).parent.parent / "app" / "assets" / ...` for robustness, or document the expected working directory.

🛠️ Use script-relative path resolution

```diff
+PROJECT_ROOT = Path(__file__).resolve().parent.parent
+
 def parse_args() -> argparse.Namespace:
     parser = argparse.ArgumentParser(description="Score rendered 1-bit track quality.")
     # ... other args ...
     parser.add_argument(
         "--source-path",
         type=Path,
-        default=Path("app/assets/tracks/albert_park.png"),
+        default=PROJECT_ROOT / "app" / "assets" / "tracks" / "albert_park.png",
         help="Source colorful track image used to build semantic reference",
     )
     return parser.parse_args()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/score_track_render.py` around lines 35 - 40, The default value for the "--source-path" argument is a relative path that breaks when the script runs from a different CWD; update the parser.add_argument default for "--source-path" to construct an absolute script-relative path (using Path(__file__).resolve().parent or parent.parent) to point to the app/assets/tracks/albert_park.png file so resolution is robust regardless of working directory, keeping the argument name ("--source-path") and parser.add_argument usage intact.
44-56: Consider adding source path validation and error handling.

The `main` function will fail with a generic PIL error if `source_path` doesn't exist, and with a `URLError` if the endpoint is unreachable. Explicit validation would provide clearer error messages for common failure modes.

🛡️ Add validation

```diff
 def main() -> int:
     args = parse_args()
+    if not args.source_path.exists():
+        print(f"Error: Source image not found: {args.source_path}")
+        return 1
+
     reference = build_track_semantic_reference(args.source_path)
-    rendered = fetch_calendar_render(args.endpoint_url, args.tz)
+    try:
+        rendered = fetch_calendar_render(args.endpoint_url, args.tz)
+    except Exception as e:
+        print(f"Error: Failed to fetch from endpoint: {e}")
+        return 1
+
     metrics = evaluate_rendered_semantic_quality(rendered, reference)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/score_track_render.py` around lines 44-56: the main function lacks validation and error handling for missing source files and network failures: before calling build_track_semantic_reference(args.source_path) check that args.source_path.exists() and is_file(), and raise/print a clear error and return non-zero if not; wrap calls to build_track_semantic_reference and fetch_calendar_render in try/except to catch filesystem/IOErrors and network exceptions (e.g., URLError/requests.exceptions.RequestException) and emit descriptive messages; also wrap rendered.save(args.save_preview) in a try/except to handle PIL/IO errors, ensure args.save_preview.parent.mkdir is guarded and errors are handled, and return a non-zero exit code on failure so main reports failure instead of crashing.

`tests/test_renderer.py` (1)

955-974: Tests verify remote-first loading correctly, but consider adding an integration test.

The test properly validates that:

- `get_asset_client()` is called with the expected variant `"1bit"`
- The circuit ID is correctly passed to the remote client
- The remote image is returned when available

However, per coding guidelines, renderer changes should include tests verifying exact BMP properties (800x480, mode="1", format="BMP"). While this unit test appropriately isolates `_load_track_image`, consider adding a complementary integration test that renders a full calendar with mocked remote assets and verifies the final BMP output properties.

As per coding guidelines: "All renderer changes MUST include tests in `tests/test_renderer.py` that verify exact BMP properties: 800x480, mode='1', format='BMP'."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_renderer.py` around lines 955 - 974, Add an integration test in tests/test_renderer.py that exercises the Renderer end-to-end (using a mocked remote asset client via get_asset_client or monkeypatch) to render a full calendar and assert the produced image has exact BMP properties: size (800, 480), mode "1", and format "BMP"; reuse the existing approach that patches get_asset_client and call the public rendering entrypoint on Renderer (rather than only _load_track_image) to generate the final output, then open/inspect the resulting Image object and assert image.size == (800, 480), image.mode == "1" and image.format == "BMP".
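The suggested integration test boils down to three assertions on the rendered bytes. A self-contained sketch of just those checks; the inline 1-bit canvas stands in for the real `Renderer.render_calendar` output, whose signature is not shown in this PR:

```python
from io import BytesIO

from PIL import Image

# Stand-in for renderer output: in a real test these bytes would come from
# Renderer.render_calendar(mock_race_data, ...) with get_asset_client patched.
canvas = Image.new("1", (800, 480), 1)  # 1-bit canvas, white background
buffer = BytesIO()
canvas.save(buffer, format="BMP")
bmp_bytes = buffer.getvalue()

# The three BMP property assertions the coding guidelines require.
image = Image.open(BytesIO(bmp_bytes))
assert image.size == (800, 480)
assert image.mode == "1"
assert image.format == "BMP"
```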
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/services/asset_client.py`:
- Around line 97-106: Convert the blocking HTTP calls to async: change
get_manifest and _fetch_binary_bytes in app/services/asset_client.py to async
functions that use httpx.AsyncClient with await (using async with
httpx.AsyncClient(...)) and response = await client.get(...); update their
return paths to await response.aread()/response.json() as appropriate and
preserve the cache logic; then update all callers — notably get_track_image, any
usages in renderer.render_calendar and spectra6_renderer.render_calendar, and
the async _render_calendar flow — to await the new async methods (propagate
async up the call chain where necessary) so no synchronous httpx.Client calls
remain. Ensure headers method and timeout usage remain compatible and that
exception handling still falls back to reading the cache.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3856c24a-c0fb-468e-bdd3-1039e172bde5
⛔ Files ignored due to path filters (3)
- `app/assets/tracks/albert_park.png` is excluded by `!**/*.png`
- `app/assets/tracks_processed/albert_park.bmp` is excluded by `!**/*.bmp`
- `app/assets/tracks_spectra6/albert_park.bmp` is excluded by `!**/*.bmp`
📒 Files selected for processing (13)
- `.env.example`
- `app/config.py`
- `app/services/asset_client.py`
- `app/services/renderer.py`
- `app/services/spectra6_renderer.py`
- `docs/track-conversion-log.md`
- `scripts/convert_track_assets.py`
- `scripts/score_track_render.py`
- `scripts/search_track_1bit_layered_parallel.py`
- `scripts/search_track_1bit_params.py`
- `scripts/track_conversion_utils.py`
- `tests/test_config.py`
- `tests/test_renderer.py`
✅ Files skipped from review due to trivial changes (1)
- docs/track-conversion-log.md
📜 Review details
🧰 Additional context used
📓 Path-based instructions (4)
`app/**/*.py`

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

`app/**/*.py`:
- Always use `translator.get(key, fallback)` pattern for translations. Session names use prefix `session_` (e.g., `session_race`, `session_qualifying`). See translations/en.json for all keys.
- All configuration values must come from environment variables via `app/config.py`. Never hardcode API URLs, display dimensions, or monitoring DSNs; use `config.JOLPICA_API_URL`, `config.DISPLAY_WIDTH/HEIGHT`, `config.SENTRY_DSN`.
- Feature flags like `UMAMI_ENABLED` control optional services; code must handle disabled features gracefully without raising exceptions.

Files:
- `app/services/asset_client.py`
- `app/services/spectra6_renderer.py`
- `app/services/renderer.py`
- `app/config.py`
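A toy illustration of the `translator.get(key, fallback)` rule above; the `Translator` class here is a stand-in, not the project's implementation:

```python
class Translator:
    """Minimal stand-in: looks up a translation table with a caller fallback."""

    def __init__(self, table: dict[str, str]):
        self.table = table

    def get(self, key: str, fallback: str) -> str:
        # Never raise on a missing key; the fallback keeps rendering alive.
        return self.table.get(key, fallback)


translator = Translator({"session_race": "Race", "session_qualifying": "Qualifying"})
race_label = translator.get("session_race", "Race")        # found in table
sprint_label = translator.get("session_sprint", "Sprint")  # missing -> fallback
```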
`app/services/*.py`

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

`app/services/*.py`:
- Use `httpx.AsyncClient` for HTTP calls, never `requests`. Analytics tracking is fire-and-forget: `asyncio.create_task(_send_analytics(...))`
- Session schedule order matters: always sort by datetime with `sort(key=lambda x: x["datetime"])`

Files:
- `app/services/asset_client.py`
- `app/services/spectra6_renderer.py`
- `app/services/renderer.py`
`app/services/*renderer.py`

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

`app/services/*renderer.py`:
- All images MUST be 1-bit mode (`Image.new("1", (800, 480), 1)`) for E-Ink compatibility. Never use "L" or "RGB" modes.
- When drawing on 1-bit images, use `fill=0` for black and `fill=1` for white. Never hardcode coordinates; use pixel-precise layout constants from the `self.layout` dict.
- Session types MUST use `session_` prefix in translation keys: `session_fp1`, `session_fp2`, `session_fp3`, `session_qualifying`, `session_sprint`, `session_race`.
- The `_draw_track_map()` method creates stylized placeholder graphics using rounded rectangles with checkered accent stripes. All dimensions come from `self.layout` constants. Handle missing circuit data gracefully via `_get_circuit_details()` using `.get()` with defaults.
- Font loading must account for fallback to default if DejaVuSans is missing. Test in Docker, not just locally.
- BMP format is little-endian, 1-bit depth; do not manually construct headers. Use Pillow's `Image.save()` with BMP format.

Files:
- `app/services/spectra6_renderer.py`
- `app/services/renderer.py`
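As an illustration of the 1-bit constraints above, a minimal Pillow sketch; the layout dict and margin value are made up, and real renderers read them from `self.layout`:

```python
import tempfile
from pathlib import Path

from PIL import Image, ImageDraw

LAYOUT = {"margin": 10}  # illustrative; real code uses self.layout constants

image = Image.new("1", (800, 480), 1)  # 1-bit canvas, 1 = white for E-Ink
draw = ImageDraw.Draw(image)
m = LAYOUT["margin"]
draw.rectangle((m, m, 799 - m, 479 - m), outline=0)  # 0 = black on 1-bit images

out_path = Path(tempfile.mkdtemp()) / "example.bmp"
image.save(out_path, format="BMP")  # Pillow writes the little-endian 1-bit header
```

Letting `Image.save()` emit the BMP avoids the header-construction pitfalls the guideline warns about.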
tests/test_renderer.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
tests/test_renderer.py: All renderer changes MUST include tests intests/test_renderer.pythat verify exact BMP properties: 800x480, mode="1", format="BMP". Usemock_race_datafixture pattern.
Test both Czech (cs) and English (en) translations by following thetest_render_calendar_czechpattern in test cases.
Files:
tests/test_renderer.py
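The test pattern described above can be condensed into a self-contained sketch. The fixture fields and the stand-in renderer are assumptions made for illustration; the real `mock_race_data` fixture and `render_calendar` live in the repository.

```python
import io

from PIL import Image

def mock_race_data() -> dict:
    # Hypothetical stand-in for the repo's mock_race_data fixture; the field
    # names are illustrative, not the project's actual schema.
    return {"race_name": "Australian Grand Prix", "circuit_id": "albert_park"}

def render_calendar(data: dict, lang: str = "en") -> bytes:
    # Stand-in renderer: the real one draws text and track art before saving.
    img = Image.new("1", (800, 480), 1)
    buf = io.BytesIO()
    img.save(buf, format="BMP")
    return buf.getvalue()

def bmp_properties(raw: bytes) -> tuple:
    out = Image.open(io.BytesIO(raw))
    return out.size, out.mode, out.format

# Exercise both locales, mirroring the test_render_calendar_czech pattern.
results = {
    lang: bmp_properties(render_calendar(mock_race_data(), lang))
    for lang in ("cs", "en")
}
```

Each locale's output is checked against the exact triple `((800, 480), "1", "BMP")`, matching the guideline's required assertions.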
🧠 Learnings (4)
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to app/services/*renderer.py : The `_draw_track_map()` method creates stylized placeholder graphics using rounded rectangles with checkered accent stripes. All dimensions come from `self.layout` constants. Handle missing circuit data gracefully via `_get_circuit_details()` using `.get()` with defaults.
Applied to files:
`app/services/spectra6_renderer.py`, `app/services/renderer.py`
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to tests/test_renderer.py : All renderer changes MUST include tests in `tests/test_renderer.py` that verify exact BMP properties: 800x480, mode="1", format="BMP". Use `mock_race_data` fixture pattern.
Applied to files:
tests/test_renderer.py
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to app/services/*renderer.py : Font loading must account for fallback to default if DejaVuSans is missing. Test in Docker, not just locally.
Applied to files:
tests/test_renderer.py
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to tests/test_renderer.py : Test both Czech (`cs`) and English (`en`) translations by following the `test_render_calendar_czech` pattern in test cases.
Applied to files:
tests/test_config.py
🪛 dotenv-linter (4.0.0)
.env.example
[warning] 70-70: [UnorderedKey] The ASSET_API_TOKEN key should go before the ASSET_API_URL key
(UnorderedKey)
🔇 Additional comments (21)
.env.example (1)
65-73: LGTM! The new asset API configuration block is well-documented with sensible defaults that align with existing patterns in the file (e.g., the `/app/data/` path prefix). The static analysis hint about alphabetical ordering of `ASSET_API_TOKEN` vs `ASSET_API_URL` is a stylistic preference that doesn't affect functionality.
scripts/search_track_1bit_params.py (1)
1-166: LGTM! The script is well-structured for parameter optimization. Key observations:
- `sample_params()` correctly provides all 15 required `Track1BitParams` fields
- Exception handling in the fetch loop (lines 124-128) appropriately continues on failures
- Graceful handling when no successful trials are produced (lines 149-151)
- The scoring comparison `if score > best_score` correctly retains higher-scoring candidates per the `score_track_metrics` function semantics
app/config.py (2)
114-131: LGTM! The new asset configuration fields follow established patterns:
- `ASSET_API_URL` with `Optional[str]` and graceful fallback to `None` on invalid URLs
- `ASSET_CACHE_TTL_HOURS` with a `gt=0` constraint ensuring positive values
- Default paths consistent with existing Docker container paths
270-311: LGTM! The validators follow the established patterns in this file:
- `validate_asset_api_url` mirrors the `validate_s3_endpoint` approach with graceful degradation
- `normalize_asset_api_token` properly handles whitespace and empty strings
- `validate_asset_cache_ttl` follows the `validate_timeout` pattern
tests/test_config.py (1)
17-18: LGTM! Good test coverage for the new asset configuration:
- Verifies default values (`ASSET_API_URL=None`, `ASSET_CACHE_TTL_HOURS=24`)
- Tests that invalid values (`"not-a-url"`, `"0"`) fall back to safe defaults
Also applies to: 29-30, 41-42
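The graceful-degradation behaviour these validators and tests describe can be sketched without pydantic. The function names echo the ones reviewed above, but the bodies and the default TTL handling are assumptions for illustration only.

```python
from typing import Optional
from urllib.parse import urlparse

def validate_asset_api_url(value: Optional[str]) -> Optional[str]:
    # Invalid input becomes None instead of raising, so a bad env var
    # cannot crash application startup.
    if not value:
        return None
    parsed = urlparse(value)
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return value.rstrip("/")
    return None

def validate_asset_cache_ttl(value: str, default: int = 24) -> int:
    # TTL must be a positive integer; anything else falls back to the default.
    try:
        ttl = int(value)
    except (TypeError, ValueError):
        return default
    return ttl if ttl > 0 else default
```

This mirrors the test expectations: `"not-a-url"` yields `None` and `"0"` yields the default of 24 rather than an exception.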
scripts/convert_track_assets.py (1)
82-125: LGTM! The main function has clean control flow:
- Proper URL resolution with fallback to defaults
- Clear error handling for missing source files
- Conditional execution based on skip flags
- Informative output with conversion statistics
app/services/asset_client.py (1)
143-171: LGTM: Robust fallback logic in `get_track_image`. The method correctly:
- Returns `None` early if the client is disabled
- Iterates through normalized candidate IDs
- Validates variants against the manifest when available
- Falls back gracefully on decode failures with appropriate logging
- Returns `None` only after exhausting all candidates
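A condensed sketch of that fallback flow is shown below. The real method's signature differs; `fetch` here is a hypothetical stand-in for the manifest lookup and binary decode.

```python
from typing import Callable, Iterable, Optional

def get_track_image(
    candidate_ids: Iterable[str],
    fetch: Callable[[str], Optional[bytes]],
    enabled: bool = True,
) -> Optional[bytes]:
    # Disabled client: return None immediately, never raise.
    if not enabled:
        return None
    seen = set()
    for cid in candidate_ids:
        cid = cid.strip().lower()  # normalize candidate IDs
        if not cid or cid in seen:
            continue
        seen.add(cid)
        try:
            data = fetch(cid)
        except Exception:
            continue  # fetch/decode failure: fall through to the next candidate
        if data is not None:
            return data
    return None  # exhausted all candidates
```

The caller (a renderer) treats `None` as "use the local BMP/PNG asset instead", which is what keeps remote assets strictly optional.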
1023-1032: LGTM: Remote asset fetch with local fallback. The implementation correctly:
- Builds candidate IDs including mapped variants and lowercase forms
- Attempts the remote fetch first via `get_asset_client().get_track_image()`
- Falls back to local BMP/PNG loading if remote returns `None`
- Maintains backward compatibility with existing local asset resolution
982-983: Good: Explicit `int()` casts for paste coordinates. The explicit casts ensure pixel-precise positioning and prevent potential type errors if the arithmetic produces floats.
app/services/spectra6_renderer.py (2)
327-336: LGTM: Remote asset fetch for the Spectra6 variant. The implementation correctly:
- Builds candidate IDs with normalized, original, and lowercase variants
- Requests the `"spectra6"` variant from the asset client
- Falls back to local BMP loading when remote returns `None`
291-295: LGTM: Type-safe paste coordinates. The explicit `int()` casts and tuple creation ensure correct types for `image.paste()`. The `.convert("RGB")` on line 295 properly handles images that may not already be in RGB mode.
21-21: LGTM! The import alias for the `spectra6_renderer` module enables proper mocking of `get_asset_client` at the correct namespace where it's used in the production code.
1044-1061: Spectra6 remote asset test follows the correct pattern. The test appropriately:
- Uses the module-level monkeypatch target (`spectra6_renderer_module.get_asset_client`)
- Verifies the `"spectra6"` variant is requested
- Checks circuit ID inclusion in the request
Same consideration applies here regarding integration-level BMP verification for completeness.
scripts/search_track_1bit_layered_parallel.py (4)
45-47: Global mutable state for worker initialization is appropriate. Using module-level globals with `ProcessPoolExecutor`'s initializer pattern is the standard approach for sharing read-only data across worker processes. The data is initialized once per worker and remains constant during execution.
183-234: Connected components implementation is correct. The BFS-based 4-neighborhood connected component labeling is implemented correctly. Using `queue.pop()` (LIFO) instead of `queue.popleft()` (FIFO) gives DFS traversal order, but for connected component extraction the traversal order doesn't affect correctness: only the final set of connected pixels matters.
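The labeling scheme the review describes can be sketched as follows, using `deque.popleft()` for true BFS; as noted above, swapping in `pop()` would give DFS order with the same final components. The grid representation is an assumption (a list of 0/1 rows), not the script's actual data structure.

```python
from collections import deque

def connected_components(grid):
    # 4-neighborhood component labeling over a binary grid (rows of 0/1).
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != 1 or seen[sy][sx]:
                continue
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            pixels = []
            while queue:
                y, x = queue.popleft()  # FIFO -> BFS; pop() would give DFS
                pixels.append((y, x))
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 1 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            components.append(pixels)
    return components
```

Marking pixels as seen at enqueue time (rather than dequeue time) keeps each pixel in the queue at most once, which bounds memory on large track masks.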
627-662: Finalist verification modifies the shared output path repeatedly. Each finalist iteration writes to `output_path` before fetching the render. This is intentional for the verification workflow (testing each candidate against the live endpoint), but be aware that if the script is interrupted, the output file may contain an intermediate (non-best) candidate. The final write at line 661 ensures the best result is persisted, which is correct.
767-768: LGTM! Clean entry point using the `raise SystemExit(main())` pattern for proper exit code propagation.
1-6: LGTM! Clear module docstring explaining the script-friendly design with no app imports. This enables standalone usage during asset iteration workflows.
339-392: Proper BFS using deque for connected components. This implementation correctly uses `deque.popleft()` for FIFO traversal, providing true BFS behavior. The component statistics (bounding box, area, fill ratio) are accurately computed.
846-857: Size validation provides clear error messaging. The explicit check that cropped dimensions match `reference.preview_size` will catch configuration mismatches early with a descriptive error message.
945-959: URL construction correctly handles existing query parameters. The function properly parses, filters, and reconstructs the URL with the timezone parameter, avoiding duplicate `tz` keys in the query string.
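A minimal sketch of that parse-filter-reconstruct approach, using only the standard library; the function name and signature are assumptions, not the script's actual API.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def with_timezone(url: str, tz: str) -> str:
    # Replace any existing tz parameter rather than appending a duplicate.
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "tz"]
    query.append(("tz", tz))
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Filtering the parsed pairs before re-encoding is what guarantees at most one `tz` key, whatever the input URL already contained.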
```python
try:
    with httpx.Client(timeout=self.timeout, follow_redirects=True) as client:
        response = client.get(url, headers=self._headers())
        response.raise_for_status()
        cache_path.parent.mkdir(parents=True, exist_ok=True)
        cache_path.write_text(response.text)
        return response.json()
except Exception as exc:
    logger.debug("Failed to fetch track manifest for %s: %s", circuit_id, exc)
    return self._read_json(cache_path) if cache_path.exists() else None
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if renderers or asset client are called from async contexts
rg -n "async def.*render|await.*render|await.*get_track_image|await.*get_asset_client" app/

Repository: Rhiz3K/InkyCloud-F1
Length of output: 194
🏁 Script executed:
#!/bin/bash
# Search for calls to AssetClient methods from async functions
rg -n "get_track_image|get_asset_client|AssetClient" app/routes/ app/main.py -A 2

Repository: Rhiz3K/InkyCloud-F1
Length of output: 45
🏁 Script executed:
#!/bin/bash
# Look at the _render_calendar function to see if it uses AssetClient
sed -n '238,400p' app/routes/images.py | head -80

Repository: Rhiz3K/InkyCloud-F1
Length of output: 3078
🏁 Script executed:
#!/bin/bash
# Check all httpx.Client (sync) usage in app/services to see if it's consistent
rg -n "httpx\.Client\(" app/services/

Repository: Rhiz3K/InkyCloud-F1
Length of output: 299
🏁 Script executed:
#!/bin/bash
# Search for AssetClient imports and usage across entire codebase
rg -n "AssetClient|from.*asset_client|import.*asset_client" app/ --type py

Repository: Rhiz3K/InkyCloud-F1
Length of output: 418
🏁 Script executed:
#!/bin/bash
# Check if AssetClient class is even instantiated anywhere
rg -n "AssetClient\(" app/ --type py

Repository: Rhiz3K/InkyCloud-F1
Length of output: 120
🏁 Script executed:
#!/bin/bash
# Look for all functions in asset_client.py to understand the full scope
rg -n "^\s*def |^\s*async def " app/services/asset_client.py

Repository: Rhiz3K/InkyCloud-F1
Length of output: 927
🏁 Script executed:
#!/bin/bash
# Check how get_asset_client is used in spectra6_renderer.py and renderer.py
rg -n "get_asset_client\(\)" app/services/spectra6_renderer.py app/services/renderer.py -B 2 -A 3

Repository: Rhiz3K/InkyCloud-F1
Length of output: 911
🏁 Script executed:
#!/bin/bash
# Check if renderer methods are called from async contexts
rg -n "renderer\.render_calendar\(|render_calendar" app/routes/images.py -B 2 -A 2

Repository: Rhiz3K/InkyCloud-F1
Length of output: 531
🏁 Script executed:
#!/bin/bash
# Look for get_track_image calls in renderers
rg -n "get_track_image\(" app/services/ --type py -B 2 -A 2

Repository: Rhiz3K/InkyCloud-F1
Length of output: 1246
🏁 Script executed:
#!/bin/bash
# Check if render_calendar method is async or sync
rg -n "def render_calendar|async def render_calendar" app/services/renderer.py app/services/spectra6_renderer.py -B 2 -A 5

Repository: Rhiz3K/InkyCloud-F1
Length of output: 1023
🏁 Script executed:
#!/bin/bash
# Check the full context of where get_track_image is called in renderer methods
sed -n '1020,1035p' app/services/renderer.py

Repository: Rhiz3K/InkyCloud-F1
Length of output: 613
Use httpx.AsyncClient instead of httpx.Client.
The coding guidelines explicitly require: "Use httpx.AsyncClient for HTTP calls, never requests" for app/services/*.py. This module uses synchronous httpx.Client() at lines 98 and 118, which violates the guideline.
Additionally, get_track_image() is called from within renderer.render_calendar() (synchronous methods in app/services/renderer.py and app/services/spectra6_renderer.py), which are invoked from the async _render_calendar() function. Using blocking HTTP calls in this chain can block the event loop.
Convert get_manifest() and _fetch_binary_bytes() to async methods using httpx.AsyncClient, and update the callers accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/asset_client.py` around lines 97 - 106, Convert the blocking
HTTP calls to async: change get_manifest and _fetch_binary_bytes in
app/services/asset_client.py to async functions that use httpx.AsyncClient with
await (using async with httpx.AsyncClient(...)) and response = await
client.get(...); update their return paths to await
response.aread()/response.json() as appropriate and preserve the cache logic;
then update all callers — notably get_track_image, any usages in
renderer.render_calendar and spectra6_renderer.render_calendar, and the async
_render_calendar flow — to await the new async methods (propagate async up the
call chain where necessary) so no synchronous httpx.Client calls remain. Ensure
headers method and timeout usage remain compatible and that exception handling
still falls back to reading the cache.
Summary
- Update `app/assets/tracks/albert_park.png` and regenerate `app/assets/tracks_processed/albert_park.bmp` from the latest source.
- Trigger `.github/workflows/sync-main-to-dev.yml` on published releases so `dev` is synced from `main` after each release.
- Add a `1.2.7` entry to `CHANGELOG.md` covering the Albert Park refresh and sync timing change.
Testing
Summary by CodeRabbit
New Features
Documentation
Chores
Tests