
chore: update Albert Park assets and release sync trigger #35

Open
Rhiz3K wants to merge 4 commits into main from
release/v1.2.7-albert-park

Conversation

@Rhiz3K
Owner

@Rhiz3K Rhiz3K commented Mar 5, 2026

Summary

  • Update app/assets/tracks/albert_park.png and regenerate app/assets/tracks_processed/albert_park.bmp from the latest source.
  • Trigger .github/workflows/sync-main-to-dev.yml on published releases so dev is synced from main after each release.
  • Add 1.2.7 entry to CHANGELOG.md covering the Albert Park refresh and sync timing change.

Testing

  • Not run locally (asset/docs/CI workflow changes only).
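The workflow change described above can be sketched as follows. This is an illustrative fragment, not the repository's actual file: the job name, runner label, step id, and output name are assumptions.

```yaml
# .github/workflows/sync-main-to-dev.yml — trigger portion (sketch)
on:
  release:
    types: [published]   # previously: push: branches: [main]

jobs:
  sync:
    runs-on: ubuntu-latest   # assumption; the actual runner may differ
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      # On release events context.sha points at the tagged commit, so the
      # main HEAD actually being merged is recorded explicitly.
      - name: Capture main SHA
        id: shas
        run: |
          git fetch origin main
          echo "main_sha=$(git rev-parse origin/main)" >> "$GITHUB_OUTPUT"
```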

Summary by CodeRabbit

  • New Features

    • Remote track asset support with local caching and graceful fallback.
    • New CLI tools to convert, score, and search/optimize track artwork and generate previews.
  • Documentation

    • Added track conversion log and updated changelog entry for v1.2.7.
  • Chores

    • Added environment variables for asset API, caching and prefetch controls.
    • Adjusted sync workflow to trigger on published releases and record synced SHAs.
  • Tests

    • Added tests verifying remote-then-local asset loading behavior.

Copilot AI review requested due to automatic review settings March 5, 2026 10:27
@coderabbitai
Contributor

coderabbitai bot commented Mar 5, 2026

📝 Walkthrough

Walkthrough

Remote asset support and tooling were added: a new AssetClient with caching is introduced and integrated into the renderers; several track-conversion and scoring scripts and utilities were added; the CI workflow trigger was changed to release.published; and the changelog, env examples, tests, and docs were updated accordingly.

Changes

Cohort / File(s) Summary
Workflow
.github/workflows/sync-main-to-dev.yml
Changed trigger from push on main to release (published). Capture/export main_sha and include release commit SHA and main HEAD in PR body.
Configuration & Env
app/config.py, .env.example
Added asset-related config fields (ASSET_API_URL, ASSET_API_TOKEN, ASSET_CACHE_DIR, ASSET_CACHE_TTL_HOURS, ASSET_PREFETCH_ENABLED) with validation and defaults; appended env example variables.
Asset service
app/services/asset_client.py
New AssetClient implementing remote fetch, disk caching (manifest + binaries), TTL handling, manifest lookup and image decoding; get_asset_client() and test reset helper added.
Renderer integration
app/services/renderer.py, app/services/spectra6_renderer.py
Integrated remote asset lookup before local asset fallback; added candidate ID normalization and integer-safe paste coordinates.
Track conversion tooling
scripts/track_conversion_utils.py, scripts/convert_track_assets.py, scripts/score_track_render.py, scripts/search_track_1bit_params.py, scripts/search_track_1bit_layered_parallel.py
Large new utilities and CLI scripts for downloading, converting (1-bit & Spectra6), semantic reference construction, scoring, parameter search, and parallel layered-search tuner; many image-processing, evaluation and scoring routines added.
Docs & Logs
docs/track-conversion-log.md, CHANGELOG.md
Added long-form track-conversion log and a new changelog entry (1.2.7, 2026-03-05) documenting backend/asset changes.
Tests
tests/test_config.py, tests/test_renderer.py
Added tests for new config defaults/validation and for renderer behavior to prefer remote asset client before local fallback.
Misc
scripts/* (new scripts)
Multiple new large scripts added for search, scoring, and conversion workflows (see file list above).
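The asset-related environment variables listed in the table above might look like this in .env.example; the values shown are placeholders, not the project's actual defaults:

```
# Remote asset service (illustrative values)
ASSET_API_URL=https://assets.example.com/v1
ASSET_API_TOKEN=
ASSET_CACHE_DIR=.cache/assets
ASSET_CACHE_TTL_HOURS=24
ASSET_PREFETCH_ENABLED=false
```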

Sequence Diagram(s)

sequenceDiagram
    participant Renderer
    participant AssetClient
    participant RemoteAPI
    participant LocalCache

    Renderer->>AssetClient: get_track_image(candidate_ids, variant)
    AssetClient->>AssetClient: is_enabled()?
    alt enabled
        AssetClient->>RemoteAPI: GET /v1/tracks/{id}/manifest
        RemoteAPI-->>AssetClient: manifest (variants, metadata) / error
        alt manifest OK and variant supported
            AssetClient->>RemoteAPI: GET /v1/tracks/{id}/asset?variant=...
            RemoteAPI-->>AssetClient: binary bytes
            AssetClient->>LocalCache: write manifest + binary (cache)
            AssetClient->>AssetClient: decode bytes -> PIL Image
            AssetClient-->>Renderer: PIL Image
        else manifest missing or fetch error
            AssetClient->>LocalCache: read cached manifest/binary
            alt cache hit and fresh
                AssetClient->>AssetClient: decode cached bytes -> PIL Image
                AssetClient-->>Renderer: PIL Image
            else
                AssetClient-->>Renderer: None
            end
        end
    else disabled
        AssetClient-->>Renderer: None
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through code and fetched a sprite,
Cached a manifest by day and night,
If remote fails I snuggle the cache,
Renderers rejoice with a soft little crash—
Tracks now travel far, stitched tight and right. 🎨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 53.40% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Description check ❓ Inconclusive The PR description is incomplete; it lacks the Testing and Screenshots sections specified in the template, and provides minimal testing detail. Add the Testing section with specific test commands or validation steps, and include the Checklist section with appropriate items marked.
✅ Passed checks (1 passed)
Check name Status Explanation
Title check ✅ Passed The PR title accurately reflects the main changes: Albert Park asset updates and release-triggered sync workflow modification.


@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on maintenance and workflow enhancements. It updates a specific track asset to its most current version and refines the continuous integration process by modifying the timing of the main to dev branch synchronization to occur post-release.

Highlights

  • Albert Park Assets Updated: The albert_park.png asset has been replaced with the latest map version, and its processed counterpart, albert_park.bmp, has been regenerated accordingly.
  • Release Sync Trigger Adjusted: The .github/workflows/sync-main-to-dev.yml workflow has been configured to trigger specifically after published releases, ensuring the dev branch is synchronized from main following each new release.
Changelog
  • CHANGELOG.md
    • Added a new entry for version 1.2.7.
    • Documented the update of Albert Park track assets.
    • Recorded the change in main-to-dev sync timing.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/sync-main-to-dev.yml

@deepsource-io

deepsource-io bot commented Mar 5, 2026

DeepSource Code Review

We reviewed changes in b20507b...c785174 on this pull request. Below is the summary for the review, and you can see the individual issues we found as inline review comments.

See full review on DeepSource ↗

PR Report Card

Overall Grade and the per-area grades (Security — the focus area — Reliability, Complexity, Hygiene) are rendered as badges in the original comment and were not captured in this export.

Feedback

  • Insecure temporary resource handling
    • Multiple scripts create/open temp files and accept file: (and other) schemes without validation, causing race conditions and unintended local-file access; centralize secure tempfile usage (mkstemp/NamedTemporaryFile) and whitelist/normalize URL schemes to stop it.
  • Uninitialized, module-level mutable state
    • Globals like SOURCEDATA are referenced or mutated via global statements with no clear initializer, making behavior order-dependent; encapsulate state in explicit objects or initializer functions and remove module-level mutable state to break the pattern.
  • Accumulated refactor drift and dead code
    • Redundant lambdas, unused imports and instance methods that don't use self indicate copy‑paste growth and neglected refactors; remove dead code, convert instance-free methods to static/module functions, and perform targeted refactors to prevent recurrence.

Code Review Summary

Analyzer Status Updated (UTC) Details
Python Mar 7, 2026 8:39p.m. Review ↗
Docker Mar 7, 2026 8:39p.m. Review ↗
Secrets Mar 7, 2026 8:39p.m. Review ↗

@codspeed-hq
Contributor

codspeed-hq bot commented Mar 5, 2026

Merging this PR will not alter performance

✅ 7 untouched benchmarks


Comparing release/v1.2.7-albert-park (c785174) with main (b20507b)

Open in CodSpeed

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request updates the Albert Park track assets, modifies the CI sync workflow trigger, and adds a corresponding entry to the changelog. The changes appear correct, but I've identified a minor formatting issue in CHANGELOG.md where a heading is redundant. I've provided a suggestion to improve the readability of the changelog.

Note: Security Review has been skipped due to the limited scope of the PR.

@greptile-apps

greptile-apps bot commented Mar 5, 2026

Greptile Summary

This PR refreshes the Albert Park circuit assets (PNG source and pre-processed BMP) and changes the sync-main-to-dev workflow trigger from every push to main to only published GitHub Releases, so the dev branch is synced once per release rather than on every merge.

  • Asset update – app/assets/tracks/albert_park.png and app/assets/tracks_processed/albert_park.bmp are binary replacements; no code impact.
  • Workflow trigger change – Switching from push: branches: [main] to release: types: [published] intentionally reduces sync frequency. Note that any commits merged to main between releases will not trigger an automatic sync to dev until the next release is published.
  • context.sha accuracy – For a release event, context.sha resolves to the tagged commit, while the merge step pulls from origin/main directly. If commits arrive on main after the tag is created but before the runner executes, the PR body will cite a stale SHA. Worth addressing for traceability.
  • CHANGELOG – 1.2.7 entry follows the established format and correctly documents both changes.

Confidence Score: 4/5

  • This PR is safe to merge; changes are low-risk asset updates and a straightforward CI trigger adjustment.
  • All four files are either binary assets or a small workflow/changelog edit. The workflow logic itself is unchanged — only the trigger event was updated. The one minor concern (stale context.sha in the PR body) is purely informational and does not affect the correctness of the sync operation.
  • .github/workflows/sync-main-to-dev.yml deserves a quick read to confirm the context.sha traceability trade-off is acceptable.

Important Files Changed

Filename Overview
.github/workflows/sync-main-to-dev.yml Trigger changed from push: branches: [main] to release: types: [published]; body copy updated accordingly. Minor informational inaccuracy: context.sha now reflects the release tag's commit rather than the origin/main HEAD that is actually merged.
CHANGELOG.md 1.2.7 entry added following the established format (heading + <details> block); content accurately describes both changes in this PR.
app/assets/tracks/albert_park.png Binary asset replaced with an updated Albert Park circuit map PNG; no code-level concerns.
app/assets/tracks_processed/albert_park.bmp Regenerated BMP from the updated PNG source; binary-only change with no code-level concerns.

Sequence Diagram

sequenceDiagram
    participant GH as GitHub
    participant WF as sync-main-to-dev workflow
    participant Runner as Self-hosted Runner
    participant Main as origin/main
    participant Dev as origin/dev

    Note over GH,Dev: Trigger: release published (was: push to main)

    GH->>WF: release.published event (context.sha = tag commit)
    WF->>Runner: Start job
    Runner->>Main: git fetch origin main dev
    Runner->>Dev: git checkout -B sync/main-to-dev origin/dev
    Runner->>Main: git merge --no-ff origin/main

    alt No changes / already up to date
        Runner-->>WF: has_changes=false → skip push & PR
    else Merge conflict
        Runner-->>WF: has_conflict=true → fail job
    else Changes detected
        Runner->>Dev: git push --force-with-lease origin sync/main-to-dev
        Runner->>GH: Create / update sync PR (head=sync/main-to-dev, base=dev)
        Runner->>GH: Trigger CI workflow_dispatch on sync/main-to-dev
        Runner->>GH: Enable auto-merge (MERGE method) on sync PR
    end

Last reviewed commit: dd2130e

Copilot AI left a comment

Pull request overview

This PR makes three maintenance changes: updating the pre-processed Albert Park track BMP asset, changing the sync-main-to-dev workflow trigger from push on main to release: published, and adding a 1.2.7 CHANGELOG entry documenting both changes.

Changes:

  • Replaces the pre-processed 1-bit BMP for Albert Park (app/assets/tracks_processed/albert_park.bmp) with a refreshed version.
  • Switches the sync-main-to-dev workflow trigger from every push to main to only fire on published GitHub releases, reducing noise.
  • Adds the 1.2.7 changelog entry covering both changes.

Reviewed changes

Copilot reviewed 2 out of 4 changed files in this pull request and generated 1 comment.

File Description
app/assets/tracks_processed/albert_park.bmp Updated pre-processed 1-bit BMP track asset for Albert Park
.github/workflows/sync-main-to-dev.yml Changed workflow trigger to release: published and updated PR body text to match
CHANGELOG.md Added 1.2.7 entry documenting the asset refresh and sync timing change

Contributor

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
CHANGELOG.md (1)

8-18: Missing ### Backend heading for consistency.

The 1.2.7 entry is missing the ### Backend heading that precedes the <details> block in other entries (see 1.2.6 at lines 24-27). This causes the heading level to jump from ## (h2) to #### (h4), which is flagged by markdownlint (MD001).

📝 Proposed fix
 ## [1.2.7] - 2026-03-05
 
+### Backend
+
 <details markdown="1">
 <summary>Backend</summary>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG.md` around lines 8 - 18, The release note for "## [1.2.7] -
2026-03-05" is missing the intermediate "### Backend" heading before the
existing <details markdown="1"> block, causing a jump from h2 to h4 and
triggering markdownlint MD001; insert a single line "### Backend" immediately
above the <details markdown="1"> block (so the sequence is "## [1.2.7]...", then
"### Backend", then the <details>...</details>) to restore consistent heading
levels for this entry.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@CHANGELOG.md`:
- Around line 8-18: The release note for "## [1.2.7] - 2026-03-05" is missing
the intermediate "### Backend" heading before the existing <details
markdown="1"> block, causing a jump from h2 to h4 and triggering markdownlint
MD001; insert a single line "### Backend" immediately above the <details
markdown="1"> block (so the sequence is "## [1.2.7]...", then "### Backend",
then the <details>...</details>) to restore consistent heading levels for this
entry.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 84a3c06f-3b25-480d-afa1-3703b15ecce0

📥 Commits

Reviewing files that changed from the base of the PR and between b20507b and b7608f9.

⛔ Files ignored due to path filters (2)
  • app/assets/tracks/albert_park.png is excluded by !**/*.png
  • app/assets/tracks_processed/albert_park.bmp is excluded by !**/*.bmp
📒 Files selected for processing (2)
  • .github/workflows/sync-main-to-dev.yml
  • CHANGELOG.md
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🧰 Additional context used
🪛 markdownlint-cli2 (0.21.0)
CHANGELOG.md

[warning] 13-13: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4

(MD001, heading-increment)

🔇 Additional comments (3)
.github/workflows/sync-main-to-dev.yml (3)

4-5: LGTM!

The trigger change from push on main to release: published aligns well with the stated goal of syncing dev after each release. This ensures the sync happens at the right moment in the release lifecycle.


29-30: Good addition for traceability.

Capturing the main_sha explicitly after fetch ensures the PR body reflects the exact commit being synced, regardless of any timing differences with the release event's context.sha.


62-70: LGTM!

The updated PR body clearly documents both the release event commit and the actual main HEAD being synced, which improves auditability for the sync operation.

Let runtime rendering prefer the asset service while keeping local fallbacks so the app can migrate without breaking existing track delivery. Sync Albert Park assets, helper scripts, and tests with the new loading path so the integration stays verifiable.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (9)
scripts/convert_track_assets.py (1)

17-20: Consider documenting URL update process.

The hardcoded F1 track URL includes a version timestamp (v1751632426), which may become stale if F1 updates their assets. Consider adding a comment noting how/when to update these URLs, or documenting this in the track conversion workflow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/convert_track_assets.py` around lines 17 - 20, DEFAULT_F1_TRACK_URLS
contains a hardcoded asset URL with a version token (v1751632426) that can
become stale; update the DEFAULT_F1_TRACK_URLS entry for "albert_park" to
include a comment explaining that the numeric version is the CDN image version
and how to refresh it (e.g., fetch the latest asset path from the F1 media
source or update the token when assets are rotated), or move the URL to
configurable storage (env/config) used by scripts/convert_track_assets.py so it
can be updated without code changes; annotate the DEFAULT_F1_TRACK_URLS constant
with this guidance and, if applicable, add a short note in the track conversion
workflow/docs about when and how to update the versioned URL.
app/services/asset_client.py (1)

78-83: Bare except Exception in _read_json may hide issues.

The catch-all exception handling silently returns None for any error, including unexpected ones like MemoryError. Consider logging at debug level or catching specific exceptions (json.JSONDecodeError, OSError).

♻️ Optional: Log debug info on read failures
     `@staticmethod`
     def _read_json(path: Path) -> dict[str, Any] | None:
         try:
             return json.loads(path.read_text())
-        except Exception:
+        except (json.JSONDecodeError, OSError) as exc:
+            logger.debug("Failed to read cached JSON %s: %s", path, exc)
             return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/services/asset_client.py` around lines 78 - 83, The helper _read_json
currently swallows all exceptions which can hide serious errors; update
AssetClient._read_json to catch only expected errors (json.JSONDecodeError and
OSError) and let other exceptions propagate, and optionally log the failure at
debug level with the path and exception; reference the method name _read_json
and the use of Path.read_text()/json.loads() when making this change.
scripts/search_track_1bit_layered_parallel.py (1)

237-261: Minor: Consider returning a copy when iterations <= 0.

When iterations <= 0, the function returns the original mask reference. If callers later modify the returned mask, they would inadvertently modify the input. While current usage appears safe, returning mask directly couples caller behavior to implementation details.

🛡️ Defensive copy for zero iterations
 def _dilate_mask(mask: list[list[bool]], iterations: int) -> list[list[bool]]:
     if iterations <= 0:
-        return mask
+        return [row[:] for row in mask]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/search_track_1bit_layered_parallel.py` around lines 237 - 261, The
function _dilate_mask should not return the original mask reference when
iterations <= 0; instead return a defensive copy to avoid callers mutating the
input. Change the early-return branch in _dilate_mask (the iterations <= 0 case)
to return a shallow copy of each row (e.g., construct a new list of lists from
mask) so the returned value is independent of the input while preserving the
same shape and boolean values.
scripts/track_conversion_utils.py (3)

302-315: IoU returns 1.0 for empty masks—document this edge case.

Returning 1.0 when union == 0 (both masks empty) treats "no pixels in either" as a perfect match. This is a reasonable semantic choice for this scoring context, but it's non-obvious. Consider adding a brief comment.

📝 Document edge case
 def _intersection_over_union(mask_a: list[list[bool]], mask_b: list[list[bool]]) -> float:
     """Compute IoU for two bool masks."""
     width, height = _mask_shape(mask_a)
     intersection = 0
     union = 0
     for y in range(height):
         for x in range(width):
             a = mask_a[y][x]
             b = mask_b[y][x]
             if a and b:
                 intersection += 1
             if a or b:
                 union += 1
+    # Empty masks are considered a perfect match (both have nothing to compare)
     return intersection / union if union else 1.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/track_conversion_utils.py` around lines 302 - 315, The function
_intersection_over_union currently returns 1.0 when union == 0 (both masks
empty); add a concise inline comment or docstring note in
_intersection_over_union explaining that this edge case treats two empty masks
as a perfect match (IoU = 1.0) so future readers understand the intentional
semantic choice.

395-417: Same defensive copy consideration applies here.

When iterations <= 0, returning the input mask directly can lead to aliasing issues if callers modify the result.

🛡️ Defensive copy
 def _dilate_mask(mask: list[list[bool]], iterations: int) -> list[list[bool]]:
     """Dilate a bool mask with a 3x3 max filter."""
     if iterations <= 0:
-        return mask
+        return [row[:] for row in mask]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/track_conversion_utils.py` around lines 395 - 417, The early return
in _dilate_mask returns the input mask directly when iterations <= 0, risking
aliasing; change that to return a defensive deep copy of the mask (i.e., a new
list of row lists) so callers cannot mutate the original via the returned value.
Locate _dilate_mask and replace the direct return with constructing and
returning a new list where each row is copied (e.g., list(row) or row.copy()) to
preserve the original mask.

117-124: Network request lacks error handling.

The download_file function will raise urllib.error.URLError or urllib.error.HTTPError on network failures, and these exceptions propagate to callers. Consider whether callers handle these appropriately, or add explicit documentation about expected exceptions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/track_conversion_utils.py` around lines 117 - 124, The download_file
function currently lets network errors propagate; wrap the
urllib.request.urlopen call (and response.read()) in a try/except that catches
urllib.error.URLError and urllib.error.HTTPError, then either raise a clearer
custom exception (or re-raise with additional context) so callers get a
descriptive error, and ensure any partially written files are not left on disk
(use a temp file + output_path.write_bytes on success or unlink on failure);
update the function docstring to list the possible exceptions
(URLError/HTTPError) if you choose to document rather than convert them.
scripts/score_track_render.py (2)

35-40: Default source path assumes execution from repository root.

The default --source-path of Path("app/assets/tracks/albert_park.png") is a relative path. If the script is executed from the scripts/ directory, the path resolution will fail. Consider using Path(__file__).parent.parent / "app" / "assets" / ... for robustness, or document the expected working directory.

🛠️ Use script-relative path resolution
+PROJECT_ROOT = Path(__file__).resolve().parent.parent
+
 def parse_args() -> argparse.Namespace:
     parser = argparse.ArgumentParser(description="Score rendered 1-bit track quality.")
     # ... other args ...
     parser.add_argument(
         "--source-path",
         type=Path,
-        default=Path("app/assets/tracks/albert_park.png"),
+        default=PROJECT_ROOT / "app" / "assets" / "tracks" / "albert_park.png",
         help="Source colorful track image used to build semantic reference",
     )
     return parser.parse_args()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/score_track_render.py` around lines 35 - 40, The default value for
the "--source-path" argument is a relative path that breaks when the script runs
from a different CWD; update the parser.add_argument default for "--source-path"
to construct an absolute script-relative path (using
Path(__file__).resolve().parent or parent.parent) to point to the
app/assets/tracks/albert_park.png file so resolution is robust regardless of
working directory, keeping the argument name ("--source-path") and
parser.add_argument usage intact.

44-56: Consider adding source path validation and error handling.

The main function will fail with a generic PIL error if source_path doesn't exist, and with a URLError if the endpoint is unreachable. Explicit validation would provide clearer error messages for common failure modes.

🛡️ Add validation
 def main() -> int:
     args = parse_args()
+    if not args.source_path.exists():
+        print(f"Error: Source image not found: {args.source_path}")
+        return 1
+
     reference = build_track_semantic_reference(args.source_path)
-    rendered = fetch_calendar_render(args.endpoint_url, args.tz)
+    try:
+        rendered = fetch_calendar_render(args.endpoint_url, args.tz)
+    except Exception as e:
+        print(f"Error: Failed to fetch from endpoint: {e}")
+        return 1
+
     metrics = evaluate_rendered_semantic_quality(rendered, reference)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/score_track_render.py` around lines 44 - 56, The main function lacks
validation and error handling for missing source files and network failures:
before calling build_track_semantic_reference(args.source_path) check that
args.source_path.exists() and is_file(), and raise/print a clear error and
return non-zero if not; wrap calls to build_track_semantic_reference and
fetch_calendar_render in try/except to catch filesystem/IOErrors and network
exceptions (e.g., URLError/requests.exceptions.RequestException) and emit
descriptive messages; also wrap rendered.save(args.save_preview) in a try/except
to handle PIL/IO errors, ensure args.save_preview.parent.mkdir is guarded and
errors are handled, and return a non-zero exit code on failure so main reports
failure instead of crashing.
tests/test_renderer.py (1)

955-974: Tests verify remote-first loading correctly, but consider adding an integration test.

The test properly validates that:

  1. get_asset_client() is called with the expected variant "1bit"
  2. The circuit ID is correctly passed to the remote client
  3. The remote image is returned when available

However, per coding guidelines, renderer changes should include tests verifying exact BMP properties (800x480, mode="1", format="BMP"). While this unit test appropriately isolates _load_track_image, consider adding a complementary integration test that renders a full calendar with mocked remote assets and verifies the final BMP output properties.

As per coding guidelines: "All renderer changes MUST include tests in tests/test_renderer.py that verify exact BMP properties: 800x480, mode='1', format='BMP'."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_renderer.py` around lines 955 - 974, Add an integration test in
tests/test_renderer.py that exercises the Renderer end-to-end (using a mocked
remote asset client via get_asset_client or monkeypatch) to render a full
calendar and assert the produced image has exact BMP properties: size (800,
480), mode "1", and format "BMP"; reuse the existing approach that patches
get_asset_client and call the public rendering entrypoint on Renderer (rather
than only _load_track_image) to generate the final output, then open/inspect the
resulting Image object and assert image.size == (800, 480), image.mode == "1"
and image.format == "BMP".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/services/asset_client.py`:
- Around line 97-106: Convert the blocking HTTP calls to async: change
get_manifest and _fetch_binary_bytes in app/services/asset_client.py to async
functions that use httpx.AsyncClient with await (using async with
httpx.AsyncClient(...)) and response = await client.get(...); update their
return paths to await response.aread()/response.json() as appropriate and
preserve the cache logic; then update all callers — notably get_track_image, any
usages in renderer.render_calendar and spectra6_renderer.render_calendar, and
the async _render_calendar flow — to await the new async methods (propagate
async up the call chain where necessary) so no synchronous httpx.Client calls
remain. Ensure headers method and timeout usage remain compatible and that
exception handling still falls back to reading the cache.

---

Nitpick comments:
In `@app/services/asset_client.py`:
- Around line 78-83: The helper _read_json currently swallows all exceptions
which can hide serious errors; update AssetClient._read_json to catch only
expected errors (json.JSONDecodeError and OSError) and let other exceptions
propagate, and optionally log the failure at debug level with the path and
exception; reference the method name _read_json and the use of
Path.read_text()/json.loads() when making this change.

In `@scripts/convert_track_assets.py`:
- Around line 17-20: DEFAULT_F1_TRACK_URLS contains a hardcoded asset URL with a
version token (v1751632426) that can become stale; update the
DEFAULT_F1_TRACK_URLS entry for "albert_park" to include a comment explaining
that the numeric version is the CDN image version and how to refresh it (e.g.,
fetch the latest asset path from the F1 media source or update the token when
assets are rotated), or move the URL to configurable storage (env/config) used
by scripts/convert_track_assets.py so it can be updated without code changes;
annotate the DEFAULT_F1_TRACK_URLS constant with this guidance and, if
applicable, add a short note in the track conversion workflow/docs about when
and how to update the versioned URL.

In `@scripts/score_track_render.py`:
- Around line 35-40: The default value for the "--source-path" argument is a
relative path that breaks when the script runs from a different CWD; update the
parser.add_argument default for "--source-path" to construct an absolute
script-relative path (using Path(__file__).resolve().parent or parent.parent) to
point to the app/assets/tracks/albert_park.png file so resolution is robust
regardless of working directory, keeping the argument name ("--source-path") and
parser.add_argument usage intact.
- Around line 44-56: The main function lacks validation and error handling for
missing source files and network failures: before calling
build_track_semantic_reference(args.source_path) check that
args.source_path.exists() and is_file(), and raise/print a clear error and
return non-zero if not; wrap calls to build_track_semantic_reference and
fetch_calendar_render in try/except to catch filesystem/IOErrors and network
exceptions (e.g., URLError/requests.exceptions.RequestException) and emit
descriptive messages; also wrap rendered.save(args.save_preview) in a try/except
to handle PIL/IO errors, ensure args.save_preview.parent.mkdir is guarded and
errors are handled, and return a non-zero exit code on failure so main reports
failure instead of crashing.
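The script-relative default suggested for `--source-path` can be sketched as follows, assuming the `scripts/` and `app/assets/tracks/` layout described in this review:

```python
# Resolve the default relative to the script file, not the CWD, so the
# path works no matter where the script is invoked from.
import argparse
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent  # scripts/ -> repo root
DEFAULT_SOURCE = REPO_ROOT / "app" / "assets" / "tracks" / "albert_park.png"

parser = argparse.ArgumentParser()
parser.add_argument("--source-path", type=Path, default=DEFAULT_SOURCE)
args = parser.parse_args([])  # empty argv: exercises the default
```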

In `@scripts/search_track_1bit_layered_parallel.py`:
- Around line 237-261: The function _dilate_mask should not return the original
mask reference when iterations <= 0; instead return a defensive copy to avoid
callers mutating the input. Change the early-return branch in _dilate_mask (the
iterations <= 0 case) to return a shallow copy of each row (e.g., construct a
new list of lists from mask) so the returned value is independent of the input
while preserving the same shape and boolean values.
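The defensive-copy early return can be sketched like this; the dilation body is a plain 4-neighbourhood pass for illustration and may differ from the script's actual implementation:

```python
# Never hand the caller a reference to the input mask: copy on the
# iterations <= 0 path so the returned value is independent.
from typing import List


def dilate_mask(mask: List[List[bool]], iterations: int) -> List[List[bool]]:
    if iterations <= 0:
        # Shallow-copy each row so mutating the result cannot touch the input.
        return [list(row) for row in mask]
    result = [list(row) for row in mask]
    height = len(mask)
    width = len(mask[0]) if mask else 0
    for _ in range(iterations):
        src = [list(row) for row in result]
        for y in range(height):
            for x in range(width):
                if not src[y][x]:
                    # 4-neighbourhood dilation: set pixel if any neighbour is set.
                    if (x > 0 and src[y][x - 1]) or (x + 1 < width and src[y][x + 1]) \
                            or (y > 0 and src[y - 1][x]) or (y + 1 < height and src[y + 1][x]):
                        result[y][x] = True
    return result
```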

In `@scripts/track_conversion_utils.py`:
- Around line 302-315: The function _intersection_over_union currently returns
1.0 when union == 0 (both masks empty); add a concise inline comment or
docstring note in _intersection_over_union explaining that this edge case treats
two empty masks as a perfect match (IoU = 1.0) so future readers understand the
intentional semantic choice.
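A standalone version of the IoU helper with the requested edge-case comment; this is an illustrative reimplementation, not the file's actual code:

```python
# Intersection-over-union for boolean masks of the same shape.
from typing import List


def intersection_over_union(a: List[List[bool]], b: List[List[bool]]) -> float:
    inter = 0
    union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            inter += pa and pb
            union += pa or pb
    # Edge case: two empty masks are treated as a perfect match (IoU = 1.0)
    # by convention -- there are no pixels to disagree about.
    return 1.0 if union == 0 else inter / union
```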
- Around line 395-417: The early return in _dilate_mask returns the input mask
directly when iterations <= 0, risking aliasing; change that to return a
defensive deep copy of the mask (i.e., a new list of row lists) so callers
cannot mutate the original via the returned value. Locate _dilate_mask and
replace the direct return with constructing and returning a new list where each
row is copied (e.g., list(row) or row.copy()) to preserve the original mask.
- Around line 117-124: The download_file function currently lets network errors
propagate; wrap the urllib.request.urlopen call (and response.read()) in a
try/except that catches urllib.error.URLError and urllib.error.HTTPError, then
either raise a clearer custom exception (or re-raise with additional context) so
callers get a descriptive error, and ensure any partially written files are not
left on disk (use a temp file + output_path.write_bytes on success or unlink on
failure); update the function docstring to list the possible exceptions
(URLError/HTTPError) if you choose to document rather than convert them.
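The temp-file download flow described above can be sketched as follows; the function name mirrors the review, and the `.part` suffix and `RuntimeError` wrapper are illustrative choices:

```python
# Download to a temp file and move into place on success, so a failed
# transfer never leaves a partial file at the destination.
import urllib.error
import urllib.request
from pathlib import Path


def download_file(url: str, output_path: Path, timeout: float = 30.0) -> None:
    """Download url to output_path. Raises RuntimeError on network errors."""
    tmp_path = output_path.with_suffix(output_path.suffix + ".part")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            tmp_path.write_bytes(response.read())
        tmp_path.replace(output_path)  # atomic rename on POSIX
    except (urllib.error.HTTPError, urllib.error.URLError) as exc:
        tmp_path.unlink(missing_ok=True)  # drop any partial download
        raise RuntimeError(f"Failed to download {url}: {exc}") from exc
```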

In `@tests/test_renderer.py`:
- Around line 955-974: Add an integration test in tests/test_renderer.py that
exercises the Renderer end-to-end (using a mocked remote asset client via
get_asset_client or monkeypatch) to render a full calendar and assert the
produced image has exact BMP properties: size (800, 480), mode "1", and format
"BMP"; reuse the existing approach that patches get_asset_client and call the
public rendering entrypoint on Renderer (rather than only _load_track_image) to
generate the final output, then open/inspect the resulting Image object and
assert image.size == (800, 480), image.mode == "1" and image.format == "BMP".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3856c24a-c0fb-468e-bdd3-1039e172bde5

📥 Commits

Reviewing files that changed from the base of the PR and between b7608f9 and c785174.

⛔ Files ignored due to path filters (3)
  • app/assets/tracks/albert_park.png is excluded by !**/*.png
  • app/assets/tracks_processed/albert_park.bmp is excluded by !**/*.bmp
  • app/assets/tracks_spectra6/albert_park.bmp is excluded by !**/*.bmp
📒 Files selected for processing (13)
  • .env.example
  • app/config.py
  • app/services/asset_client.py
  • app/services/renderer.py
  • app/services/spectra6_renderer.py
  • docs/track-conversion-log.md
  • scripts/convert_track_assets.py
  • scripts/score_track_render.py
  • scripts/search_track_1bit_layered_parallel.py
  • scripts/search_track_1bit_params.py
  • scripts/track_conversion_utils.py
  • tests/test_config.py
  • tests/test_renderer.py
✅ Files skipped from review due to trivial changes (1)
  • docs/track-conversion-log.md
📜 Review details
🧰 Additional context used
📓 Path-based instructions (4)
app/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

app/**/*.py: Always use translator.get(key, fallback) pattern for translations. Session names use prefix session_ (e.g., session_race, session_qualifying). See translations/en.json for all keys.
All configuration values must come from environment variables via app/config.py. Never hardcode API URLs, display dimensions, or monitoring DSNs - use config.JOLPICA_API_URL, config.DISPLAY_WIDTH/HEIGHT, config.SENTRY_DSN.
Feature flags like UMAMI_ENABLED control optional services - code must handle disabled features gracefully without raising exceptions.

Files:

  • app/services/asset_client.py
  • app/services/spectra6_renderer.py
  • app/services/renderer.py
  • app/config.py
app/services/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

app/services/*.py: Use httpx.AsyncClient for HTTP calls, never requests. Analytics tracking is fire-and-forget: asyncio.create_task(_send_analytics(...))
Session schedule order matters: always sort by datetime with sort(key=lambda x: x["datetime"])

Files:

  • app/services/asset_client.py
  • app/services/spectra6_renderer.py
  • app/services/renderer.py
app/services/*renderer.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

app/services/*renderer.py: All images MUST be 1-bit mode (Image.new("1", (800, 480), 1)) for E-Ink compatibility. Never use "L" or "RGB" modes.
When drawing on 1-bit images, use fill=0 for black and fill=1 for white. Never hardcode coordinates - use pixel-precise layout constants from self.layout dict.
Session types MUST use session_ prefix in translation keys: session_fp1, session_fp2, session_fp3, session_qualifying, session_sprint, session_race.
The _draw_track_map() method creates stylized placeholder graphics using rounded rectangles with checkered accent stripes. All dimensions come from self.layout constants. Handle missing circuit data gracefully via _get_circuit_details() using .get() with defaults.
Font loading must account for fallback to default if DejaVuSans is missing. Test in Docker, not just locally.
BMP format is little-endian, 1-bit depth - do not manually construct headers. Use Pillow's Image.save() with BMP format.

Files:

  • app/services/spectra6_renderer.py
  • app/services/renderer.py
tests/test_renderer.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

tests/test_renderer.py: All renderer changes MUST include tests in tests/test_renderer.py that verify exact BMP properties: 800x480, mode="1", format="BMP". Use mock_race_data fixture pattern.
Test both Czech (cs) and English (en) translations by following the test_render_calendar_czech pattern in test cases.

Files:

  • tests/test_renderer.py
🧠 Learnings (4)
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to app/services/*renderer.py : The `_draw_track_map()` method creates stylized placeholder graphics using rounded rectangles with checkered accent stripes. All dimensions come from `self.layout` constants. Handle missing circuit data gracefully via `_get_circuit_details()` using `.get()` with defaults.

Applied to files:

  • app/services/spectra6_renderer.py
  • app/services/renderer.py
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to tests/test_renderer.py : All renderer changes MUST include tests in `tests/test_renderer.py` that verify exact BMP properties: 800x480, mode="1", format="BMP". Use `mock_race_data` fixture pattern.

Applied to files:

  • tests/test_renderer.py
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to app/services/*renderer.py : Font loading must account for fallback to default if DejaVuSans is missing. Test in Docker, not just locally.

Applied to files:

  • tests/test_renderer.py
📚 Learning: 2026-03-01T20:49:42.387Z
Learnt from: CR
Repo: Rhiz3K/InkyCloud-F1 PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-01T20:49:42.387Z
Learning: Applies to tests/test_renderer.py : Test both Czech (`cs`) and English (`en`) translations by following the `test_render_calendar_czech` pattern in test cases.

Applied to files:

  • tests/test_config.py
🪛 dotenv-linter (4.0.0)
.env.example

[warning] 70-70: [UnorderedKey] The ASSET_API_TOKEN key should go before the ASSET_API_URL key

(UnorderedKey)

🔇 Additional comments (21)
.env.example (1)

65-73: LGTM!

The new asset API configuration block is well-documented with sensible defaults that align with existing patterns in the file (e.g., /app/data/ path prefix). The static analysis hint about alphabetical ordering of ASSET_API_TOKEN vs ASSET_API_URL is a stylistic preference that doesn't affect functionality.

scripts/search_track_1bit_params.py (1)

1-166: LGTM!

The script is well-structured for parameter optimization. Key observations:

  • sample_params() correctly provides all 15 required Track1BitParams fields
  • Exception handling in the fetch loop (lines 124-128) appropriately continues on failures
  • Graceful handling when no successful trials are produced (lines 149-151)
  • The scoring comparison if score > best_score correctly retains higher-scoring candidates per the score_track_metrics function semantics
app/config.py (2)

114-131: LGTM!

The new asset configuration fields follow established patterns:

  • ASSET_API_URL with Optional[str] and graceful fallback to None on invalid URLs
  • ASSET_CACHE_TTL_HOURS with gt=0 constraint ensuring positive values
  • Default paths consistent with existing Docker container paths

270-311: LGTM!

The validators follow the established patterns in this file:

  • validate_asset_api_url mirrors the validate_s3_endpoint approach with graceful degradation
  • normalize_asset_api_token properly handles whitespace and empty strings
  • validate_asset_cache_ttl follows the validate_timeout pattern
tests/test_config.py (1)

17-18: LGTM!

Good test coverage for the new asset configuration:

  • Verifies default values (ASSET_API_URL=None, ASSET_CACHE_TTL_HOURS=24)
  • Tests that invalid values ("not-a-url", "0") fall back to safe defaults

Also applies to: 29-30, 41-42

scripts/convert_track_assets.py (1)

82-125: LGTM!

The main function has clean control flow:

  • Proper URL resolution with fallback to defaults
  • Clear error handling for missing source files
  • Conditional execution based on skip flags
  • Informative output with conversion statistics
app/services/asset_client.py (1)

143-171: LGTM: Robust fallback logic in get_track_image.

The method correctly:

  • Returns None early if the client is disabled
  • Iterates through normalized candidate IDs
  • Validates variants against manifest when available
  • Falls back gracefully on decode failures with appropriate logging
  • Returns None only after exhausting all candidates
app/services/renderer.py (2)

1023-1032: LGTM: Remote asset fetch with local fallback.

The implementation correctly:

  • Builds candidate IDs including mapped variants and lowercase forms
  • Attempts remote fetch first via get_asset_client().get_track_image()
  • Falls back to local BMP/PNG loading if remote returns None
  • Maintains backward compatibility with existing local asset resolution

982-983: Good: Explicit int() casts for paste coordinates.

The explicit casts ensure pixel-precise positioning and prevent potential type errors if the arithmetic produces floats.

app/services/spectra6_renderer.py (2)

327-336: LGTM: Remote asset fetch for Spectra6 variant.

The implementation correctly:

  • Builds candidate IDs with normalized, original, and lowercase variants
  • Requests the "spectra6" variant from the asset client
  • Falls back to local BMP loading when remote returns None

291-295: LGTM: Type-safe paste coordinates.

The explicit int() casts and tuple creation ensure correct types for image.paste(). The .convert("RGB") on line 295 properly handles images that may not already be in RGB mode.

tests/test_renderer.py (2)

21-21: LGTM!

The import alias for spectra6_renderer module enables proper mocking of get_asset_client at the correct namespace where it's used in the production code.


1044-1061: Spectra6 remote asset test follows the correct pattern.

The test appropriately:

  • Uses the module-level monkeypatch target (spectra6_renderer_module.get_asset_client)
  • Verifies the "spectra6" variant is requested
  • Checks circuit ID inclusion in the request

Same consideration applies here regarding integration-level BMP verification for completeness.

scripts/search_track_1bit_layered_parallel.py (4)

45-47: Global mutable state for worker initialization is appropriate.

Using module-level globals with ProcessPoolExecutor's initializer pattern is the standard approach for sharing read-only data across worker processes. The data is initialized once per worker and remains constant during execution.


183-234: Connected components implementation is correct.

The BFS-based 4-neighborhood connected component labeling is implemented correctly. Using queue.pop() (LIFO) instead of queue.popleft() (FIFO) gives DFS traversal order, but for connected component extraction the traversal order doesn't affect correctness—only the final set of connected pixels matters.


627-662: Finalist verification modifies shared output path repeatedly.

Each finalist iteration writes to output_path before fetching the render. This is intentional for the verification workflow (testing each candidate against the live endpoint), but be aware that if the script is interrupted, the output file may contain an intermediate (non-best) candidate.

The final write at line 661 ensures the best result is persisted, which is correct.


767-768: LGTM!

Clean entry point using raise SystemExit(main()) pattern for proper exit code propagation.

scripts/track_conversion_utils.py (4)

1-6: LGTM!

Clear module docstring explaining the script-friendly design with no app imports. This enables standalone usage during asset iteration workflows.


339-392: Proper BFS using deque for connected components.

This implementation correctly uses deque.popleft() for FIFO traversal, providing true BFS behavior. The component statistics (bounding box, area, fill ratio) are accurately computed.
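The BFS labeling described here can be sketched standalone; this is an illustrative reimplementation of the approach, not the file's code, and component statistics are omitted for brevity:

```python
# 4-neighbourhood connected components over a boolean mask using
# deque.popleft() for true FIFO (BFS) traversal.
from collections import deque
from typing import List, Set, Tuple


def connected_components(mask: List[List[bool]]) -> List[Set[Tuple[int, int]]]:
    height = len(mask)
    width = len(mask[0]) if mask else 0
    seen = [[False] * width for _ in range(height)]
    components = []
    for sy in range(height):
        for sx in range(width):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            queue = deque([(sy, sx)])
            seen[sy][sx] = True
            component = set()
            while queue:
                y, x = queue.popleft()  # FIFO: breadth-first order
                component.add((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < height and 0 <= nx < width \
                            and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            components.append(component)
    return components
```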


846-857: Size validation provides clear error messaging.

The explicit check that cropped dimensions match reference.preview_size will catch configuration mismatches early with a descriptive error message.


945-959: URL construction correctly handles existing query parameters.

The function properly parses, filters, and reconstructs the URL with the timezone parameter, avoiding duplicate tz keys in the query string.
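The parse-filter-reconstruct pattern praised here looks roughly like this; `with_tz` is an illustrative standalone helper, not the file's actual function:

```python
# Rebuild a URL with exactly one tz query parameter, dropping any
# existing tz keys before appending the new value.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode


def with_tz(url: str, tz: str) -> str:
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "tz"]
    query.append(("tz", tz))
    return urlunsplit(parts._replace(query=urlencode(query)))
```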

Comment on lines +97 to +106
try:
with httpx.Client(timeout=self.timeout, follow_redirects=True) as client:
response = client.get(url, headers=self._headers())
response.raise_for_status()
cache_path.parent.mkdir(parents=True, exist_ok=True)
cache_path.write_text(response.text)
return response.json()
except Exception as exc:
logger.debug("Failed to fetch track manifest for %s: %s", circuit_id, exc)
return self._read_json(cache_path) if cache_path.exists() else None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if renderers or asset client are called from async contexts
rg -n "async def.*render|await.*render|await.*get_track_image|await.*get_asset_client" app/

Repository: Rhiz3K/InkyCloud-F1

Length of output: 194


🏁 Script executed:

#!/bin/bash
# Search for calls to AssetClient methods from async functions
rg -n "get_track_image|get_asset_client|AssetClient" app/routes/ app/main.py -A 2

Repository: Rhiz3K/InkyCloud-F1

Length of output: 45


🏁 Script executed:

#!/bin/bash
# Look at the _render_calendar function to see if it uses AssetClient
sed -n '238,400p' app/routes/images.py | head -80

Repository: Rhiz3K/InkyCloud-F1

Length of output: 3078


🏁 Script executed:

#!/bin/bash
# Check all httpx.Client (sync) usage in app/services to see if it's consistent
rg -n "httpx\.Client\(" app/services/

Repository: Rhiz3K/InkyCloud-F1

Length of output: 299


🏁 Script executed:

#!/bin/bash
# Search for AssetClient imports and usage across entire codebase
rg -n "AssetClient|from.*asset_client|import.*asset_client" app/ --type py

Repository: Rhiz3K/InkyCloud-F1

Length of output: 418


🏁 Script executed:

#!/bin/bash
# Check if AssetClient class is even instantiated anywhere
rg -n "AssetClient\(" app/ --type py

Repository: Rhiz3K/InkyCloud-F1

Length of output: 120


🏁 Script executed:

#!/bin/bash
# Look for all functions in asset_client.py to understand the full scope
rg -n "^\s*def |^\s*async def " app/services/asset_client.py

Repository: Rhiz3K/InkyCloud-F1

Length of output: 927


🏁 Script executed:

#!/bin/bash
# Check how get_asset_client is used in spectra6_renderer.py and renderer.py
rg -n "get_asset_client\(\)" app/services/spectra6_renderer.py app/services/renderer.py -B 2 -A 3

Repository: Rhiz3K/InkyCloud-F1

Length of output: 911


🏁 Script executed:

#!/bin/bash
# Check if renderer methods are called from async contexts
rg -n "renderer\.render_calendar\(|render_calendar" app/routes/images.py -B 2 -A 2

Repository: Rhiz3K/InkyCloud-F1

Length of output: 531


🏁 Script executed:

#!/bin/bash
# Look for get_track_image calls in renderers
rg -n "get_track_image\(" app/services/ --type py -B 2 -A 2

Repository: Rhiz3K/InkyCloud-F1

Length of output: 1246


🏁 Script executed:

#!/bin/bash
# Check if render_calendar method is async or sync
rg -n "def render_calendar|async def render_calendar" app/services/renderer.py app/services/spectra6_renderer.py -B 2 -A 5

Repository: Rhiz3K/InkyCloud-F1

Length of output: 1023


🏁 Script executed:

#!/bin/bash
# Check the full context of where get_track_image is called in renderer methods
sed -n '1020,1035p' app/services/renderer.py

Repository: Rhiz3K/InkyCloud-F1

Length of output: 613


Use httpx.AsyncClient instead of httpx.Client.

The coding guidelines explicitly require: "Use httpx.AsyncClient for HTTP calls, never requests" for app/services/*.py. This module uses synchronous httpx.Client() at lines 98 and 118, which violates the guideline.

Additionally, get_track_image() is called from within renderer.render_calendar() (synchronous methods in app/services/renderer.py and app/services/spectra6_renderer.py), which are invoked from the async _render_calendar() function. Using blocking HTTP calls in this chain can block the event loop.

Convert get_manifest() and _fetch_binary_bytes() to async methods using httpx.AsyncClient, and update the callers accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/services/asset_client.py` around lines 97 - 106, Convert the blocking
HTTP calls to async: change get_manifest and _fetch_binary_bytes in
app/services/asset_client.py to async functions that use httpx.AsyncClient with
await (using async with httpx.AsyncClient(...)) and response = await
client.get(...); update their return paths to await
response.aread()/response.json() as appropriate and preserve the cache logic;
then update all callers — notably get_track_image, any usages in
renderer.render_calendar and spectra6_renderer.render_calendar, and the async
_render_calendar flow — to await the new async methods (propagate async up the
call chain where necessary) so no synchronous httpx.Client calls remain. Ensure
headers method and timeout usage remain compatible and that exception handling
still falls back to reading the cache.

