Walkthrough

Introduces CI workflows for asset generation/validation and perf testing; adds backend sync endpoints with security, metrics, migrations, and startup behaviors; implements Hygraph service stubs; adds multiple asset-processing scripts; publishes a models manifest and frontend test tooling; and expands backend/frontend tests and OpenAPI spec generation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant HG as Hygraph
    participant Client as Hygraph Webhook
    participant API as FastAPI /api/sync/hygraph
    participant Sec as Security Validator
    participant DB as DB (SyncEvent)
    participant Svc as HygraphService
    participant M as Prometheus Metrics
    Client->>API: POST /api/sync/hygraph (signed)
    API->>Sec: validate_hygraph_request(body, signature)
    Sec-->>API: ok / error
    API->>DB: upsert SyncEvent (dedupe by event_id/body_sha256)
    alt First delivery
        API-->>Client: 202 Accepted { ok: true }
        API--)Svc: Background pull_all()
        Svc->>M: inc sync_success/records
    else Duplicate
        API-->>Client: 200 OK { ok: true, dedup: true }
    end
```
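The dedupe branch in the diagram above can be sketched independently of the database: hash the raw body, remember `(event_id, body_sha256)` pairs, and report whether a delivery is new. A minimal stdlib sketch (the class and names are illustrative stand-ins for the `SyncEvent` upsert, not the PR's actual code):

```python
import hashlib


class WebhookDeduper:
    """In-memory stand-in for SyncEvent: tracks deliveries by event_id and body hash."""

    def __init__(self):
        self._seen = set()

    def register(self, event_id: str, body: bytes) -> dict:
        body_sha256 = hashlib.sha256(body).hexdigest()
        key = (event_id, body_sha256)
        if key in self._seen:
            return {"ok": True, "dedup": True}   # duplicate -> 200 OK
        self._seen.add(key)
        return {"ok": True, "dedup": False}      # first delivery -> 202 Accepted


deduper = WebhookDeduper()
first = deduper.register("evt-1", b'{"model": "Module"}')
again = deduper.register("evt-1", b'{"model": "Module"}')
```

The real endpoint persists the pair in the `sync_events` table so deduplication survives restarts; the control flow is the same.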
```mermaid
sequenceDiagram
    autonumber
    participant Admin as Admin Client
    participant API as FastAPI /api/sync/hygraph/pull
    participant Sec as require_write_token
    participant Svc as HygraphService
    participant M as Prometheus Metrics
    Admin->>API: POST /api/sync/hygraph/pull { type, page_size } + Authorization
    API->>Sec: validate token
    Sec-->>API: ok / 401/403
    API->>Svc: pull_{type} or pull_all(page_size)
    Svc-->>API: { counts }
    API->>M: inc success/failure, upserted
    API-->>Admin: 200 { ok: true, data }
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings, 1 inconclusive)
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting
```python
async def _process(event_id_local: Optional[str], body_sha_local: str) -> None:
    t0 = time.perf_counter()
    try:
        counts = await HygraphService.pull_all(db)
        for t, c in counts.items():
            sync_records_upserted_total.labels(t).inc(int(c or 0))
```
Avoid using request-scoped session inside background task
The webhook handler schedules _process to run after the response is returned, but _process reuses the db session obtained from get_db. FastAPI closes that session in the dependency cleanup once the handler exits, so when HygraphService.pull_all starts executing it will operate on a closed Session and raise InvalidRequestError as soon as it touches the database. Create a new session inside the background task (or delay closing until the task completes) instead of reusing the request-scoped one.
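In a real app the fix is to open a fresh SQLAlchemy `Session` from the app's sessionmaker inside the task. The lifetime problem itself can be demonstrated with nothing but `sqlite3` as a stdlib stand-in for the session (all names below are illustrative):

```python
import sqlite3


def handler_with_shared_conn() -> str:
    """Anti-pattern: the 'request' closes its connection before the background task runs."""
    conn = sqlite3.connect(":memory:")
    task = lambda: conn.execute("SELECT 1")  # scheduled work captures the request-scoped conn
    conn.close()  # dependency cleanup fires when the handler returns
    try:
        task()  # runs after cleanup, against a closed connection
        return "ok"
    except sqlite3.ProgrammingError:
        return "closed"


def handler_with_task_scoped_conn() -> str:
    """Fix: the background task opens its own connection when it actually runs."""
    def task() -> int:
        task_conn = sqlite3.connect(":memory:")
        try:
            return task_conn.execute("SELECT 1").fetchone()[0]
        finally:
            task_conn.close()
    return "ok" if task() == 1 else "failed"
```

The same pattern applies with SQLAlchemy: call the sessionmaker inside `_process` rather than capturing the `get_db` session in the closure.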
```python
from api.routes_quote import router as quote_router
from api.routes_cnc import router as cnc_router
from api.routes_sync import router as sync_router
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
```
Declare prometheus_client dependency for new /metrics endpoint
The API now imports CONTENT_TYPE_LATEST and generate_latest from prometheus_client, but the backend’s pyproject.toml does not list prometheus_client as a requirement. Deployments without the package installed will crash at import time with ModuleNotFoundError. Add the dependency to the backend project configuration.
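One way to declare it (the section layout and version floor below are assumptions; adjust to however the backend's `pyproject.toml` is actually structured):

```toml
# backend/pyproject.toml -- illustrative placement
[project]
dependencies = [
    # ...existing dependencies...
    "prometheus-client>=0.19",
]
```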
Actionable comments posted: 13
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)

- `backend/paform.db` is excluded by `!**/*.db`
- `frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (28)

- `.github/workflows/assets.yml` (1 hunks)
- `.github/workflows/perf-light.yml` (1 hunks)
- `backend/alembic/versions/cfe1d8e4e001_add_sync_events.py` (1 hunks)
- `backend/api/db.py` (1 hunks)
- `backend/api/main.py` (6 hunks)
- `backend/api/metrics.py` (1 hunks)
- `backend/api/models.py` (2 hunks)
- `backend/api/routes_sync.py` (1 hunks)
- `backend/api/security.py` (1 hunks)
- `backend/scripts/generate_openapi.py` (1 hunks)
- `backend/services/hygraph_service.py` (1 hunks)
- `backend/tests/conftest.py` (1 hunks)
- `backend/tests/test_error_envelopes.py` (1 hunks)
- `backend/tests/test_manifest_coverage_backend.py` (1 hunks)
- `backend/tests/test_sync_routes_metrics.py` (1 hunks)
- `docs/API_SPEC.md` (1 hunks)
- `frontend/package.json` (2 hunks)
- `frontend/playwright.config.ts` (1 hunks)
- `frontend/public/models/manifest.json` (1 hunks)
- `frontend/tests/e2e/cache.spec.ts` (1 hunks)
- `frontend/tests/manifest.spec.ts` (1 hunks)
- `frontend/vitest.config.ts` (1 hunks)
- `scripts/gen_glb_manifest.py` (1 hunks)
- `scripts/generate_reference_glbs.py` (1 hunks)
- `scripts/glb_validate.py` (1 hunks)
- `scripts/pack_models.sh` (1 hunks)
- `scripts/update_glb_metadata.py` (1 hunks)
- `tests/perf/k6-quote-cnc.js` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (13)

- `backend/tests/test_error_envelopes.py` (2)
  - `backend/tests/test_sync_routes_metrics.py` (1): `client` (19-21)
  - `backend/tests/test_routes_quote_cnc.py` (1): `test_quote_and_cnc_endpoints` (38-63)
- `scripts/gen_glb_manifest.py` (2)
  - `scripts/generate_reference_glbs.py` (1): `main` (253-268)
  - `tests/perf/k6-quote-cnc.js` (1): `manifest` (106-106)
- `tests/perf/k6-quote-cnc.js` (1)
  - `backend/tests/test_routes_quote_cnc.py` (1): `test_quote_and_cnc_endpoints` (38-63)
- `backend/tests/test_manifest_coverage_backend.py` (2)
  - `backend/api/db.py` (2): `Base` (13-14), `get_db` (53-59)
  - `backend/api/models.py` (1): `Module` (65-110)
- `backend/api/routes_sync.py` (5)
  - `backend/api/db.py` (1): `get_db` (53-59)
  - `backend/api/models.py` (1): `SyncEvent` (118-133)
  - `backend/api/security.py` (2): `require_write_token` (24-36), `validate_hygraph_request` (91-108)
  - `backend/api/main.py` (1): `metrics` (105-107)
  - `backend/services/hygraph_service.py` (5): `HygraphService` (11-80), `pull_all` (62-64), `pull_materials` (47-49), `pull_modules` (52-54), `pull_systems` (57-59)
- `frontend/tests/e2e/cache.spec.ts` (1)
  - `tests/perf/k6-quote-cnc.js` (3): `manifestResponse` (103-103), `etag` (116-116), `cached` (118-118)
- `backend/tests/conftest.py` (1)
  - `backend/api/db.py` (1): `Base` (13-14)
- `backend/api/security.py` (1)
  - `backend/api/config.py` (1): `Settings` (9-45)
- `scripts/glb_validate.py` (2)
  - `scripts/update_glb_metadata.py` (1): `main` (101-109)
  - `scripts/gen_glb_manifest.py` (1): `main` (24-29)
- `backend/api/main.py` (2)
  - `backend/api/db.py` (1): `Base` (13-14)
  - `backend/api/config.py` (1): `Settings` (9-45)
- `scripts/update_glb_metadata.py` (1)
  - `scripts/glb_validate.py` (1): `triangle_count` (83-97)
- `backend/tests/test_sync_routes_metrics.py` (1)
  - `backend/services/hygraph_service.py` (1): `HygraphService` (11-80)
- `backend/api/models.py` (1)
  - `backend/api/db.py` (1): `Base` (13-14)
🪛 actionlint (1.7.8)
.github/workflows/perf-light.yml
41-41: could not parse as YAML: could not find expected ':'
(syntax-check)
🪛 Ruff (0.14.0)
scripts/gen_glb_manifest.py
1-1: Shebang is present but file is not executable
(EXE001)
backend/api/routes_sync.py
86-86: Local variable payload is assigned to but never used
Remove assignment to unused variable payload
(F841)
scripts/glb_validate.py
1-1: Shebang is present but file is not executable
(EXE001)
12-12: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
12-12: typing.Dict is deprecated, use dict instead
(UP035)
12-12: typing.List is deprecated, use list instead
(UP035)
12-12: typing.Tuple is deprecated, use tuple instead
(UP035)
36-36: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Avoid specifying long messages outside the exception class
(TRY003)
100-100: Boolean-typed positional argument in function definition
(FBT001)
backend/api/main.py
36-36: Module level import not at top of file
(E402)
37-37: Module level import not at top of file
(E402)
scripts/update_glb_metadata.py
1-1: Shebang is present but file is not executable
(EXE001)
9-9: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
9-9: typing.Dict is deprecated, use dict instead
(UP035)
9-9: typing.List is deprecated, use list instead
(UP035)
9-9: typing.Tuple is deprecated, use tuple instead
(UP035)
17-17: Avoid specifying long messages outside the exception class
(TRY003)
66-66: Avoid specifying long messages outside the exception class
(TRY003)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
75-75: Avoid specifying long messages outside the exception class
(TRY003)
scripts/generate_reference_glbs.py
1-1: Shebang is present but file is not executable
(EXE001)
21-21: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
21-21: typing.Dict is deprecated, use dict instead
(UP035)
21-21: typing.List is deprecated, use list instead
(UP035)
21-21: typing.Tuple is deprecated, use tuple instead
(UP035)
🪛 YAMLlint (1.37.1)
.github/workflows/perf-light.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 42-42: syntax error: could not find expected ':'
(syntax)
🔇 Additional comments (12)
backend/tests/test_error_envelopes.py (3)
11-17: LGTM! The error shape validation correctly checks for:

- `ok` field set to `False`
- `error` object with `code` and `message` fields

This aligns with the unified error envelope pattern mentioned in the PR context.
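That envelope contract can be stated as a tiny predicate (a sketch of the shape described here, not the test suite's actual helper):

```python
def is_error_envelope(payload: dict) -> bool:
    """True if payload matches the unified error envelope: ok=False plus error.code/error.message."""
    error = payload.get("error")
    return (
        payload.get("ok") is False
        and isinstance(error, dict)
        and "code" in error
        and "message" in error
    )


good = {"ok": False, "error": {"code": "validation_error", "message": "Invalid JSON body."}}
bad = {"ok": True, "data": {}}
```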
20-27: LGTM! Test correctly validates error envelope for invalid JSON.

The test appropriately:

- Sends malformed JSON (`'not-json'`) with JSON content-type header
- Accepts both 400 and 422 status codes (common for JSON parsing/validation errors)
- Verifies the error response conforms to the expected envelope structure
30-37: LGTM! Test correctly validates error envelope for CNC export.

The test structure mirrors the quote test and correctly validates the error envelope for the `/api/cnc/export` endpoint.

backend/scripts/generate_openapi.py (1)

6-9: LGTM! Script correctly generates and outputs OpenAPI spec.

The implementation is straightforward and appropriate for a utility script:

- Uses FastAPI's built-in `openapi()` method to generate the spec
- Pretty-prints JSON with indent for readability
- Suitable for CI/CD pipelines or manual inspection
backend/tests/conftest.py (1)
11-15: LGTM! Fixture ensures DB schema is ready for tests.

The autouse fixture correctly:
- Registers SQLAlchemy models via import
- Creates all tables before each test
- Uses function scope to handle cases where tests might drop/remove tables
This supports the new database-backed models and migrations introduced in the PR.
backend/alembic/versions/cfe1d8e4e001_add_sync_events.py (2)
24-26: Verify the intended uniqueness constraints for body_sha256.

Three unique constraints are defined:

- `(source, event_id)` - ensures unique events per source
- `(source, body_sha256)` - ensures unique body hashes per source
- `(body_sha256)` - ensures global uniqueness of body hashes

The standalone `body_sha256` constraint (line 26) makes constraint #2 redundant, since a globally unique body_sha256 is already more restrictive than uniqueness within a source. Consider whether global body_sha256 uniqueness is the intended behavior:

- If events from different sources can have identical content (unlikely but possible), remove line 26
- If duplicate content should never exist regardless of source, remove line 25

Review the intended deduplication semantics with the team to clarify whether:

- Duplicate event content should be prevented globally (keep line 26 only)
- Duplicate event content should be allowed across different sources (keep lines 24-25 only)
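The redundancy is easy to see with an in-memory table (SQLite here purely for illustration; the real migration targets the app's database): with the global `UNIQUE (body_sha256)` in place, the same hash is rejected even for a different source, so the per-source pair constraint never comes into play.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE sync_events (
        source TEXT NOT NULL,
        event_id TEXT NOT NULL,
        body_sha256 TEXT NOT NULL,
        UNIQUE (source, event_id),
        UNIQUE (source, body_sha256),  -- redundant under the global constraint below
        UNIQUE (body_sha256)
    )
    """
)
conn.execute("INSERT INTO sync_events VALUES ('hygraph', 'e1', 'abc123')")
try:
    # Different source, same body hash: blocked by the global UNIQUE (body_sha256)
    conn.execute("INSERT INTO sync_events VALUES ('other', 'e2', 'abc123')")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
```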
30-31: LGTM! Downgrade path correctly drops the table.

The downgrade function properly reverses the migration by dropping the `sync_events` table.

backend/api/metrics.py (1)
5-21: LGTM! Metrics follow Prometheus best practices.

The counter definitions correctly:

- Use descriptive names with `_total` suffix for counters
- Include clear documentation strings
- Use `labelnames` for the "type" dimension to track different sync operations
- Follow standard Prometheus naming conventions
scripts/gen_glb_manifest.py (2)
15-21: LGTM! Manifest entry generation is correct.

The function correctly:
- Reads file content as bytes
- Computes SHA-256 hash
- Returns a well-structured manifest entry with file path, hash, and size
24-29: LGTM! Main function generates deterministic manifest.

The implementation:

- Uses `sorted()` to ensure deterministic output (important for version control)
- Aggregates all GLB files matching the pattern
- Outputs pretty-printed JSON for readability
- Returns proper exit code
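An entry of the shape described above can be produced with nothing but the standard library (field names follow the review's description of file path, hash, and size; the script's exact keys may differ):

```python
import hashlib
import tempfile
from pathlib import Path


def manifest_entry(path: Path, web_root: Path) -> dict:
    """Build one deterministic manifest record: web path, content hash, and size in bytes."""
    data = path.read_bytes()
    return {
        "file": "/" + path.relative_to(web_root).as_posix(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }


# Tiny demo against a throwaway file
root = Path(tempfile.mkdtemp())
glb = root / "models" / "base_600.glb"
glb.parent.mkdir(parents=True)
glb.write_bytes(b"glTF-demo")
entry = manifest_entry(glb, root)
```

Sorting entries by `file` before dumping, as the review notes, keeps the JSON stable across runs.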
.github/workflows/assets.yml (2)
34-37: LGTM! GLB validation step correctly fails on warnings.

The `--fail-on-warning` flag ensures strict validation, which is appropriate for asset quality control in CI.
44-49: LGTM! Manifest upload configuration is appropriate.

The artifact upload:

- Uses `if: always()` to capture the manifest even if validation fails (useful for debugging)
- Uploads to a clearly named artifact for easy retrieval
```yaml
- name: Generate reference GLBs
  run: |
    cd paform
    python3 scripts/generate_reference_glbs.py
```
🧹 Nitpick | 🔵 Trivial
Consider pinning the Python version.
The workflow uses python3 without specifying a version, which relies on the Ubuntu runner's default Python installation (currently 3.12). For reproducibility and to prevent unexpected breakage, consider using actions/setup-python to pin a specific Python version.
Add a Python setup step before line 24:

```diff
+ - name: Setup Python
+   uses: actions/setup-python@v5
+   with:
+     python-version: '3.12'
+
  - name: Generate reference GLBs
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: Setup Python
  uses: actions/setup-python@v5
  with:
    python-version: '3.12'

- name: Generate reference GLBs
  run: |
    cd paform
    python3 scripts/generate_reference_glbs.py
```
🤖 Prompt for AI Agents
In .github/workflows/assets.yml around lines 24 to 27, the job runs python3
without pinning a version; add an actions/setup-python step immediately before
the "Generate reference GLBs" step to install and pin a specific Python version
(e.g., 3.11) using actions/setup-python@v4 and set python-version to the chosen
version so the script runs on a stable, reproducible interpreter; keep the
existing run step but ensure the runner uses the pinned Python (you can keep
invoking python3 or python after setup).
```sh
cd paform
docker compose --env-file .env.development -f docker-compose.dev.yml up -d --build
```
Remove the failing cd paform hops
After checkout the runner is already inside the repository root (~/work/paform/paform). There is no nested paform/ directory, so cd paform exits with cd: paform: No such file or directory, and because steps run with set -e, the workflow aborts before Docker Compose even runs. Drop the cd paform here (and in the later steps that repeat it) or point to the actual path that contains docker-compose.dev.yml.
🤖 Prompt for AI Agents
In .github/workflows/perf-light.yml around lines 23-24, the step runs `cd
paform` but the runner is already at the repository root
(`~/work/paform/paform`) so the extra `cd paform` fails; remove the `cd paform`
command (and any other duplicate `cd paform` occurrences in subsequent steps) or
update those steps to use the correct path that actually contains
docker-compose.dev.yml so the docker compose command runs from the existing
working directory.
```python
def get_settings() -> Settings:
    # Instantiate settings outside of Pydantic request validation to avoid DI issues.
    return Settings()
```
🧹 Nitpick | 🔵 Trivial
Cache Settings() construction
BaseSettings hits environment parsing on every instantiation. Wrapping the provider in @lru_cache keeps this dependency lightweight under load.
```diff
+from functools import lru_cache
+
+
+@lru_cache
 def get_settings() -> Settings:
-    # Instantiate settings outside of Pydantic request validation to avoid DI issues.
-    return Settings()
+    return Settings()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/api/security.py around lines 19 to 21, the Settings() factory is
constructed on every call which triggers BaseSettings environment parsing; wrap
the provider with functools.lru_cache to cache a single Settings instance under
load. Add the import "from functools import lru_cache" and decorate the
get_settings function with @lru_cache() so it returns a cached Settings object
while keeping the signature get_settings() -> Settings unchanged.
```python
expected = getattr(settings, "api_write_token", None) or os.getenv("API_WRITE_TOKEN", "")
if not expected or not secrets.compare_digest(token, expected):
    raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="Invalid API write token.")
```
Add typed settings fields so tokens/secrets actually load
Settings (see api/config.py lines 8-44) doesn’t define api_write_token, hygraph_webhook_secret, or max_webhook_body_bytes, so getattr(settings, …) always falls back. Because Pydantic doesn’t export .env values into os.environ, the fallback os.getenv(...) stays empty when you rely on the .env file, which means every write-token or Hygraph webhook check fails with 403/401 even when the secrets are configured. Please add these fields to Settings (or otherwise preload the environment) and reference them directly here instead of the os.getenv fallback to keep local/dev parity.
Suggested changes:

```diff
-    expected = getattr(settings, "api_write_token", None) or os.getenv("API_WRITE_TOKEN", "")
+    expected = settings.api_write_token
```

```diff
-    secret = getattr(settings, "hygraph_webhook_secret", None) or os.getenv("HYGRAPH_WEBHOOK_SECRET", "")
-    if not verify_hygraph_signature(body, signature or "", secret):
+    secret = settings.hygraph_webhook_secret
+    if not secret or not verify_hygraph_signature(body, signature or "", secret):
```

And in api/config.py:

```diff
     hygraph_token: str = Field(
         default="", description="Hygraph access token"
     )
+    api_write_token: str = Field(
+        default="", description="Bearer token required for admin write operations"
+    )
+    hygraph_webhook_secret: str = Field(
+        default="", description="Shared secret used to validate Hygraph webhook signatures"
+    )
+    max_webhook_body_bytes: int = Field(
+        default=2 * 1024 * 1024,
+        description="Maximum accepted webhook payload size in bytes",
+    )
```

Also applies to: 103-105
🤖 Prompt for AI Agents
In backend/api/security.py around lines 33-35 (and likewise around 103-105), the
code falls back to os.getenv because Settings does not declare api_write_token,
hygraph_webhook_secret, or max_webhook_body_bytes, causing tokens from the .env
to never be loaded; update api/config.py to add typed Pydantic fields for
api_write_token: Optional[str] = None, hygraph_webhook_secret: Optional[str] =
None, and max_webhook_body_bytes: int = <sensible_default>, then update
security.py to read these settings directly (e.g., token_expected =
settings.api_write_token) and remove the os.getenv fallback and ensure
secrets.compare_digest is only called when a non-empty expected value exists;
apply the same pattern for hygraph_webhook_secret and max_webhook_body_bytes at
the other referenced lines.
```diff
@@ -0,0 +1,9 @@
+#!/usr/bin/env python3
```
🧹 Nitpick | 🔵 Trivial
Make the script executable.
The shebang is present but the file is not marked as executable.
Run the following command to make the script executable:
```sh
chmod +x backend/scripts/generate_openapi.py
```

Based on static analysis hints.
🤖 Prompt for AI Agents
In backend/scripts/generate_openapi.py at line 1, the script has a shebang but
the file is not executable; mark the file as executable by running chmod +x
backend/scripts/generate_openapi.py (or set executable bit in your VCS/CI) so
the script can be invoked directly from the shell.
```python
with TestingSession() as s:
    m = Module(
        name="Base 600",
        width=600.0,
        height=720.0,
        depth=580.0,
        base_price=100.0,
        material_id="",  # not used here
        external_ids={"hygraph_primary": "H123", "glb_code": "base_600"},
    )
    s.add(m)
    s.commit()
```
🧩 Analysis chain
Verify empty material_id doesn't violate foreign key constraints.
Line 55 sets material_id="" (empty string), which may violate the foreign key constraint to the materials table if enforced. Since this is an in-memory test DB, the constraint might not be enforced, but it could fail in production scenarios.
Run the following script to check if the Module model has a foreign key constraint on material_id:
🏁 Script executed:
```sh
#!/bin/bash
# Check Module model definition for material_id foreign key
rg -nP -A5 'material_id.*=.*mapped_column' backend/api/models.py
```

Length of output: 267
Use a valid material_id instead of an empty string
material_id is a non-nullable FK to materials.id. Setting it to "" will break referential integrity outside SQLite tests. In this test, create and commit a Material record and use its id here.
backend/tests/test_manifest_coverage_backend.py:55
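SQLite only enforces foreign keys once they are switched on, which is why the in-memory test passes today; with enforcement enabled, the empty `material_id` fails exactly as predicted. The schema below is a simplified stand-in for the real models, used only to demonstrate the failure mode:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE materials (id TEXT PRIMARY KEY)")
conn.execute(
    "CREATE TABLE modules (id INTEGER PRIMARY KEY, name TEXT, "
    "material_id TEXT NOT NULL REFERENCES materials(id))"
)

try:
    # Mirrors the test: '' satisfies NOT NULL but references no existing material
    conn.execute("INSERT INTO modules (name, material_id) VALUES ('Base 600', '')")
    empty_fk_ok = True
except sqlite3.IntegrityError:
    empty_fk_ok = False

# The suggested fix: create the parent row first, then reference its id
conn.execute("INSERT INTO materials (id) VALUES ('mat-1')")
conn.execute("INSERT INTO modules (name, material_id) VALUES ('Base 600', 'mat-1')")
```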
🤖 Prompt for AI Agents
In backend/tests/test_manifest_coverage_backend.py around lines 48 to 59, the
test creates a Module with material_id="" which violates the non-nullable FK to
materials.id; fix by creating and committing a Material record first (using the
same TestingSession), retrieve its id, and set material_id to that id when
constructing the Module so the FK references a real material and maintains
referential integrity outside SQLite.
```python
manifest = json.load(open(MANIFEST, "r"))
files = set(e["file"] for e in manifest.get("models", []))
assert "/models/base_600.glb" in files
assert "/models/base_600@lod1.glb" in files
```
🛠️ Refactor suggestion | 🟠 Major
Use context manager for file operations.
The file is opened without a context manager, which could lead to resource leaks if an exception occurs.
Apply this diff:

```diff
-    manifest = json.load(open(MANIFEST, "r"))
+    with open(MANIFEST, "r") as f:
+        manifest = json.load(f)
     files = set(e["file"] for e in manifest.get("models", []))
```

📝 Committable suggestion
```python
with open(MANIFEST, "r") as f:
    manifest = json.load(f)
files = set(e["file"] for e in manifest.get("models", []))
assert "/models/base_600.glb" in files
assert "/models/base_600@lod1.glb" in files
```
🤖 Prompt for AI Agents
In backend/tests/test_manifest_coverage_backend.py around lines 61 to 64, the
manifest file is opened without a context manager; replace the direct open(...)
call with a with open(MANIFEST, "r") as f: block and pass f into json.load so
the file is automatically closed (e.g., with open(MANIFEST, "r") as f: manifest
= json.load(f)); keep the rest of the logic unchanged.
```diff
@@ -0,0 +1,33 @@
+#!/usr/bin/env python3
```
🧹 Nitpick | 🔵 Trivial
Make the script executable.
The shebang is present but the file is not marked as executable.
Run the following command to make the script executable:
```sh
chmod +x scripts/gen_glb_manifest.py
```

Based on static analysis hints.
🧰 Tools
🪛 Ruff (0.14.0)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
scripts/gen_glb_manifest.py lines 1-1: the file has a shebang but is not
executable; mark the script executable by running chmod +x
scripts/gen_glb_manifest.py (then commit the permission change, e.g., git add
scripts/gen_glb_manifest.py && git commit -m "Make gen_glb_manifest.py
executable").
```python
for v_idx in range(subdivisions):
    for u_idx in range(subdivisions):
        top_left = start_idx + v_idx * stride + u_idx
        top_right = top_left + 1
        bottom_left = top_left + stride
        bottom_right = bottom_left + 1
        indices.extend([top_left, bottom_left, bottom_right])
        indices.extend([top_left, bottom_right, top_right])
```
Fix inside-out winding for generated quads.
The current index order (top_left, bottom_left, bottom_right) flips every triangle inside-out. Engines derive normals from this winding (and you’re not exporting explicit normals), so the boxes render black or get back-face culled because the normals point inward. Reorder the triangles so the cross product matches u_vec × v_vec and keeps outward normals.
```diff
-            indices.extend([top_left, bottom_left, bottom_right])
-            indices.extend([top_left, bottom_right, top_right])
+            indices.extend([top_left, top_right, bottom_left])
+            indices.extend([top_right, bottom_right, bottom_left])
```

📝 Committable suggestion
```python
for v_idx in range(subdivisions):
    for u_idx in range(subdivisions):
        top_left = start_idx + v_idx * stride + u_idx
        top_right = top_left + 1
        bottom_left = top_left + stride
        bottom_right = bottom_left + 1
        indices.extend([top_left, top_right, bottom_left])
        indices.extend([top_right, bottom_right, bottom_left])
```
🤖 Prompt for AI Agents
In scripts/generate_reference_glbs.py around lines 86–93 the triangle index
order produces inside-out winding; change the triangle vertex order so the cross
product follows u_vec × v_vec to keep normals outward. Replace the two extends
so the first triangle is (top_left, bottom_right, bottom_left) and the second is
(top_left, top_right, bottom_right) (i.e., swap bottom_left/bottom_right in the
first triangle and swap top_right/bottom_right positions in the second) to fix
the winding.
```python
def triangle_count(doc: Dict) -> int:
    total = 0
    accessors = doc.get("accessors") or []
    for mesh in doc.get("meshes") or []:
        for prim in mesh.get("primitives", []):
            idx = prim.get("indices")
            if idx is not None:
                accessor = accessors[idx]
                total += accessor.get("count", 0) // 3
            else:
                pos_idx = prim.get("attributes", {}).get("POSITION")
                if pos_idx is None:
                    continue
                accessor = accessors[pos_idx]
                total += accessor.get("count", 0) // 3
    return total
```
🧹 Nitpick | 🔵 Trivial
Deduplicate triangle_count logic
This function is copied verbatim from scripts/glb_validate.py. Please move it to a shared module (e.g., scripts/glb_utils.py) or import it directly to avoid future drift between validator and packer code paths.
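Wherever the shared helper ends up, its behavior on a minimal glTF-style document looks like this (the `doc` dict below is hand-built for illustration; a real glTF JSON carries many more fields):

```python
def triangle_count(doc: dict) -> int:
    """Sum triangles across primitives: indexed count // 3, else POSITION count // 3."""
    total = 0
    accessors = doc.get("accessors") or []
    for mesh in doc.get("meshes") or []:
        for prim in mesh.get("primitives", []):
            idx = prim.get("indices")
            if idx is not None:
                total += accessors[idx].get("count", 0) // 3
            else:
                pos_idx = prim.get("attributes", {}).get("POSITION")
                if pos_idx is None:
                    continue
                total += accessors[pos_idx].get("count", 0) // 3
    return total


doc = {
    "accessors": [{"count": 36}, {"count": 24}],
    "meshes": [
        {"primitives": [{"indices": 0}]},                   # indexed: 36 // 3 = 12 triangles
        {"primitives": [{"attributes": {"POSITION": 1}}]},  # non-indexed: 24 // 3 = 8 triangles
    ],
}
```

Keeping one copy in a shared module means this behavior only has to be tested once.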
🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py around lines 43 to 58, the triangle_count
function is duplicated from scripts/glb_validate.py; extract it into a shared
module (e.g., scripts/glb_utils.py) that exports triangle_count, then replace
the local definition with an import from that module. Specifically, create
scripts/glb_utils.py containing the triangle_count implementation, update
scripts/update_glb_metadata.py to from scripts.glb_utils import triangle_count
and remove the duplicated function, and also update scripts/glb_validate.py to
import triangle_count from the new shared module instead of defining it locally
so both modules share the same implementation.
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @shayancoin. * #7 (comment)

The following files were modified:

- `backend/alembic/versions/cfe1d8e4e001_add_sync_events.py`
- `backend/api/db.py`
- `backend/api/main.py`
- `backend/api/routes_sync.py`
- `backend/api/security.py`
- `backend/services/hygraph_service.py`
- `backend/tests/conftest.py`
- `backend/tests/test_error_envelopes.py`
- `backend/tests/test_manifest_coverage_backend.py`
- `backend/tests/test_sync_routes_metrics.py`
- `scripts/gen_glb_manifest.py`
- `scripts/generate_reference_glbs.py`
- `scripts/glb_validate.py`
- `scripts/update_glb_metadata.py`
- `tests/perf/k6-quote-cnc.js`

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
PR Type
[Feature | Fix | Documentation | Other() ]
Short Description
...
Tests Added
...
Summary by CodeRabbit
New Features
Documentation
Tests
Chores