Conversation
Walkthrough
Adds a perf-focused GitHub Actions workflow, a k6 perf test, backend error-envelope tests, OpenAPI spec docs, frontend E2E setup and a cache test, a model asset manifest with supporting scripts for generating, packing, validating, and updating GLB assets, plus new frontend package scripts and dev dependencies.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Dev as PR Event
participant GH as GitHub Actions
participant DC as Docker Compose Stack
participant BE as Backend
participant FE as Frontend
participant K6 as k6 Runner
participant store as Artifact Store
Dev->>GH: PR triggers perf-light workflow
GH->>GH: Checkout repo
GH->>GH: Install k6
GH->>DC: docker-compose up -d (dev)
GH->>BE: Wait for /health (retry)
GH->>FE: Wait for /models/manifest.json (retry)
GH->>BE: Seed via inline Python (POST /materials, /modules)
GH->>K6: Run tests/perf/k6-quote-cnc.js
K6->>BE: Create material/module, quote, cnc export
K6->>FE: (optional) Fetch manifest/model
GH->>store: Upload k6 summary artifact
GH-->>DC: docker-compose down (always)
sequenceDiagram
autonumber
actor VU as k6 VU
participant API as Backend API
participant FE as Frontend (static assets)
rect rgba(220,240,255,0.5)
note right of VU: Setup
VU->>API: POST /api/materials
API-->>VU: 200 {id}
VU->>API: POST /api/modules (materialId)
API-->>VU: 200 {id}
end
rect rgba(220,255,220,0.5)
note right of VU: Test flow
VU->>API: POST /api/quote/generate (moduleId)
API-->>VU: 200 {quote}
VU->>API: POST /api/cnc/export (moduleId)
API-->>VU: 200 {csv}
end
opt Frontend assets check
VU->>FE: GET /models/manifest.json
FE-->>VU: 200 manifest
VU->>FE: GET /models/<file>
FE-->>VU: 200 (Cache-Control: immutable, ETag?)
VU->>FE: GET /models/<file> (If-None-Match)
FE-->>VU: 200 or 304
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
💡 Codex Review
paform/frontend/src/app/configurator/page.tsx
Lines 5 to 7 in b0b5648
The configurator page still hardcodes '/models/BaseCabinet600.glb' for the demo model, but this commit deletes frontend/public/models/BaseCabinet600.glb and replaces it with the new base_600*.glb assets described in the manifest. Loading the page will now request a file that no longer exists, so the viewer fails to display anything. Point the demo to one of the new files or read from the manifest to keep the demo functional.
- name: Start stack
  run: |
    cd paform
    docker compose --env-file .env.development -f docker-compose.dev.yml up -d --build
Avoid cd into non-existent repo directory
Each step in the new perf-light workflow starts with cd paform, but after actions/checkout the runner is already inside the repository root (…/paform). There is no nested paform/paform folder, so the cd command exits with status 1 and, because steps run with set -e, the job fails before the stack is started or any tests execute. Dropping the extra cd (or setting working-directory) is required for the workflow to run.
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @shayancoin in #5 (comment). The following files were modified:
* `backend/tests/test_error_envelopes.py`
* `scripts/gen_glb_manifest.py`
* `scripts/generate_reference_glbs.py`
* `scripts/glb_validate.py`
* `scripts/update_glb_metadata.py`
* `tests/perf/k6-quote-cnc.js`
Actionable comments posted: 5
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (13)
.github/workflows/perf-light.yml (1 hunks)
backend/tests/test_error_envelopes.py (1 hunks)
docs/API_SPEC.md (1 hunks)
frontend/package.json (2 hunks)
frontend/playwright.config.ts (1 hunks)
frontend/public/models/manifest.json (1 hunks)
frontend/tests/e2e/cache.spec.ts (1 hunks)
scripts/gen_glb_manifest.py (1 hunks)
scripts/generate_reference_glbs.py (1 hunks)
scripts/glb_validate.py (1 hunks)
scripts/pack_models.sh (1 hunks)
scripts/update_glb_metadata.py (1 hunks)
tests/perf/k6-quote-cnc.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
scripts/glb_validate.py (1)
scripts/update_glb_metadata.py (1)
triangle_count(43-58)
frontend/tests/e2e/cache.spec.ts (1)
tests/perf/k6-quote-cnc.js (3)
manifestResponse (103-103), etag (116-116), cached (118-118)
scripts/gen_glb_manifest.py (4)
scripts/generate_reference_glbs.py (1): main (253-268)
scripts/glb_validate.py (1): main (236-260)
scripts/update_glb_metadata.py (1): main (101-109)
tests/perf/k6-quote-cnc.js (1): manifest (106-106)
scripts/update_glb_metadata.py (2)
scripts/glb_validate.py (2): triangle_count (83-97), main (236-260)
scripts/gen_glb_manifest.py (1): main (24-29)
scripts/generate_reference_glbs.py (3)
scripts/update_glb_metadata.py (2): write_glb (31-40), main (101-109)
scripts/gen_glb_manifest.py (1): main (24-29)
scripts/glb_validate.py (1): main (236-260)
🪛 actionlint (1.7.8)
.github/workflows/perf-light.yml
41-41: could not parse as YAML: could not find expected ':'
(syntax-check)
🪛 GitHub Actions: CI
scripts/glb_validate.py
[error] 58-58: Returning Any from function declared to return 'dict[Any, Any]' [no-any-return]
[error] 16-16: TYPO: 'LOD' should be 'LOAD' (typos) in constant name
.github/workflows/perf-light.yml
[error] 41-42: check-yaml failed: could not find expected ':'
scripts/pack_models.sh
[error] 5-5: Use of LOD in comments and variable names flagged as incorrect by typos/linters (LOD -> LOAD)
scripts/update_glb_metadata.py
[error] 2-2: TYPO: 'LOD' should be 'LOAD' (typos) in code
scripts/generate_reference_glbs.py
[error] 109-109: No binding for nonlocal 'buffer_bytes' found [misc]
[error] 53-53: TYPO: 'LOD' should be 'LOAD' (typos) in docstring
🪛 Ruff (0.14.0)
scripts/glb_validate.py
1-1: Shebang is present but file is not executable
(EXE001)
12-12: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
12-12: typing.Dict is deprecated, use dict instead
(UP035)
12-12: typing.List is deprecated, use list instead
(UP035)
12-12: typing.Tuple is deprecated, use tuple instead
(UP035)
36-36: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Avoid specifying long messages outside the exception class
(TRY003)
100-100: Boolean-typed positional argument in function definition
(FBT001)
scripts/gen_glb_manifest.py
1-1: Shebang is present but file is not executable
(EXE001)
scripts/update_glb_metadata.py
1-1: Shebang is present but file is not executable
(EXE001)
9-9: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
9-9: typing.Dict is deprecated, use dict instead
(UP035)
9-9: typing.List is deprecated, use list instead
(UP035)
9-9: typing.Tuple is deprecated, use tuple instead
(UP035)
17-17: Avoid specifying long messages outside the exception class
(TRY003)
66-66: Avoid specifying long messages outside the exception class
(TRY003)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
75-75: Avoid specifying long messages outside the exception class
(TRY003)
scripts/generate_reference_glbs.py
1-1: Shebang is present but file is not executable
(EXE001)
21-21: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
21-21: typing.Dict is deprecated, use dict instead
(UP035)
21-21: typing.List is deprecated, use list instead
(UP035)
21-21: typing.Tuple is deprecated, use tuple instead
(UP035)
🪛 YAMLlint (1.37.1)
.github/workflows/perf-light.yml
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
[error] 42-42: syntax error: could not find expected ':'
(syntax)
🔇 Additional comments (10)
frontend/public/models/manifest.json (1)
1-34: LGTM! The manifest structure is valid JSON with consistent naming conventions for models and LOD variants. The sha256 hashes and byte sizes provide integrity verification for asset caching, which aligns with the e2e cache tests.
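The consumer-side integrity check the manifest enables can be sketched as follows. This is a hypothetical helper, not code from the PR; the entry field names (`file`, `sha256`, `bytes`) are assumptions based on the review's description of the manifest:

```python
# Hypothetical check of a manifest entry against the file on disk; the field
# names ("file", "sha256", "bytes") are assumed from the review description,
# not a confirmed schema.
import hashlib
from pathlib import Path

def verify_entry(models_dir: Path, entry: dict) -> bool:
    """True when the on-disk file matches the manifest's hash and size."""
    # entry["file"] is assumed to be a web path such as "/models/base_600.glb"
    path = models_dir / Path(entry["file"]).name
    data = path.read_bytes()
    return (
        hashlib.sha256(data).hexdigest() == entry["sha256"]
        and len(data) == entry["bytes"]
    )
```

A CI step could run this over every manifest entry to catch assets that drifted from the committed hashes.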
frontend/tests/e2e/cache.spec.ts (1)
1-21: LGTM! The test correctly validates static asset caching behavior:
- Verifies manifest accessibility
- Checks the `immutable` cache-control directive
- Validates ETag-based conditional requests (200/304 responses)

The lowercase `etag` header access is correct, as Playwright normalizes HTTP headers to lowercase.

scripts/gen_glb_manifest.py (2)
15-21: LGTM! The `manifest_entry()` function correctly computes the SHA256 hash and file size, formatting the output with the expected `/models/` prefix for web-accessible paths.
24-29: LGTM! The `main()` function correctly globs for GLB files, generates entries, and outputs valid JSON. The sorted glob ensures deterministic manifest ordering.

.github/workflows/perf-light.yml (1)
35-80: Python heredoc syntax is correct; linter false positive. The YAML syntax error reported at lines 41-42 is a false positive from linters that cannot parse embedded scripts within heredocs. The heredoc syntax `python - <<'PY' ... PY` is valid Bash and YAML. The embedded Python script correctly:
- Creates a material via POST /api/materials/
- Creates a module using the returned material_id
- Handles errors with proper exception details
To silence the linter warning, you could add a yamllint configuration to exclude embedded scripts, but this is optional.
scripts/pack_models.sh (1)
1-94: Script logic is sound; "LOD" is correct terminology. The pipeline failure flagging "LOD" as a typo is a false positive. "LOD" stands for "Level of Detail" and is standard terminology in 3D graphics for representing models at different complexity levels.
The script correctly:
- Validates gltfpack binary and KTX2 support
- Generates reference models if missing
- Packs LOD0 (full detail) and LOD1 (simplified) variants
- Updates metadata for each LOD level
- Handles legacy assets appropriately
To silence the typo checker, add LOD to your project's allowlist (e.g., in `.typos.toml` or similar):

[default.extend-words]
LOD = "LOD" # Level of Detail

tests/perf/k6-quote-cnc.js (2)
4-11: LGTM! Performance thresholds are well-defined. The thresholds are appropriate for a light performance test:
- Error rate < 1%
- P95 latency < 150ms
These align with the QA.md targets mentioned in the PR objectives.
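As an illustration, the p95 gate could be re-checked offline against exported request durations using a nearest-rank percentile — a sketch of the threshold math, not k6's exact internal algorithm:

```python
# Nearest-rank p95 over request durations (ms), e.g. pulled from a k6 summary
# export. This approximates, but does not reproduce, k6's own percentile code.
def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    # ceil(pct/100 * n) as a 1-based rank, at least 1
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

def meets_thresholds(durations_ms: list[float], failures: int) -> bool:
    """Mirror the stated gates: error rate < 1% and p95 < 150 ms."""
    error_rate = failures / len(durations_ms)
    return error_rate < 0.01 and percentile(durations_ms, 95) < 150
```

Such a helper could back a CI assertion that re-validates the k6 summary artifact after the run.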
16-47: LGTM! Resource creation helper is well-structured. The `createMaterialAndModule()` function correctly:
- Generates unique resource names with timestamp + random suffix
- Creates material and module in proper sequence
- Validates HTTP 200 responses
- Returns IDs for downstream test usage
frontend/playwright.config.ts (1)
11-19: Verify tests without `--disable-web-security`.
Remove the flag from `frontend/playwright.config.ts`, run `npm run test:e2e`, and share any errors or failures. This will confirm whether the flag is truly required or if a more targeted workaround is available.
1-1437: Ensure OpenAPI spec matches FastAPI backend. The machine-readable spec is comprehensive. Prevent drift by generating the spec from your FastAPI app in an environment with dependencies installed and diffing against docs/API_SPEC.md:

cd backend
pip install -r requirements.txt
python - << 'EOF' > ../generated_spec.json
import json
from api.main import app
print(json.dumps(app.openapi(), indent=2))
EOF
diff -u ../docs/API_SPEC.md ../generated_spec.json

Add a CI job to automate this consistency check.
on:
  pull_request: {}
🧹 Nitpick | 🔵 Trivial
Optional style fix for consistency.
The pull_request: {} syntax is valid but pull_request: (no empty dict) is more conventional for workflows that trigger on all PR events.
Apply this diff for consistency:
- pull_request: {}
+ pull_request:

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
on:
  pull_request: {}

becomes:

on:
  pull_request:
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 3-3: truthy value should be one of [false, true]
(truthy)
🤖 Prompt for AI Agents
.github/workflows/perf-light.yml around lines 3 to 4: replace the unconventional
empty-dict trigger syntax `pull_request: {}` with the more conventional
`pull_request:` (i.e., remove the `{}`) so the workflow uses the standard form
that triggers on all pull request events.
- name: Start stack
  run: |
    cd paform
    docker compose --env-file .env.development -f docker-compose.dev.yml up -d --build

- name: Wait for API
  run: |
    cd paform
    for i in {1..60}; do curl -sf http://localhost:8000/healthcheck && break || sleep 2; done

- name: Wait for Frontend
  run: |
    for i in {1..60}; do curl -sf http://localhost:3000/models/manifest.json && break || sleep 2; done
🧹 Nitpick | 🔵 Trivial
Consider adding failure handling for healthchecks.
The healthcheck loops retry 60 times with 2-second intervals (2 minutes total), but silently succeed if the service never responds. Consider failing explicitly if the loop exhausts retries.
Apply this diff to fail explicitly:
- name: Wait for API
run: |
cd paform
- for i in {1..60}; do curl -sf http://localhost:8000/healthcheck && break || sleep 2; done
+ for i in {1..60}; do
+ curl -sf http://localhost:8000/healthcheck && break || sleep 2
+ if [ $i -eq 60 ]; then echo "API healthcheck failed after 2m"; exit 1; fi
+ done
- name: Wait for Frontend
run: |
- for i in {1..60}; do curl -sf http://localhost:3000/models/manifest.json && break || sleep 2; done
+ for i in {1..60}; do
+ curl -sf http://localhost:3000/models/manifest.json && break || sleep 2
+ if [ $i -eq 60 ]; then echo "Frontend manifest check failed after 2m"; exit 1; fi
+ done

📝 Committable suggestion
- name: Start stack
  run: |
    cd paform
    docker compose --env-file .env.development -f docker-compose.dev.yml up -d --build
- name: Wait for API
  run: |
    cd paform
    for i in {1..60}; do curl -sf http://localhost:8000/healthcheck && break || sleep 2; done
- name: Wait for Frontend
  run: |
    for i in {1..60}; do curl -sf http://localhost:3000/models/manifest.json && break || sleep 2; done

becomes:

- name: Wait for API
  run: |
    cd paform
    for i in {1..60}; do
      curl -sf http://localhost:8000/healthcheck && break || sleep 2
      if [ $i -eq 60 ]; then echo "API healthcheck failed after 2m"; exit 1; fi
    done
- name: Wait for Frontend
  run: |
    for i in {1..60}; do
      curl -sf http://localhost:3000/models/manifest.json && break || sleep 2
      if [ $i -eq 60 ]; then echo "Frontend manifest check failed after 2m"; exit 1; fi
    done
🤖 Prompt for AI Agents
.github/workflows/perf-light.yml lines 21-33: the two healthcheck loops
currently retry but silently continue if the service never becomes healthy;
modify each block so that after the retry loop finishes without success it
prints a clear error message and exits non-zero (e.g., echo "Service X did not
become ready" and exit 1), ensuring the workflow step fails when retries are
exhausted.
@@ -0,0 +1,33 @@
#!/usr/bin/env python3
Make the script executable.
The shebang is present but the file is not executable. This prevents direct execution of the script.
Apply this fix:
#!/bin/bash
chmod +x scripts/gen_glb_manifest.py

Based on static analysis.
🧰 Tools
🪛 Ruff (0.14.0)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
In scripts/gen_glb_manifest.py around lines 1 to 1 the shebang exists but the
file lacks the executable bit; make the script executable by setting the file
mode to include the user execute bit (e.g., run chmod +x on
scripts/gen_glb_manifest.py) and then stage and commit the change so the
executable bit is preserved in the repository.
lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
if lod1_extras.get("lod", {}).get("level") != 1:
    issues.append(
        Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
    )
Fix LOD1 metadata check to inspect the actual root node.
This block always inspects nodes[0], but update_metadata.py stamps the LOD/QA extras onto whichever node is actually referenced in scenes[scene].nodes[0]. Whenever the root node isn’t entry 0 in the nodes array (which happens with many exported assets), the validator will flag a false error even though the LOD1 file is correct. Resolve it by locating the real root index (reuse gather_root_nodes) before reading its extras.
- lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
- if lod1_extras.get("lod", {}).get("level") != 1:
+ lod1_nodes = lod1_doc.get("nodes") or []
+ lod1_roots = gather_root_nodes(lod1_doc)
+ lod1_extras = {}
+ if lod1_roots:
+ lod1_root_idx = lod1_roots[0]
+ if lod1_root_idx < len(lod1_nodes):
+ extras = lod1_nodes[lod1_root_idx].get("extras")
+ if isinstance(extras, dict):
+ lod1_extras = extras
+ if lod1_extras.get("lod", {}).get("level") != 1:
issues.append(
Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
)📝 Committable suggestion
lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
if lod1_extras.get("lod", {}).get("level") != 1:
    issues.append(
        Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
    )

becomes:

lod1_nodes = lod1_doc.get("nodes") or []
lod1_roots = gather_root_nodes(lod1_doc)
lod1_extras = {}
if lod1_roots:
    lod1_root_idx = lod1_roots[0]
    if lod1_root_idx < len(lod1_nodes):
        extras = lod1_nodes[lod1_root_idx].get("extras")
        if isinstance(extras, dict):
            lod1_extras = extras
if lod1_extras.get("lod", {}).get("level") != 1:
    issues.append(
        Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
    )
🤖 Prompt for AI Agents
In scripts/glb_validate.py around lines 216 to 220, the code always reads extras
from nodes[0] which can be the wrong root; instead reuse the existing
gather_root_nodes logic to find the actual root node index for the scene (the
node referenced by scenes[scene].nodes[0]) and then read extras from
nodes[real_root_index]; update lod1_extras assignment to locate that root index
first and guard for missing nodes/scene entries before inspecting extras to
avoid false LOD1 errors.
export default function (data) {
  const headers = { 'Content-Type': 'application/json' };
  const moduleId = data?.moduleId || createMaterialAndModule().moduleId;
  const payloads = buildPayload(moduleId);

  const quoteResponse = http.post(`${baseUrl}/api/quote/generate`, payloads.quote, {
    headers,
  });
  check(quoteResponse, { 'quote 200': (r) => r.status === 200 });

  const cncResponse = http.post(`${baseUrl}/api/cnc/export`, payloads.cnc, {
    headers,
  });
  check(cncResponse, {
    'cnc 200 + csv': (r) => r.status === 200 && r.json('csv'),
  });

  if (feBase) {
    const manifestResponse = http.get(`${feBase}/models/manifest.json`);
    check(manifestResponse, { 'manifest 200': (r) => r.status === 200 });
    try {
      const manifest = manifestResponse.json();
      const file = manifest?.models?.[0]?.file;
      if (file) {
        const asset = http.get(`${feBase}${file}`);
        check(asset, {
          'model 200 + immutable': (r) => {
            const header = r.headers['Cache-Control'] || r.headers['cache-control'] || '';
            return r.status === 200 && String(header).includes('immutable');
          },
        });
        const etag = asset.headers['Etag'] || asset.headers['ETag'];
        if (etag) {
          const cached = http.get(`${feBase}${file}`, { headers: { 'If-None-Match': etag } });
          check(cached, { 'model cache 200/304': (r) => r.status === 304 || r.status === 200 });
        }
      }
    } catch (err) {
      // Ignore JSON parsing issues so perf run can continue.
    }
  }

  sleep(0.5);
}
🧹 Nitpick | 🔵 Trivial
Consider optional logging in error catch block.
The test logic correctly validates quote/CNC endpoints and optional frontend asset caching. The empty catch block at lines 122-124 intentionally allows perf runs to continue despite manifest parsing errors.
For improved debugging, consider minimal logging:
} catch (err) {
- // Ignore JSON parsing issues so perf run can continue.
+ console.warn('Manifest parsing failed (non-critical):', err);
}

This preserves the non-blocking behavior while aiding troubleshooting.
🤖 Prompt for AI Agents
In tests/perf/k6-quote-cnc.js around lines 85 to 128, the catch block that
swallows manifest JSON parsing errors is empty; add a minimal non-blocking log
inside the catch (e.g., console.warn or console.error) that includes a short
message and the error detail so perf runs continue but you can troubleshoot
manifest parsing issues.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Eng#2 — Lock CI gates & publish QA
Admin action requested