Conversation

Walkthrough

Replaced the Markdown API spec with an OpenAPI 3.1 JSON spec. Added frontend asset scripts and dependencies. Introduced a models manifest. Added scripts to generate reference GLBs, pack/compress models, update GLB metadata, validate GLBs, and generate a manifest. Integrated KTX2/Meshopt packing and LOD bookkeeping. No public API code changes beyond documentation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Dev as Developer/CI
    participant Pack as pack_models.sh
    participant GenRef as generate_reference_glbs.py
    participant Gltf as gltfpack (external)
    participant Meta as update_glb_metadata.py
    participant FS as Models Dir
    Dev->>Pack: Run pack_models.sh
    Pack->>GenRef: Ensure reference GLBs exist
    GenRef-->>FS: Write base/tall/wall .glb
    Pack->>Gltf: Detect binary + KTX2 support
    loop For each source GLB (LOD0)
        Pack->>Gltf: Pack LOD0 (meshopt/ktx2)
        Gltf-->>FS: Temp packed LOD0
        Pack->>Meta: Update metadata (level=0)
        alt LOD1 exists
            Pack->>Gltf: Pack existing LOD1
        else
            Pack->>Gltf: Simplify to LOD1 (0.55)
        end
        Gltf-->>FS: Temp packed LOD1
        Pack->>Meta: Update metadata (level=1)
        Pack-->>FS: Move final LOD0/LOD1 into place
    end
    Pack-->>Dev: Packing complete
```
```mermaid
sequenceDiagram
    autonumber
    actor User as User/CI
    participant Val as glb_validate.py
    participant FS as Filesystem
    participant LOD1 as LOD1 Loader (optional)
    User->>Val: glb_validate.py [paths...] [--warn-threshold N] [--fail-on-warning]
    loop For each GLB
        Val->>FS: Read GLB bytes
        Val->>Val: Parse header + JSON chunk
        Val->>Val: Check schema, root node, materials, dimensions
        Val->>Val: Compute bounds & triangle count
        alt LOD0 file
            Val->>LOD1: If @lod1 expected, load paired file
            LOD1-->>Val: LOD1 JSON for checks
            Val->>Val: Verify LOD levels and triangle caps
        end
        Val-->>User: Issues (errors first, then warnings)
    end
    Val-->>User: Summary and exit code
```
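The validator's "Parse header + JSON chunk" step follows the binary glTF (GLB) container layout: a 12-byte header (magic, version, total length) followed by a length-prefixed JSON chunk. A minimal sketch of such a parser (an illustrative helper, not the repo's actual `load_glb`):

```python
import json
import struct

GLB_MAGIC = 0x46546C67  # little-endian uint32 of ASCII "glTF"
JSON_CHUNK = 0x4E4F534A  # little-endian uint32 of ASCII "JSON"

def parse_glb_json(data: bytes) -> dict:
    """Parse a GLB byte string and return its JSON chunk as a dict."""
    magic, version, _length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file")
    if version != 2:
        raise ValueError(f"unsupported glTF version: {version}")
    chunk_len, chunk_type = struct.unpack_from("<II", data, 12)
    if chunk_type != JSON_CHUNK:
        raise ValueError("first chunk must be JSON")
    return json.loads(data[20:20 + chunk_len])
```

Everything downstream (schema checks, bounds, triangle counts) operates on the returned dict.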
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Pre-merge checks: ❌ Failed checks (3 warnings)
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
def validate(path: Path, fail_on_warning: bool, warn_threshold: int) -> Tuple[int, int, List[Issue]]:
    issues: List[Issue] = []
    try:
        doc = load_glb(path)
    except Exception as exc:  # noqa: BLE001
        return 0, 0, [Issue("ERROR", "LOAD", f"{path.name}: {exc}")]
```
Treat GLB load failures as errors

When a GLB cannot be read, `validate` returns `(0, 0, [Issue(...)])`, so the caller prints the issue but still reports "✅ Pass" and exits with success. Missing or corrupt assets therefore will not fail CI even though an error is emitted. The function should return a non-zero error count (and ideally propagate a failing exit code) so that load failures are treated as hard failures.
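A sketch of the suggested fix: count the load failure as one hard error so the caller's exit logic sees it. The `load_glb` dependency is injected here for illustration; the real function's signature differs.

```python
from pathlib import Path
from typing import List, NamedTuple, Tuple

class Issue(NamedTuple):
    severity: str
    category: str
    message: str

def validate(path: Path, load_glb) -> Tuple[int, int, List[Issue]]:
    """Return (error_count, warning_count, issues)."""
    issues: List[Issue] = []
    try:
        doc = load_glb(path)
    except Exception as exc:  # noqa: BLE001
        # Report the load failure as a non-zero error count, not (0, 0, ...).
        return 1, 0, [Issue("ERROR", "LOAD", f"{path.name}: {exc}")]
    # ... real checks would populate `issues` from `doc` here ...
    errors = sum(1 for i in issues if i.severity == "ERROR")
    warnings = sum(1 for i in issues if i.severity == "WARNING")
    return errors, warnings, issues
```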
Actionable comments posted: 15
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (8)
- docs/API_SPEC.md (1 hunks)
- frontend/package.json (2 hunks)
- frontend/public/models/manifest.json (1 hunks)
- scripts/gen_glb_manifest.py (1 hunks)
- scripts/generate_reference_glbs.py (1 hunks)
- scripts/glb_validate.py (1 hunks)
- scripts/pack_models.sh (1 hunks)
- scripts/update_glb_metadata.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
scripts/gen_glb_manifest.py (3)
- scripts/generate_reference_glbs.py (1): main (253-268)
- scripts/glb_validate.py (1): main (236-260)
- scripts/update_glb_metadata.py (1): main (101-109)

scripts/generate_reference_glbs.py (3)
- scripts/update_glb_metadata.py (2): write_glb (31-40), main (101-109)
- scripts/gen_glb_manifest.py (1): main (24-29)
- scripts/glb_validate.py (1): main (236-260)

scripts/update_glb_metadata.py (2)
- scripts/generate_reference_glbs.py (2): write_glb (235-250), main (253-268)
- scripts/glb_validate.py (2): triangle_count (83-97), main (236-260)

scripts/glb_validate.py (1)
- scripts/update_glb_metadata.py (1): triangle_count (43-58)
🪛 Ruff (0.14.0)
scripts/gen_glb_manifest.py
1-1: Shebang is present but file is not executable
(EXE001)
scripts/generate_reference_glbs.py
1-1: Shebang is present but file is not executable
(EXE001)
21-21: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
21-21: typing.Dict is deprecated, use dict instead
(UP035)
21-21: typing.List is deprecated, use list instead
(UP035)
21-21: typing.Tuple is deprecated, use tuple instead
(UP035)
scripts/update_glb_metadata.py
1-1: Shebang is present but file is not executable
(EXE001)
9-9: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
9-9: typing.Dict is deprecated, use dict instead
(UP035)
9-9: typing.List is deprecated, use list instead
(UP035)
9-9: typing.Tuple is deprecated, use tuple instead
(UP035)
17-17: Avoid specifying long messages outside the exception class
(TRY003)
66-66: Avoid specifying long messages outside the exception class
(TRY003)
71-71: Avoid specifying long messages outside the exception class
(TRY003)
75-75: Avoid specifying long messages outside the exception class
(TRY003)
scripts/glb_validate.py
1-1: Shebang is present but file is not executable
(EXE001)
12-12: Import from collections.abc instead: Iterable, Sequence
Import from collections.abc
(UP035)
12-12: typing.Dict is deprecated, use dict instead
(UP035)
12-12: typing.List is deprecated, use list instead
(UP035)
12-12: typing.Tuple is deprecated, use tuple instead
(UP035)
36-36: Avoid specifying long messages outside the exception class
(TRY003)
48-48: Avoid specifying long messages outside the exception class
(TRY003)
100-100: Boolean-typed positional argument in function definition
(FBT001)
🔇 Additional comments (4)
frontend/package.json (1)
43-46: gltfpack flags compatible with v0.25.0

The flags `-cc`, `-tc`, `-ktx2`, `-si`, `-sa`, `-kn`, `-km`, and `-ke` are supported in gltfpack v0.25.x per the official docs.

scripts/update_glb_metadata.py (1)

88-92: Remove duplication of `extras` into `nodes[0]` and update the validator

Copying `extras` to `nodes[0]` duplicates metadata on unrelated nodes. Ensure the validator reads LOD metadata from the scene's root node, then remove:

```diff
- if nodes and root_index != 0:
-     first_node = nodes[0]
-     if not isinstance(first_node.get("extras"), dict):
-         first_node["extras"] = extras
```

Confirm the validator change before merging.
frontend/public/models/manifest.json (1)
1-34: LGTM! Clean manifest structure.

The manifest follows a clear and consistent structure with all necessary metadata (file path, SHA-256 hash, byte size) for the six GLB models and their LOD1 variants. The naming convention with the `@lod1` suffix effectively distinguishes detail levels.

scripts/gen_glb_manifest.py (1)
32-33: LGTM! Standard entry point pattern.

The use of `raise SystemExit(main())` is the correct pattern for executable Python scripts, ensuring proper exit code propagation.
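For reference, the pattern looks like this in a generic script (illustrative, not the repo's `main`):

```python
def main() -> int:
    # Return 0 on success, non-zero on failure.
    print("generating manifest...")
    return 0

if __name__ == "__main__":
    # SystemExit carries main()'s return value as the process exit code.
    raise SystemExit(main())
```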
```json
"security": [
    {
        "APIKeyHeader": []
    }
]
```
Correct the Hygraph pull security scheme.
The operation description calls for a Bearer token, yet the security scheme is modeled as an `apiKey` header. This mismatch breaks client generation: Bearer auth requires `type: http` with `scheme: bearer`. Please change the security scheme accordingly and keep the operation-level reference aligned.
Also applies to: 1430-1434
🤖 Prompt for AI Agents
In docs/API_SPEC.md around lines 659-663 (and likewise update lines 1430-1434),
the OpenAPI security scheme for the Hygraph pull is defined as an apiKey in a
header while operations expect a Bearer token; change the security scheme
definition to type: http and scheme: bearer (keeping any description/name fields
as needed) and ensure the operation-level security references still point to
that scheme name so clients generate an HTTP Bearer auth header instead of an
apiKey header.
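The corrected scheme, expressed as the Python-dict form of the OpenAPI fragment. The scheme name `HygraphBearer` is illustrative; the spec may keep its existing name as the prompt suggests.

```python
# Hypothetical OpenAPI components fragment with an HTTP Bearer scheme.
components = {
    "securitySchemes": {
        "HygraphBearer": {
            "type": "http",
            "scheme": "bearer",
            "description": "Hygraph content API token",
        }
    }
}

# The operation-level security entry references the scheme by name only:
operation_security = [{"HygraphBearer": []}]
```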
```json
"get": {
    "summary": "Metrics",
    "operationId": "metrics_metrics_get",
    "responses": {
        "200": {
            "description": "Successful Response",
            "content": {
                "application/json": {
                    "schema": {}
                }
            }
        }
    }
}
```
Fix metrics response media type.
The /metrics operation documents application/json, but Prometheus-style metrics are emitted as text/plain; version=0.0.4. The current description will cause autogenerated clients to reject the actual payload. Please switch the documented media type (and schema) to plain text.
🤖 Prompt for AI Agents
In docs/API_SPEC.md around lines 713 to 726, the /metrics GET response is
incorrectly documented as "application/json"; change the response media type to
"text/plain; version=0.0.4" and replace the empty JSON schema with a plain text
schema (e.g., schema: type: string) so the OpenAPI spec matches Prometheus-style
metrics output; ensure the content block uses the exact media type string and a
simple string schema.
```json
"assets:gen": "python ../scripts/generate_reference_glbs.py",
"assets:pack": "bash ../scripts/pack_models.sh",
"assets:validate": "python ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
"assets:manifest": "python ../scripts/gen_glb_manifest.py > public/models/manifest.json",
"assets:all": "npm run assets:gen && npm run assets:pack && npm run assets:validate && npm run assets:manifest",
"test:manifest": "vitest run --reporter=dot"
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Standardize Python invocation; align with pack script.
Use python3 consistently to avoid env mismatches (some systems have no python). Example:
```diff
- "assets:gen": "python ../scripts/generate_reference_glbs.py",
+ "assets:gen": "python3 ../scripts/generate_reference_glbs.py",
- "assets:validate": "python ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
+ "assets:validate": "python3 ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
- "assets:manifest": "python ../scripts/gen_glb_manifest.py > public/models/manifest.json",
+ "assets:manifest": "python3 ../scripts/gen_glb_manifest.py > public/models/manifest.json",
```

Note: `> public/models/manifest.json` and the `*.glb` glob rely on a POSIX shell; if Windows support is needed, consider a node wrapper script.
🏁 Script executed:
```bash
#!/bin/bash
echo "=== frontend/package.json python invocations ==="
rg -n '"python ' -C2 frontend/package.json || echo "none"
echo "=== frontend/package.json python3 invocations ==="
rg -n '"python3' -C2 frontend/package.json || echo "none"
echo "=== Global python invocations ==="
rg -n '"python ' -C2 || echo "none"
echo "=== Global python3 invocations ==="
rg -n '"python3' -C2 || echo "none"
echo "=== Shebang lines in scripts/ ==="
find scripts -maxdepth 1 -type f -name "*.py" | xargs -r head -n1
```

Length of output: 1752
Use python3 in package.json scripts
Replace python with python3 in assets:gen, assets:validate, and assets:manifest to match the scripts’ shebangs. These commands use POSIX globs and redirection; for Windows support, consider a Node-based wrapper.
🤖 Prompt for AI Agents
In frontend/package.json around lines 11 to 16, three scripts use the plain
`python` executable which is inconsistent with the scripts' shebangs and can
fail on systems where Python 3 is `python3`; update the scripts `assets:gen`,
`assets:validate`, and `assets:manifest` to call `python3` instead of `python`.
Ensure you only change the command name (not arguments), run the updated scripts
locally to verify they work, and consider adding a short comment or README note
if Windows users need a Node-based wrapper for glob/redirection behavior.
```diff
@@ -0,0 +1,33 @@
+ #!/usr/bin/env python3
```
Fix the executable permission.
The shebang is present but the file is not marked as executable. This prevents the script from being run directly.
Apply this fix:

```bash
chmod +x scripts/gen_glb_manifest.py
```

Based on static analysis hints.
🧰 Tools
🪛 Ruff (0.14.0)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
In scripts/gen_glb_manifest.py around line 1 the shebang is present but the file
is not executable; mark the script executable by setting the execute bit (e.g.,
run chmod +x scripts/gen_glb_manifest.py) and update the repository index (git
add scripts/gen_glb_manifest.py and commit) so the script can be run directly.
```python
def manifest_entry(path: Path) -> dict:
    data = path.read_bytes()
    return {
        "file": "/models/" + path.name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }
```
🧹 Nitpick | 🔵 Trivial
Add error handling for file operations.
The function reads the entire file into memory and computes its hash without handling potential I/O errors (e.g., permission denied, file not found during read).
Apply this diff to add basic error handling:

```diff
 def manifest_entry(path: Path) -> dict:
-    data = path.read_bytes()
+    try:
+        data = path.read_bytes()
+    except OSError as e:
+        raise RuntimeError(f"Failed to read {path}: {e}") from e
     return {
         "file": "/models/" + path.name,
         "sha256": hashlib.sha256(data).hexdigest(),
         "bytes": len(data),
     }
```
🤖 Prompt for AI Agents
In scripts/gen_glb_manifest.py around lines 15 to 21, manifest_entry reads the
whole file and computes a hash without handling I/O errors; wrap the file read
and hashing in a try/except that catches OSError (or Exception if broader) to
handle file-not-found/permission errors, and on failure either raise a new
exception with a clear contextual message (including the path and original
error) or return a sentinel (e.g., None) depending on callers; ensure the happy
path still returns the dict with "file", "sha256", and "bytes".
```python
lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
if lod1_extras.get("lod", {}).get("level") != 1:
    issues.append(
        Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
    )
elif lod_level == 1 and "@lod1" not in path.name:
    issues.append(Issue("ERROR", "LOD", "LOD1 assets must use '@lod1' suffix in filename"))
```
Check LOD1 metadata on the LOD1 root node, not nodes[0].
Aligns with earlier root handling and avoids reliance on node ordering.
```diff
-        lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
-        if lod1_extras.get("lod", {}).get("level") != 1:
+        lod1_roots = gather_root_nodes(lod1_doc)
+        lod1_nodes = lod1_doc.get("nodes") or []
+        lod1_root = lod1_nodes[lod1_roots[0]] if lod1_roots else {}
+        lod1_extras = lod1_root.get("extras") or {}
+        if lod1_extras.get("lod", {}).get("level") != 1:
             issues.append(
                 Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
             )
```
🤖 Prompt for AI Agents
In scripts/glb_validate.py around lines 216 to 223, the LOD1 metadata check
currently inspects nodes[0].extras which relies on node ordering; instead read
the root object's extras by using lod1_doc.get("extras") or {} and check
.get("lod", {}).get("level") == 1, and if not append the same Issue("ERROR",
"LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1"); keep
the existing filename-suffix check for lod_level == 1 unchanged.
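A runnable sketch of the root-node-based lookup, assuming a `gather_root_nodes` helper like the one the suggested diff introduces (names are illustrative). It also shows why indexing `nodes[0]` is fragile: the scene root need not be the first node.

```python
def gather_root_nodes(doc: dict) -> list:
    """Return the root node indices of the document's active scene."""
    scenes = doc.get("scenes") or []
    scene_idx = doc.get("scene", 0)
    if not (0 <= scene_idx < len(scenes)):
        return []
    return scenes[scene_idx].get("nodes") or []

def lod1_level(doc: dict):
    """Read extras.lod.level from the scene's root node, not nodes[0]."""
    roots = gather_root_nodes(doc)
    nodes = doc.get("nodes") or []
    root = nodes[roots[0]] if roots else {}
    extras = root.get("extras") or {}
    return extras.get("lod", {}).get("level")
```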
```bash
HELP_HEADER="$($PACK_CMD -h 2>&1 || true)"
if [[ "$HELP_HEADER" == *"-ktx2"* ]]; then
    TEXTURE_ARGS=(-ktx2)
elif [[ "$HELP_HEADER" == *"-tc"* ]]; then
    TEXTURE_ARGS=(-tc)
else
    echo "ERROR: gltfpack binary lacks KTX2 texture compression support (-ktx2/-tc)." >&2
    exit 1
fi
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Also detect support for -cc (meshopt) to avoid passing unknown flags.
Current check ensures only KTX2 support. Consider verifying -cc in help before using it, or gate it similarly.
🏁 Script executed:
```bash
#!/bin/bash
# Inspect gltfpack help for -cc to ensure support; print a warning if missing.
"$PACK_CMD" -h 2>&1 | rg -n -- "-cc" || echo "Warning: gltfpack lacks -cc; EXT_meshopt_compression may not be emitted."
```

Length of output: 72
Gate meshopt compression flag (-cc) support
After texture-arg detection, capture help and check for -cc:
```bash
HELP_HEADER="$($PACK_CMD -h 2>&1 || true)"
if [[ "$HELP_HEADER" == *"-cc"* ]]; then
    MESHOPT_ARGS=(-cc)
else
    echo "WARNING: gltfpack lacks meshopt compression support (-cc); EXT_meshopt_compression won't be emitted." >&2
    MESHOPT_ARGS=()
fi
```

🤖 Prompt for AI Agents
In scripts/pack_models.sh around lines 30 to 38, after detecting texture args
from the gltfpack help output, also re-run or reuse the HELP_HEADER capture and
check for the presence of the meshopt compression flag "-cc"; if present set
MESHOPT_ARGS to (-cc), otherwise log a warning to stderr about missing meshopt
support and set MESHOPT_ARGS to an empty array; ensure HELP_HEADER is available
at that point (capture it if needed) and place this check immediately after the
texture-arg detection so MESHOPT_ARGS is exported/used later in the script.
```bash
if [[ -f "$lod1_path" ]]; then
    "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
else
    "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
fi
```
Avoid re-simplifying existing LOD1; it compounds on reruns.
If LOD1 exists, repack without -si/-sa; only simplify when deriving from LOD0.
```diff
-    if [[ -f "$lod1_path" ]]; then
-        "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
-    else
-        "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
-    fi
+    if [[ -f "$lod1_path" ]]; then
+        # Repack existing LOD1 to (re)apply compression/texture settings without further simplification.
+        "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}"
+    else
+        # Derive LOD1 from LOD0 with simplification.
+        "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
+    fi
```
🤖 Prompt for AI Agents
In scripts/pack_models.sh around lines 85 to 89, avoid re-applying
simplification flags when an existing LOD1 is being repacked: if LOD1 exists
call PACK_CMD with the same flags but omit the simplification args (the -si/-sa
entries contained in SIMPLIFY_ARGS) so only texture/cleanup flags are used; when
deriving LOD1 from LOD0 keep including SIMPLIFY_ARGS as before. Ensure the
branch that handles the existing LOD1 does not expand "${SIMPLIFY_ARGS[@]}"
(remove it) so simplification is only applied in the else path.
```diff
@@ -0,0 +1,113 @@
+ #!/usr/bin/env python3
```
🧹 Nitpick | 🔵 Trivial
Shebang vs execution mode and typing modernizations.

- Either make the script executable or drop the shebang.
- Prefer `collections.abc` and built-in generics over `typing.*`.

```diff
-#!/usr/bin/env python3
+# (run via python3)
```

```diff
-from typing import Dict, Iterable, List, Sequence, Tuple
+from collections.abc import Iterable, Sequence
+from typing import Dict, List, Tuple  # or use built-in: dict, list, tuple
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.0)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py at line 1, the file currently has a shebang
but may not be executable and uses typing.* imports; either remove the shebang
or make the file executable (chmod +x) to match intended execution, and
modernize type hints by replacing typing imports with collections.abc where
appropriate (e.g., Iterable, Mapping) and using built-in generics (list, dict,
tuple) instead of typing.List/Dict/Tuple; update import statements and all type
annotations accordingly.
```python
scene_idx = doc.get("scene", 0)
root_nodes = scenes[scene_idx].get("nodes") or []
if not root_nodes:
    raise ValueError("Scene has no root nodes")
nodes = doc.get("nodes") or []
root_index = root_nodes[0]
root = nodes[root_index]
```
Guard against invalid scene index.
Avoid IndexError if scene points outside scenes.
```diff
-    scene_idx = doc.get("scene", 0)
-    root_nodes = scenes[scene_idx].get("nodes") or []
+    scene_idx = doc.get("scene", 0)
+    if not (0 <= scene_idx < len(scenes)):
+        raise ValueError(f"Invalid scene index: {scene_idx}")
+    root_nodes = scenes[scene_idx].get("nodes") or []
```
🧰 Tools
🪛 Ruff (0.14.0)
75-75: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py around lines 72 to 79, the code assumes
scene_idx is a valid index into scenes which can raise IndexError; validate that
scenes exists and that 0 <= scene_idx < len(scenes) before using it, and if
invalid raise a clear ValueError (e.g., "Invalid scene index: {scene_idx}") or
handle it explicitly (choose a default scene or abort) so you never index out of
range when accessing scenes[scene_idx].
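A self-contained sketch of the guarded lookup (the function name `get_root_node` is illustrative; in the script this logic is inline):

```python
def get_root_node(doc: dict) -> dict:
    """Return the first root node of the active scene, validating indices."""
    scenes = doc.get("scenes") or []
    scene_idx = doc.get("scene", 0)
    if not (0 <= scene_idx < len(scenes)):
        raise ValueError(f"Invalid scene index: {scene_idx}")
    root_nodes = scenes[scene_idx].get("nodes") or []
    if not root_nodes:
        raise ValueError("Scene has no root nodes")
    nodes = doc.get("nodes") or []
    return nodes[root_nodes[0]]
```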
PR Type
[Feature | Fix | Documentation | Other() ]
Short Description
...
Tests Added
...