Feat/assets gate #4

Merged
shayancoin merged 2 commits into main from feat/assets-gate
Oct 15, 2025
Conversation


@shayancoin shayancoin commented Oct 15, 2025

PR Type

[Feature | Fix | Documentation | Other() ]

Short Description

...

Tests Added

...

Summary by CodeRabbit

  • Documentation
    • Replaced manual API docs with a complete OpenAPI 3.1 specification, including detailed endpoints, schemas, responses, and security notes.
  • New Features
    • Introduced a models manifest and deterministic 3D assets with LOD support for more consistent, potentially faster loading.
  • Tests
    • Added end-to-end tests and manifest validation to catch asset and workflow issues early.
  • Chores
    • Added asset generation, validation, metadata update, and packing scripts, along with new frontend commands to manage the asset pipeline.


coderabbitai bot commented Oct 15, 2025

Walkthrough

Replaced Markdown API spec with an OpenAPI 3.1 JSON spec. Added frontend asset scripts and dependencies. Introduced models manifest. Added scripts to generate reference GLBs, pack/compress models, update GLB metadata, validate GLBs, and generate a manifest. Integrated KTX2/Meshopt packing and LOD bookkeeping. No public API code changes beyond documentation.

Changes

Cohort / File(s) Summary of Changes
API Specification
docs/API_SPEC.md
Replaced manual Markdown with a full OpenAPI 3.1.0 JSON specification, defining endpoints, schemas, responses, and security (APIKeyHeader).
Frontend Asset Tooling
frontend/package.json
Added asset pipeline scripts (e2e, assets:*, manifest tests) and new deps/devDeps: gltfpack, meshoptimizer, vitest, @playwright/test.
Models Manifest
frontend/public/models/manifest.json
New manifest listing six GLB models with file, sha256, and byte size.
Manifest Generator
scripts/gen_glb_manifest.py
New script to hash/size GLBs under frontend/public/models and emit a JSON manifest to stdout.
Reference GLB Generator
scripts/generate_reference_glbs.py
New deterministic GLB generator producing base/tall/wall models with minimal geometry and embedded textures.
GLB Validator
scripts/glb_validate.py
New validator CLI for Paform GLBs: structural checks, metadata/schema validation, material and LOD rules, bounds/triangle counts, warnings/errors, thresholds.
GLB Packer
scripts/pack_models.sh
New pipeline script orchestrating gltfpack with KTX2/Meshopt, generating LOD0/LOD1, and invoking metadata updater; handles env/tool detection and errors.
GLB Metadata Updater
scripts/update_glb_metadata.py
New script to read/modify/write GLB JSON chunk, set LOD level, triangle counts, and QA metadata; CLI for batch updates.
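Several of these scripts revolve around the GLB container layout: a 12-byte header (magic, version, total length) followed by a length-prefixed JSON chunk. A minimal reader for that layout, written independently of the PR's actual helpers (function name and error messages are illustrative), might look like:

```python
import json
import struct


def read_glb_json(data: bytes) -> dict:
    """Parse the JSON chunk out of a binary glTF (GLB) blob."""
    # GLB header: 4-byte magic, uint32 version, uint32 total length.
    magic, version, _total_len = struct.unpack_from("<4sII", data, 0)
    if magic != b"glTF":
        raise ValueError("not a GLB file")
    if version != 2:
        raise ValueError(f"unsupported GLB version: {version}")
    # First chunk: uint32 length, 4-byte type tag, then the payload.
    chunk_len, chunk_type = struct.unpack_from("<I4s", data, 12)
    if chunk_type != b"JSON":
        raise ValueError("first chunk must be JSON")
    return json.loads(data[20 : 20 + chunk_len])
```

A metadata updater in this style would modify the returned dict, re-serialize it (padded to a 4-byte boundary), and rewrite the header lengths.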

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Dev as Developer/CI
  participant Pack as pack_models.sh
  participant GenRef as generate_reference_glbs.py
  participant Gltf as gltfpack (external)
  participant Meta as update_glb_metadata.py
  participant FS as Models Dir

  Dev->>Pack: Run pack_models.sh
  Pack->>GenRef: Ensure reference GLBs exist
  GenRef-->>FS: Write base/tall/wall .glb
  Pack->>Gltf: Detect binary + KTX2 support
  loop For each source GLB (LOD0)
    Pack->>Gltf: Pack LOD0 (meshopt/ktx2)
    Gltf-->>FS: Temp packed LOD0
    Pack->>Meta: Update metadata (level=0)
    alt LOD1 exists
      Pack->>Gltf: Pack existing LOD1
    else
      Pack->>Gltf: Simplify to LOD1 (0.55)
    end
    Gltf-->>FS: Temp packed LOD1
    Pack->>Meta: Update metadata (level=1)
    Pack-->>FS: Move final LOD0/LOD1 into place
  end
  Pack-->>Dev: Packing complete
sequenceDiagram
  autonumber
  actor User as User/CI
  participant Val as glb_validate.py
  participant FS as Filesystem
  participant LOD1 as LOD1 Loader (optional)

  User->>Val: glb_validate.py [paths...] [--warn-threshold N] [--fail-on-warning]
  loop For each GLB
    Val->>FS: Read GLB bytes
    Val->>Val: Parse header + JSON chunk
    Val->>Val: Check schema, root node, materials, dimensions
    Val->>Val: Compute bounds & triangle count
    alt LOD0 file
      Val->>LOD1: If @lod1 expected, load paired file
      LOD1-->>Val: LOD1 JSON for checks
      Val->>Val: Verify LOD levels and triangle caps
    end
    Val-->>User: Issues (errors first, then warnings)
  end
  Val-->>User: Summary and exit code
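The CLI surface in this validator flow can be sketched with argparse (flag names are taken from the diagram; the default values are illustrative, not necessarily the script's actual defaults):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI matching the validator invocation shown in the diagram above."""
    parser = argparse.ArgumentParser(description="Validate GLB assets.")
    parser.add_argument("paths", nargs="+", help="GLB files to validate")
    parser.add_argument("--warn-threshold", type=int, default=0,
                        help="max warnings tolerated before failing")
    parser.add_argument("--fail-on-warning", action="store_true",
                        help="treat warnings as errors for the exit code")
    return parser
```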

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Poem

A rabbit taps keys with a hop and a thrum,
Packing GLBs till the textures hum.
LODs aligned, triangles in tune,
Validators nibble each warning soon.
Manifests hashed, specs crisp and tight—
Carrots up! Our assets shine bright. 🥕✨

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)
  • Title Check ⚠️ Warning: The title “Feat/assets gate” is vague and does not convey the PR's primary changes, such as replacing the API spec with an OpenAPI document or adding asset manifest tools and GLB pipelines. Resolution: revise the title to reflect the core changes, for example “Add OpenAPI 3.1 Spec and Asset Management Scripts”.
  • Description Check ⚠️ Warning: The description is identical to the empty template, documenting neither the PR type, a short description, nor the tests added, so it fails the repository's template requirements. Resolution: fill in PR Type, Short Description, and Tests Added so reviewers understand the PR's purpose and validation.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 4.55%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve coverage.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/assets-gate

Comment @coderabbitai help to get the list of available commands and usage tips.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting

Comment on lines +100 to +106
def validate(path: Path, fail_on_warning: bool, warn_threshold: int) -> Tuple[int, int, List[Issue]]:
    issues: List[Issue] = []
    try:
        doc = load_glb(path)
    except Exception as exc:  # noqa: BLE001
        return 0, 0, [Issue("ERROR", "LOAD", f"{path.name}: {exc}")]


P1 Badge Treat GLB load failures as errors

When a GLB cannot be read, validate returns (0, 0, [Issue(...)]), so the caller prints the issue but still reports "✅ Pass" and exits with success. This means missing or corrupt assets will not fail CI even though an error is emitted. The function should return a non‑zero error count (and ideally propagate a failing exit code) so that load failures are treated as hard failures.
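A minimal sketch of the suggested fix, with stand-in types for the PR's Issue and loader (this mirrors the shape of the code above but is not the actual implementation):

```python
from dataclasses import dataclass
from pathlib import Path
from typing import List, Tuple


@dataclass
class Issue:
    # Mirrors the PR's Issue shape: severity, category, message.
    severity: str
    category: str
    message: str


def validate(path: Path) -> Tuple[int, int, List[Issue]]:
    """Return (error_count, warning_count, issues); a load failure is one error."""
    try:
        _data = path.read_bytes()  # stand-in for the PR's load_glb()
    except Exception as exc:
        # Count the failure so callers exit non-zero instead of reporting a pass.
        return 1, 0, [Issue("ERROR", "LOAD", f"{path.name}: {exc}")]
    # ... real structural checks would go here ...
    return 0, 0, []
```

With the non-zero error count, the caller's summary logic fails CI on missing or corrupt assets without any extra plumbing.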

Useful? React with 👍 / 👎.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9848a04 and c25f7e8.

⛔ Files ignored due to path filters (1)
  • frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (8)
  • docs/API_SPEC.md (1 hunks)
  • frontend/package.json (2 hunks)
  • frontend/public/models/manifest.json (1 hunks)
  • scripts/gen_glb_manifest.py (1 hunks)
  • scripts/generate_reference_glbs.py (1 hunks)
  • scripts/glb_validate.py (1 hunks)
  • scripts/pack_models.sh (1 hunks)
  • scripts/update_glb_metadata.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
scripts/gen_glb_manifest.py (3)
scripts/generate_reference_glbs.py (1)
  • main (253-268)
scripts/glb_validate.py (1)
  • main (236-260)
scripts/update_glb_metadata.py (1)
  • main (101-109)
scripts/generate_reference_glbs.py (3)
scripts/update_glb_metadata.py (2)
  • write_glb (31-40)
  • main (101-109)
scripts/gen_glb_manifest.py (1)
  • main (24-29)
scripts/glb_validate.py (1)
  • main (236-260)
scripts/update_glb_metadata.py (2)
scripts/generate_reference_glbs.py (2)
  • write_glb (235-250)
  • main (253-268)
scripts/glb_validate.py (2)
  • triangle_count (83-97)
  • main (236-260)
scripts/glb_validate.py (1)
scripts/update_glb_metadata.py (1)
  • triangle_count (43-58)
🪛 Ruff (0.14.0)
scripts/gen_glb_manifest.py

1-1: Shebang is present but file is not executable

(EXE001)

scripts/generate_reference_glbs.py

1-1: Shebang is present but file is not executable

(EXE001)


21-21: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


21-21: typing.Dict is deprecated, use dict instead

(UP035)


21-21: typing.List is deprecated, use list instead

(UP035)


21-21: typing.Tuple is deprecated, use tuple instead

(UP035)

scripts/update_glb_metadata.py

1-1: Shebang is present but file is not executable

(EXE001)


9-9: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


9-9: typing.Dict is deprecated, use dict instead

(UP035)


9-9: typing.List is deprecated, use list instead

(UP035)


9-9: typing.Tuple is deprecated, use tuple instead

(UP035)


17-17: Avoid specifying long messages outside the exception class

(TRY003)


66-66: Avoid specifying long messages outside the exception class

(TRY003)


71-71: Avoid specifying long messages outside the exception class

(TRY003)


75-75: Avoid specifying long messages outside the exception class

(TRY003)

scripts/glb_validate.py

1-1: Shebang is present but file is not executable

(EXE001)


12-12: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


12-12: typing.Dict is deprecated, use dict instead

(UP035)


12-12: typing.List is deprecated, use list instead

(UP035)


12-12: typing.Tuple is deprecated, use tuple instead

(UP035)


36-36: Avoid specifying long messages outside the exception class

(TRY003)


48-48: Avoid specifying long messages outside the exception class

(TRY003)


100-100: Boolean-typed positional argument in function definition

(FBT001)

🔇 Additional comments (4)
frontend/package.json (1)

43-46: gltfpack flags compatible with v0.25.0
The flags -cc, -tc, -ktx2, -si, -sa, -kn, -km, and -ke are supported in gltfpack v0.25.x per official docs.

scripts/update_glb_metadata.py (1)

88-92: Remove duplication of extras into nodes[0] and update validator
Copying extras to nodes[0] duplicates metadata on unrelated nodes. Ensure the validator reads LOD metadata from the scene’s root node, then remove:

-    if nodes and root_index != 0:
-        first_node = nodes[0]
-        if not isinstance(first_node.get("extras"), dict):
-            first_node["extras"] = extras

Confirm the validator change before merging.

frontend/public/models/manifest.json (1)

1-34: LGTM! Clean manifest structure.

The manifest follows a clear and consistent structure with all necessary metadata (file path, SHA256 hash, byte size) for the six GLB models and their LOD1 variants. The naming convention with @lod1 suffix effectively distinguishes detail levels.

scripts/gen_glb_manifest.py (1)

32-33: LGTM! Standard entry point pattern.

The use of raise SystemExit(main()) is the correct pattern for executable Python scripts, ensuring proper exit code propagation.
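The generator's overall pattern, hash and size each GLB and emit JSON, can be sketched as follows (the function name, the sort-for-determinism choice, and the output shape are assumptions, not necessarily the script's exact internals):

```python
import hashlib
from pathlib import Path


def manifest_entries(models_dir: Path) -> list[dict]:
    """Hash and size each .glb under models_dir, sorted for deterministic output."""
    entries = []
    for glb in sorted(models_dir.glob("*.glb")):
        data = glb.read_bytes()
        entries.append({
            "file": "/models/" + glb.name,
            "sha256": hashlib.sha256(data).hexdigest(),
            "bytes": len(data),
        })
    return entries
```

Piping the JSON-serialized result to stdout matches the `assets:manifest` script's `> public/models/manifest.json` redirection.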

Comment on lines +659 to +663
"security": [
{
"APIKeyHeader": []
}
]

⚠️ Potential issue | 🟠 Major

Correct the Hygraph pull security scheme.

The operation description calls for a Bearer token, yet the security scheme is modeled as an apiKey header. This mismatch breaks client generation (Bearers expect the http scheme with bearer). Please change the security scheme to type: http and scheme: bearer, and keep the operation-level reference aligned.

Also applies to: 1430-1434
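As a sketch of the suggested correction applied to a loaded spec dict (the helper name and the in-place rewrite strategy are illustrative; the scheme name is kept so operation-level references stay valid):

```python
def fix_bearer_scheme(spec: dict, name: str = "APIKeyHeader") -> dict:
    """Rewrite an apiKey-in-header security scheme as HTTP Bearer, in place."""
    schemes = spec.setdefault("components", {}).setdefault("securitySchemes", {})
    # type: http + scheme: bearer is what client generators expect for Bearer tokens.
    schemes[name] = {"type": "http", "scheme": "bearer"}
    return spec
```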

🤖 Prompt for AI Agents
In docs/API_SPEC.md around lines 659-663 (and likewise update lines 1430-1434),
the OpenAPI security scheme for the Hygraph pull is defined as an apiKey in a
header while operations expect a Bearer token; change the security scheme
definition to type: http and scheme: bearer (keeping any description/name fields
as needed) and ensure the operation-level security references still point to
that scheme name so clients generate an HTTP Bearer auth header instead of an
apiKey header.

Comment on lines +713 to +726
"get": {
"summary": "Metrics",
"operationId": "metrics_metrics_get",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {}
}
}
}
}
}

⚠️ Potential issue | 🟠 Major

Fix metrics response media type.

The /metrics operation documents application/json, but Prometheus-style metrics are emitted as text/plain; version=0.0.4. The current description will cause autogenerated clients to reject the actual payload. Please switch the documented media type (and schema) to plain text.
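The same kind of in-place correction for the /metrics response might look like this sketch (helper name and path lookups are illustrative):

```python
def fix_metrics_media_type(spec: dict) -> dict:
    """Document /metrics as Prometheus text exposition instead of JSON."""
    ok = spec["paths"]["/metrics"]["get"]["responses"]["200"]
    # Prometheus clients expect text/plain; version=0.0.4 with a string body.
    ok["content"] = {"text/plain; version=0.0.4": {"schema": {"type": "string"}}}
    return spec
```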

🤖 Prompt for AI Agents
In docs/API_SPEC.md around lines 713 to 726, the /metrics GET response is
incorrectly documented as "application/json"; change the response media type to
"text/plain; version=0.0.4" and replace the empty JSON schema with a plain text
schema (e.g., schema: type: string) so the OpenAPI spec matches Prometheus-style
metrics output; ensure the content block uses the exact media type string and a
simple string schema.

Comment on lines +11 to +16
"assets:gen": "python ../scripts/generate_reference_glbs.py",
"assets:pack": "bash ../scripts/pack_models.sh",
"assets:validate": "python ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
"assets:manifest": "python ../scripts/gen_glb_manifest.py > public/models/manifest.json",
"assets:all": "npm run assets:gen && npm run assets:pack && npm run assets:validate && npm run assets:manifest",
"test:manifest": "vitest run --reporter=dot"

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Standardize Python invocation; align with pack script.

Use python3 consistently to avoid env mismatches (some systems have no python). Example:

-    "assets:gen": "python ../scripts/generate_reference_glbs.py",
+    "assets:gen": "python3 ../scripts/generate_reference_glbs.py",
-    "assets:validate": "python ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
+    "assets:validate": "python3 ../scripts/glb_validate.py public/models/*.glb --fail-on-warning",
-    "assets:manifest": "python ../scripts/gen_glb_manifest.py > public/models/manifest.json",
+    "assets:manifest": "python3 ../scripts/gen_glb_manifest.py > public/models/manifest.json",

Note: > public/models/manifest.json and the *.glb glob rely on a POSIX shell; if Windows support is needed, consider a node wrapper script.


🏁 Script executed:

#!/bin/bash
echo "=== frontend/package.json python invocations ==="
rg -n '"python ' -C2 frontend/package.json || echo "none"

echo "=== frontend/package.json python3 invocations ==="
rg -n '"python3' -C2 frontend/package.json || echo "none"

echo "=== Global python invocations ==="
rg -n '"python ' -C2 || echo "none"

echo "=== Global python3 invocations ==="
rg -n '"python3' -C2 || echo "none"

echo "=== Shebang lines in scripts/ ==="
find scripts -maxdepth 1 -type f -name "*.py" | xargs -r head -n1

Length of output: 1752


Use python3 in package.json scripts
Replace python with python3 in assets:gen, assets:validate, and assets:manifest to match the scripts’ shebangs. These commands use POSIX globs and redirection; for Windows support, consider a Node-based wrapper.

🤖 Prompt for AI Agents
In frontend/package.json around lines 11 to 16, three scripts use the plain
`python` executable which is inconsistent with the scripts' shebangs and can
fail on systems where Python 3 is `python3`; update the scripts `assets:gen`,
`assets:validate`, and `assets:manifest` to call `python3` instead of `python`.
Ensure you only change the command name (not arguments), run the updated scripts
locally to verify they work, and consider adding a short comment or README note
if Windows users need a Node-based wrapper for glob/redirection behavior.

@@ -0,0 +1,33 @@
#!/usr/bin/env python3

⚠️ Potential issue | 🟡 Minor

Fix the executable permission.

The shebang is present but the file is not marked as executable. This prevents the script from being run directly.

Apply this fix:

chmod +x scripts/gen_glb_manifest.py

Based on static analysis hints.

🧰 Tools
🪛 Ruff (0.14.0)

1-1: Shebang is present but file is not executable

(EXE001)

🤖 Prompt for AI Agents
In scripts/gen_glb_manifest.py around line 1 the shebang is present but the file
is not executable; mark the script executable by setting the execute bit (e.g.,
run chmod +x scripts/gen_glb_manifest.py) and update the repository index (git
add scripts/gen_glb_manifest.py and commit) so the script can be run directly.

Comment on lines +15 to +21
def manifest_entry(path: Path) -> dict:
    data = path.read_bytes()
    return {
        "file": "/models/" + path.name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }

🧹 Nitpick | 🔵 Trivial

Add error handling for file operations.

The function reads the entire file into memory and computes its hash without handling potential I/O errors (e.g., permission denied, file not found during read).

Apply this diff to add basic error handling:

 def manifest_entry(path: Path) -> dict:
-    data = path.read_bytes()
+    try:
+        data = path.read_bytes()
+    except OSError as e:
+        raise RuntimeError(f"Failed to read {path}: {e}") from e
     return {
         "file": "/models/" + path.name,
         "sha256": hashlib.sha256(data).hexdigest(),
         "bytes": len(data),
     }
🤖 Prompt for AI Agents
In scripts/gen_glb_manifest.py around lines 15 to 21, manifest_entry reads the
whole file and computes a hash without handling I/O errors; wrap the file read
and hashing in a try/except that catches OSError (or Exception if broader) to
handle file-not-found/permission errors, and on failure either raise a new
exception with a clear contextual message (including the path and original
error) or return a sentinel (e.g., None) depending on callers; ensure the happy
path still returns the dict with "file", "sha256", and "bytes".

Comment on lines +216 to +223
        lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
        if lod1_extras.get("lod", {}).get("level") != 1:
            issues.append(
                Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
            )
    elif lod_level == 1 and "@lod1" not in path.name:
        issues.append(Issue("ERROR", "LOD", "LOD1 assets must use '@lod1' suffix in filename"))


⚠️ Potential issue | 🟠 Major

Check LOD1 metadata on the LOD1 root node, not nodes[0].

Aligns with earlier root handling and avoids reliance on node ordering.

-            lod1_extras = (lod1_doc.get("nodes") or [{}])[0].get("extras") or {}
-            if lod1_extras.get("lod", {}).get("level") != 1:
+            lod1_roots = gather_root_nodes(lod1_doc)
+            lod1_nodes = lod1_doc.get("nodes") or []
+            lod1_root = lod1_nodes[lod1_roots[0]] if lod1_roots else {}
+            lod1_extras = lod1_root.get("extras") or {}
+            if lod1_extras.get("lod", {}).get("level") != 1:
                 issues.append(
                     Issue("ERROR", "LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1")
                 )
🤖 Prompt for AI Agents
In scripts/glb_validate.py around lines 216 to 223, the LOD1 metadata check
currently inspects nodes[0].extras which relies on node ordering; instead read
the root object's extras by using lod1_doc.get("extras") or {} and check
.get("lod", {}).get("level") == 1, and if not append the same Issue("ERROR",
"LOD", f"LOD1 asset {lod1_path.name} must declare extras.lod.level == 1"); keep
the existing filename-suffix check for lod_level == 1 unchanged.

Comment on lines +30 to +38
HELP_HEADER="$($PACK_CMD -h 2>&1 || true)"
if [[ "$HELP_HEADER" == *"-ktx2"* ]]; then
  TEXTURE_ARGS=(-ktx2)
elif [[ "$HELP_HEADER" == *"-tc"* ]]; then
  TEXTURE_ARGS=(-tc)
else
  echo "ERROR: gltfpack binary lacks KTX2 texture compression support (-ktx2/-tc)." >&2
  exit 1
fi

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

Also detect support for -cc (meshopt) to avoid passing unknown flags.

Current check ensures only KTX2 support. Consider verifying -cc in help before using it, or gate it similarly.


🏁 Script executed:

#!/bin/bash
# Inspect gltfpack help for -cc to ensure support; print a warning if missing.
"$PACK_CMD" -h 2>&1 | rg -n -- "-cc" || echo "Warning: gltfpack lacks -cc; EXT_meshopt_compression may not be emitted."

Length of output: 72


Gate meshopt compression flag (-cc) support
After texture-arg detection, capture help and check for -cc:

HELP_HEADER="$($PACK_CMD -h 2>&1 || true)"
if [[ "$HELP_HEADER" == *"-cc"* ]]; then
  MESHOPT_ARGS=(-cc)
else
  echo "WARNING: gltfpack lacks meshopt compression support (-cc); EXT_meshopt_compression won’t be emitted." >&2
  MESHOPT_ARGS=()
fi
🤖 Prompt for AI Agents
In scripts/pack_models.sh around lines 30 to 38, after detecting texture args
from the gltfpack help output, also re-run or reuse the HELP_HEADER capture and
check for the presence of the meshopt compression flag "-cc"; if present set
MESHOPT_ARGS to (-cc), otherwise log a warning to stderr about missing meshopt
support and set MESHOPT_ARGS to an empty array; ensure HELP_HEADER is available
at that point (capture it if needed) and place this check immediately after the
texture-arg detection so MESHOPT_ARGS is exported/used later in the script.

Comment on lines +85 to +89
if [[ -f "$lod1_path" ]]; then
  "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
else
  "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
fi

⚠️ Potential issue | 🟠 Major

Avoid re-simplifying existing LOD1; it compounds on reruns.

If LOD1 exists, repack without -si/-sa; only simplify when deriving from LOD0.

-  if [[ -f "$lod1_path" ]]; then
-    "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
-  else
-    "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
-  fi
+  if [[ -f "$lod1_path" ]]; then
+    # Repack existing LOD1 to (re)apply compression/texture settings without further simplification.
+    "$PACK_CMD" -i "$lod1_path" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}"
+  else
+    # Derive LOD1 from LOD0 with simplification.
+    "$PACK_CMD" -i "$src" -o "$tmp_lod1" -kn -km -ke -cc "${TEXTURE_ARGS[@]}" "${SIMPLIFY_ARGS[@]}"
+  fi
🤖 Prompt for AI Agents
In scripts/pack_models.sh around lines 85 to 89, avoid re-applying
simplification flags when an existing LOD1 is being repacked: if LOD1 exists
call PACK_CMD with the same flags but omit the simplification args (the -si/-sa
entries contained in SIMPLIFY_ARGS) so only texture/cleanup flags are used; when
deriving LOD1 from LOD0 keep including SIMPLIFY_ARGS as before. Ensure the
branch that handles the existing LOD1 does not expand "${SIMPLIFY_ARGS[@]}"
(remove it) so simplification is only applied in the else path.

@@ -0,0 +1,113 @@
#!/usr/bin/env python3

🧹 Nitpick | 🔵 Trivial

Shebang vs execution mode and typing modernizations.

  • Either make the script executable or drop the shebang.
  • Prefer collections.abc and built-in generics over typing.*.
-#!/usr/bin/env python3
+# (run via python3)
-from typing import Dict, Iterable, List, Sequence, Tuple
+from collections.abc import Iterable, Sequence
+from typing import Dict, List, Tuple  # or use built-in: dict, list, tuple

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.14.0)

1-1: Shebang is present but file is not executable

(EXE001)

🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py at line 1, the file currently has a shebang
but may not be executable and uses typing.* imports; either remove the shebang
or make the file executable (chmod +x) to match intended execution, and
modernize type hints by replacing typing imports with collections.abc where
appropriate (e.g., Iterable, Mapping) and using built-in generics (list, dict,
tuple) instead of typing.List/Dict/Tuple; update import statements and all type
annotations accordingly.

Comment on lines +72 to +79
scene_idx = doc.get("scene", 0)
root_nodes = scenes[scene_idx].get("nodes") or []
if not root_nodes:
    raise ValueError("Scene has no root nodes")
nodes = doc.get("nodes") or []
root_index = root_nodes[0]
root = nodes[root_index]


⚠️ Potential issue | 🟡 Minor

Guard against invalid scene index.

Avoid IndexError if scene points outside scenes.

-    scene_idx = doc.get("scene", 0)
-    root_nodes = scenes[scene_idx].get("nodes") or []
+    scene_idx = doc.get("scene", 0)
+    if not (0 <= scene_idx < len(scenes)):
+        raise ValueError(f"Invalid scene index: {scene_idx}")
+    root_nodes = scenes[scene_idx].get("nodes") or []
🧰 Tools
🪛 Ruff (0.14.0)

75-75: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py around lines 72 to 79, the code assumes
scene_idx is a valid index into scenes which can raise IndexError; validate that
scenes exists and that 0 <= scene_idx < len(scenes) before using it, and if
invalid raise a clear ValueError (e.g., "Invalid scene index: {scene_idx}") or
handle it explicitly (choose a default scene or abort) so you never index out of
range when accessing scenes[scene_idx].

@shayancoin shayancoin merged commit 05c2eb3 into main Oct 15, 2025
2 of 6 checks passed