
Paform#7

Merged
shayancoin merged 4 commits into main from paform
Oct 15, 2025

Conversation


@shayancoin shayancoin commented Oct 15, 2025

PR Type

[Feature | Fix | Documentation | Other() ]

Short Description

...

Tests Added

...

Summary by CodeRabbit

  • New Features

    • Added Prometheus metrics endpoint.
    • Unified JSON error responses for validation errors.
    • Introduced Hygraph sync: secure webhook with deduplication and an authenticated pull endpoint.
    • Automated 3D asset pipeline with LOD packing and a public models manifest to enable robust caching.
  • Documentation

    • Replaced manual API docs with an OpenAPI specification.
  • Tests

    • Added performance, manifest coverage, cache behavior, and sync/validation tests.
  • Chores

    • New CI workflows to validate assets and run lightweight performance checks on pull requests.


coderabbitai bot commented Oct 15, 2025

Walkthrough

Introduces CI workflows for asset generation/validation and perf testing; adds backend sync endpoints with security, metrics, migrations, and startup behaviors; implements Hygraph service stubs; adds multiple asset-processing scripts; publishes a models manifest and frontend test tooling; and expands backend/frontend tests and OpenAPI spec generation.

Changes

Cohort / File(s) Summary of changes
CI Workflows
.github/workflows/assets.yml, .github/workflows/perf-light.yml
Adds PR workflows to generate/pack/validate GLBs and to run a light k6 perf test stack with seeding and artifact upload.
Backend App & Metrics
backend/api/main.py, backend/api/db.py, backend/api/metrics.py
Adds global validation error handler, SQLite startup table creation, /metrics endpoint, and Prometheus counters. Ensures models are imported for metadata registration.
Security & Validation
backend/api/security.py
Adds API write-token auth, Hygraph signature verification, request validation utilities, and constants/headers.
Models & Migration
backend/api/models.py, backend/alembic/versions/cfe1d8e4e001_add_sync_events.py
Introduces SyncEvent model with unique constraints and matching Alembic migration to create sync_events table.
Sync Routes
backend/api/routes_sync.py
Adds Hygraph webhook endpoint with deduping and background processing, admin pull endpoint with metrics, and structured error envelopes.
Hygraph Service
backend/services/hygraph_service.py
Adds placeholder pull methods (materials/modules/systems/all) and a dialect-aware JSON expression helper.
OpenAPI Utility
backend/scripts/generate_openapi.py
Adds script to print the app’s OpenAPI JSON.
Asset Tooling Scripts
scripts/generate_reference_glbs.py, scripts/glb_validate.py, scripts/update_glb_metadata.py, scripts/gen_glb_manifest.py, scripts/pack_models.sh
Adds GLB generation, validation, metadata update, manifest generation, and packing scripts with LOD handling and CLI interfaces.
Backend Tests
backend/tests/*
Adds tests for error envelopes, sync route metrics/dedup, manifest coverage, and DB schema fixture.
Frontend Manifest & Tooling
frontend/public/models/manifest.json, frontend/package.json, frontend/vitest.config.ts, frontend/playwright.config.ts
Adds models manifest; extends npm scripts and dev deps; adds Vitest and Playwright configs for unit/E2E.
Frontend Tests
frontend/tests/e2e/cache.spec.ts, frontend/tests/manifest.spec.ts
Adds tests for asset caching headers/ETag behavior and manifest LOD coverage.
Docs
docs/API_SPEC.md
Replaces narrative docs with an OpenAPI 3.1 JSON spec.
Perf Test
tests/perf/k6-quote-cnc.js
Adds k6 scenario covering material/module creation, quote, CNC export, and optional frontend checks.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant HG as Hygraph
  participant Client as Hygraph Webhook
  participant API as FastAPI /api/sync/hygraph
  participant Sec as Security Validator
  participant DB as DB (SyncEvent)
  participant Svc as HygraphService
  participant M as Prometheus Metrics

  Client->>API: POST /api/sync/hygraph (signed)
  API->>Sec: validate_hygraph_request(body, signature)
  Sec-->>API: ok / error
  API->>DB: upsert SyncEvent (dedupe by event_id/body_sha256)
  alt First delivery
    API-->>Client: 202 Accepted { ok: true }
    API--)Svc: Background pull_all()
    Svc->>M: inc sync_success/records
  else Duplicate
    API-->>Client: 200 OK { ok: true, dedup: true }
  end
sequenceDiagram
  autonumber
  participant Admin as Admin Client
  participant API as FastAPI /api/sync/hygraph/pull
  participant Sec as require_write_token
  participant Svc as HygraphService
  participant M as Prometheus Metrics

  Admin->>API: POST /api/sync/hygraph/pull { type, page_size } + Authorization
  API->>Sec: validate token
  Sec-->>API: ok / 401/403
  API->>Svc: pull_{type} or pull_all(page_size)
  Svc-->>API: { counts }
  API->>M: inc success/failure, upserted
  API-->>Admin: 200 { ok: true, data }

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • Feat/assets gate #4 — Touches the same asset pipeline scripts and frontend manifest/tooling; likely an earlier or parallel iteration of this asset automation and validation work.

Poem

I thump my paw on pipelines new,
GLBs packed tight, with hashes true.
Webhooks hop, dedupe on cue,
Metrics nibble bytes they view.
With tokens snug and tests in tow—
This warren’s ready. Ship we go! 🥕🐇

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Description Check — ⚠️ Warning. The PR description is only the template with placeholders; PR Type, Short Description, and Tests Added are all unfilled, so it does not describe the actual changes or testing performed. Resolution: fill in the description by selecting the PR Type, summarizing the changes, and listing the tests added.
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 17.74%, below the required 80.00% threshold. Resolution: run @coderabbitai generate docstrings to improve coverage.
  • Title Check — ❓ Inconclusive. The title “Paform” is too generic to convey the primary change or scope of the changeset at a glance. Resolution: retitle the PR to summarize the main change, e.g. “Add automated asset workflows and Hygraph sync endpoints”.
✨ Finishing touches
  • 📝 Docstrings were successfully generated.
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch paform

Comment @coderabbitai help to get the list of available commands and usage tips.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting

Comment on lines +90 to +95
async def _process(event_id_local: Optional[str], body_sha_local: str) -> None:
t0 = time.perf_counter()
try:
counts = await HygraphService.pull_all(db)
for t, c in counts.items():
sync_records_upserted_total.labels(t).inc(int(c or 0))


P1 Badge Avoid using request-scoped session inside background task

The webhook handler schedules _process to run after the response is returned, but _process reuses the db session obtained from get_db. FastAPI closes that session in the dependency cleanup once the handler exits, so when HygraphService.pull_all starts executing it will operate on a closed Session and raise InvalidRequestError as soon as it touches the database. Create a new session inside the background task (or delay closing until the task completes) instead of reusing the request-scoped one.
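A minimal sketch of the suggested fix, assuming a SQLAlchemy `sessionmaker` factory; the names `SessionLocal` and `process_event` are illustrative, not from the PR. The background task constructs its own session instead of borrowing the request-scoped one:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")       # stand-in for the app's engine
SessionLocal = sessionmaker(bind=engine)  # same factory the app would use

def process_event(body_sha: str) -> int:
    # Open a session owned by the task itself; the request-scoped session
    # from get_db is closed by FastAPI's dependency cleanup before this runs.
    with SessionLocal() as session:
        return session.execute(text("SELECT 1")).scalar_one()
```

The handler would then pass only plain values (event id, body hash) into the background task, never the `db` dependency itself.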


from api.routes_quote import router as quote_router
from api.routes_cnc import router as cnc_router
from api.routes_sync import router as sync_router
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest


P1 Badge Declare prometheus_client dependency for new /metrics endpoint

The API now imports CONTENT_TYPE_LATEST and generate_latest from prometheus_client, but the backend’s pyproject.toml does not list prometheus_client as a requirement. Deployments without the package installed will crash at import time with ModuleNotFoundError. Add the dependency to the backend project configuration.
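A sketch of the fix, assuming the backend uses a PEP 621 `pyproject.toml`; the version pin is illustrative:

```toml
[project]
dependencies = [
  # ...existing dependencies...
  "prometheus-client>=0.19",  # provides generate_latest and CONTENT_TYPE_LATEST
]
```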



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 102fc53 and cba636e.

⛔ Files ignored due to path filters (2)
  • backend/paform.db is excluded by !**/*.db
  • frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (28)
  • .github/workflows/assets.yml (1 hunks)
  • .github/workflows/perf-light.yml (1 hunks)
  • backend/alembic/versions/cfe1d8e4e001_add_sync_events.py (1 hunks)
  • backend/api/db.py (1 hunks)
  • backend/api/main.py (6 hunks)
  • backend/api/metrics.py (1 hunks)
  • backend/api/models.py (2 hunks)
  • backend/api/routes_sync.py (1 hunks)
  • backend/api/security.py (1 hunks)
  • backend/scripts/generate_openapi.py (1 hunks)
  • backend/services/hygraph_service.py (1 hunks)
  • backend/tests/conftest.py (1 hunks)
  • backend/tests/test_error_envelopes.py (1 hunks)
  • backend/tests/test_manifest_coverage_backend.py (1 hunks)
  • backend/tests/test_sync_routes_metrics.py (1 hunks)
  • docs/API_SPEC.md (1 hunks)
  • frontend/package.json (2 hunks)
  • frontend/playwright.config.ts (1 hunks)
  • frontend/public/models/manifest.json (1 hunks)
  • frontend/tests/e2e/cache.spec.ts (1 hunks)
  • frontend/tests/manifest.spec.ts (1 hunks)
  • frontend/vitest.config.ts (1 hunks)
  • scripts/gen_glb_manifest.py (1 hunks)
  • scripts/generate_reference_glbs.py (1 hunks)
  • scripts/glb_validate.py (1 hunks)
  • scripts/pack_models.sh (1 hunks)
  • scripts/update_glb_metadata.py (1 hunks)
  • tests/perf/k6-quote-cnc.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (13)
backend/tests/test_error_envelopes.py (2)
backend/tests/test_sync_routes_metrics.py (1)
  • client (19-21)
backend/tests/test_routes_quote_cnc.py (1)
  • test_quote_and_cnc_endpoints (38-63)
scripts/gen_glb_manifest.py (2)
scripts/generate_reference_glbs.py (1)
  • main (253-268)
tests/perf/k6-quote-cnc.js (1)
  • manifest (106-106)
tests/perf/k6-quote-cnc.js (1)
backend/tests/test_routes_quote_cnc.py (1)
  • test_quote_and_cnc_endpoints (38-63)
backend/tests/test_manifest_coverage_backend.py (2)
backend/api/db.py (2)
  • Base (13-14)
  • get_db (53-59)
backend/api/models.py (1)
  • Module (65-110)
backend/api/routes_sync.py (5)
backend/api/db.py (1)
  • get_db (53-59)
backend/api/models.py (1)
  • SyncEvent (118-133)
backend/api/security.py (2)
  • require_write_token (24-36)
  • validate_hygraph_request (91-108)
backend/api/main.py (1)
  • metrics (105-107)
backend/services/hygraph_service.py (5)
  • HygraphService (11-80)
  • pull_all (62-64)
  • pull_materials (47-49)
  • pull_modules (52-54)
  • pull_systems (57-59)
frontend/tests/e2e/cache.spec.ts (1)
tests/perf/k6-quote-cnc.js (3)
  • manifestResponse (103-103)
  • etag (116-116)
  • cached (118-118)
backend/tests/conftest.py (1)
backend/api/db.py (1)
  • Base (13-14)
backend/api/security.py (1)
backend/api/config.py (1)
  • Settings (9-45)
scripts/glb_validate.py (2)
scripts/update_glb_metadata.py (1)
  • main (101-109)
scripts/gen_glb_manifest.py (1)
  • main (24-29)
backend/api/main.py (2)
backend/api/db.py (1)
  • Base (13-14)
backend/api/config.py (1)
  • Settings (9-45)
scripts/update_glb_metadata.py (1)
scripts/glb_validate.py (1)
  • triangle_count (83-97)
backend/tests/test_sync_routes_metrics.py (1)
backend/services/hygraph_service.py (1)
  • HygraphService (11-80)
backend/api/models.py (1)
backend/api/db.py (1)
  • Base (13-14)
🪛 actionlint (1.7.8)
.github/workflows/perf-light.yml

41-41: could not parse as YAML: could not find expected ':'

(syntax-check)

🪛 Ruff (0.14.0)
scripts/gen_glb_manifest.py

1-1: Shebang is present but file is not executable

(EXE001)

backend/api/routes_sync.py

86-86: Local variable payload is assigned to but never used

Remove assignment to unused variable payload

(F841)

scripts/glb_validate.py

1-1: Shebang is present but file is not executable

(EXE001)


12-12: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


12-12: typing.Dict is deprecated, use dict instead

(UP035)


12-12: typing.List is deprecated, use list instead

(UP035)


12-12: typing.Tuple is deprecated, use tuple instead

(UP035)


36-36: Avoid specifying long messages outside the exception class

(TRY003)


48-48: Avoid specifying long messages outside the exception class

(TRY003)


100-100: Boolean-typed positional argument in function definition

(FBT001)

backend/api/main.py

36-36: Module level import not at top of file

(E402)


37-37: Module level import not at top of file

(E402)

scripts/update_glb_metadata.py

1-1: Shebang is present but file is not executable

(EXE001)


9-9: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


9-9: typing.Dict is deprecated, use dict instead

(UP035)


9-9: typing.List is deprecated, use list instead

(UP035)


9-9: typing.Tuple is deprecated, use tuple instead

(UP035)


17-17: Avoid specifying long messages outside the exception class

(TRY003)


66-66: Avoid specifying long messages outside the exception class

(TRY003)


71-71: Avoid specifying long messages outside the exception class

(TRY003)


75-75: Avoid specifying long messages outside the exception class

(TRY003)

scripts/generate_reference_glbs.py

1-1: Shebang is present but file is not executable

(EXE001)


21-21: Import from collections.abc instead: Iterable, Sequence

Import from collections.abc

(UP035)


21-21: typing.Dict is deprecated, use dict instead

(UP035)


21-21: typing.List is deprecated, use list instead

(UP035)


21-21: typing.Tuple is deprecated, use tuple instead

(UP035)

🪛 YAMLlint (1.37.1)
.github/workflows/perf-light.yml

[warning] 3-3: truthy value should be one of [false, true]

(truthy)


[error] 42-42: syntax error: could not find expected ':'

(syntax)

🔇 Additional comments (12)
backend/tests/test_error_envelopes.py (3)

11-17: LGTM!

The error shape validation correctly checks for:

  • ok field set to False
  • error object with code and message fields

This aligns with the unified error envelope pattern mentioned in the PR context.
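Based on the fields these tests assert, the unified envelope presumably has this shape (the `code` and `message` values are illustrative):

```json
{
  "ok": false,
  "error": {
    "code": "validation_error",
    "message": "body: field required"
  }
}
```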


20-27: LGTM! Test correctly validates error envelope for invalid JSON.

The test appropriately:

  • Sends malformed JSON ('not-json') with JSON content-type header
  • Accepts both 400 and 422 status codes (common for JSON parsing/validation errors)
  • Verifies the error response conforms to the expected envelope structure

30-37: LGTM! Test correctly validates error envelope for CNC export.

The test structure mirrors the quote test and correctly validates the error envelope for the /api/cnc/export endpoint.

backend/scripts/generate_openapi.py (1)

6-9: LGTM! Script correctly generates and outputs OpenAPI spec.

The implementation is straightforward and appropriate for a utility script:

  • Uses FastAPI's built-in openapi() method to generate the spec
  • Pretty-prints JSON with indent for readability
  • Suitable for CI/CD pipelines or manual inspection
backend/tests/conftest.py (1)

11-15: LGTM! Fixture ensures DB schema is ready for tests.

The autouse fixture correctly:

  • Registers SQLAlchemy models via import
  • Creates all tables before each test
  • Uses function scope to handle cases where tests might drop/remove tables

This supports the new database-backed models and migrations introduced in the PR.

backend/alembic/versions/cfe1d8e4e001_add_sync_events.py (2)

24-26: Verify the intended uniqueness constraints for body_sha256.

Three unique constraints are defined:

  1. (source, event_id) - ensures unique events per source
  2. (source, body_sha256) - ensures unique body hashes per source
  3. (body_sha256) - ensures global uniqueness of body hashes

The standalone body_sha256 constraint (line 26) makes constraint #2 redundant, since a globally unique body_sha256 is already more restrictive than uniqueness within a source.

Consider whether global body_sha256 uniqueness is the intended behavior:

  • If events from different sources can have identical content (unlikely but possible), remove line 26
  • If duplicate content should never exist regardless of source, remove line 25

Review the intended deduplication semantics with the team to clarify whether:

  1. Duplicate event content should be prevented globally (keep line 26 only)
  2. Duplicate event content should be allowed across different sources (keep lines 24-25 only)
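For illustration, option 2 (dedupe scoped per source) would keep only the composite constraints. This sketch uses hypothetical constraint names and a trimmed column set, not the migration as committed:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
sync_events = sa.Table(
    "sync_events",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("source", sa.String, nullable=False),
    sa.Column("event_id", sa.String),
    sa.Column("body_sha256", sa.String, nullable=False),
    sa.UniqueConstraint("source", "event_id", name="uq_sync_source_event"),
    sa.UniqueConstraint("source", "body_sha256", name="uq_sync_source_body"),
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(sync_events.insert(),
                 {"source": "hygraph", "event_id": "e1", "body_sha256": "abc"})
    # Identical body from a different source is allowed under this option.
    conn.execute(sync_events.insert(),
                 {"source": "other", "event_id": "e1", "body_sha256": "abc"})
```

Under option 1 (global dedupe) the second insert would instead raise an IntegrityError.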

30-31: LGTM! Downgrade path correctly drops the table.

The downgrade function properly reverses the migration by dropping the sync_events table.

backend/api/metrics.py (1)

5-21: LGTM! Metrics follow Prometheus best practices.

The counter definitions correctly:

  • Use descriptive names with _total suffix for counters
  • Include clear documentation strings
  • Use labelnames for the "type" dimension to track different sync operations
  • Follow standard Prometheus naming conventions
scripts/gen_glb_manifest.py (2)

15-21: LGTM! Manifest entry generation is correct.

The function correctly:

  • Reads file content as bytes
  • Computes SHA-256 hash
  • Returns a well-structured manifest entry with file path, hash, and size

24-29: LGTM! Main function generates deterministic manifest.

The implementation:

  • Uses sorted() to ensure deterministic output (important for version control)
  • Aggregates all GLB files matching the pattern
  • Outputs pretty-printed JSON for readability
  • Returns proper exit code
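A stdlib-only sketch of the pattern described above; the `models` key and leading-slash file paths follow the manifest shape referenced elsewhere in this review and are assumptions, not the script as committed:

```python
import hashlib
import json
import tempfile
from pathlib import Path

def manifest_entry(path: Path, root: Path) -> dict:
    data = path.read_bytes()
    return {
        "file": "/" + path.relative_to(root).as_posix(),
        "sha256": hashlib.sha256(data).hexdigest(),
        "bytes": len(data),
    }

def build_manifest(root: Path) -> dict:
    # sorted() keeps the output deterministic, which matters for version control.
    return {"models": [manifest_entry(p, root) for p in sorted(root.glob("**/*.glb"))]}

root = Path(tempfile.mkdtemp())
(root / "base_600.glb").write_bytes(b"glTF")
print(json.dumps(build_manifest(root), indent=2))
```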
.github/workflows/assets.yml (2)

34-37: LGTM! GLB validation step correctly fails on warnings.

The --fail-on-warning flag ensures strict validation, which is appropriate for asset quality control in CI.


44-49: LGTM! Manifest upload configuration is appropriate.

The artifact upload:

  • Uses if: always() to capture the manifest even if validation fails (useful for debugging)
  • Uploads to a clearly named artifact for easy retrieval

Comment on lines +24 to +27
- name: Generate reference GLBs
run: |
cd paform
python3 scripts/generate_reference_glbs.py

🧹 Nitpick | 🔵 Trivial

Consider pinning the Python version.

The workflow uses python3 without specifying a version, which relies on the Ubuntu runner's default Python installation (currently 3.12). For reproducibility and to prevent unexpected breakage, consider using actions/setup-python to pin a specific Python version.

Add a Python setup step before line 24:

+      - name: Setup Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.12'
+
       - name: Generate reference GLBs
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Generate reference GLBs
run: |
cd paform
python3 scripts/generate_reference_glbs.py
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Generate reference GLBs
run: |
cd paform
python3 scripts/generate_reference_glbs.py
🤖 Prompt for AI Agents
In .github/workflows/assets.yml around lines 24 to 27, the job runs python3
without pinning a version; add an actions/setup-python step immediately before
the "Generate reference GLBs" step to install and pin a specific Python version
(e.g., 3.11) using actions/setup-python@v4 and set python-version to the chosen
version so the script runs on a stable, reproducible interpreter; keep the
existing run step but ensure the runner uses the pinned Python (you can keep
invoking python3 or python after setup).

Comment on lines +23 to +24
cd paform
docker compose --env-file .env.development -f docker-compose.dev.yml up -d --build

⚠️ Potential issue | 🔴 Critical

Remove the failing cd paform hops

After checkout the runner is already inside the repository root (~/work/paform/paform). There is no nested paform/ directory, so cd paform exits with cd: paform: No such file or directory, and because steps run with set -e, the workflow aborts before Docker Compose even runs. Drop the cd paform here (and in the later steps that repeat it) or point to the actual path that contains docker-compose.dev.yml.

🤖 Prompt for AI Agents
In .github/workflows/perf-light.yml around lines 23-24, the step runs `cd
paform` but the runner is already at the repository root
(`~/work/paform/paform`) so the extra `cd paform` fails; remove the `cd paform`
command (and any other duplicate `cd paform` occurrences in subsequent steps) or
update those steps to use the correct path that actually contains
docker-compose.dev.yml so the docker compose command runs from the existing
working directory.

Comment on lines +19 to +21
def get_settings() -> Settings:
# Instantiate settings outside of Pydantic request validation to avoid DI issues.
return Settings()

🧹 Nitpick | 🔵 Trivial

Cache Settings() construction

BaseSettings hits environment parsing on every instantiation. Wrapping the provider in @lru_cache keeps this dependency lightweight under load.

+from functools import lru_cache
+
+
+@lru_cache
 def get_settings() -> Settings:
-    # Instantiate settings outside of Pydantic request validation to avoid DI issues.
-    return Settings()
+    return Settings()

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backend/api/security.py around lines 19 to 21, the Settings() factory is
constructed on every call which triggers BaseSettings environment parsing; wrap
the provider with functools.lru_cache to cache a single Settings instance under
load. Add the import "from functools import lru_cache" and decorate the
get_settings function with @lru_cache() so it returns a cached Settings object
while keeping the signature get_settings() -> Settings unchanged.

Comment on lines +33 to +35
expected = getattr(settings, "api_write_token", None) or os.getenv("API_WRITE_TOKEN", "")
if not expected or not secrets.compare_digest(token, expected):
raise HTTPException(status_code=HTTP_403_FORBIDDEN, detail="Invalid API write token.")

⚠️ Potential issue | 🟠 Major

Add typed settings fields so tokens/secrets actually load

Settings (see api/config.py lines 8-44) doesn’t define api_write_token, hygraph_webhook_secret, or max_webhook_body_bytes, so getattr(settings, …) always falls back. Because Pydantic doesn’t export .env values into os.environ, the fallback os.getenv(...) stays empty when you rely on the .env file, which means every write-token or Hygraph webhook check fails with 403/401 even when the secrets are configured. Please add these fields to Settings (or otherwise preload the environment) and reference them directly here instead of the os.getenv fallback to keep local/dev parity.

Suggested changes:

-    expected = getattr(settings, "api_write_token", None) or os.getenv("API_WRITE_TOKEN", "")
+    expected = settings.api_write_token
-    secret = getattr(settings, "hygraph_webhook_secret", None) or os.getenv("HYGRAPH_WEBHOOK_SECRET", "")
-    if not verify_hygraph_signature(body, signature or "", secret):
+    secret = settings.hygraph_webhook_secret
+    if not secret or not verify_hygraph_signature(body, signature or "", secret):

And in api/config.py:

     hygraph_token: str = Field(
         default="", description="Hygraph access token"
     )
+    api_write_token: str = Field(
+        default="", description="Bearer token required for admin write operations"
+    )
+    hygraph_webhook_secret: str = Field(
+        default="", description="Shared secret used to validate Hygraph webhook signatures"
+    )
+    max_webhook_body_bytes: int = Field(
+        default=2 * 1024 * 1024,
+        description="Maximum accepted webhook payload size in bytes",
+    )

Also applies to: 103-105

🤖 Prompt for AI Agents
In backend/api/security.py around lines 33-35 (and likewise around 103-105), the
code falls back to os.getenv because Settings does not declare api_write_token,
hygraph_webhook_secret, or max_webhook_body_bytes, causing tokens from the .env
to never be loaded; update api/config.py to add typed Pydantic fields for
api_write_token: Optional[str] = None, hygraph_webhook_secret: Optional[str] =
None, and max_webhook_body_bytes: int = <sensible_default>, then update
security.py to read these settings directly (e.g., token_expected =
settings.api_write_token) and remove the os.getenv fallback and ensure
secrets.compare_digest is only called when a non-empty expected value exists;
apply the same pattern for hygraph_webhook_secret and max_webhook_body_bytes at
the other referenced lines.

@@ -0,0 +1,9 @@
#!/usr/bin/env python3

🧹 Nitpick | 🔵 Trivial

Make the script executable.

The shebang is present but the file is not marked as executable.

Run the following command to make the script executable:

chmod +x backend/scripts/generate_openapi.py

Based on static analysis hints.

🤖 Prompt for AI Agents
In backend/scripts/generate_openapi.py at line 1, the script has a shebang but
the file is not executable; mark the file as executable by running chmod +x
backend/scripts/generate_openapi.py (or set executable bit in your VCS/CI) so
the script can be invoked directly from the shell.

Comment on lines +48 to +59
with TestingSession() as s:
m = Module(
name="Base 600",
width=600.0,
height=720.0,
depth=580.0,
base_price=100.0,
material_id="", # not used here
external_ids={"hygraph_primary": "H123", "glb_code": "base_600"},
)
s.add(m)
s.commit()

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify empty material_id doesn't violate foreign key constraints.

Line 55 sets material_id="" (empty string), which may violate the foreign key constraint to the materials table if enforced. Since this is an in-memory test DB, the constraint might not be enforced, but it could fail in production scenarios.

Run the following script to check if the Module model has a foreign key constraint on material_id:


🏁 Script executed:

#!/bin/bash
# Check Module model definition for material_id foreign key
rg -nP -A5 'material_id.*=.*mapped_column' backend/api/models.py

Length of output: 267


Use a valid material_id instead of an empty string
material_id is a non-nullable FK to materials.id. Setting it to "" will break referential integrity outside SQLite tests. In this test, create and commit a Material record and use its id here.
backend/tests/test_manifest_coverage_backend.py:55

🤖 Prompt for AI Agents
In backend/tests/test_manifest_coverage_backend.py around lines 48 to 59, the
test creates a Module with material_id="" which violates the non-nullable FK to
materials.id; fix by creating and committing a Material record first (using the
same TestingSession), retrieve its id, and set material_id to that id when
constructing the Module so the FK references a real material and maintains
referential integrity outside SQLite.

Comment on lines +61 to +64
manifest = json.load(open(MANIFEST, "r"))
files = set(e["file"] for e in manifest.get("models", []))
assert "/models/base_600.glb" in files
assert "/models/base_600@lod1.glb" in files

🛠️ Refactor suggestion | 🟠 Major

Use context manager for file operations.

The file is opened without a context manager, which could lead to resource leaks if an exception occurs.

Apply this diff:

-    manifest = json.load(open(MANIFEST, "r"))
+    with open(MANIFEST, "r") as f:
+        manifest = json.load(f)
     files = set(e["file"] for e in manifest.get("models", []))

Suggested change
manifest = json.load(open(MANIFEST, "r"))
files = set(e["file"] for e in manifest.get("models", []))
assert "/models/base_600.glb" in files
assert "/models/base_600@lod1.glb" in files
with open(MANIFEST, "r") as f:
manifest = json.load(f)
files = set(e["file"] for e in manifest.get("models", []))
assert "/models/base_600.glb" in files
assert "/models/base_600@lod1.glb" in files
🤖 Prompt for AI Agents
In backend/tests/test_manifest_coverage_backend.py around lines 61 to 64, the
manifest file is opened without a context manager; replace the direct open(...)
call with a with open(MANIFEST, "r") as f: block and pass f into json.load so
the file is automatically closed (e.g., with open(MANIFEST, "r") as f: manifest
= json.load(f)); keep the rest of the logic unchanged.

@@ -0,0 +1,33 @@
#!/usr/bin/env python3

🧹 Nitpick | 🔵 Trivial

Make the script executable.

The shebang is present but the file is not marked as executable.

Run the following command to make the script executable:

chmod +x scripts/gen_glb_manifest.py

Based on static analysis hints.

🧰 Tools
🪛 Ruff (0.14.0)

1-1: Shebang is present but file is not executable

(EXE001)

🤖 Prompt for AI Agents
scripts/gen_glb_manifest.py lines 1-1: the file has a shebang but is not
executable; mark the script executable by running chmod +x
scripts/gen_glb_manifest.py (then commit the permission change, e.g., git add
scripts/gen_glb_manifest.py && git commit -m "Make gen_glb_manifest.py
executable").

Comment on lines +86 to +93
        for v_idx in range(subdivisions):
            for u_idx in range(subdivisions):
                top_left = start_idx + v_idx * stride + u_idx
                top_right = top_left + 1
                bottom_left = top_left + stride
                bottom_right = bottom_left + 1
                indices.extend([top_left, bottom_left, bottom_right])
                indices.extend([top_left, bottom_right, top_right])

⚠️ Potential issue | 🔴 Critical

Fix inside-out winding for generated quads.

The current index order (top_left, bottom_left, bottom_right) flips every triangle inside-out. Engines derive normals from this winding (and you’re not exporting explicit normals), so the boxes render black or get back-face culled because the normals point inward. Reorder the triangles so the cross product matches u_vec × v_vec and keeps outward normals.

-                indices.extend([top_left, bottom_left, bottom_right])
-                indices.extend([top_left, bottom_right, top_right])
+                indices.extend([top_left, top_right, bottom_left])
+                indices.extend([top_right, bottom_right, bottom_left])

Suggested change
         for v_idx in range(subdivisions):
             for u_idx in range(subdivisions):
                 top_left = start_idx + v_idx * stride + u_idx
                 top_right = top_left + 1
                 bottom_left = top_left + stride
                 bottom_right = bottom_left + 1
-                indices.extend([top_left, bottom_left, bottom_right])
-                indices.extend([top_left, bottom_right, top_right])
+                indices.extend([top_left, top_right, bottom_left])
+                indices.extend([top_right, bottom_right, bottom_left])
🤖 Prompt for AI Agents
In scripts/generate_reference_glbs.py around lines 86–93 the triangle index
order produces inside-out winding; change the triangle vertex order so the cross
product follows u_vec × v_vec to keep normals outward. Replace the two extends
so the first triangle is (top_left, bottom_right, bottom_left) and the second is
(top_left, top_right, bottom_right) (i.e., swap bottom_left/bottom_right in the
first triangle and swap top_right/bottom_right positions in the second) to fix
the winding.
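Both reorderings flip the winding, and the claim is easy to check numerically: the face normal of triangle (a, b, c) is (b − a) × (c − a). A sketch under an assumed grid convention (vertex(u_idx, v_idx) = origin + u_idx·u_vec + v_idx·v_vec, with "bottom" one stride step along v_vec; u_vec = (1,0,0), v_vec = (0,1,0), so the outward normal u_vec × v_vec is +Z):

```python
# Verify winding direction via the face-normal cross product.
def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c): (b - a) x (c - a)."""
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    )

# Assumed quad corners: top row at v_idx, bottom row one stride later.
top_left = (0.0, 0.0, 0.0)
top_right = (1.0, 0.0, 0.0)      # + u_vec
bottom_left = (0.0, 1.0, 0.0)    # + v_vec (one stride down)
bottom_right = (1.0, 1.0, 0.0)

# Original order: normal points along -Z (inside-out, back-face culled).
print(face_normal(top_left, bottom_left, bottom_right))  # (0.0, 0.0, -1.0)
# Suggested order: both triangles face +Z, matching u_vec x v_vec.
print(face_normal(top_left, top_right, bottom_left))     # (0.0, 0.0, 1.0)
print(face_normal(top_right, bottom_right, bottom_left)) # (0.0, 0.0, 1.0)
```

The same check, run over every primitive of a generated GLB, would make a cheap regression test for the asset pipeline.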

Comment on lines +43 to +58
def triangle_count(doc: Dict) -> int:
    total = 0
    accessors = doc.get("accessors") or []
    for mesh in doc.get("meshes") or []:
        for prim in mesh.get("primitives", []):
            idx = prim.get("indices")
            if idx is not None:
                accessor = accessors[idx]
                total += accessor.get("count", 0) // 3
            else:
                pos_idx = prim.get("attributes", {}).get("POSITION")
                if pos_idx is None:
                    continue
                accessor = accessors[pos_idx]
                total += accessor.get("count", 0) // 3
    return total

🧹 Nitpick | 🔵 Trivial

Deduplicate triangle_count logic

This function is copied verbatim from scripts/glb_validate.py. Please move it to a shared module (e.g., scripts/glb_utils.py) or import it directly to avoid future drift between validator and packer code paths.

🤖 Prompt for AI Agents
In scripts/update_glb_metadata.py around lines 43 to 58, the triangle_count
function is duplicated from scripts/glb_validate.py; extract it into a shared
module (e.g., scripts/glb_utils.py) that exports triangle_count, then replace
the local definition with an import from that module. Specifically, create
scripts/glb_utils.py containing the triangle_count implementation, update
scripts/update_glb_metadata.py to from scripts.glb_utils import triangle_count
and remove the duplicated function, and also update scripts/glb_validate.py to
import triangle_count from the new shared module instead of defining it locally
so both modules share the same implementation.
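A minimal sketch of the proposed shared module (the scripts/glb_utils.py name follows the reviewer's suggestion; the body mirrors the duplicated function above):

```python
# Hypothetical scripts/glb_utils.py: single home for triangle_count so
# glb_validate.py and update_glb_metadata.py cannot drift apart.
from typing import Dict


def triangle_count(doc: Dict) -> int:
    """Total triangles across all mesh primitives of a parsed glTF doc.

    Indexed primitives contribute their indices accessor's count // 3;
    non-indexed primitives fall back to the POSITION accessor's count // 3.
    """
    total = 0
    accessors = doc.get("accessors") or []
    for mesh in doc.get("meshes") or []:
        for prim in mesh.get("primitives", []):
            idx = prim.get("indices")
            if idx is not None:
                total += accessors[idx].get("count", 0) // 3
            else:
                pos_idx = prim.get("attributes", {}).get("POSITION")
                if pos_idx is None:
                    continue
                total += accessors[pos_idx].get("count", 0) // 3
    return total
```

Both consumer scripts would then replace their local copies with `from scripts.glb_utils import triangle_count`, assuming the repo root is on sys.path when they run (true for the CI invocations shown in the workflows).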

@coderabbitai

coderabbitai bot commented Oct 15, 2025

Note

Docstrings generation - SUCCESS
Generated docstrings for this pull request at #13

coderabbitai bot added a commit that referenced this pull request Oct 15, 2025
Docstrings generation was requested by @shayancoin.

* #7 (comment)

The following files were modified:

* `backend/alembic/versions/cfe1d8e4e001_add_sync_events.py`
* `backend/api/db.py`
* `backend/api/main.py`
* `backend/api/routes_sync.py`
* `backend/api/security.py`
* `backend/services/hygraph_service.py`
* `backend/tests/conftest.py`
* `backend/tests/test_error_envelopes.py`
* `backend/tests/test_manifest_coverage_backend.py`
* `backend/tests/test_sync_routes_metrics.py`
* `scripts/gen_glb_manifest.py`
* `scripts/generate_reference_glbs.py`
* `scripts/glb_validate.py`
* `scripts/update_glb_metadata.py`
* `tests/perf/k6-quote-cnc.js`
@shayancoin shayancoin merged commit a4b444f into main Oct 15, 2025
2 of 7 checks passed
shayancoin pushed a commit that referenced this pull request Oct 15, 2025
Docstrings generation was requested by @shayancoin.

* #7 (comment)

The following files were modified:

* `backend/alembic/versions/cfe1d8e4e001_add_sync_events.py`
* `backend/api/db.py`
* `backend/api/main.py`
* `backend/api/routes_sync.py`
* `backend/api/security.py`
* `backend/services/hygraph_service.py`
* `backend/tests/conftest.py`
* `backend/tests/test_error_envelopes.py`
* `backend/tests/test_manifest_coverage_backend.py`
* `backend/tests/test_sync_routes_metrics.py`
* `scripts/gen_glb_manifest.py`
* `scripts/generate_reference_glbs.py`
* `scripts/glb_validate.py`
* `scripts/update_glb_metadata.py`
* `tests/perf/k6-quote-cnc.js`

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>