
Conversation


@jiashengguo jiashengguo commented Jan 6, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced ZenStack Proxy CLI with configurable options for schema paths, port numbers, and database connections
    • Added express-based proxy server with API endpoints for model operations and schema metadata queries
    • Integrated telemetry system for usage tracking
  • Chores

    • Added comprehensive project documentation and TypeScript configuration
    • Configured automated dependency management and release publishing workflows


Copilot AI review requested due to automatic review settings January 6, 2026 04:33

coderabbitai bot commented Jan 6, 2026

📝 Walkthrough

Walkthrough

This pull request introduces a complete ZenStack Proxy CLI application package, including CLI entry point, Express-based server, environment detection utilities, telemetry integration, ZModel schema parsing, build scripts, GitHub Actions workflows, and project configuration files.

Changes

Cohort / File(s) Summary
CI/CD & Automation
.github/dependabot.yml, .github/workflows/build-test.yml, .github/workflows/publish-release.yml
Adds Dependabot configuration for weekly npm updates on the dev branch, plus two GitHub Actions workflows: one for automated testing on pull requests (ubuntu-latest with Node 20.x and pnpm caching) and one for publishing releases to npm and creating GitHub releases from main-branch commits.
Project Configuration
package.json, tsconfig.json, .gitignore, .prettierrc, .prettierignore
Defines package metadata, CLI binary entry, build/publish scripts, dependencies (Express, Prisma adapters, TypeScript tooling); TypeScript compiler settings with strict mode and ES2020 target; gitignore patterns for build artifacts, logs, environment files; Prettier formatting rules with JSON override and .github exclusion.
Documentation & Entry
README.md, bin/cli.js
Comprehensive README documenting ZenStack Proxy CLI installation, usage, options (path, port, schema, datasource URL override, log levels), examples, and API endpoints; executable entry point script that invokes the default CLI export.
CLI Core
src/index.ts, src/cli-error.ts, src/constants.ts
Main CLI command builder using Commander with options for zenstack path, port, ZModel schema, datasource URL override, and log levels; schema validation and resolution; server invocation with telemetry tracking and error handling; dedicated error type; telemetry token placeholder (replaced at build time).
Server Implementation
src/server.ts
Express-based server with PrismaClient initialization, dynamic database adapter instantiation (SQLite, PostgreSQL, MySQL via Prisma adapters), ZenStack model-meta and enums loading; middleware setup (JSON, CORS, body limits); model operations endpoint at /api/model; schema metadata endpoint at /api/schema; graceful shutdown on SIGTERM/SIGINT.
Telemetry System
src/telemetry.ts
Mixpanel-based telemetry with conditional initialization via DO_NOT_TRACK and token; captures hostId, sessionId, OS details, Node/Prisma versions, container/CI context; exposes isTracking getter, track(), trackError(), trackSpan() for duration measurement, and trackCli() wrapper; singleton instance for cross-module access.
Utility Modules
src/utils/exec-utils.ts, src/utils/is-ci.ts, src/utils/is-container.ts, src/utils/is-docker.ts, src/utils/is-wsl.ts, src/utils/machine-id-utils.ts, src/utils/version-utils.ts
Command execution wrappers for child_process.execSync and package managers (bunx/npx); environment detection helpers for CI, container (Podman/Docker), WSL; platform-specific machine ID computation via SHA-256 hashing with UUID fallback; version retrieval for Node, Prisma, ZenStack with graceful degradation.
Schema Parsing & Build
src/zmodel-parser.ts, scripts/post-build.ts
ZModel schema parser extracting datasource, generator, Prisma schema path with support for multiple URL sources (explicit override, inline, environment variable, prisma.config.ts); comment removal and validation; post-build script that injects telemetry token into compiled constants.js.

Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as CLI<br/>(bin/cli.js)
    participant Parser as Schema<br/>Parser
    participant Server as Express<br/>Server
    participant DB as Database<br/>(via Adapter)
    participant Telemetry as Telemetry<br/>Tracker
    
    User->>CLI: Run zenstack-proxy command
    CLI->>CLI: Initialize program<br/>(Commander)
    CLI->>CLI: Load .env via dotenv
    
    rect rgb(240, 248, 255)
        note over CLI,Telemetry: Telemetry tracking span
        CLI->>Telemetry: Start: 'proxy:start'
        Telemetry->>Telemetry: Capture environment,<br/>hostId, sessionId
    end
    
    CLI->>Parser: parseZModelSchema(path)
    Parser->>Parser: Validate file exists
    Parser->>Parser: Extract datasource &<br/>generator config
    Parser-->>CLI: Return ZModelConfig
    
    alt Schema validation failed
        Parser-->>CLI: Throw error
        CLI->>Telemetry: trackError()
        CLI->>User: Print error message
        CLI->>Telemetry: Complete: 'proxy:error'
    else Schema valid
        CLI->>Server: startServer(config)
        
        Server->>Server: Load PrismaClient
        Server->>Server: createAdapter<br/>(provider)
        Server->>Server: Load ZenStack<br/>model-meta
        
        Server->>DB: Test connection
        alt Connection failed
            Server-->>CLI: Throw CliError
            CLI->>Telemetry: trackError()
            CLI->>User: Print connection error
            CLI->>Telemetry: Complete: 'proxy:error'
        else Connection OK
            Server->>Server: Setup Express<br/>middleware
            Server->>Server: Mount /api/model &<br/>/api/schema endpoints
            Server->>Server: Start listening
            Server-->>CLI: Server ready
            
            CLI->>Telemetry: Complete: 'proxy:complete'
            CLI->>User: Log server status<br/>(port, endpoints)
            
            User->>Server: Send API request
            Server->>DB: Query via adapter
            DB-->>Server: Return results
            Server-->>User: JSON response
            
            User->>User: Interrupt (Ctrl+C)
            Server->>Server: Graceful shutdown
            Server->>DB: Disconnect Prisma
            Server->>Telemetry: Flush events
            Server->>CLI: Exit(0)
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 A bundle of CLI scripts, so neat,
With schemas and servers, a tidy suite,
Telemetry tracking each hop and bound,
The proxy's now ready to spin around! 🚀
Building with jest, and watching with care,
ZenStack's new tool is beyond compare! ✨

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Title check ⚠️ Warning: The title 'merge dev to main' is a generic branch-merge description that does not communicate the substantial changes introduced in this pull request, such as the CLI implementation, workflows, or configuration. Resolution: revise the title to reflect the primary change, e.g. 'Add ZenStack Proxy CLI with GitHub Actions workflows and build configuration', or a similarly descriptive title.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 57.69%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.



Copilot AI left a comment


Pull request overview

This PR merges development work to main, adding a complete ZenStack Proxy CLI tool. The implementation includes a command-line interface for running an Express server with ZenStack integration, telemetry tracking, database adapter support for SQLite/PostgreSQL/MySQL, and automated CI/CD workflows.

Key changes include:

  • Core CLI and server infrastructure with ZenStack middleware integration
  • ZModel schema parser with support for Prisma 7 configuration
  • Telemetry implementation using Mixpanel for usage tracking
  • GitHub Actions workflows for build/test, publishing, and code review

Reviewed changes

Copilot reviewed 20 out of 23 changed files in this pull request and generated 12 comments.

Show a summary per file
File Description
tsconfig.json TypeScript configuration for ES2020 target with strict mode enabled
src/zmodel-parser.ts Parser for ZModel schemas, extracting datasource/generator configuration with Prisma 7 support
src/utils/version-utils.ts Utilities for retrieving version information for the CLI, ZenStack, and Prisma
src/utils/machine-id-utils.ts Cross-platform machine ID generation for telemetry identification
src/utils/is-*.ts Environment detection utilities for WSL, Docker, containers, and CI
src/utils/exec-utils.ts Wrapper utilities for executing shell commands with environment variable merging
src/telemetry.ts Telemetry implementation with Mixpanel integration for tracking CLI usage
src/server.ts Express server setup with ZenStack middleware, database adapters, and API endpoints
src/index.ts CLI entry point with Commander.js for parsing command-line arguments
src/constants.ts Constants file with telemetry token placeholder for build-time replacement
src/cli-error.ts Custom error class for CLI-specific errors
scripts/post-build.ts Post-build script to inject telemetry token into compiled code
package.json Package manifest with dependencies, scripts, and publishing configuration
README.md Documentation covering installation, usage, options, and API endpoints
pnpm-lock.yaml Lock file for dependency resolution with pnpm
.gitignore Git ignore patterns for node_modules, build artifacts, and environment files
.github/workflows/publish-release.yml CI workflow for building, publishing to npm, and creating GitHub releases
.github/workflows/claude-code-review.yml Automated code review workflow using Claude AI
.github/workflows/build-test.yml CI workflow for building and testing on pull requests
.github/dependabot.yml Dependabot configuration for automated dependency updates
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

🤖 Fix all issues with AI Agents
In @.github/dependabot.yml:
- Line 19: The target-branch setting sits at the root level, which is invalid in the Dependabot v2
schema; move it into each updates entry. Remove the top-level key and add target-branch: dev
inside every updates item so each update block explicitly defines its target branch.

In @.github/workflows/publish-release.yml:
- Line 26: Replace the pinned pnpm version value "10.12.1" with the latest
stable release "10.27.0" in the workflow where the YAML key is "version"
(currently set to 10.12.1) so the GitHub Actions job uses pnpm 10.27.0; update
only the version scalar to "10.27.0" and keep the surrounding workflow
configuration unchanged.

In @README.md:
- Around line 19-26: Update the CLI options documentation to fix the formatting
inconsistency: replace the erroneous "`-s  --schema <path>`" entry in the README
with "`-s, --schema <path>`" so it matches the style of the other options and
the implementation in src/index.ts; ensure there is a comma and single space
after `-s` and before `--schema`.

In @src/index.ts:
- Around line 17-20: getVersion() can return undefined, yet the code applies a non-null assertion
in program.version(getVersion()!), which is unsafe. Store the result in a variable and either call
program.version(version) only when it is defined, or provide a safe fallback (e.g., const version =
getVersion() ?? '0.0.0') and pass that to program.version; then remove the non-null assertion at
the call site (see the sketch after this file's items).
- Around line 45-51: The code passes parseInt(options.port) directly to
startServer which can yield NaN for invalid input; validate options.port before
calling startServer (use parseInt with radix 10), check that the parsed value is
a finite integer within the valid TCP port range (1–65535), and if invalid
either throw a clear error or provide a sensible default; update the call to
startServer to pass the validated port number (reference options.port and the
startServer invocation) so runtime errors are avoided.
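
A minimal sketch of both src/index.ts fixes above, assuming the names mentioned in the review (getVersion, CliError, options.port); resolvePort is a hypothetical helper and the real CLI wiring may differ:

import { Command } from 'commander';
import { getVersion } from './utils/version-utils';
import { CliError } from './cli-error';

const program = new Command('zenstack-proxy');

// Avoid the non-null assertion: fall back to a placeholder when the version is unknown.
program.version(getVersion() ?? '0.0.0');

// Validate the port before handing it to startServer.
function resolvePort(raw: string): number {
    const port = Number.parseInt(raw, 10);
    if (!Number.isInteger(port) || port < 1 || port > 65535) {
        throw new CliError(`Invalid port "${raw}": expected an integer between 1 and 65535`);
    }
    return port;
}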

In @src/server.ts:
- Around line 234-249: The SIGTERM/SIGINT handlers registered inside startServer
use process.on and will be added every time startServer runs, causing duplicate
shutdowns; change registration to use process.once for both signals (or add an
idempotent guard that tracks a boolean like shutdownHandlersRegistered and skips
registering if already true) so the server.close/await prisma.$disconnect logic
tied to server and prisma executes only once per process; ensure the chosen
approach references the existing server and prisma variables and still logs the
closing message and exits.
- Around line 208-213: The file registers express.json() twice which is
redundant and can conflict with the configured instance; remove the plain
app.use(express.json()) and keep the configured app.use(express.json({ limit:
'5mb' })) so body parsing uses the intended limit; update the app middleware
order accordingly (app, express.json({ limit: '5mb' }), express.urlencoded(...),
etc.) ensuring only the configured express.json call remains.
- Around line 52-78: The createAdapter function calls url.trim() without
validating config.datasource.url; add a null/undefined check before trimming and
handle missing URL by throwing the existing CliError (or a clear error) so we
don't call trim on undefined. Locate createAdapter and the switch case for
'sqlite' where url is read from config.datasource, validate that url is a
non-empty string before calling url.trim() and before using
resolvePrismaSchemaDir / resolveSQLitePath, and keep the existing catch that
throws CliError mentioning better-sqlite3 and @prisma/adapter-better-sqlite3.
- Around line 124-139: The code assumes generator.output is relative to
zmodelSchemaDir/prisma by hardcoding 'prisma' when building prismaClientPath;
instead use the actual resolved Prisma schema directory (the value produced by
resolvePrismaSchemaDir or the prismaSchemaPath-derived directory) when joining
with generator.output and 'client'. Update the path construction that sets
prismaClientPath to use that resolved prisma schema dir (not the literal
'prisma'), keep the absolute-path branch unchanged, and leave PrismaClient =
prismaModule.PrismaClient as-is so the loaded module is used.

In @src/utils/exec-utils.ts:
- Around line 6-15: The child process loses parent environment because execSync
always passes env: mergedEnv which becomes undefined when options.env is
omitted; update execSync (function execSync and the _exec call) so that when
options.env is not provided you do not pass an undefined env (either set
mergedEnv to process.env or only include the env property in the options object
when options.env is present) so the child inherits the parent environment
(ensure stdio and other restOptions behavior stays the same).
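
A hedged sketch of the env-merging fix; the wrapper's real signature in exec-utils.ts is assumed from the review text:

import { execSync as _exec, type ExecSyncOptions } from 'node:child_process';

export function execSync(cmd: string, options?: ExecSyncOptions & { env?: NodeJS.ProcessEnv }): void {
    const { env, ...restOptions } = options ?? {};
    // Only merge when a custom env is provided; otherwise inherit the parent environment as-is.
    const mergedEnv = env ? { ...process.env, ...env } : process.env;
    _exec(cmd, { stdio: 'inherit', ...restOptions, env: mergedEnv });
}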

In @src/utils/is-ci.ts:
- Around line 2-5: The isInCi logic currently treats any CI value except '0' or
'false' as true; update the check to normalize env['CI'] (trim and toLowerCase)
and only treat explicit truthy tokens (e.g., '1', 'true', 'yes') as CI, treating
'' (empty string) as false, while preserving the existing fallbacks
('CONTINUOUS_INTEGRATION' in env and Object.keys(env).some(key =>
key.startsWith('CI_'))); modify the exported const isInCi accordingly (use the
normalized value variable) or, if the broader behavior was intentional, add a
clarifying comment near isInCi explaining why other values are accepted.
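
A sketch of the normalized check that keeps the existing fallbacks; the exact set of truthy tokens is an assumption:

const env = process.env;

// Normalize CI and accept only explicit truthy tokens; an empty string counts as "not CI".
const ciValue = (env['CI'] ?? '').trim().toLowerCase();

export const isInCi =
    ['1', 'true', 'yes'].includes(ciValue) ||
    'CONTINUOUS_INTEGRATION' in env ||
    Object.keys(env).some((key) => key.startsWith('CI_'));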

In @src/utils/version-utils.ts:
- Around line 18-30: The recursion termination incorrectly checks startPath !==
'/' which fails on Windows; update getNodeModulesFolder to detect root
cross-platform by using path.parse(startPath).root (or compare
path.resolve(startPath) to path.parse(startPath).root) instead of '/' — e.g.,
compute const root = path.parse(startPath).root and stop recursion when
startPath === root (or when parent === root and parent does not change),
ensuring you use path.join/resolve consistently so the function returns
undefined at the real filesystem root on all platforms.
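
A cross-platform sketch of the termination condition using path.parse().root; the function's real signature is assumed:

import fs from 'node:fs';
import path from 'node:path';

export function getNodeModulesFolder(startPath: string = __dirname): string | undefined {
    const resolved = path.resolve(startPath);
    const candidate = path.join(resolved, 'node_modules');
    if (fs.existsSync(candidate)) {
        return candidate;
    }
    // Stop at the filesystem root: '/' on POSIX, 'C:\' (or similar) on Windows.
    const root = path.parse(resolved).root;
    if (resolved === root) {
        return undefined;
    }
    return getNodeModulesFolder(path.dirname(resolved));
}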

In @src/zmodel-parser.ts:
- Around line 38-90: The code uses the Function constructor in loadPrismaConfig
to evaluate prisma.config.ts (configFn = new Function('env', `return
${configObjectStr}`)), which executes arbitrary code and is a security risk and
also does not validate the returned shape; replace this with a safe, static
extraction and validation: parse the file AST with a JS/TS parser (e.g.,
@babel/parser or TypeScript compiler API) to locate the export default
object/defineConfig call and read the datasource.url property without executing
code, or if runtime evaluation is required, run the code inside a sandboxed vm
with no access to globals and a controlled env function; additionally validate
that the resolved value is an object before accessing config.datasource?.url and
preserve existing error handling for missing env variables thrown by env()
(rethrow those), updating loadPrismaConfig to return null on parse/eval failures
with logged warnings.
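
If runtime evaluation is kept, a sandboxed-vm variant (one of the alternatives mentioned above) could look roughly like this; configObjectStr and the env() helper come from the review text, and node:vm provides isolation rather than a hard security boundary:

import vm from 'node:vm';

function evaluateConfigObject(configObjectStr: string, env: (name: string) => string): Record<string, unknown> | null {
    try {
        // Only `env` is exposed; the evaluated code has no access to require, process, or other globals.
        const context = vm.createContext({ env });
        const result = vm.runInContext(`(${configObjectStr})`, context, { timeout: 100 });
        return typeof result === 'object' && result !== null ? (result as Record<string, unknown>) : null;
    } catch (e) {
        // A full implementation would rethrow missing-env errors per the review; this sketch only warns.
        console.warn(`Failed to evaluate prisma.config.ts object: ${e}`);
        return null;
    }
}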
🧹 Nitpick comments (15)
src/utils/is-wsl.ts (1)

4-18: LGTM! Well-implemented WSL detection.

The implementation correctly detects WSL using the standard approach: checking the platform, os.release(), and /proc/version for the "microsoft" identifier. The defensive error handling with try/catch is appropriate, and the synchronous file read is acceptable for this CLI initialization context.

💡 Optional: Consider memoizing for repeated calls

If this function is called frequently, you could memoize the result since WSL status won't change during runtime:

+let cachedResult: boolean | undefined;
+
 export const isWsl = () => {
+    if (cachedResult !== undefined) {
+        return cachedResult;
+    }
+
     if (process.platform !== 'linux') {
-        return false;
+        return (cachedResult = false);
     }
 
     if (os.release().toLowerCase().includes('microsoft')) {
-        return true;
+        return (cachedResult = true);
     }
 
     try {
-        return fs.readFileSync('/proc/version', 'utf8').toLowerCase().includes('microsoft');
+        return (cachedResult = fs.readFileSync('/proc/version', 'utf8').toLowerCase().includes('microsoft'));
     } catch {
-        return false;
+        return (cachedResult = false);
     }
 };

However, the current implementation is already very fast, so this is purely a micro-optimization.

src/utils/machine-id-utils.ts (2)

22-28: Consider modern alternatives to hasOwnProperty.

While the current implementation works, you could use the in operator or Object.hasOwn() (Node.js 16.9+) instead of calling hasOwnProperty directly on process.env.

🔎 Proposed refactor
 function isWindowsProcessMixedOrNativeArchitecture() {
-  // eslint-disable-next-line no-prototype-builtins
-  if (process.arch === 'ia32' && process.env.hasOwnProperty('PROCESSOR_ARCHITEW6432')) {
+  if (process.arch === 'ia32' && 'PROCESSOR_ARCHITEW6432' in process.env) {
     return 'mixed'
   }
   return 'native'
 }

34-61: Minor inconsistency in error handling.

The expose() function throws an error for unsupported platforms (line 59), but getMachineId() already checks platform support at line 64 before calling expose(). This makes the throw statement unreachable in practice.

For consistency, you could either:

  1. Remove the throw and return undefined for the default case, or
  2. Remove the platform check in getMachineId() and rely on the exception
🔎 Option 1: Return undefined for unsupported platforms
     case 'freebsd':
       return result
         .toString()
         .replace(/\r+|\n+|\s+/gi, '')
         .toLowerCase()
     default:
-      throw new Error(`Unsupported platform: ${process.platform}`)
+      return undefined
   }
 }
.github/workflows/publish-release.yml (1)

58-72: Improve changelog generation robustness.

The changelog generation script may produce malformed output if commit messages contain special characters, backticks, or newlines that could break the heredoc delimiter or GitHub Actions output format.

🔎 Proposed improvement
       - name: Generate changelog
         id: changelog
         run: |
           PREVIOUS_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
 
           if [ -z "$PREVIOUS_TAG" ]; then
-            CHANGELOG=$(git log --oneline --no-merges --format="* %s" HEAD)
+            CHANGELOG=$(git log --oneline --no-merges --format="* %s" HEAD | head -n 50)
           else
-            CHANGELOG=$(git log --oneline --no-merges --format="* %s" ${PREVIOUS_TAG}..HEAD)
+            CHANGELOG=$(git log --oneline --no-merges --format="* %s" ${PREVIOUS_TAG}..HEAD | head -n 50)
           fi
 
           if [ -z "$CHANGELOG" ]; then
             CHANGELOG="* Automated release"
           fi
 
-          echo "changelog<<EOF" >> $GITHUB_OUTPUT
-          echo "$CHANGELOG" >> $GITHUB_OUTPUT
-          echo "EOF" >> $GITHUB_OUTPUT
+          {
+            echo "changelog<<EOF"
+            echo "$CHANGELOG"
+            echo "EOF"
+          } >> $GITHUB_OUTPUT

Consider limiting the number of commits included and grouping the output commands to ensure atomicity.

package.json (1)

19-19: Fill in the author field.

The author field is currently empty. For proper package attribution and npm registry information, consider adding the author name or organization.

.github/workflows/claude-code-review.yml (1)

34-36: Consider pinning to a specific version or commit SHA.

The workflow uses @beta which is an unpinned, moving target. This could introduce unexpected breaking changes or bugs in the CI pipeline. For better stability and reproducibility, consider pinning to a specific version tag or commit SHA.

Example:

- name: Run Claude Code Review
  id: claude-review
  uses: anthropics/[email protected]  # or @<commit-sha>
scripts/post-build.ts (1)

10-16: Add error handling for file operations.

The file read/write operations lack explicit error handling. While the script will fail if files are missing, providing clearer error messages would improve debuggability during builds.

🔎 Proposed enhancement with error handling
  for (const file of filesToProcess) {
    console.log(`Processing ${file} for telemetry token...`)
    const filePath = path.join(__dirname, '..', file)
-   const content = fs.readFileSync(filePath, 'utf-8')
-   const updatedContent = content.replace('<TELEMETRY_TRACKING_TOKEN>', token)
-   fs.writeFileSync(filePath, updatedContent, 'utf-8')
+   try {
+     const content = fs.readFileSync(filePath, 'utf-8')
+     const updatedContent = content.replace('<TELEMETRY_TRACKING_TOKEN>', token)
+     fs.writeFileSync(filePath, updatedContent, 'utf-8')
+   } catch (error) {
+     throw new Error(`Failed to process ${file}: ${error instanceof Error ? error.message : String(error)}`)
+   }
  }
src/utils/is-docker.ts (1)

24-31: Consider updating the TODO comment or using ??=.

The TODO suggests waiting for Node.js 16, but the workflow and tsconfig indicate the project targets Node.js 20.x. The nullish coalescing assignment operator (??=) is available in Node.js 15+ and ES2021, so you can safely use it now.

🔎 Proposed modernization using ??=
 export default function isDocker() {
-    // TODO: Use `??=` when targeting Node.js 16.
-    if (isDockerCached === undefined) {
-        isDockerCached = hasDockerEnv() || hasDockerCGroup();
-    }
-
+    isDockerCached ??= hasDockerEnv() || hasDockerCGroup();
     return isDockerCached;
 }
src/utils/is-container.ts (1)

16-23: Consider updating the TODO comment or using ??=.

Similar to is-docker.ts, this TODO references Node.js 16, but the project targets Node.js 20.x. You can safely use the nullish coalescing assignment operator (??=) now.

🔎 Proposed modernization using ??=
 export function isInContainer() {
-    // TODO: Use `??=` when targeting Node.js 16.
-    if (cachedResult === undefined) {
-        cachedResult = hasContainerEnv() || isDocker();
-    }
-
+    cachedResult ??= hasContainerEnv() || isDocker();
     return cachedResult;
 }
src/index.ts (1)

57-82: Correct exit code handling for non-CommanderError cases.

Line 61 initializes exitCode to 1, but it's only updated to e.exitCode for CommanderError (line 69). For CliError and unhandled errors, the code will exit with 1, which is correct. However, the logic would be clearer if exit codes were explicitly set for each error type, and the process should exit immediately if telemetry is not tracking.

🔎 Proposed refactor for clarity
   let exitCode = 1
   try {
     await telemetry.trackCli(async () => {
       await program.parseAsync()
     })
+    exitCode = 0
   } catch (e: any) {
     if (e instanceof CommanderError) {
       // ignore
       exitCode = e.exitCode
     } else if (e instanceof CliError) {
       console.error(red(e.message))
+      exitCode = 1
     } else {
       console.error(red(`Unhandled error: ${e}`))
+      exitCode = 1
     }
-    if (telemetry.isTracking) {
-      // give telemetry a chance to send events before exit
-      setTimeout(() => {
-        process.exit(exitCode)
-      }, 200)
-    }
+  } finally {
+    if (exitCode !== 0 && telemetry.isTracking) {
+      // give telemetry a chance to send events before exit
+      setTimeout(() => {
+        process.exit(exitCode)
+      }, 200)
+    } else if (exitCode !== 0) {
+      process.exit(exitCode)
+    }
   }
 }
src/server.ts (2)

144-147: Reconsider manipulating module.paths.

Lines 144-146 directly manipulate module.paths to support pnpm monorepos. This approach is fragile and can lead to unexpected module resolution behavior, especially in complex dependency scenarios or different package managers.

Consider using require.resolve with explicit paths or documenting this as a known limitation for certain monorepo setups. Alternatively, provide a CLI option for users to specify the node_modules location.
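
A sketch of the require.resolve alternative; the module specifier and search directory are placeholders, not the project's actual values:

// Resolve a dependency relative to the user's project instead of mutating module.paths.
function resolveFromProject(specifier: string, projectDir: string): string {
    return require.resolve(specifier, { paths: [projectDir] });
}

// Hypothetical usage: load the generated Prisma client next to the user's schema.
// const prismaModule = require(resolveFromProject('@prisma/client', zmodelSchemaDir));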


196-199: Optimize adapter creation call.

Line 197 calls createAdapter(zmodelConfig, zmodelSchemaDir) even when isClientEngine is false, only to assign null to the adapter. This unnecessarily executes the adapter creation logic and its potential side effects (logging, error throwing) when the result won't be used.

🔎 Proposed optimization
   const prisma = new PrismaClient({
-    adapter: isClientEngine ? createAdapter(zmodelConfig, zmodelSchemaDir) : null,
+    adapter: isClientEngine ? createAdapter(zmodelConfig, zmodelSchemaDir) : undefined,
     log: options.logLevel || [],
   })

Note: Changed null to undefined as that's typically the expected type for optional adapter parameter. Verify PrismaClient type definition.

src/telemetry.ts (1)

36-42: Consider handling additional DO_NOT_TRACK values.

Line 37 only checks if DO_NOT_TRACK equals the string '1'. The DO_NOT_TRACK convention also recognizes other truthy values like 'true', 'TRUE', or any non-empty string. Consider a more inclusive check to respect user privacy preferences.

🔎 Proposed enhancement
   constructor() {
-    if (process.env['DO_NOT_TRACK'] !== '1' && TELEMETRY_TRACKING_TOKEN) {
+    const doNotTrack = process.env['DO_NOT_TRACK']
+    const isOptedOut = doNotTrack && doNotTrack !== '0' && doNotTrack.toLowerCase() !== 'false'
+    if (!isOptedOut && TELEMETRY_TRACKING_TOKEN) {
       this.mixpanel = init(TELEMETRY_TRACKING_TOKEN, {
         geolocate: true,
       })
     }
   }
src/zmodel-parser.ts (2)

27-33: Comment removal regex may not handle all edge cases.

The regex patterns on lines 29 and 31 remove comments but don't account for comment-like patterns inside string literals. For example, a string containing "file://path" or "https://example.com" could be incorrectly modified.

While this is unlikely to cause issues in typical ZModel schemas, consider documenting this limitation or implementing a more robust parser that respects string boundaries if schema files are expected to contain such patterns.
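
For reference, a string-aware stripper along the lines suggested above could look like this; it is a sketch that assumes ' and " delimit string literals and that // and /* */ are the only comment forms in ZModel:

function stripComments(source: string): string {
    let out = '';
    let i = 0;
    while (i < source.length) {
        const ch = source[i];
        const next = source[i + 1];
        if (ch === '"' || ch === "'") {
            // Copy the string literal verbatim (escape sequences are not handled in this sketch).
            const quote = ch;
            out += source[i++];
            while (i < source.length && source[i] !== quote) {
                out += source[i++];
            }
            if (i < source.length) {
                out += source[i++]; // closing quote
            }
        } else if (ch === '/' && next === '/') {
            // Drop line comment up to the newline.
            while (i < source.length && source[i] !== '\n') i++;
        } else if (ch === '/' && next === '*') {
            // Drop block comment.
            i += 2;
            while (i < source.length && !(source[i] === '*' && source[i + 1] === '/')) i++;
            i += 2;
        } else {
            out += source[i++];
        }
    }
    return out;
}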


177-177: Fix typo in comment.

Line 177 has a typo: "Exact engineType" should be "Extract engineType" to match the pattern of other comments in this function.

🔎 Proposed fix
-  // Exact engineType (optional)
+  // Extract engineType (optional)
   const engineTypeMatch = generatorBlock.match(/engineType\s*=\s*['"]([^'"]+)['"]/)
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5d4ccd and 41d8644.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (22)
  • .github/dependabot.yml
  • .github/workflows/build-test.yml
  • .github/workflows/claude-code-review.yml
  • .github/workflows/publish-release.yml
  • .gitignore
  • README.md
  • package.json
  • scripts/post-build.ts
  • src/cli-error.ts
  • src/constants.ts
  • src/index.ts
  • src/server.ts
  • src/telemetry.ts
  • src/utils/exec-utils.ts
  • src/utils/is-ci.ts
  • src/utils/is-container.ts
  • src/utils/is-docker.ts
  • src/utils/is-wsl.ts
  • src/utils/machine-id-utils.ts
  • src/utils/version-utils.ts
  • src/zmodel-parser.ts
  • tsconfig.json
🧰 Additional context used
🧬 Code graph analysis (6)
src/utils/machine-id-utils.ts (1)
src/utils/exec-utils.ts (1)
  • execSync (6-15)
src/server.ts (3)
src/zmodel-parser.ts (2)
  • ZModelConfig (18-22)
  • GeneratorConfig (12-16)
src/cli-error.ts (1)
  • CliError (4-4)
src/utils/version-utils.ts (3)
  • getNodeModulesFolder (18-30)
  • getZenStackVersion (32-43)
  • getPrismaVersion (48-58)
src/telemetry.ts (7)
src/utils/machine-id-utils.ts (1)
  • getMachineId (63-77)
src/utils/version-utils.ts (2)
  • getVersion (5-16)
  • getPrismaVersion (48-58)
src/utils/is-docker.ts (1)
  • isDocker (24-31)
src/utils/is-wsl.ts (1)
  • isWsl (4-18)
src/utils/is-container.ts (1)
  • isInContainer (16-23)
src/utils/is-ci.ts (1)
  • isInCi (2-5)
src/constants.ts (1)
  • TELEMETRY_TRACKING_TOKEN (2-2)
src/index.ts (5)
src/utils/version-utils.ts (1)
  • getVersion (5-16)
src/zmodel-parser.ts (1)
  • parseZModelSchema (197-213)
src/server.ts (1)
  • startServer (181-250)
src/telemetry.ts (1)
  • telemetry (114-114)
src/cli-error.ts (1)
  • CliError (4-4)
src/utils/is-container.ts (1)
src/utils/is-docker.ts (1)
  • isDocker (24-31)
src/zmodel-parser.ts (1)
src/cli-error.ts (1)
  • CliError (4-4)
🪛 actionlint (1.7.9)
.github/workflows/build-test.yml

18-18: label "buildjet-8vcpu-ubuntu-2204" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2025", "windows-2022", "windows-11-arm", "ubuntu-slim", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-24.04-arm", "ubuntu-22.04", "ubuntu-22.04-arm", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-26-xlarge", "macos-26", "macos-15-intel", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file

(runner-label)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Agent
🔇 Additional comments (17)
src/utils/machine-id-utils.ts (3)

63-77: Excellent error handling and fallback strategy.

The implementation has robust error handling with appropriate fallbacks to randomUUID() at multiple levels:

  • Unsupported platforms (line 65)
  • Failed command execution or parsing (line 75)
  • Invalid/empty parse results (line 71)

The use of hashing (line 73) is good practice for privacy, ensuring the hardware identifier isn't exposed directly.


11-20: Platform-specific commands are well-designed.

The command definitions are appropriate for each platform:

  • Darwin: Uses ioreg to query IOPlatformExpertDevice
  • Windows: Queries the Cryptography registry key with architecture-aware path
  • Linux: Tries multiple sources with graceful fallback to hostname
  • FreeBSD: Uses kenv or sysctl as fallback

The Linux command's || : ensures it always succeeds, allowing the parsing logic to handle empty/invalid results gracefully.


3-3: The recommendation to use exec-utils is not applicable here. While exec-utils.ts exists as a wrapper, it returns void and is designed for commands that only produce side effects (printing to console). The getMachineId() function at line 68 explicitly requires the command output to parse and extract the machine ID, making direct use of execSync the correct approach. Additionally, exec-utils is not currently used elsewhere in the codebase, so there is no consistency concern.

Likely an incorrect or invalid review comment.

.gitignore (1)

1-7: LGTM!

The ignore patterns are appropriate for a Node.js/TypeScript project, covering build artifacts, dependencies, logs, environment files, and OS-specific files.

src/cli-error.ts (1)

1-4: LGTM!

Clean implementation of a custom error class for CLI-specific errors. The JSDoc comment clearly describes its purpose.

.github/workflows/publish-release.yml (1)

9-10: Remove comment — the telemetry configuration is not contradictory.

The TELEMETRY_TRACKING_TOKEN is required during the build step (used in scripts/post-build.ts to embed the token in built artifacts), while DO_NOT_TRACK="1" prevents telemetry from actually running at runtime. The telemetry initialization code explicitly respects this pattern: if (process.env['DO_NOT_TRACK'] !== '1' && TELEMETRY_TRACKING_TOKEN). This is intentional and correct—the token is needed for build-time artifact generation while the flag prevents runtime telemetry execution in the CI environment.

Likely an incorrect or invalid review comment.

src/constants.ts (1)

1-2: LGTM!

The placeholder pattern for build-time token substitution is appropriate. The AI summary indicates that scripts/post-build.ts handles the replacement, which is the correct approach for keeping sensitive tokens out of source control.

scripts/post-build.ts (1)

4-8: LGTM! Token validation is appropriate.

The token validation ensures the build fails fast if the required environment variable is missing, which is correct behavior for a build-time script.

.github/workflows/build-test.yml (4)

1-11: LGTM! Environment configuration is appropriate.

The workflow correctly uses secrets for the telemetry token and sets DO_NOT_TRACK=1 to prevent telemetry during CI runs.


24-54: LGTM! Setup steps follow best practices.

The workflow properly:

  • Pins pnpm version for reproducibility
  • Uses cache to speed up builds
  • Installs with --frozen-lockfile to ensure consistent dependencies

56-59: LGTM! Build steps are correctly ordered.

The build step compiles the code, then the post-build script injects the telemetry token into the built artifacts.


18-18: The buildjet-8vcpu-ubuntu-2204 runner is intentional and properly configured.

This workflow uses BuildJet, a third-party GitHub Actions runner provider, as evidenced by the use of buildjet/setup-node@v3 and buildjet/cache@v3 actions. The runner label is a valid BuildJet offering and appears to be a deliberate organizational choice for enhanced performance or custom specifications. No action needed.

src/utils/is-docker.ts (2)

1-14: LGTM! Docker environment detection is implemented correctly.

The function properly handles errors and uses synchronous operations, which is acceptable for this utility function.


16-22: LGTM! CGroup detection is correctly implemented.

The function appropriately reads the cgroup file and checks for Docker presence.

tsconfig.json (2)

1-16: LGTM! TypeScript configuration is well-structured.

The configuration appropriately enables strict mode, declarations, and source maps. The ES2020 target with CommonJS modules provides good compatibility.


17-20: LGTM! Include/exclude patterns are appropriate.

The configuration correctly includes source files and excludes build artifacts and dependencies.

src/utils/is-container.ts (1)

1-14: LGTM! Podman detection is correctly implemented.

The function properly detects Podman containers by checking for the /run/.containerenv file with appropriate error handling.

Comment on lines +234 to +249
// Graceful shutdown
process.on('SIGTERM', async () => {
server.close(() => {
console.log('\nZenStack proxy server closed')
})
await prisma.$disconnect()
process.exit(0)
})

process.on('SIGINT', async () => {
server.close(() => {
console.log('\nZenStack proxy server closed')
})
await prisma.$disconnect()
process.exit(0)
})

⚠️ Potential issue | 🟠 Major

Risk of duplicate signal handler registration.

Lines 235-249 register SIGTERM and SIGINT handlers each time startServer is called. If startServer is invoked multiple times (e.g., in tests or during hot reload), this will register multiple handlers, leading to repeated shutdown attempts and potential race conditions during cleanup.

Consider one of these approaches:

  1. Use process.once() instead of process.on() to ensure handlers run only once
  2. Track registration state and skip if already registered
  3. Provide a cleanup function to remove handlers
  4. Document that startServer should only be called once per process
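
A minimal sketch combining options 1 and 2, assuming the server and prisma variables from the snippet above:

import type { Server } from 'node:http';

let shutdownHandlersRegistered = false;

function registerShutdownHandlers(server: Server, prisma: { $disconnect(): Promise<void> }) {
    if (shutdownHandlersRegistered) return;
    shutdownHandlersRegistered = true;

    const shutdown = async () => {
        server.close(() => {
            console.log('\nZenStack proxy server closed');
        });
        await prisma.$disconnect();
        process.exit(0);
    };

    // `once` removes each handler after it fires, so repeated signals cannot re-enter shutdown.
    process.once('SIGTERM', shutdown);
    process.once('SIGINT', shutdown);
}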
🤖 Prompt for AI Agents
In @src/server.ts around lines 234-249, The SIGTERM/SIGINT handlers registered
inside startServer use process.on and will be added every time startServer runs,
causing duplicate shutdowns; change registration to use process.once for both
signals (or add an idempotent guard that tracks a boolean like
shutdownHandlersRegistered and skips registering if already true) so the
server.close/await prisma.$disconnect logic tied to server and prisma executes
only once per process; ensure the chosen approach references the existing server
and prisma variables and still logs the closing message and exits.

* fix: change the package name to `@zenstackhq/proxy`

- Consolidate Dependabot configuration for npm
- Remove obsolete Claude Code Review workflow
- Add .prettierignore and .prettierrc for code formatting
- Update README for schema option formatting
- Create CLI entry point in bin/cli.js
- Modify package.json for clearer project description
- Enhance error handling in index.ts
- Fix import typo in server.ts
- Improve telemetry import in telemetry.ts
- Refine getNodeModulesFolder logic in version-utils.ts
- Clarify comments in zmodel-parser.ts

* fix: change runner to ubuntu-latest in build-test workflow

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
src/index.ts (1)

79-85: Telemetry may not flush on success path.

The setTimeout for telemetry flush only executes in the catch block. On successful CLI execution, the process may exit before Mixpanel's async track() calls complete, potentially losing telemetry data.

Consider adding a flush mechanism on successful completion as well, or ensure Mixpanel's callbacks complete before the process naturally exits.
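
One low-effort option, mirroring the existing error-path delay (a sketch; the 200 ms figure comes from the catch block described above and telemetry.isTracking from src/telemetry.ts):

// After program.parseAsync() resolves successfully:
if (telemetry.isTracking) {
    // Give Mixpanel's in-flight requests a moment to complete before the process exits.
    await new Promise<void>((resolve) => setTimeout(resolve, 200));
}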

src/zmodel-parser.ts (2)

5-5: Consider expanding DatabaseProvider type.

The type only includes sqlite, postgresql, and mysql, but Prisma supports additional providers like sqlserver, mongodb, and cockroachdb. If users specify these providers, the cast on line 113 will silently accept invalid values.

🔎 Proposed fix
-type DatabaseProvider = 'sqlite' | 'postgresql' | 'mysql'
+type DatabaseProvider = 'sqlite' | 'postgresql' | 'mysql' | 'sqlserver' | 'mongodb' | 'cockroachdb'

186-191: Regex assumes specific attribute ordering in plugin block.

The regex requires provider to appear before output in the plugin block. If users write output before provider, the match will fail.

🔎 Proposed fix to handle any ordering
 export function parsePrismaSchemaPath(content: string): string | undefined {
-  const match = content.match(
-    /plugin\s+\w+\s*\{[^}]*provider\s*=\s*['"]@core\/prisma['"][^}]*output\s*=\s*['"]([^'"]+)['"][^}]*\}/s
-  )
-  return match ? match[1] : undefined
+  // Match any plugin block
+  const pluginMatch = content.match(/plugin\s+\w+\s*\{([^}]+)\}/s)
+  if (!pluginMatch) return undefined
+  
+  const block = pluginMatch[1]
+  // Check if this is the @core/prisma provider
+  const providerMatch = block.match(/provider\s*=\s*['"]@core\/prisma['"]/)
+  if (!providerMatch) return undefined
+  
+  // Extract output
+  const outputMatch = block.match(/output\s*=\s*['"]([^'"]+)['"]/)
+  return outputMatch ? outputMatch[1] : undefined
 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41d8644 and ae41d65.

📒 Files selected for processing (12)
  • .github/dependabot.yml
  • .github/workflows/build-test.yml
  • .prettierignore
  • .prettierrc
  • README.md
  • bin/cli.js
  • package.json
  • src/index.ts
  • src/server.ts
  • src/telemetry.ts
  • src/utils/version-utils.ts
  • src/zmodel-parser.ts
✅ Files skipped from review due to trivial changes (1)
  • .prettierignore
🚧 Files skipped from review as they are similar to previous changes (5)
  • package.json
  • src/server.ts
  • .github/workflows/build-test.yml
  • .github/dependabot.yml
  • README.md
🧰 Additional context used
🧬 Code graph analysis (3)
src/telemetry.ts (7)
src/utils/machine-id-utils.ts (1)
  • getMachineId (63-77)
src/utils/version-utils.ts (2)
  • getVersion (5-16)
  • getPrismaVersion (51-61)
src/utils/is-docker.ts (1)
  • isDocker (24-31)
src/utils/is-wsl.ts (1)
  • isWsl (4-18)
src/utils/is-container.ts (1)
  • isInContainer (16-23)
src/utils/is-ci.ts (1)
  • isInCi (2-5)
src/constants.ts (1)
  • TELEMETRY_TRACKING_TOKEN (2-2)
src/index.ts (5)
src/utils/version-utils.ts (1)
  • getVersion (5-16)
src/zmodel-parser.ts (1)
  • parseZModelSchema (196-212)
src/server.ts (1)
  • startServer (182-250)
src/telemetry.ts (1)
  • telemetry (113-113)
src/cli-error.ts (1)
  • CliError (4-4)
src/zmodel-parser.ts (1)
src/cli-error.ts (1)
  • CliError (4-4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test (20.x)
🔇 Additional comments (11)
.prettierrc (1)

1-14: LGTM!

Standard Prettier configuration with sensible defaults. The JSON parser override ensures proper formatting for configuration files.

bin/cli.js (1)

1-2: LGTM!

Clean CLI entry point that correctly delegates to the compiled default export.

src/utils/version-utils.ts (3)

18-33: LGTM!

The cross-platform root detection using path.parse(startPath).root === startPath correctly handles both Unix (/) and Windows (C:\) root paths, resolving the previously flagged issue.


5-16: LGTM!

Graceful fallback pattern for version detection across development and production environments.


35-61: LGTM!

Robust version detection with appropriate fallbacks for both ZenStack and Prisma packages.

src/index.ts (1)

66-78: Good improvement on error typing.

Using e: unknown with proper type narrowing via instanceof checks is the correct approach for type-safe error handling.

src/telemetry.ts (3)

19-41: LGTM!

Clean initialization with proper respect for DO_NOT_TRACK environment variable. The metadata collection is comprehensive and useful for telemetry analysis.


47-70: LGTM!

The track method properly enriches events with environment context and guards against undefined Mixpanel client.


108-110: LGTM!

Clean wrapper for CLI telemetry tracking using the generic trackSpan method.

src/zmodel-parser.ts (2)

27-33: LGTM!

Clean comment removal implementation handling both multi-line and single-line comments.


196-212: LGTM!

Clean orchestration function that coordinates schema parsing with proper error propagation from helper functions.

@jiashengguo jiashengguo merged commit 419393e into main Jan 6, 2026
4 checks passed