merge dev to main #66
Conversation
* fix: several issues with ts schema generation
* addressing review comments
* fix: schema generation for unsupported field type
* fix build
* update
* more tests
* update
Co-authored-by: ymc9 <[email protected]>
Walkthrough

This update introduces temporary Prisma schema file generation and cleanup for CLI database and migration commands, enhances test coverage with new CLI and end-to-end tests, and adds support for complex default values in schema generation. Several packages are version-bumped, and workspace/test infrastructure is improved for better modularity and extensibility.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant ActionUtils
    participant FileSystem
    participant PrismaCLI
    User->>CLI: Run db/migrate command
    CLI->>ActionUtils: generateTempPrismaSchema(zmodelPath)
    ActionUtils->>FileSystem: Write temp schema file
    ActionUtils-->>CLI: Return temp schema path
    CLI->>PrismaCLI: Execute command with temp schema
    PrismaCLI-->>CLI: Command completes
    CLI->>FileSystem: Delete temp schema file
    CLI-->>User: Output result
```

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant GenerateAction
    participant FileSystem
    User->>CLI: Run generate command [--save-prisma-schema [path]]
    CLI->>GenerateAction: run(options)
    alt savePrismaSchema specified
        GenerateAction->>FileSystem: Write Prisma schema to path
    end
    GenerateAction->>FileSystem: Write TypeScript schema
    CLI-->>User: Output result
```
Pull Request Overview
This PR merges development branch updates into main and introduces end-to-end schema generation tests for Trigger.dev and Formbricks.
- Added a Vitest config in `tests/e2e/vitest.config.ts` to extend the base test setup.
- Created e2e tests for both Trigger.dev and Formbricks schemas with their corresponding Prisma schema fixtures.
- Introduced a dedicated `tests/e2e/package.json` to manage e2e dependencies and run scripts.
Reviewed Changes
Copilot reviewed 39 out of 40 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| tests/e2e/vitest.config.ts | Merge base Vitest config for e2e tests |
| tests/e2e/trigger.dev/trigger-dev.test.ts | Add e2e test for Trigger.dev schema generation |
| tests/e2e/trigger.dev/schema.zmodel | Add Prisma schema fixture for Trigger.dev |
| tests/e2e/formbricks/formbricks.test.ts | Add e2e test for Formbricks schema generation |
| tests/e2e/formbricks/schema.zmodel | Add Prisma schema fixture for Formbricks |
| tests/e2e/package.json | Define e2e scripts and dependencies |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
Comments suppressed due to low confidence (2)
tests/e2e/trigger.dev/trigger-dev.test.ts:6
- [nitpick] Directory name `trigger.dev` contains a dot while the test filename uses hyphens (`trigger-dev.test.ts`); consider renaming the directory to `trigger-dev` or aligning naming conventions for clarity and cross-platform consistency.
describe('Trigger.dev e2e tests', () => {
tests/e2e/trigger.dev/trigger-dev.test.ts:8
- This test only asserts that `generateTsSchema` resolves truthily. To catch regressions in schema output, add snapshot tests or specific assertions on the generated TypeScript definitions.
await expect(
Actionable comments posted: 10
🔭 Outside diff range comments (1)
packages/cli/src/actions/migrate.ts (1)
97-103: Consolidate `handleSubProcessError` into a single implementation

The helper is already exported from `packages/cli/src/actions/action-utils.ts`; migrate.ts should import and reuse it rather than define a duplicate.

- In `packages/cli/src/actions/migrate.ts`, update the import:

```diff
-import { generateTempPrismaSchema, getSchemaFile } from './action-utils';
+import { generateTempPrismaSchema, getSchemaFile, handleSubProcessError } from './action-utils';
```

- Remove the local definition at the bottom of the file (lines 97–103):

```diff
-function handleSubProcessError(err: unknown) {
-    if (err instanceof Error && 'status' in err && typeof err.status === 'number') {
-        process.exit(err.status);
-    } else {
-        process.exit(1);
-    }
-}
```
🧹 Nitpick comments (9)
package.json (1)
3-3: Version bump looks good – mind the supporting release assets

The jump to `3.0.0-alpha.7` is consistent with the rest of the workspace. Just verify that the changelog and Git tags are generated in the same commit so automation (e.g., `turbo release`, `npm publish`) can detect the correct version.

packages/tanstack-query/package.json (1)

3-3: Version bump acknowledged – double-check peer range

`@zenstackhq/runtime` is still referenced via `workspace:*`, which resolves correctly, but make sure the published tarball ends up with the final semver range (usually replaced by the publish script). No blocking issue, just a reminder.

tests/e2e/cal.com/schema.zmodel (2)
34-34: Fix typo in comment.

```diff
- // weightAdjustment is deprecated. We not calculate the calibratino value on the spot. Plan to drop this column.
+ // weightAdjustment is deprecated. We now calculate the calibration value on the spot. Plan to drop this column.
```
26-2366: Well-structured schema with comprehensive modeling.

The schema demonstrates good practices:
- Appropriate indexes on foreign keys and frequently queried fields
- Proper cascade delete configurations for referential integrity
- Clear deprecation comments for fields planned for removal
- JSON field validation using Zod schemas
Consider documenting the migration plan for the deprecated fields to ensure smooth transitions.
packages/cli/src/actions/migrate.ts (1)
54-65: Consider using template literals for better readability

The command string construction is getting complex with multiple ternary operators.

```diff
-await execPackage(
-    `prisma migrate dev --schema "${prismaSchemaFile}" --skip-generate${options.name ? ` --name ${options.name}` : ''}${options.createOnly ? ' --create-only' : ''}`,
-    {
-        stdio: 'inherit',
-    },
-);
+const args = [
+    'prisma migrate dev',
+    `--schema "${prismaSchemaFile}"`,
+    '--skip-generate',
+    options.name && `--name ${options.name}`,
+    options.createOnly && '--create-only',
+].filter(Boolean).join(' ');
+
+await execPackage(args, {
+    stdio: 'inherit',
+});
```

tests/e2e/trigger.dev/schema.zmodel (1)
92-93: Consider using String or Bytes for encrypted data.

Storing `encryptedToken` as Json type is unusual. Encrypted data is typically binary or base64-encoded strings. Using Json might introduce unnecessary overhead or potential parsing issues.

Consider changing the type to `String` (for base64-encoded data) or `Bytes` (for raw binary data):

```diff
- encryptedToken Json
+ encryptedToken String // for base64-encoded encrypted data
```

tests/e2e/formbricks/schema.zmodel (3)
50-50: Consider using a junction table for webhook-survey relationships.

The `surveyIds` array field may have performance implications when querying webhooks by survey ID. For better query performance and flexibility, consider using a junction table pattern similar to `TagsOnResponses`.
119-119: Consider adding an index on the `userId` field.

If you frequently query contacts by their external `userId`, adding an index would improve query performance.

Add this index to the Contact model:

```diff
 @@index([environmentId])
+@@index([userId])
```
1-1102: Consider monitoring query performance for complex relationships.

This schema has many interconnected models with complex relationships. While the indexing strategy is good, consider:
- Monitoring slow queries in production
- Adding composite indexes for common query patterns
- Implementing query result caching for frequently accessed data
- Using database views for complex aggregations
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (39)
- .github/workflows/publish-release.yml (0 hunks)
- package.json (1 hunks)
- packages/cli/package.json (3 hunks)
- packages/cli/src/actions/action-utils.ts (2 hunks)
- packages/cli/src/actions/db.ts (1 hunks)
- packages/cli/src/actions/generate.ts (2 hunks)
- packages/cli/src/actions/migrate.ts (3 hunks)
- packages/cli/src/index.ts (1 hunks)
- packages/cli/test/db.test.ts (1 hunks)
- packages/cli/test/generate.test.ts (1 hunks)
- packages/cli/test/init.test.ts (1 hunks)
- packages/cli/test/migrate.test.ts (1 hunks)
- packages/cli/test/utils.ts (1 hunks)
- packages/common-helpers/package.json (1 hunks)
- packages/create-zenstack/package.json (1 hunks)
- packages/eslint-config/package.json (1 hunks)
- packages/ide/vscode/package.json (1 hunks)
- packages/language/package.json (1 hunks)
- packages/runtime/package.json (1 hunks)
- packages/sdk/package.json (1 hunks)
- packages/sdk/src/schema/schema.ts (1 hunks)
- packages/sdk/src/ts-schema-generator.ts (8 hunks)
- packages/tanstack-query/package.json (1 hunks)
- packages/testtools/package.json (1 hunks)
- packages/testtools/src/index.ts (1 hunks)
- packages/testtools/src/project.ts (1 hunks)
- packages/testtools/src/schema.ts (2 hunks)
- packages/typescript-config/package.json (1 hunks)
- packages/zod/package.json (1 hunks)
- pnpm-workspace.yaml (1 hunks)
- samples/blog/package.json (2 hunks)
- tests/e2e/cal.com/cal-com.test.ts (1 hunks)
- tests/e2e/cal.com/schema.zmodel (1 hunks)
- tests/e2e/formbricks/formbricks.test.ts (1 hunks)
- tests/e2e/formbricks/schema.zmodel (1 hunks)
- tests/e2e/package.json (1 hunks)
- tests/e2e/trigger.dev/schema.zmodel (1 hunks)
- tests/e2e/trigger.dev/trigger-dev.test.ts (1 hunks)
- tests/e2e/vitest.config.ts (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/publish-release.yml
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build-test (20.x)
🔇 Additional comments (44)
packages/eslint-config/package.json (1)
3-3: LGTM

Nothing else changed besides the version; no further action required.
packages/typescript-config/package.json (1)
3-3: LGTM

Version bump only; dependencies remain workspace-pinned. ✅
packages/runtime/package.json (1)
3-3: Core package version updated successfully

No problems spotted. Ensure downstream packages depending on `@zenstackhq/runtime` pick up this exact version during publish.

packages/zod/package.json (1)
3-3: Version bump is consistent with the coordinated α-7 release – LGTM

No further action required.

packages/language/package.json (1)

4-4: Aligned version increment – looks good

Matches the mono-repo release cadence.

packages/create-zenstack/package.json (1)

3-3: Release version updated correctly

Change is straightforward and consistent.

packages/ide/vscode/package.json (1)
4-4: Bump VS Code extension version above current Marketplace release

The version in packages/ide/vscode/package.json was changed from 3.0.3 → 3.0.0-alpha.7, which is a SemVer downgrade. `vsce publish` (and the Marketplace) will reject any version that is not higher than the last published one.

Please confirm the current release version for zenstack.zenstack in the VS Code Marketplace and update the version field to something greater (for example, bump the patch):

```diff
- "version": "3.0.0-alpha.7",
+ "version": "3.0.4-alpha.7",
```

(or e.g. `3.1.0-alpha.7`, etc.)

Ensure this is done before cutting the release to avoid Marketplace publication errors.
pnpm-workspace.yaml (1)
5-5: Ensure test projects are private to avoid accidental publishing

Including `tests/**` in the workspace is useful, but any `package.json` under `tests/` should have `"private": true` to prevent `pnpm publish` / `npm publish` mishaps. Double-check the test packages or add `"publishConfig": { "access": "private" }` if needed.

samples/blog/package.json (2)
3-3: LGTM! Version bump is consistent across the monorepo.

The version update from `3.0.0-alpha.6` to `3.0.0-alpha.7` aligns with the coordinated release across all ZenStack packages.

21-21: Catalog configuration for Prisma is correctly set up.

pnpm-workspace.yaml defines `prisma: ^6.0.0` in the default catalog, and all package.json files (including samples/blog/package.json) reference `"prisma": "catalog:"` as intended.

- Verified `pnpm-workspace.yaml` contains `catalog: prisma: ^6.0.0`.
- Checked across the repo that `"prisma": "catalog:"` is used consistently.

No further action required.
tests/e2e/cal.com/schema.zmodel (1)
1-5: LGTM! PostgreSQL datasource properly configured.

The datasource configuration correctly sets up PostgreSQL with both standard and direct URLs, which is appropriate for connection pooling scenarios.
packages/sdk/package.json (1)
3-3: LGTM! Version bump aligned with monorepo release.

packages/common-helpers/package.json (1)

3-3: LGTM! Version bump aligned with monorepo release.

packages/testtools/package.json (1)

3-3: LGTM! Version bump aligned with monorepo release.

packages/testtools/src/index.ts (1)

1-1: LGTM! Clean API expansion.

The new export follows the established pattern and properly exposes the project module functionality to consumers of the testtools package.

packages/cli/package.json (3)

6-6: Version bump looks good.

The alpha version increment is appropriate for this feature release.

31-31: Dependency reordering is fine.

Moving `@zenstackhq/common-helpers` to the first position doesn't affect functionality.

46-52: New temporary file dependencies support CLI enhancements.

The addition of `tmp` and `@types/tmp` aligns with the temporary Prisma schema generation functionality mentioned in the PR objectives. The version ranges are appropriate.

packages/sdk/src/schema/schema.ts (1)
62-62: Array default values support looks good.

The addition of `unknown[]` to the default property type enables array default values as mentioned in the schema generation enhancements. While `unknown[]` is quite permissive, it provides the flexibility needed for varied default value scenarios.

tests/e2e/package.json (1)
1-11: Well-structured e2e test package configuration.

The package.json follows monorepo best practices with:
- Appropriate private flag for test packages
- Version consistency with the main project
- Proper workspace dependency on testtools
- Clean test script setup using vitest
tests/e2e/vitest.config.ts (1)
1-4: Clean Vitest configuration setup.

The configuration properly extends the base Vitest config using the mergeConfig pattern. The empty override configuration allows for future e2e-specific customizations while currently using the base setup.
packages/cli/test/generate.test.ts (6)
6-10: Well-structured test model.

The simple User model with `cuid()` default is appropriate for testing purposes and provides sufficient complexity to validate schema generation functionality.

13-18: Solid baseline test case.

The test correctly verifies that the default behavior generates a TypeScript schema file while not generating a Prisma schema file unless explicitly requested.

20-24: Good coverage of custom output directory option.

The test properly validates that the `--output` option is respected and files are generated in the specified directory.

33-37: Correct validation of save prisma schema flag.

The test appropriately verifies that the `--save-prisma-schema` flag creates the expected Prisma schema file.

26-31: Confirm temp-workspace cleanup in `createProject`

The `createProject` helper uses `createTestProject` (from `@zenstackhq/testtools`), but we couldn't find any local cleanup logic. To prevent leftover temp directories after tests:

- Verify that `createTestProject` automatically removes or reuses its workspace (e.g., via teardown hooks or on process exit).
- If it doesn't, consider adding explicit cleanup in your tests or enhancing the utility to delete the workspace after each run.
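One possible shape for such a safeguard, sketched here with Node's built-in `fs.mkdtempSync` and a process exit hook — this is an illustration only, not the actual `createTestProject` implementation (which uses the `tmp` package); the function and variable names are hypothetical:

```typescript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Track every temp workspace created during the test run.
const createdDirs: string[] = [];

function createTempWorkspace(): string {
    const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'zenstack-test-'));
    createdDirs.push(dir);
    return dir;
}

// Best-effort cleanup when the test process exits, so failed runs
// don't accumulate leftover directories in the OS temp folder.
process.on('exit', () => {
    for (const dir of createdDirs) {
        try {
            fs.rmSync(dir, { recursive: true, force: true });
        } catch {
            // ignore cleanup failures on exit
        }
    }
});

const ws = createTempWorkspace();
fs.writeFileSync(path.join(ws, 'schema.zmodel'), 'model User { id String @id }');
```

The `exit`-hook approach mirrors what `tmp.dirSync({ unsafeCleanup: true })` does internally, which is why the review asks to verify rather than assume it.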
39-43: No workspace isolation issue with custom schema path

The `--save-prisma-schema "../prisma/schema.prisma"` value is resolved against the output directory (`workDir/zenstack`), producing `workDir/prisma/schema.prisma`. This path stays inside the test workspace, so there is no need for additional isolation safeguards or changes.

Likely an incorrect or invalid review comment.
packages/cli/test/db.test.ts (1)
6-18: Clean and focused test implementation.

The test effectively validates the core functionality of the `db push` command by checking for the expected database file creation. The structure is consistent with other CLI tests and uses the same reliable test utilities.

tests/e2e/cal.com/cal-com.test.ts (1)

6-12: Proper e2e test implementation.

The test correctly uses the 'cal-com' project identifier and follows the established pattern for e2e schema generation testing.
packages/cli/src/index.ts (1)
52-58: Well-implemented CLI option additions.

The new `--silent` and `--save-prisma-schema` options are properly structured:

- The silent flag follows standard CLI conventions
- The save-prisma-schema option correctly uses optional parameter syntax `[path]` to allow both flag and value usage
- Both options have clear, descriptive help text
- Implementation follows existing Commander.js patterns in the codebase
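The `[path]` optional-value semantics can be illustrated with a minimal hand-rolled argv parser — a sketch of the behavior Commander provides, not Commander.js itself; the function name is hypothetical:

```typescript
// Parse a flag that may appear bare (`--save-prisma-schema`) or with a
// value (`--save-prisma-schema ./prisma/schema.prisma`), mirroring
// Commander's `[path]` optional-argument syntax.
function parseOptionalValueFlag(argv: string[], flag: string): boolean | string {
    const idx = argv.indexOf(flag);
    if (idx === -1) {
        return false; // flag absent
    }
    const next = argv[idx + 1];
    // A following token that is not itself a flag is treated as the value.
    if (next !== undefined && !next.startsWith('-')) {
        return next;
    }
    return true; // bare flag: caller falls back to a default path
}

console.log(parseOptionalValueFlag(['generate', '--save-prisma-schema'], '--save-prisma-schema')); // true
console.log(parseOptionalValueFlag(['generate', '--save-prisma-schema', './s.prisma'], '--save-prisma-schema')); // ./s.prisma
console.log(parseOptionalValueFlag(['generate'], '--save-prisma-schema')); // false
```

This three-state result (`false` / `true` / string) matches how the generate action later distinguishes "not requested", "requested at default path", and "requested at an explicit path".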
packages/cli/src/actions/action-utils.ts (1)
2-2: LGTM! Import reorganization looks clean.

The import statements are well-organized and the new imports are appropriately added.
Also applies to: 4-6
packages/cli/test/init.test.ts (1)
1-13: LGTM! Well-structured CLI test.

The test is well-written with proper setup using temporary directories and clear verification of expected behavior. The use of `unsafeCleanup: true` ensures proper cleanup after test execution.

packages/cli/test/migrate.test.ts (1)

1-41: LGTM! Comprehensive migrate command test coverage.

The test suite provides excellent coverage of the migrate command workflows:

- `migrate dev` with database and migration directory creation
- `migrate reset` with database persistence verification
- `migrate deploy` with database recreation
- `migrate status` command execution

The tests are well-structured with clear setup, execution, and verification phases.
packages/cli/test/utils.ts (1)
12-18: LGTM!

The `createProject` function implementation is clean and follows best practices with proper directory creation and flexible schema content handling.

packages/cli/src/actions/generate.ts (1)

32-40: Well-implemented Prisma schema saving logic!

The implementation correctly handles both boolean and string types for the `savePrismaSchema` option, with proper path resolution and directory creation.

packages/testtools/src/schema.ts (1)

37-38: Good refactoring to centralize test project setup!

Replacing manual temp directory setup with `createTestProject()` improves maintainability and reduces code duplication.

packages/cli/src/actions/db.ts (1)

22-44: Excellent implementation of temporary schema handling!

The refactored `runPush` function properly handles temporary Prisma schema generation with appropriate error handling and cleanup in the finally block.

packages/sdk/src/ts-schema-generator.ts (3)

287-287: Good refactoring for consistency.

Using the dedicated `generateFieldTypeLiteral` helper method improves code maintainability and ensures consistent field type literal generation across the codebase.

633-661: Well-implemented deduplication logic.

The addition of a `seenKeys` Set effectively prevents duplicate entries when the same field combination appears in both `@@id` and `@@unique` attributes. This is a good defensive programming practice.
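The pattern can be sketched in isolation — a simplified illustration with hypothetical names, not the actual generator code:

```typescript
// Deduplicate field combinations that may be declared by both @@id and
// @@unique. Joining with a separator yields a stable lookup key per combo.
function dedupeFieldCombos(combos: string[][]): string[][] {
    const seenKeys = new Set<string>();
    const result: string[][] = [];
    for (const combo of combos) {
        const key = combo.join('|');
        if (seenKeys.has(key)) {
            continue; // same combination already emitted by an earlier attribute
        }
        seenKeys.add(key);
        result.push(combo);
    }
    return result;
}

// The same composite key declared via both @@id and @@unique collapses to one entry.
console.log(dedupeFieldCombos([['tenantId', 'email'], ['tenantId', 'email'], ['email']]));
```

Using a joined-string key keeps lookups O(1) per combination, which is what makes the `Set` approach preferable to pairwise array comparison.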
688-700: Clean implementation of field type literal generation.

The `generateFieldTypeLiteral` method effectively centralizes the logic for generating field type literals with proper validation and fallback handling.

tests/e2e/formbricks/schema.zmodel (4)
711-735: Good security implementation for API keys.

The API key implementation follows security best practices with hashed keys and granular permission management through the `ApiKeyEnvironment` junction table.

886-886: Verify if unique constraint on `url` aligns with requirements.

The unique constraint on `url` prevents multiple short URLs from pointing to the same destination. This might limit use cases where different short URLs are needed for the same destination (e.g., for campaign tracking).

978-978: Appropriate use of pgvector for embeddings.

The implementation correctly uses Prisma's `Unsupported` type for pgvector integration. The 512-dimension vectors are suitable for most embedding models.

Also applies to: 1026-1026

1033-1101: Well-designed team-based access control.

The team permission structure follows RBAC best practices with clear separation of concerns between team membership (`TeamUser`) and project access (`ProjectTeam`). The composite primary keys and cascading deletes are properly configured.
```typescript
describe('Formbricks e2e tests', () => {
    it('has a working schema', async () => {
        await expect(
            generateTsSchema(fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'), 'postgresql', 'cal-com'),
        ).resolves.toBeTruthy();
    });
});
```
💡 Verification agent
🧩 Analysis chain
Verify project identifier consistency.
The test is for Formbricks but uses 'cal-com' as the project identifier in the generateTsSchema call. This seems inconsistent and might be a copy-paste error from the Cal.com test.
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check if 'cal-com' project identifier is intentional for Formbricks test
# Expected: Verify if this is correct or should be 'formbricks'
rg -A 5 -B 5 "generateTsSchema.*cal-com" tests/e2e/
```

Length of output: 2027
Replace project identifier in Formbricks e2e test
The Formbricks schema test currently passes 'cal-com' to generateTsSchema, which is a copy-paste from the Cal.com test. It should use the Formbricks identifier to match its own schema.
• tests/e2e/formbricks/formbricks.test.ts, around line 6
Change the third argument of generateTsSchema from 'cal-com' to 'formbricks'.
Suggested diff:
```diff
- await expect(
-     generateTsSchema(
-         fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'),
-         'postgresql',
-         'cal-com',
-     ),
- ).resolves.toBeTruthy();
+ await expect(
+     generateTsSchema(
+         fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'),
+         'postgresql',
+         'formbricks',
+     ),
+ ).resolves.toBeTruthy();
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/e2e/formbricks/formbricks.test.ts around lines 6 to 12, the third
argument passed to generateTsSchema is incorrectly set to 'cal-com', which is
from a different project. Change this argument to 'formbricks' to correctly
match the Formbricks schema and ensure the test validates the right project
identifier.
```typescript
export async function generateTempPrismaSchema(zmodelPath: string) {
    const model = await loadSchemaDocument(zmodelPath);
    const prismaSchema = await new PrismaSchemaGenerator(model).generate();
    const prismaSchemaFile = path.resolve(path.dirname(zmodelPath), '~schema.prisma');
    fs.writeFileSync(prismaSchemaFile, prismaSchema);
    return prismaSchemaFile;
}
```
Address potential race conditions and improve error handling.
The current implementation has several concerns:
- Race condition risk: Using a fixed filename `~schema.prisma` could cause conflicts if multiple processes run simultaneously in the same directory.
- Synchronous file operations: Using `fs.writeFileSync` in an async function context is inconsistent and could block the event loop.
- Missing error handling: No error handling for file write operations.
Consider this improved implementation:
```diff
 export async function generateTempPrismaSchema(zmodelPath: string) {
     const model = await loadSchemaDocument(zmodelPath);
     const prismaSchema = await new PrismaSchemaGenerator(model).generate();
-    const prismaSchemaFile = path.resolve(path.dirname(zmodelPath), '~schema.prisma');
-    fs.writeFileSync(prismaSchemaFile, prismaSchema);
+    const prismaSchemaFile = path.resolve(path.dirname(zmodelPath), `~schema-${Date.now()}-${Math.random().toString(36).substr(2, 9)}.prisma`);
+    await fs.promises.writeFile(prismaSchemaFile, prismaSchema);
     return prismaSchemaFile;
 }
```
}This addresses the race condition with a unique filename and uses async file operations for consistency.
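As a side note, Node's built-in `crypto.randomUUID()` offers a simpler collision-resistant name than combining `Date.now()` with `Math.random()` — a minimal sketch with a hypothetical helper name, not the code in this PR:

```typescript
import crypto from 'node:crypto';
import path from 'node:path';

// Build a uniquely named temp schema path; the UUID makes concurrent
// invocations in the same directory collision-free.
function tempSchemaPath(dir: string): string {
    return path.join(dir, `~schema-${crypto.randomUUID()}.prisma`);
}

const a = tempSchemaPath('/tmp/project');
const b = tempSchemaPath('/tmp/project');
console.log(a !== b); // true: two calls never collide
```

Whichever naming scheme is used, the caller still needs the `finally`-based deletion the CLI already performs, since unique names mean stale files are no longer overwritten on the next run.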
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export async function generateTempPrismaSchema(zmodelPath: string) {
    const model = await loadSchemaDocument(zmodelPath);
    const prismaSchema = await new PrismaSchemaGenerator(model).generate();
    const prismaSchemaFile = path.resolve(
        path.dirname(zmodelPath),
        `~schema-${Date.now()}-${Math.random().toString(36).substr(2, 9)}.prisma`,
    );
    await fs.promises.writeFile(prismaSchemaFile, prismaSchema);
    return prismaSchemaFile;
}
```
🤖 Prompt for AI Agents
In packages/cli/src/actions/action-utils.ts around lines 47 to 53, the function
generateTempPrismaSchema uses a fixed filename '~schema.prisma' which can cause
race conditions if multiple processes run concurrently in the same directory. It
also uses synchronous file writing which blocks the event loop and lacks error
handling for file operations. To fix this, generate a unique temporary filename
for each call to avoid conflicts, replace fs.writeFileSync with the asynchronous
fs.promises.writeFile to maintain async consistency, and wrap the file writing
in a try-catch block to handle and propagate any errors properly.
```typescript
describe('Trigger.dev e2e tests', () => {
    it('has a working schema', async () => {
        await expect(
            generateTsSchema(fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'), 'postgresql', 'cal-com'),
        ).resolves.toBeTruthy();
    });
});
```
Fix inconsistent project identifier.
The test is for Trigger.dev but uses 'cal-com' as the project identifier in the generateTsSchema call. This inconsistency could cause confusion about what's actually being tested.
Apply this fix to use the correct project identifier:
```diff
 describe('Trigger.dev e2e tests', () => {
     it('has a working schema', async () => {
         await expect(
-            generateTsSchema(fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'), 'postgresql', 'cal-com'),
+            generateTsSchema(fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'), 'postgresql', 'trigger-dev'),
         ).resolves.toBeTruthy();
     });
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
describe('Trigger.dev e2e tests', () => {
    it('has a working schema', async () => {
        await expect(
            generateTsSchema(
                fs.readFileSync(path.join(__dirname, 'schema.zmodel'), 'utf8'),
                'postgresql',
                'trigger-dev',
            ),
        ).resolves.toBeTruthy();
    });
});
```
🤖 Prompt for AI Agents
In tests/e2e/trigger.dev/trigger-dev.test.ts around lines 6 to 12, the project
identifier used in the generateTsSchema call is 'cal-com', which is inconsistent
with the Trigger.dev test context. Replace 'cal-com' with the correct project
identifier that matches Trigger.dev to maintain consistency and clarity in the
test.
```typescript
export function createTestProject() {
    const { name: workDir } = tmp.dirSync({ unsafeCleanup: true });

    fs.mkdirSync(path.join(workDir, 'node_modules'));

    // symlink all entries from "node_modules"
    const nodeModules = fs.readdirSync(path.join(__dirname, '../node_modules'));
    for (const entry of nodeModules) {
        if (entry.startsWith('@zenstackhq')) {
            continue;
        }
        fs.symlinkSync(
            path.join(__dirname, '../node_modules', entry),
            path.join(workDir, 'node_modules', entry),
            'dir',
        );
    }

    // in addition, symlink zenstack packages
    const zenstackPackages = ['language', 'sdk', 'runtime', 'cli'];
    fs.mkdirSync(path.join(workDir, 'node_modules/@zenstackhq'));
    for (const pkg of zenstackPackages) {
        fs.symlinkSync(
            path.join(__dirname, `../../${pkg}`),
            path.join(workDir, `node_modules/@zenstackhq/${pkg}`),
            'dir',
        );
    }

    fs.writeFileSync(
        path.join(workDir, 'package.json'),
        JSON.stringify(
            {
                name: 'test',
                version: '1.0.0',
                type: 'module',
            },
            null,
            4,
        ),
    );

    fs.writeFileSync(
        path.join(workDir, 'tsconfig.json'),
        JSON.stringify(
            {
                compilerOptions: {
                    module: 'ESNext',
                    target: 'ESNext',
                    moduleResolution: 'Bundler',
                    esModuleInterop: true,
                    skipLibCheck: true,
                    strict: true,
                },
                include: ['**/*.ts'],
            },
            null,
            4,
        ),
    );

    return workDir;
}
```
🛠️ Refactor suggestion
Improve error handling and path robustness.
The function is well-structured but could be more robust:
- Missing error handling: symlink operations can fail and should be handled gracefully.
- Brittle relative paths: the hardcoded relative paths (`../../${pkg}`) assume a specific project structure that could change.
Consider adding error handling and more robust path resolution:
```diff
+import { fileURLToPath } from 'node:url';

 export function createTestProject() {
     const { name: workDir } = tmp.dirSync({ unsafeCleanup: true });
     fs.mkdirSync(path.join(workDir, 'node_modules'));

+    const currentDir = path.dirname(fileURLToPath(import.meta.url));
+    const nodeModulesPath = path.join(currentDir, '../node_modules');

     // symlink all entries from "node_modules"
-    const nodeModules = fs.readdirSync(path.join(__dirname, '../node_modules'));
+    if (!fs.existsSync(nodeModulesPath)) {
+        throw new Error(`node_modules not found at ${nodeModulesPath}`);
+    }
+    const nodeModules = fs.readdirSync(nodeModulesPath);
     for (const entry of nodeModules) {
         if (entry.startsWith('@zenstackhq')) {
             continue;
         }
-        fs.symlinkSync(
-            path.join(__dirname, '../node_modules', entry),
-            path.join(workDir, 'node_modules', entry),
-            'dir',
-        );
+        try {
+            fs.symlinkSync(
+                path.join(nodeModulesPath, entry),
+                path.join(workDir, 'node_modules', entry),
+                'dir',
+            );
+        } catch (error) {
+            console.warn(`Failed to symlink ${entry}:`, error);
+        }
     }
```

This makes the code more resilient to project structure changes and handles symlink failures gracefully.
🤖 Prompt for AI Agents
In packages/testtools/src/project.ts around lines 5 to 67, improve robustness by
adding try-catch blocks around all fs.symlinkSync calls to handle potential
errors gracefully without crashing. Replace hardcoded relative paths like
../../${pkg} with path.resolve or path.join using __dirname to dynamically
compute absolute paths, ensuring the code adapts to project structure changes.
This will make symlink creation safer and path resolution more reliable.
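The failure-tolerant symlinking the review recommends can be sketched in isolation. Everything below (the `safeSymlinkAll` helper name, the temp directories) is hypothetical and not part of the PR; it only illustrates the try/catch-per-entry pattern:

```typescript
// Sketch of failure-tolerant symlinking for a test sandbox, per the review
// suggestion. The helper name and directory layout here are hypothetical;
// the real code lives in packages/testtools/src/project.ts.
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

export function safeSymlinkAll(srcDir: string, destDir: string): string[] {
    const failed: string[] = [];
    for (const entry of fs.readdirSync(srcDir)) {
        try {
            // The 'dir' type matters on Windows; on POSIX it is ignored.
            fs.symlinkSync(path.join(srcDir, entry), path.join(destDir, entry), 'dir');
        } catch {
            // Collect failures instead of aborting the whole setup.
            failed.push(entry);
        }
    }
    return failed;
}

// Demo: link one entry from a temp "source" tree into a temp "dest" tree.
const src = fs.mkdtempSync(path.join(os.tmpdir(), 'src-'));
const dest = fs.mkdtempSync(path.join(os.tmpdir(), 'dest-'));
fs.mkdirSync(path.join(src, 'pkg-a'));
const failures = safeSymlinkAll(src, dest);
```

Returning the list of failed entries lets the caller decide whether a missing link is fatal for a given test, rather than hard-coding that policy in the helper.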
```ts
export function runCli(command: string, cwd: string) {
    const cli = path.join(__dirname, '../dist/index.js');
    execSync(`node ${cli} ${command}`, { cwd });
}
```
Potential command injection vulnerability
The command parameter is directly interpolated into the shell command without escaping, which could lead to command injection if untrusted input is passed.
Consider using `execFileSync` with an arguments array instead:

```diff
-export function runCli(command: string, cwd: string) {
-    const cli = path.join(__dirname, '../dist/index.js');
-    execSync(`node ${cli} ${command}`, { cwd });
-}
+export function runCli(command: string, cwd: string) {
+    const cli = path.join(__dirname, '../dist/index.js');
+    const args = command.split(' ');
+    execFileSync('node', [cli, ...args], { cwd });
+}
```

Or use a more robust command-parsing approach if complex arguments are needed.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/cli/test/utils.ts around lines 20 to 23, the runCli function uses
execSync with a command string that directly interpolates the command parameter,
risking command injection. To fix this, replace execSync with execFileSync and
pass the CLI path as the first argument and the command split into an arguments
array as the second argument, avoiding shell interpolation. This change ensures
safer execution by properly escaping arguments and preventing injection
vulnerabilities.
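To illustrate why the argument-array form resists injection, here is a minimal, self-contained sketch (the `runNodeScript` helper is hypothetical, not part of the CLI): a shell-metacharacter payload passed as an argument arrives at the child process as a literal string.

```typescript
// Minimal sketch of shell-free invocation with execFileSync. Because the
// arguments are passed as an array, no shell ever parses them, so
// metacharacters like ';' stay literal.
import { execFileSync } from 'node:child_process';

export function runNodeScript(args: string[]): string {
    // 'node -e' echoes its argv so we can observe what the child received.
    return execFileSync(
        process.execPath,
        ['-e', 'console.log(JSON.stringify(process.argv.slice(1)))', ...args],
        { encoding: 'utf8' },
    ).trim();
}

// A payload that would be dangerous under string interpolation:
const received = JSON.parse(runNodeScript(['generate', '--schema', 'a b; echo pwned']));
```

With `execSync('node ${cli} ; echo pwned')`-style interpolation, the shell would split and execute the payload; here it is delivered as a single argv entry.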
```ts
    if (Array.isArray(defaultValue)) {
        objectFields.push(
            ts.factory.createPropertyAssignment(
                'default',
                ts.factory.createArrayLiteralExpression(
                    defaultValue.map((item) => this.createLiteralNode(item as any)),
                ),
            ),
        );
    } else {
        objectFields.push(
            ts.factory.createPropertyAssignment('default', this.createLiteralNode(defaultValue)),
        );
    }
}
```
🛠️ Refactor suggestion
Avoid casting to `any`; preserve type safety.
The cast to `any` on line 374 undermines TypeScript's type safety. Since `defaultValue` can be `unknown[]`, consider preserving the type information.
```diff
 if (Array.isArray(defaultValue)) {
     objectFields.push(
         ts.factory.createPropertyAssignment(
             'default',
             ts.factory.createArrayLiteralExpression(
-                defaultValue.map((item) => this.createLiteralNode(item as any)),
+                defaultValue.map((item) => this.createLiteralNode(item)),
             ),
         ),
     );
```
);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 if (Array.isArray(defaultValue)) {
     objectFields.push(
         ts.factory.createPropertyAssignment(
             'default',
             ts.factory.createArrayLiteralExpression(
-                defaultValue.map((item) => this.createLiteralNode(item as any)),
+                defaultValue.map((item) => this.createLiteralNode(item)),
             ),
         ),
     );
 } else {
     objectFields.push(
         ts.factory.createPropertyAssignment('default', this.createLiteralNode(defaultValue)),
     );
 }
```
🤖 Prompt for AI Agents
In packages/sdk/src/ts-schema-generator.ts around lines 369 to 383, the code
casts items in defaultValue to any, which breaks type safety. To fix this,
remove the any cast and ensure the createLiteralNode method accepts the correct
type by properly typing defaultValue as unknown[] and passing items without
casting. Adjust the method signature or add type guards if necessary to maintain
type safety while handling unknown array elements.
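The underlying fix is a signature question: if the literal-construction helper accepts `unknown` and narrows internally, callers mapping over `unknown[]` need no cast. A standalone sketch, where the `Literal` type and `createLiteralNode` are simplified stand-ins rather than the TypeScript compiler API:

```typescript
// Hypothetical stand-in for createLiteralNode: by accepting `unknown` and
// narrowing with typeof checks, the caller no longer needs an `as any`
// cast when mapping over an unknown[] of default values.
type Literal = { kind: 'string' | 'number' | 'boolean'; text: string };

function createLiteralNode(value: unknown): Literal {
    switch (typeof value) {
        case 'string':
            return { kind: 'string', text: JSON.stringify(value) };
        case 'number':
            return { kind: 'number', text: String(value) };
        case 'boolean':
            return { kind: 'boolean', text: String(value) };
        default:
            throw new Error(`Unsupported literal type: ${typeof value}`);
    }
}

const defaults: unknown[] = ['draft', 0, true];
// No cast needed: each item is `unknown`, which the function accepts.
const nodes = defaults.map((item) => createLiteralNode(item));
```

Unsupported shapes (objects, functions) fail loudly at the narrowing point instead of producing a malformed node downstream.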
```diff
 private getMappedValue(
     expr: Expression,
     fieldType: DataModelFieldType,
 ): string | number | boolean | unknown[] | { call: string; args: any[] } | { authMember: string[] } | undefined {
     if (isLiteralExpr(expr)) {
         const lit = (expr as LiteralExpr).value;
         return fieldType.type === 'Boolean'
             ? (lit as boolean)
-            : ['Int', 'Float', 'Decimal', 'BigInt'].includes(field.type.type!)
+            : ['Int', 'Float', 'Decimal', 'BigInt'].includes(fieldType.type!)
               ? Number(lit)
               : lit;
-    } else if (isReferenceExpr(defaultValue) && isEnumField(defaultValue.target.ref)) {
-        return defaultValue.target.ref.name;
-    } else if (isInvocationExpr(defaultValue)) {
+    } else if (isArrayExpr(expr)) {
+        return expr.items.map((item) => this.getMappedValue(item, fieldType));
+    } else if (isReferenceExpr(expr) && isEnumField(expr.target.ref)) {
+        return expr.target.ref.name;
+    } else if (isInvocationExpr(expr)) {
         return {
-            call: defaultValue.function.$refText,
-            args: defaultValue.args.map((arg) => this.getLiteral(arg.value)),
+            call: expr.function.$refText,
+            args: expr.args.map((arg) => this.getLiteral(arg.value)),
         };
-    } else if (this.isAuthMemberAccess(defaultValue)) {
+    } else if (this.isAuthMemberAccess(expr)) {
         return {
-            authMember: this.getMemberAccessChain(defaultValue),
+            authMember: this.getMemberAccessChain(expr),
         };
     } else {
-        throw new Error(`Unsupported default value type for field ${field.name}`);
+        throw new Error(`Unsupported default value type for ${expr.$type}`);
     }
 }
```
🛠️ Refactor suggestion
Enhance error handling and validation.
Two improvements to consider:
- Validate that `expr.function.ref` exists before using `$refText`
- Make the error message more descriptive
```diff
 } else if (isInvocationExpr(expr)) {
+    if (!expr.function.ref) {
+        throw new Error(`Invalid invocation expression: missing function reference`);
+    }
     return {
         call: expr.function.$refText,
         args: expr.args.map((arg) => this.getLiteral(arg.value)),
     };
 } else if (this.isAuthMemberAccess(expr)) {
     return {
         authMember: this.getMemberAccessChain(expr),
     };
 } else {
-    throw new Error(`Unsupported default value type for ${expr.$type}`);
+    throw new Error(`Unsupported default value expression type: ${expr.$type} at field ${fieldType.type}`);
 }
```
```ts
private getMappedValue(
    expr: Expression,
    fieldType: DataModelFieldType,
): string | number | boolean | unknown[] | { call: string; args: any[] } | { authMember: string[] } | undefined {
    if (isLiteralExpr(expr)) {
        const lit = (expr as LiteralExpr).value;
        return fieldType.type === 'Boolean'
            ? (lit as boolean)
            : ['Int', 'Float', 'Decimal', 'BigInt'].includes(fieldType.type!)
              ? Number(lit)
              : lit;
    } else if (isArrayExpr(expr)) {
        return expr.items.map((item) => this.getMappedValue(item, fieldType));
    } else if (isReferenceExpr(expr) && isEnumField(expr.target.ref)) {
        return expr.target.ref.name;
    } else if (isInvocationExpr(expr)) {
        if (!expr.function.ref) {
            throw new Error(`Invalid invocation expression: missing function reference`);
        }
        return {
            call: expr.function.$refText,
            args: expr.args.map((arg) => this.getLiteral(arg.value)),
        };
    } else if (this.isAuthMemberAccess(expr)) {
        return {
            authMember: this.getMemberAccessChain(expr),
        };
    } else {
        throw new Error(
            `Unsupported default value expression type: ${expr.$type} at field ${fieldType.type}`,
        );
    }
}
```
🤖 Prompt for AI Agents
In packages/sdk/src/ts-schema-generator.ts between lines 450 and 477, enhance
error handling in the getMappedValue method by first checking if
expr.function.ref exists before accessing expr.function.$refText to avoid
potential runtime errors. Additionally, improve the thrown error message in the
else block to be more descriptive by including details about the unsupported
expression type and context to aid debugging.
```prisma
generator client {
    provider        = "prisma-client-js"
    binaryTargets   = ["native", "debian-openssl-1.1.x"]
    previewFeatures = ["tracing", "metrics"]
}
```
💡 Verification agent
🧩 Analysis chain
Verify the OpenSSL version compatibility.
The binary target debian-openssl-1.1.x refers to an older OpenSSL version. Modern Debian systems typically use OpenSSL 3.x. This could cause compatibility issues or security vulnerabilities.
🌐 Web query:
What is the latest recommended Prisma binary target for Debian systems with current OpenSSL versions?
💡 Result:
To ensure compatibility between Prisma and your Debian system's OpenSSL version, it's essential to specify the correct binary target in your schema.prisma file. Prisma provides binary targets tailored for different OpenSSL versions on Debian systems.
Identifying Your OpenSSL Version:
First, determine the OpenSSL version installed on your Debian system by running:
```shell
openssl version
```

This command will output the OpenSSL version, such as `OpenSSL 3.0.10 1 Aug 2023`.
Configuring Prisma with the Appropriate Binary Target:
Based on your OpenSSL version, update the binaryTargets in your schema.prisma file as follows:
- For OpenSSL 3.0.x:

```prisma
generator client {
    provider      = "prisma-client-js"
    binaryTargets = ["native", "debian-openssl-3.0.x"]
}
```

- For OpenSSL 3.2.x:

```prisma
generator client {
    provider      = "prisma-client-js"
    binaryTargets = ["native", "debian-openssl-3.2.x"]
}
```

After updating the schema.prisma file, regenerate the Prisma Client by running:

```shell
npx prisma generate
```

This configuration ensures that Prisma uses the correct query engine binary compatible with your system's OpenSSL version.
Additional Considerations:
- Prisma Version: Ensure you're using a Prisma version that supports your OpenSSL version. For instance, Prisma added support for OpenSSL 3.0.x in version 3.13.0. (stackoverflow.com)
- System Compatibility: If you encounter issues related to OpenSSL versions, consider updating your Prisma and OpenSSL packages to the latest stable releases.
By configuring Prisma with the appropriate binary target matching your Debian system's OpenSSL version, you can achieve optimal compatibility and performance.
Update Prisma binaryTargets to match your Debian system’s OpenSSL 3.x
Your schema currently pins an older OpenSSL 1.1 binary target, which can lead to compatibility or security issues on modern Debian releases. Please:
- Check your host's OpenSSL version:

  ```shell
  openssl version
  ```

- In `tests/e2e/trigger.dev/schema.zmodel`, replace `debian-openssl-1.1.x` with the matching 3.x target:

  ```diff
   generator client {
       provider        = "prisma-client-js"
  -    binaryTargets   = ["native", "debian-openssl-1.1.x"]
  +    binaryTargets   = [
  +        "native",
  +        "debian-openssl-3.0.x" // use "debian-openssl-3.2.x" if your system reports OpenSSL 3.2.x
  +    ]
       previewFeatures = ["tracing", "metrics"]
   }
  ```

- Run `npx prisma generate` to regenerate the client.
- Ensure you're on Prisma ≥ 3.13.0 to support OpenSSL 3.x.
🤖 Prompt for AI Agents
In tests/e2e/trigger.dev/schema.zmodel around lines 7 to 11, the Prisma
generator's binaryTargets includes "debian-openssl-1.1.x", which is outdated for
modern Debian systems using OpenSSL 3.x. Check your system's OpenSSL version
with "openssl version" and update the binaryTargets to the appropriate OpenSSL
3.x target (e.g., "debian-openssl-3.0.x"). After updating, run "npx prisma
generate" to regenerate the client and ensure your Prisma version is at least
3.13.0 to support OpenSSL 3.x.
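The version-to-target mapping described above can be captured in a small helper. This is an illustrative sketch, not an official Prisma API; the target-name pattern `debian-openssl-<major>.<minor>.x` mirrors the names used in the discussion above:

```typescript
// Illustrative helper mapping `openssl version` output to a Prisma Debian
// binary target name (debian-openssl-1.1.x, debian-openssl-3.0.x, ...).
// Treat the mapping as a sketch, not an exhaustive or official list.
function prismaDebianTarget(opensslOutput: string): string {
    const match = opensslOutput.match(/OpenSSL (\d+)\.(\d+)/);
    if (!match) {
        throw new Error(`Cannot parse OpenSSL version from: ${opensslOutput}`);
    }
    return `debian-openssl-${match[1]}.${match[2]}.x`;
}

const target = prismaDebianTarget('OpenSSL 3.0.10 1 Aug 2023');
```

A setup script could feed the output of `openssl version` through this helper and fail fast if the computed target is not in the schema's `binaryTargets` list.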
```zmodel
    welcomeCard                     Json      @default("{\"enabled\": false}")
    /// [SurveyQuestions]
    questions                       Json      @default("[]")
    /// [SurveyEnding]
    endings                         Json[]    @default([])
    thankYouCard                    Json?     // deprecated
    /// [SurveyHiddenFields]
    hiddenFields                    Json      @default("{\"enabled\": false}")
    /// [SurveyVariables]
    variables                       Json      @default("[]")
    responses                       Response[]
    displayOption                   displayOptions @default(displayOnce)
    recontactDays                   Int?
    displayLimit                    Int?
    triggers                        SurveyTrigger[]
    /// [SurveyInlineTriggers]
    inlineTriggers                  Json?
    attributeFilters                SurveyAttributeFilter[]
    displays                        Display[]
    autoClose                       Int?
    autoComplete                    Int?
    delay                           Int       @default(0)
    runOnDate                       DateTime?
    closeOnDate                     DateTime?
    /// [SurveyClosedMessage]
    surveyClosedMessage             Json?
    segmentId                       String?
    segment                         Segment?  @relation(fields: [segmentId], references: [id])

    /// [SurveyProjectOverwrites]
    projectOverwrites               Json?

    /// [SurveyStyling]
    styling                         Json?

    /// [SurveySingleUse]
    singleUse                       Json?     @default("{\"enabled\": false, \"isEncrypted\": true}")

    /// [SurveyVerifyEmail]
    verifyEmail                     Json?     // deprecated
    isVerifyEmailEnabled            Boolean   @default(false)
    isSingleResponsePerEmailEnabled Boolean   @default(false)
    isBackButtonHidden              Boolean   @default(false)
    pin                             String?
    resultShareKey                  String?   @unique
    displayPercentage               Decimal?
    languages                       SurveyLanguage[]
    showLanguageSwitch              Boolean?
    documents                       Document[]
    followUps                       SurveyFollowUp[]
    /// [SurveyRecaptcha]
    recaptcha                       Json?     @default("{\"enabled\": false, \"threshold\":0.1}")
```
🛠️ Refactor suggestion
Consider implementing a deprecation strategy for legacy fields.
The schema contains deprecated fields (thankYouCard, verifyEmail) that should be removed. Additionally, the extensive use of Json fields bypasses Prisma's type safety. Consider:
- Creating a migration plan to remove deprecated fields
- Using Prisma's type-safe approach with proper relations instead of JSON where feasible
- If JSON is necessary, ensure strict validation at the application layer
🤖 Prompt for AI Agents
In tests/e2e/formbricks/schema.zmodel between lines 365 and 416, deprecated
fields thankYouCard and verifyEmail should be removed following a clear
migration plan to avoid breaking changes. Replace Json fields with proper Prisma
relations or typed models wherever possible to leverage type safety. For Json
fields that must remain, implement strict validation in the application code to
ensure data integrity. This approach improves maintainability and reduces
runtime errors.
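For the "strict validation at the application layer" point, here is a minimal sketch of a runtime guard for one of the `Json` defaults. The `WelcomeCard` shape matches the `@default("{\"enabled\": false}")` value in the schema above; the type and function names are hypothetical:

```typescript
// Sketch of an application-layer guard for an untyped Json column, as the
// review suggests. Since Prisma's Json type carries no static shape, a
// narrow parse function restores some safety at the boundary.
interface WelcomeCard {
    enabled: boolean;
}

function parseWelcomeCard(raw: unknown): WelcomeCard {
    // Accept either a JSON string (as stored in the default) or an object.
    const value: unknown = typeof raw === 'string' ? JSON.parse(raw) : raw;
    if (
        typeof value !== 'object' ||
        value === null ||
        typeof (value as Record<string, unknown>)['enabled'] !== 'boolean'
    ) {
        throw new Error('Invalid welcomeCard payload');
    }
    return value as WelcomeCard;
}

const card = parseWelcomeCard('{"enabled": false}');
```

In a real codebase a schema-validation library would scale better than hand-written guards, but the boundary principle is the same: validate once on read, then work with a typed value.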
```prisma
generator client {
    provider        = "prisma-client-js"
    previewFeatures = ["postgresqlExtensions"]
}
```
💡 Verification agent
🧩 Analysis chain
Consider the stability implications of using preview features.
The postgresqlExtensions preview feature may have breaking changes in future Prisma versions. Ensure your team is prepared to handle potential migration issues when upgrading Prisma.
🌐 Web query:
What is the current stability status of the postgresqlExtensions preview feature in Prisma?
💡 Result:
As of June 2025, the postgresqlExtensions preview feature in Prisma has been discontinued. This decision was made due to its complexity and limited value. Instead of this feature, Prisma plans to support specific PostgreSQL extensions, such as PostGIS, through focused, independent efforts. (github.com)
If you're currently using the postgresqlExtensions feature, it's recommended to transition to alternative methods for managing PostgreSQL extensions in your Prisma schema. Prisma's documentation provides guidance on how to represent PostgreSQL extensions in your schema and manage them using Prisma Migrate. (prismagraphql.com)
For more details on the discontinuation and Prisma's roadmap for PostgreSQL extensions, you can refer to the official announcement. (github.com)
Citations:
- 1: Preview Features Update 2025 prisma/prisma#26136
- 2: https://www.prismagraphql.com/docs/orm/prisma-schema/postgresql-extensions?utm_source=openai
- 3: Preview Features Update 2025 prisma/prisma#26136
Replace discontinued postgresqlExtensions preview feature
The postgresqlExtensions preview feature was officially discontinued in June 2025 and will no longer be supported. Please remove this setting and configure any required PostgreSQL extensions (e.g. pgvector, PostGIS) using Prisma Migrate and the documented approach:
• Location: tests/e2e/formbricks/schema.zmodel, line 9
• Change:
```diff
 generator client {
     provider        = "prisma-client-js"
-    previewFeatures = ["postgresqlExtensions"]
+    // removed deprecated preview feature; configure extensions per https://www.prismagraphql.com/docs/orm/prisma-schema/postgresql-extensions
 }
```
🤖 Prompt for AI Agents
In tests/e2e/formbricks/schema.zmodel at line 9, remove the deprecated
previewFeatures setting "postgresqlExtensions" entirely. Instead, configure any
needed PostgreSQL extensions like pgvector or PostGIS through Prisma Migrate
following the official documentation. This means deleting the line
previewFeatures = ["postgresqlExtensions"] and ensuring extension setup is
handled outside this previewFeatures array.
Summary by CodeRabbit
New Features
- --silent) and to save the generated Prisma schema to a custom or default location.

Bug Fixes

Tests

- generate, db, migrate, init) and end-to-end schema validation for real-world projects (Cal.com, Formbricks, Trigger.dev).

Chores

- 3.0.0-alpha.7 across all packages.

Documentation