ztd-cli is a SQL-first CLI for feature-first application development. It treats handwritten SQL (DDL and named-parameter DML) as the source of truth and scaffolds feature-local layouts.
When SQL changes introduce inconsistencies, the toolchain surfaces them as type errors and test failures before they ship.
The focus is SQL maintainability: keep schema, queries, specs, and tests close to each feature, and let the toolchain show you what broke when the SQL changed.
- DDL stays the source of truth for the shape you test.
- You can keep the app and deployment databases out of the inner loop while still testing against a real engine.
- Migration artifacts are generated for review, but they are not applied automatically.
macOS / Linux / Git Bash:

```sh
npm install -D @rawsql-ts/ztd-cli vitest typescript
npx ztd init --starter
# starter scaffold generates compose.yaml, starter DDL, config, and test stubs
npx ztd agents init
cp .env.example .env
# edit ZTD_DB_PORT=5433 if needed
docker compose up -d
npx ztd ztd-config
npx vitest run
```

PowerShell:

```powershell
npm install -D @rawsql-ts/ztd-cli vitest typescript
npx ztd init --starter
# starter scaffold generates compose.yaml, starter DDL, config, and test stubs
npx ztd agents init
Copy-Item .env.example .env
# edit ZTD_DB_PORT=5433 if needed
docker compose up -d
npx ztd ztd-config
npx vitest run
```

The starter scaffold includes `@rawsql-ts/testkit-core`, so `npx ztd ztd-config` works in a fresh standalone project and writes the generated runtime manifest to `.ztd/generated/ztd-fixture-manifest.generated.ts`.
That manifest carries tableDefinitions schema metadata only; test rows stay explicit fixtures outside the generated contract.
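A minimal sketch of what "explicit fixtures outside the generated contract" can look like. The `TestRowMap` type below is an illustrative stand-in, not the output `ztd ztd-config` actually generates from your DDL; only the split between generated schema types and hand-written rows is the point.

```typescript
// Illustrative stand-in for a generated TestRowMap type (the real one is
// emitted by `ztd ztd-config` from local DDL; this shape is an assumption).
type TestRowMap = {
  users: { user_id: number; email: string; is_active: boolean };
};

// Test rows stay explicit in the test file, typed against the generated map,
// rather than being baked into the generated runtime manifest.
const userFixtures: TestRowMap["users"][] = [
  { user_id: 1, email: "alice@example.com", is_active: true },
  { user_id: 2, email: "bob@example.com", is_active: false },
];

// A DDL change that renames or retypes a column shows up here as a type error.
const activeCount = userFixtures.filter((row) => row.is_active).length;
console.log(activeCount);
```

If the DDL changes, regenerating the manifest updates the types and the compiler flags every fixture that no longer matches.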
The removable starter smoke path also shows @rawsql-ts/testkit-postgres and createPostgresTestkitClient for the DB-backed test that catches setup problems early.
If you want the fixture-resolution details, read the @rawsql-ts/testkit-postgres package README after the starter smoke path.
If you add a no-op repository telemetry seam under src/infrastructure/telemetry/, use queryId as the stable lookup key, and keep repositoryName, methodName, paramsShape, and transformations as safe execution metadata only.
That seam does not emit SQL text or bind values by default, so you can decide explicitly when to connect console, pino, or OpenTelemetry sinks.
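A hedged sketch of such a seam. The type and interface names below are assumptions for illustration, not the scaffold's actual exports; the constraint they encode is the one described above: `queryId` is the stable key, only safe execution metadata flows through, and no SQL text or bind values are emitted by default.

```typescript
// Illustrative telemetry seam (names are assumptions, not scaffold exports).
// Only safe execution metadata crosses this boundary.
type RepositoryTelemetryEvent = {
  queryId: string;          // stable lookup key
  repositoryName: string;
  methodName: string;
  paramsShape: string[];    // parameter names only, never values
  transformations: string[];
};

interface TelemetrySink {
  emit(event: RepositoryTelemetryEvent): void;
}

// Default no-op sink: wiring console, pino, or OpenTelemetry stays an
// explicit opt-in decision made later.
const noopSink: TelemetrySink = { emit: () => {} };

// Example opt-in sink that records events in memory.
const events: RepositoryTelemetryEvent[] = [];
const recordingSink: TelemetrySink = { emit: (e) => events.push(e) };

recordingSink.emit({
  queryId: "users-insert.insert",
  repositoryName: "UsersRepository",
  methodName: "insert",
  paramsShape: ["email", "is_active"],
  transformations: [],
});
console.log(events.length);
```

Because the event type has no field for SQL text or bind values, a sink cannot leak them even accidentally.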
Make sure Docker Desktop or another Docker daemon is already running before you start the compose path, because docker compose up -d only launches the stack.
The generated Vitest setup derives ZTD_TEST_DATABASE_URL from .env, so the test runtime sees the same port setting as the compose file.
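The derivation can be sketched as a small function. The exact logic in the generated setup file may differ; the URL format and default credentials here are assumptions chosen to show the single-source-of-truth idea: one `ZTD_DB_PORT` value feeds both compose and the test runtime.

```typescript
// Sketch: derive the test database URL from the same ZTD_DB_PORT the compose
// file reads, falling back to Postgres's default port. The generated Vitest
// setup may differ in format and credentials; this is illustrative only.
function testDatabaseUrl(env: Record<string, string | undefined>): string {
  const port = env.ZTD_DB_PORT ?? "5432";
  return `postgres://postgres:postgres@localhost:${port}/postgres`;
}

console.log(testDatabaseUrl({ ZTD_DB_PORT: "5433" }));
```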
ztd-cli prepares the workflow, but it does not execute SQL by itself. Pair it with a database adapter and a DBMS-specific testkit, such as `@rawsql-ts/adapter-node-pg` + `@rawsql-ts/testkit-postgres` for Postgres.
If 5432 is busy, use another local port and update ZTD_DB_PORT in .env.
- macOS / Linux / Git Bash:

  ```sh
  cp .env.example .env  # edit ZTD_DB_PORT=5433 if needed
  docker compose up -d
  npx vitest run
  ```

- PowerShell:

  ```powershell
  Copy-Item .env.example .env  # edit ZTD_DB_PORT=5433 if needed
  docker compose up -d
  npx vitest run
  ```
Use this prompt after Quickstart. Choose ztd init or ztd init --starter based on whether I want the removable starter sample.
```text
Add a users insert feature to this feature-first project.
Read the nearest AGENTS.md files first. Then read `.codex/agents/*` if present.
Start with `npx ztd feature scaffold --table users --action insert`.
Keep `entryspec.ts`, the query-local `queryspec.ts`, and the query-local SQL resource inside `src/features/users-insert`.
Use `zod` schemas at the feature boundary and DB boundary, then add tests as the AI follow-up.
Add the two tests in src/features/users-insert/tests as the follow-up step.
Do not apply migrations automatically.
```
Quickstart treats npx ztd agents init as an optional follow-up after starter scaffold creation.
If you skipped that step and still want the opt-in Codex bootstrap for the project, run it before asking Codex to inspect src/features/smoke.
If you want PROMPT_DOGFOOD.md for debugging or prompt review, pass --with-dogfooding to npx ztd init --starter.
ztd agents init adds:
- visible `AGENTS.md` guidance
- `.codex/config.toml`
- `.codex/agents/`
Existing user-owned guidance files are preserved; use npx ztd agents status if you need to review customer bootstrap targets separately from internal .ztd guidance, including managed, customized, or unmanaged-conflict files.
A good first request after setup is:
```text
Add a users insert feature to this feature-first project.
Read the nearest AGENTS.md files first. Then read `.codex/agents/*` if present.
Start with `npx ztd feature scaffold --table users --action insert`.
Keep `entryspec.ts`, the query-local `queryspec.ts`, and the query-local SQL resource inside `src/features/users-insert`.
Use `zod` schemas at the feature boundary and DB boundary, then add tests as the AI follow-up.
Add the two tests in src/features/users-insert/tests as the follow-up step.
Do not apply migrations automatically.
```
- `ztd init --starter` creates a feature-first starter scaffold with `smoke`, starter DDL, and local Postgres wiring.
- `ztd feature scaffold --table <table> --action <insert|update|delete|get-by-id|list>` creates a fixed CRUD/SELECT boundary scaffold with `entryspec.ts`, a co-located query directory (`queryspec.ts` + SQL resource), `zod` DTO schemas at both boundaries, a shared executor runtime contract, the empty feature `tests/` directory, and SQL-resource helper files on first run. `get-by-id` uses a zero-or-one baseline, and `list` keeps default paging and primary-key ordering inside `queryspec.ts` while leaving the two test files to AI follow-up.
- `ztd agents init` adds the optional Codex bootstrap on demand: visible `AGENTS.md`, `.codex/agents`, and `.codex/config.toml`.
- `ztd ztd-config --watch` keeps generated `TestRowMap` types and runtime fixture metadata aligned with DDL as files change.
- `ztd lint` checks SQL against a temporary Postgres before you ship it.
- `ztd model-gen` and `ztd query uses` keep QuerySpec scaffolding and impacted-file discovery close to the feature-first slice.
- `ztd query sssql scaffold` and `ztd query sssql refresh` move optional filters into SQL-first authoring and keep runtime pruning explicit. Runtime no longer injects new filter predicates.
- `ztd query match-observed` ranks likely source SQL assets for an observed SELECT statement when `queryId` is missing. See the investigation guide for the full flow.
- `ztd perf init` / `ztd perf run` support tuning without forcing SQL rewrites first.
- `--dry-run` / `--output json` make the workflow reviewable and machine-readable.
| Command | Purpose |
|---|---|
| `ztd init --starter` | Scaffold the recommended first-run project. |
| `ztd feature scaffold --table <table> --action <insert/update/delete/get-by-id/list>` | Scaffold the fixed CRUD/SELECT boundary files (`entryspec.ts`, query-local `queryspec.ts` + SQL, `zod` DTO schemas, shared executor runtime contract), the empty `tests/` directory, and SQL-resource helper files on first run, using `@rawsql-ts/sql-contract` for cardinality and catalog execution and leaving the two test files to AI follow-up. |
| `ztd agents init` | Add the optional Codex bootstrap on demand. |
| `ztd ztd-config` | Regenerate `TestRowMap`, runtime fixture metadata, and layout metadata from DDL; add `--watch` for live updates. |
| `ztd lint` | Lint SQL files against a temporary Postgres. |
| `ztd model-gen` | Generate QuerySpec scaffolding from SQL assets. |
| `ztd query uses` | Find impacted SQL before changing a table or column. |
| `ztd query match-observed` | Rank likely source SQL assets from observed SELECT text. |
| `ztd query sssql scaffold` / `ztd query sssql refresh` | Author and refresh SQL-first optional filter branches. |
| `ztd ddl pull` / `ztd ddl diff` | Inspect an explicit target and prepare migration SQL. |
| `ztd perf *` | Run the tuning loop (init, db reset, run) for index or pipeline investigation. |
| `ztd describe` | Inspect commands in machine-readable form, including `--output json`. |
After DDL or schema changes, rerun ztd ztd-config, ztd lint, and npx vitest run. Use ztd ddl diff or ztd ddl pull when you need a migration plan. The generated runtime manifest under .ztd/generated/ is the preferred input for @rawsql-ts/testkit-postgres.
Run npx ztd describe command <name> for per-command flags and options.
ztd.config.json now keeps schema resolution at the top level:
```json
{
  "dialect": "postgres",
  "ztdRootDir": ".ztd",
  "ddlDir": "db/ddl",
  "testsDir": ".ztd/tests",
  "defaultSchema": "public",
  "searchPath": ["public"],
  "ddlLint": "strict"
}
```

`ddl.defaultSchema` and `ddl.searchPath` are no longer read. If an older project still keeps schema settings under `ddl`, move them to the top-level `defaultSchema` and `searchPath` fields.
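For older projects, the no-longer-read shape nested those two keys under `ddl`; a hedged reconstruction showing only the relocated keys:

```jsonc
// Old shape, no longer read by ztd-cli:
{
  "ddl": {
    "defaultSchema": "public",
    "searchPath": ["public"]
  }
}
```

Move both values to top-level `defaultSchema` and `searchPath` and delete the empty `ddl` block.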
| Term | Meaning |
|---|---|
| ZTD | Zero Table Dependency - test against a real database engine without creating or mutating application tables. |
| DDL | SQL schema files that act as the source of truth for type generation. |
| TestRowMap | Generated TypeScript types that describe row shape from local DDL. |
| QuerySpec | Contract object that ties a SQL asset file to parameter and output types. |
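The QuerySpec idea from the glossary can be illustrated with a small sketch. The field names below are assumptions, not the shape `ztd model-gen` actually emits; the point is the contract: one object binds a SQL asset file to its parameter and output types, so SQL drift surfaces as a type error.

```typescript
// Illustrative QuerySpec-style contract (field names are assumptions, not the
// generated shape). One object ties a SQL asset to its types and cardinality.
type QuerySpec<Params, Row> = {
  queryId: string;
  sqlPath: string; // path to the SQL asset file
  cardinality: "zero-or-one" | "exactly-one" | "many";
  // Phantom fields keep Params and Row visible to the type checker.
  _params?: Params;
  _row?: Row;
};

type GetUserParams = { user_id: number };
type UserRow = { user_id: number; email: string };

// Hypothetical feature-local spec for a get-by-id query (zero-or-one baseline).
const getUserById: QuerySpec<GetUserParams, UserRow> = {
  queryId: "users-get-by-id.select",
  sqlPath: "src/features/users-get-by-id/query/get-user-by-id.sql",
  cardinality: "zero-or-one",
};

console.log(getUserById.cardinality);
```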
- SQL-first End-to-End Tutorial - starter flow, repair loops, and scenario-specific CLI guidance
- SQL Tool Happy Paths - choose between query plan, perf, query uses, and telemetry
- Perf Tuning Decision Guide - index tuning vs pipeline tuning
- JOIN Direction Lint Specification - readable FK-aware JOIN guidance
- Repository Telemetry Setup - how to edit the scaffold, emit logs, and investigate with `queryId`
- Observed SQL Investigation - how to use `ztd query match-observed` when `queryId` is missing
- ztd-cli Telemetry Philosophy - opt-in telemetry guidance
- Observed SQL Matching - reverse lookup for missing `queryId`, with best-effort ranking and skip/warning reporting
ztd-config can already produce separate artifacts for multiple DB contexts. At runtime, treat each context as its own SqlClient and keep the clients side by side when a single workflow needs to talk to more than one database.
```typescript
type AppClients = {
  dbA: SqlClient;
  dbB: SqlClient;
};

async function runWorkflow(clients: AppClients) {
  const users = await createUsersRepository(clients.dbA).listActiveUsers();
  const invoices = await createBillingRepository(clients.dbB).listOpenInvoices();
  return { users, invoices };
}
```

This is the minimum runtime step needed for multi-DB workflows. It is not saga orchestration itself; it is the client-binding pattern that makes a saga-style design possible later. See the proof test in `packages/testkit-postgres/tests/client.test.ts`.
- Migration Lifecycle Dogfooding - separate-AI prompts for DDL, SQL, DTO, and migration repair
- Perf Scale Tuning Dogfooding - separate-AI tuning dogfood
- Published-Package Verification Before Release - pack and smoke-test the published-package path
- Local-Source Dogfooding - unpublished local checkout workflow
- ztd-cli Agent Interface - machine-readable command surface
- Codex Bootstrap Verification - reviewer-checkable fresh-project verification for `ztd agents init`
- ztd-cli spawn EPERM Investigation - reviewer-checkable root-cause investigation for the local Vitest startup blocker
- ztd Onboarding Dogfooding - reviewer-checkable README Quickstart and tutorial verification for the customer-facing onboarding path
MIT