VS Code-first tooling for vibe-coding inference pipelines on Modal, backed by a workspace home file, schema-driven .lumen runner configs, and a managed self-contained pipeline server.
The goal experience is simple:
- Install the VS Code extension.
- Create `lumen.config.json`, or run `Lumen: Init Workspace`.
- Open it and follow the workspace setup flow.
- Let Lumen scaffold a committed runtime into `server/` and configs into `assets/`.
- Start vibe-coding inference pipelines with your editor and preferred LLM tools.
- A VS Code extension first, not a Python SDK
- A way to vibe-code inference pipelines on Modal with your preferred LLM tools
- A provider-agnostic `.lumen` file format for runner configs
- A VS Code custom editor with a first-run setup wizard
- A managed workspace bootstrap flow centered on `lumen.config.json`
- Optional starter pipeline packs and AI skill packs for pipeline authoring
- Lumen is extension-led. The editor is the product surface; the Python server is just the runtime it manages for you.
- `lumen.config.json` is the workspace home. Opening it becomes the setup and control surface for the whole workspace.
- A `.lumen` file is a JSON array of generation configs. Each config targets one pipeline on one live server URL.
- The VS Code extension owns the workflow: scaffold runtime files under `server/`, generate auth, start `modal serve`, discover schemas, mirror logs, and write schema snapshots.
- The Python server is intentionally small. It discovers pipeline modules, validates bearer auth, exposes pipeline schemas, and runs `generate`.
- Lumen installs default skill packs into both `.agents/skills/` and `.claude/skills/` so bring-your-own agents can work immediately.
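To make the `.lumen` format concrete, a runner config file might look like the sketch below. The field names shown (`pipeline`, `serverUrl`, `params`) and the values are illustrative assumptions, not the documented schema; only the overall shape — a JSON array of per-pipeline generation configs, each pointing at one live server URL — comes from the description above.

```json
[
  {
    "pipeline": "falorz-image-turbo",
    "serverUrl": "https://example--lumen-dev.modal.run",
    "params": {
      "prompt": "a lighthouse at dusk",
      "steps": 4
    }
  }
]
```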
The intended user flow is:
- Install the Lumen VS Code extension.
- Create `lumen.config.json`, or run `Lumen: Init Workspace`.
- Open the workspace home in VS Code.
- Initialize the workspace. Lumen creates `server/`, `assets/`, the first `.lumen` file, and default agent skills.
- Copy the generated auth token into a Modal secret named `lumen-auth`, or let Lumen do it if the Modal CLI is available.
- Start the dev server from the home screen.
- Add configs from the discovered pipelines and iterate in `assets/*.lumen`.
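As a rough illustration of the workspace home, a minimal `lumen.config.json` could look like the following. Every key here is hypothetical — Lumen's init flow generates the real file — so treat this only as a sense of the shape, not as a schema.

```json
{
  "server": "server/",
  "assets": "assets/",
  "modalSecret": "lumen-auth"
}
```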
What this unlocks:
- scaffold a self-contained Modal server without hand-rolling project setup
- browse discovered pipeline schemas as typed forms in VS Code
- install starter pipelines such as `falorz-image-turbo`
- mirror logs and schemas into workspace files for debugging and agent workflows
- install default agent skills so your LLM can help author pipelines and `.lumen` configs
If you want to try Lumen from this repo today, install the extension from source:
```sh
bun install
cd packages/vscode
bun run package
code --install-extension dist/lumen-vscode.vsix
```

The intended loop is: open VS Code, open `lumen.config.json`, let Lumen scaffold the managed workspace, then use the home screen and your editor/LLM tools to keep shaping pipelines in that workspace.
What the extension writes for you:
- `lumen.config.json` as the workspace home
- `server/` with a self-contained FastAPI + Modal server
- `assets/main.lumen` as the first runner config
- `AGENTS.md` with workspace-level guidance for agents and editors
- `server/.authkey` for local auth during development
- `server/lumen.log` for mirrored logs
- `server/lumen.schema.json` for the latest discovered pipeline schemas
- `.agents/skills/` and `.claude/skills/` with default Lumen skill packs
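Taken together, a freshly initialized workspace looks roughly like this (a layout sketch reconstructed from the files above, not captured from a generated project):

```
lumen.config.json        # workspace home
server/                  # self-contained FastAPI + Modal server
  .authkey               # local auth token for development
  lumen.log              # mirrored logs
  lumen.schema.json      # latest discovered pipeline schemas
assets/
  main.lumen             # first runner config
AGENTS.md                # workspace-level guidance for agents
.agents/skills/          # default Lumen skill packs
.claude/skills/
```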
Use Lumen when you want one workspace to own the whole loop:
- start from a blank workspace instead of a prebuilt backend repo
- scaffold or evolve the Modal server from the editor
- define and run generation configs in
assets/*.lumen - install starter pipelines such as
falorz-image-turbo - let an LLM help vibe-code new pipelines through installed skill packs
- keep experimenting with any model provider as long as the server exposes the Lumen HTTP contract
The key positioning is simple: Lumen is a VS Code extension for managing and authoring Modal inference pipelines, with `lumen.config.json` as the workspace home, `.lumen` as the runner-config surface, and bundled skills as accelerators.
If you just want a ready-made example, open templates/default. It is an assembled sample project showing the scaffolded server, starter pipelines, and optional AI guidance packs together.
- packages/lumen contains the core types, ports, and editor-facing logic for `.lumen` files and generation flows.
- packages/vscode contains the VS Code extension, custom editor, managed server bootstrap flow, and packaged scaffold assets.
- templates/default contains an assembled example project that mirrors what the extension installs into a workspace.
- dexter.config.ts contains repo-local tooling entrypoints.
```sh
bun run lint
bun x tsc --noEmit
bun test templates/default/doctor.test.ts
cd templates/default/server && uv run pytest
```

- Open packages/vscode/README.md for the extension-specific context.
- Open templates/default/AGENTS.md for the generated project model.