pgflow-dev/pgflow

pgflow logo

AI workflows in Supabase, no extra infra.
TypeScript workflows with full autocomplete, zero boilerplate, automatic retries and realtime progress. Built on Postgres + Edge Functions.

Docs | Demo | GitHub | Discord

"A workflow engine built on Supabase primitives."
Paul Copplestone, CEO, Supabase (via X)

pgflow DAG execution showing parallel steps with automatic retry

Quick Start

# Install pgflow in your Supabase project
npx pgflow@latest install

# Restart Supabase and apply migrations
npx supabase stop && npx supabase start
npx supabase migration up

Then define your workflow (full guide):

import { Flow } from '@pgflow/dsl';

new Flow<{ url: string }>({ slug: 'analyzeArticle' })
  .step({ slug: 'scrape' }, (input) => scrapeWebsite(input.run.url))
  .step({ slug: 'summarize', dependsOn: ['scrape'] }, (input) =>
    summarize(input.scrape)
  )
  .step({ slug: 'extractKeywords', dependsOn: ['scrape'] }, (input) =>
    extractKeywords(input.scrape)
  )
  .step(
    { slug: 'publish', dependsOn: ['summarize', 'extractKeywords'] },
    (input) =>
      publish({ summary: input.summarize, keywords: input.extractKeywords })
  );

This replaces ~240 lines of queue setup, state management, and coordination code. See the full comparison.
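To trace what each handler in the flow above receives, here is a hand-sequenced sketch of the same DAG with hypothetical stub implementations (scrapeWebsite, summarize, and extractKeywords are illustrative stand-ins, not part of pgflow):

```typescript
// Hypothetical stand-ins for the handlers above -- pure functions, so the
// data flow can be traced without any network calls.
const scrapeWebsite = (url: string) => `content of ${url}`;
const summarize = (text: string) => `summary: ${text.slice(0, 10)}`;
const extractKeywords = (text: string) => text.split(' ').slice(0, 2);

// pgflow resolves each step's dependencies before running it; the
// equivalent hand-written sequencing looks like this:
const run = { url: 'https://example.com' };
const scrape = scrapeWebsite(run.url);   // root step reads input.run
const summary = summarize(scrape);       // dependent step reads input.scrape
const keywords = extractKeywords(scrape); // runs in parallel with summarize
const published = { summary, keywords }; // final step aggregates both branches
```

The type of each `input` property is inferred from the step it depends on, which is what gives the DSL full autocomplete.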

Why pgflow?

Building workflows in Supabase today means wiring together pgmq, pg_cron, state tables, and Edge Functions yourself. It works, but it's tedious.

pgflow gives you:

  • Declarative workflows - Define steps and dependencies in TypeScript. pgflow handles queues, state, and coordination.
  • Built for Supabase - Runs entirely in your existing project. No Redis, no Temporal, no external services.
  • AI-ready - Automatic retries with exponential backoff for flaky LLM APIs. Per-step, not per-workflow.
  • Parallel processing - Fan out over arrays with independent retries. If 3 of 100 items fail, only those 3 retry.
  • Full observability - All workflow state in Postgres. Query runs, debug failures, inspect outputs with SQL.
  • Flexible triggers - Start from your app, database triggers, pg_cron, or direct SQL calls.
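The per-item retry semantics can be sketched in plain TypeScript (this models the behavior, not pgflow's internals; `runBatch` and `retryFailed` are hypothetical names):

```typescript
// Each array element is its own task, so a failure re-runs only that element.
type Result<T> = { ok: true; value: T } | { ok: false };

function runBatch<I, O>(items: I[], task: (item: I) => O): Result<O>[] {
  return items.map((item) => {
    try {
      return { ok: true, value: task(item) };
    } catch {
      return { ok: false };
    }
  });
}

// A retry pass touches only the failed indexes; successes are kept as-is.
function retryFailed<I, O>(
  items: I[],
  results: Result<O>[],
  task: (item: I) => O
): Result<O>[] {
  return results.map((r, i) => (r.ok ? r : runBatch([items[i]], task)[0]));
}
```

With 100 items and 3 failures, only those 3 indexes reach `task` again.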

What can you build?

  • AI Pipelines - Scrape websites, chunk content, generate embeddings, summarize with LLMs. Each step retries independently when APIs flake.
  • Background Jobs - Process uploads, send emails, sync data. Reliable task queue processing without Redis or external services.
  • RAG Pipelines - Chunk documents, generate embeddings, index content. Perfect for AI applications with multi-step LLM chains.
  • Data Workflows - ETL pipelines, scheduled imports, multi-step transformations. All orchestrated in Postgres.

See how pgflow compares to Trigger.dev, Inngest, DBOS, and Vercel Workflows.

How it works

  1. Define workflows using the TypeScript DSL
  2. Compile them to SQL migrations
  3. Deploy as Supabase Edge Functions
  4. Trigger from your app, SQL, or pg_cron

The execution engine handles scheduling, retries, and result aggregation automatically.
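The retry behavior is the familiar exponential-backoff pattern; a minimal sketch (the function name, defaults, and delays here are illustrative assumptions, not pgflow's configuration):

```typescript
// Retry a step handler up to maxAttempts times, doubling the delay each
// attempt: baseMs, 2*baseMs, 4*baseMs, ...
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 100
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // retries exhausted
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
}
```

In pgflow the equivalent policy is configured per step, so a flaky LLM call can retry aggressively while a cheap deterministic step fails fast.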

Packages

  • pgflow (npm) - CLI for installing and compiling flows
  • @pgflow/core (npm) - SQL Core: foundational tables and functions
  • @pgflow/dsl (npm) - TypeScript DSL for defining flows with type inference
  • @pgflow/edge-worker (JSR) - Task queue worker for Supabase Edge Functions
  • @pgflow/client (npm) - TypeScript client for starting and monitoring workflows

Releases

  • Release Process: See RELEASES.md for how versions are managed and published
  • Snapshot Releases: See SNAPSHOT_RELEASES.md for testing changes before release

Note

This project and all its components are licensed under the Apache 2.0 license.