m8flow — Python-based workflow engine

m8flow is an open-source workflow engine implemented in pure Python. It is built on the proven foundation of SpiffWorkflow, with a vision shaped by 8 guiding principles for flow orchestration:

  • Merge flows effectively – streamline complex workflows
  • Make apps faster – speed up development and deployment
  • Manage processes better – bring structure and clarity to execution
  • Minimize errors – reduce mistakes through automation
  • Maximize efficiency – get more done with fewer resources
  • Model workflows visually – design with simplicity and clarity
  • Modernize systems – upgrade legacy processes seamlessly
  • Mobilize innovation – empower teams to build and experiment quickly


Why m8flow?

  • Future-proof alternative → replaces Camunda 7 with a modern, Python-based workflow engine
  • Enterprise-grade integrations → tight alignment with formsflow.ai, caseflow, and the SLED360 automation suite
  • Open and extensible → open source by default, extensible for enterprise-grade use cases
  • Principles-first branding → "m8" = 8 principles for flow, consistent with the product family (caseflow, formsflow.ai)


Features

  • BPMN 2.0: pools, lanes, multi-instance tasks, sub-processes, timers, signals, messages, boundary events, loops
  • DMN: baseline implementation integrated with the Python execution engine
  • Forms support: extract form definitions (Camunda XML extensions → JSON) for CLI or web UI generation
  • Python-native workflows: run workflows via Python code or JSON structures
  • Integration-ready: designed to plug into formsflow, caseflow, decision engines, and enterprise observability tools

A complete list of the latest features is available in our release notes.


Repository Structure

m8flow/
├── bin/                          # Developer helper scripts
│   ├── fetch-upstream.sh         # Fetch upstream source folders on demand
│   └── diff-from-upstream.sh     # Report local vs upstream divergence
│
├── docker/                       # All Docker and Compose files
│   ├── m8flow-docker-compose.yml         # Primary local dev stack
│   ├── m8flow-docker-compose.prod.yml    # Production overrides
│   ├── m8flow.backend.Dockerfile
│   ├── m8flow.frontend.Dockerfile
│   ├── m8flow.keycloak.Dockerfile
│   ├── minio.local-dev.docker-compose.yml
│   └── minio.production.docker-compose.yml
│
├── docs/                         # Documentation and images
│   └── env-reference.md          # Canonical environment variable reference
│
├── extensions/                   # m8flow-specific extensions (Apache 2.0)
│   ├── app.py                    # Extensions Flask/ASGI entry point
│   ├── m8flow-backend/           # Tenant APIs, auth middleware, DB migrations
│   │   ├── bin/                  # Backend run/migration scripts
│   │   ├── keycloak/             # Realm exports and Keycloak setup scripts
│   │   ├── migrations/           # Alembic migrations for m8flow tables
│   │   ├── src/m8flow_backend/   # Extension source code
│   │   └── tests/
│   └── m8flow-frontend/          # Multi-tenant UI extensions
│       └── src/
│
├── keycloak-extensions/          # Keycloak realm-info-mapper provider (JAR)
│
├── m8flow-connector-proxy/       # m8flow connector proxy service (Apache 2.0)
│
├── m8flow-nats-consumer/         # NATS event consumer service
│
├── upstream.sources.json         # Canonical upstream repo/ref/folder config
├── sample.env                    # Environment variable template
├── start_dev.sh                  # Local dev launcher (backend + frontend)
└── LICENSE                       # Apache License 2.0

# ── Gitignored — fetched via bin/fetch-upstream.sh ──────────────────────────
# spiffworkflow-backend/          Upstream LGPL-2.1 workflow engine
# spiffworkflow-frontend/         Upstream LGPL-2.1 BPMN modeler UI
# spiff-arena-common/             Upstream LGPL-2.1 shared utilities

Why are those directories missing? spiffworkflow-backend, spiffworkflow-frontend, and spiff-arena-common come from AOT-Technologies/m8flow-core (LGPL-2.1). They are not stored here in order to keep m8flow's Apache 2.0 license boundary clean. Run ./bin/fetch-upstream.sh once after cloning to populate them. See the License note for details.


Prerequisites

Ensure the following tools are installed:

  • Git
  • Docker and Docker Compose
  • Python 3.11+ and uv (for local backend development only)
  • Node.js 18+ and npm (for local frontend development only)

Clone and Set Up

1. Clone the repository

git clone https://github.com/AOT-Technologies/m8flow.git
cd m8flow

2. Fetch the upstream SpiffWorkflow code

The upstream LGPL-2.1 engine is not stored in this repo.

Docker builds are self-contained — the Dockerfiles automatically fetch upstream from GitHub during the build, so no local pre-fetch is needed for docker compose up --build.

For local development (running backend/frontend outside Docker), fetch upstream manually:

./bin/fetch-upstream.sh

This clones configured folders from AOT-Technologies/m8flow-core into your working tree. Folder lists are defined in upstream.sources.json under backend, frontend, and others. These directories are gitignored and must be re-fetched after every fresh clone.
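For orientation, the file groups upstream folders by target. The exact schema is defined by the repo's tooling; the snippet below is only an illustrative sketch (the repo, ref, and folder values shown are assumptions, not the real file contents):

```json
{
  "repo": "https://github.com/AOT-Technologies/m8flow-core.git",
  "ref": "main",
  "backend": ["spiffworkflow-backend"],
  "frontend": ["spiffworkflow-frontend"],
  "others": ["spiff-arena-common"]
}
```

Check the actual upstream.sources.json in the repo root for the authoritative structure.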

To pin a specific upstream tag (local dev or Docker):

# Local dev
./bin/fetch-upstream.sh 0.0.1

# Docker build (set in .env or inline)
UPSTREAM_TAG=0.0.1 docker compose -f docker/m8flow-docker-compose.yml up -d --build

3. Configure environment

Copy the sample environment file and edit it for your setup:

cp sample.env .env

The key variable to set is your machine's LAN IP address. Replace all <LOCAL_IP> placeholders:

Linux:

IP="$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src"){print $(i+1); exit}}')" && \
  [ -n "$IP" ] && sed -i.bak "s/<LOCAL_IP>/$IP/g" .env && echo "Using IP=$IP"

macOS:

IP="$(ipconfig getifaddr en0)" && \
  sed -i '' "s/<LOCAL_IP>/$IP/g" .env && echo "Using IP=$IP"

Windows CMD:

powershell -NoProfile -Command "$ifIndex=(Get-NetRoute '0.0.0.0/0'|sort RouteMetric,InterfaceMetric|select -First 1).IfIndex; $ip=(Get-NetIPAddress -AddressFamily IPv4 -InterfaceIndex $ifIndex|?{$_.IPAddress -notlike '169.254*' -and $_.IPAddress -notlike '127.*'}|select -First 1 -Expand IPAddress); $c=Get-Content .env -Raw; $n=$c -replace '<LOCAL_IP>',$ip; $n|Set-Content .env -Encoding UTF8; Write-Host Using IP=$ip"

Full environment variable documentation: docs/env-reference.md.
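Before starting the stack, it can help to verify that no placeholders remain. A minimal sketch, assuming a POSIX shell with grep available (the check_env name is ours, not part of the repo tooling):

```shell
# check_env FILE: fail if any <LOCAL_IP> placeholder remains in the env file.
check_env() {
  file="${1:-.env}"
  if grep -q '<LOCAL_IP>' "$file" 2>/dev/null; then
    echo "error: $file still contains <LOCAL_IP> placeholders" >&2
    return 1
  fi
  return 0
}

# Example: check_env .env && echo ".env looks good"
```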


Running with Docker (recommended)

Start the full stack

Start all infrastructure services (database, Keycloak, MinIO, Redis, NATS) and init containers (run once on first setup):

docker compose --profile init -f docker/m8flow-docker-compose.yml up -d --build

On subsequent starts, skip the init profile:

docker compose -f docker/m8flow-docker-compose.yml up -d --build

Docker Compose services

The Keycloak image is built with the m8flow realm-info-mapper provider, so tokens include m8flow_tenant_id and m8flow_tenant_name. No separate build of the keycloak-extensions JAR is required. Realm import can be done manually in the Keycloak Admin Console (see Keycloak Setup below) or by running ./extensions/m8flow-backend/keycloak/start_keycloak.sh once after Keycloak is up; the script imports the m8flow realm only (expects Keycloak on ports 7002 and 7009, e.g. when using Docker Compose).

Service                  Description                                            Port(s)
m8flow-db                PostgreSQL — m8flow application database               1111
keycloak-db              PostgreSQL — Keycloak database
keycloak                 Keycloak identity provider (with m8flow realm mapper)  7002, 7009
keycloak-proxy           Nginx proxy in front of Keycloak                       7002
redis                    Redis — Celery broker and cache                        6379
nats                     NATS messaging server (optional profile)               4222
minio                    MinIO object storage (process models, templates)       9000, 9001
m8flow-backend           SpiffWorkflow backend + m8flow extensions              7000
m8flow-frontend          SpiffWorkflow frontend + m8flow extensions             7001
m8flow-connector-proxy   m8flow connector proxy (SMTP, Slack, HTTP, etc.)       8004
m8flow-celery-worker     Celery background task worker
m8flow-celery-flower     Celery monitoring UI                                   5555
m8flow-nats-consumer     NATS event consumer

Init-only services (run once via --profile init):

Service                      Purpose
fetch-upstream               Fetches upstream spiff-arena code into the working tree
keycloak-master-admin-init   Sets up the Keycloak master realm admin
minio-mc-init                Creates MinIO buckets (m8flow-process-models, m8flow-templates)
process-models-sync          Syncs process models into MinIO
templates-sync               Syncs templates into MinIO

Stop and clean up

# Stop containers (preserves volumes)
docker compose -f docker/m8flow-docker-compose.yml down

# Stop and delete all data volumes
docker compose -f docker/m8flow-docker-compose.yml down -v

Running Locally (without Docker for backend/frontend)

Use this mode for active development of m8flow extensions.

1. Start infrastructure services

Start only the infrastructure (database, Keycloak, MinIO, Redis) as containers:

docker compose --profile init -f docker/m8flow-docker-compose.yml up -d --build \
  m8flow-db keycloak-db keycloak keycloak-proxy redis minio minio-mc-init

2. Start backend and frontend

./start_dev.sh

This script:

  • Sources .env from the repo root
  • Starts the m8flow extensions backend (uvicorn) on port 7000 in the background
  • Starts the m8flow extensions frontend (npm) on port 7001 in the foreground
  • Runs SpiffWorkflow DB migrations first if M8FLOW_BACKEND_UPGRADE_DB=true

Press Ctrl+C to stop both services.

macOS note: Port 7000 may be claimed by AirPlay Receiver. Disable it in System Settings → General → AirDrop & Handoff → AirPlay Receiver.

3. Verify the backend

curl http://localhost:7000/v1.0/status

Expected response:

{ "ok": true, "can_access_frontend": true }
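When scripting the setup, you may want to wait for this endpoint instead of polling by hand. A small helper sketch, assuming curl is installed; the retry count, delay, and function name are illustrative:

```shell
# wait_for_backend URL [RETRIES]: poll URL until curl gets a successful
# response, retrying with a short delay; non-zero exit if it never comes up.
wait_for_backend() {
  url="$1"
  retries="${2:-30}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait_for_backend http://localhost:7000/v1.0/status && echo "backend ready"
```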

Running backend only

./extensions/m8flow-backend/bin/run_m8flow_backend.sh

Or with uv (syncs deps and optionally runs migrations):

./extensions/m8flow-backend/bin/setup_and_run_backend.sh

Running a Celery worker

./extensions/m8flow-backend/bin/run_m8flow_celery_worker.sh

Running Flower (Celery monitoring UI)

./extensions/m8flow-backend/bin/run_m8flow_celery_worker.sh flower

Open http://localhost:5555.


Keycloak Setup

Automatic import (recommended)

After the Keycloak container is up, run the import script:

./extensions/m8flow-backend/keycloak/start_keycloak.sh

This imports the m8flow realm. Tenant realms are created later via the tenant realm API when needed.

Manual import

  1. Open the Keycloak Admin Console at http://localhost:7002/
  2. Log in with your admin credentials
  3. Click Keycloak master → Create a realm
  4. Import extensions/m8flow-backend/keycloak/realm_exports/m8flow-tenant-template.json
  5. Click Create

For tenant-aware setups, this realm includes the token claims m8flow_tenant_id and m8flow_tenant_name.
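To confirm the tenant claims actually appear in issued tokens, you can decode a token's payload locally. This is an unverified decode for debugging only, sketched for a POSIX shell with base64 available (the function name is ours):

```shell
# decode_jwt_payload TOKEN: print the (unverified) JSON payload of a JWT,
# e.g. to check for m8flow_tenant_id / m8flow_tenant_name claims.
decode_jwt_payload() {
  # Take the middle (payload) segment and map base64url back to base64.
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the base64 padding that JWT encoding strips.
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="$p="; done
  printf '%s' "$p" | base64 -d
}

# Example: decode_jwt_payload "$ACCESS_TOKEN"
```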

Configure the client redirect URIs

With the realm "m8flow" selected, click on "Clients" and then on the client ID m8flow-backend.

Set the following:

Valid redirect URIs

http://<LOCAL_IP>:8000/*
http://<LOCAL_IP>:8001/*

Valid post logout redirect URIs

http://<LOCAL_IP>:8000/*
http://<LOCAL_IP>:8001/*

Disable Client authentication.

For full Keycloak configuration reference: extensions/m8flow-backend/keycloak/KEYCLOAK_SETUP.md.


Access the Application

Open http://<LOCAL_IP>:8001/ (Docker) or http://localhost:7001/ (local dev) in your browser. You will be redirected to Keycloak login.

Default test users (password = username):

Username       Role
super-admin    Full access
tenant-admin   Tenant administration
editor         Create and edit process models
viewer         Read-only access
integrator     Service task / connector access
reviewer       Review and approve tasks

Running Backend Tests

Requires ./bin/fetch-upstream.sh to have been run first — tests use spiffworkflow-backend/pyproject.toml for pytest config.

Run all tests:

pytest -c spiffworkflow-backend/pyproject.toml ./extensions/m8flow-backend/tests/ -q

Run a specific test file:

pytest -c spiffworkflow-backend/pyproject.toml \
  ./extensions/m8flow-backend/tests/unit/m8flow_backend/services/test_tenant_context_middleware.py -q

Production Deployment

See docker/DEPLOYMENT.md for production compose and hardening guidance.

Production MinIO

A dedicated MinIO compose file with pinned image, restart policy, and resource limits:

# MinIO only
docker compose -f docker/minio.production.docker-compose.yml up -d

# MinIO with the full stack
docker compose -f docker/m8flow-docker-compose.yml \
               -f docker/minio.production.docker-compose.yml up -d

# With bucket init
docker compose --profile init \
               -f docker/m8flow-docker-compose.yml \
               -f docker/minio.production.docker-compose.yml up -d

Set MINIO_ROOT_USER and MINIO_ROOT_PASSWORD in .env (no defaults in the production file).
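One way to set these is to generate a random password and append both values to .env. A sketch assuming openssl is installed; the username shown is just an example:

```shell
# Generate strong MinIO credentials and append them to .env.
# MINIO_ROOT_USER here is an example value; pick your own.
MINIO_ROOT_USER="m8flow-minio-admin"
MINIO_ROOT_PASSWORD="$(openssl rand -base64 24)"
printf 'MINIO_ROOT_USER=%s\nMINIO_ROOT_PASSWORD=%s\n' \
  "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" >> .env
```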


Contribute

We welcome contributions from the community!

  • Submit PRs with passing tests and clear references to issues

Credits

m8flow builds upon the outstanding work of the SpiffWorkflow community and contributors over the past decade. We extend gratitude to:

  • Samuel Abels (@knipknap), Matthew Hampton (@matthewhampton)
  • The University of Virginia & early BPMN/DMN contributors
  • The BPMN.js team, Bruce Silver, and the wider open-source workflow community
  • Countless contributors past and present

License note

m8flow is released under the Apache License 2.0. See the LICENSE file for the full text.

The upstream AOT-Technologies/m8flow-core code (LGPL-2.1) is not stored in this repository. It is fetched on demand via bin/fetch-upstream.sh and gitignored so that it never enters the m8flow commit history. This keeps the license boundaries cleanly separated while still allowing the app to run against the upstream SpiffWorkflow engine.
