Mycel provides a complete debugging toolkit for HCL flows — from simple CLI tracing to full IDE integration with breakpoints. Since there's no traditional code to step through, debugging in Mycel means tracing the data pipeline: seeing what data enters each stage, how it's transformed, where it's validated, and what gets written.
All debug features (breakpoints, DAP, verbose flow) are development-only — they are automatically disabled in staging and production with a warning log. This ensures zero overhead and zero risk in production.
- Quick Start
- mycel trace
- Dry-Run Mode
- Interactive Breakpoints (CLI)
- DAP Server (IDE Integration)
- Studio Debug Protocol
- Verbose Flow Logging
- Log-Level Debugging
- Local vs Docker
- Troubleshooting
- CLI Reference
# See what a flow does, step by step
mycel trace get_users
# Simulate a write without touching the database
mycel trace create_user --input '{"email":"test@x.com","name":"Test"}' --dry-run
# Interactive debugging — pause at each stage
mycel trace create_user --input '{"email":"test@x.com"}' --breakpoints
# IDE debugging — connect VS Code, IntelliJ, or Neovim
mycel trace create_user --input '{"email":"test@x.com"}' --dap=4711

The trace command executes a single flow and shows a step-by-step trace of the entire data pipeline.
# Trace a read flow
mycel trace get_users --config ./my-service
# Trace a write flow with JSON input
mycel trace create_user --input '{"email":"test@example.com","name":"Test","age":25}'
# Trace with path parameters
mycel trace get_user --params id=123
# List all available flows
mycel trace --list

Flow: create_user
Duration: 6.2ms
──────────────────────────────────────────────────
1. INPUT
{"email":"TEST@Example.COM","name":"Test","age":25}
2. SANITIZE 0.1ms
{"email":"TEST@Example.COM","name":"Test","age":25}
3. VALIDATE INPUT 0.2ms
4. TRANSFORM 0.3ms
{"id":"a1b2c3d4","email":"test@example.com","name":"Test","age":25,"created_at":"2026-03-10T14:30:00Z"}
5. WRITE → users 5.4ms
INSERT → users
{"affected":1,"last_id":42}
✓ completed successfully
Each stage shows:
- Stage name and timing
- Data snapshot at that point in the pipeline
- Errors with the exact stage where they occurred
- Skipped stages (e.g., validation when no schema is configured)
| Problem | What trace shows |
|---|---|
| Validation error | Exact field and constraint that failed at VALIDATE INPUT |
| Transform bug | Input vs output data at TRANSFORM — see exactly what changed |
| Missing data | Which ENRICH or STEP failed to return expected fields |
| Wrong query | Filters passed to READ stage |
| Permission error | Error at WRITE stage with the exact operation attempted |
| Sanitization issue | Before/after at SANITIZE — see what the sanitizer changed |
With --dry-run, write operations (INSERT, UPDATE, DELETE) are simulated without executing. Reads still run against real data so you can trace the full pipeline end-to-end.
# See what would be written without actually writing
mycel trace create_user --input '{"email":"test@x.com","name":"Test","age":25}' --dry-run
# Works for updates too
mycel trace update_user --input '{"id":"123","name":"New Name"}' --dry-run
# And deletes
mycel trace delete_user --params id=123 --dry-run

Dry-run output marks write stages with [dry-run] and shows the payload that would have been sent:
5. WRITE → users [dry-run]
INSERT → users
{"id":"a1b2c3d4","email":"test@x.com","name":"Test","age":25}
Dry-run is safe to run against production data sources — no data is modified. It works for:
- INSERT — shows the payload that would be inserted
- UPDATE — shows the payload and filters (which rows would be affected)
- DELETE — shows the filters (which rows would be deleted)
- Multi-destination writes — shows what would be written to each destination
The --breakpoints flag enables interactive step-by-step debugging directly in your terminal. Execution pauses at every pipeline stage, showing the current data state and waiting for your command.
Dev only. Breakpoints are automatically disabled outside development mode.
# Pause at every pipeline stage
mycel trace create_user --input '{"email":"test@x.com","name":"Test"}' --breakpoints
# Pause only at specific stages (faster iteration)
mycel trace create_user --input '{"email":"test@x.com"}' --break-at=transform,write

When paused at a breakpoint, you can use these commands:
| Command | Shortcut | Description |
|---|---|---|
| `next` | `n` or Enter | Step to the next stage |
| `continue` | `c` | Run until the next explicit breakpoint |
| `print` | `p` | Re-print the current data snapshot |
| `quit` | `q` | Abort flow execution immediately |
| `help` | `h` | Show available commands |
$ mycel trace create_user --input '{"email":"TEST@X.COM","name":"Test"}' --breakpoints
⏸ BREAKPOINT at input
{
"email": "TEST@X.COM",
"name": "Test"
}
debug> n
⏸ BREAKPOINT at sanitize
{
"email": "TEST@X.COM",
"name": "Test"
}
debug> n
⏸ BREAKPOINT at transform
{
"email": "TEST@X.COM",
"name": "Test"
}
debug> p
{
"email": "TEST@X.COM",
"name": "Test"
}
debug> c
⏸ BREAKPOINT at write
{
"id": "a1b2c3d4",
"email": "test@x.com",
"name": "Test",
"created_at": "2026-03-10T14:30:00Z"
}
debug> q
✗ execution aborted by user
| Stage | Description |
|---|---|
| `input` | Raw input data as received |
| `sanitize` | After input sanitization (XSS, injection protection) |
| `filter` | Filter expression evaluation (accept/reject) |
| `dedupe` | Deduplication check |
| `validate` | Input validation against type schema |
| `enrich` | Data enrichment from other connectors |
| `transform` | CEL transformation (the most common breakpoint) |
| `step` | Individual step execution in multi-step flows |
| `read` | Database/API read operation |
| `write` | Database/API write operation |
Tip: For most debugging sessions, --break-at=transform,write is the sweet spot — you see the data right before and after transformation, and right before it's written.
The --dap flag starts a Debug Adapter Protocol server that lets IDE debugger panels (variables, call stack) connect to a running trace session.
Dev only. The DAP server is automatically disabled outside development mode.
Note: Gutter breakpoints. Currently, you cannot click the gutter of an HCL file to set breakpoints — IDEs only enable that for file types with a registered debug adapter. Dedicated extensions for VS Code and IntelliJ that enable gutter breakpoints on HCL files are planned alongside the Mycel Language Server. For now, use `--break-at` on the CLI or the `breakAt` property in your launch config.
- Start `mycel trace` with `--dap`: `mycel trace create_user --input '{"email":"test@x.com"}' --dap=4711`
- Your IDE connects to `localhost:4711` as a DAP client
- The IDE sends `launch` (with `breakAt` stages) — Mycel executes the flow
- When a breakpoint stage is reached, the IDE shows variables and call stack
- F10 (Step Over) advances to the next stage, F5 (Continue) runs to the next breakpoint
Create .vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"name": "Debug Mycel Flow",
"type": "node",
"request": "attach",
"debugServer": 4711
}
]
}

VS Code's `debugServer` option connects directly to any running DAP server over TCP — no extension required.
Start the DAP server, then press F5. When a breakpoint hits:
- Variables panel shows data at the current stage
- Call Stack shows pipeline stages executed so far
- F10 → next stage, F5 → continue, Shift+F5 → abort
IntelliJ does not natively support connecting to arbitrary DAP servers. Options:
- DAP Plugin: Install "Debug Adapter Protocol" from JetBrains Marketplace → Run → Edit Configurations → DAP Remote Debug → host `localhost`, port `4711`
- Terminal: Use `mycel trace --break-at=transform,write` directly in IntelliJ's terminal (recommended until a dedicated plugin is available)
Neovim has excellent DAP support through nvim-dap:
local dap = require('dap')
dap.adapters.mycel = function(callback, config)
callback({ type = 'server', host = '127.0.0.1', port = config.port or 4711 })
end
dap.configurations.mycel = {
{
type = 'mycel',
request = 'launch',
name = 'Debug Mycel Flow',
flow = '${input:flow}',
port = 4711,
breakAt = { 'transform', 'write' },
},
}

Start the DAP server, then `:lua require('dap').continue()` (F5).
| Command | Description |
|---|---|
| `initialize` | Capability negotiation |
| `launch` | Start flow execution (supports `input`, `dryRun`, `breakAt`) |
| `setBreakpoints` | Set breakpoints at pipeline stages |
| `configurationDone` | Signal IDE configuration complete |
| `threads` | List debug threads (one per flow) |
| `stackTrace` | Pipeline stages as call stack |
| `scopes` / `variables` | Inspect data at current breakpoint |
| `continue` / `next` | Resume or step to next stage |
| `disconnect` | Stop debugging and abort flow |
The Studio Debug Protocol provides a WebSocket-based debug interface designed for Mycel Studio (the desktop IDE) and any WebSocket-capable client. Unlike the DAP server which debugs a single trace, Studio connects to a running service and debugs requests in real time — similar to Chrome DevTools or IntelliJ's debugger.
The protocol uses JSON-RPC 2.0 over WebSocket at :9090/debug (the admin server port).
// From Electron/Tauri IDE or any WebSocket client
const ws = new WebSocket("ws://localhost:9090/debug");
// Attach to get session ID and flow list
ws.send(JSON.stringify({
jsonrpc: "2.0", id: 1,
method: "debug.attach",
params: { clientName: "mycel-studio 1.0" }
}));
// Response: { jsonrpc: "2.0", id: 1, result: { sessionId: "s1", flows: ["create_user", "get_users"] } }

Before any messages are consumed, Studio must complete a setup handshake. This eliminates race conditions where messages arrive before breakpoints are configured.
Studio Mycel
│ │
├─── debug.attach ──────────────►│ Session created, debug throttling enabled
│◄── { sessionId, flows } ──────┤
│ │
├─── debug.setBreakpoints ──────►│ Breakpoints registered (repeat per flow)
│◄── { breakpoints } ───────────┤
│ │
├─── debug.ready ───────────────►│ Suspended connectors start (topology only)
│◄── { ok, sources } ───────────┤ Returns event source capabilities
│ │
├─── debug.consume ─────────────►│ Pull ONE message from queue (RabbitMQ/Kafka)
│ ... breakpoint hit ... │
│◄── event.stopped ─────────────┤
│ │
├─── debug.continue ────────────►│ Resume paused thread
│◄── event.continued ───────────┤
│◄── { ok } ────────────────────┤ Message fully processed
│ │
├─── debug.consume ─────────────►│ Pull next message (repeat as needed)
│ ... │
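The handshake above can be sketched as ordered JSON-RPC frames. A minimal example, assuming a small illustrative rpcMessage() helper (the flow name and breakpoint stage are placeholders):

```javascript
// Sketch of the setup handshake as ordered JSON-RPC requests.
// rpcMessage() is illustrative; flow/stage values are examples.
let nextId = 1;
function rpcMessage(method, params) {
  return { jsonrpc: "2.0", id: nextId++, method, params };
}

// Must be sent (and each response awaited) in this order:
const handshake = [
  rpcMessage("debug.attach", { clientName: "my-client 0.1" }),
  rpcMessage("debug.setBreakpoints", {
    flow: "create_user",
    breakpoints: [{ stage: "transform", ruleIndex: -1 }],
  }),
  rpcMessage("debug.ready", {}),
];

// Over a live connection: handshake.forEach(m => ws.send(JSON.stringify(m)));
console.log(handshake.map(m => m.method).join(" → "));
// debug.attach → debug.setBreakpoints → debug.ready
```

Sending debug.ready before the breakpoints are registered would defeat the purpose of the handshake, so a real client should wait for each response before sending the next request.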
The `debug.ready` response includes event source capabilities:
{
"jsonrpc": "2.0", "id": 3,
"result": {
"ok": true,
"sources": [
{
"connector": "rabbit",
"type": "rabbitmq",
"source": "orders.q",
"manualConsume": true
},
{
"connector": "mqtt_sensors",
"type": "mqtt",
"source": "sensors/#",
"manualConsume": false
}
]
}
}

- `manualConsume: true` — queue-based connectors (RabbitMQ, Kafka) that support `debug.consume`. No messages are consumed until Studio explicitly requests them.
- `manualConsume: false` — push-based connectors (Redis Pub/Sub, MQTT, CDC, File, WebSocket) where messages arrive in real time. These use automatic debug throttling (one at a time) instead.
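A client can use this flag to decide which connectors get an explicit consume action. A small sketch over the example sources above (variable names are illustrative):

```javascript
// Split the debug.ready sources by consume model: manual-consume connectors
// get an explicit "Consume Next Message" action; the rest stream throttled.
const sources = [
  { connector: "rabbit", type: "rabbitmq", source: "orders.q", manualConsume: true },
  { connector: "mqtt_sensors", type: "mqtt", source: "sensors/#", manualConsume: false },
];
const pullable = sources.filter(s => s.manualConsume).map(s => s.connector);
const streaming = sources.filter(s => !s.manualConsume).map(s => s.connector);
console.log(pullable, streaming); // [ 'rabbit' ] [ 'mqtt_sensors' ]
```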
| Method | Purpose |
|---|---|
| `debug.attach` | Connect debugger, get session + flow list |
| `debug.detach` | Disconnect cleanly |
| `debug.setBreakpoints` | Set breakpoints (stage + rule-level + conditional) |
| `debug.ready` | Signal setup complete; returns event source capabilities |
| `debug.consume` | Fetch one message from a queue connector (RabbitMQ/Kafka) |
| `debug.continue` | Resume paused thread |
| `debug.next` | Step to next pipeline stage |
| `debug.stepInto` | Step per-CEL-rule within transform |
| `debug.evaluate` | Evaluate arbitrary CEL in current context |
| `debug.variables` | Get variables at current breakpoint |
| `debug.threads` | List active debug threads |
| `inspect.flows` | List all flows with configs |
| `inspect.flow` | Full flow config detail |
| `inspect.connectors` | List connectors |
| `inspect.types` | List type schemas |
| `inspect.transforms` | List named transforms |
For queue-based connectors (RabbitMQ, Kafka), debug.consume fetches and processes exactly one message from the queue. The request blocks until the message is fully processed (including hitting any breakpoints along the way).
{
"jsonrpc": "2.0", "id": 10,
"method": "debug.consume",
"params": { "connector": "rabbit" }
}

How it works per connector:
- RabbitMQ: Uses AMQP `Basic.Get` (pull one message). If the queue is empty, polls until a message arrives or the request is cancelled.
- Kafka: Uses `FetchMessage` to pull one message, then commits the offset after processing.
This gives Studio full control over when messages are consumed, making it easy to debug event-driven flows step by step. The IDE can show a "Consume Next Message" button that triggers debug.consume.
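The button wiring can be sketched as a promise that resolves when the matching response arrives. Here sendRpc() and the pending map are illustrative client plumbing, not a Mycel API:

```javascript
// Sketch of request/response plumbing behind a "Consume Next Message"
// button. The promise resolves only when Mycel replies, i.e. after the
// message has been fully processed (including any breakpoints hit).
const pending = new Map();
let rpcId = 1;

function sendRpc(ws, method, params) {
  const id = rpcId++;
  ws.send(JSON.stringify({ jsonrpc: "2.0", id, method, params }));
  return new Promise(resolve => pending.set(id, resolve));
}

function onFrame(frame) {
  const msg = JSON.parse(frame);
  if (msg.id !== undefined && pending.has(msg.id)) {
    pending.get(msg.id)(msg.result); // resolve the waiting consume call
    pending.delete(msg.id);
  }
}

// Button handler: () => sendRpc(ws, "debug.consume", { connector: "rabbit" })
```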
Push-based connectors (MQTT, Redis Pub/Sub, CDC, etc.) don't support debug.consume — they receive messages in real time via automatic debug throttling.
Events are JSON-RPC notifications (no id, no response expected) pushed from the runtime to the IDE:
| Event | Purpose |
|---|---|
| `event.stopped` | Thread hit breakpoint (stage or rule-level) |
| `event.continued` | Thread resumed |
| `event.stageEnter` | Pipeline stage starting (with input data) |
| `event.stageExit` | Pipeline stage completed (with output, duration, error) |
| `event.ruleEval` | Individual CEL rule evaluated (target, expression, result) |
| `event.flowStart` | Request entered flow |
| `event.flowEnd` | Request completed flow |
Set a rule-level breakpoint to pause at a specific CEL expression within a transform:
{
"jsonrpc": "2.0", "id": 2,
"method": "debug.setBreakpoints",
"params": {
"flow": "create_user",
"breakpoints": [
{ "stage": "transform", "ruleIndex": -1 },
{ "stage": "transform", "ruleIndex": 1, "condition": "input.email != \"\"" }
]
}
}

- `ruleIndex: -1` pauses at the transform stage (before any rules execute)
- `ruleIndex: 0` pauses before the first CEL rule
- `condition` is a CEL expression evaluated against the current activation — only pauses when true
Use debug.stepInto to step through rules one at a time, and debug.evaluate to run ad-hoc CEL expressions against the paused thread's data.
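As a sketch in the same style as the attach example: note that the params shapes below (threadId, expression) are assumptions for illustration, since the method table names these methods but not their exact fields.

```javascript
// Step one CEL rule, then evaluate an ad-hoc expression against the paused
// thread. The params shapes (threadId, expression) are ASSUMED for this
// sketch; check the protocol reference for the exact fields.
const stepInto = { jsonrpc: "2.0", id: 20, method: "debug.stepInto", params: { threadId: 1 } };
const evaluate = {
  jsonrpc: "2.0", id: 21,
  method: "debug.evaluate",
  params: { threadId: 1, expression: "input.email.lowerAscii()" },
};
// Over a live session: [stepInto, evaluate].forEach(m => ws.send(JSON.stringify(m)));
console.log(evaluate.method); // debug.evaluate
```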
Zero-cost when idle: When no Studio client is connected, the overhead is zero. When connected but no breakpoints set, only lightweight event streaming occurs. Breakpoints add pause/resume overhead only to flows that have them.
When a debugger connects, all event-driven connectors automatically switch to single-message processing. This applies to push-based connectors where messages arrive in real time:
- Redis Pub/Sub: Messages are gated one at a time
- MQTT: All topic callbacks are serialized through a shared gate
- CDC: Database change events are processed one at a time
- File watch: File events are processed one at a time
- WebSocket: Incoming client messages are serialized
Queue-based connectors (RabbitMQ, Kafka) go further: in debug suspend mode, they don't consume at all until Studio sends debug.consume (see Manual Consume).
When the debugger disconnects, original concurrency settings are restored automatically. This ensures you can step through messages one by one without a flood of concurrent events interfering with your debugging session.
No configuration is needed — throttling is enabled automatically via the DebugThrottler interface on each connector.
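Conceptually, the throttling behaves like a shared async gate that chains handlers so a second message waits for the first. A rough JavaScript sketch of the idea (the real DebugThrottler lives in the Go runtime; makeGate() is illustrative):

```javascript
// Conceptual sketch of debug throttling: chain handlers on one promise so
// a later message waits until the earlier one is fully processed.
function makeGate() {
  let tail = Promise.resolve();
  return handler => (tail = tail.then(handler, handler));
}

const gate = makeGate();
const order = [];
gate(async () => {              // slow message: e.g. paused at a breakpoint
  await new Promise(r => setTimeout(r, 20));
  order.push("msg1");
});
gate(() => order.push("msg2")); // runs only after msg1 finishes
```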
DAP coexistence: Studio protocol and DAP are fully independent. Both implement trace.BreakpointController but use different transports (WebSocket vs TCP) and lifecycle models (long-lived vs one-shot).
When debugging event-driven flows (message queues, CDC, etc.), there's a timing problem: if Mycel starts consuming before your debugger connects, messages may be processed before you can set breakpoints.
Start Suspended solves this by deferring Start() on event-driven connectors until a debugger completes the setup handshake (debug.ready):
# Via CLI flag
mycel start --debug-suspend
# Via environment variable
MYCEL_DEBUG_SUSPEND=true mycel start

What gets suspended:
- RabbitMQ, Kafka, Redis Pub/Sub, MQTT — no messages consumed
- CDC — no change events captured
- File watch — no file events processed
- WebSocket — no client connections accepted
What starts normally (needed for health checks and admin API):
- REST, gRPC, GraphQL, SOAP, TCP, SSE
Lifecycle:
- Mycel starts — event-driven connectors are registered but not started
- Studio connects to `:9090/debug` and sends `debug.attach`
- Studio sets breakpoints with `debug.setBreakpoints`
- Studio sends `debug.ready` — suspended connectors connect to brokers and set up topology
- For queue-based connectors (RabbitMQ, Kafka): Studio sends `debug.consume` to pull one message at a time
- For push-based connectors (MQTT, CDC, etc.): messages arrive automatically, throttled to one at a time
Dev only. Like all debug features, `--debug-suspend` is automatically disabled outside development mode with a warning log.
For runtime debugging without stopping the service, use --verbose-flow to log every pipeline stage for all requests as structured log entries:
Dev only. Verbose flow logging is automatically disabled outside development mode.
# Start with per-request pipeline tracing
mycel start --verbose-flow
# Combine with debug logging for maximum detail
mycel start --verbose-flow --log-level debug

Example log output (each request generates multiple log lines):
DBG trace stage=input flow=create_user data={"email":"test@x.com"}
DBG trace stage=sanitize flow=create_user duration=0.1ms data={"email":"test@x.com"}
DBG trace stage=validate_input flow=create_user duration=0.2ms
DBG trace stage=transform flow=create_user duration=0.3ms data={"id":"abc","email":"test@x.com"}
DBG trace stage=write flow=create_user duration=5.1ms data={"affected":1}
This is useful for:
- Diagnosing intermittent issues in a running service
- Comparing requests that succeed vs. fail
- Verifying transforms without stopping the service
- Monitoring pipeline performance (each stage is timed)
For runtime debugging without pipeline tracing, adjust the log level:
# Start with debug logging (shows all internal operations)
mycel start --log-level debug
# Or via environment variable
MYCEL_LOG_LEVEL=debug mycel start

In development mode (MYCEL_ENV=development), the default log level is already debug.
Log levels from most to least verbose:
| Level | What's logged |
|---|---|
| `debug` | Everything: connector operations, query details, cache hits/misses, config parsing |
| `info` | Service lifecycle, request summaries, connector status |
| `warn` | Deprecations, configuration issues, retry attempts |
| `error` | Failures only |
For the best debugging experience, run Mycel locally:
# Install locally
go install github.com/matutetandil/mycel/cmd/mycel@latest
# Trace directly
mycel trace create_user --input '{"email":"test@x.com"}'
# Interactive breakpoints
mycel trace create_user --input '{"email":"test@x.com"}' --breakpoints

Debugging also works from Docker — pass the trace command directly:
# Simple trace from Docker
docker run -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel \
trace get_users
# Dry-run from Docker
docker run -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel \
trace create_user --input '{"email":"test@x.com"}' --dry-run
# Interactive breakpoints from Docker (requires -it for stdin)
docker run -it -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel \
trace create_user --input '{"email":"test@x.com"}' --breakpoints
# DAP server from Docker (expose the port)
docker run -p 4711:4711 -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel \
trace create_user --input '{"email":"test@x.com"}' --dap=4711

Debug flags (--breakpoints, --break-at, --dap, --verbose-flow) only work when MYCEL_ENV=development (the default). If you see the warning that debug features are disabled, set the environment explicitly:
# Set explicitly
MYCEL_ENV=development mycel trace create_user --breakpoints
# Or via .env file
echo "MYCEL_ENV=development" >> .env

- Verify the port is correct: `mycel trace ... --dap=4711` means connect to `localhost:4711`
- Check for firewalls or port conflicts: `lsof -i :4711`
- Make sure you're connecting after the "listening" message appears
- Some IDEs need a brief delay — if it fails on first try, wait 1 second and retry
- Make sure you're using valid stage names (see Pipeline Stages table)
- Not all stages execute for every flow — e.g., `enrich` only runs if the flow has enrichments configured
- Use `--breakpoints` (all stages) first to see which stages your flow actually goes through
# List available flows
mycel trace --list --config ./my-service

The --config flag must point to the directory containing your HCL files.
You need -it flags for interactive mode (stdin access):
# Wrong (no stdin)
docker run -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel trace ... --breakpoints
# Correct
docker run -it -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel trace ... --breakpoints

Expose the DAP port with -p:
docker run -p 4711:4711 -v $(pwd):/etc/mycel ghcr.io/matutetandil/mycel \
trace ... --dap=4711

mycel trace <flow-name> [flags]
Flags:
--input string JSON input data for the flow
--params string Key=value parameters (comma-separated, e.g., id=123,status=active)
--dry-run Simulate write operations without executing them
--breakpoints Pause at every pipeline stage for interactive debugging (dev only)
--break-at string Pause at specific stages (dev only, comma-separated)
Valid stages: input, sanitize, filter, accept, dedupe, validate,
transform, step, read, write
--dap int Start DAP server on this port for IDE debugging (dev only)
--list List all available flows
mycel start [flags]
Flags:
--verbose-flow Log all flow pipeline stages per request (dev only)
--debug-suspend Defer event-driven connector start until debugger connects (dev only)
Environment Variables:
MYCEL_DEBUG_SUSPEND=true Same as --debug-suspend
Global Flags:
-c, --config string Configuration directory (default ".")
-e, --env string Environment (dev, staging, prod)
--log-level string Log level: debug, info, warn, error
--log-format string Log format: text, json
Admin Server (always available on :9090):
/health Health check endpoints
/metrics Prometheus metrics
/debug Studio Debug Protocol (WebSocket JSON-RPC 2.0)