Description
Component: Runtime + DSL (fluent Java builder)
Summary
Add first-class support for event-driven schedules using the top-level `schedule.on` construct from the Serverless Workflow spec, so a workflow can be started by one or more CloudEvents without an explicit `listen` task.
Spec reference: https://github.com/serverlessworkflow/specification/blob/main/dsl-reference.md#schedule
Motivation
Today, starting a workflow on an external event typically requires beginning the definition with a `listen` task. The specification also allows declaring event triggers at the schedule level (`schedule.on`), which communicates intent more clearly, avoids boilerplate, and enables portable, non-task-centric triggering semantics.
This feature would:
- Bring the runtime in line with the CNCF spec (`schedule.on`).
- Reduce boilerplate for event-sourced workflows.
- Simplify static analysis and tooling: the trigger lives in one well-known, top-level place.
Desired behavior
- When a workflow definition includes `schedule.on`, the engine registers the described event filter(s) at startup and starts a new instance when a matching event arrives.
- The event payload(s) are provided as workflow input following spec semantics (e.g., `one` → array with 1 element; `all`/`any` → array of collected/matched events).
- Existing task-level `listen` continues to work unchanged.
YAML example (from spec)
document:
  dsl: '1.0.1'
  namespace: examples
  name: event-driven-schedule
  version: '0.1.0'
schedule:
  on:
    one:
      with:
        type: com.example.hospital.events.patients.heartbeat.low
do:
  - callNurse:
      call: http
      with:
        method: post
        endpoint: https://hospital.example.com/api/v1/notify
        body:
          patientId: ${ $workflow.input[0].data.patient.id }
          patientName: ${ $workflow.input[0].data.patient.name }
          roomNumber: ${ $workflow.input[0].data.patient.room.number }
          vitals:
            heartRate: ${ $workflow.input[0].data.patient.vitals.bpm }
            timestamp: ${ $workflow.input[0].data.timestamp }
          message: "Alert: Patient's heartbeat is critically low. Immediate attention required."
Java DSL proposal (new top-level trigger)
Allow declaring the trigger at the top level via a new builder entry point. Example:
import static io.serverlessworkflow.fluent.spec.dsl.DSL.event;
import static io.serverlessworkflow.fluent.spec.dsl.DSL.to;

Workflow w = WorkflowBuilder.workflow()
    .on(ev -> ev.listen(to().one(e -> e.type("io.quarkiverse.flow.messaging.hello.request"))))
    .tasks(t -> t
        .set("{ message: \"Hello \" + .[0].name + \"!\" }")
        .emit(e -> e.event(event()
            .type("io.quarkiverse.flow.messaging.hello.response")
            .jsonData("{ message }"))))
    .build();
Notes:
- `on(...)` maps to `schedule.on` in the spec.
- For `one`, the input provided to the workflow body is an array of size 1 (aligning with the spec's `$workflow.input[0]` usage).
- The builder structure for `on(...)` should mirror the existing listen DSL (reusing `to().one(...)`, `to().all(...)`, etc.); a rough sketch of an `all` variant follows below.
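To make the mirroring concrete, here is a hedged sketch of what an `all`-based trigger could look like with the proposed entry point. The exact `to().all(...)` shape (accepting several event filters) and the event types are illustrative assumptions, not an existing API.

// Hedged sketch only: on(...) is the entry point this issue proposes, and the exact
// to().all(...) signature below (several event filters) is assumed for illustration.
Workflow allOf = WorkflowBuilder.workflow()
    .on(ev -> ev.listen(to().all(
        e -> e.type("com.example.orders.created"),          // hypothetical event type
        e -> e.type("com.example.payments.authorized"))))   // hypothetical event type
    .tasks(t -> t.set("{ orderId: .[0].data.id }"))          // input = array of matched events
    .build();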
Engine/runtime semantics
- On deployment/startup, parse `schedule.on` and register the corresponding EventConsumer filter(s) with the application's eventing provider(s); a sketch of this flow follows the list.
- When the filter matches, instantiate a new workflow with the matching events as input.
- Ordering and correlation semantics (for `all`/`any`) should match the current task-level listen semantics.
- If both `schedule.on` and a first `listen` task are used, they are independent:
  - `schedule.on` triggers instance creation;
  - an initial `listen` task applies inside the instance.
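To illustrate the intended flow, here is a minimal sketch of the startup registration and trigger path. `EventRegistry` and `WorkflowRuntime` are hypothetical stand-ins for the eventing provider and engine APIs, not actual SDK types.

// Hedged sketch of the startup/trigger flow described above. EventRegistry,
// WorkflowRuntime and their methods are hypothetical placeholders, not SDK classes.
import java.util.List;
import java.util.function.Consumer;

final class ScheduleOnBootstrap {

  interface EventRegistry {          // stand-in for the application's eventing provider
    void subscribe(String eventType, Consumer<List<Object>> onMatch);
  }

  interface WorkflowRuntime {        // stand-in for the engine that creates instances
    void startInstance(Object workflowDefinition, List<Object> input);
  }

  // On deployment: translate schedule.on into a subscription; on a match, start an
  // instance with the matched event(s) as the workflow input array ($workflow.input).
  static void register(Object definition, String scheduledEventType,
                       EventRegistry registry, WorkflowRuntime runtime) {
    registry.subscribe(scheduledEventType,
        matchedEvents -> runtime.startInstance(definition, matchedEvents));
  }
}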
Backward compatibility
- No breaking changes. Existing workflows without `schedule.on` continue to work.
- Existing task-level `listen` remains fully supported.
Alternatives considered
- Keep using a leading `listen` task. This works today but hides the triggering concern in the task list, and does not leverage the spec's top-level schedule feature.
Acceptance criteria
- A workflow with only `schedule.on` (and no leading `listen` task) starts upon receiving a matching event.
- `$workflow.input` is populated per spec, so sample expressions like `${ $workflow.input[0].data... }` work.
- The Java DSL exposes a clear `on(...)` entry point that maps to `schedule.on`.
- Unit/integration tests cover the `one`, `any`, and `all` shapes.
Test ideas
- Happy path: send a CloudEvent with the declared type → instance starts; assert output.
- No match: send events with different types → no instance is started.
- Multiple (`any`/`all`): verify correlation/window behavior mirrors task-level listen.
- Serialization: verify data mapping end-to-end using CloudEvents JSON.
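A minimal sketch of the happy-path case, assuming the CloudEvents Java SDK for building the event; `publishEvent(...)` and `awaitInstanceStarted(...)` are hypothetical test helpers standing in for however the runtime under test consumes events and exposes instance state.

// Hedged test sketch: CloudEventBuilder comes from the CloudEvents Java SDK, but
// publishEvent(...) and awaitInstanceStarted(...) are hypothetical helpers.
import java.net.URI;
import java.util.UUID;
import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

class ScheduleOnHappyPathTest {

  void startsInstanceOnMatchingEvent() {
    CloudEvent event = CloudEventBuilder.v1()
        .withId(UUID.randomUUID().toString())
        .withSource(URI.create("/tests/schedule-on"))
        .withType("com.example.hospital.events.patients.heartbeat.low") // type declared in schedule.on
        .withData("application/json", "{\"patient\":{\"id\":\"42\"}}".getBytes())
        .build();

    publishEvent(event);                            // hypothetical: deliver to the eventing provider
    awaitInstanceStarted("event-driven-schedule");  // hypothetical: assert a new instance started
  }

  private void publishEvent(CloudEvent event) { /* wire to the provider under test */ }

  private void awaitInstanceStarted(String workflowName) { /* poll/assert runtime state */ }
}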
Thanks for considering! This will align the engine more closely with the spec and significantly improve ergonomics for event-sourced workflows.