
Commit 1536873

remove stepwise usage of workflows from code (#19877)
1 parent 1b7d412 commit 1536873

3 files changed: 0 additions, 30 deletions

docs/docs/understanding/workflows/observability.md

Lines changed: 0 additions & 24 deletions

````diff
@@ -99,30 +99,6 @@ Received event Query 3
 Step step_three produced event StopEvent
 ```
 
-## Stepwise execution
-
-In a notebook environment it can be helpful to run a workflow step by step. You can do this by calling `run_step` on the handler object:
-
-```python
-w = ConcurrentFlow(timeout=10, verbose=True)
-handler = w.run(stepwise=True)
-
-# Each time we call `run_step`, the workflow will advance and return all the events
-# that were produced in the last step. This events need to be manually propagated
-# for the workflow to keep going (we assign them to `produced_events` with the := operator).
-while produced_events := await handler.run_step():
-    # If we're here, it means there's at least an event we need to propagate,
-    # let's do it with `send_event`
-    for ev in produced_events:
-        handler.ctx.send_event(ev)
-
-# If we're here, it means the workflow execution completed, and
-# we can now access the final result.
-result = await handler
-```
-
-You can call `run_step` multiple times to step through the workflow one step at a time.
-
 ## Visualizing most recent execution
 
 If you're running a workflow step by step, or you have just executed a workflow with branching, you can get the visualizer to draw only exactly which steps just executed using `draw_most_recent_execution`:
````
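The loop deleted above relied on `run_step` returning the batch of events produced by the last step (empty when the workflow is done), with the walrus operator both capturing and testing that batch. The control flow can be sketched without any llama-index dependency, using a hypothetical `ToyHandler` stand-in; names here are illustrative only, not the library's API.

```python
import asyncio


class ToyHandler:
    """Hypothetical stand-in for a workflow handler with stepwise execution."""

    def __init__(self, steps):
        self._steps = list(steps)  # each step yields a batch of "events"
        self.sent = []             # events propagated back via send_event

    async def run_step(self):
        # Return the next batch of events, or [] once the workflow is finished.
        return self._steps.pop(0) if self._steps else []

    def send_event(self, ev):
        self.sent.append(ev)


async def drive(handler):
    # Mirror of the removed loop: run_step advances one step and returns the
    # events it produced; those must be re-sent for the workflow to continue.
    while produced_events := await handler.run_step():
        for ev in produced_events:
            handler.send_event(ev)
    return handler.sent


handler = ToyHandler([["query_1", "query_2"], ["answer"]])
print(asyncio.run(drive(handler)))  # ['query_1', 'query_2', 'answer']
```

The walrus operator is what lets one expression serve as both the loop condition and the assignment, which is why the removed docs called it out explicitly.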

llama-index-core/llama_index/core/agent/workflow/base_agent.py

Lines changed: 0 additions & 3 deletions

```diff
@@ -601,7 +601,6 @@ def run(
         chat_history: Optional[List[ChatMessage]] = None,
         memory: Optional[BaseMemory] = None,
         ctx: Optional[Context] = None,
-        stepwise: bool = False,
         max_iterations: Optional[int] = None,
         start_event: Optional[AgentWorkflowStartEvent] = None,
         **kwargs: Any,
@@ -610,7 +609,6 @@ def run(
         if ctx is not None and ctx.is_running:
             return super().run(
                 ctx=ctx,
-                stepwise=stepwise,
                 **kwargs,
             )
         else:
@@ -624,5 +622,4 @@ def run(
             return super().run(
                 start_event=start_event,
                 ctx=ctx,
-                stepwise=stepwise,
             )
```
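One consequence of dropping the explicit parameter while keeping the `**kwargs` pass-through: a stale caller that still passes `stepwise=True` is no longer caught at this layer, so the keyword is forwarded blindly and only fails when a downstream `run` rejects it. A stdlib-only sketch of that failure mode, with hypothetical `base_run`/`agent_run` names rather than the real llama-index signatures:

```python
def base_run(ctx=None):
    # Stand-in for the superclass run() after `stepwise` was removed.
    return {"ctx": ctx}


def agent_run(ctx=None, **kwargs):
    # Stand-in for the subclass run(): unknown keywords are forwarded blindly.
    return base_run(ctx=ctx, **kwargs)


agent_run(ctx="my_ctx")  # fine

try:
    agent_run(ctx="my_ctx", stepwise=True)  # stale caller
except TypeError as exc:
    # base_run() got an unexpected keyword argument 'stepwise'
    print(exc)
```

The error surfaces one call layer deeper than before, which is the usual trade-off of `**kwargs` forwarding.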

llama-index-core/llama_index/core/agent/workflow/multi_agent_workflow.py

Lines changed: 0 additions & 3 deletions

```diff
@@ -639,7 +639,6 @@ def run(
         chat_history: Optional[List[ChatMessage]] = None,
         memory: Optional[BaseMemory] = None,
         ctx: Optional[Context] = None,
-        stepwise: bool = False,
         max_iterations: Optional[int] = None,
         start_event: Optional[AgentWorkflowStartEvent] = None,
         **kwargs: Any,
@@ -648,7 +647,6 @@ def run(
         if ctx is not None and ctx.is_running:
             return super().run(
                 ctx=ctx,
-                stepwise=stepwise,
                 **kwargs,
             )
         else:
@@ -662,7 +660,6 @@ def run(
             return super().run(
                 start_event=start_event,
                 ctx=ctx,
-                stepwise=stepwise,
             )
 
     @classmethod
```
