# NOTE: Tool calls and handoffs are NOT streamed from streaming_model.py
# They are collected during streaming and returned in the final ModelResponse
# To stream tool lifecycle events, use TemporalStreamingHooks (see below)
```
#### UI Subscription
The frontend subscribes to `stream:{task_id}` and receives:
1. Real-time text chunks as they're generated (from StreamingModel)
2. Reasoning summaries for o1/o3 models (from StreamingModel)
3. Tool lifecycle events (from TemporalStreamingHooks - see section below)
4. DONE signal when complete
This decoupling means we can stream anything we want through Redis!
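As a sketch of what a frontend consumer of `stream:{task_id}` might do with those four event types, here is a minimal dispatcher. The event shapes (`type`, `delta`, `summary`, `name` fields) are illustrative assumptions, not the exact AgentEx wire format:

```python
# Hypothetical event shapes -- illustrative only, not the exact AgentEx wire format.
def handle_stream_event(event: dict) -> str:
    """Route one event from the stream:{task_id} channel to a UI action."""
    kind = event.get("type")
    if kind == "text_chunk":
        return f"append:{event['delta']}"       # render partial text as it arrives
    if kind == "reasoning_summary":
        return f"reasoning:{event['summary']}"  # show the reasoning panel
    if kind == "tool_lifecycle":
        return f"tool:{event['name']}"          # surface tool activity
    if kind == "done":
        return "close"                          # DONE signal: close the stream
    return "ignore"                             # unknown events are skipped
```

The point of the dispatcher is that the UI never cares *which* producer (StreamingModel or hooks) published the event; it only switches on the event type.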
---
## Streaming Lifecycle Events with Hooks
### Overview: Two Types of Streaming
The streaming implementation described above (StreamingModel) only streams **LLM text responses and reasoning tokens**. It does NOT stream tool calls or agent handoffs - those are collected during execution and returned in the final response.
To stream **agent lifecycle events** (tool requests, tool responses, handoffs), we provide **`TemporalStreamingHooks`** - a simpler, complementary approach that works alongside the streaming model.
| What Gets Streamed | StreamingModel | TemporalStreamingHooks |
|---|---|---|
| LLM text chunks | ✅ | ❌ |
| Reasoning tokens | ✅ | ❌ |
| Tool requests/responses | ❌ | ✅ |
| Agent handoffs | ❌ | ✅ |

**Best practice**: Use both together for complete streaming visibility!
### What are Hooks?
Hooks are callbacks provided by the OpenAI Agents SDK that fire during agent execution lifecycle events. They provide interception points for:
- `on_agent_start` - When an agent begins execution
- `on_agent_end` - When an agent completes execution
- `on_tool_start` - When a tool is about to be invoked
- `on_tool_end` - When a tool completes execution
- `on_handoff` - When control transfers between agents
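To make the firing order concrete, here is a toy, framework-free sketch. The method names mirror the SDK hooks, but `ToyHooks` and `run_toy_agent` are illustrative stand-ins, not the real `agents.RunHooks` API:

```python
# Toy illustration of hook firing order -- not the real agents.RunHooks API.
events: list = []

class ToyHooks:
    def on_agent_start(self, agent): events.append(f"agent_start:{agent}")
    def on_tool_start(self, tool):   events.append(f"tool_start:{tool}")
    def on_tool_end(self, tool):     events.append(f"tool_end:{tool}")
    def on_agent_end(self, agent):   events.append(f"agent_end:{agent}")

def run_toy_agent(hooks: ToyHooks) -> None:
    """Simulate one agent turn that calls a single tool."""
    hooks.on_agent_start("Assistant")
    hooks.on_tool_start("search")   # fires just before the tool is invoked
    hooks.on_tool_end("search")     # fires once the tool has returned
    hooks.on_agent_end("Assistant")

run_toy_agent(ToyHooks())
```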
### Why Use Hooks vs. Streaming Model?
The streaming model approach operates at the LLM response level - it sees tokens as they're generated but doesn't have visibility into tool lifecycle events. Hooks provide a simpler, more configurable way to track what the agent is doing without understanding the plugin architecture internals.
### Quick Start with TemporalStreamingHooks
```python
from agentex.lib.core.temporal.plugins.openai_agents import TemporalStreamingHooks
from agents import Agent, Runner

# Create an agent
agent = Agent(
    name="Assistant",
    model="gpt-4o",
    instructions="You are a helpful assistant",
    tools=[my_tool],  # Assume we have some tools
)

# Initialize hooks with your task_id
hooks = TemporalStreamingHooks(task_id="abc123")

# Run the agent - lifecycle events automatically stream to the UI!
result = await Runner.run(agent, "Hello", hooks=hooks)
```
That's it! Tool requests, tool responses, and handoffs are now automatically streamed to the AgentEx UI in real time.
### What Gets Streamed by Hooks
The `TemporalStreamingHooks` class automatically streams:
1. **Tool Requests** (`on_tool_start`):
   - Fires when a tool is about to execute
   - Streams `ToolRequestContent` with the tool name and call ID
   - Shows in the UI that a tool is being invoked
   - **Note**: Tool arguments are not available due to OpenAI SDK architecture

2. **Tool Responses** (`on_tool_end`):
   - Fires when a tool completes execution
   - Streams `ToolResponseContent` with the tool result
   - Shows tool output in the UI

3. **Agent Handoffs** (`on_handoff`):
   - Fires when control transfers between agents
   - Streams `TextContent` with a "Handoff from AgentA to AgentB" message
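As a rough sketch of what those three streamed payloads carry, here are illustrative builder functions. The field names are assumptions for this example, not the exact `ToolRequestContent` / `ToolResponseContent` / `TextContent` schemas:

```python
# Illustrative payload builders -- field names are assumptions, not the exact
# ToolRequestContent / ToolResponseContent / TextContent schemas.
def tool_request_payload(tool_name: str, call_id: str) -> dict:
    # arguments is always empty: on_tool_start never receives them (SDK limitation)
    return {"kind": "tool_request", "name": tool_name,
            "call_id": call_id, "arguments": {}}

def tool_response_payload(tool_name: str, result: str) -> dict:
    return {"kind": "tool_response", "name": tool_name, "result": result}

def handoff_payload(from_agent: str, to_agent: str) -> dict:
    return {"kind": "text", "text": f"Handoff from {from_agent} to {to_agent}"}
```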
### Using Hooks in Temporal Workflows
When using hooks inside a Temporal workflow, combine them with the streaming context:
```python
from temporalio import workflow

from agents import Agent
from agents.run import get_default_agent_runner
from agentex.lib.core.temporal.plugins.openai_agents import TemporalStreamingHooks

@workflow.defn
class MyWorkflow:
    @workflow.run
    async def run(self, params):
        agent = Agent(
            name="Assistant",
            instructions="You are a helpful assistant",
            model="gpt-4o",
            tools=[my_search_tool, my_calculator_tool],  # assume these exist
        )

        # Create hooks with the task_id for lifecycle event streaming
        hooks = TemporalStreamingHooks(task_id=params.task_id)
        # ... run the agent with these hooks, as in the Quick Start example
```
The `stream_lifecycle_content` activity then uses the AgentEx streaming infrastructure to push events to Redis, just like the streaming model does.
### Limitations
**Important**: Tool arguments are not available in `on_tool_start` hooks due to OpenAI SDK architecture. The hook signature doesn't include tool arguments - they're only passed to the actual tool function. This is why `arguments={}` appears in `ToolRequestContent`.
If you need tool arguments in your streaming data, you'll need to:
1. Stream them from within the tool function itself, or
2. Wait for `on_tool_end`, where you can log the full tool context
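A minimal sketch of the first workaround: the tool reports its own arguments, since `on_tool_start` never sees them. Here `emit` and `make_streaming_tool` are hypothetical stand-ins for whatever streaming call your deployment uses:

```python
# Workaround sketch: the tool itself streams its arguments, because
# on_tool_start never receives them. `emit` stands in for your streaming call.
def make_streaming_tool(emit):
    def my_search_tool(query: str, limit: int = 5) -> str:
        # Report the full arguments before doing the actual work
        emit({"tool": "my_search_tool",
              "arguments": {"query": query, "limit": limit}})
        return f"results for {query!r} (top {limit})"
    return my_search_tool

captured = []
tool = make_streaming_tool(captured.append)
tool("temporal hooks", limit=3)
```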
### Power Users: Direct RunHooks Subclassing
If you need complete control, ignore `TemporalStreamingHooks` and subclass `agents.RunHooks` directly:
```python
from agents import RunHooks
from temporalio import workflow
from agentex.lib.core.temporal.plugins.openai_agents.activities import stream_lifecycle_content

# Sketch: override only the lifecycle events you care about
class MyCustomHooks(RunHooks):
    async def on_tool_start(self, context, agent, tool):
        # Full control over what gets streamed, and when
        ...
```