Commit 865d24f ("fixing merge")
Merge commit: 2 parents 2918e43 + 8d58ea7

File tree

64 files changed: +4996 / −1245 lines


.release-please-manifest.json

Lines changed: 1 addition & 1 deletion

@@ -1,3 +1,3 @@
 {
-  ".": "0.5.0"
+  ".": "0.5.1"
 }

CHANGELOG.md

Lines changed: 8 additions & 0 deletions

@@ -1,5 +1,13 @@
 # Changelog
 
+## 0.5.1 (2025-10-29)
+
+Full Changelog: [v0.5.0...v0.5.1](https://github.com/scaleapi/agentex-python/compare/v0.5.0...v0.5.1)
+
+### Bug Fixes
+
+* **client:** close streams without requiring full consumption ([f56acae](https://github.com/scaleapi/agentex-python/commit/f56acae74ee83a116e735ca7bf68f2096aafaf6e))
+
 ## 0.5.0 (2025-10-28)
 
 Full Changelog: [v0.4.28...v0.5.0](https://github.com/scaleapi/agentex-python/compare/v0.4.28...v0.5.0)
examples/tutorials/00_sync/000_hello_acp/README.md

Lines changed: 39 additions & 3 deletions

@@ -1,8 +1,44 @@
 # [Sync] Hello ACP
 
 This is a simple AgentEx agent that just says hello and acknowledges the user's message to show which ACP methods need to be implemented for the sync ACP type.
-It is a SYNC agent.
+The simplest agent type: synchronous request/response pattern with a single `@acp.on_message_send` handler. Best for stateless operations that complete immediately.
 
-## Official Documentation
+## What You'll Learn
+- Building a basic synchronous agent
+- The `@acp.on_message_send` handler pattern
+- When to use sync vs agentic agents
 
-[000 Hello ACP](https://dev.agentex.scale.com/docs/tutorials/sync/000_hello_acp)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository (agentex) root
+
+## Quick Start
+
+```bash
+cd examples/tutorials/00_sync/000_hello_acp
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Code
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    return TextContent(
+        author="agent",
+        content=f"Echo: {params.content.content}"
+    )
+```
+
+That's it - one handler, immediate response. No task creation, no state management.
+
+## When to Use
+- Simple chatbots with no memory requirements
+- Quick Q&A or information lookup agents
+- Prototyping and testing agent responses
+- Operations that complete in under a second
+
+## Why This Matters
+Sync agents are the simplest way to get started with AgentEx. They're perfect for learning the basics and building stateless agents. Once you need conversation memory or task tracking, you'll graduate to agentic agents.
+
+**Next:** [010_multiturn](../010_multiturn/) - Add conversation memory to your agent
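The Key Code snippet in this README depends on the agentex SDK. As a framework-free sketch of the same one-handler, immediate-response pattern, the `TextContent` and `SendMessageParams` dataclasses below are simplified stand-ins, not the real SDK types:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical, simplified stand-ins for the SDK types, used only to
# illustrate the sync request/response pattern without installing agentex.
@dataclass
class TextContent:
    author: str
    content: str

@dataclass
class SendMessageParams:
    content: TextContent

async def handle_message_send(params: SendMessageParams) -> TextContent:
    # One handler, immediate response: echo the user's message back.
    return TextContent(author="agent", content=f"Echo: {params.content.content}")

reply = asyncio.run(handle_message_send(SendMessageParams(TextContent("user", "hi"))))
print(reply.content)  # Echo: hi
```

The real handler is registered on a FastACP server and invoked by the platform; the request/response shape is the same.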
examples/tutorials/00_sync/010_multiturn/README.md

Lines changed: 50 additions & 3 deletions

@@ -1,7 +1,54 @@
 # [Sync] Multiturn
 
-This tutorial demonstrates how to handle multiturn conversations in AgentEx agents using the Agent 2 Client Protocol (ACP).
+Handle multi-turn conversations in synchronous agents by manually maintaining conversation history and context between messages.
 
-## Official Documentation
+## What You'll Learn
+- How to handle conversation history in sync agents
+- Building context from previous messages
+- The limitations of stateless multiturn patterns
 
-[010 Multiturn](https://dev.agentex.scale.com/docs/tutorials/sync/010_multiturn)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository root
+- Understanding of basic sync agents (see [000_hello_acp](../000_hello_acp/))
+
+## Quick Start
+
+```bash
+cd examples/tutorials/00_sync/010_multiturn
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Pattern
+
+Sync agents are stateless by default. To handle multi-turn conversations, you need to:
+1. Accept conversation history in the request
+2. Maintain context across messages
+3. Return responses that build on previous exchanges
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    # Accept conversation history from client
+    history = params.conversation_history
+
+    # Build context from history
+    context = build_context(history)
+
+    # Generate response considering full context
+    response = generate_response(params.content, context)
+
+    return TextContent(author="agent", content=response)
+```
+
+The handler accepts history, builds context, and returns responses that reference previous exchanges.
+
+## When to Use
+- Simple chatbots that need conversation memory
+- When the client can maintain and send conversation history
+- Quick prototypes before building full agentic agents
+
+## Why This Matters
+While sync agents can handle conversations, you're responsible for managing state on the client side. This becomes complex quickly. For production conversational agents, consider agentic agents ([10_agentic/00_base/010_multiturn](../../10_agentic/00_base/010_multiturn/)) where the platform manages state automatically.
+
+**Next:** [020_streaming](../020_streaming/) - Stream responses in real-time
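`build_context` and `generate_response` are referenced but not defined in the README snippet. One plausible sketch, assuming history arrives as (author, text) pairs and using a placeholder in place of a real LLM call:

```python
# Hypothetical helpers: these names appear in the tutorial snippet but are
# not defined there, so the bodies below are illustrative only.
def build_context(history):
    # Flatten prior (author, text) turns into a single prompt prefix.
    return "\n".join(f"{author}: {text}" for author, text in history)

def generate_response(message, context):
    # Placeholder "LLM": acknowledge the latest message and how much
    # prior context it was given.
    turns = context.count("\n") + 1 if context else 0
    return f"(after {turns} prior turns) You said: {message}"

history = [("user", "hello"), ("agent", "hi there")]
ctx = build_context(history)
print(generate_response("how are you?", ctx))
```

In a real handler, `context` would be prepended to the LLM prompt so the model can reference earlier turns.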

examples/tutorials/00_sync/010_multiturn/project/acp.py

Lines changed: 24 additions & 33 deletions

@@ -1,14 +1,17 @@
 import os
 from typing import Union, AsyncGenerator
 
+from agents import Agent, Runner, RunConfig
+
 from agentex.lib import adk
 from agentex.types import TextContent
 from agentex.lib.types.acp import SendMessageParams
+from agentex.lib.types.converters import convert_task_messages_to_oai_agents_inputs
 from agentex.lib.utils.model_utils import BaseModel
-from agentex.lib.types.llm_messages import LLMConfig, UserMessage, SystemMessage, AssistantMessage
 from agentex.lib.sdk.fastacp.fastacp import FastACP
 from agentex.types.task_message_update import TaskMessageUpdate
 from agentex.types.task_message_content import TaskMessageContent
+from agentex.lib.adk.providers._modules.sync_provider import SyncStreamingProvider
 
 # Create an ACP server
 acp = FastACP.create(
@@ -66,21 +69,27 @@ async def handle_message_send(
     task_messages = await adk.messages.list(task_id=params.task.id)
 
     #########################################################
-    # 3. Convert task messages to LLM messages.
+    # 3. Run the agent with OpenAI Agents SDK
     #########################################################
 
-    # This might seem duplicative, but the split between TaskMessage and LLMMessage is intentional and important.
+    # Initialize the provider and run config to allow for tracing
+    provider = SyncStreamingProvider(
+        trace_id=params.task.id,
+    )
+
+    run_config = RunConfig(
+        model_provider=provider,
+    )
+
+    # Initialize the agent
+    test_agent = Agent(name="assistant", instructions=state.system_prompt, model=state.model)
+
+    # Convert task messages to OpenAI Agents SDK format
+    input_list = convert_task_messages_to_oai_agents_inputs(task_messages)
+
+    # Run the agent
+    result = await Runner.run(test_agent, input_list, run_config=run_config)
 
-    llm_messages = [
-        SystemMessage(content=state.system_prompt),
-        *[
-            UserMessage(content=getattr(message.content, "content", ""))
-            if getattr(message.content, "author", None) == "user"
-            else AssistantMessage(content=getattr(message.content, "content", ""))
-            for message in task_messages
-            if getattr(message.content, "type", None) == "text"
-        ],
-    ]
 
     # TaskMessages are messages that are sent between an Agent and a Client. They are fundamentally decoupled from messages sent to the LLM. This is because you may want to send additional metadata to allow the client to render the message on the UI differently.
 
@@ -94,25 +103,7 @@ async def handle_message_send(
     # - If using multiple LLMs, but one LLM's output should not be sent to the user (i.e. a critic model), you can leverage the State as an internal storage mechanism to store the critic model's conversation history. This is a powerful and flexible way to handle complex scenarios.
 
     #########################################################
-    # 4. Call an LLM to respond to the user's message.
+    # 4. Return the agent response to the client.
     #########################################################
 
-    # Call an LLM to respond to the user's message
-    chat_completion = await adk.providers.litellm.chat_completion(
-        llm_config=LLMConfig(model=state.model, messages=llm_messages),
-        trace_id=params.task.id,
-    )
-
-    #########################################################
-    # 5. Return the agent response to the client.
-    #########################################################
-
-    # The Agentex server automatically commits input and output messages to the database so you don't need to do this yourself, simply process the input content and return the output content.
-
-    # Return the agent response to the client
-    if chat_completion.choices[0].message:
-        content_str = chat_completion.choices[0].message.content or ""
-    else:
-        content_str = ""
-
-    return TextContent(author="agent", content=content_str)
+    return TextContent(author="agent", content=result.final_output)
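The removed `llm_messages` comprehension mapped TaskMessages onto role-based chat messages by hand, which `convert_task_messages_to_oai_agents_inputs` now replaces. A standalone sketch of that conversion logic, with plain dicts standing in for the SDK's message types:

```python
# Illustrative re-implementation of the deleted conversion: TaskMessages
# (agent <-> client) mapped onto role-based chat messages. Plain dicts
# stand in for the SDK's TaskMessage / LLM message types.
def to_llm_messages(system_prompt, task_messages):
    messages = [{"role": "system", "content": system_prompt}]
    for m in task_messages:
        if m.get("type") != "text":
            continue  # skip non-text content, as the original comprehension did
        role = "user" if m.get("author") == "user" else "assistant"
        messages.append({"role": role, "content": m.get("content", "")})
    return messages

msgs = to_llm_messages("Be helpful.", [
    {"type": "text", "author": "user", "content": "hi"},
    {"type": "text", "author": "agent", "content": "hello!"},
])
print(msgs)
```

The split exists because TaskMessages carry client-rendering metadata that should not be forwarded to the LLM verbatim.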

examples/tutorials/00_sync/010_multiturn/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ description = "An AgentEx agent"
 readme = "README.md"
 requires-python = ">=3.12"
 dependencies = [
-    "agentex-sdk",
+    "agentex-sdk==0.4.28",
     "scale-gp",
 ]
 

examples/tutorials/00_sync/020_streaming/README.md

Lines changed: 39 additions & 3 deletions

@@ -1,9 +1,45 @@
 # [Sync] Streaming
 
-This tutorial demonstrates how to implement streaming responses in AgentEx agents using the Agent 2 Client Protocol (ACP).
+Stream responses progressively using async generators instead of returning a single message. Enables showing partial results as they're generated.
 
-## Official Documentation
+## What You'll Learn
+- How to stream responses using async generators
+- The `yield` pattern for progressive updates
+- When streaming improves user experience
 
-[020 Streaming](https://dev.agentex.scale.com/docs/tutorials/sync/020_streaming)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository root
+- Understanding of basic sync agents (see [000_hello_acp](../000_hello_acp/))
 
+## Quick Start
 
+```bash
+cd examples/tutorials/00_sync/020_streaming
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Code
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    async def stream_response():
+        for chunk in response_chunks:
+            yield TaskMessageUpdate(content=TextContent(...))
+
+    return stream_response()
+```
+
+Return an async generator instead of a single response - each `yield` sends an update to the client.
+
+## When to Use
+- Streaming LLM responses (OpenAI, Anthropic, etc.)
+- Large data processing with progress updates
+- Any operation that takes >1 second to complete
+- Improving perceived responsiveness
+
+## Why This Matters
+Streaming dramatically improves user experience for longer operations. Instead of waiting 10 seconds for a complete response, users see results immediately as they're generated. This is essential for modern AI agents.
+
+**Next:** Ready for task management? → [10_agentic/00_base/000_hello_acp](../../10_agentic/00_base/000_hello_acp/)
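The async-generator pattern in the Key Code above can be exercised without the framework. A minimal, runnable sketch where plain strings stand in for `TaskMessageUpdate` objects:

```python
import asyncio

# Framework-free sketch of the yield pattern: each yield is one incremental
# update the client can render immediately. Plain strings stand in for
# TaskMessageUpdate objects.
async def stream_response(chunks):
    for chunk in chunks:
        await asyncio.sleep(0)  # stand-in for awaiting the next LLM chunk
        yield chunk

async def collect():
    parts = []
    async for update in stream_response(["Hel", "lo, ", "world"]):
        parts.append(update)  # a real client would render each part as it arrives
    return "".join(parts)

streamed = asyncio.run(collect())
print(streamed)  # Hello, world
```

The client sees three partial updates rather than one final string, which is the whole point of streaming.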

examples/tutorials/00_sync/020_streaming/project/acp.py

Lines changed: 35 additions & 40 deletions

@@ -1,19 +1,19 @@
 import os
 from typing import Union, AsyncGenerator
 
+from agents import Agent, Runner, RunConfig
+
 from agentex.lib import adk
 from agentex.lib.types.acp import SendMessageParams
+from agentex.lib.types.converters import convert_task_messages_to_oai_agents_inputs
 from agentex.lib.utils.model_utils import BaseModel
-from agentex.lib.types.llm_messages import LLMConfig, UserMessage, SystemMessage, AssistantMessage
 from agentex.lib.sdk.fastacp.fastacp import FastACP
-from agentex.types.task_message_delta import TextDelta
-from agentex.types.task_message_update import (
-    TaskMessageUpdate,
-    StreamTaskMessageDone,
-    StreamTaskMessageFull,
-    StreamTaskMessageDelta,
-)
+from agentex.types.task_message_update import TaskMessageUpdate, StreamTaskMessageFull
 from agentex.types.task_message_content import TextContent, TaskMessageContent
+from agentex.lib.adk.providers._modules.sync_provider import (
+    SyncStreamingProvider,
+    convert_openai_to_agentex_events,
+)
 
 # Create an ACP server
 acp = FastACP.create(
@@ -69,40 +69,35 @@ async def handle_message_send(
 
     task_messages = await adk.messages.list(task_id=params.task.id)
 
-    llm_messages = [
-        SystemMessage(content=state.system_prompt),
-        *[
-            UserMessage(content=getattr(message.content, "content", ""))
-            if getattr(message.content, "author", None) == "user"
-            else AssistantMessage(content=getattr(message.content, "content", ""))
-            for message in task_messages
-            if message.content and getattr(message.content, "type", None) == "text"
-        ],
-    ]
 
-    #########################################################
-    # 4. Call an LLM to respond to the user's message and stream the response to the client.
-    #########################################################
+    # Initialize the provider and run config to allow for tracing
+    provider = SyncStreamingProvider(
+        trace_id=params.task.id,
+    )
 
-    # Call an LLM to respond to the user's message
+    # Initialize the run config to allow for tracing and streaming
+    run_config = RunConfig(
+        model_provider=provider,
+    )
 
-    print(f"Calling LLM with model {state.model} and messages {llm_messages}")
 
-    # The Agentex server automatically commits input and output messages to the database so you don't need to do this yourself, simply process the input content and return the output content.
+    test_agent = Agent(name="assistant", instructions=state.system_prompt, model=state.model)
+
+    # Convert task messages to OpenAI Agents SDK format
+    input_list = convert_task_messages_to_oai_agents_inputs(task_messages)
+
+    # Run the agent and stream the events
+    result = Runner.run_streamed(test_agent, input_list, run_config=run_config)
+
+
+    #########################################################
+    # 4. Stream the events to the client.
+    #########################################################
+    # Convert the OpenAI events to Agentex events
+    # This is done by converting the OpenAI events to Agentex events and yielding them to the client
+    stream = result.stream_events()
+
+    # Yield the Agentex events to the client
+    async for agentex_event in convert_openai_to_agentex_events(stream):
+        yield agentex_event
 
-    message_index = 0
-    async for chunk in adk.providers.litellm.chat_completion_stream(
-        llm_config=LLMConfig(model=state.model, messages=llm_messages, stream=True),
-        trace_id=params.task.id,
-    ):
-        if chunk and chunk.choices and chunk.choices[0].delta and chunk.choices[0].delta.content:
-            yield StreamTaskMessageDelta(
-                type="delta",
-                index=message_index,
-                delta=TextDelta(type="text", text_delta=chunk.choices[0].delta.content or ""),
-            )
-
-    yield StreamTaskMessageDone(
-        type="done",
-        index=message_index,
-    )
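The new code delegates event conversion to `convert_openai_to_agentex_events`. A hedged sketch of what such a converter might do, mirroring the delta/done shape of the removed hand-rolled loop (the dict event shapes below are illustrative, not the real Agentex types):

```python
import asyncio

# Illustrative converter: one indexed "delta" event per non-empty text
# chunk, then a single "done" event, echoing the removed
# StreamTaskMessageDelta / StreamTaskMessageDone loop.
async def chunks_to_events(chunks, index=0):
    async for text in chunks:
        if text:  # skip empty deltas, as the old loop's guard did
            yield {"type": "delta", "index": index, "text_delta": text}
    yield {"type": "done", "index": index}

async def fake_stream():
    # Stand-in for result.stream_events(): raw text pieces, one empty.
    for piece in ["Hi", " there", ""]:
        yield piece

async def collect():
    return [event async for event in chunks_to_events(fake_stream())]

events = asyncio.run(collect())
print([event["type"] for event in events])  # ['delta', 'delta', 'done']
```

The real converter additionally maps OpenAI Agents SDK event objects onto Agentex stream types; only the delta-then-done sequencing is sketched here.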

examples/tutorials/00_sync/020_streaming/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ description = "An AgentEx agent that does multiturn streaming chat"
 readme = "README.md"
 requires-python = ">=3.12"
 dependencies = [
-    "agentex-sdk",
+    "agentex-sdk==0.4.28",
     "scale-gp",
 ]
 
1515
