Commit 7e5fa8e

Merge branch 'main' into stas/agent_input_type
2 parents: 57e4245 + a08a0a5

76 files changed: +5196 −1379 lines
.github/workflows/publish-pypi.yml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # This workflow is triggered when a GitHub release is created.
 # It can also be run manually to re-publish to PyPI in case it failed for some reason.
-# You can run this workflow by navigating to https://www.github.com/scaleapi/agentex-python/actions/workflows/publish-pypi.yml
+# You can run this workflow by navigating to https://www.github.com/scaleapi/scale-agentex-python/actions/workflows/publish-pypi.yml
 name: Publish PyPI
 on:
   workflow_dispatch:

.github/workflows/release-doctor.yml

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ jobs:
   release_doctor:
     name: release doctor
     runs-on: ubuntu-latest
-    if: github.repository == 'scaleapi/agentex-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')
+    if: github.repository == 'scaleapi/scale-agentex-python' && (github.event_name == 'push' || github.event_name == 'workflow_dispatch' || startsWith(github.head_ref, 'release-please') || github.head_ref == 'next')
 
     steps:
       - uses: actions/checkout@v4

.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "0.5.0"
+  ".": "0.5.3"
 }

.stats.yml

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 configured_endpoints: 34
 openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/sgp%2Fagentex-sdk-2b422fbf02ff3b77795fb8c71cbe784de3a3add48560655ba4fe7f3fcc509995.yml
 openapi_spec_hash: bca5c04d823694c87417dae188480291
-config_hash: 6481ea6b42040f435dedcb00a98f35f8
+config_hash: 0197f86ba1a4b1b5ce813d0e62138588

CHANGELOG.md

Lines changed: 22 additions & 0 deletions
@@ -1,5 +1,27 @@
 # Changelog
 
+## 0.5.3 (2025-10-31)
+
+Full Changelog: [v0.5.2...v0.5.3](https://github.com/scaleapi/scale-agentex-python/compare/v0.5.2...v0.5.3)
+
+### Chores
+
+* re apply example updates ([043973b](https://github.com/scaleapi/scale-agentex-python/commit/043973bec649ab2304eff7a313938e1e3e5377e5))
+
+## 0.5.2 (2025-10-31)
+
+Full Changelog: [v0.5.0...v0.5.2](https://github.com/scaleapi/scale-agentex-python/compare/v0.5.0...v0.5.2)
+
+### Features
+
+* **api:** manual updates ([dc66b57](https://github.com/scaleapi/scale-agentex-python/commit/dc66b57618525669b3aa15676343ef542675a5f9))
+* bump the helm chart version ([1ffafb0](https://github.com/scaleapi/scale-agentex-python/commit/1ffafb0406138d6abd84254fa394b88c4a28ce70))
+
+### Chores
+
+* sync repo ([0e05416](https://github.com/scaleapi/scale-agentex-python/commit/0e05416219ca93ae347e6175804bc0f2259a6b44))
+
 ## 0.5.0 (2025-10-28)
 
 Full Changelog: [v0.4.28...v0.5.0](https://github.com/scaleapi/agentex-python/compare/v0.4.28...v0.5.0)

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions
@@ -62,7 +62,7 @@ If you’d like to use the repository from source, you can either install from g
 To install via git:
 
 ```sh
-$ pip install git+ssh://[email protected]/scaleapi/agentex-python.git
+$ pip install git+ssh://[email protected]/scaleapi/scale-agentex-python.git
 ```
 
 Alternatively, you can build from source and install the wheel file:
@@ -120,7 +120,7 @@ the changes aren't made through the automated pipeline, you may want to make rel
 
 ### Publish with a GitHub workflow
 
-You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/scaleapi/agentex-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
+You can release to package managers by using [the `Publish PyPI` GitHub action](https://www.github.com/scaleapi/scale-agentex-python/actions/workflows/publish-pypi.yml). This requires a setup organization or repository secret to be set up.
 
 ### Publish manually
 
README.md

Lines changed: 3 additions & 3 deletions
@@ -268,9 +268,9 @@ task = response.parse() # get the object that `tasks.list()` would have returne
 print(task)
 ```
 
-These methods return an [`APIResponse`](https://github.com/scaleapi/agentex-python/tree/main/src/agentex/_response.py) object.
+These methods return an [`APIResponse`](https://github.com/scaleapi/scale-agentex-python/tree/main/src/agentex/_response.py) object.
 
-The async client returns an [`AsyncAPIResponse`](https://github.com/scaleapi/agentex-python/tree/main/src/agentex/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+The async client returns an [`AsyncAPIResponse`](https://github.com/scaleapi/scale-agentex-python/tree/main/src/agentex/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
 
 #### `.with_streaming_response`
 
@@ -374,7 +374,7 @@ This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) con
 
 We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
 
-We are keen for your feedback; please open an [issue](https://www.github.com/scaleapi/agentex-python/issues) with questions, bugs, or suggestions.
+We are keen for your feedback; please open an [issue](https://www.github.com/scaleapi/scale-agentex-python/issues) with questions, bugs, or suggestions.
 
 ### Determining the installed version
 
Lines changed: 39 additions & 2 deletions
@@ -1,7 +1,44 @@
 # [Sync] Hello ACP
 
 This is a simple AgentEx agent that just says hello and acknowledges the user's message to show which ACP methods need to be implemented for the sync ACP type.
+The simplest agent type: synchronous request/response pattern with a single `@acp.on_message_send` handler. Best for stateless operations that complete immediately.
 
-## Official Documentation
+## What You'll Learn
+- Building a basic synchronous agent
+- The `@acp.on_message_send` handler pattern
+- When to use sync vs agentic agents
 
-[000 Hello ACP](https://dev.agentex.scale.com/docs/tutorials/sync/000_hello_acp)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository (agentex) root
+
+## Quick Start
+
+```bash
+cd examples/tutorials/00_sync/000_hello_acp
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Code
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    return TextContent(
+        author="agent",
+        content=f"Echo: {params.content.content}"
+    )
+```
+
+That's it - one handler, immediate response. No task creation, no state management.
+
+## When to Use
+- Simple chatbots with no memory requirements
+- Quick Q&A or information lookup agents
+- Prototyping and testing agent responses
+- Operations that complete in under a second
+
+## Why This Matters
+Sync agents are the simplest way to get started with AgentEx. They're perfect for learning the basics and building stateless agents. Once you need conversation memory or task tracking, you'll graduate to agentic agents.
+
+**Next:** [010_multiturn](../010_multiturn/) - Add conversation memory to your agent
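The echo handler added in the README diff above can be exercised without the AgentEx runtime. In this sketch, `_ACP`, `SendMessageParams`, and `TextContent` are hypothetical stand-ins for the SDK's decorator and types, reduced to dataclasses so the example is self-contained:

```python
import asyncio
from dataclasses import dataclass

# Hypothetical stand-ins for the AgentEx SDK types used in the tutorial diff.
@dataclass
class TextContent:
    author: str
    content: str

@dataclass
class SendMessageParams:
    content: TextContent

class _ACP:
    """Minimal stub: records the registered handler like the real decorator would."""
    def __init__(self):
        self.handler = None

    def on_message_send(self, fn):
        self.handler = fn
        return fn

acp = _ACP()

@acp.on_message_send
async def handle_message_send(params: SendMessageParams) -> TextContent:
    # Echo the user's text back, prefixed, mirroring the tutorial handler.
    return TextContent(author="agent", content=f"Echo: {params.content.content}")

reply = asyncio.run(acp.handler(SendMessageParams(TextContent("user", "hi"))))
print(reply.content)  # → Echo: hi
```

The stub registers one coroutine and invokes it directly; the real runtime would dispatch incoming ACP messages to it instead.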
Lines changed: 50 additions & 3 deletions
@@ -1,7 +1,54 @@
 # [Sync] Multiturn
 
-This tutorial demonstrates how to handle multiturn conversations in AgentEx agents using the Agent 2 Client Protocol (ACP).
+Handle multi-turn conversations in synchronous agents by manually maintaining conversation history and context between messages.
 
-## Official Documentation
+## What You'll Learn
+- How to handle conversation history in sync agents
+- Building context from previous messages
+- The limitations of stateless multiturn patterns
 
-[010 Multiturn](https://dev.agentex.scale.com/docs/tutorials/sync/010_multiturn)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository root
+- Understanding of basic sync agents (see [000_hello_acp](../000_hello_acp/))
+
+## Quick Start
+
+```bash
+cd examples/tutorials/00_sync/010_multiturn
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Pattern
+
+Sync agents are stateless by default. To handle multi-turn conversations, you need to:
+1. Accept conversation history in the request
+2. Maintain context across messages
+3. Return responses that build on previous exchanges
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    # Accept conversation history from client
+    history = params.conversation_history
+
+    # Build context from history
+    context = build_context(history)
+
+    # Generate response considering full context
+    response = generate_response(params.content, context)
+
+    return TextContent(author="agent", content=response)
+```
+
+The handler accepts history, builds context, and returns responses that reference previous exchanges.
+
+## When to Use
+- Simple chatbots that need conversation memory
+- When client can maintain and send conversation history
+- Quick prototypes before building full agentic agents
+
+## Why This Matters
+While sync agents can handle conversations, you're responsible for managing state on the client side. This becomes complex quickly. For production conversational agents, consider agentic agents ([10_agentic/00_base/010_multiturn](../../10_agentic/00_base/010_multiturn/)) where the platform manages state automatically.
+
+**Next:** [020_streaming](../020_streaming/) - Stream responses in real-time
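The `build_context` and `generate_response` helpers named in the README diff above are left undefined there; a minimal sketch of what they might look like, with `Turn` as a hypothetical history record and the "LLM call" replaced by a deterministic stand-in:

```python
from dataclasses import dataclass

# Hypothetical record for one entry of the client-supplied history.
@dataclass
class Turn:
    author: str
    content: str

def build_context(history: list[Turn]) -> str:
    # Flatten prior turns into a prompt-style transcript.
    return "\n".join(f"{t.author}: {t.content}" for t in history)

def generate_response(message: str, context: str) -> str:
    # Stand-in for an LLM call: just acknowledge how much history was seen.
    n = len(context.splitlines()) if context else 0
    return f"(after {n} prior turns) You said: {message}"

history = [Turn("user", "hello"), Turn("agent", "hi there")]
print(generate_response("how are you?", build_context(history)))
# → (after 2 prior turns) You said: how are you?
```

In a real handler the transcript would typically be passed to a model as the system or message context; the key point is that the client, not the agent, carries the history between calls.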
Lines changed: 39 additions & 3 deletions
@@ -1,9 +1,45 @@
 # [Sync] Streaming
 
-This tutorial demonstrates how to implement streaming responses in AgentEx agents using the Agent 2 Client Protocol (ACP).
+Stream responses progressively using async generators instead of returning a single message. Enables showing partial results as they're generated.
 
-## Official Documentation
+## What You'll Learn
+- How to stream responses using async generators
+- The `yield` pattern for progressive updates
+- When streaming improves user experience
 
-[020 Streaming](https://dev.agentex.scale.com/docs/tutorials/sync/020_streaming)
+## Prerequisites
+- Development environment set up (see [main repo README](https://github.com/scaleapi/scale-agentex))
+- Backend services running: `make dev` from repository root
+- Understanding of basic sync agents (see [000_hello_acp](../000_hello_acp/))
 
+## Quick Start
 
+```bash
+cd examples/tutorials/00_sync/020_streaming
+uv run agentex agents run --manifest manifest.yaml
+```
+
+## Key Code
+
+```python
+@acp.on_message_send
+async def handle_message_send(params: SendMessageParams):
+    async def stream_response():
+        for chunk in response_chunks:
+            yield TaskMessageUpdate(content=TextContent(...))
+
+    return stream_response()
+```
+
+Return an async generator instead of a single response - each `yield` sends an update to the client.
+
+## When to Use
+- Streaming LLM responses (OpenAI, Anthropic, etc.)
+- Large data processing with progress updates
+- Any operation that takes >1 second to complete
+- Improving perceived responsiveness
+
+## Why This Matters
+Streaming dramatically improves user experience for longer operations. Instead of waiting 10 seconds for a complete response, users see results immediately as they're generated. This is essential for modern AI agents.
+
+**Next:** Ready for task management? → [10_agentic/00_base/000_hello_acp](../../10_agentic/00_base/000_hello_acp/)
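The async-generator pattern from the streaming README diff above can be sketched without the SDK. This is an illustrative reduction: plain strings stand in for the `TaskMessageUpdate` objects the real handler would yield:

```python
import asyncio

async def stream_response(chunks):
    # Each yield corresponds to one progressive update sent to the client;
    # the real handler would wrap each chunk in a TaskMessageUpdate.
    for chunk in chunks:
        yield chunk

async def collect():
    received = []
    # The client side consumes the generator with `async for`,
    # rendering each partial update as it arrives.
    async for update in stream_response(["Hel", "lo, ", "world"]):
        received.append(update)
    return received

print("".join(asyncio.run(collect())))  # → Hello, world
```

The design point is that the handler returns the generator itself rather than awaiting a final value, so the transport can forward updates as soon as each `yield` fires.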
