Merged
Empty file added .github/scripts/sync_agents.py
Empty file.
4 changes: 1 addition & 3 deletions .github/workflows/main.yml
@@ -2,7 +2,5 @@ name: Test AgentEx Tutorials

on:
workflow_dispatch:

workflow_call:


workflow_call:
4 changes: 2 additions & 2 deletions .github/workflows/publish-pypi.yml
@@ -21,8 +21,8 @@ jobs:
curl -sSf https://rye.astral.sh/get | bash
echo "$HOME/.rye/shims" >> $GITHUB_PATH
env:
RYE_VERSION: '0.44.0'
RYE_INSTALL_OPTION: '--yes'
RYE_VERSION: "0.44.0"
RYE_INSTALL_OPTION: "--yes"

- name: Publish to PyPI
run: |
4 changes: 3 additions & 1 deletion .gitignore
@@ -14,4 +14,6 @@ dist
codegen.log
Brewfile.lock.json

.DS_Store
.DS_Store

examples/**/uv.lock
15 changes: 8 additions & 7 deletions examples/tutorials/00_sync/000_hello_acp/Dockerfile
@@ -22,16 +22,17 @@ RUN uv pip install --system --upgrade pip setuptools wheel

ENV UV_HTTP_TIMEOUT=1000

# Copy just the requirements file to optimize caching
COPY 000_hello_acp/requirements.txt /app/requirements.txt
# Copy pyproject.toml and README.md to install dependencies
COPY 000_hello_acp/pyproject.toml /app/000_hello_acp/pyproject.toml
COPY 000_hello_acp/README.md /app/000_hello_acp/README.md

WORKDIR /app/

# Install the required Python packages
RUN uv pip install --system -r requirements.txt
WORKDIR /app/000_hello_acp

# Copy the project code
COPY 000_hello_acp/project /app/project
COPY 000_hello_acp/project /app/000_hello_acp/project

# Install the required Python packages from pyproject.toml
RUN uv pip install --system .

# Set environment variables
ENV PYTHONPATH=/app
20 changes: 9 additions & 11 deletions examples/tutorials/00_sync/000_hello_acp/manifest.yaml
@@ -15,7 +15,7 @@
build:
context:
# Root directory for the build context
root: ../ # Keep this as the default root
root: ../ # Keep this as the default root

# Paths to include in the Docker build context
# Must include:
@@ -34,14 +34,13 @@ build:
# Helps keep build context small and builds fast
dockerignore: 000_hello_acp/.dockerignore


# Local Development Configuration
# -----------------------------
# Only used when running the agent locally
local_development:
agent:
port: 8000 # Port where your local ACP server is running
host_address: host.docker.internal # Host address for Docker networking (host.docker.internal for Docker, localhost for direct)
port: 8000 # Port where your local ACP server is running
host_address: host.docker.internal # Host address for Docker networking (host.docker.internal for Docker, localhost for direct)

# File paths for local development (relative to this manifest.yaml)
paths:
@@ -53,7 +52,6 @@ local_development:
# /absolute/path/acp.py (absolute path)
acp: project/acp.py


# Agent Configuration
# -----------------
agent:
@@ -83,39 +81,39 @@ agent:
# secret_name: openai-api-key
# secret_key: api-key

# Optional: Set Environment variables for running your agent locally as well
# Optional: Set Environment variables for running your agent locally as well
# as for deployment later on
# env:
# - name: OPENAI_BASE_URL
# value: "https://api.openai.com/v1"
# - name: ACCOUNT_ID
# value: "your_account_id_here"


# Deployment Configuration
# -----------------------
# Configuration for deploying your agent to Kubernetes clusters
deployment:
# Container image configuration
image:
repository: "" # Update with your container registry
tag: "latest" # Default tag, should be versioned in production
tag: "latest" # Default tag, should be versioned in production

# Global deployment settings that apply to all clusters
# These can be overridden in cluster-specific files (deploy/*.yaml)
global:
agent:
name: "s000-hello-acp"
description: "An AgentEx agent that just says hello and acknowledges the user's message"

# Default replica count
replicaCount: 1

# Default resource requirements
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
memory: "2Gi"

4 changes: 3 additions & 1 deletion examples/tutorials/00_sync/000_hello_acp/pyproject.toml
@@ -11,6 +11,8 @@ requires-python = ">=3.12"
dependencies = [
"agentex-sdk",
"scale-gp",
"pytest",
"pytest-xdist"
]

[project.optional-dependencies]
@@ -30,4 +32,4 @@ target-version = ['py312']

[tool.isort]
profile = "black"
line_length = 88
line_length = 88
128 changes: 128 additions & 0 deletions examples/tutorials/00_sync/000_hello_acp/tests/test_agent.py
@@ -0,0 +1,128 @@
"""
Sample tests for AgentEx ACP agent.

This test suite demonstrates how to test the main AgentEx API functions:
- Non-streaming message sending
- Streaming message sending
- Task creation via RPC

To run these tests:
1. Make sure the agent is running (via docker-compose or `agentex agents run`)
2. Set the AGENTEX_API_BASE_URL environment variable if not using default
3. Run: pytest test_agent.py -v

Configuration:
- AGENTEX_API_BASE_URL: Base URL for the AgentEx server (default: http://localhost:5003)
- AGENT_NAME: Name of the agent to test (default: hello-acp)
"""

import os
from agentex.types import TextContentParam, TextDelta, TextContent
from agentex.types.agent_rpc_params import ParamsSendMessageRequest
from agentex.types.task_message_update import StreamTaskMessageDelta, StreamTaskMessageFull
import pytest
from agentex import Agentex


# Configuration from environment variables
AGENTEX_API_BASE_URL = os.environ.get("AGENTEX_API_BASE_URL", "http://localhost:5003")
AGENT_NAME = os.environ.get("AGENT_NAME", "s000-hello-acp")


@pytest.fixture
def client():
"""Create an AgentEx client instance for testing."""
client = Agentex(base_url=AGENTEX_API_BASE_URL)
yield client
# Clean up: close the client connection
client.close()


@pytest.fixture
def agent_name():
"""Return the agent name for testing."""
return AGENT_NAME


class TestNonStreamingMessages:
"""Test non-streaming message sending."""

def test_send_simple_message(self, client: Agentex, agent_name: str):
"""Test sending a simple message and receiving a response."""

message_content = "Hello, Agent! How are you?"
response = client.agents.send_message(
agent_name=agent_name,
params=ParamsSendMessageRequest(
content=TextContentParam(
author="user",
content=message_content,
type="text",
)
),
)
result = response.result
assert result is not None
assert len(result) == 1
message = result[0]
assert isinstance(message.content, TextContent)
assert (
message.content.content
== f"Hello! I've received your message. Here's a generic response, but in future tutorials we'll see how you can get me to intelligently respond to your message. This is what I heard you say: {message_content}"
)


class TestStreamingMessages:
"""Test streaming message sending."""

def test_stream_simple_message(self, client: Agentex, agent_name: str):
"""Test streaming a simple message and aggregating deltas."""

message_content = "Hello, Agent! Can you stream your response?"
aggregated_content = ""
full_content = ""
received_chunks = False

for chunk in client.agents.send_message_stream(
agent_name=agent_name,
params=ParamsSendMessageRequest(
content=TextContentParam(
author="user",
content=message_content,
type="text",
)
),
):
received_chunks = True
task_message_update = chunk.result
# Collect text deltas as they arrive or check full messages
if isinstance(task_message_update, StreamTaskMessageDelta) and task_message_update.delta is not None:
delta = task_message_update.delta
if isinstance(delta, TextDelta) and delta.text_delta is not None:
aggregated_content += delta.text_delta

elif isinstance(task_message_update, StreamTaskMessageFull):
content = task_message_update.content
if isinstance(content, TextContent):
full_content = content.content

if not full_content and not aggregated_content:
raise AssertionError("No content was received in the streaming response.")
if not received_chunks:
raise AssertionError("No streaming chunks were received, when at least 1 was expected.")

if full_content:
assert (
full_content
== f"Hello! I've received your message. Here's a generic response, but in future tutorials we'll see how you can get me to intelligently respond to your message. This is what I heard you say: {message_content}"
)

if aggregated_content:
assert (
aggregated_content
== f"Hello! I've received your message. Here's a generic response, but in future tutorials we'll see how you can get me to intelligently respond to your message. This is what I heard you say: {message_content}"
)


if __name__ == "__main__":
pytest.main([__file__, "-v"])
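The streaming test above accumulates `TextDelta` chunks in arrival order into one string before asserting on it. That aggregation pattern can be sketched in isolation with plain stand-in classes (the `TextDelta` and `StreamTaskMessageDelta` classes below are simplified stand-ins for illustration, not the real SDK types):

```python
from dataclasses import dataclass


@dataclass
class TextDelta:
    text_delta: str


@dataclass
class StreamTaskMessageDelta:
    delta: TextDelta


def aggregate_deltas(chunks):
    """Accumulate text deltas in arrival order, skipping empty or missing ones."""
    aggregated = ""
    for chunk in chunks:
        if isinstance(chunk, StreamTaskMessageDelta) and chunk.delta is not None:
            if chunk.delta.text_delta:
                aggregated += chunk.delta.text_delta
    return aggregated


chunks = [
    StreamTaskMessageDelta(TextDelta("Hello")),
    StreamTaskMessageDelta(TextDelta(", ")),
    StreamTaskMessageDelta(TextDelta("world")),
]
print(aggregate_deltas(chunks))  # → Hello, world
```

Because deltas are concatenated in the order they arrive, the test can compare the aggregate against the full expected sentence without caring how the server chose to chunk it.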
18 changes: 10 additions & 8 deletions examples/tutorials/00_sync/010_multiturn/Dockerfile
@@ -22,19 +22,21 @@ RUN uv pip install --system --upgrade pip setuptools wheel

ENV UV_HTTP_TIMEOUT=1000

# Copy just the requirements file to optimize caching
COPY 010_multiturn/requirements.txt /app/requirements.txt
# Copy pyproject.toml and README.md to install dependencies
COPY 010_multiturn/pyproject.toml /app/010_multiturn/pyproject.toml
COPY 010_multiturn/README.md /app/010_multiturn/README.md

WORKDIR /app/

# Install the required Python packages
RUN uv pip install --system -r requirements.txt
WORKDIR /app/010_multiturn

# Copy the project code
COPY 010_multiturn/project /app/project
COPY 010_multiturn/project /app/010_multiturn/project

# Install the required Python packages from pyproject.toml
RUN uv pip install --system .

WORKDIR /app/010_multiturn
# Set environment variables
ENV PYTHONPATH=/app

# Run the agent using uvicorn
CMD ["uvicorn", "project.acp:acp", "--host", "0.0.0.0", "--port", "8000"]
CMD ["uvicorn", "project.acp:acp", "--host", "0.0.0.0", "--port", "8000"]
29 changes: 14 additions & 15 deletions examples/tutorials/00_sync/010_multiturn/project/acp.py
@@ -3,12 +3,12 @@

from agentex.lib import adk
from agentex.lib.types.acp import SendMessageParams
from agentex.types.task_message import TaskMessageContent
from agentex.lib.utils.model_utils import BaseModel
from agentex.lib.types.llm_messages import LLMConfig, UserMessage, SystemMessage, AssistantMessage
from agentex.lib.sdk.fastacp.fastacp import FastACP
from agentex.types.task_message_update import TaskMessageUpdate
from agentex.types.task_message_content import TextContent
from agentex.types.task_message_content import TaskMessageContent
from agentex.types import TextContent

# Create an ACP server
acp = FastACP.create(
@@ -24,7 +24,7 @@ class StateModel(BaseModel):
# Note: The return of this handler is required to be persisted by the Agentex Server
@acp.on_message_send
async def handle_message_send(
params: SendMessageParams
params: SendMessageParams,
) -> Union[TaskMessageContent, AsyncGenerator[TaskMessageUpdate, None]]:
"""
In this tutorial, we'll see how to handle a basic multi-turn conversation without streaming.
@@ -33,12 +33,12 @@ async def handle_message_send(
# 0. Validate the message.
#########################################################

if not hasattr(params.content, 'type') or params.content.type != "text":
if not hasattr(params.content, "type") or params.content.type != "text":
raise ValueError(f"Expected text message, got {getattr(params.content, 'type', 'unknown')}")

if not hasattr(params.content, 'author') or params.content.author != "user":
if not hasattr(params.content, "author") or params.content.author != "user":
raise ValueError(f"Expected user message, got {getattr(params.content, 'author', 'unknown')}")

if not os.environ.get("OPENAI_API_KEY"):
return TextContent(
author="agent",
@@ -74,12 +74,14 @@
llm_messages = [
SystemMessage(content=state.system_prompt),
*[
UserMessage(content=getattr(message.content, 'content', '')) if getattr(message.content, 'author', None) == "user" else AssistantMessage(content=getattr(message.content, 'content', ''))
UserMessage(content=getattr(message.content, "content", ""))
if getattr(message.content, "author", None) == "user"
else AssistantMessage(content=getattr(message.content, "content", ""))
for message in task_messages
if getattr(message.content, 'type', None) == "text"
]
if getattr(message.content, "type", None) == "text"
],
]

# TaskMessages are messages that are sent between an Agent and a Client. They are fundamentally decoupled from messages sent to the LLM. This is because you may want to send additional metadata to allow the client to render the message on the UI differently.

# LLMMessages are OpenAI-compatible messages that are sent to the LLM, and are used to track the state of a conversation with a model.
@@ -90,7 +92,7 @@
# - Taking a markdown document output by an LLM, postprocessing it into a JSON object to clearly denote title, content, and footers. This can be sent as a DataContent TaskMessage to the client and converted back to markdown here to send back to the LLM.
# - If using multiple LLMs (like in an actor-critic framework), you may want to send DataContent that denotes which LLM generated which part of the output and write conversion logic to split the TaskMessage history into multiple LLM conversations.
# - If using multiple LLMs, but one LLM's output should not be sent to the user (i.e. a critic model), you can leverage the State as an internal storage mechanism to store the critic model's conversation history. This is a powerful and flexible way to handle complex scenarios.

#########################################################
# 4. Call an LLM to respond to the user's message.
#########################################################
@@ -113,7 +115,4 @@
else:
content_str = ""

return TextContent(
author="agent",
content=content_str
)
return TextContent(author="agent", content=content_str)
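The comment block in this handler describes mapping decoupled TaskMessages onto OpenAI-compatible LLM messages, keeping only text content and assigning roles by author. A minimal sketch of that conversion, using plain dicts in place of the SDK's typed `UserMessage`/`AssistantMessage` objects (the dict shapes here are illustrative assumptions, not the SDK's wire format):

```python
def task_messages_to_llm_messages(system_prompt, task_messages):
    """Map text TaskMessages onto OpenAI-style role/content dicts.

    Non-text content (e.g. data payloads) is filtered out, mirroring the
    type == "text" guard in the handler's list comprehension.
    """
    llm_messages = [{"role": "system", "content": system_prompt}]
    for message in task_messages:
        content = message.get("content", {})
        if content.get("type") != "text":
            continue
        role = "user" if content.get("author") == "user" else "assistant"
        llm_messages.append({"role": role, "content": content.get("content", "")})
    return llm_messages


history = [
    {"content": {"type": "text", "author": "user", "content": "Hi"}},
    {"content": {"type": "text", "author": "agent", "content": "Hello!"}},
    {"content": {"type": "data", "author": "agent", "content": {"k": 1}}},
]
print(task_messages_to_llm_messages("You are helpful.", history))
```

Separating this mapping into one function is also where the postprocessing ideas from the comments (markdown-to-JSON conversion, splitting histories across multiple models) would naturally live.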