@@ -0,0 +1,63 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# Virtual environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db

# Logs
*.log
logs/

# Development files
.pytest_cache/
.coverage
.tox/
.cache
nosetests.xml
coverage.xml

# Git
.git/
.gitignore
@@ -0,0 +1,24 @@
# Use the official Agentex base image
FROM agentex-base:latest

# Set working directory
WORKDIR /app

# Copy the agent code
COPY 040_openai_temporal_integration/ ./

# Install the project and its dependencies
# (the editable install needs the package source present, so copy it first)
RUN pip install -e .

# Set environment variables
ENV PYTHONPATH=/app
ENV WORKFLOW_NAME=at040-openai-temporal-integration
ENV WORKFLOW_TASK_QUEUE=040_openai_temporal_integration_queue
ENV AGENT_NAME=at040-openai-temporal-integration

# Expose the default ACP port
EXPOSE 8000

# Default command to run the worker
CMD ["python", "project/run_worker.py"]
@@ -0,0 +1,122 @@
# OpenAI Temporal Integration Tutorial

This tutorial demonstrates the **Agent Platform Integration** for Agentex, which streamlines agent development while preserving full compatibility with the existing Agentex infrastructure.

## Before vs After Comparison

| Aspect | Complex Manual (10_agentic/10_temporal/010_agent_chat) | Simplified Platform (this tutorial) |
|--------|-------------------------------------------------------|--------------------------------------|
| **Lines of code** | 277 lines | ~30 lines |
| **Manual orchestration** | Required | Automatic |
| **Activity definitions** | Manual `@activity.defn` for each operation | Built-in durability |
| **State management** | Manual conversation state tracking | Automatic |
| **Error handling** | Manual try/catch and retry logic | Built-in recovery |
| **ACP integration** | Manual message creation/sending | Automatic via bridge |

## Key Features

### **Reduced Complexity**
- Simplified codebase: from 277 lines to ~30 lines
- Automatic agent execution durability
- Built-in tool call orchestration

### **Infrastructure Compatibility**
- Full ACP protocol compatibility
- Existing deployment configurations work unchanged
- Same authentication and monitoring systems
- Multi-tenant hosting support maintained

### **Platform Extensibility**
- OpenAI Agents SDK integration (implemented)
- Extensible architecture for LangChain, CrewAI
- Strategy pattern for custom frameworks

## Implementation Details

### Workflow Definition
```python
@workflow.defn(name=environment_variables.WORKFLOW_NAME)
class At040OpenAITemporalIntegration(OpenAIAgentWorkflow):
async def create_agent(self) -> Agent:
return Agent(
name="Tool-Enabled Assistant",
model="gpt-4o-mini",
instructions="You are a helpful assistant...",
tools=[], # Add tools as needed
)
```

### Worker Setup
```python
worker = AgentexWorker(
task_queue=environment_variables.WORKFLOW_TASK_QUEUE,
agent_platform="openai", # Platform optimization
)
await worker.run(activities=[], workflow=At040OpenAITemporalIntegration)
```

## Technical Architecture

### Durability Features
- Agent executions automatically run as Temporal activities
- Tool calls include built-in retry mechanisms
- Conversation state persists across workflow restarts
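Conceptually, the built-in retry for tool calls behaves like the toy helper below. This is only to convey the idea; Temporal's real retry policies are durable, configurable, and filter retryable errors, and none of these names are Agentex APIs.

```python
import time

def with_retries(fn, attempts: int = 3, delay: float = 0.0):
    """Toy retry wrapper illustrating what the platform does for tool calls."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real policies distinguish retryable errors
            last_error = exc
            time.sleep(delay)
    raise last_error

# A deliberately flaky operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"
```

With `attempts=3`, `with_retries(flaky)` absorbs the two transient failures and returns the result of the third call.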

### Performance Features
- Automatic exclusion of unused provider activities
- Direct SDK integration reduces overhead
- Platform-specific worker configuration

### Extensibility
- Strategy pattern for adding new agent platforms
- Consistent workflow interface across platforms
- Full compatibility with existing Agentex infrastructure
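The strategy pattern mentioned above can be sketched in plain Python. The registry, decorator, and return values here are purely illustrative, not the actual Agentex API:

```python
from typing import Callable, Dict

# Hypothetical registry mapping a platform name to a run strategy.
PLATFORM_STRATEGIES: Dict[str, Callable[[], str]] = {}

def register_platform(name: str):
    """Decorator that registers a strategy under a platform name."""
    def decorator(factory: Callable[[], str]):
        PLATFORM_STRATEGIES[name] = factory
        return factory
    return decorator

@register_platform("openai")
def openai_strategy() -> str:
    return "run via OpenAI Agents SDK"

def run_with_platform(name: str) -> str:
    """Dispatch to whichever strategy was registered for the platform."""
    return PLATFORM_STRATEGIES[name]()
```

Adding LangChain or CrewAI support would then mean registering one more strategy, leaving the workflow interface unchanged.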

## Running the Tutorial

1. **Set environment variables:**
```bash
export WORKFLOW_NAME="at040-openai-temporal-integration"
export WORKFLOW_TASK_QUEUE="040_openai_temporal_integration_queue"
export AGENT_NAME="at040-openai-temporal-integration"
export OPENAI_API_KEY="your-openai-api-key"
```

2. **Run the agent:**
```bash
uv run agentex agents run --manifest manifest.yaml
```

3. **Test via ACP API:**
```bash
curl -X POST http://localhost:8000/api \
-H "Content-Type: application/json" \
-d '{
"method": "task/create",
"params": {
"agent_name": "at040-openai-temporal-integration"
}
}'
```
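The same request can be issued from Python with only the standard library. The endpoint and payload mirror the curl call above; the `urlopen` call is commented out so the snippet runs without a live server:

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "method": "task/create",
    "params": {"agent_name": "at040-openai-temporal-integration"},
}
body = json.dumps(payload).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8000/api",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment once the agent is running locally:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())
```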

## Migration from Manual Approach

To migrate from the manual orchestration pattern (010_agent_chat):

1. **Update workflow inheritance:**
- Change from: `BaseWorkflow`
- Change to: `OpenAIAgentWorkflow`

2. **Replace orchestration code:**
- Remove: Manual `adk.providers.openai.run_agent_streamed_auto_send()` calls
- Add: `create_agent()` method implementation

3. **Update worker configuration:**
- Add: `agent_platform="openai"` parameter to `AgentexWorker`
- Activities: Use empty list `[]` for automatic optimization

4. **Simplify activity management:**
- Remove: Custom `@activity.defn` wrapper functions
- Retain: Core business logic as regular functions

This maintains full compatibility with existing Agentex infrastructure.
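Step 4 can be pictured with a minimal before/after sketch; the function name and behavior are invented for illustration:

```python
# Before: each operation hand-wrapped as a Temporal activity.
#
#     @activity.defn
#     async def fetch_weather(city: str) -> str:
#         ...
#
# After: the same logic stays a plain function; the platform
# integration provides durability, so no wrapper is needed.
def fetch_weather(city: str) -> str:
    """Plain business logic, usable directly as an agent tool."""
    return f"Sunny in {city}"
```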
@@ -0,0 +1,119 @@
# Agent Manifest Configuration
# ---------------------------
# This file defines how your agent should be built and deployed.

# Build Configuration
# ------------------
# The build config defines what gets packaged into your agent's Docker image.
build:
context:
# Root directory for the build context
root: ../ # Keep this as the default root

# Paths to include in the Docker build context
# Must include your agent's directory (your custom agent code)
include_paths:
- 040_openai_temporal_integration

# Path to your agent's Dockerfile
# This defines how your agent's image is built from the context
# Relative to the root directory
dockerfile: 040_openai_temporal_integration/Dockerfile

# Path to your agent's .dockerignore
# Filters unnecessary files from the build context
dockerignore: 040_openai_temporal_integration/.dockerignore


# Local Development Configuration
# -----------------------------
# Only used when running the agent locally
local_development:
agent:
port: 18000 # Port where your local ACP server is running
host_address: host.docker.internal # Host address for Docker networking

# File paths for local development (relative to this manifest.yaml)
paths:
# Path to ACP server file
acp: project/acp.py

# Path to temporal worker file
worker: project/run_worker.py


# Agent Configuration
# -----------------
agent:
# Type of agent - either sync or agentic
acp_type: agentic

# Unique name for your agent
# Used for task routing and monitoring
name: at040-openai-temporal-integration

# Description of what your agent does
# Helps with documentation and discovery
description: "Simplified OpenAI agent chat using agent platform integration"

# Temporal workflow configuration
# This enables your agent to run as a Temporal workflow for long-running tasks
temporal:
enabled: true
workflows:
# Name of the workflow class
# Must match the @workflow.defn name in your workflow.py
- name: at040-openai-temporal-integration

# Queue name for task distribution
# Used by Temporal to route tasks to your agent
# Convention: <agent_name>_task_queue
queue_name: 040_openai_temporal_integration_queue

# Optional: Credentials mapping
# Maps Kubernetes secrets to environment variables
# Common credentials include:
credentials:
- env_var_name: OPENAI_API_KEY
secret_name: openai-api-key
secret_key: api-key

# Optional: Set Environment variables for running your agent locally as well
# as for deployment later on
# env:
# - name: OPENAI_BASE_URL
# value: "https://api.openai.com/v1"
# - name: ACCOUNT_ID
# value: "your_account_id_here"


# Deployment Configuration
# -----------------------
# Configuration for deploying your agent to Kubernetes clusters
deployment:
# Container image configuration
image:
repository: "" # Update with your container registry
tag: "latest" # Default tag, should be versioned in production

imagePullSecrets:
- name: my-registry-secret # Update with your image pull secret name

# Global deployment settings that apply to all clusters
# These can be overridden using --override-file with custom configuration files
global:
agent:
name: "at040-openai-temporal-integration"
description: "Simplified OpenAI agent chat using agent platform integration"

# Default replica count
replicaCount: 1

# Default resource requirements
resources:
requests:
cpu: "500m"
memory: "1Gi"
limits:
cpu: "1000m"
memory: "2Gi"
@@ -0,0 +1 @@
# Simplified OpenAI Agent Platform Tutorial
@@ -0,0 +1,25 @@
import os

from agentex.lib.sdk.fastacp.fastacp import FastACP
from agentex.lib.types.fastacp import TemporalACPConfig


# Create the ACP server
acp = FastACP.create(
acp_type="agentic",
config=TemporalACPConfig(
# When deployed to the cluster, the Temporal address will automatically be set to the cluster address
# For local development, we set the address manually to talk to the local Temporal service set up via docker compose
type="temporal",
temporal_address=os.getenv("TEMPORAL_ADDRESS", "localhost:7233")
)
)


# Notice that we don't need to register any handlers when we use type="temporal"
# If you look at the code in agentex.sdk.fastacp.impl.temporal_acp
# you can see that the handlers are automatically registered to forward all ACP events
# to the temporal workflow via the temporal client.

# The temporal workflow is responsible for handling the ACP events and sending responses
# This is handled by the workflow method that is decorated with @workflow.signal(name=SignalName.RECEIVE_EVENT)
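A rough, framework-free simulation of that bridge is sketched below. The names are illustrative only; in the real system the handler is decorated with `@workflow.signal(name=SignalName.RECEIVE_EVENT)` and the forwarding happens through the Temporal client as a durable signal:

```python
class FakeWorkflow:
    """Stand-in for the Temporal workflow's signal-handling side."""

    def __init__(self) -> None:
        self.events: list = []

    def receive_event(self, event: dict) -> None:
        # In production this method is a Temporal signal handler.
        self.events.append(event)


def forward_acp_event(wf: FakeWorkflow, event: dict) -> None:
    # The auto-registered ACP handler side: here a direct call,
    # in production a durable signal delivered via the Temporal client.
    wf.receive_event(event)
```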