diff --git a/docs/openai_agents/README.md b/docs/openai_agents/README.md
new file mode 100644
index 00000000..09c831b4
--- /dev/null
+++ b/docs/openai_agents/README.md
@@ -0,0 +1,20 @@
+# Durable OpenAI Agents
+
+Build production-ready AI agents with automatic state persistence and failure recovery.
+
+## Overview
+
+The Durable OpenAI Agents integration combines the familiar OpenAI Agents SDK with Azure Durable Functions to create reliable, stateful AI agents that can survive failures and pick up exactly where they left off.
+
+## Key Benefits
+
+- **Enhanced Agent Resilience**: Built-in retry mechanisms for LLM calls and tool executions
+- **Multi-Agent Orchestration Reliability**: Individual agent failures don't crash entire workflows
+- **Built-in Observability**: Monitor agent progress through the Durable Task Scheduler dashboard
+- **Familiar Developer Experience**: Keep using the OpenAI Agents SDK with minimal code changes
+- **Distributed Compute and Scalability**: Agent workflows automatically scale across multiple compute instances
+
+## Documentation
+
+- [Getting Started](getting-started.md) - Setup and your first durable agent
+- [Reference](reference.md) - Complete reference documentation
\ No newline at end of file
diff --git a/docs/openai_agents/getting-started.md b/docs/openai_agents/getting-started.md
new file mode 100644
index 00000000..e68a27fc
--- /dev/null
+++ b/docs/openai_agents/getting-started.md
@@ -0,0 +1,191 @@
+# Getting Started with Durable OpenAI Agents
+
+This guide walks you through implementing stateful AI agents on Azure Durable Functions orchestration, with automatic checkpointing and replay semantics.
+
+## Prerequisites
+
+- Python 3.10+ runtime environment
+- Azure Functions Core Tools v4.x (`npm install -g azure-functions-core-tools@4 --unsafe-perm true`)
+- Azure OpenAI service endpoint with a model deployment
+- Docker (optional, for running the Durable Task Scheduler emulator)
+
+## Environment Setup
+
+### Create an Azure Functions App
+
+This framework is designed specifically for **Azure Functions applications**. You need to create a Python Functions app to use Durable OpenAI Agents.
+
+**For new users**: If you're new to Azure Functions, follow these guides to get started:
+- [Create your first Python function in Azure](https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python)
+- [Azure Functions Python developer guide](https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python)
+
+**For experienced Functions users**: Create a new Python Functions app or use an existing one.
+
+**Note**: The `samples-v2/openai_agents` directory contains a complete working example you can reference or use as a starting point.
+
+### Set Up Local Development Environment
+
+Create and activate a virtual environment to isolate dependencies:
+
+```bash
+# Create virtual environment
+python -m venv venv
+
+# Activate virtual environment
+# On macOS/Linux:
+source venv/bin/activate
+# On Windows:
+# venv\Scripts\activate
+```
+
+### Install Dependencies
+
+Add the OpenAI Agents dependencies to your `requirements.txt`:
+
+```
+azure-functions-durable
+azure-functions
+openai
+openai-agents
+azure-identity
+```
+
+Then install them:
+
+```bash
+pip install -r requirements.txt
+```
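+
+The sample's `function_app.py` wires these packages together. As a rough sketch only (the exact wiring in the sample may differ), the Azure OpenAI client can be authenticated with `DefaultAzureCredential` from `azure-identity` and registered with the Agents SDK; `set_default_openai_client` is assumed here from the `openai-agents` package, and the environment variable names match the `local.settings.json` shown later in this guide:
+
+```python
+# Sketch: authenticate the Azure OpenAI client with Entra ID and make it the
+# default client for the Agents SDK. Illustrative only; the sample may differ.
+import os
+
+from agents import set_default_openai_client
+from azure.identity import DefaultAzureCredential, get_bearer_token_provider
+from openai import AsyncAzureOpenAI
+
+# Token provider that exchanges your Azure credential for Azure OpenAI tokens
+token_provider = get_bearer_token_provider(
+    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)
+
+openai_client = AsyncAzureOpenAI(
+    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
+    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
+    azure_ad_token_provider=token_provider,
+)
+
+# Agents created later will use this client unless told otherwise
+set_default_openai_client(openai_client)
+```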
+
+### Configuring the Durable Task Scheduler Backend
+
+**Durable Task Scheduler is the preferred backend** for this integration: it provides enhanced performance, better observability, and simplified local development. While not a hard requirement, it is strongly recommended for production workloads.
+
+There are two ways to configure the backend locally:
+
+#### Using the Emulator (Recommended)
+
+The emulator simulates a scheduler and task hub in a Docker container, making it ideal for development and learning.
+
+1. **Pull the Docker image for the emulator:**
+```bash
+docker pull mcr.microsoft.com/dts/dts-emulator:latest
+```
+
+2. **Run the emulator:**
+```bash
+docker run --name dtsemulator -d -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
+```
+
+3. **Wait for container readiness** (approximately 10-15 seconds)
+
+4. **Verify emulator status:**
+```bash
+curl http://localhost:8080/health
+```
+
+**Note**: The sample code automatically uses the default emulator settings (`endpoint: http://localhost:8080`, `taskhub: default`). No additional environment variables are required.
+
+#### Alternative: Azure Storage Backend
+
+If you prefer using Azure Storage as the backend (legacy approach):
+
+```bash
+# Uses the local storage emulator - requires Azurite
+npm install -g azurite
+azurite --silent --location /tmp/azurite --debug /tmp/azurite/debug.log
+```
+
+Update `local.settings.json`:
+```json
+{
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
+  }
+}
+```
+
+## Configuration
+
+1. **Install project dependencies:**
+
+```bash
+pip install -r requirements.txt
+```
+
+2. **Configure service settings:**
+
+Update `local.settings.json` with your service configuration, replacing the angle-bracket placeholders with your own values:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "FUNCTIONS_WORKER_RUNTIME": "python",
+    "AZURE_OPENAI_ENDPOINT": "https://<your-resource-name>.openai.azure.com/",
+    "AZURE_OPENAI_DEPLOYMENT": "<your-deployment-name>",
+    "AZURE_OPENAI_API_VERSION": "2024-10-01-preview",
+    "DURABLE_TASK_SCHEDULER_CONNECTION_STRING": "http://localhost:8080;Authentication=None;",
+    "TASKHUB": "default"
+  }
+}
+```
+
+## Hello World Example
+
+Run the included hello world sample, which defines a standard OpenAI agent:
+
+```python
+# basic/hello_world.py - Standard OpenAI Agent
+from agents import Agent, Runner
+
+def main():
+    agent = Agent(
+        name="Assistant",
+        instructions="You only respond in haikus.",
+    )
+    result = Runner.run_sync(agent, "Tell me about recursion in programming.")
+    return result.final_output
+```
+
+**Durable Transformation**: The `@app.durable_openai_agent_orchestrator` decorator in `function_app.py` wraps this agent execution in a Durable Functions orchestrator, persisting agent state at each LLM and tool interaction.
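+
+As a minimal sketch of what that wrapping looks like (based on the decorators documented in the [Reference](reference.md); the function names and HTTP route are illustrative, and the actual `function_app.py` in the sample may differ):
+
+```python
+# function_app.py - sketch of wrapping the hello world agent in a durable orchestrator.
+import azure.functions as func
+import azure.durable_functions as df
+from agents import Agent, Runner
+
+app = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)
+
+# The orchestrator replays deterministically; the decorator checkpoints each
+# LLM call and tool invocation.
+@app.orchestration_trigger(context_name="context")
+@app.durable_openai_agent_orchestrator
+def hello_world(context):
+    agent = Agent(
+        name="Assistant",
+        instructions="You only respond in haikus.",
+    )
+    result = Runner.run_sync(agent, "Tell me about recursion in programming.")
+    return result.final_output
+
+# Standard Durable Functions HTTP starter that kicks off the orchestration
+# and returns the status-query URLs shown in the next section.
+@app.route(route="orchestrators/{functionName}", methods=["POST"])
+@app.durable_client_input(client_name="client")
+async def http_start(req: func.HttpRequest, client):
+    function_name = req.route_params.get("functionName")
+    instance_id = await client.start_new(function_name)
+    return client.create_check_status_response(req, instance_id)
+```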
+
+## Execution and Monitoring
+
+1. **Start the Azure Functions host:**
+
+Navigate to the `samples-v2/openai_agents` directory and run:
+
+```bash
+func start --port 7071
+```
+
+2. **Initiate an orchestration instance:**
+
+```bash
+curl -X POST http://localhost:7071/api/orchestrators/hello_world \
+  -H "Content-Type: application/json"
+```
+
+The response contains the orchestration instance metadata:
+
+```json
+{
+  "id": "f4b2c8d1e9a7...",
+  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7...",
+  "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7.../raiseEvent/{eventName}",
+  "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7.../terminate",
+  "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/f4b2c8d1e9a7..."
+}
+```
+
+3. **Monitor execution via the Durable Task Scheduler dashboard:**
+
+Navigate to `http://localhost:8082` for real-time orchestration monitoring:
+- Instance execution timeline with LLM call latencies
+- State transition logs and checkpoint data
+- Retry attempt tracking and failure analysis
+
+## Next Steps
+
+- See the [Reference](reference.md) for complete technical details.
\ No newline at end of file
diff --git a/docs/openai_agents/reference.md b/docs/openai_agents/reference.md
new file mode 100644
index 00000000..720cdd3d
--- /dev/null
+++ b/docs/openai_agents/reference.md
@@ -0,0 +1,138 @@
+# Reference Documentation
+
+Complete reference for the Durable OpenAI Agents integration.
+
+## Durable Orchestration
+
+### @app.durable_openai_agent_orchestrator
+
+Primary decorator enabling durable execution for agent invocations.
+
+```python
+from azure.durable_functions.openai_agents import durable_openai_agent_orchestrator
+
+@app.orchestration_trigger(context_name="context")
+@app.durable_openai_agent_orchestrator
+def my_agent_orchestrator(context):
+    # Agent implementation
+    pass
+```
+
+**Features**:
+- Automatic state persistence for agent conversations
+- Built-in retry mechanisms for LLM calls
+- Tool call durability and replay protection
+- Integration with Durable Functions monitoring using the Durable Task Scheduler
+
+**Constraints**:
+- Functions must be deterministic (identical outputs for identical inputs)
+- No non-deterministic operations: `datetime.now()`, `random`, `uuid.uuid4()`
+- See [Durable Functions Code Constraints](https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-code-constraints?tabs=csharp)
+
+### @app.orchestration_trigger
+
+Azure Functions orchestration trigger decorator. Required together with `@app.durable_openai_agent_orchestrator`.
+
+```python
+@app.orchestration_trigger(context_name="context")
+@app.durable_openai_agent_orchestrator
+def my_orchestrator(context):
+    # ...
+```
+
+## Agent Execution
+
+### Runner.run_sync()
+
+Runs an agent synchronously within a durable orchestration context.
+
+```python
+from agents import Agent, Runner
+
+def my_orchestrator(context):
+    agent = Agent(name="Assistant", instructions="Be helpful")
+    result = Runner.run_sync(agent, "Hello world")
+    return result.final_output
+```
+
+**Parameters**:
+- `agent` (Agent): Agent instance to run
+- `messages` (str | list): Input message(s)
+
+**Returns**: Agent result object with a `final_output` property
+
+## Tools
+
+### Durable Functions Activity Tools
+
+Durable Functions activities that execute as durable tool invocations. **This is the recommended approach for most use cases**, as it provides the strongest correctness guarantees; when in doubt, this is the safe choice.
+
+```python
+# 1. Define activity function
+@app.activity_trigger(input_name="input_param")
+async def my_activity(input_param):
+    # External API calls, database operations, etc.
+    return result
+
+# 2. Use in orchestrator
+@app.orchestration_trigger(context_name="context")
+@app.durable_openai_agent_orchestrator
+def my_orchestrator(context):
+    agent = Agent(
+        name="Assistant",
+        tools=[context.create_activity_tool(my_activity)]
+    )
+    # ...
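+    # Hypothetical continuation (not part of the original snippet): run the agent
+    # so the model can call the activity-backed tool, then return its final output,
+    # mirroring the Runner.run_sync() usage shown under Agent Execution above.
+    result = Runner.run_sync(agent, "Look up the data and summarize it")
+    return result.final_output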
+```
+
+**Components**:
+- `@app.activity_trigger(input_name="param")`: Decorator for activity functions
+- `context.create_activity_tool(activity_function)`: Creates a tool from an activity function
+
+**Best For**: External API calls, database operations, file I/O, expensive computations, non-deterministic operations
+
+### OpenAI Function Tools
+
+Simple, deterministic tools that execute within the orchestration context. **Recommended only as a performance optimization when you're certain the tool meets all deterministic requirements.**
+
+```python
+from agents import function_tool
+
+@function_tool
+def calculate(expression: str) -> str:
+    """Calculate mathematical expressions."""
+    # Note: eval() is used only to keep the example short; never eval untrusted
+    # input in a real tool.
+    return str(eval(expression))
+```
+
+**Requirements**:
+- Must be deterministic (same input → same output)
+- Should be fast-executing
+- No external API calls (use activity tools instead)
+- Input/output must be JSON serializable
+
+**Best For**: Calculations, data transformations, validation logic, quick lookups
+
+### Current Limitations
+
+**MCP (Model Context Protocol)**: MCP tool support is not currently available. Use function tools or activity tools instead.
+
+## Constraints
+
+Orchestration functions must be deterministic and replay-safe:
+
+- **Deterministic**: Same input always produces same output
+- **Idempotent**: Safe to execute multiple times
+- **Side-effect free**: No external calls in orchestration logic
+
+```python
+# ✅ Good: Deterministic
+def good_orchestrator(context):
+    input_data = context.get_input()
+    agent = high_priority_agent if input_data.get("priority") == "high" else standard_agent
+    return Runner.run_sync(agent, input_data["content"])
+
+# ❌ Bad: Non-deterministic
+def bad_orchestrator(context):
+    import random
+    agent = agent_a if random.choice([True, False]) else agent_b  # Non-deterministic!
+    return Runner.run_sync(agent, context.get_input())
+```
\ No newline at end of file
diff --git a/samples-v2/openai_agents/README.md b/samples-v2/openai_agents/README.md
index ac932099..acd92501 100644
--- a/samples-v2/openai_agents/README.md
+++ b/samples-v2/openai_agents/README.md
@@ -1,194 +1,44 @@
-# Azure Functions Durable OpenAI Agents Samples
+# OpenAI Agents with Azure Durable Functions - Samples
-This repository contains samples demonstrating how to use OpenAI Agents with Azure Durable Functions in Python.
+This directory contains sample code demonstrating how to use OpenAI agents with Azure Durable Functions for reliable, stateful AI workflows.
-## Prerequisites
+## 📖 Documentation
-Before running these samples, ensure you have the following:
+**Complete documentation is located at: [/docs/openai_agents/](/docs/openai_agents/)**
-1. **Python 3.8 or later** installed on your system
-2. **Azure Functions Core Tools** v4.x installed ([Installation Guide](https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local))
-3. **Azure OpenAI Service** set up with a deployed model
-4. **Azure CLI** (optional, for authentication)
+### Quick Links
-## Setup
+- **[Getting Started Guide](/docs/openai_agents/getting-started.md)** - Setup and basic usage
+- **[API Reference](/docs/openai_agents/reference.md)** - Complete technical reference
+- **[Overview](/docs/openai_agents/README.md)** - Feature overview and concepts
-### 1. Clone and Navigate to the Project
+## 🚀 Quick Start
-```bash
-git clone
-cd azure-functions-durable-python/samples-v2/openai_agents
-```
-
-### 2.
Create a Python Virtual Environment - -```bash -python -m venv .venv -``` - -Activate the virtual environment: -- **Windows (PowerShell)**: `.venv\Scripts\Activate.ps1` -- **Windows (Command Prompt)**: `.venv\Scripts\activate.bat` -- **macOS/Linux**: `source .venv/bin/activate` - -### 3. Install Dependencies - -```bash -pip install -r requirements.txt -``` - -### 4. Install the Durable Functions Extension - -Install the Azure Durable Functions extension for OpenAI Agents from the parent directory: - -```bash -pip install -e ..\.. -``` - -This step is required because this sample uses a local development version of the `azure.durable_functions.openai_agents` module that extends Azure Durable Functions with OpenAI Agents support. The `-e` flag installs the package in "editable" mode, which means changes to the source code will be reflected immediately without reinstalling. - -### 5. Configure Environment Variables - -Copy the template file and update it with your Azure OpenAI settings: - -```bash -cp local.settings.json.template local.settings.json -``` - -Edit `local.settings.json` and replace the placeholder values: - -```json -{ - "IsEncrypted": false, - "Values": { - "AzureWebJobsStorage": "UseDevelopmentStorage=true", - "FUNCTIONS_WORKER_RUNTIME": "python", - "AZURE_OPENAI_ENDPOINT": "https://your-openai-service.openai.azure.com/", - "AZURE_OPENAI_DEPLOYMENT": "your-gpt-deployment-name", - "AZURE_OPENAI_API_VERSION": "2025-03-01-preview" - } -} -``` - -**Required Configuration:** -- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI service endpoint -- `AZURE_OPENAI_DEPLOYMENT`: The name of your deployed GPT model -- `AZURE_OPENAI_API_VERSION`: API version (default: "2025-03-01-preview") - -### 6. Authentication - -This sample uses Azure Default Credential for authentication. Make sure you're authenticated to Azure: - -```bash -az login -``` - -Alternatively, you can use other authentication methods supported by Azure Default Credential: -- Managed Identity (when running in Azure) -- Visual Studio/VS Code authentication -- Environment variables (`AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_TENANT_ID`) - -## Running the Samples - -### 1. Start the Azure Functions Host - -```bash -func host start -``` - -The function app will start and display the available endpoints. +1. **Setup**: Follow the [Getting Started Guide](/docs/openai_agents/getting-started.md) +2. **Run Samples**: Explore the `/basic` directory for examples +3. **Reference**: Check [API Reference](/docs/openai_agents/reference.md) for advanced usage -### 2. 
Trigger the Hello World Sample +## 📂 Sample Structure -Once the function is running, you can trigger the orchestrator using HTTP requests: - -```bash -# Start the hello_world orchestration -curl -X POST "http://localhost:7071/api/orchestrators/hello_world" ``` - -This will return a response with URLs to check the orchestration status: - -```json -{ - "id": "abc123...", - "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123.../", - "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123.../raiseEvent/{eventName}", - "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123.../terminate", - "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123.../", - "restartPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123.../restart" -} +basic/ # Basic usage examples +├── hello_world.py # Simplest agent example +├── tools.py # Function and activity tools +├── dynamic_system_prompt.py # Dynamic prompt handling +├── lifecycle_example.py # Agent lifecycle management +└── ... # Additional examples ``` -### 3. Check Orchestration Status - -Use the `statusQueryGetUri` from the response to check the status: +## 🔧 Running Samples ```bash -curl "http://localhost:7071/runtime/webhooks/durabletask/instances/{instance-id}/" -``` - -## Available Samples - -### Hello World (`basic/hello_world.py`) - -A simple example that demonstrates: -- Creating an OpenAI Agent with specific instructions (responds only in haikus) -- Running the agent synchronously with a query about recursion -- Returning the agent's response - -The agent is configured to respond only in haiku format and will answer questions about programming concepts. +# Install dependencies +pip install -r requirements.txt -## Project Structure +# Start the Azure Functions runtime +func start +# Test with HTTP requests (see documentation for details) ``` -openai_agents/ -├── function_app.py # Main Azure Functions app with orchestrator -├── requirements.txt # Python dependencies -├── host.json # Azure Functions host configuration -├── local.settings.json.template # Environment variables template -├── local.settings.json # Your local configuration (gitignored) -├── basic/ -│ └── hello_world.py # Hello world agent sample -└── README.md # This file -``` - -## Key Components - -### `function_app.py` -- Sets up Azure OpenAI client with Azure Default Credential -- Configures the durable functions orchestrator -- Provides HTTP trigger for starting orchestrations - -### `basic/hello_world.py` -- Demonstrates basic agent creation and execution -- Shows how to use the OpenAI Agents SDK with custom instructions - -## Troubleshooting - -### Common Issues - -1. **Authentication Errors**: Ensure you're logged in to Azure CLI or have proper environment variables set -2. **OpenAI Endpoint Errors**: Verify your `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_DEPLOYMENT` settings -3. **Missing Dependencies**: Run `pip install -r requirements.txt` to ensure all packages are installed -4. 
**Function Host Issues**: Make sure Azure Functions Core Tools are properly installed - -### Debugging - -- Check the function logs in the terminal where you ran `func host start` -- Use the status URLs returned by the orchestrator to monitor execution -- Verify your Azure OpenAI service is accessible and the deployment is active - -## Next Steps - -- Explore more complex agent scenarios -- Add custom tools and functions to your agents -- Integrate with other Azure services -- Deploy to Azure Functions for production use - -## Resources -- [Azure Functions Documentation](https://docs.microsoft.com/en-us/azure/azure-functions/) -- [Azure Durable Functions](https://docs.microsoft.com/en-us/azure/azure-functions/durable/) -- [Azure OpenAI Service](https://docs.microsoft.com/en-us/azure/cognitive-services/openai/) -- [OpenAI Agents SDK](https://github.com/openai/openai-python) +**For complete setup instructions, configuration details, and troubleshooting, see the [Getting Started Guide](/docs/openai_agents/getting-started.md).**
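+
+As a quick smoke test once the host is running, you can start the hello world orchestration over HTTP; the route below comes from the getting-started guide, so adjust it if your app exposes a different one:
+
+```bash
+# Start the hello_world orchestration and receive the status-query URLs
+curl -X POST http://localhost:7071/api/orchestrators/hello_world
+```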