|
1 | | -# Strands Integration (OpenAI) |
| 1 | +# AWS Strands Example Server |
2 | 2 |
|
3 | | -This integration demonstrates how to use Strands Agents SDK with OpenAI models and AG-UI protocol. |
| 3 | +Demo FastAPI server that wires the Strands Agents SDK (Gemini models) into the |
| 4 | +AG-UI protocol. Each route mounts a ready-made agent that showcases different UI |
| 5 | +patterns (vanilla chat, backend tool rendering, shared state, and generative UI). |
4 | 6 |
|
5 | | -## Prerequisites |
| 7 | +## Requirements |
6 | 8 |
|
7 | | -- Python 3.12 or later |
8 | | -- Poetry for dependency management |
9 | | -- OpenAI API key |
10 | | -- Strands Agents SDK with OpenAI support installed |
| 9 | +- Python 3.12 or 3.13 (the project is pinned to `<3.14`) |
| 10 | +- Poetry 1.8+ (install it with `curl -sSL https://install.python-poetry.org | python3 -`)
| 11 | +- Google API key with access to Gemini 2.5 Flash (set as `GOOGLE_API_KEY`) |
| 12 | +- (Optional) AG-UI repo running locally so you can point the Dojo at these routes |
11 | 13 |
|
12 | | -## Setup |
| 14 | +## Quick start |
13 | 15 |
|
14 | | -1. Install Strands SDK with OpenAI support: |
15 | 16 | ```bash |
16 | | -pip install 'strands-agents[openai]' |
| 17 | +cd integrations/aws-strands/python/examples |
| 18 | + |
| 19 | +# pick a supported interpreter if your global default is 3.14 |
| 20 | +poetry env use python3.13 |
| 21 | + |
| 22 | +poetry install |
17 | 23 | ``` |
18 | 24 |
|
19 | | -2. Configure OpenAI API key: |
| 25 | +Create a `.env` file in this folder (same dir as `pyproject.toml`) so every |
| 26 | +example can load credentials automatically: |
| 27 | + |
20 | 28 | ```bash |
21 | | -# Set your OpenAI API key (required) |
22 | | -export OPENAI_API_KEY=your-api-key-here |
| 29 | +GOOGLE_API_KEY=your-gemini-key |
| 30 | +# Optional overrides |
| 31 | +PORT=8000 # FastAPI listen port |
23 | 32 | ``` |
24 | 33 |
|
25 | | -3. Optional: Configure OpenAI model settings: |
26 | | -```bash |
27 | | -# Set the OpenAI model to use (default: gpt-4o) |
28 | | -export OPENAI_MODEL=gpt-4o |
| 34 | +> The sample agents default to `gemini-2.5-flash` and already set sensible |
| 35 | +> temperature/token parameters; override only if you need a different tier. |
29 | 36 |
|
30 | | -# Set max tokens (default: 2000) |
31 | | -export OPENAI_MAX_TOKENS=2000 |
| 37 | +## Running the demo server |
32 | 38 |
|
33 | | -# Set temperature (default: 0.7) |
34 | | -export OPENAI_TEMPERATURE=0.7 |
35 | | -``` |
| 39 | +Either command exposes all mounted apps on `http://localhost:${PORT:-8000}`: |
36 | 40 |
|
37 | | -4. Install dependencies: |
38 | 41 | ```bash |
39 | | -cd integrations/aws-strands-integration/python/examples |
40 | | -poetry install |
| 42 | +poetry run dev # uses the Poetry script entry point (server:main) |
| 43 | +# or |
| 44 | +poetry run python -m server |
41 | 45 | ``` |
42 | 46 |
|
43 | | -## Running the server |
| 47 | +The root route lists the available demos: |
44 | 48 |
|
45 | | -To run the server: |
| 49 | +| Route | Description | |
| 50 | +| --- | --- | |
| 51 | +| `/agentic-chat` | Simple chat agent with a frontend-only `change_background` tool | |
| 52 | +| `/backend-tool-rendering` | Backend-executed tools (charts, faux weather) rendered in AG-UI | |
| 53 | +| `/agentic-generative-ui` | Demonstrates `PredictState` + delta streaming for plan tracking | |
| 54 | +| `/shared-state` | Recipe builder showing shared JSON state + tool arguments | |
46 | 55 |
|
47 | | -```bash |
48 | | -cd integrations/aws-strands-integration/python/examples |
| 56 | +Point the AG-UI Dojo (or any AG-UI client) at these SSE endpoints to see the |
| 57 | +Strands wrapper translate Gemini events into protocol-native messages. |
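As a rough illustration of what those endpoints emit, here is a toy parser for the SSE wire format. The event names follow the AG-UI event vocabulary (`RUN_STARTED`, `TEXT_MESSAGE_CONTENT`, `RUN_FINISHED`), but the parser itself is only a sketch, not the AG-UI client library:

```python
import json

def parse_sse(raw: str):
    """Yield decoded events from a raw SSE response body.

    SSE separates events with a blank line; each event carries its
    JSON payload in a `data:` field, with a `type` discriminator.
    """
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):].strip())

# A hand-written sample stream in the shape an AG-UI endpoint produces.
sample = (
    'data: {"type": "RUN_STARTED"}\n\n'
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "Hello"}\n\n'
    'data: {"type": "RUN_FINISHED"}\n\n'
)
events = list(parse_sse(sample))
```

A real client would read these events incrementally off the HTTP response rather than from a string, but the framing is the same.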
49 | 58 |
|
50 | | -poetry install && poetry run dev |
51 | | -``` |
| 59 | +## Environment reference |
| 60 | + |
| 61 | +| Variable | Required | Purpose | |
| 62 | +| --- | --- | --- | |
| 63 | +| `GOOGLE_API_KEY` | Yes | Auth for the Gemini SDK (`strands.models.gemini.GeminiModel`) | |
| 64 | +| `PORT` | No | Overrides the default `8000` uvicorn port | |
52 | 65 |
|
53 | | -The server will start on `http://localhost:8000` by default. You can change the port by setting the `PORT` environment variable. |
| 66 | +All OpenTelemetry exporters are disabled by default in code (`OTEL_SDK_DISABLED` |
| 67 | +and `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS`), so you do not need to set those |
| 68 | +manually. |
54 | 69 |
|
55 | | -## Integration Details |
| 70 | +## How it works |
56 | 71 |
|
57 | | -This integration uses the Strands Agents SDK with OpenAI models. The server: |
58 | | -- Accepts AG-UI protocol requests |
59 | | -- Connects to OpenAI models via Strands SDK |
60 | | -- Streams responses back as AG-UI events |
61 | | -- Handles tool calls and state management |
| 72 | +- Each `server/api/*.py` file constructs a Strands `Agent`, registers any tools, |
| 73 | + and wraps it with `ag_ui_strands.StrandsAgent`. |
| 74 | +- `server/__init__.py` mounts the four FastAPI apps under a single router and |
| 75 | + exposes the `main()` entrypoint that `poetry run dev` calls. |
| 76 | +- The project depends on `ag_ui_strands` via a path dependency (`..`) so you can |
| 77 | + develop the integration and server side-by-side without publishing a wheel. |
| 78 | +- Want a different Gemini tier? Update the `model_id` argument in the agent |
| 79 | + definitions inside `server/api/*.py`. |
62 | 80 |
|
63 | | -## Notes |
64 | 81 |
|
65 | | -- The integration uses OpenAI models (default: gpt-4o) |
66 | | -- Ensure your OpenAI API key is valid and has access to the specified model |
67 | | -- The integration supports streaming responses when available in the Strands SDK |
68 | | -- You can customize the model, max_tokens, and temperature via environment variables |
69 | 82 |
|