Commit 867ec52

Merge pull request #447 from DefangLabs/linda-agentic-autogen

Autogen Agent sample

2 parents 1f29d1d + 76063bc

18 files changed: +5396 -1 lines changed
.github/workflows/deploy-changed-samples.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -82,6 +82,7 @@ jobs:
           TEST_MB_DB_PASS: ${{ secrets.TEST_MB_DB_PASS }}
           TEST_MB_DB_PORT: ${{ secrets.TEST_MB_DB_PORT }}
           TEST_MB_DB_USER: ${{ secrets.TEST_MB_DB_USER }}
+          TEST_MISTRAL_API_KEY: ${{ secrets.TEST_MISTRAL_API_KEY }}
           TEST_MODEL: ${{ secrets.TEST_MODEL }}
           TEST_MONGO_INITDB_ROOT_USERNAME: ${{ secrets.TEST_MONGO_INITDB_ROOT_USERNAME }}
           TEST_MONGO_INITDB_ROOT_PASSWORD: ${{ secrets.TEST_MONGO_INITDB_ROOT_PASSWORD }}
@@ -103,7 +104,6 @@ jobs:
           TEST_SHARED_SECRETS: ${{ secrets.TEST_SHARED_SECRETS}}
           TEST_TAVILY_API_KEY: ${{ secrets.TEST_TAVILY_API_KEY }}
           TEST_ALLOWED_HOSTS: ${{ secrets.TEST_ALLOWED_HOSTS }}
-          TEST_MISTRAL_API_KEY: ${{ secrets.TEST_MISTRAL_API_KEY }}
        run: |
          SAMPLES=$(sed 's|^samples/||' changed_samples.txt | paste -s -d ',' -)
          echo "Running tests for samples: $SAMPLES"
```
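The `run` step above turns the list of changed sample directories into a comma-separated string. The pipeline can be tried standalone (the file contents below are a hypothetical example, not from this repository):

```shell
# Hypothetical changed_samples.txt, mimicking the workflow's input
printf 'samples/agentic-autogen\nsamples/nextjs-postgres\n' > changed_samples.txt

# Same pipeline as the workflow: strip the samples/ prefix, join lines with commas
SAMPLES=$(sed 's|^samples/||' changed_samples.txt | paste -s -d ',' -)
echo "Running tests for samples: $SAMPLES"

rm changed_samples.txt
```

This prints `Running tests for samples: agentic-autogen,nextjs-postgres`: `sed` removes the leading `samples/` from each line, and `paste -s -d ','` joins all lines serially with a comma delimiter.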
Lines changed: 2 additions & 0 deletions

```dockerfile
FROM mcr.microsoft.com/devcontainers/python:3.13-bookworm
```
Lines changed: 11 additions & 0 deletions

```json
{
  "build": {
    "dockerfile": "Dockerfile",
    "context": ".."
  },
  "features": {
    "ghcr.io/defanglabs/devcontainer-feature/defang-cli:1.0.4": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  }
}
```
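Since this devcontainer config is plain JSON (no comments, which `devcontainer.json` would otherwise permit), it can be parsed and inspected programmatically; a quick sketch:

```python
import json

# The devcontainer config shown above, embedded as a string for illustration
devcontainer = """
{
  "build": {
    "dockerfile": "Dockerfile",
    "context": ".."
  },
  "features": {
    "ghcr.io/defanglabs/devcontainer-feature/defang-cli:1.0.4": {},
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  }
}
"""

config = json.loads(devcontainer)
print(config["build"]["context"])  # ..
print(len(config["features"]))     # 3
```

The build context `".."` means the image is built from the repository root, one level above the `.devcontainer` directory.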
Lines changed: 25 additions & 0 deletions

```yaml
name: Deploy

on:
  push:
    branches:
      - main

jobs:
  deploy:
    environment: playground
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write

    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4

      - name: Deploy
        uses: DefangLabs/[email protected]
        with:
          config-env-vars: MISTRAL_API_KEY
        env:
          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
```

samples/agentic-autogen/.gitignore

Lines changed: 30 additions & 0 deletions

```gitignore
# Environment variables
.env
.env.local
.env.production

# Python
__pycache__/
/myenv

# Node.js
node_modules/
npm-debug.log*
.pnpm-debug.log*

# Frontend build
frontend/dist/
frontend/build/

# IDE
.vscode/
.idea/
*~

# OS
.DS_Store
._*
Thumbs.db

# Optional npm cache directory
.npm
```

samples/agentic-autogen/README.md

Lines changed: 57 additions & 0 deletions

````markdown
# Agentic Autogen

[![1-click-deploy](https://raw.githubusercontent.com/DefangLabs/defang-assets/main/Logos/Buttons/SVG/deploy-with-defang.svg)](https://portal.defang.dev/redirect?url=https%3A%2F%2Fgithub.com%2Fnew%3Ftemplate_name%3Dsample-agentic-autogen-template%26template_owner%3DDefangSamples)

This sample shows an agentic Autogen application using Mistral and FastAPI, deployed with Defang. For demonstration purposes, it requires a [Mistral AI](https://mistral.ai/) API key (see [Configuration](#configuration) for details). However, you are free to modify it to use a different LLM, such as the [Defang OpenAI Access Gateway](https://github.com/DefangLabs/openai-access-gateway/) service. Note that the Vite React frontend is served through the FastAPI backend so that the two can be treated as one service in production.

## Prerequisites

1. Download the [Defang CLI](https://github.com/DefangLabs/defang)
2. (Optional) If you are using [Defang BYOC](https://docs.defang.io/docs/concepts/defang-byoc), authenticate with your cloud provider account
3. (Optional, for local development) [Docker CLI](https://docs.docker.com/engine/install/)

## Development

To run the application locally, use the following command:

```bash
docker compose up --build
```

## Configuration

For this sample, you will need to provide the following [configuration](https://docs.defang.io/docs/concepts/configuration):

> Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the action will automatically deploy them for you.

### `MISTRAL_API_KEY`

An API key to access the [Mistral AI API](https://mistral.ai/).

```bash
defang config set MISTRAL_API_KEY
```

## Deployment

> [!NOTE]
> Download the [Defang CLI](https://github.com/DefangLabs/defang)

### Defang Playground

Deploy your application to the Defang Playground by opening up your terminal and typing:

```bash
defang compose up
```

### BYOC

If you want to deploy to your own cloud account, you can [use Defang BYOC](https://docs.defang.io/docs/tutorials/deploy-to-your-cloud).

---

Title: Agentic Autogen

Short Description: An Autogen agent application using Mistral and FastAPI, deployed with Defang.

Tags: Agent, Autogen, Mistral, FastAPI, Vite, React, Python, JavaScript, AI

Languages: Python, JavaScript
````
Lines changed: 16 additions & 0 deletions

```yaml
services:
  autogen:
    build:
      context: ./src
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - MISTRAL_API_KEY=${MISTRAL_API_KEY}
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```
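As a rough back-of-the-envelope check (an approximation; Docker's exact probe-scheduling semantics vary by version), the healthcheck parameters above give the container on the order of two minutes of consecutive failures before it is marked unhealthy:

```python
# Healthcheck parameters from the compose service above
interval_s, timeout_s, retries = 30, 10, 3

# Rough worst case: each failing probe may hang up to `timeout_s`,
# with `interval_s` between probe starts, for `retries` consecutive probes
worst_case_s = retries * (interval_s + timeout_s)
print(worst_case_s)  # 120
```

With `restart: unless-stopped`, Docker will also keep restarting the container if the process itself exits, independently of the healthcheck.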
Lines changed: 27 additions & 0 deletions

```gitignore
# Default .dockerignore file for Defang
**/__pycache__
**/.direnv
**/.DS_Store
**/.envrc
**/.git
**/.github
**/.idea
**/.next
**/.vscode
**/compose.*.yaml
**/compose.*.yml
**/compose.yaml
**/compose.yml
**/docker-compose.*.yaml
**/docker-compose.*.yml
**/docker-compose.yaml
**/docker-compose.yml
**/node_modules
**/Thumbs.db
# Dockerfile
# *.Dockerfile
# Ignore our own binary, but only in the root to avoid ignoring subfolders
defang
defang.exe
# Ignore our project-level state
.defang
```
Lines changed: 40 additions & 0 deletions

```dockerfile
# ------------ FRONTEND BUILD STAGE ------------ #
FROM node:22-alpine AS frontend-build

WORKDIR /frontend

# Copy only package files first (cache optimization)
COPY frontend/package*.json ./
RUN npm ci

# Copy rest of the frontend source
COPY frontend/ ./
RUN npm run build


# ------------ BACKEND STAGE ------------ #
FROM python:3.13-slim AS backend

WORKDIR /app

# Install build dependencies for Python packages
RUN apt-get update && apt-get install -y \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Install dependencies
COPY backend/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy backend code
COPY backend/ ./

# Copy built frontend into backend's static folder
COPY --from=frontend-build /frontend/dist ./static

# Expose the same port for both backend and frontend
EXPOSE 3000

# Start the backend server (which serves both API + frontend files)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "3000"]
```
Lines changed: 146 additions & 0 deletions

```python
from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel
import autogen
import os
from dotenv import load_dotenv
import asyncio
from typing import List, Dict, Any

# Load environment variables
load_dotenv()

app = FastAPI(title="AI Debate Club", version="1.0.0")

# Serve React static files from "./static"
app.mount("/static", StaticFiles(directory="static"), name="static")

# Debate request and response models
class DebateRequest(BaseModel):
    topic: str

class DebateResponse(BaseModel):
    messages: List[Dict[str, Any]]
    topic: str

# Mistral API configuration
def get_llm_config():
    api_key = os.getenv("MISTRAL_API_KEY")
    if not api_key:
        raise HTTPException(status_code=500, detail="MISTRAL_API_KEY not configured. Please set MISTRAL_API_KEY in your .env file")

    return {"config_list": [{"model": "mistral-small", "api_type": "mistral", "base_url": "https://api.mistral.ai/v1", "api_key": api_key}]}

def create_debate_agents():
    """Create the debate agents with specific roles"""
    llm_config = get_llm_config()

    # Pro agent (argues in favor)
    pro_agent = autogen.ConversableAgent(
        name="ProAgent",
        system_message="You are a skilled debater arguing in FAVOR of the given topic. Present compelling arguments, evidence, and reasoning to support the topic. Be persuasive, logical, and engaging. Keep responses concise but impactful. Always stay in character as the 'pro' side of the debate.",
        llm_config=llm_config,
    )

    # Con agent (argues against)
    con_agent = autogen.ConversableAgent(
        name="ConAgent",
        system_message="You are a skilled debater arguing AGAINST the given topic. Present compelling arguments, evidence, and reasoning to oppose the topic. Be persuasive, logical, and engaging. Keep responses concise but impactful. Always stay in character as the 'con' side of the debate.",
        llm_config=llm_config,
    )

    return pro_agent, con_agent

def init_autogen_chat(manager, agent, debate_prompt):
    manager.initiate_chat(agent, message=debate_prompt)

@app.post("/debate", response_model=DebateResponse)
async def start_debate(request: DebateRequest):
    """Start a debate between Pro and Con agents on the given topic"""
    try:
        # Create agents
        pro_agent, con_agent = create_debate_agents()

        # Create group chat with round-robin turn-taking
        groupchat = autogen.GroupChat(
            agents=[pro_agent, con_agent],
            messages=[],
            max_round=4,  # Maximum 4 turns
            speaker_selection_method="round_robin",
            allow_repeat_speaker=False,  # Prevent the same agent from speaking twice in a row
        )

        manager = autogen.GroupChatManager(
            groupchat=groupchat,
            llm_config=get_llm_config(),
        )

        # Start the debate with a simple prompt
        debate_prompt = f"Debate topic: {request.topic}. ProAgent argues FOR, ConAgent argues AGAINST. Keep it respectful and engaging."

        # Run the debate in a worker thread (initiate_chat is blocking)
        try:
            await asyncio.to_thread(
                init_autogen_chat,
                manager,
                pro_agent,
                debate_prompt,
            )
        except Exception as chat_error:
            # Return a mock response if AutoGen fails
            print(f"AutoGen chat failed: {chat_error}")

            mock_messages = [
                {"role": "ProAgent", "content": f"I will argue in favor of: {request.topic}. This is an important topic that deserves careful consideration."},
                {"role": "ConAgent", "content": f"I will argue against: {request.topic}. There are valid concerns that need to be addressed."},
            ]
            return DebateResponse(topic=request.topic, messages=mock_messages)

        # Extract messages from the chat result, cleaning up the format
        messages = []
        for msg in groupchat.messages:
            if msg.get("content") and msg.get("name"):
                messages.append({"role": msg["name"], "content": msg["content"]})

        return DebateResponse(topic=request.topic, messages=messages)

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Debate failed: {str(e)}")

@app.get("/health")
async def health_check():
    """Health check endpoint"""
    return {"status": "healthy", "message": "AI Debate Club is running"}

# Catch-all route registered LAST: Starlette matches routes in registration
# order, so defining this first would shadow GET endpoints like /health
@app.get("/{full_path:path}")
async def serve_react_app(full_path: str, request: Request):
    # Serve static files (and frontend src files during development) for SPA routing
    static_file_path = os.path.join("static", full_path)
    frontend_src_path = os.path.join("frontend", "src", full_path)

    if os.path.isfile(static_file_path):
        return FileResponse(static_file_path)
    elif os.path.isfile(frontend_src_path):
        return FileResponse(frontend_src_path)
    else:
        # Fall back to React's index.html for client-side routing
        index_path = os.path.join("static", "index.html")
        if os.path.exists(index_path):
            return FileResponse(index_path)
        return {"error": "Not Found"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
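The message-extraction loop in `start_debate` can be exercised in isolation with plain dicts (the entries below are hypothetical sample data standing in for `groupchat.messages`):

```python
# Hypothetical entries mimicking what AutoGen's GroupChat accumulates
raw_messages = [
    {"name": "ProAgent", "content": "Opening argument in favor."},
    {"name": "ConAgent", "content": "Opening argument against."},
    {"name": "ProAgent", "content": None},   # skipped: empty content
    {"content": "orphan message"},           # skipped: no name
]

# Same filtering/renaming as the endpoint: keep named, non-empty
# messages and map AutoGen's "name" field onto a "role" key
messages = []
for msg in raw_messages:
    if msg.get("content") and msg.get("name"):
        messages.append({"role": msg["name"], "content": msg["content"]})

print([m["role"] for m in messages])  # ['ProAgent', 'ConAgent']
```

This is why the endpoint's fallback mock response uses the same `{"role": ..., "content": ...}` shape: the frontend sees one consistent schema whether the debate ran or not.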
