A tool-using AI agent API that routes user requests to the appropriate tools (knowledge base search, ticket creation, followup scheduling) using OpenAI's function calling.
- Tool-using Agent: LangGraph-based agent that decides when to call tools
- Knowledge Base Search: Semantic search using Qdrant vector database with OpenAI embeddings
- Ticket Creation: SQLite-backed support ticket system with SQLAlchemy ORM
- Followup Scheduling: Schedule customer followups via email/phone/WhatsApp
- Full Observability: Structured logging with ClickHouse, trace IDs, latency metrics
- CI/CD Pipeline: GitHub Actions for automated testing, linting, and type checking
```
┌──────────────────────────────────────────────────────────┐
│                      Client Request                      │
└────────────────────────────┬─────────────────────────────┘
                             ▼
┌──────────────────────────────────────────────────────────┐
│                   FastAPI + Middlewares                  │
│             (Request Logging, Error Handling)            │
└────────────────────────────┬─────────────────────────────┘
                             ▼
┌──────────────────────────────────────────────────────────┐
│               LangGraph Agent Orchestration              │
│       (Tool loop with max 6 iterations safety cap)       │
└──────┬──────────────────┬──────────────────┬─────────────┘
       ▼                  ▼                  ▼
┌──────────────┐   ┌──────────────┐   ┌──────────────┐
│  search_kb   │   │create_ticket │   │  schedule_   │
│              │   │              │   │  followup    │
└──────┬───────┘   └──────┬───────┘   └──────┬───────┘
       │                  │                  │
       ▼                  └────────┬─────────┘
┌──────────────┐                   ▼
│    Qdrant    │            ┌──────────────┐
│  (Vectors)   │            │    SQLite    │
└──────────────┘            │  (Tickets/   │
                            │  Followups)  │
                            └──────────────┘

┌──────────────────────────────────────────────────────────┐
│                ClickHouse (Structured Logs)              │
│                            ▲                             │
│                            │                             │
│                    Grafana Dashboards                    │
└──────────────────────────────────────────────────────────┘
```
- Framework: FastAPI + Pydantic v2
- Agent Orchestration: LangGraph + langchain-openai
- Vector Database: Qdrant (semantic similarity search)
- Embeddings: OpenAI `text-embedding-3-small` (1536 dimensions)
- Relational Database: SQLite with SQLAlchemy ORM
- Logging: structlog + ClickHouse (OLAP for log analytics)
- Monitoring: Grafana dashboards
- Testing: pytest with 177+ tests
- CI/CD: GitHub Actions (lint, type-check, test)
The project includes a GitHub Actions pipeline (`.github/workflows/ci.yml`) that runs on every push and pull request to the `main` and `dev` branches:
| Step | Description |
|---|---|
| Format Check | `ruff format --check` ensures consistent code formatting |
| Lint | `ruff check` catches code quality issues |
| Type Check | `mypy` validates static typing |
| Tests | `pytest` runs the full test suite |
The pipeline uses Poetry for dependency management with caching to speed up builds.
All application logs are captured with structlog and optionally stored in ClickHouse for analysis:
- Trace IDs: Every request gets a unique `trace_id` for end-to-end tracing
- Latency Metrics: Tool execution times, OpenAI call durations
- Tool Call Tracking: Which tools were called, with what arguments, and their results
Enable ClickHouse logging by setting `CLICKHOUSE_ENABLED=true` in your `.env` file.
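Conceptually, the trace-ID plumbing works like the sketch below. It uses the stdlib `logging` module with a context variable so the example is self-contained (the project itself uses structlog, whose contextvars binding behaves similarly); `handle_request` is an illustrative name, not the actual handler:

```python
import contextvars
import logging
import uuid

# Holds the trace ID of the request currently being handled.
trace_id_var = contextvars.ContextVar("trace_id", default="-")

class TraceIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Stamp every log record with the active trace ID.
        record.trace_id = trace_id_var.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s trace_id=%(trace_id)s %(message)s")
)
handler.addFilter(TraceIdFilter())
logger = logging.getLogger("agent")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def handle_request(task: str) -> str:
    trace_id = uuid.uuid4().hex  # unique per request
    trace_id_var.set(trace_id)
    logger.info("request_received task=%r", task)
    # ... agent runs here; every log line carries the same trace_id ...
    logger.info("request_completed")
    return trace_id

tid = handle_request("What are the pricing tiers?")
```

Because the ID lives in a context variable, concurrent requests each see their own value without passing it through every function call.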
With ClickHouse as the log backend, you can use Grafana to:
- Analyze request patterns: Track which tools are used most frequently
- Monitor latency: Identify slow requests and bottlenecks
- Debug issues: Search logs by `trace_id` to reconstruct request flows
- Create alerts: Set up alerts for error rates or latency spikes
Access Grafana at http://localhost:3000 after running `docker compose up -d`.
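The trace_id lookup can also be done directly against ClickHouse's HTTP interface on port 8123. The sketch below only builds the query URL; note that the `logs` table and its column names are assumptions for illustration, not the project's actual log schema:

```python
import urllib.parse

CLICKHOUSE_URL = "http://localhost:8123"

def trace_query_url(trace_id: str, table: str = "logs") -> str:
    # Build a ClickHouse HTTP-interface URL that pulls every log
    # row for one trace, oldest first. Fetch it with any HTTP
    # client (e.g. urllib.request.urlopen) once ClickHouse is up.
    sql = (
        f"SELECT timestamp, event, duration_ms FROM {table} "
        f"WHERE trace_id = '{trace_id}' ORDER BY timestamp"
    )
    return f"{CLICKHOUSE_URL}/?query={urllib.parse.quote(sql)}"

url = trace_query_url("abc123")
```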
The knowledge base uses Qdrant for semantic search:
- Embedding Model: OpenAI `text-embedding-3-small` (1536 dimensions)
- Similarity Metric: Cosine similarity
- Collection: `knowledge_base` with payload filtering support
Access Qdrant dashboard at http://localhost:6333/dashboard.
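As a toy illustration of the cosine metric Qdrant uses to rank matches (real embeddings are 1536-dimensional; the 3-dimensional vectors here are made up):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = dot(a, b) / (|a| * |b|), in [-1, 1];
    # higher means the vectors point in more similar directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": a query and two candidate documents.
query = [1.0, 0.0, 1.0]
doc_close = [0.9, 0.1, 1.1]   # nearly parallel to the query
doc_far = [-1.0, 0.5, -1.0]   # points the opposite way

score_close = cosine_similarity(query, doc_close)
score_far = cosine_similarity(query, doc_far)
```

Qdrant computes this ranking server-side over the whole collection; the payload filters shown in the `search_kb` schema narrow the candidate set before scoring.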
- Python 3.11+
- Docker & Docker Compose
- Poetry
- OpenAI API key
```bash
git clone <repository-url>
cd flyboard
poetry install
cp .env.example .env
# Edit .env and set your OPENAI_API_KEY
```

Required environment variables:

```bash
OPENAI_API_KEY=your_openai_api_key_here
```
Optional environment variables:
```bash
OPENAI_MODEL=gpt-4o                                     # Default: gpt-4o
QDRANT_URL=http://localhost:6333                        # Default: http://localhost:6333
SQLITE_DB_PATH=app/resources/database/data/flyboard.db  # Default path
CLICKHOUSE_ENABLED=false                                # Enable ClickHouse logging
LOG_LEVEL=INFO                                          # Logging level
LOG_FORMAT=console                                      # console or json
```
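Reading these variables boils down to something like the following stdlib sketch; the defaults mirror the table above, though the app may well centralize this differently (e.g. via Pydantic settings):

```python
import os

def get_settings() -> dict:
    # Each value falls back to the documented default when the
    # corresponding variable is absent from the environment.
    return {
        "openai_model": os.getenv("OPENAI_MODEL", "gpt-4o"),
        "qdrant_url": os.getenv("QDRANT_URL", "http://localhost:6333"),
        "clickhouse_enabled": os.getenv("CLICKHOUSE_ENABLED", "false").lower() == "true",
        "log_level": os.getenv("LOG_LEVEL", "INFO"),
        "log_format": os.getenv("LOG_FORMAT", "console"),
    }

settings = get_settings()
```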
```bash
docker compose up -d
```

This starts:
- Qdrant (vector database) on port 6333
- ClickHouse (log storage) on ports 8123, 9000
- Grafana (dashboards) on port 3000
Seed the knowledge base, then start the server:

```bash
poetry run python scripts/seed_kb.py
poetry run uvicorn app.app:app --host 0.0.0.0 --port 8000
```

Or use the Makefile:

```bash
make run
```

`GET /health`

Response:

```json
{"status": "ok"}
```

`POST /v1/agent/run`
`Content-Type: application/json`

```json
{
  "task": "string",
  "customer_id": "string (optional)",
  "language": "string (optional, e.g., 'en', 'es', 'pt')"
}
```

Response:
```json
{
  "trace_id": "string",
  "final_answer": "string",
  "tool_calls": [
    {
      "name": "search_kb|create_ticket|schedule_followup",
      "arguments": {},
      "result": {},
      "duration_ms": 0
    }
  ],
  "metrics": {
    "latency_ms": 0,
    "model": "string",
    "openai_calls": 0
  }
}
```

`search_kb` — search the knowledge base for relevant information.
```json
{
  "query": "string",
  "top_k": 5,
  "filters": {
    "tags": ["pricing"],
    "audience": "customer"
  }
}
```

`create_ticket` — create a support ticket.
```json
{
  "title": "string",
  "body": "string",
  "priority": "low|medium|high"
}
```

`schedule_followup` — schedule a customer followup.
```json
{
  "datetime_iso": "2026-01-06T10:30:00",
  "contact": "email/phone/name",
  "channel": "email|phone|whatsapp"
}
```

```bash
curl -X POST http://localhost:8000/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "Give me the high-level pricing model and what can change the quote."}'
```

```bash
curl -X POST http://localhost:8000/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "How does CRM writeback work and how long does it take to set up?"}'
```

```bash
curl -X POST http://localhost:8000/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "We are failing to write to HubSpot since this morning. What should we check and can you open a high priority ticket for ops?"}'
```

```bash
curl -X POST http://localhost:8000/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "Schedule a follow-up call with Marta (+34612345678) tomorrow at 10:30 CET via WhatsApp to discuss custom SLA."}'
```

```bash
curl -X POST http://localhost:8000/v1/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "¿En qué idiomas funciona y qué incluye el onboarding?"}'
```

Run all tests:
```bash
poetry run pytest tests/ -v
```

Run with coverage:

```bash
poetry run pytest tests/ --cov=app --cov-report=html
```

Type-check and lint:

```bash
poetry run mypy app/
poetry run ruff check app/
```

```
app/
├── app.py                     # FastAPI application
├── controllers/               # Request handlers
├── middlewares/               # Error handlers, logging middleware
├── routes/                    # API routes
├── schemas/                   # Pydantic models
├── resources/
│   ├── assistant/
│   │   ├── engine.py          # LangGraph agent orchestration
│   │   ├── prompts/           # System prompts
│   │   ├── schemas/           # Agent schemas
│   │   └── tools/             # Tool implementations
│   ├── database/              # SQLite + SQLAlchemy
│   ├── embedding/             # OpenAI embeddings
│   ├── logs_database/         # ClickHouse provider
│   └── vector_database/       # Qdrant provider
└── utils/
    └── logger.py              # Structured logging
```
```
tests/
├── controllers/
├── middlewares/
├── resources/
│   └── assistant/
│       ├── prompts/
│       └── tools/
└── routes/
```
- Max Tool Iterations: The agent loop has a hard cap of 6 iterations to prevent infinite loops
- Input Validation: All inputs are validated with Pydantic v2 schemas
- Error Handling: OpenAI errors return 502 with a `trace_id` for debugging
- Graceful Degradation: If ClickHouse is unavailable, logs fall back to stdout
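The iteration cap can be pictured with this toy loop; it is a deliberately simplified stand-in for the LangGraph graph, with made-up callables for the model decision and tool execution:

```python
MAX_TOOL_ITERATIONS = 6  # hard cap, as in the agent loop above

def run_agent_loop(decide_next_step, execute_tool):
    """Toy tool loop: keep executing tools until the model returns a
    final answer (signalled here by None) or the cap is reached."""
    tool_calls = []
    for _ in range(MAX_TOOL_ITERATIONS):
        step = decide_next_step(tool_calls)
        if step is None:  # model produced a final answer
            return tool_calls, "final_answer"
        tool_calls.append(execute_tool(step))
    # Cap hit: bail out instead of looping forever.
    return tool_calls, "max_iterations_reached"

# A stub "model" that never stops asking for tools hits the cap:
calls, outcome = run_agent_loop(lambda history: "search_kb", lambda tool: tool)
```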
- SQLite: `app/resources/database/data/flyboard.db` stores tickets and followups
- Qdrant: Docker volume stores knowledge base vectors
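To see what lands in that SQLite file, here is a minimal stdlib `sqlite3` sketch. The schema is illustrative only (the real tables are defined by the SQLAlchemy models), and an in-memory database is used so the example does not touch the app's data file:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the app uses app/resources/database/data/flyboard.db
conn.execute(
    """CREATE TABLE tickets (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        body TEXT NOT NULL,
        priority TEXT CHECK (priority IN ('low', 'medium', 'high'))
    )"""
)
# Mirrors what the create_ticket tool would persist for the
# HubSpot-outage example request above.
conn.execute(
    "INSERT INTO tickets (title, body, priority) VALUES (?, ?, ?)",
    ("HubSpot writeback failing", "Failing since this morning", "high"),
)
row = conn.execute("SELECT title, priority FROM tickets").fetchone()
```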