- Overview
- Architecture
- Project Structure
- Tech Stack
- Quickstart
- Ingesting Documents
- Voice Call Setup
- Features
- Deployment
This project demonstrates a complete, production-ready implementation of an AI-powered support assistant. It features multimodal interaction (voice/chat), knowledge retrieval via vector search, and integration with Vapi for real-time voice calling. The system is built with FastAPI, LangChain, Weaviate, and Streamlit, containerized with Docker, and deployable on AWS. The agent handles customer queries across channels, reduces support ticket load, improves CSAT scores, and integrates with CRMs and other business platforms.
ai-support-agent/
│
├── app/
│ ├── agents/
│ │ └── support_agent.py
│ ├── chains/
│ │ └── retrieval_chain.py
│ ├── data/
│ │ └── loader.py
│ ├── db/
│ │ └── weaviate_client.py
│ ├── endpoints/
│ │ └── routes.py
│ ├── services/
│ │ └── llm_interface.py
│ ├── vapi/
│ │ └── vapi_integration.py
│ ├── streamlit_app/
│ │ └── ui.py
│ └── main.py
│
├── config/
│ └── settings.py
│
├── tests/
│ └── test_routes.py
│
├── Dockerfile
├── requirements.txt
├── .env
├── README.md
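To show how these modules fit together, here is a minimal sketch of what `app/main.py` could look like; it assumes `app/endpoints/routes.py` exposes an `APIRouter` named `router` (an assumption, not something guaranteed by the tree above):

```python
# app/main.py -- illustrative sketch only
from fastapi import FastAPI

from app.endpoints.routes import router  # assumes routes.py defines an APIRouter named `router`

app = FastAPI(title="AI Support Agent")
app.include_router(router, prefix="/api")


@app.get("/health")
def health() -> dict:
    """Liveness probe for Docker/ECS health checks."""
    return {"status": "ok"}
```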
- Languages: Python, TypeScript, Bash
- Backend/API: FastAPI
- LLMs & Prompt Orchestration: OpenAI (GPT-4), Claude, LangChain
- Knowledge Retrieval & RAG: Weaviate (primary vector store), LlamaIndex (for document parsing and indexing)
- Voice & Audio Interfaces: Vapi.ai
- Workflow Automation & Orchestration: n8n (for business logic workflows), LangChain Agents (for tool-based tasks)
- Observability & Tracing: LangSmith, LLMGuard (safety filters & evaluation)
- Frontend: Streamlit
- Deployment & Infrastructure: Docker, Terraform, GitHub Actions, AWS
- Security & Compliance: HashiCorp Vault (secrets), OPA (policy), OAuth2, PII masking
- 3rd-Party Integrations: SendGrid, Slack, Google Calendar (for CRM, alerts, reminders)
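As an illustration of how the RAG pieces above fit together, the sketch below builds a retrieval chain over Weaviate using the legacy LangChain 0.0.x API and the weaviate-client v3 `Client`. The module path, index name, and `build_retrieval_chain` helper are assumptions based on the project tree; adapt the imports if you pin newer library versions.

```python
# app/chains/retrieval_chain.py -- hedged sketch (legacy LangChain API, weaviate-client v3)
import os

import weaviate
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate


def build_retrieval_chain(index_name: str = "SupportDocs") -> RetrievalQA:
    """Builds a RetrievalQA chain backed by the Weaviate vector store."""
    client = weaviate.Client(os.environ["WEAVIATE_URL"])
    store = Weaviate(
        client=client,
        index_name=index_name,
        text_key="text",
        embedding=OpenAIEmbeddings(),
        by_text=False,  # retrieve by vector similarity rather than BM25-style text search
    )
    return RetrievalQA.from_chain_type(
        llm=ChatOpenAI(model="gpt-4", temperature=0),
        retriever=store.as_retriever(search_kwargs={"k": 4}),
    )
```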
git clone https://github.com/yourusername/ai-support-agent.git
cd ai-support-agent
Create a .env file and include:
OPENAI_API_KEY=your-key
WEAVIATE_URL=http://localhost:8080
VAPI_API_KEY=your-vapi-key
docker-compose up --build
API: http://localhost:8000
Chat UI: http://localhost:8501
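The `.env` values above can be loaded centrally in `config/settings.py`. Below is a minimal sketch assuming `pydantic-settings` is listed in `requirements.txt`; the field names simply mirror the environment keys shown above:

```python
# config/settings.py -- illustrative sketch of environment-driven configuration
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    """Reads configuration from environment variables or the .env file."""

    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: str
    weaviate_url: str = "http://localhost:8080"
    vapi_api_key: str


settings = Settings()
```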
Drop PDFs or .txt files into a folder and run:
python app/ingestion/document_ingestor.py
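A hedged sketch of what the ingestion script might do, using LangChain document loaders (rather than LlamaIndex) together with the Weaviate store. The drop folder, chunk sizes, and index name are illustrative, and `pypdf` is assumed to be installed for PDF parsing:

```python
# app/ingestion/document_ingestor.py -- hedged sketch; adjust paths and loaders to your setup
import os
from pathlib import Path

import weaviate
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Weaviate

DOCS_DIR = Path("data/docs")  # illustrative drop folder


def ingest() -> None:
    """Loads PDF/.txt files, splits them into chunks, and indexes them in Weaviate."""
    docs = []
    for path in DOCS_DIR.iterdir():
        if path.suffix == ".pdf":
            docs.extend(PyPDFLoader(str(path)).load())
        elif path.suffix == ".txt":
            docs.extend(TextLoader(str(path)).load())

    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(docs)

    client = weaviate.Client(os.environ["WEAVIATE_URL"])
    Weaviate.from_documents(
        chunks,
        OpenAIEmbeddings(),
        client=client,
        index_name="SupportDocs",
        by_text=False,
    )


if __name__ == "__main__":
    ingest()
```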
Make sure your Vapi account is configured correctly. Voice calls can be handled in:
app/vapi/voice_router.py
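A hedged sketch of a webhook handler in `voice_router.py`: Vapi can POST server events to an endpoint like this, but the payload fields shown are assumptions, so verify the message schema against Vapi's current documentation:

```python
# app/vapi/voice_router.py -- illustrative webhook sketch; payload fields are assumptions
from fastapi import APIRouter, Request

router = APIRouter(prefix="/vapi")


@router.post("/webhook")
async def vapi_webhook(request: Request) -> dict:
    """Receives Vapi server events and routes user turns to the support agent."""
    payload = await request.json()
    message = payload.get("message", {})  # assumed envelope; check Vapi docs

    if message.get("type") == "transcript":
        user_text = message.get("transcript", "")
        # Hand the transcribed utterance to the same RAG pipeline used by the chat UI,
        # e.g. a hypothetical support_agent.answer(user_text) from app/agents/.
        return {"received": True, "echo": user_text}

    return {"received": True}
```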
- Context-aware, memory-capable chat
- RAG (Retrieval-Augmented Generation) for domain-specific queries
- Voice interaction using Vapi
- Deployable locally or to the cloud
- Easily extendable with more agents, tools, or endpoints
- Use the included Docker setup to deploy the API, Weaviate, and Streamlit UI.
- For AWS: push container images to ECR and run them on ECS
- Attach a persistent volume to Weaviate if needed
- Secure the API with IAM, SSL/TLS, and an API gateway

