A full-stack AI customer support system built with a multi-agent architecture. The system uses a parent Router Agent to analyze user intent and delegate tasks to specialized sub-agents: Order Agent, Billing Agent, and Support Agent.
- Demo Video: Loom Video
- Live Application: Vercel Deployment
- Multi-Agent Architecture: Intelligent routing between specialized agents.
- Real-time Streaming: AI responses are streamed to the frontend for a smooth UX.
- Context Awareness: Maintains conversation history and user context across messages.
- Tool Integration: Agents can interact with a PostgreSQL database via Prisma to fetch real-time order and invoice data.
- Type-Safe: Built with TypeScript and Hono for end-to-end type safety.
- Frontend: React, Vite, Lucide Icons.
- Backend: Hono, Node.js.
- AI: Vercel AI SDK, Google Gemini & OpenAI.
- Database: PostgreSQL (Supabase) with Prisma ORM.
- Monorepo Management: Turborepo.
The project follows a Controller-Service pattern:
- Router Agent: Analyzes the incoming query and classifies it as `ORDER`, `BILLING`, or `SUPPORT`.
- Sub-Agents:
  - Order Agent: Handles status checks, tracking, and order details.
  - Billing Agent: Manages invoices, payment status, and refunds.
  - Support Agent: Handles general FAQs.
- Tools: Structured JSON tools allow agents to query the database safely.
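As a rough illustration of the routing decision, here is a minimal, dependency-free sketch. The real Router Agent asks an LLM to pick the label; the keyword lists and function name below are purely illustrative:

```typescript
type Intent = "ORDER" | "BILLING" | "SUPPORT";

// Illustrative keyword-based fallback; in the actual system an LLM
// classifies the query into one of these three labels.
function classifyIntent(query: string): Intent {
  const q = query.toLowerCase();
  if (/\b(order|tracking|shipment|delivery)\b/.test(q)) return "ORDER";
  if (/\b(invoice|payment|refund|charge)\b/.test(q)) return "BILLING";
  return "SUPPORT";
}
```

Whichever label comes back, the Router Agent hands the conversation to the matching sub-agent, which then has access only to its own tools.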
- Node.js (v18+)
- PostgreSQL Database (e.g., Supabase)
Run from the root directory:
```bash
npm install
```

Create a `.env` file in the `Backend` directory:
```env
# Database
DATABASE_URL="postgresql://postgres.[ref]:[pass]@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true"

# AI Providers
GOOGLE_GENERATIVE_AI_API_KEY="your_google_key"
OPENAI_API_KEY="your_openai_key"

# Supabase API (Optional if using direct DB connection)
SUPABASE_URL="https://your-project.supabase.co"
SUPABASE_ANON_KEY="your-anon-key"
```

Create a `.env` file in the `Frontend` directory (optional for local, required for prod):

```env
VITE_API_URL="http://localhost:3000" # Or your production Backend URL
```

Initialize the database and seed test data:
```bash
cd Backend
npm run db:push
npm run db:seed
```

From the root directory, start both Frontend and Backend in parallel:

```bash
npm run dev
```

- Frontend: http://localhost:5173
- Backend: http://localhost:3000
- Connect your GitHub repo.
- Root Directory: `Backend`
- Build Command: `npm install && npm run build`
- Start Command: `npm start`
  - Note: The start command includes a safety build step.
- Environment Variables: Add `DATABASE_URL`, `GOOGLE_GENERATIVE_AI_API_KEY`, `OPENAI_API_KEY`.
- Connect your GitHub repo.
- Root Directory: `Frontend`
- Build Command: `npm run build` (Default)
- Output Directory: `dist` (Default)
- Environment Variables: `VITE_API_URL`: Your Render Backend URL (e.g., `https://my-support-ai.onrender.com`)
- Robust Deployment: The Backend is configured to auto-build TypeScript and generate the Prisma client on deployment.
- Type Safety: Strict TypeScript configuration across the monorepo.
- Error Handling: Graceful error handling for missing API keys and database connection issues.
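The graceful handling of missing API keys can be sketched as a small startup check. The variable names match the Backend `.env` above, but the function itself is illustrative, not the project's actual code:

```typescript
// Illustrative startup validation: report which required
// environment variables are still unset, instead of crashing
// with an opaque provider error later.
function checkRequiredEnv(env: Record<string, string | undefined>): string[] {
  const required = [
    "DATABASE_URL",
    "GOOGLE_GENERATIVE_AI_API_KEY",
    "OPENAI_API_KEY",
  ];
  return required.filter((key) => !env[key]);
}

// Example: only the database URL is set, so both API keys are reported.
const missing = checkRequiredEnv({ DATABASE_URL: "postgresql://example" });
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```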
During the development and deployment of this project, several technical hurdles were encountered and resolved:
- Database Connection Issues (Prisma + Supabase):
  - Challenge: Encountered issues with connection pooling when using Prisma with Supabase in a serverless/containerized environment.
  - Solution: Configured the connection string with `?pgbouncer=true` and adjusted the Prisma schema to handle connection limits effectively.
- CORS & Production Environment Variables:
  - Challenge: Initially, the frontend failed to communicate with the backend on Render due to CORS restrictions and incorrect environment variable naming in the production dashboard.
  - Solution: Implemented dynamic CORS configuration in the Hono backend to allow the specific Vercel domain and ensured all `VITE_`-prefixed variables were correctly set in the Vercel dashboard.
- Vercel AI SDK Streaming:
  - Challenge: Getting the streaming response to work correctly between the Backend (Hono) and the Frontend (React) required precise handling of the `StreamData` and `AIStream` objects to ensure the UI updated in real time without buffering.
  - Solution: Refined the backend tool-calling logic to ensure that streaming chunks were sent immediately after tool execution.
- Monorepo Deployment Complexity:
  - Challenge: Deploying a Turborepo project where the Frontend and Backend are in separate subdirectories required specific configuration for "Root Directory" and "Build Commands" on both Render and Vercel.
  - Solution: Configured the `Backend` directory as a standalone project for Render and used the `Frontend` directory as the root for Vercel, ensuring `npm install` worked correctly in both contexts.
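The dynamic CORS configuration mentioned in the second challenge can be sketched as a self-contained origin check. The allowed-origin list and function name are illustrative; in the project this logic would sit behind Hono's `cors` middleware rather than stand alone:

```typescript
// Illustrative dynamic CORS policy: allow the local Vite dev server
// plus a deployed Vercel domain, reject everything else.
const allowedOrigins = [
  "http://localhost:5173",        // Vite dev server
  "https://your-app.vercel.app",  // hypothetical production frontend
];

function resolveCorsOrigin(requestOrigin: string): string | null {
  // Return the origin to echo back in Access-Control-Allow-Origin,
  // or null to reject the cross-origin request.
  return allowedOrigins.includes(requestOrigin) ? requestOrigin : null;
}
```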
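The streaming fix from the third challenge comes down to flushing each chunk as soon as it is produced instead of buffering the full reply. A dependency-free sketch of that idea (the generator, chunk size, and sample text are illustrative, not the project's actual stream handling):

```typescript
// Illustrative chunked streaming: yield each slice of the response
// immediately rather than concatenating everything first, so the
// client can render partial output in real time.
function* streamResponse(fullText: string, chunkSize = 8): Generator<string> {
  for (let i = 0; i < fullText.length; i += chunkSize) {
    yield fullText.slice(i, i + chunkSize);
  }
}

// The frontend reassembles chunks as they arrive.
const chunks: string[] = [];
for (const chunk of streamResponse("Your order #123 shipped yesterday.")) {
  chunks.push(chunk);
}
const reassembled = chunks.join("");
```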