An AI-powered chat widget you can drop into any website, plus a modern dashboard to track user conversations and analytics.
Click the image below to watch a short video demo of the project. It showcases the embeddable chat widget in action, along with the analytics dashboard.
- Features
- Architecture
- Tech Stack
- Getting Started
- Project Structure
- Embedding the Widget
- Deployment & Demos
- Challenges & Solutions
- Contributing
- License
## Features

- Embeddable Widget: AI chat powered by WebSocket + Express API
- Multi-user Support: Each user can mint multiple widgets
- Analytics Dashboard: Conversation metrics, usage trends, and more
- Zero-Config Embed: Copy & paste a single `<script>` tag
- Real-time Updates: Socket.io for live chat sessions
## Architecture

Monorepo with Turborepo & Bun:

- `apps/chat-widget` – React + Vite embeddable widget
- `apps/web` – Next.js dashboard & HTTP API
- `apps/backend` – Bun + Express + Socket.io server
- `packages/db` – Prisma client & database layer

Data flow:

- Widget opens a WebSocket to `backend` → streams user messages.
- Backend invokes AI via the Gemini API → pushes responses back.
- REST calls from `web` (Next.js) read/write analytics via `packages/db`.
- Dashboard UI shows aggregated metrics and real-time sessions.
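The backend half of this flow can be sketched as a typed handler; the message shapes, event names, and the `AiInvoker` stand-in below are illustrative assumptions, not the project's actual protocol:

```typescript
// Shapes of the messages exchanged between widget and backend (assumed for illustration).
interface UserMessage {
  widgetId: string;
  text: string;
}

interface AiResponse {
  widgetId: string;
  text: string;
}

// Stand-in for the Gemini call the backend makes per user message.
type AiInvoker = (prompt: string) => Promise<string>;

// Backend side: receive a user message, invoke the AI, push the response back.
async function handleUserMessage(
  msg: UserMessage,
  invokeAi: AiInvoker
): Promise<AiResponse> {
  const reply = await invokeAi(msg.text);
  return { widgetId: msg.widgetId, text: reply };
}

// Example with a mock AI that echoes the prompt.
handleUserMessage({ widgetId: "demo", text: "Hello!" }, async (p) => `echo: ${p}`)
  .then((res) => console.log(res.text)); // → echo: Hello!
```

In the real server the returned `AiResponse` would be emitted back over the same Socket.io connection the message arrived on.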
## Tech Stack

- Language: TypeScript
- Frameworks: React, Next.js, Express, Socket.io
- Bundler: Vite (widget), Turbo (monorepo)
- Runtime: Node.js ≥18 & Bun
- DB: Prisma + PostgreSQL
- Deployment: DigitalOcean (Socket.io & Next.js servers), GitHub Pages (widget via `gh-pages`)
## Getting Started

Prerequisites:

- Node.js ≥18
- Bun v1.2.9+ (for backend & db)
- (Optional) Yarn, npm or pnpm
Clone & install:

```bash
git clone https://github.com/aadithya2112/chatterly
cd chatterly

# install everything
bun install
```

Run all services in dev mode:

```bash
bun run dev
# → dashboard, widget & backend start concurrently
```

Or run each service individually (recommended):
- Generate Prisma Client:

  ```bash
  cd packages/db && bunx prisma generate
  ```

- Dashboard (Next.js):

  ```bash
  cd apps/web && bun run dev
  ```

- Widget (Vite):

  ```bash
  cd apps/chat-widget && bun run dev
  ```

- Backend (Socket.io):

  ```bash
  cd apps/backend && bun src/index.ts
  ```
Create a `.env` (for each app) from `.env.example`:

```bash
# apps/backend/.env
GOOGLE_GENAI_API_KEY="…"

# apps/web/.env
DATABASE_URL="postgres://user:pass@localhost:5432/db"
JWT_SECRET="…"

# packages/db/.env
DATABASE_URL="postgres://user:pass@localhost:5432/db"  # same as web
```
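Since every app depends on its variables being set, a defensive pattern is to validate them at startup so a missing key fails fast. `requireEnv` below is a hypothetical helper for illustration, not part of the repo:

```typescript
// Hypothetical helper: read a required environment variable or fail fast at startup.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at the top of the backend's entry point:
// const apiKey = requireEnv("GOOGLE_GENAI_API_KEY");
```

Failing at boot with a named variable is much easier to debug than an opaque AI or database error minutes later.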
## Project Structure
```
/
├─ apps/
│  ├─ chat-widget/   # React widget (Vite)
│  ├─ web/           # Dashboard & HTTP API (Next.js)
│  └─ backend/       # WebSocket server (Bun + Express)
└─ packages/
   └─ db/            # Prisma client & migrations
```
## Embedding the Widget
Once you’ve built & deployed `apps/chat-widget` (→ GitHub Pages or your CDN), add this to any HTML:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Load the chat widget -->
<script
defer
src="https://YOUR_CDN_URL/chat-widget.js"
data-widget-id="WIDGET_ID"
data-api-url="https://dashboard.yoursite.com/api"
data-ws-url="wss://api.yoursite.com"
></script>
</head>
<body>
<h1>My page with AI Chat</h1>
</body>
</html>
- `data-widget-id`: Unique ID from your Dashboard
- `data-api-url`: REST endpoint for auth/analytics
- `data-ws-url`: WebSocket endpoint for live chat
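Inside the bundle, the widget can read these attributes from its own `<script>` tag via `document.currentScript.dataset` (the browser camel-cases `data-widget-id` to `widgetId`). The parser below is a sketch of that idea, written against a plain record so the logic is framework-free; the real widget's config handling may differ:

```typescript
// Shape of the configuration the widget expects from its data-* attributes.
interface WidgetConfig {
  widgetId: string;
  apiUrl: string;
  wsUrl: string;
}

// Parse config from a script tag's dataset; in the browser, pass
// (document.currentScript as HTMLScriptElement).dataset here.
function parseWidgetConfig(
  dataset: Record<string, string | undefined>
): WidgetConfig {
  const { widgetId, apiUrl, wsUrl } = dataset;
  if (!widgetId || !apiUrl || !wsUrl) {
    throw new Error(
      "chat-widget: data-widget-id, data-api-url and data-ws-url are required"
    );
  }
  return { widgetId, apiUrl, wsUrl };
}

const config = parseWidgetConfig({
  widgetId: "WIDGET_ID",
  apiUrl: "https://dashboard.yoursite.com/api",
  wsUrl: "wss://api.yoursite.com",
});
console.log(config.wsUrl); // → wss://api.yoursite.com
```

Throwing on missing attributes surfaces embed mistakes immediately in the host page's console instead of failing silently.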
## Deployment & Demos

- Widget Script (GitHub Pages): https://aadithya2112.github.io/chatterly/chat-widget.iife.js
- Main webpage (DigitalOcean): https://chatterly.aadithya.tech/
## Challenges & Solutions

- CORS & WebSocket Upgrades: Enabled wildcard CORS on Socket.io during development; for production, lock it down to whitelisted domains.
- Monorepo Builds: Leveraged the Turborepo cache to parallelize builds & reduce CI time by 70%.
- Shared Types: Configured `packages/typescript-config` & workspace path aliases to avoid duplication.
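Locking Socket.io down for production might look like the following sketch. The `allowedOrigins` values are illustrative, and the callback form of `origin` is supported because Socket.io delegates to the `cors` package:

```typescript
// Whitelisted production origins (illustrative values, not the project's real list).
const allowedOrigins = new Set([
  "https://chatterly.aadithya.tech",
  "https://aadithya2112.github.io",
]);

// Socket.io's `cors.origin` option accepts a callback, so you can decide
// per request instead of shipping the development wildcard "*".
const corsConfig = {
  origin: (
    origin: string | undefined,
    cb: (err: Error | null, allow?: boolean) => void
  ) => {
    if (origin && allowedOrigins.has(origin)) cb(null, true);
    else cb(new Error("Origin not allowed by CORS"));
  },
  methods: ["GET", "POST"],
};

// Usage (sketch): new Server(3001, { cors: corsConfig });
```

A callback also makes it easy to load the whitelist from the database later, e.g. one allowed origin per minted widget.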
