AI-powered therapy session transcription and analysis platform
Transform therapy sessions into actionable insights with automatic transcription, speaker diarization, and intelligent analysis.
- Node.js 20+
- Supabase account (free tier)
- OpenAI API key
1. Clone and install

   ```bash
   git clone <your-repo>
   cd "peerbridge proj/frontend"
   npm install
   ```

2. Configure environment

   ```bash
   # Edit frontend/.env.local with your credentials:
   # - NEXT_PUBLIC_SUPABASE_URL
   # - NEXT_PUBLIC_SUPABASE_ANON_KEY
   # - OPENAI_API_KEY
   ```

3. Set up Supabase

   - Create project at supabase.com
   - Run `supabase/schema.sql` in SQL Editor
   - Copy URL and anon key to `.env.local`

4. Run development server

   ```bash
   npm run dev
   ```
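Once the environment is configured, the frontend builds its Supabase client from those keys. Below is a minimal sketch of what `frontend/lib/supabase.ts` might contain, assuming the standard `@supabase/supabase-js` v2 API (the actual file also exports the database types):

```ts
// Minimal sketch — the project's lib/supabase.ts also exports table types
import { createClient } from "@supabase/supabase-js";

// Both values come from frontend/.env.local (step 2 above)
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
```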
Deploy in 10 minutes: See RAILWAY_DEPLOYMENT.md
Stack:
- ✅ Railway - Next.js hosting + backend support ($5 FREE credit)
- ✅ Supabase - PostgreSQL + file storage (FREE)
- ⚠️ OpenAI - Whisper API + GPT-4 (~$0.40 per session)
Why Railway over Vercel: Transparent pricing, no dark patterns, developer-first platform
- Session Timeline - Chronological view of all therapy sessions
- AI Chat (Dobby) - Ask questions about your therapy journey
- Notes & Goals - Track progress and treatment plans
- Progress Patterns - Visualize mood and topic trends
- Upload Page - Drag-drop audio files for processing
- Automatic Transcription - OpenAI Whisper API (accurate, fast)
- Speaker Diarization - Identify Therapist vs. Client
- Session Analysis - GPT-4 extracts:
  - Overall mood/tone
  - Main topics discussed
  - Key insights
  - Action items
  - Brief summary
- Live progress bar during processing
- Status polling every 2 seconds (see the sketch after this list)
- Estimated completion time
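Status polling is a plain fetch loop against the status route. A minimal client-side sketch, assuming the route returns JSON with `status` and `progress` fields (the exact response shape is an assumption):

```ts
// Hypothetical poller for /api/status/[id]; field names are assumptions
function pollStatus(sessionId: string): void {
  const timer = setInterval(async () => {
    const res = await fetch(`/api/status/${sessionId}`);
    const { status, progress } = await res.json();
    console.log(`Processing: ${progress}%`); // drive the progress bar here
    if (status === "completed" || status === "failed") clearInterval(timer);
  }, 2000); // poll every 2 seconds
}
```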
```
┌─────────────────────────────────────┐
│        Next.js 16 + React 19        │
│        (Deployed on Railway)        │
│  - App Router                       │
│  - Server Components                │
│  - API Routes                       │
└────────────┬────────────────────────┘
             │
             ├──► Supabase
             │    - PostgreSQL (sessions, users, notes)
             │    - Storage (audio files)
             │    - Row Level Security
             │
             └──► OpenAI APIs
                  - Whisper (transcription)
                  - GPT-4 (analysis)
```
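The processing step fans out to both OpenAI services in sequence. A minimal sketch using the official `openai` Node SDK (model name, prompt, and file handling are assumptions, not the project's actual code):

```ts
// Sketch of the transcription + analysis step, assuming the official openai Node SDK
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function processSession(audioPath: string): Promise<string | null> {
  // 1. Transcribe the audio with Whisper
  const transcript = await openai.audio.transcriptions.create({
    file: fs.createReadStream(audioPath),
    model: "whisper-1",
  });

  // 2. Ask GPT-4 for mood, topics, key insights, action items, and a summary
  const analysis = await openai.chat.completions.create({
    model: "gpt-4o", // assumed; the stack notes mention both GPT-4 and GPT-4o
    messages: [
      {
        role: "system",
        content:
          "Analyze this therapy transcript: overall mood/tone, main topics, key insights, action items, brief summary.",
      },
      { role: "user", content: transcript.text },
    ],
  });

  return analysis.choices[0].message.content;
}
```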
Core Tables:
- `users` - Therapists and patients
- `patients` - Extended patient info
- `therapy_sessions` - Session metadata + results
- `session_notes` - AI-extracted clinical notes
- `treatment_goals` - Goal tracking
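The types exported from `lib/supabase.ts` presumably mirror these tables. A purely illustrative row type for `therapy_sessions` (every column here beyond the obvious is an assumption; the real schema lives in `supabase/schema.sql`):

```ts
// Illustrative only — see supabase/schema.sql for the actual columns
interface TherapySessionRow {
  id: string;                 // primary key (assumed UUID)
  patient_id: string;         // reference to patients (assumed)
  audio_path: string | null;  // location in the audio-sessions bucket (assumed)
  transcript: string | null;  // Whisper output
  mood: string | null;        // AI-extracted overall mood/tone
  topics: string[] | null;    // main topics discussed
  summary: string | null;     // brief summary
  status: "pending" | "processing" | "completed" | "failed"; // assumed states
  created_at: string;
}
```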
Storage:
- `audio-sessions` bucket - Uploaded audio files
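A minimal sketch of how the upload endpoint might write into that bucket from a Next.js route handler (naming and error handling are assumptions; this is not the project's actual `app/api/upload/route.ts`):

```ts
// Hypothetical upload route handler — a sketch, not the project's real code
import { NextResponse } from "next/server";
import { createClient } from "@supabase/supabase-js";

export async function POST(req: Request) {
  const form = await req.formData();
  const file = form.get("file") as File;

  const supabase = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
  );

  // Store the raw audio in the audio-sessions bucket
  const { data, error } = await supabase.storage
    .from("audio-sessions")
    .upload(`${Date.now()}-${file.name}`, file);

  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }
  return NextResponse.json({ path: data.path });
}
```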
```
peerbridge proj/
├── frontend/                        # Next.js application
│   ├── app/
│   │   ├── api/                     # Serverless API routes
│   │   │   ├── upload/              # File upload endpoint
│   │   │   ├── process/             # Audio processing
│   │   │   ├── status/[id]/         # Status polling
│   │   │   └── trigger-processing/  # Async trigger
│   │   ├── patient/dashboard-v3/    # Main dashboard
│   │   ├── upload/                  # Upload page
│   │   └── components/              # UI components
│   ├── lib/
│   │   ├── supabase.ts              # Supabase client + types
│   │   └── api-client.ts            # API helpers
│   └── package.json
│
├── audio-transcription-pipeline/    # Original pipeline (reference)
│   ├── src/
│   │   ├── pipeline.py              # CPU/API pipeline
│   │   └── pipeline_gpu.py          # GPU pipeline (legacy)
│   └── ui-web/                      # React UI (reference)
│
├── supabase/
│   └── schema.sql                   # Database schema
│
├── DEPLOYMENT.md                    # Deployment guide
└── README.md                        # This file
```
Each project is self-contained with its own:
- Virtual environment (`venv/`)
- Dependencies (`requirements.txt`)
- Configuration (`.env`, `.python-version`)
- Tests and documentation
Run Pipeline:

```bash
cd audio-transcription-pipeline
source venv/bin/activate
python tests/test_full_pipeline.py
```

Run Backend:

```bash
cd backend
source venv/bin/activate
uvicorn app.main:app --reload
```

Documentation:

- Master documentation: `Project MDs/TherapyBridge.md` (start here!)
- Organization rules: `.claude/CLAUDE.md`
- Orchestration methodology: `.claude/DYNAMIC_WAVE_ORCHESTRATION.md`
- Pipeline docs: `audio-transcription-pipeline/README.md`
- Backend docs: `backend/README.md`
- Frontend docs: `frontend/README.md`
Each project needs its own .env file:
Pipeline:

```bash
cd audio-transcription-pipeline
cp .env.example .env
# Edit .env with your OpenAI and HuggingFace keys
```

Backend:

```bash
cd backend
cp .env.example .env
# Edit .env with your database URL and OpenAI key
```

This repo follows strict organization rules (see `.claude/CLAUDE.md`):
- Minimize file count
- One README per component
- No implementation plans (execute and delete)
- No duplicate configs
- Value over volume
- Transcription: OpenAI Whisper API / faster-whisper (GPU)
- Diarization: pyannote.audio 3.1
- Backend: FastAPI + PostgreSQL (Neon)
- AI Extraction: OpenAI GPT-4o
- Frontend: Next.js 16 + React 19 + Tailwind CSS
Each project has independent development:
- Separate virtual environments
- Separate dependencies
- Separate test suites
- Can be deployed independently
Proprietary - TherapyBridge Project