This guide will help you deploy your Flask-based TechEdu Hub chatbot to Vercel for faster performance compared to Render.
- Vercel Account: Sign up at vercel.com
- GitHub Repository: Your code should be in a GitHub repository
- API Keys: OpenRouter or OpenAI API key for chat functionality
Ensure your repository contains these files (already created):
- `vercel.json` - Vercel configuration
- `api/index.py` - Serverless function entry point
- `.vercelignore` - Files to exclude from deployment
- `.env.vercel` - Environment variables template
- `requirements.txt` - Python dependencies
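As a rough sketch, a `vercel.json` for a Flask app often looks like the following; the file already in your repository is authoritative, and its exact fields may differ:

```json
{
  "builds": [{ "src": "api/index.py", "use": "@vercel/python" }],
  "routes": [{ "src": "/(.*)", "dest": "api/index.py" }]
}
```

This routes every incoming request to the serverless entry point, which then hands it to the Flask app.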
- Go to vercel.com and sign in
- Click "New Project"
- Import your GitHub repository
- Vercel will automatically detect it as a Python project
In your Vercel project dashboard, go to Settings > Environment Variables and add:
```
OPENROUTER_API_KEY=your_openrouter_api_key_here
SECRET_KEY=your_super_secret_key_here_change_in_production
FLASK_ENV=production
PYTHONPATH=.
DATABASE_URL=your_database_url_here
EMBED_MODEL=sentence-transformers/all-MiniLM-L6-v2
TOP_K=3
MAX_TOKENS=350
CHAT_MODEL=gpt-4o-mini
```
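At startup the app can read these variables with `os.environ.get`, falling back to sensible defaults when one is unset. This is a minimal sketch, not the app's actual config code; the variable names match the dashboard settings above, and the defaults shown are illustrative:

```python
import os

# Sketch: reading the deployment settings at startup.
# Defaults apply when a variable is not set in the Vercel dashboard.
TOP_K = int(os.environ.get("TOP_K", "3"))
MAX_TOKENS = int(os.environ.get("MAX_TOKENS", "350"))
CHAT_MODEL = os.environ.get("CHAT_MODEL", "gpt-4o-mini")
EMBED_MODEL = os.environ.get("EMBED_MODEL",
                             "sentence-transformers/all-MiniLM-L6-v2")

print(TOP_K, MAX_TOKENS, CHAT_MODEL)
```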
- Click "Deploy" in Vercel
- Vercel will:
  - Install dependencies from `requirements.txt`
  - Build the serverless function
  - Deploy static files
  - Provide you with a live URL
The default built-in database:
- Works for development and small-scale production
- Data is ephemeral (resets on each deployment)
- No additional setup required
An external database (for persistent data):
- Use PostgreSQL, MySQL, or another cloud database
- Set the `DATABASE_URL` environment variable

Examples:

```
# PostgreSQL
DATABASE_URL=postgresql://user:password@host:port/database

# MySQL
DATABASE_URL=mysql://user:password@host:port/database
```
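Whichever database you use, a `DATABASE_URL` in this form can be split into connection parameters with the standard library before handing them to a driver. A small sketch (the credentials shown are placeholders):

```python
from urllib.parse import urlparse

# Sketch: splitting a DATABASE_URL into its connection parameters.
url = urlparse("postgresql://user:password@host:5432/database")

print(url.scheme)                 # the database dialect
print(url.username, url.hostname, url.port)
print(url.path.lstrip("/"))       # the database name
```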
After deployment:
- Visit your Vercel URL
- Test the chatbot functionality
- Check user registration and login
- Verify admin dashboard access
- Test FAQ search and AI responses
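The checklist above can be partly automated with a small smoke check. The sketch below runs against a local stub server so it is self-contained; in practice you would point `base` at your real Vercel URL instead:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for the deployed site; replace with your Vercel URL.
class Stub(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Stub)
threading.Thread(target=server.serve_forever, daemon=True).start()

base = f"http://127.0.0.1:{server.server_port}"  # swap for your deployment URL
with urllib.request.urlopen(base + "/", timeout=5) as resp:
    status = resp.status
server.shutdown()

print(status)  # → 200
```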
- Faster Cold Starts: Vercel's serverless functions start faster
- Global CDN: Static files served from edge locations
- Automatic Scaling: Scales with traffic, down to zero when not in use
- Better Performance: Generally faster response times
- Serverless Architecture: Each request is handled independently
- Memory Limits: 1GB memory limit per function
- Execution Time: 60-second timeout per request
- Database: Consider external database for persistent data
- Import Errors:
  - Ensure all dependencies are listed in `requirements.txt`
  - Check the Python path configuration
- Database Connection:
  - Verify the `DATABASE_URL` environment variable
  - Ensure the database is accessible from Vercel
- API Key Issues:
  - Double-check `OPENROUTER_API_KEY` or `OPENAI_API_KEY`
  - Ensure the keys have sufficient credits/permissions
- Static Files:
  - Templates and static files are handled automatically
  - Check the `vercel.json` configuration if issues occur
- View function logs in Vercel dashboard
- Use the Vercel CLI for local testing: `vercel dev`
- Check browser developer tools for client-side errors
- Database Connection Pooling: Use connection pooling for external databases
- Caching: Implement caching for FAQ embeddings
- Async Operations: Use async/await for I/O operations where possible
- Memory Management: Monitor memory usage in Vercel dashboard
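For the caching point above, a simple approach is memoizing the embedding call so repeated questions skip recomputation. In this sketch, `embed` is a hypothetical stand-in for the real sentence-transformers call, and its return value is a placeholder vector:

```python
from functools import lru_cache

calls = 0  # counts how often the (expensive) embedding is actually computed

@lru_cache(maxsize=256)
def embed(text: str) -> tuple:
    # Hypothetical stand-in for the real embedding model call.
    global calls
    calls += 1
    return tuple(float(len(word)) for word in text.split())  # placeholder

embed("how do i reset my password")
embed("how do i reset my password")  # second call is served from the cache
print(calls)  # → 1
```

Note that each serverless instance keeps its own cache, and the cache is lost on cold starts; for cross-instance caching an external store would be needed.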
- Vercel Analytics: Built-in performance monitoring
- Function Logs: Real-time logging in dashboard
- Error Tracking: Automatic error detection and alerts
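To make application messages show up in those function logs, write them to the standard streams, which serverless platforms typically capture. A minimal sketch (logger name and format are illustrative):

```python
import logging
import sys

# Sketch: log to stdout so messages appear in Vercel's function logs.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, force=True,
                    format="%(levelname)s %(name)s %(message)s")
log = logging.getLogger("techedu")
log.info("chat request handled")
```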
If you encounter issues:
- Check Vercel documentation: vercel.com/docs
- Review function logs in Vercel dashboard
- Test locally with `vercel dev`
- Check the GitHub repository for configuration files
Note: This deployment maintains all features from the original Flask application while optimizing for Vercel's serverless environment.