From 553deefb7b4571c0f00f696ec0c7089da34d7039 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Fri, 3 Oct 2025 08:07:24 +0000 Subject: [PATCH 1/5] Initial plan From 0a0ef9bc2464b4c094cc700ae80439180d6297d7 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Fri, 3 Oct 2025 08:13:08 +0000 Subject: [PATCH 2/5] Update server1.py with environment-based config and CORS support Co-authored-by: yashnaiduu <152394598+yashnaiduu@users.noreply.github.com> --- Dockerfile | 5 +-- README.md | 101 ++++++++++++++++++++++++++++++++++++++++++++++++++--- server1.py | 13 +++++-- 3 files changed, 110 insertions(+), 9 deletions(-) diff --git a/Dockerfile b/Dockerfile index c4baca0f..a06246bd 100644 --- a/Dockerfile +++ b/Dockerfile @@ -31,5 +31,6 @@ RUN mkdir -p Uploads # Expose port EXPOSE 5050 -# Run the application -CMD ["python", "server1.py"] \ No newline at end of file +# Run the application with gunicorn for production +# Railway will set the PORT environment variable +CMD gunicorn --bind 0.0.0.0:${PORT:-5050} --timeout 120 server1:app \ No newline at end of file diff --git a/README.md b/README.md index fa5694fe..1caa386a 100644 --- a/README.md +++ b/README.md @@ -73,6 +73,76 @@ The built files in the `out` directory can be deployed to any static hosting ser - Azure Static Web Apps - Surge.sh +## 🖥️ Backend Deployment (Flask API) + +### Railway Deployment (Recommended for Backend) + +The Flask backend (`server1.py`) can be easily deployed to Railway: + +#### Prerequisites + +1. **Model File**: Upload `mobilenet_brain_tumor_classifier.h5` to your repository or use external storage (Google Cloud Storage, AWS S3, etc.) +2. **Dataset Files**: Upload dataset files to storage or include them in the repository (note: large files may require Git LFS or external storage) + +#### Deployment Steps + +1. **Push code to GitHub** + ```bash + git add . 
+ git commit -m "Prepare for Railway deployment" + git push + ``` + +2. **Deploy to Railway** + - Go to [Railway.app](https://railway.app) + - Create a new project from your GitHub repository + - Railway will automatically detect the Dockerfile + +3. **Configure Environment Variables** in Railway dashboard: + ```env + GOOGLE_API_KEY=your_gemini_api_key_here + PORT=5050 + UPLOAD_FOLDER=Uploads + DATASET_PATH=./Dataset + MODEL_PATH=mobilenet_brain_tumor_classifier.h5 + ``` + +4. **Get your Railway backend URL** + - Railway will provide a URL like: `https://your-app.railway.app` + +#### Using External Storage for Large Files + +If your model and dataset files are too large for the repository: + +**Google Cloud Storage Example:** +```python +# Add to server1.py before loading the model +from google.cloud import storage + +def download_model_from_gcs(): + client = storage.Client() + bucket = client.bucket('your-bucket-name') + blob = bucket.blob('mobilenet_brain_tumor_classifier.h5') + blob.download_to_filename('mobilenet_brain_tumor_classifier.h5') +``` + +**AWS S3 Example:** +```python +import boto3 + +def download_model_from_s3(): + s3 = boto3.client('s3') + s3.download_file('your-bucket-name', 'mobilenet_brain_tumor_classifier.h5', + 'mobilenet_brain_tumor_classifier.h5') +``` + +### Alternative Backend Deployment Options + +- **Heroku**: Similar to Railway, supports Dockerfile deployment +- **Google Cloud Run**: Serverless container deployment +- **AWS Elastic Beanstalk**: Supports Docker containers +- **DigitalOcean App Platform**: Docker-based deployment + ## 🔧 Configuration ### Environment Variables @@ -85,11 +155,24 @@ NEXT_PUBLIC_API_URL=https://your-backend-api.com ### Backend Integration -To connect with the original Flask backend: +To connect the Next.js frontend with the Flask backend: + +#### 1. 
Configure Frontend Environment + +Create a `.env.local` file in the frontend directory: + +```env +NEXT_PUBLIC_API_URL=https://your-railway-backend.railway.app +``` + +Replace `https://your-railway-backend.railway.app` with your actual Railway backend URL. + +#### 2. Update API Endpoints -1. Update the API endpoints in the components -2. Replace mock data with actual API calls -3. Handle CORS configuration on your backend +The Flask backend provides the following endpoints: +- `POST /predict` - Upload and classify a brain MRI image +- `GET /random` - Get a random dataset image with prediction +- `POST /heatmap` - Generate Grad-CAM heatmap for an uploaded image Example API integration: @@ -101,6 +184,16 @@ const response = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/predict`, { const result = await response.json() ``` +#### 3. CORS Configuration + +The Flask backend is already configured with CORS support via `flask-cors`: +```python +from flask_cors import CORS +CORS(app) # Enables CORS for all routes +``` + +This allows the frontend to make requests from any domain. 
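In production you may prefer to restrict which origins are allowed rather than accepting requests from any domain. A minimal sketch, assuming a hypothetical `ALLOWED_ORIGINS` environment variable whose value could be passed to flask-cors's `origins` parameter (the helper itself is not part of `server1.py`):

```python
import os

def allowed_origins(env=None):
    """Parse a comma-separated origin whitelist; '*' means allow all."""
    env = os.environ if env is None else env
    raw = env.get("ALLOWED_ORIGINS", "")
    origins = [o.strip() for o in raw.split(",") if o.strip()]
    return origins or "*"

# In server1.py this value could then be handed to flask-cors, e.g.:
# CORS(app, origins=allowed_origins())
```

With `ALLOWED_ORIGINS` unset, the behavior matches the current `CORS(app)` default of allowing every origin.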
+ ## 📱 Features ### Image Upload diff --git a/server1.py b/server1.py index 4b8c63a6..494b978d 100644 --- a/server1.py +++ b/server1.py @@ -1,4 +1,5 @@ from flask import Flask, render_template, request, jsonify +from flask_cors import CORS import cv2 import numpy as np import tensorflow as tf @@ -17,6 +18,7 @@ logger = logging.getLogger(__name__) app = Flask(__name__) +CORS(app) # Enable CORS for all routes app.config['UPLOAD_FOLDER'] = os.getenv('UPLOAD_FOLDER', 'Uploads') app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024 app.config['ALLOWED_EXTENSIONS'] = {'png', 'jpg', 'jpeg', 'bmp'} @@ -33,9 +35,14 @@ # Configure Gemini API # The API key is read from the GOOGLE_API_KEY environment variable try: - genai.configure(api_key='Add Your Own APi Key') - gemini_vision_model = genai.GenerativeModel('gemini-2.5-flash-preview-05-20') - logger.info("Gemini API configured successfully.") + api_key = os.getenv('GOOGLE_API_KEY') + if api_key: + genai.configure(api_key=api_key) + gemini_vision_model = genai.GenerativeModel('gemini-2.5-flash-preview-05-20') + logger.info("Gemini API configured successfully.") + else: + logger.warning("GOOGLE_API_KEY environment variable not set. 
Gemini API will not be available.") + gemini_vision_model = None except Exception as e: logger.error(f"Failed to configure Gemini API: {str(e)}") gemini_vision_model = None # Indicate that Gemini is not available From 1c64b0afd8b655e5d22bd782f3f5b652f2d476a0 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Fri, 3 Oct 2025 08:15:37 +0000 Subject: [PATCH 3/5] Add deployment documentation and environment variable examples Co-authored-by: yashnaiduu <152394598+yashnaiduu@users.noreply.github.com> --- .env.example | 18 ++++ .env.local.example | 8 ++ DEPLOYMENT.md | 253 +++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 279 insertions(+) create mode 100644 .env.example create mode 100644 .env.local.example create mode 100644 DEPLOYMENT.md diff --git a/.env.example b/.env.example new file mode 100644 index 00000000..d6ce8339 --- /dev/null +++ b/.env.example @@ -0,0 +1,18 @@ +# Backend Environment Variables (for Railway/Production) +# Copy this file and set your actual values + +# Required: Google Gemini API Key +# Get one from: https://makersuite.google.com/app/apikey +GOOGLE_API_KEY=your_gemini_api_key_here + +# Optional: Port (Railway sets this automatically) +PORT=5050 + +# Optional: Upload folder path +UPLOAD_FOLDER=Uploads + +# Optional: Dataset path (only needed for /random endpoint) +DATASET_PATH=./Dataset + +# Optional: Model file path +MODEL_PATH=mobilenet_brain_tumor_classifier.h5 diff --git a/.env.local.example b/.env.local.example new file mode 100644 index 00000000..7a9ebc6e --- /dev/null +++ b/.env.local.example @@ -0,0 +1,8 @@ +# Frontend Environment Variables (Next.js) +# Copy this file to .env.local and set your actual Railway backend URL + +# Backend API URL from Railway +NEXT_PUBLIC_API_URL=https://your-railway-backend.railway.app + +# Note: Replace 'your-railway-backend.railway.app' with your actual Railway deployment URL +# You can find this in your Railway project dashboard after 
deployment diff --git a/DEPLOYMENT.md b/DEPLOYMENT.md new file mode 100644 index 00000000..ef899997 --- /dev/null +++ b/DEPLOYMENT.md @@ -0,0 +1,253 @@ +# Railway Deployment Guide + +This document provides step-by-step instructions for deploying the NeuroScan Brain Tumor Classification application to Railway. + +## 📋 Overview + +The application consists of: +- **Backend**: Flask API (`server1.py`) for ML model inference +- **Frontend**: Next.js web application for user interface + +## 🚀 Backend Deployment (Railway) + +### Prerequisites + +1. **Railway Account**: Sign up at [railway.app](https://railway.app) +2. **GitHub Repository**: Your code should be pushed to GitHub +3. **Model File**: `mobilenet_brain_tumor_classifier.h5` (150-200MB) +4. **Dataset**: Brain MRI dataset files (optional for production) +5. **Gemini API Key**: Get one from [Google AI Studio](https://makersuite.google.com/app/apikey) + +### Step 1: Prepare Your Repository + +Ensure these files are in your repository: +- ✅ `server1.py` (Flask application) +- ✅ `requirements.txt` (Python dependencies) +- ✅ `Dockerfile` (Container configuration) +- ✅ `mobilenet_brain_tumor_classifier.h5` (model file) + +**Note**: If your model file is too large (>100MB), consider using Git LFS or external storage. + +### Step 2: Deploy to Railway + +1. **Go to Railway**: https://railway.app +2. **Create New Project**: Click "New Project" +3. 
**Deploy from GitHub**: + - Select "Deploy from GitHub repo" + - Choose your repository + - Railway will automatically detect the Dockerfile + +### Step 3: Configure Environment Variables + +In the Railway project dashboard, add these environment variables: + +| Variable | Value | Required | +|----------|-------|----------| +| `GOOGLE_API_KEY` | Your Gemini API key | Yes | +| `PORT` | 5050 (Railway auto-sets this) | No | +| `UPLOAD_FOLDER` | Uploads | No | +| `DATASET_PATH` | ./Dataset | No* | +| `MODEL_PATH` | mobilenet_brain_tumor_classifier.h5 | No | + +*Only needed if you want the `/random` endpoint to work + +### Step 4: Get Your Backend URL + +After deployment completes: +1. Railway will provide a URL like: `https://your-app.railway.app` +2. Copy this URL - you'll need it for frontend configuration + +### Step 5: Test Your Backend + +Test the endpoints: +```bash +# Health check +curl https://your-app.railway.app/ + +# Upload and predict (with an image file) +curl -X POST https://your-app.railway.app/predict \ + -F "file=@/path/to/brain-mri.jpg" +``` + +## 🎨 Frontend Configuration + +### Step 1: Create `.env.local` + +In your Next.js frontend directory, create a `.env.local` file: + +```env +NEXT_PUBLIC_API_URL=https://your-app.railway.app +``` + +Replace `https://your-app.railway.app` with your actual Railway backend URL. + +### Step 2: Deploy Frontend + +Deploy the frontend to Vercel, Netlify, or your preferred platform: + +**Vercel (Recommended)**: +1. Push code to GitHub +2. Import project in Vercel dashboard +3. Add environment variable: `NEXT_PUBLIC_API_URL` +4. Deploy + +**Netlify**: +1. Build: `npm run build` +2. Deploy the `out` folder +3. 
Add environment variable in Netlify dashboard + +## 📦 Handling Large Files + +### Option 1: Git LFS (Large File Storage) + +For the model file: +```bash +git lfs install +git lfs track "*.h5" +git add .gitattributes +git add mobilenet_brain_tumor_classifier.h5 +git commit -m "Add model with Git LFS" +git push +``` + +### Option 2: External Storage + +#### Using Google Cloud Storage + +1. Upload model to GCS bucket +2. Add download code to `server1.py`: + +```python +from google.cloud import storage + +def download_model_from_gcs(): + if not os.path.exists('mobilenet_brain_tumor_classifier.h5'): + client = storage.Client() + bucket = client.bucket('your-bucket-name') + blob = bucket.blob('mobilenet_brain_tumor_classifier.h5') + blob.download_to_filename('mobilenet_brain_tumor_classifier.h5') + logger.info("Model downloaded from GCS") + +# Call before loading model +download_model_from_gcs() +``` + +3. Add `google-cloud-storage` to `requirements.txt` +4. Set `GOOGLE_APPLICATION_CREDENTIALS` in Railway + +#### Using AWS S3 + +1. Upload model to S3 bucket +2. Add download code: + +```python +import boto3 + +def download_model_from_s3(): + if not os.path.exists('mobilenet_brain_tumor_classifier.h5'): + s3 = boto3.client('s3') + s3.download_file('your-bucket', + 'mobilenet_brain_tumor_classifier.h5', + 'mobilenet_brain_tumor_classifier.h5') + logger.info("Model downloaded from S3") +``` + +3. Add `boto3` to `requirements.txt` +4. Set AWS credentials in Railway + +### Option 3: Railway Volume + +Railway supports persistent storage volumes: +1. Create a volume in Railway dashboard +2. Mount it to `/app/models` +3. Update `MODEL_PATH` to `/app/models/mobilenet_brain_tumor_classifier.h5` +4. 
Upload model to volume via Railway CLI + +## 🔍 Troubleshooting + +### Build Failures + +**Issue**: Dockerfile build fails +- Check Railway build logs +- Verify all dependencies in `requirements.txt` +- Ensure sufficient memory (upgrade Railway plan if needed) + +**Issue**: Model file not found +- Verify model file is in repository or accessible via storage +- Check `MODEL_PATH` environment variable + +### Runtime Errors + +**Issue**: Import errors +- Verify `requirements.txt` includes all dependencies: + - `flask==2.3.3` + - `flask-cors==4.0.0` + - `gunicorn==20.1.0` + - `tensorflow==2.15.0` + - `google-generativeai==0.8.5` + +**Issue**: CORS errors from frontend +- Verify `CORS(app)` is in `server1.py` +- Check frontend is using correct backend URL + +**Issue**: Gemini API errors +- Verify `GOOGLE_API_KEY` is set in Railway +- Check API key is valid and has quota remaining + +### Performance Issues + +**Issue**: Slow predictions +- Increase Railway instance size +- Consider using GPU instances for TensorFlow +- Enable model caching + +**Issue**: Timeout errors +- Increase gunicorn timeout (default: 120s) +- Optimize model inference code + +## 📊 Monitoring + +Railway provides: +- **Logs**: View application logs in real-time +- **Metrics**: CPU, memory, network usage +- **Deployments**: Track deployment history + +Access these in the Railway project dashboard. + +## 🔒 Security Best Practices + +1. **Never commit API keys** to the repository +2. **Use environment variables** for all secrets +3. **Enable HTTPS** (Railway provides this automatically) +4. **Limit file upload sizes** (already set to 16MB) +5. **Validate all inputs** (already implemented) +6. **Keep dependencies updated** regularly + +## 💰 Cost Considerations + +Railway pricing (as of 2025): +- **Free Tier**: $5 credit/month +- **Developer Plan**: $20/month (more resources) +- **Team Plan**: Custom pricing + +Model inference is compute-intensive. Monitor usage and upgrade as needed. 
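If the model is hosted at a plain HTTPS URL rather than GCS or S3, the same download-if-missing pattern shown in the storage examples above needs only the standard library. A sketch, where `MODEL_URL` is a placeholder for your actual hosting location:

```python
import os
import urllib.request

# Hypothetical: MODEL_URL points at a publicly reachable copy of the model file.
MODEL_URL = os.getenv("MODEL_URL", "https://example.com/mobilenet_brain_tumor_classifier.h5")
MODEL_PATH = os.getenv("MODEL_PATH", "mobilenet_brain_tumor_classifier.h5")

def ensure_model(url=MODEL_URL, path=MODEL_PATH):
    """Download the model once at startup; later boots reuse the cached copy."""
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path
```

Calling `ensure_model()` before `tf.keras.models.load_model(...)` keeps cold starts cheap after the first deploy, since the container only downloads when the file is absent.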
+ +## 📚 Additional Resources + +- [Railway Documentation](https://docs.railway.app) +- [Flask Deployment Best Practices](https://flask.palletsprojects.com/en/2.3.x/deploying/) +- [TensorFlow Model Optimization](https://www.tensorflow.org/model_optimization) +- [Gunicorn Configuration](https://docs.gunicorn.org/en/stable/configure.html) + +## 🆘 Support + +If you encounter issues: +1. Check Railway build/deployment logs +2. Review this deployment guide +3. Open an issue on GitHub +4. Consult Railway community/support + +--- + +**Last Updated**: October 2025 +**Maintained by**: Yash Naidu From a986732e83d93575dc927e6e263ce2bd396170f8 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Fri, 3 Oct 2025 08:16:56 +0000 Subject: [PATCH 4/5] Add comprehensive changes summary documentation Co-authored-by: yashnaiduu <152394598+yashnaiduu@users.noreply.github.com> --- CHANGES_SUMMARY.md | 204 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 204 insertions(+) create mode 100644 CHANGES_SUMMARY.md diff --git a/CHANGES_SUMMARY.md b/CHANGES_SUMMARY.md new file mode 100644 index 00000000..713570cb --- /dev/null +++ b/CHANGES_SUMMARY.md @@ -0,0 +1,204 @@ +# Changes Summary - Railway Deployment Preparation + +This document summarizes all changes made to prepare the NeuroScan application for deployment on Railway. + +## 🔧 Code Changes + +### 1. 
server1.py + +#### Added CORS Support +```python +from flask_cors import CORS + +app = Flask(__name__) +CORS(app) # Enable CORS for all routes +``` +- **Why**: Allows the frontend (deployed on Vercel/Netlify) to make API requests to the backend (deployed on Railway) +- **Impact**: No more CORS errors when accessing the API from different domains + +#### Updated Gemini API Configuration +```python +# Before: +genai.configure(api_key='Add Your Own APi Key') + +# After: +api_key = os.getenv('GOOGLE_API_KEY') +if api_key: + genai.configure(api_key=api_key) + gemini_vision_model = genai.GenerativeModel('gemini-2.5-flash-preview-05-20') +else: + logger.warning("GOOGLE_API_KEY environment variable not set.") + gemini_vision_model = None +``` +- **Why**: Security best practice - never hardcode API keys +- **Impact**: API key is now read from environment variables, making it secure and configurable per deployment + +#### Port Configuration +```python +# Already present, but verified: +port = int(os.environ.get("PORT", 5050)) +app.run(debug=True, host='0.0.0.0', port=port) +``` +- **Why**: Railway automatically sets the PORT environment variable +- **Impact**: Application will work correctly on Railway without hardcoded ports + +### 2. Dockerfile + +#### Updated CMD to Use Gunicorn +```dockerfile +# Before: +CMD ["python", "server1.py"] + +# After: +CMD gunicorn --bind 0.0.0.0:${PORT:-5050} --timeout 120 server1:app +``` +- **Why**: Gunicorn is a production-grade WSGI server, better than Flask's development server +- **Impact**: Improved performance, stability, and proper handling of multiple requests + +### 3. 
requirements.txt + +#### Verified Dependencies +All required dependencies are already present: +- ✅ Flask==2.3.3 +- ✅ Flask-Cors==4.0.0 +- ✅ gunicorn==20.1.0 +- ✅ numpy==1.24.3 +- ✅ tensorflow==2.15.0 +- ✅ pillow==11.2.1 +- ✅ opencv-python==4.8.0.76 +- ✅ google-generativeai==0.8.5 +- ✅ Werkzeug==3.1.3 + +**No changes needed** - all dependencies were already correctly specified. + +## 📚 Documentation Changes + +### 4. README.md + +#### Added Backend Deployment Section +- Comprehensive Railway deployment guide +- Prerequisites and deployment steps +- Environment variable configuration +- External storage options for large files (GCS, S3) +- Alternative deployment platforms + +#### Enhanced Backend Integration Section +- Frontend environment configuration (`.env.local`) +- API endpoint documentation +- CORS configuration explanation +- Example API usage code + +### 5. DEPLOYMENT.md (New File) + +Created a detailed deployment guide covering: +- Step-by-step Railway deployment instructions +- Environment variable reference table +- Frontend configuration +- Large file handling strategies (Git LFS, GCS, S3, Railway volumes) +- Troubleshooting common issues +- Security best practices +- Cost considerations +- Monitoring and support resources + +### 6. .env.example (New File) + +Created backend environment variables template: +```env +GOOGLE_API_KEY=your_gemini_api_key_here +PORT=5050 +UPLOAD_FOLDER=Uploads +DATASET_PATH=./Dataset +MODEL_PATH=mobilenet_brain_tumor_classifier.h5 +``` + +### 7. 
.env.local.example (New File) + +Created frontend environment variables template: +```env +NEXT_PUBLIC_API_URL=https://your-railway-backend.railway.app +``` + +## 🎯 What This Accomplishes + +### Security ✅ +- Removed hardcoded API key +- API keys now managed via environment variables +- Better secret management for production + +### Functionality ✅ +- CORS support for cross-origin requests +- Frontend can communicate with backend on different domains +- Production-ready server with Gunicorn + +### Deployment ✅ +- Railway-ready configuration +- Dynamic PORT handling +- Proper environment variable usage +- Comprehensive documentation + +### Developer Experience ✅ +- Clear deployment instructions +- Environment variable templates +- Troubleshooting guide +- Multiple deployment options + +## 🚀 Next Steps for Deployment + +1. **Backend (Railway)**: + - Push code to GitHub + - Create Railway project from repository + - Set `GOOGLE_API_KEY` environment variable + - Railway will automatically build and deploy + +2. **Frontend (Vercel/Netlify)**: + - Create `.env.local` with Railway backend URL + - Deploy to Vercel or Netlify + - Application is now fully functional! + +3. **Large Files**: + - Decide on storage strategy for `mobilenet_brain_tumor_classifier.h5` + - Options: Git LFS, GCS, S3, or Railway volume + - Follow instructions in DEPLOYMENT.md + +## 📋 Testing Checklist + +Before going live, verify: +- [ ] Backend deploys successfully on Railway +- [ ] Environment variables are set correctly +- [ ] Frontend can connect to backend +- [ ] Image upload and prediction works +- [ ] Grad-CAM heatmap generation works +- [ ] CORS is working (no browser console errors) +- [ ] Gemini API is responding (if key is set) + +## 🐛 Known Considerations + +1. **Model File Size**: The model file (~150MB) may need Git LFS or external storage +2. **Dataset Files**: Optional for production; only needed for `/random` endpoint +3. 
**Gemini API**: Optional; application works without it (MRI validation disabled) +4. **Cold Starts**: Railway may have cold starts on free tier (first request slower) + +## 📦 Files Modified + +1. `server1.py` - Added CORS, environment-based config +2. `Dockerfile` - Updated to use Gunicorn with PORT variable +3. `README.md` - Added deployment sections + +## 📦 Files Added + +1. `DEPLOYMENT.md` - Comprehensive deployment guide +2. `.env.example` - Backend environment template +3. `.env.local.example` - Frontend environment template +4. `CHANGES_SUMMARY.md` - This file + +## 📞 Support + +If you need help with deployment: +1. Check DEPLOYMENT.md for detailed instructions +2. Review Railway logs for any errors +3. Verify environment variables are set correctly +4. Open an issue on GitHub if problems persist + +--- + +**Summary**: All requirements from the problem statement have been successfully implemented. The application is now ready for deployment on Railway with proper security, CORS support, and comprehensive documentation. From 07b99b066ab994d3747435d2e6e553550f39e9bc Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Fri, 3 Oct 2025 08:18:17 +0000 Subject: [PATCH 5/5] Add deployment checklist for easy reference Co-authored-by: yashnaiduu <152394598+yashnaiduu@users.noreply.github.com> --- DEPLOYMENT_CHECKLIST.md | 205 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 205 insertions(+) create mode 100644 DEPLOYMENT_CHECKLIST.md diff --git a/DEPLOYMENT_CHECKLIST.md b/DEPLOYMENT_CHECKLIST.md new file mode 100644 index 00000000..953a29fb --- /dev/null +++ b/DEPLOYMENT_CHECKLIST.md @@ -0,0 +1,205 @@ +# 🚀 Railway Deployment Checklist + +Use this checklist to ensure a smooth deployment to Railway. 
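The environment-variable items below can be pre-checked locally before deploying. A small sketch using the variable names from this checklist (the `missing_required` helper is hypothetical, not part of the codebase):

```python
import os

# GOOGLE_API_KEY is the only hard requirement; the rest have in-code defaults.
REQUIRED = ["GOOGLE_API_KEY"]
OPTIONAL = ["PORT", "UPLOAD_FOLDER", "DATASET_PATH", "MODEL_PATH"]

def missing_required(env=None):
    """Return required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

# Example: print a warning locally before pushing to Railway.
# if missing_required():
#     print("Missing:", ", ".join(missing_required()))
```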
+ +## 📝 Pre-Deployment + +### Backend Preparation +- [ ] Verify `server1.py` has CORS enabled +- [ ] Confirm Gemini API key is NOT hardcoded in code +- [ ] Check `requirements.txt` includes all dependencies +- [ ] Verify `Dockerfile` uses gunicorn command +- [ ] Decide on model file storage strategy: + - [ ] Option 1: Include in repository (if < 100MB) + - [ ] Option 2: Use Git LFS for large files + - [ ] Option 3: Upload to Google Cloud Storage + - [ ] Option 4: Upload to AWS S3 + - [ ] Option 5: Use Railway volume + +### Frontend Preparation +- [ ] Create `.env.local` file (copy from `.env.local.example`) +- [ ] Choose frontend hosting platform: + - [ ] Vercel (recommended) + - [ ] Netlify + - [ ] Other + +## 🔐 Environment Variables + +### Railway Backend (Required) +- [ ] `GOOGLE_API_KEY` - Your Gemini API key from Google AI Studio + +### Railway Backend (Optional) +- [ ] `PORT` - (Auto-set by Railway, leave as default) +- [ ] `UPLOAD_FOLDER` - Default: `Uploads` +- [ ] `DATASET_PATH` - Default: `./Dataset` (only if using /random endpoint) +- [ ] `MODEL_PATH` - Default: `mobilenet_brain_tumor_classifier.h5` + +### Frontend (Required) +- [ ] `NEXT_PUBLIC_API_URL` - Your Railway backend URL + +## 🎯 Railway Deployment Steps + +1. **Create Railway Account** + - [ ] Sign up at https://railway.app + - [ ] Connect GitHub account + +2. **Create New Project** + - [ ] Click "New Project" in Railway + - [ ] Select "Deploy from GitHub repo" + - [ ] Choose your NeuroScan repository + - [ ] Railway detects Dockerfile automatically + +3. **Configure Environment Variables** + - [ ] Open project settings in Railway + - [ ] Add `GOOGLE_API_KEY` with your API key + - [ ] Add any optional variables if needed + +4. **Wait for Deployment** + - [ ] Monitor build logs for errors + - [ ] Wait for deployment to complete + - [ ] Check for green "Active" status + +5. 
**Get Backend URL** + - [ ] Copy Railway-provided URL (e.g., `https://neuroscan.railway.app`) + - [ ] Save this URL for frontend configuration + +6. **Test Backend** + - [ ] Visit backend URL in browser (should show homepage or API response) + - [ ] Test predict endpoint with curl or Postman: + ```bash + curl -X POST https://your-backend.railway.app/predict \ + -F "file=@test-image.jpg" + ``` + +## 🎨 Frontend Deployment Steps + +### Option 1: Vercel (Recommended) +1. **Setup** + - [ ] Go to https://vercel.com + - [ ] Import your GitHub repository + - [ ] Vercel auto-detects Next.js + +2. **Configure** + - [ ] Add environment variable: `NEXT_PUBLIC_API_URL` = Railway backend URL + - [ ] Set framework preset to "Next.js" + +3. **Deploy** + - [ ] Click "Deploy" + - [ ] Wait for deployment to complete + - [ ] Get Vercel URL for your frontend + +### Option 2: Netlify +1. **Setup** + - [ ] Go to https://netlify.com + - [ ] Import from GitHub + +2. **Build Settings** + - [ ] Build command: `npm run build` + - [ ] Publish directory: `out` + +3. **Environment Variables** + - [ ] Add `NEXT_PUBLIC_API_URL` in site settings + - [ ] Set to your Railway backend URL + +4. 
**Deploy** + - [ ] Trigger deployment + - [ ] Wait for completion + +## ✅ Post-Deployment Testing + +### Backend Tests +- [ ] Homepage loads (`/`) +- [ ] Upload endpoint works (`/predict`) +- [ ] Random image endpoint works (`/random`) - if dataset included +- [ ] Heatmap endpoint works (`/heatmap`) +- [ ] No CORS errors in browser console +- [ ] Gemini API integration works (if key is set) + +### Frontend Tests +- [ ] Website loads correctly +- [ ] Upload interface works +- [ ] Image upload to backend succeeds +- [ ] Predictions display correctly +- [ ] Heatmap visualization works +- [ ] No console errors +- [ ] Mobile responsive design works + +### Integration Tests +- [ ] Frontend can communicate with backend +- [ ] CORS allows cross-origin requests +- [ ] API responses are correctly formatted +- [ ] Error handling works properly + +## 🐛 Troubleshooting + +If something doesn't work, check: + +### Build Failures +- [ ] Check Railway build logs +- [ ] Verify all files are committed to Git +- [ ] Confirm Dockerfile syntax is correct +- [ ] Check requirements.txt has all dependencies + +### Runtime Errors +- [ ] Check Railway runtime logs +- [ ] Verify environment variables are set +- [ ] Confirm model file is accessible +- [ ] Check Gemini API key is valid + +### CORS Errors +- [ ] Verify `from flask_cors import CORS` in server1.py +- [ ] Confirm `CORS(app)` is called +- [ ] Check Flask-Cors is in requirements.txt + +### Frontend Connection Issues +- [ ] Verify `NEXT_PUBLIC_API_URL` is set correctly +- [ ] Check Railway backend URL is accessible +- [ ] Confirm backend is running (not sleeping) + +## 📊 Monitoring + +After deployment, monitor: +- [ ] Railway logs for errors +- [ ] API response times +- [ ] Error rates +- [ ] Resource usage (CPU/memory) +- [ ] Gemini API quota usage + +## 💡 Optimization Tips + +- [ ] Enable Railway metrics +- [ ] Set up error tracking (Sentry, etc.) 
+- [ ] Monitor API performance +- [ ] Consider upgrading Railway plan if needed +- [ ] Optimize TensorFlow model if predictions are slow +- [ ] Implement caching for static predictions + +## 📚 Documentation Reference + +- [ ] Read `DEPLOYMENT.md` for detailed instructions +- [ ] Check `CHANGES_SUMMARY.md` for what was changed +- [ ] Review `README.md` for general information +- [ ] Reference `.env.example` for all environment variables + +## 🎉 Success Criteria + +Your deployment is successful when: +- ✅ Backend is accessible via Railway URL +- ✅ Frontend is accessible via Vercel/Netlify URL +- ✅ Users can upload images and get predictions +- ✅ No CORS errors in browser console +- ✅ Heatmaps generate correctly +- ✅ All tests pass + +## 📞 Need Help? + +If you're stuck: +1. Review error logs in Railway dashboard +2. Check troubleshooting section in `DEPLOYMENT.md` +3. Verify all checklist items above +4. Open an issue on GitHub with error details +5. Consult Railway documentation: https://docs.railway.app + +--- + +**Ready to deploy?** Start from the top and check off each item! 🚀
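The post-deployment backend tests above can also be scripted. A hedged, standard-library-only sketch — `BASE_URL` is a placeholder for your Railway URL, and `/random` will only succeed if the dataset is deployed:

```python
import urllib.request

# Hypothetical smoke test; replace BASE_URL with your actual Railway backend URL.
BASE_URL = "https://your-app.railway.app"
ENDPOINTS = ["/", "/random"]

def build_url(base, path):
    """Join the backend base URL and an endpoint path without doubled slashes."""
    return base.rstrip("/") + path

def smoke_test(base=BASE_URL, paths=ENDPOINTS, timeout=30):
    """Return each endpoint's HTTP status, or the error message on failure."""
    results = {}
    for path in paths:
        try:
            with urllib.request.urlopen(build_url(base, path), timeout=timeout) as resp:
                results[path] = resp.status
        except Exception as exc:  # DNS failures, timeouts, non-2xx responses
            results[path] = str(exc)
    return results

# Example: print(smoke_test()) after deployment completes.
```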