- System Requirements
- Prerequisites
- Project Download
- Environment Setup
- Project Startup
- Access Guide
- Troubleshooting
- Development Guide
## System Requirements
- Operating System: macOS 10.15+, Windows 10+, or Ubuntu 18.04+
- RAM: 8GB minimum, 16GB recommended
- Storage: 10GB available disk space
- CPU: 2 cores minimum, 4 cores recommended
## Prerequisites
- Docker: Version 20.10+ with Docker Compose
- Git: Version 2.30+
- Internet Connection: For downloading dependencies and AI services
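The version requirements above can be checked programmatically. Below is a minimal sketch that parses a tool's `--version` output and compares it against a minimum version; the helper names (`parse_version`, `meets_minimum`, `MIN_DOCKER`) and the sample output string are illustrative, not part of the project:

```python
import re

# Minimum versions from the prerequisites list above
MIN_DOCKER = (20, 10)
MIN_GIT = (2, 30)

def parse_version(output: str) -> tuple:
    """Extract the first dotted version number from a CLI version string."""
    match = re.search(r"(\d+)\.(\d+)", output)
    if not match:
        raise ValueError(f"no version found in: {output!r}")
    return (int(match.group(1)), int(match.group(2)))

def meets_minimum(output: str, minimum: tuple) -> bool:
    """Compare (major, minor) tuples lexicographically."""
    return parse_version(output) >= minimum

# Example with an illustrative output string (yours will differ):
print(meets_minimum("Docker version 24.0.7, build afdd53b", MIN_DOCKER))
```

In practice you would feed it the real output of `docker --version` or `git --version` (e.g. via `subprocess.run`).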
### Install Docker

**macOS:**

```bash
# Install Docker Desktop (includes Docker Compose)
# Download from: https://www.docker.com/products/docker-desktop
# Or use Homebrew:
brew install --cask docker
```

**Windows:**

```bash
# Download Docker Desktop from:
# https://www.docker.com/products/docker-desktop
# Enable WSL2 backend for better performance
```

**Ubuntu:**

```bash
# Install Docker
sudo apt update
sudo apt install docker.io docker-compose
# Add user to docker group
sudo usermod -aG docker $USER
# Logout and login again
```

**Verify the installation:**

```bash
# Check Docker version
docker --version
# Check Docker Compose version
docker compose version
# Test Docker installation
docker run hello-world
```

## Project Download

```bash
# Clone the project
git clone https://github.com/your-username/capstone-project-25t2-9900-t11a-donut.git
# Navigate to project directory
cd capstone-project-25t2-9900-t11a-donut
# Check project structure
ls -la
# Expected structure:
# ├── backend/            # Flask backend service
# ├── frontend/           # React frontend application
# ├── ai_service/         # AI processing service
# ├── docker-compose.yml  # Docker orchestration
# ├── README.md           # Project documentation
# └── requirements.txt    # Python dependencies
```

## Environment Setup

Create a .env file in the project root directory:

```bash
# Create .env file
touch .env
```

Or copy from the example template (recommended):

```bash
cp .env.example .env
```

Add the following configuration to your .env file:
```env
VITE_USE_MOCK_API=false
VITE_BACKEND_PORT=5001
# Database Configuration
DATABASE_URL=postgresql://postgres:postgres@db:5432/postgres
DB_HOST=db
DB_PORT=5432
DB_NAME=postgres
DB_USER=postgres
DB_PASSWORD=postgres
# Flask Configuration
FLASK_APP=app.py
FLASK_RUN_HOST=0.0.0.0
FLASK_ENV=development
# JWT Configuration
JWT_SECRET_KEY=T11A_DONUT_SECRET_KEY
# Google AI Configuration (Optional - for enhanced AI features)
GOOGLE_API_KEY=your_gemini_api_key
```

If you plan to use Gemini features, obtain an API key from Google AI Studio and set it in your .env file:

- Go to Google AI Studio and create a new API key.
- Copy the key and update `GOOGLE_API_KEY` in `.env` (or `.env.example` if using the template):

```env
GOOGLE_API_KEY=your_gemini_api_key
```

| Variable | Description | Required | Default |
|---|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Yes | - |
| `JWT_SECRET_KEY` | Secret key for JWT token generation | Yes | `T11A_DONUT_SECRET_KEY` |
| `OPENAI_API_KEY` | OpenAI API key for AI features | No | - |
| `GOOGLE_API_KEY` | Google AI API key for AI features | No | - |
| `FLASK_ENV` | Flask environment mode | Yes | `development` |
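As a rough sketch of how a service might consume these variables, the snippet below validates the required ones and treats the AI keys as optional. The `load_config` function and its error behavior are assumptions for illustration, not the project's actual code:

```python
import os

def load_config() -> dict:
    """Read the variables from the table above out of the environment."""
    required = ["DATABASE_URL", "JWT_SECRET_KEY", "FLASK_ENV"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"missing required variables: {missing}")
    return {
        "database_url": os.environ["DATABASE_URL"],
        "jwt_secret_key": os.environ["JWT_SECRET_KEY"],
        "flask_env": os.environ["FLASK_ENV"],
        # Optional AI keys: absent simply means the feature is disabled
        "openai_api_key": os.environ.get("OPENAI_API_KEY"),
        "google_api_key": os.environ.get("GOOGLE_API_KEY"),
    }
```

Failing fast on missing required variables makes misconfigured `.env` files obvious at startup rather than at first request.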
## Project Startup

```bash
# Build and start all services in detached mode
docker compose up --build -d
# Monitor startup process
docker compose logs -f
```

```bash
# Check all container status
docker compose ps
# Expected output:
# NAME                                                 STATUS         PORTS
# capstone-project-25t2-9900-t11a-donut-db-1           Up 2 minutes   0.0.0.0:5432->5432/tcp
# capstone-project-25t2-9900-t11a-donut-frontend-1     Up 2 minutes   0.0.0.0:5173->5173/tcp
# capstone-project-25t2-9900-t11a-donut-pgadmin-1      Up 2 minutes   0.0.0.0:5678->80/tcp
# capstone-project-25t2-9900-t11a-donut-swagger-ui-1   Up 2 minutes   0.0.0.0:8080->8080/tcp
# demo-backend                                         Up 2 minutes   0.0.0.0:5001->5001/tcp
```

The system requires 2-3 minutes for complete initialization:

```bash
# Monitor backend initialization
docker logs demo-backend -f
# Look for these success messages:
# ✅ answer_bp registered successfully
# ✅ user_bp registered successfully
# ✅ test blueprint registered successfully
# ✅ practice_bp registered successfully
# ✅ ai_data_bp registered successfully
# 🚀 Flask application started...
```

```bash
# Test backend health endpoint
curl http://localhost:5001/ping
# Expected response:
# {
#   "msg": "pong",
#   "status": "Backend is running",
#   "port": 5001,
#   "container": "container-id",
#   "python_version": "3.11.13"
# }
```

## Access Guide

| Service | URL | Port | Description |
|---|---|---|---|
| Frontend Application | http://localhost:5173 | 5173 | Main user interface |
| Backend API | http://localhost:5001 | 5001 | REST API service |
| API Documentation | http://localhost:8080 | 8080 | Swagger UI documentation |
| Database Management | http://localhost:5678 | 5678 | PgAdmin interface |
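Waiting for the services to come up can be scripted. Below is a hedged sketch that polls a URL until it returns HTTP 200; the `wait_for_service` helper, its timeout defaults, and the injectable `fetch` hook are illustrative, not part of the project:

```python
import time
import urllib.error
import urllib.request

def wait_for_service(url: str, timeout: float = 120.0, interval: float = 2.0,
                     fetch=None) -> bool:
    """Poll `url` until it answers HTTP 200 or `timeout` seconds elapse.

    `fetch` is injectable for testing; by default it does a real HTTP GET.
    """
    if fetch is None:
        def fetch(u):
            with urllib.request.urlopen(u, timeout=5) as resp:
                return resp.status
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch(url) == 200:
                return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after the interval
        time.sleep(interval)
    return False

# Example (assumes the stack from this guide is running locally):
# wait_for_service("http://localhost:5001/ping")
```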
**PgAdmin login:**

- URL: http://localhost:5678
- Email: admin@admin.com
- Password: admin

**Database connection (inside PgAdmin):**

- Host: db
- Port: 5432
- Database: postgres
- Username: postgres
- Password: postgres
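The same connection details can be derived from the `DATABASE_URL` in your .env file. The small helper below (illustrative, not part of the project) splits it into the fields PgAdmin asks for; note that inside Docker the host is `db`, while from your own machine you would use `localhost` with the published port 5432:

```python
from urllib.parse import urlparse

def split_database_url(url: str) -> dict:
    """Break a postgresql:// URL into its connection fields."""
    parsed = urlparse(url)
    return {
        "host": parsed.hostname,
        "port": parsed.port,
        "database": parsed.path.lstrip("/"),
        "username": parsed.username,
        "password": parsed.password,
    }

print(split_database_url("postgresql://postgres:postgres@db:5432/postgres"))
```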
Key API endpoints:

```
GET  http://localhost:5001/ping
POST http://localhost:5001/api/auth/register
POST http://localhost:5001/api/auth/login
GET  http://localhost:5001/api/test/questions
POST http://localhost:5001/api/test/ai-question
```

## Troubleshooting

**Problem: Port already in use**

```bash
# Check port usage
lsof -i :5001
lsof -i :5173
lsof -i :5432
lsof -i :5678
lsof -i :8080
# Stop conflicting processes or modify ports in docker-compose.yml
```

**Solution:** Modify port mappings in docker-compose.yml:
```yaml
ports:
  - "5002:5001"  # Change host port from 5001 to 5002
```

**Problem: Containers fail to start**

```bash
# Check container logs
docker compose logs
# Restart with fresh build
docker compose down --volumes --remove-orphans
docker compose up --build -d
```

**Problem: Backend cannot connect to database**
```bash
# Check database container
docker compose ps db
# View database logs
docker compose logs db
# Test database connection
docker exec -it demo-backend python3 -c "
import psycopg2
conn = psycopg2.connect(host='db', database='postgres', user='postgres', password='postgres')
print('Database connection successful')
"
```

**Problem: Frontend shows "Backend health check failed"**
```bash
# Check backend health
curl http://localhost:5001/ping
# Verify CORS configuration
# Check browser console for CORS errors
# Restart backend service
docker compose restart backend
```

**Problem: Services cannot read environment variables**
```bash
# Verify .env file exists
ls -la .env
# Check environment variables in container
docker exec -it demo-backend env | grep -E "(DATABASE|JWT|FLASK)"
# Restart services after .env changes
docker compose down
docker compose up -d
```

**Performance:**

```bash
# Monitor resource usage
docker stats
# Clean up unused resources
docker system prune -a
# Use build cache
docker compose build --parallel
# Check disk space
df -h
```

## Development Guide

**Common commands:**

```bash
# Start all services
docker compose up -d
# Stop all services
docker compose down
# Restart specific service
docker compose restart backend
# View service logs
docker compose logs -f backend
# Rebuild specific service
docker compose build backend
docker compose up -d backend
```

**Development notes:**

- Backend code changes are automatically reloaded
- Frontend changes require container restart for production builds
- Database changes require migration scripts
**Backend (Python):**

```bash
# Add to requirements.txt
echo "new-package==1.0.0" >> backend/requirements.txt
# Rebuild backend
docker compose build backend
docker compose up -d backend
```

**Frontend (Node.js):**

```bash
# Add the dependency to package.json
# Rebuild frontend
docker compose build frontend
docker compose up -d frontend
```

**Database operations:**

```bash
# Access database
docker compose exec db psql -U postgres -d postgres
# Backup database
docker compose exec db pg_dump -U postgres postgres > backup.sql
# Restore database
docker compose exec -T db psql -U postgres -d postgres < backup.sql
```

The test/ directory provides a full testing toolkit, including backend unit/integration tests, end-to-end (Cypress) tests, and frontend unit tests. It is orchestrated via test/docker-compose.yml profiles.
Structure:

```
test/
  backend/            # Backend pytest image and tests
  EndtoEnd/           # Cypress config and e2e specs
  frontend/           # Frontend unit test runner image
  docker-compose.yml
  quick-test.sh       # One-click test runner
```
Quick one-click run (recommended):

```bash
chmod +x test/quick-test.sh
./test/quick-test.sh
```

This will run, in order:

- Initialize the test DB and run backend tests
- Run E2E tests (Cypress) against `backend-e2e` + `frontend-e2e`
- Run frontend unit tests (the script waits for backend health automatically)
Run backend tests only:

```bash
docker compose -f test/docker-compose.yml --profile backend up -d db-backend-tests
# (optional) load sample data if needed, then:
docker compose -f test/docker-compose.yml --profile backend run --rm backend-tests
docker compose -f test/docker-compose.yml --profile backend down
```

Run all E2E tests (Cypress):

```bash
docker compose -f test/docker-compose.yml --profile e2e up --build --abort-on-container-exit --exit-code-from e2e-tests
# Clean up
docker compose -f test/docker-compose.yml --profile e2e down
```

Run a single E2E spec:

```bash
docker compose -f test/docker-compose.yml --profile e2e run --rm e2e-tests cypress run --spec 'cypress/e2e/test.cy.js'
```

Run frontend unit tests only:

```bash
# Start backend for tests
docker compose -f test/docker-compose.yml --profile frontend up -d backend-e2e
# Run frontend tests
docker compose -f test/docker-compose.yml --profile frontend run --rm frontend-tests
docker compose -f test/docker-compose.yml --profile frontend down
```

Notes:

- Cypress uses environment variables inside the e2e runner: `BASE_URL` (default: `http://frontend-e2e`) and `API_URL` (default: `http://backend-e2e:5001`)
- The first E2E run will take longer because the backend test database and initial question data are loaded on startup.
- If you see "connection refused" in early E2E tests, re-run once services are healthy, or use the one-click `quick-test.sh`, which waits for health automatically.
- **Change Default Passwords**
  - Update database passwords
  - Change JWT secret key
  - Update admin credentials
- **Network Security**
  - Use reverse proxy (nginx)
  - Enable HTTPS
  - Configure firewall rules
- **API Security**
  - Implement rate limiting
  - Add request validation
  - Use secure headers
- Never commit `.env` files to version control
- Use secrets management in production
- Rotate API keys regularly
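To replace the default `JWT_SECRET_KEY`, you can generate a random value with Python's standard library; the 32-byte length below is a reasonable choice for illustration, not a project requirement:

```python
import secrets

# 32 bytes of entropy -> 64 hex characters, suitable as a JWT signing key
new_secret = secrets.token_hex(32)
print(f"JWT_SECRET_KEY={new_secret}")
```

Paste the printed line into your `.env` file in place of the default, then restart the services so the backend picks it up.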
**Regular maintenance:**

```bash
# Update Docker images
docker compose pull
docker compose up -d
# Clean up unused resources
docker system prune -a
# Monitor disk usage
docker system df
```

**Backups:**

```bash
# Backup database
docker compose exec db pg_dump -U postgres postgres > backup_$(date +%Y%m%d).sql
# Backup volumes
docker run --rm -v capstone-project-25t2-9900-t11a-donut_db_data:/data \
  -v $(pwd):/backup alpine tar czf /backup/db_backup_$(date +%Y%m%d).tar.gz -C /data .
```

**Getting help:**

- Check this installation manual
- Review service logs: `docker compose logs`
- Check project documentation in README.md
- Submit issues to project repository
- Docker and Docker Compose installed
- Project cloned from repository
- `.env` file created with proper configuration
- All services started: `docker compose up --build -d`
- Backend health check passed: `curl http://localhost:5001/ping`
- Frontend accessible: http://localhost:5173
- Database management accessible: http://localhost:5678
- API documentation accessible: http://localhost:8080
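The URL items in the checklist above can be verified in one go. This is an illustrative sketch, not project code: the service names, the `check_services` helper, and the injectable `fetch` hook are all assumptions:

```python
import urllib.request

# URLs from the checklist above
SERVICES = {
    "Backend API": "http://localhost:5001/ping",
    "Frontend": "http://localhost:5173",
    "PgAdmin": "http://localhost:5678",
    "Swagger UI": "http://localhost:8080",
}

def check_services(services: dict, fetch=None) -> dict:
    """Return {name: True/False} per service; `fetch` is injectable for tests."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status
    results = {}
    for name, url in services.items():
        try:
            results[name] = fetch(url) == 200
        except OSError:  # urllib.error.URLError is a subclass of OSError
            results[name] = False
    return results

# Example (with the stack running): print(check_services(SERVICES))
```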
Note: This installation manual is designed for development and testing environments. For production deployment, additional security configurations and optimizations are required.