A full-stack web application that analyzes cloud infrastructure resources and provides cost optimization recommendations. Built with React, FastAPI, and PostgreSQL.
PostgreSQL → FastAPI → React Dashboard
- Frontend: React with TypeScript, Tailwind CSS
- Backend: FastAPI with SQLAlchemy ORM
- Database: PostgreSQL
- API: RESTful endpoints with JSON responses
- Node.js 16+ and npm
- Python 3.8+
- PostgreSQL 12+
git clone <repository-url>
cd cloud-optimization-dashboard
# Create PostgreSQL database
createdb cloud_optimizer
# Or using SQL
psql -U postgres
CREATE DATABASE cloud_optimizer;
\q
# Navigate to backend directory
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install fastapi uvicorn sqlalchemy psycopg2-binary python-multipart pydantic
# Or install from requirements.txt
pip install -r requirements.txt
# Update database connection in app/database.py
# DATABASE_URL = "postgresql://username:password@localhost/cloud_optimizer"
# Initialize database with sample data
python init_db.py
# Start backend server
uvicorn main:app --reload --port 8000
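For reference, a minimal `main.py` consistent with this setup might look like the sketch below; it assumes `app/database.py` exposes `Base` and `engine` and that `app/routes/resources.py` exposes a `router` (both assumptions, not verified against the actual code):

```python
# main.py — minimal sketch, not necessarily the exact application code
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.database import Base, engine
from app.routes import resources

# Create tables on startup if they do not exist yet
Base.metadata.create_all(bind=engine)

app = FastAPI(title="Cloud Optimization Dashboard API")

# Allow the React dev server (http://localhost:5173) to call the API
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

app.include_router(resources.router)


@app.get("/")
def status():
    """API status check."""
    return {"status": "ok"}
```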
# Navigate to frontend directory
cd frontend
# Install dependencies
npm install
# Install Tailwind CSS
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
# Start development server
npm run dev
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Frontend Dashboard: http://localhost:5173
- Access Dashboard: Open http://localhost:5173
- View Summary: Check total resources, costs, and savings at the top
- Monitor Resources: Review all infrastructure in the resources table
- Check Recommendations: View optimization opportunities
- Implement Changes: Mark recommendations as completed
- Track Progress: Monitor total potential savings
- Resources: Total count of monitored infrastructure
- Monthly Cost: Current spending across all resources
- Potential Savings: Estimated cost reductions available
- Opportunities: Number of actionable recommendations
- CPU/Storage Column: Shows CPU utilization for compute resources, storage size for storage resources
- Memory Column: Shows memory utilization (compute only)
- Color Coding:
  - 🟡 Yellow: Under-utilized (CPU < 30%, Memory < 50%)
  - 🟢 Green: Well-utilized
  - 🔵 Blue: Storage information
- Reason: Explanation of why optimization is recommended
- Current Cost: Monthly cost of the resource
- Estimated Savings: Potential monthly savings
- Confidence: Algorithm confidence level (High/Medium/Low)
- Implementation: Button to mark as completed
The application comes pre-loaded with realistic sample resources:
web-server-1     | t3.xlarge       | 15% CPU, 25% Memory | $150/month
api-server-2     | m5.large        | 12% CPU, 30% Memory | $90/month
worker-3         | Standard_D2s_v3 | 8% CPU, 20% Memory  | $70/month
database-1       | m5.xlarge       | 75% CPU, 85% Memory | $180/month
cache-server     | n1-standard-2   | 65% CPU, 70% Memory | $50/month
backup-storage   | 1000GB | AWS    | $100/month (optimization candidate)
log-storage      | 500GB  | Azure  | $75/month (optimization candidate)
database-storage | 200GB  | GCP    | $25/month (well-utilized)
- Criteria: CPU < 30% AND Memory < 50%
- Action: Recommend smaller instance type
- Savings: 40-60% cost reduction
- Confidence: High
- Example: t3.xlarge → t3.large
- Criteria: Storage volumes > 500GB
- Action: Suggest reducing storage size
- Savings: 20-40% cost reduction
- Confidence: Medium
- Example: 1000GB → 500GB
- High: Clear optimization opportunity with significant savings
- Medium: Moderate optimization potential
- Low: Minor optimization with small savings
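Expressed in code, the rules above reduce to a couple of threshold checks per resource. A rough Python sketch (field names follow the API responses below; `storage_gb` and the function itself are illustrative, not the actual backend implementation):

```python
from typing import Optional


def suggest_recommendation(resource: dict) -> Optional[dict]:
    """Sketch of the optimization rules described above (not the real backend code)."""
    if resource["resource_type"] == "instance":
        # Under-utilized compute: CPU < 30% AND memory < 50%
        if resource["cpu_utilization"] < 30 and resource["memory_utilization"] < 50:
            return {
                "reason": "Over-provisioned instance. Recommend downsizing.",
                "estimated_savings": resource["monthly_cost"] * 0.5,  # 40-60% band
                "confidence": "high",
            }
    elif resource["resource_type"] == "storage":
        # Large storage volumes: > 500GB
        if resource["storage_gb"] > 500:  # storage_gb is an assumed field name
            return {
                "reason": "Large storage volume. Consider reducing allocated size.",
                "estimated_savings": resource["monthly_cost"] * 0.3,  # 20-40% band
                "confidence": "medium",
            }
    return None  # well-utilized: no recommendation
```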
GET /resources
Returns all cloud resources with utilization and cost data.
Response:
[
{
"id": 1,
"name": "web-server-1",
"resource_type": "instance",
"provider": "AWS",
"instance_type": "t3.xlarge",
"cpu_utilization": 15.0,
"memory_utilization": 25.0,
"monthly_cost": 150.0,
"recommendations": [...]
}
]
POST /resources
Create a new cloud resource.
GET /recommendations
Returns all optimization recommendations.
Response:
[
{
"id": 1,
"resource_id": 1,
"name": "web-server-1",
"reason": "Over-provisioned instance. Recommend downsizing.",
"current_cost": 150.0,
"estimated_savings": 75.0,
"confidence": "high",
"implemented": false
}
]
PATCH /recommendations/{id}/implement
Mark a recommendation as implemented.
GET /
API status check.
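Besides the interactive docs, the endpoints can be exercised from a short script. A sketch using the third-party `requests` package (install it separately; adjust the base URL if you changed ports):

```python
import requests

BASE_URL = "http://localhost:8000"  # adjust if you changed the backend port

# List all resources with utilization and cost data
resources = requests.get(f"{BASE_URL}/resources").json()
print(f"{len(resources)} resources tracked")

# Total the potential savings from recommendations that are still open
recommendations = requests.get(f"{BASE_URL}/recommendations").json()
open_recs = [r for r in recommendations if not r["implemented"]]
total = sum(r["estimated_savings"] for r in open_recs)
print(f"Potential savings: ${total:.2f}/month")

# Mark the first open recommendation as implemented
if open_recs:
    requests.patch(f"{BASE_URL}/recommendations/{open_recs[0]['id']}/implement")
```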
cloud-optimization-dashboard/
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── database.py          # Database connection
│   │   ├── models.py            # SQLAlchemy models
│   │   ├── schemas.py           # Pydantic schemas
│   │   └── routes/
│   │       ├── __init__.py
│   │       └── resources.py     # API endpoints
│   ├── main.py                  # FastAPI application
│   ├── init_db.py               # Database initialization
│   └── requirements.txt         # Python dependencies
├── frontend/
│   ├── node_modules/
│   ├── public/
│   │   └── index.html
│   ├── src/
│   │   ├── api/
│   │   │   └── api.ts
│   │   ├── assets/
│   │   │   └── react.svg
│   │   ├── components/
│   │   │   ├── Recommendations.tsx
│   │   │   ├── ResourceTable.tsx
│   │   │   ├── Summary.tsx
│   │   │   └── ui/
│   │   │       ├── card.tsx     # Card component
│   │   │       └── button.tsx   # Button component
│   │   ├── pages/
│   │   │   └── Dashboard.tsx
│   │   ├── types/
│   │   │   └── index.ts
│   │   ├── App.css
│   │   ├── App.tsx
│   │   ├── index.css
│   │   ├── main.tsx
│   │   └── vite-env.d.ts
│   ├── .gitignore
│   ├── components.json
│   ├── eslint.config.js
│   ├── index.html
│   ├── package-lock.json
│   └── package.json
└── README.md
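For orientation, the SQLAlchemy models in `app/models.py` presumably mirror the API responses shown earlier; a rough sketch (column names and types are inferred from those responses, not taken from the actual file):

```python
# app/models.py — sketch inferred from the documented API responses
from sqlalchemy import Boolean, Column, Float, ForeignKey, Integer, String
from sqlalchemy.orm import relationship

from .database import Base


class Resource(Base):
    __tablename__ = "resources"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, nullable=False)
    resource_type = Column(String)            # "instance" or "storage"
    provider = Column(String)                 # AWS, Azure, GCP
    instance_type = Column(String, nullable=True)
    cpu_utilization = Column(Float, nullable=True)
    memory_utilization = Column(Float, nullable=True)
    monthly_cost = Column(Float)

    recommendations = relationship("Recommendation", back_populates="resource")


class Recommendation(Base):
    __tablename__ = "recommendations"

    id = Column(Integer, primary_key=True, index=True)
    resource_id = Column(Integer, ForeignKey("resources.id"))
    reason = Column(String)
    current_cost = Column(Float)              # may instead be derived from the resource
    estimated_savings = Column(Float)
    confidence = Column(String)               # high / medium / low
    implemented = Column(Boolean, default=False)

    resource = relationship("Resource", back_populates="recommendations")
```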
Create a .env file in the backend directory:
DATABASE_URL=postgresql://username:password@localhost/cloud_optimizer
API_PORT=8000
CORS_ORIGINS=http://localhost:5173
Update app/database.py with your PostgreSQL credentials:
DATABASE_URL = "postgresql://username:password@localhost/cloud_optimizer"
Update API URL in frontend components if needed:
const API_BASE_URL = "http://localhost:8000";
# Install production dependencies
pip install gunicorn
# Run with Gunicorn
gunicorn main:app -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 --workers 4
# Or using Docker
docker build -t cloud-dashboard-api .
docker run -p 8000:8000 cloud-dashboard-api
# Build for production
npm run build
# Serve static files
npm install -g serve
serve -s dist -l 3000
# Or deploy to Netlify/Vercel
# Production PostgreSQL setup
# Update DATABASE_URL for production database
# Initialize the database schema
python init_db.py
cd backend
pip install pytest pytest-asyncio httpx
python -m pytest tests/ -v
cd frontend
npm test
npm run test:coverage
Use the interactive API documentation at http://localhost:8000/docs to test endpoints.
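A minimal backend test could look like the sketch below; it assumes `main.py` exposes the FastAPI instance as `app` and that pytest is run from the `backend/` directory:

```python
# tests/test_api.py — minimal sketch using FastAPI's TestClient (requires httpx)
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_root_status():
    # GET / is the API status check
    assert client.get("/").status_code == 200


def test_list_resources():
    # GET /resources should return a JSON list of resources
    response = client.get("/resources")
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```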
- Total Resources: Count of monitored infrastructure
- Monthly Costs: Current spending across all resources
- Potential Savings: Estimated cost reductions from recommendations
- Optimization Opportunities: Number of actionable recommendations
- Implementation Rate: Percentage of recommendations completed
- Over-provisioned Resources: Resources with low utilization
- Cost Optimization: Potential monthly savings per resource
- Provider Distribution: Resources across AWS, Azure, GCP
- Resource Types: Breakdown of compute vs storage resources
# Check PostgreSQL is running
pg_ctl status
# Verify database exists
psql -l | grep cloud_optimizer
# Test connection
psql -d cloud_optimizer -c "SELECT 1;"
# Verify backend is running
curl http://localhost:8000/
# Check CORS configuration in main.py
# Ensure allow_origins includes frontend URL
# Backend
pip install -r requirements.txt
# Frontend
npm install
npm audit fix
# Backend (change port)
uvicorn main:app --reload --port 8001
# Frontend (change port)
npm run dev -- --port 3001
Enable debug logging in FastAPI:
import logging
logging.basicConfig(level=logging.DEBUG)