From fundamentals to deployment.
This repository documents my projects and progress as I follow a strategic roadmap to land a top-tier tech role.
Completed the Introduction to IAM lab. Learned about IAM roles, policies, and service accounts.
First Production Deployment on Cloud Run
Live Application URL: https://my-first-cloud-app-640781293504.us-central1.run.app
- Set up secure GCP environment with budget controls
- Created Docker container from scratch
- Debugged real deployment issue (port configuration)
- Successfully deployed to Google Cloud Run
- Application is now live and publicly accessible
- HTML/CSS/JavaScript
- Docker Containerization
- Google Cloud Run
- Google Cloud Build
- Nginx Web Server
- Initial Deployment Failed: container port misconfiguration
- Root Cause: Nginx was listening on port 80, while Cloud Run sends traffic to port 8080
- Solution: Modified the nginx configuration to listen on port 8080 (see the port-binding sketch after the takeaways below)
- Cloud Run routes traffic to the container on port 8080 by default (configurable via the PORT environment variable)
- Dockerfile configuration is critical for deployment success
- Debugging deployment issues is a valuable skill
- Professional deployment workflow: local → container → cloud
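Although this first project served static files through nginx, the same port rule applies to the Python services in the later projects below. A minimal sketch, assuming a FastAPI app started with uvicorn, of binding to the port Cloud Run expects:

```python
import os

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def index() -> dict:
    return {"message": "hello from Cloud Run"}

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT environment variable
    # (8080 by default); binding to a hard-coded port 80 is exactly what broke the first deploy.
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))
```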
Live AI Sentiment Analyzer with Database Integration
URL: https://ai-sentiment-analyzer-640781293504.us-central1.run.app
- Gemini AI Integration - Real sentiment analysis via models/gemini-2.0-flash (sketched below)
- Database Persistence - SQLite with automatic saving to /tmp
- Full REST API - 4 endpoints with proper JSON responses
- Production Deployment - Google Cloud Run with auto-scaling
- Frontend Interface - Browser-accessible UI
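A minimal sketch of the Gemini sentiment call, assuming the google-generativeai client and an API key in a GEMINI_API_KEY environment variable; the prompt wording and JSON parsing are illustrative assumptions, not the project's exact code:

```python
import json
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("models/gemini-2.0-flash")

def analyze_sentiment(text: str) -> dict:
    # Ask the model for a structured verdict; production code should validate
    # the response and handle non-JSON output from the model.
    prompt = (
        "Classify the sentiment of the text below as positive, negative, or neutral. "
        "Respond with JSON containing the keys sentiment, confidence, and key_phrases.\n\n"
        + text
    )
    response = model.generate_content(prompt)
    return json.loads(response.text)
```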
- Backend: FastAPI 0.104.1
- AI Model: Gemini 2.0 Flash
- Database: SQLite (saved to /tmp on the Cloud Run instance; see the sketch below)
- Deployment: Google Cloud Run (serverless)
- Memory: 512Mi
- Region: us-central1
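A sketch of the persistence layer, assuming Python's built-in sqlite3 module; the file name and table schema are assumptions based on the fields shown in the example output:

```python
import sqlite3

DB_PATH = "/tmp/analyses.db"  # hypothetical file name; the project only specifies the /tmp location

def init_db() -> None:
    # Create the results table on startup if it does not exist yet.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS analyses (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   text TEXT NOT NULL,
                   sentiment TEXT NOT NULL,
                   confidence REAL,
                   created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
               )"""
        )

def save_analysis(text: str, sentiment: str, confidence: float) -> None:
    # Append each successful analysis so /history and /stats can query it later.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "INSERT INTO analyses (text, sentiment, confidence) VALUES (?, ?, ?)",
            (text, sentiment, confidence),
        )
```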
Input: "Gemini model fixed! Database integration complete!" Output:
- Sentiment: positive (95% confidence)
- Key Phrases: ["Gemini model fixed", "Database integration complete"]
- Database Save: Successful
- Response Time: < 3 seconds
- POST /analyze - AI sentiment analysis + database save
- GET /history - Retrieve chronological analysis history
- GET /stats - Analytics (sentiment distribution, averages)
- GET /health - Service health + database connection check
- GET / - Frontend interface with sentiment input
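A minimal FastAPI sketch of these endpoint shapes; the request field, canned result, and in-memory history stand-in are illustrative assumptions that keep the example self-contained (the deployed service calls Gemini and SQLite as sketched above):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
_history: list[dict] = []  # stand-in for the SQLite table sketched above

class AnalyzeRequest(BaseModel):
    text: str

@app.post("/analyze")
def analyze(req: AnalyzeRequest) -> dict:
    # The live service calls Gemini here and persists the result to SQLite;
    # a canned response keeps this sketch runnable on its own.
    result = {"sentiment": "positive", "confidence": 0.95, "key_phrases": [req.text]}
    _history.append({"text": req.text, **result})
    return result

@app.get("/history")
def history() -> list[dict]:
    return _history

@app.get("/stats")
def stats() -> dict:
    counts: dict[str, int] = {}
    for item in _history:
        counts[item["sentiment"]] = counts.get(item["sentiment"], 0) + 1
    return {"total": len(_history), "by_sentiment": counts}

@app.get("/health")
def health() -> dict:
    return {"status": "ok", "database": "connected"}
```

Running this locally with uvicorn (for example `uvicorn main:app --port 8080`, assuming the file is named main.py) exposes the same routes the live URL serves.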
This project demonstrates:
- Full-stack development from idea to production
- Cloud deployment expertise (Google Cloud Run)
- AI integration skills (Gemini API with proper model handling)
- Database design (SQLite with schema and queries)
- Production debugging (solving real deployment issues)
- API design (RESTful endpoints with proper responses)
Live Production Application
URL: https://ml-model-monitor-640781293504.us-central1.run.app
Features: Production ML monitoring, drift detection, alerting system, Streamlit dashboard
- FastAPI for production backend APIs
- MLOps with model monitoring and drift detection
- Cloud Deployment on Google Cloud Run
- Database Design with SQLAlchemy
- Authentication & Security with API keys
- Data Visualization with Streamlit and Plotly
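For the API-key security item above, a minimal sketch using FastAPI's APIKeyHeader dependency; the header name, environment variable, and endpoint are assumptions for illustration:

```python
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")  # assumed header name

def require_api_key(key: str = Depends(api_key_header)) -> str:
    # Compare the caller's key against the one configured for the service.
    if key != os.environ.get("MONITOR_API_KEY"):
        raise HTTPException(status_code=401, detail="Invalid API key")
    return key

@app.get("/metrics", dependencies=[Depends(require_api_key)])
def metrics() -> dict:
    # Placeholder payload; the deployed service reports model metrics from its database.
    return {"requests_logged": 0, "drift_alerts": 0}
```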
- Backend: FastAPI, SQLAlchemy, Pydantic
- ML/AI: Gemini API, scikit-learn, statistical testing
- Database: SQLite (production-ready patterns)
- Cloud: Google Cloud Run, Docker, gcloud CLI
- Monitoring: Custom metrics, health checks, alerting
- Visualization: Streamlit, Plotly
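The stack lists statistical testing for drift detection without naming a specific test; as one common choice, here is a sketch using the two-sample Kolmogorov-Smirnov test from scipy (an assumption, since only scikit-learn and "statistical testing" are stated):

```python
import numpy as np
from scipy import stats

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> dict:
    """Compare a live feature sample against the training-time reference sample."""
    result = stats.ks_2samp(reference, live)
    return {
        "ks_statistic": float(result.statistic),
        "p_value": float(result.pvalue),
        # A small p-value rejects "same distribution", i.e. likely drift.
        "drift_detected": bool(result.pvalue < alpha),
    }

# Example: a shifted live distribution should raise the alert.
rng = np.random.default_rng(0)
print(detect_drift(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))
```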