Goal:
Build a production-oriented loan risk prediction system that not only makes decisions, but learns from human corrections and monitors its own performance in real time.
Most ML projects end at `model.predict()`. Real-world systems continue long after deployment.
The Active Sentinel bridges this gap by simulating how deployed ML systems handle uncertainty, collect corrections, and evolve post-deployment.
Financial institutions don't just need accurate models; they need accountable, auditable systems that:
- Operate under regulatory scrutiny
- Handle edge cases through human oversight
- Adapt to data drift and changing risk patterns
```
┌───────────────────────────┐
│    Streamlit Frontend     │  User interaction & feedback collection
└─────────────┬─────────────┘
              │ HTTPS
              ▼
┌───────────────────────────┐
│      FastAPI Backend      │  Inference API + feedback logging
│         (Render)          │
└─────────────┬─────────────┘
              │
      ┌───────┴────────┐
      │                │
┌───────────┐    ┌───────────┐
│ RF Model  │    │ SQLite DB │
│  (.pkl)   │    │ Feedback  │
└───────────┘    └───────────┘
```
Stack:
- Backend: FastAPI (stateless inference service)
- Frontend: Streamlit (user interface)
- Model: Random Forest (scikit-learn)
- Storage: SQLite (feedback persistence)
- Deployment: Render (HTTPS endpoints)
- REST API with /predict, /feedback, and /stats endpoints
- Confidence-scored predictions with interpretability
- Stateless service design for horizontal scaling
- Real-time feedback mechanism (Correct/Incorrect labels)
- Persistent storage for post-deployment analysis
- Simulates domain expert review workflows
- Dynamic accuracy computation from user corrections
- Transparent model reliability metrics
- Feedback-driven evaluation dashboards
- Independent backend/frontend services
- Production-grade API contracts with independently scalable services
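As a hedged sketch, the /predict and /feedback endpoints listed above might be called from a client like this. The payload field names (income, loan_amount, credit_history, prediction_id) are illustrative assumptions, not the service's actual schema:

```python
import json

API_URL = "https://active-sentinel.onrender.com"

def predict_request(features: dict) -> tuple[str, dict]:
    """Return the (url, json_body) pair for a POST to /predict."""
    return f"{API_URL}/predict", features

def feedback_request(prediction_id: int, correct: bool) -> tuple[str, dict]:
    """Return the (url, json_body) pair for a POST to /feedback
    carrying a Correct/Incorrect label."""
    return f"{API_URL}/feedback", {
        "prediction_id": prediction_id,
        "label": "Correct" if correct else "Incorrect",
    }

# In a real client these would be sent with requests.post(url, json=body).
url, body = feedback_request(7, correct=True)
print(url, json.dumps(body))
```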
| Scenario | Application |
|---|---|
| Loan Origination | Risk officers review and override model decisions |
| Insurance Underwriting | Human adjustors provide ground truth labels |
| Fraud Detection | Security analysts validate alert accuracy |
Check out the live demo at: https://the-active-sentinel.streamlit.app/
```
# Backend: deployed on Render
# Exposes inference API at: https://active-sentinel.onrender.com

# Frontend: configure backend URL in app
# Run locally or deploy to Streamlit Cloud
streamlit run app.py
```

Note: Render's free tier auto-sleeps inactive services. The first request after inactivity may take 30-60 seconds; this is expected.
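One way a frontend can cope with those cold starts is a simple retry wrapper around the API call. This is a sketch, not code from the project; the attempt count and backoff values are illustrative:

```python
import time

def call_with_retry(fn, attempts=3, delay=2.0):
    """Call fn(); on exception (e.g. a timeout during a Render cold
    start), wait with a growing delay and retry up to `attempts` times."""
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            time.sleep(delay * (i + 1))
    raise last_err

# Usage with requests (not executed here) -- note the generous timeout
# to ride out a 30-60 s cold start:
# result = call_with_retry(
#     lambda: requests.post(f"{API_URL}/predict", json=payload, timeout=90).json()
# )
```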
- End-to-end ML system design
- API-based inference (production patterns vs. notebook code)
- Feedback integration (human-in-the-loop workflows)
- Live monitoring (post-deployment performance tracking)
- Deployment trade-offs (cold starts, stateless services, persistence)
| Choice | Rationale |
|---|---|
| FastAPI | Async support, auto-generated docs, type safety |
| SQLite | Zero-config persistence for MVP; easy migration to PostgreSQL |
| Random Forest | Interpretable, handles mixed data types, production-proven |
| Render Free Tier | Demonstrates real deployment constraints (cold starts) |
| Separate Backend/Frontend | Mirrors microservices architecture, independent scaling |
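The SQLite choice and the "dynamic accuracy computation from user corrections" feature can be sketched together with the standard-library `sqlite3` module. The table and column names here are assumptions; the deployed schema may differ:

```python
import sqlite3

# In-memory DB for the sketch; the real service persists to a file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE feedback (id INTEGER PRIMARY KEY, prediction TEXT, label TEXT)"
)

def log_feedback(conn, prediction: str, label: str) -> None:
    """Persist one Correct/Incorrect correction from a reviewer."""
    conn.execute(
        "INSERT INTO feedback (prediction, label) VALUES (?, ?)",
        (prediction, label),
    )
    conn.commit()

def live_accuracy(conn) -> float:
    """Share of predictions users marked Correct -- the kind of
    metric a /stats endpoint could report."""
    total, correct = conn.execute(
        "SELECT COUNT(*), SUM(label = 'Correct') FROM feedback"
    ).fetchone()
    return (correct or 0) / total if total else 0.0

log_feedback(conn, "approve", "Correct")
log_feedback(conn, "deny", "Incorrect")
log_feedback(conn, "approve", "Correct")
print(round(live_accuracy(conn), 2))  # → 0.67
```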
- PostgreSQL migration for multi-user persistence
- Scheduled retraining pipeline using collected feedback
- Drift detection alerts (PSI, KS tests, data quality monitoring)
- Role-based access control for feedback validation workflows
- A/B testing framework for model version comparison
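The PSI drift check listed above is a future improvement, not implemented in the project; as a hedged sketch, the Population Stability Index over equal-width bins looks like this (production code would reuse bin edges fixed at training time):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a
    live sample, using equal-width bins over their combined range."""
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def frac(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Floor at a tiny value so empty bins don't produce log(0).
        return [max(c / len(xs), 1e-6) for c in counts]

    p, q = frac(expected), frac(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions give PSI 0; a shifted sample gives a large PSI.
ref = [float(i % 100) for i in range(1000)]
print(round(psi(ref, ref), 6))  # → 0.0
```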
"This project is intentionally simple in modeling and strong in system design."
The goal isn't chasing 99% accuracy; it's about demonstrating how ML systems operate after deployment:
- How feedback gets collected
- How performance is monitored
- How humans stay in the loop
```
# Clone repository
git clone https://github.com/Shreyas-S-809/The-Active-Sentinel-Human-in-the-Loop-Risk-Engine
cd The-Active-Sentinel-Human-in-the-Loop-Risk-Engine

# Install dependencies
pip install -r requirements.txt

# Run backend locally
uvicorn api:app --reload

# Run frontend (separate terminal)
streamlit run app.py
```

This project is deployed using a separated frontend/backend architecture. When running the system locally, a few configuration changes are required to avoid environment-specific issues.
The Streamlit frontend communicates with the FastAPI backend via an API URL.
When deployed, the frontend points to the Render-hosted backend:
```
API_URL = "https://active-sentinel.onrender.com"
```

For local development, update this in frontend/app.py:

```
API_URL = "http://127.0.0.1:8000"
```

This ensures the frontend correctly connects to the locally running FastAPI server.
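An alternative to editing the file by hand is reading the URL from an environment variable, so the same frontend code runs locally and deployed. The variable name `ACTIVE_SENTINEL_API_URL` is a hypothetical choice, not part of the project:

```python
import os

# Hypothetical env-var override; falls back to the local backend.
API_URL = os.environ.get("ACTIVE_SENTINEL_API_URL", "http://127.0.0.1:8000")
print(API_URL)
```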
During deployment, the backend and frontend were separated to meet platform constraints. When running the project locally, the frontend requires additional packages.
Ensure the following dependencies are present in requirements.txt:
```
streamlit
requests
```

Then install all dependencies using:

```
pip install -r requirements.txt
```

These packages are required only for local execution of the Streamlit UI and are intentionally excluded from the backend-only deployment environment.
Terminal 1: Start FastAPI Backend

```
uvicorn api.main:app --reload
```

Terminal 2: Start Streamlit Frontend

```
streamlit run frontend/app.py
```

Access Points:
- Backend: http://127.0.0.1:8000
- Frontend: http://localhost:8501
MIT License - See LICENSE for details.
⭐ Star this repo if you found it helpful!