# Bridging Lab Results to Patient Understanding with AI
AI Healthcare System is a next-gen patient portal built for diagnostic centers. We wanted to solve a simple problem: lab reports are confusing.
Most patients get a PDF full of numbers they don't understand. Our platform fixes this by combining:
- Automated Screening: Immediate risk assessment for Diabetes, Heart Disease, and more.
- AI Explanation: A "Medical Assistant" chat that explains the report in plain English (powered by Gemini Pro).
It's a full-stack solution: diagnostic centers get a dashboard to manage patients, and patients get a secure portal to understand their health.

## Features

### For Patients
- Smart Reports: Upload a PDF and get an instant AI summary.
- Health Assistant: Chat with an AI that knows your medical history.
- Risk Screening: ML models check your vitals (Diabetes, Kidney, Liver, etc.).
### For Doctors & Clinics
- Patient Dashboard: View all patient records in one place.
- Trend Analysis: Visualize patient health metrics over time.
- Secure & Compliant: Role-based access and data isolation.
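As a rough illustration of the trend-analysis idea (the actual dashboard renders Streamlit charts; this helper and its names are hypothetical, not the project's code), a trailing moving average is one simple way to smooth a patient's metric history before plotting:

```python
def moving_average(readings, window=3):
    """Smooth a time-ordered list of metric readings (e.g. fasting
    glucose in mg/dL) with a simple trailing moving average."""
    if window < 1:
        raise ValueError("window must be >= 1")
    smoothed = []
    for i in range(len(readings)):
        start = max(0, i - window + 1)   # trailing window, clipped at the start
        chunk = readings[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Example: a patient's fasting glucose over six visits (mg/dL)
glucose = [110, 118, 125, 121, 130, 128]
trend = moving_average(glucose, window=3)
```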
## ML Risk Screening

We use trained ML models (XGBoost/RandomForest) to screen for:
- Diabetes (Glucose, BMI, Insulin)
- Heart Disease (Cholesterol, BP, ECG)
- Liver & Kidney Health
- Lung Cancer Risk
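The trained models ship as `.pkl` files in `backend/` and are produced offline by `mlops/model_training.py`. As a hedged sketch of what a screening call looks like (the toy rows and thresholds below are illustrative only, not the project's real training data or pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training set: [glucose, bmi, insulin] -> diabetic (1) / healthy (0).
# These few rows exist only to illustrate the interface; the real models
# are trained offline and loaded from backend/*.pkl at runtime.
X = np.array([
    [90, 22.0, 80],  [95, 24.5, 90],   [100, 26.0, 100],   # healthy
    [160, 31.0, 200], [180, 35.5, 250], [170, 33.0, 220],  # diabetic
])
y = np.array([0, 0, 0, 1, 1, 1])

model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)

# Screening a new patient's vitals.
patient = np.array([[175, 34.0, 230]])
risk = model.predict_proba(patient)[0, 1]  # probability of the "at risk" class
```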
## Architecture Highlights

- RAG Architecture: We use separate vector stores for each user to prevent data leakage.
- Vision AI: Gemini Pro Vision reads raw PDF reports so you don't have to type data manually.
- Security: Full JWT authentication and session management.
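The per-user isolation idea can be sketched as follows. The project uses FAISS and LangChain; this plain-numpy version with made-up class and method names only illustrates the principle of keeping each user's vectors in a separate index:

```python
import numpy as np

class PerUserVectorStore:
    """One tiny in-memory index per user, so user A's documents can
    never surface in user B's similarity search."""

    def __init__(self):
        self._stores = {}  # user_id -> (list_of_texts, list_of_vectors)

    def add(self, user_id, text, vector):
        texts, vecs = self._stores.setdefault(user_id, ([], []))
        texts.append(text)
        vecs.append(np.asarray(vector, dtype=float))

    def search(self, user_id, query_vector, k=1):
        if user_id not in self._stores:
            return []  # unknown user: nothing leaks, nothing returned
        texts, vecs = self._stores[user_id]
        q = np.asarray(query_vector, dtype=float)
        mat = np.stack(vecs)
        # Cosine similarity against this user's vectors only.
        sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-9)
        top = np.argsort(sims)[::-1][:k]
        return [texts[i] for i in top]

store = PerUserVectorStore()
store.add("alice", "Alice's lab report", [1.0, 0.0])
store.add("bob", "Bob's lab report", [0.0, 1.0])
hits = store.search("bob", [0.1, 0.9])  # only Bob's documents are searched
```

A real deployment would persist one FAISS index per user on disk instead of an in-memory dict, but the isolation property is the same: the query never touches another user's index.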
## Quick Start (Docker)

Spin up the entire stack with one command:

```bash
# Clone the repository
git clone https://github.com/pavanbadempet/AI-Healthcare-System.git
cd AI-Healthcare-System

# Configure environment
cp .env.example .env
# Edit .env and add your GOOGLE_API_KEY

# Launch all services
docker-compose up --build
```

| Service | URL |
|---|---|
| App (Frontend) | http://localhost:8501 |
| API Docs | http://localhost:8000/docs |
| MLflow UI | http://localhost:5000 |
## Manual Setup

Prerequisites: Python 3.10+, pip

```bash
# Install dependencies (Full Feature Set)
pip install -r requirements-full.txt
# OR for the Lite version (no PySpark/heavy ML)
# pip install -r requirements.txt

# Start Backend (Terminal 1)
uvicorn backend.main:app --reload --port 8000

# Start Frontend (Terminal 2)
streamlit run frontend/main.py
```

On Windows, helper scripts are available:

```powershell
# Run everything
.\scripts\runners\run_app.bat

# Run E2E tests
.\scripts\runners\run_e2e_tests.ps1
```

## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Streamlit | Responsive UI & Data Visualization |
| Backend | FastAPI, Pydantic | REST API & Request Validation |
| ML/AI | XGBoost, Scikit-Learn | Disease Classification Models |
| GenAI | Gemini Pro, LangChain | Chat Assistant & RAG Pipeline |
| Vector DB | FAISS | Semantic Search & Memory |
| Database | SQLite | User Data & Chat History |
| DevOps | Docker, GitHub Actions | Containerization & CI/CD |
| Hosting | Streamlit Cloud, Render | Production Deployment |
## Testing

```bash
# Run all tests with coverage
pytest tests/ --cov=backend --cov-report=term-missing

# Run specific test suites
pytest tests/unit/        # Unit tests
pytest tests/integration/ # Integration tests
pytest tests/e2e/         # End-to-end tests (requires running app)
```

GitHub Actions automatically runs on every push:
- Unit & Integration Tests
- Code Coverage Reporting
- Placeholder Model Generation for CI
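A unit test in `tests/unit/` might look roughly like this. The helper under test, `classify_glucose`, is hypothetical; the project's real tests target `backend/prediction.py` and the FastAPI routes, but the shape is the same:

```python
# Hypothetical helper mirroring the kind of thresholding logic the
# backend wraps around its ML models: bucket fasting glucose (mg/dL).
def classify_glucose(mg_dl):
    if mg_dl < 100:
        return "normal"
    if mg_dl < 126:
        return "prediabetic"
    return "diabetic"

def test_classify_glucose_boundaries():
    # Exercise both boundary values and a mid-range reading.
    assert classify_glucose(85) == "normal"
    assert classify_glucose(100) == "prediabetic"
    assert classify_glucose(110) == "prediabetic"
    assert classify_glucose(126) == "diabetic"
```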
## Project Structure

```
├── backend/                 # FastAPI backend
│   ├── main.py              # API entrypoint
│   ├── prediction.py        # ML prediction logic
│   ├── agent.py             # AI chat agent
│   ├── rag.py               # RAG pipeline
│   ├── vision_service.py    # Lab report analyzer
│   └── *.pkl                # Trained ML models
├── frontend/                # Streamlit frontend
│   ├── main.py              # App entrypoint
│   ├── views/               # Page components
│   └── components/          # Reusable UI components
├── mlops/                   # MLOps pipeline
│   ├── data_ingestion.py    # Data loading
│   ├── data_processing.py   # Feature engineering
│   └── model_training.py    # Training scripts
├── tests/                   # Test suites
│   ├── unit/                # Unit tests
│   ├── integration/         # API integration tests
│   └── e2e/                 # End-to-end tests
├── scripts/                 # Utility scripts
├── docker-compose.yml       # Multi-container setup
└── render.yaml              # Render deployment config
```
## Deployment

Streamlit Cloud (frontend):
1. Fork/push to GitHub
2. Connect the repository to Streamlit Cloud
3. Set the `BACKEND_URL` environment variable

Render (backend):
1. Connect the repository to Render
2. Render uses `render.yaml` for auto-configuration
3. Set the required environment variables:
   - `GOOGLE_API_KEY`: Gemini API key
   - `SECRET_KEY`: JWT signing key
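Inside the backend, those variables would typically be read once at startup. A minimal sketch with `os.getenv` (the variable names come from this README; the fail-fast helper itself is an assumption, not the project's actual settings code):

```python
import os

def load_settings():
    """Read required configuration from the environment, failing fast
    at startup when a key is missing rather than at first use."""
    google_api_key = os.getenv("GOOGLE_API_KEY")
    secret_key = os.getenv("SECRET_KEY")
    missing = [name for name, value in
               [("GOOGLE_API_KEY", google_api_key),
                ("SECRET_KEY", secret_key)]
               if not value]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {"google_api_key": google_api_key, "secret_key": secret_key}
```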
## Contributing

Contributions are welcome! Please check CONTRIBUTING.md for guidelines.

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License; see the LICENSE file for details.
