A production-ready AI-powered web application that reads prescription images, analyzes symptoms, and provides medicine information using OCR and LangChain with Ollama. Built with a privacy-first approach: all AI processing happens locally.
- 📋 Prescription Analysis: Upload prescription images and extract medicine information using advanced OCR
- 🩺 Symptom Analysis: Describe symptoms and get AI-powered health insights and recommendations
- 💊 Medicine Information: Get detailed information about specific medicines including dosages and side effects
- 🔍 Medicine Image Identification: Upload medicine images to identify pills, tablets, and capsules
- 📦 Packaging Analysis: Analyze medicine packaging to extract comprehensive product information
- 📸 Advanced OCR: Optimized text recognition from various image types and qualities
- 🎨 Modern UI: Responsive web interface with dark/light mode support
- 🔒 Privacy First: All AI processing happens locally with Ollama - no data sent to external services
- ⚡ Fast Performance: Optimized image processing and AI inference
- 🧪 Well Tested: Comprehensive test suite with pytest
- 📱 Mobile Friendly: Responsive design that works on all devices
- 🔧 Production Ready: Proper logging, error handling, and configuration management
- Python 3.8+ with pip
- Ollama - Local AI model server
- Tesseract OCR - Text extraction from images
- Git (for cloning)
# Clone the repository
git clone https://github.com/your-username/ai-prescription-reader.git
cd ai-prescription-reader
# Run the automated setup script
chmod +x setup.sh
./setup.sh
# Start Ollama in a separate terminal
ollama serve
# Start the application
source venv/bin/activate
python main.py
That's it! Open your browser to http://localhost:8000
For more control over the installation process:
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Upgrade pip and install dependencies
pip install --upgrade pip
pip install -r requirements.txt
macOS (with Homebrew):
brew install tesseractUbuntu/Debian:
sudo apt-get update
sudo apt-get install tesseract-ocr tesseract-ocr-eng
Windows:
- Download Tesseract from GitHub Releases
- Add Tesseract to your PATH, or update ocr_service.py with the Tesseract executable path
# Install Ollama from https://ollama.ai/download
# Pull the required model
ollama pull llama3.2
# Start Ollama server
ollama serve
Copy and modify the environment file:
cp .env.example .env
# Edit .env with your preferred settings
# Development mode
python main.py
# Production mode with Gunicorn
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
- Navigate to the Prescription tab
- Upload Image: Drag & drop or click to select prescription image (JPG, PNG, PDF)
- Or Enter Text: Manually type prescription text
- Click "Analyze Prescription"
- View extracted medicines, dosages, instructions, and doctor information
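To give a feel for the structured output of a prescription analysis, here is a minimal, hypothetical sketch of turning free-form prescription text into medicine entries. The real extraction is done by the LLM via LangChain; this regex-based version is only illustrative, and the pattern and field names are assumptions, not the app's actual code.

```python
import re

# Illustrative pattern: "<name> <dose><unit> <instructions>".
# This is a simplified stand-in for the LLM-based extraction.
LINE_PATTERN = re.compile(
    r"(?P<name>[A-Za-z][A-Za-z ]*?)\s+"
    r"(?P<dose>\d+\s*(?:mg|ml|mcg|g))\s*"
    r"(?P<instructions>.*)",
    re.IGNORECASE,
)

def parse_prescription_text(text):
    """Extract (name, dose, instructions) entries from free-form text."""
    medicines = []
    for line in text.splitlines():
        match = LINE_PATTERN.match(line.strip())
        if match:
            medicines.append({
                "name": match.group("name").strip(),
                "dose": match.group("dose").replace(" ", ""),
                "instructions": match.group("instructions").strip(),
            })
    return medicines

meds = parse_prescription_text(
    "Paracetamol 500mg twice daily\nAspirin 75 mg once daily"
)
```

A sketch like this also shows why OCR errors matter: a misread dose survives parsing, so extracted values should always be verified against the original prescription.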
- Go to the Symptoms tab
- Describe your symptoms in detail in the text area
- Click "Analyze Symptoms"
- Review possible conditions, recommendations, and when to seek medical attention
- Switch to the Medicine Info tab
- Choose from three options:
- Search by Name: Enter medicine name and get detailed information
- Identify from Image: Upload medicine image to identify pills/tablets
- Analyze Packaging: Upload packaging image for comprehensive analysis
- Click the appropriate action button
- View detailed results including identification, composition, and safety information
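The three Medicine Info options correspond to distinct API endpoints. As a rough sketch of a client-side dispatch, the helper below maps each option to its method and path; the function name and option keys are hypothetical, but the endpoint paths come from this README.

```python
BASE = "/api/v1"

def medicine_endpoint(option, value=""):
    """Return (HTTP method, path) for a Medicine Info option.

    Option keys here are illustrative; the endpoint paths match the
    API reference in this README.
    """
    if option == "search":       # Search by Name
        return ("GET", f"{BASE}/medicine/info/{value}")
    if option == "identify":     # Identify from Image (multipart upload)
        return ("POST", f"{BASE}/medicine/identify")
    if option == "packaging":    # Analyze Packaging (multipart upload)
        return ("POST", f"{BASE}/medicine/analyze-packaging")
    raise ValueError(f"unknown option: {option}")

method, path = medicine_endpoint("search", "paracetamol")
```

Note that the two image options take a multipart file upload rather than a URL parameter, so a real client would attach the image as form data.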
http://localhost:8000/api/v1
POST /prescription/analyze
Content-Type: multipart/form-data
# Upload prescription image
curl -X POST "http://localhost:8000/api/v1/prescription/analyze" \
-F "file=@prescription.jpg"POST /prescription/analyze-text?prescription_text=<text>
# Analyze prescription text
curl -X POST "http://localhost:8000/api/v1/prescription/analyze-text?prescription_text=Paracetamol 500mg twice daily"POST /symptoms/analyze?symptoms=<symptoms>
# Analyze symptoms
curl -X POST "http://localhost:8000/api/v1/symptoms/analyze?symptoms=headache and fever"GET /medicine/info/{medicine_name}
# Get medicine info by name
curl "http://localhost:8000/api/v1/medicine/info/paracetamol"POST /medicine/identify
Content-Type: multipart/form-data
# Identify medicine from image
curl -X POST "http://localhost:8000/api/v1/medicine/identify" \
-F "file=@medicine_image.jpg"POST /medicine/analyze-packaging
Content-Type: multipart/form-data
# Analyze medicine packaging
curl -X POST "http://localhost:8000/api/v1/medicine/analyze-packaging" \
-F "file=@packaging_image.jpg"POST /medicine/batch-info
Content-Type: application/json
# Get info for multiple medicines
curl -X POST "http://localhost:8000/api/v1/medicine/batch-info" \
-H "Content-Type: application/json" \
-d '["paracetamol", "aspirin"]'
GET /health
# Health status
curl "http://localhost:8000/health"All API responses follow this structure:
{
"status": "success|error",
"data": {...},
"message": "Optional message",
"timestamp": "2025-07-07T10:30:00Z"
}
# Install test dependencies
pip install pytest pytest-asyncio pytest-cov
# Run all tests
pytest
# Run with coverage
pytest --cov=app tests/
# Run specific test file
pytest test_api.py -v
- API endpoint testing
- OCR service testing
- AI service integration testing
- Error handling validation
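As a flavor of what such tests can look like, here is an illustrative pytest-style check of the response envelope documented in the API section. `make_response` is a hypothetical helper standing in for the app's actual response construction, not code from this repository.

```python
from datetime import datetime, timezone

def make_response(status, data=None, message=""):
    """Hypothetical builder for the documented response envelope."""
    return {
        "status": status,
        "data": data or {},
        "message": message,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def test_response_envelope():
    # Every response should carry the four documented fields.
    resp = make_response("success", {"medicines": []})
    assert resp["status"] in ("success", "error")
    assert isinstance(resp["data"], dict)
    assert resp["message"] == ""
    assert "timestamp" in resp

test_response_envelope()
```

Real endpoint tests would exercise the FastAPI routes via a test client instead of a local helper, but the shape of the assertions is the same.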
# Dockerfile included in repository
docker build -t ai-prescription-reader .
docker run -p 8000:8000 ai-prescription-reader
# Install Gunicorn
pip install gunicorn
# Run with multiple workers
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
# Production settings
ENVIRONMENT=production
DEBUG=False
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
LOG_LEVEL=INFO
- File Upload Limits: Configured for prescription images only
- Rate Limiting: Implement rate limiting for production use
- HTTPS: Use reverse proxy (nginx) with SSL certificates
- Input Validation: All inputs are validated and sanitized
- Error Handling: Proper error messages without exposing internals
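The upload-limit and error-handling points above can be sketched as a small validation helper. The allowed extensions and size limit here are assumptions for illustration, not the app's exact configuration; note that the error strings deliberately avoid exposing internals.

```python
import os

# Illustrative limits - adjust to your deployment's actual policy.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".pdf"}
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # assumed 10 MB cap

def validate_upload(filename, size):
    """Return "" if the upload is acceptable, else a safe error message."""
    ext = os.path.splitext(filename.lower())[1]
    if ext not in ALLOWED_EXTENSIONS:
        return "Unsupported file type"
    if size > MAX_UPLOAD_BYTES:
        return "File too large"
    return ""

assert validate_upload("prescription.jpg", 1024) == ""
```

Extension checks alone are not sufficient in production; a server should also verify the content type of the decoded bytes before passing them to OCR.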
ai-prescription-reader/
├── main.py                    # FastAPI application entry point
├── requirements.txt           # Python dependencies
├── setup.sh                   # Automated setup script
├── .env                       # Environment configuration
├── .gitignore                 # Git ignore patterns
├── Dockerfile                 # Container configuration
├── docker-compose.yml         # Multi-service orchestration
├── app/                       # Application package
│   ├── core/
│   │   ├── __init__.py
│   │   ├── config.py          # Settings and configuration
│   │   └── logging.py         # Logging configuration
│   ├── routers/               # API route handlers
│   │   ├── __init__.py
│   │   ├── prescription.py    # Prescription analysis endpoints
│   │   ├── symptoms.py        # Symptom analysis endpoints
│   │   └── medicine.py        # Medicine information endpoints
│   ├── services/              # Business logic services
│   │   ├── __init__.py
│   │   ├── ocr_service.py     # OCR text extraction
│   │   └── ollama_langchain.py # AI analysis with LangChain
│   └── models/                # Data models and schemas
│       ├── __init__.py
│       └── schemas.py         # Pydantic models
├── static/                    # Frontend assets
│   ├── index.html             # Main web interface
│   ├── css/
│   ├── js/
│   └── images/
├── tests/                     # Test suite
│   ├── test_api.py            # API endpoint tests
│   ├── test_ocr.py            # OCR service tests
│   └── test_ai.py             # AI service tests
└── docs/                      # Documentation
    ├── API.md                 # API documentation
    ├── DEPLOYMENT.md          # Deployment guide
    └── CONTRIBUTING.md        # Contribution guidelines
IMPORTANT MEDICAL DISCLAIMER
This application is for informational and educational purposes only and is not a substitute for professional medical advice, diagnosis, or treatment.
- Always seek the advice of your physician or other qualified health provider
- Never disregard professional medical advice because of information from this app
- Never delay seeking medical treatment because of information from this app
- This app does not provide medical diagnoses or treatment recommendations
- The AI analysis is based on general medical knowledge and may not be accurate
- Prescription information extracted may contain errors - always verify with healthcare providers
If you think you may have a medical emergency, call your doctor or emergency services immediately.
- FastAPI - Modern, fast web framework
- LangChain - AI/LLM application framework
- Ollama - Local LLM hosting and inference
- Tesseract OCR - Optical character recognition
- OpenCV - Computer vision and image processing
- Pillow - Python imaging library
- Vanilla JavaScript - No framework dependencies
- Modern CSS - Responsive design with CSS Grid and Flexbox
- HTML5 - Semantic markup with accessibility features
- Pytest - Testing framework
- Black - Code formatting
- Docker - Containerization
- Gunicorn - WSGI HTTP Server
We welcome contributions! Please see our Contributing Guidelines for details.
# Fork and clone the repository
git clone https://github.com/your-username/ai-prescription-reader.git
cd ai-prescription-reader
# Create development environment
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -r requirements-dev.txt
# Run tests
pytest
# Format code
black .
# Run linting
flake8 app/
- Use GitHub Issues for bug reports and feature requests
- Provide detailed reproduction steps
- Include system information and logs
This project is licensed under the MIT License - see the LICENSE file for details.
- Tesseract OCR team for the excellent OCR engine
- Ollama team for making local LLM inference accessible
- LangChain community for the powerful AI framework
- FastAPI creators for the amazing web framework
- Medical professionals who provided guidance on health information accuracy
- Documentation: Check the docs/ directory
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ for better healthcare accessibility
⭐ Star this repository if it helped you!