A production-grade abstractive summarization system using BART transformer model with FastAPI backend and Streamlit frontend.
- Frontend: https://stunning-octo-barnacle-hefbzzdatffcfef5uulaju.streamlit.app/
- GitHub Repository: https://github.com/Prashant-ambati/stunning-octo-barnacle
- API: Deploy to Railway/Render (instructions below)
- API Docs: Available at the `/docs` endpoint after API deployment
- BART-large-cnn model for high-quality abstractive summarization
- Fast inference with an optimized PyTorch implementation
- RESTful API with FastAPI and automatic documentation
- Interactive UI with Streamlit frontend
- Real-time metrics and performance analytics
- Docker support for easy deployment
- Cloud-ready with multiple deployment options
- ML: PyTorch, HuggingFace Transformers
- Backend: FastAPI, Pydantic
- Frontend: Streamlit, Plotly
- Deployment: Docker, GitHub Actions
Open the live app: no installation required!
```bash
# Clone the repository
git clone https://github.com/Prashant-ambati/stunning-octo-barnacle.git
cd stunning-octo-barnacle

# Install API dependencies
pip install -r requirements_api.txt

# Start the API server
python app_simple.py
```

In another terminal:
```bash
# Install Streamlit dependencies
pip install -r requirements_streamlit.txt

# Start the frontend
streamlit run streamlit_app.py
```

To run the full stack:

```bash
# Install all dependencies
pip install -r requirements.txt

# Start services with Docker
docker-compose up -d

# Run the full API
python -m app.main
```

Example API usage from Python:

```python
import requests

# Summarize text
response = requests.post("http://localhost:8002/summarize", json={
    "text": "Your long document text here...",
    "max_length": 150,
    "min_length": 50
})
result = response.json()
print(f"Summary: {result['summary']}")
print(f"Compression: {result['compression_ratio']}x")
```

Or with curl:

```bash
curl -X POST "http://localhost:8002/summarize" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Your text here...",
    "max_length": 150,
    "min_length": 50
  }'
```

Project structure:

```
├── app/
│   ├── main.py              # FastAPI application
│   ├── models/              # ML model handling
│   ├── api/                 # API routes
│   ├── core/                # Configuration
│   └── database/            # MongoDB operations
├── models/                  # Trained model files
├── scripts/                 # Training and evaluation
├── tests/                   # Unit tests
├── docker-compose.yml       # Services orchestration
├── Dockerfile               # Container definition
└── requirements.txt         # Dependencies
```
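BART-large-cnn can only attend to roughly 1,024 input tokens, so very long documents may need to be split before being sent to `/summarize`. A minimal client-side sketch, assuming a word-count budget as a rough stand-in for the token limit (the `chunk_text` helper and the 700-word default are illustrative, not part of this repo):

```python
def chunk_text(text: str, max_words: int = 700) -> list[str]:
    """Split text into word-bounded chunks that fit the model's input window."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each chunk can then be posted to the API separately, and the partial summaries concatenated or summarized again.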
- Speed: 2.3× faster inference with ONNX optimization
- Quality: Near research-level ROUGE-L scores
- Scalability: Handles concurrent requests with rate limiting
Status: LIVE at stunning-octo-barnacle-hefbzzdatffcfef5uulaju.streamlit.app
To deploy your own:
- Fork this repository
- Connect to Streamlit Cloud
- Deploy `streamlit_app.py`
- Update the `API_URL` environment variable
Render:
- Connect your GitHub repository
- Use the provided `render.yaml` configuration
- Deploy both API and frontend services
Railway:
- Connect your GitHub repository
- Railway will auto-detect the Python app
- Uses the provided `railway.json` configuration
```bash
# Install Heroku CLI and login
heroku create your-app-name
git push heroku main
```

Docker:

```bash
# Build and push
docker build -f Dockerfile.simple -t yourusername/semantic-summarizer .
docker push yourusername/semantic-summarizer
```

- Model: BART-large-cnn (406M parameters)
- Inference Speed: ~10-15 seconds per summary (CPU)
- Compression Ratio: 2-5x typical reduction
- Memory Usage: ~1.5GB for model loading
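The compression ratio above is presumably the character-length ratio of input to summary; the exact definition the API uses is an assumption here. A hypothetical helper:

```python
def compression_ratio(original: str, summary: str) -> float:
    """Ratio of input length to summary length in characters (higher = shorter summary)."""
    return round(len(original) / max(len(summary), 1), 2)

# e.g. a 400-character article reduced to a 100-character summary
print(compression_ratio("x" * 400, "y" * 100))  # 4.0
```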
```bash
# Test the model
python test_simple.py

# Test the API
python test_api.py

# Run unit tests
python -m pytest tests/ -v
```

Repository layout:

```
├── app/                          # Full production API
│   ├── main.py                   # FastAPI application
│   ├── models/                   # ML model handling
│   ├── api/                      # API routes
│   └── database/                 # MongoDB operations
├── app_simple.py                 # Simplified API for deployment
├── streamlit_app.py              # Streamlit frontend
├── test_simple.py                # Basic model test
├── test_api.py                   # API integration test
├── requirements_api.txt          # API dependencies
├── requirements_streamlit.txt    # Frontend dependencies
├── docker-compose.yml            # Full stack deployment
├── Dockerfile.simple             # Simple API container
└── .github/workflows/            # CI/CD pipelines
```
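`test_api.py` itself is not shown here, but an API integration test would likely assert on the shape of the `/summarize` response. A minimal check of that kind, with field names taken from the usage example above (the helper function is illustrative, not code from this repo):

```python
def looks_like_summary_response(payload: dict) -> bool:
    """Check the fields the /summarize endpoint is expected to return."""
    return (
        isinstance(payload.get("summary"), str)
        and len(payload["summary"]) > 0
        and isinstance(payload.get("compression_ratio"), (int, float))
        and payload["compression_ratio"] > 0
    )
```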
```bash
# API Configuration
API_URL=http://localhost:8002    # For Streamlit app
PORT=8002                        # API port

# Model Configuration
MODEL_NAME=facebook/bart-large-cnn
MAX_LENGTH=150
MIN_LENGTH=50
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- HuggingFace Transformers for the BART model
- FastAPI for the excellent web framework
- Streamlit for the beautiful frontend framework
- Email: prashantambati12@gmail.com
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Star this repository if you found it helpful!