A Python-based LLM application that summarizes text into Traditional Chinese bullet points using multiple AI providers, with automatic fallback for improved service robustness.
This project demonstrates lightweight LLM service orchestration and API integration in a practical summarization scenario.
It supports multiple providers:
- Hugging Face
- Google Gemini
- OpenRouter
If one provider fails due to quota limits, service unavailability, or response issues, the system automatically switches to another available provider.
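The fallback behaviour described above can be sketched as a simple ordered loop (function and variable names here are illustrative, not the project's actual API):

```python
def summarize_with_fallback(text, providers):
    """Try each provider in order; return the first successful summary.

    `providers` is a list of callables that take the input text and either
    return a summary string or raise on quota/availability/response errors.
    """
    errors = []
    for provider in providers:
        try:
            return provider(text)
        except Exception as exc:  # quota limit, outage, or malformed response
            errors.append(f"{getattr(provider, '__name__', provider)}: {exc}")
    # Only reached when every provider failed
    raise RuntimeError("All providers failed:\n" + "\n".join(errors))
```

Keeping the loop provider-agnostic means adding a new provider is just appending another callable to the list.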
- Multi-provider LLM integration
- Automatic fallback across providers
- Traditional Chinese bullet-point summarization
- Environment-based API key management with `.env`
- Simple CLI workflow for local testing
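Environment-based key management typically reads each key at startup and fails fast with a clear message when one is missing; a minimal sketch (the helper name is illustrative):

```python
import os

def load_api_key(name: str) -> str:
    """Read an API key from the environment; fail fast if it is missing."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"Missing {name}; add it to your .env file")
    return key
```

With `python-dotenv`, calling `load_dotenv()` before these lookups populates the environment from the `.env` file.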
This project demonstrates practical experience in:
- integrating multiple LLM APIs
- handling provider fallback logic
- managing environment-based configuration
- building a simple but extensible application workflow
This project is not focused on model training.
Instead, it emphasizes application-layer AI engineering:
- integrating multiple external AI services
- improving robustness through fallback routing
- structuring provider-specific logic into reusable modules
- building a practical text-processing workflow
```
main.py
providers/
├── huggingface_provider.py
├── gemini_provider.py
└── openrouter_provider.py
```
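Each provider module can expose the same minimal interface so `main.py` treats them interchangeably — one way to express that contract is an abstract base class (a sketch; the class and method names are assumptions, not the project's actual code):

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Common interface each provider module implements."""

    name: str  # human-readable provider name, e.g. "gemini"

    @abstractmethod
    def summarize(self, text: str) -> str:
        """Return a Traditional Chinese bullet-point summary of `text`."""
```

The fallback logic then only depends on `summarize`, never on provider-specific request details.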
```
pip install -r requirements.txt
```

Create a `.env` file:

```
GEMINI_API_KEY=your_key
HF_API_KEY=your_key
OPENROUTER_API_KEY=your_key
```

Run the program:

```
python main.py
```

Enter your text and the program will return a summarized result.
Input:
Artificial intelligence is changing education and healthcare. Companies are focusing on efficiency, cost, and data security.
Output:
1. AI is transforming education and healthcare
2. Companies focus on improving efficiency
3. Cost reduction and data security are key priorities
- API keys are required to run this project
- Free-tier APIs may have rate limits or availability issues
- The fallback mechanism is designed to improve reliability when one provider is temporarily unavailable
This project was built to demonstrate practical skills in LLM API integration, fallback handling, and lightweight AI application design.