Translate books, subtitles, and documents using AI - locally or in the cloud.
No size limit. Process documents of any length - from a single page to thousand-page novels. The intelligent chunking system handles unlimited content while preserving context between segments.
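One way to picture token-based chunking with context overlap is the sketch below. This is an illustration, not the project's actual implementation: tiktoken as the tokenizer and the 50-token overlap are assumptions, though the 400-token default mirrors `MAX_TOKENS_PER_CHUNK` in the configuration section further down.

```python
# Token-based chunking with overlap -- illustrative sketch only.
# Assumptions: tiktoken as the tokenizer (pip install tiktoken) and a
# 50-token overlap; the real chunker in this project may differ.
import tiktoken

def chunk_text(text, max_tokens=400, overlap=50):
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    chunks, start = [], 0
    while start < len(tokens):
        end = min(start + max_tokens, len(tokens))
        chunks.append(enc.decode(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - overlap  # re-feed the tail so the next chunk keeps context
    return chunks
```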
Perfect preservation. Your documents come out exactly as they went in: EPUB formatting, styles, and structure remain intact. SRT timecodes stay perfectly synchronized. Every tag, every timestamp, every formatting detail is preserved.
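For SRT, that guarantee follows from only ever rewriting the text lines of each cue, never the indices or timecodes. A minimal sketch of the idea (not the project's actual parser; `translate` is a hypothetical text-to-text callback):

```python
# Translate SRT cue text while copying indices and timecodes verbatim.
# Sketch only; `translate` is a hypothetical callback.
import re

CUE = re.compile(
    r"(\d+)\s*\n"                                              # cue index
    r"(\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3})\s*\n"  # timecodes
    r"(.*?)(?:\n\n|\Z)",                                       # subtitle text
    re.S,
)

def translate_srt(srt, translate):
    blocks = []
    for index, timing, text in CUE.findall(srt):
        blocks.append(f"{index}\n{timing}\n{translate(text.rstrip())}\n")
    return "\n".join(blocks)
```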
Resume anytime. Interrupted translation? Pick up exactly where you left off. The checkpoint system saves progress automatically.
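Conceptually, a checkpoint is just a record of which chunks are already translated, written atomically so an interruption can't corrupt it. A minimal sketch (the file name and JSON layout here are assumptions, not the project's actual format):

```python
# Save/restore translation progress -- illustrative sketch only.
# The checkpoint file name and JSON layout are assumptions.
import json
import os

CHECKPOINT = "book.epub.checkpoint.json"

def save_progress(translated):
    # `translated` maps chunk id -> translated text.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(translated, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename on POSIX and Windows

def load_progress():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, encoding="utf-8") as f:
            return json.load(f)
    return {}
```

On restart, any chunk already present in the checkpoint is skipped.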
Formats: EPUB, SRT, DOCX, TXT
Providers: Ollama (local), OpenRouter, OpenAI (plus OpenAI-compatible servers such as LM Studio), Gemini
Translation Quality Benchmarks: find the best model for your target language.
- Download and extract the archive for your platform
- Install Ollama (for local AI models)
- Run `TranslateBook.exe` (Windows) or `./TranslateBook` (macOS)
- Open http://localhost:5000 in your browser

Note: First run creates a `TranslateBook_Data` folder with configuration files.

macOS: On first launch, go to System Settings > Privacy & Security and click "Open Anyway".
Prerequisites: Python 3.8+, Ollama, Git
```bash
git clone https://github.com/hydropix/TranslateBooksWithLLMs.git
cd TranslateBooksWithLLMs
ollama pull qwen3:14b   # Download a model

# Windows
start.bat

# Mac/Linux
chmod +x start.sh && ./start.sh
```

The web interface opens at http://localhost:5000.
| Provider | Type | Setup |
|---|---|---|
| Ollama | Local | ollama.com |
| OpenAI-Compatible | Local | llama.cpp, LM Studio, vLLM, LocalAI... |
| OpenRouter | Cloud (200+ models) | openrouter.ai/keys |
| OpenAI | Cloud | platform.openai.com |
| Gemini | Cloud | Google AI Studio |
OpenAI-Compatible servers: Use `--provider openai` with your server's endpoint (e.g., llama.cpp: `http://localhost:8080/v1/chat/completions`, LM Studio: `http://localhost:1234/v1/chat/completions`).
See docs/PROVIDERS.md for detailed setup instructions.
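If you're not sure a local server actually speaks the OpenAI chat API, a quick smoke test against its endpoint settles it. This is a sketch using the requests library; the port and model name depend on your server, and llama.cpp's default port is assumed here:

```python
# Smoke-test an OpenAI-compatible endpoint (llama.cpp's default port assumed).
# "your-model" is whatever your server has loaded; add an Authorization
# header if your server requires an API key.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "your-model",
        "messages": [{"role": "user", "content": "Reply with OK."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this prints a reply, the same endpoint should work with `--provider openai`.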
```bash
# Basic (auto-generates "book (Chinese).epub")
python translate.py -i book.epub -sl English -tl Chinese

# With OpenRouter
python translate.py -i book.txt --provider openrouter \
    --openrouter_api_key YOUR_KEY -m anthropic/claude-sonnet-4 -tl French

# With OpenAI
python translate.py -i book.txt --provider openai \
    --openai_api_key YOUR_KEY -m gpt-4o -tl French

# With Gemini
python translate.py -i book.txt --provider gemini \
    --gemini_api_key YOUR_KEY -m gemini-2.0-flash -tl French

# With local OpenAI-compatible server (llama.cpp, LM Studio, vLLM, etc.)
python translate.py -i book.txt --provider openai \
    --api_endpoint http://localhost:8080/v1/chat/completions -m your-model -tl French
```

| Option | Description | Default |
|---|---|---|
| `-i, --input` | Input file | Required |
| `-o, --output` | Output file | Auto: `{name} ({lang}).{ext}` |
| `-sl, --source_lang` | Source language | English |
| `-tl, --target_lang` | Target language | Chinese |
| `-m, --model` | Model name | `mistral-small:24b` |
| `--provider` | `ollama` / `openrouter` / `openai` / `gemini` | `ollama` |
| `--text-cleanup` | OCR/typographic cleanup | disabled |
| `--refine` | Second pass for literary polish | disabled |
| `--tts` | Generate audio (Edge-TTS) | disabled |
See docs/CLI.md for all options (TTS voices, rates, formats, etc.).
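The `--tts` option builds on the Edge-TTS library. Standalone, that library works as below; this is a sketch of the underlying dependency, not of how translate.py drives it, and the voice name is just an example:

```python
# Minimal Edge-TTS usage -- the library behind --tts (pip install edge-tts).
# The voice is an example; `edge-tts --list-voices` shows the full list.
import asyncio
import edge_tts

async def main():
    tts = edge_tts.Communicate("Bonjour tout le monde", "fr-FR-DeniseNeural")
    await tts.save("sample.mp3")

asyncio.run(main())
```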
Copy `.env.example` to `.env` and edit:
```bash
# Provider
LLM_PROVIDER=ollama

# Ollama
API_ENDPOINT=http://localhost:11434/api/generate
DEFAULT_MODEL=mistral-small:24b

# API Keys (if using cloud providers)
OPENROUTER_API_KEY=sk-or-v1-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...

# Performance
REQUEST_TIMEOUT=900
MAX_TOKENS_PER_CHUNK=400  # Token-based chunking (default: 400 tokens)
```

Docker:

```bash
docker build -t translatebook .
docker run -p 5000:5000 -v $(pwd)/translated_files:/app/translated_files translatebook
```

See DOCKER.md for more options.
| Problem | Solution |
|---|---|
| Ollama won't connect | Check that Ollama is running; test with `curl http://localhost:11434/api/tags` |
| Model not found | Run `ollama list`, then `ollama pull model-name` |
See docs/TROUBLESHOOTING.md for more solutions.
| Guide | Description |
|---|---|
| docs/PROVIDERS.md | Detailed provider setup (Ollama, LM Studio, OpenRouter, OpenAI, Gemini) |
| docs/CLI.md | Complete CLI reference |
| docs/TROUBLESHOOTING.md | Problem solutions |
| DOCKER.md | Docker deployment guide |
License: AGPL-3.0