
Commit 7e906cb

Author: bram (committed)
Added better readme
1 parent 63986e4 commit 7e906cb

File tree

1 file changed (+16, -76 lines)


docs/usage.md

Lines changed: 16 additions & 76 deletions
@@ -549,100 +549,40 @@ base_url = "http://192.168.1.100:11434"
 
 #### Docker with Ollama
 
-**Option 1: Ollama on Host Machine (Recommended)**
-
-If Ollama is running on your host machine:
+Run Ollama on your host machine, then use Docker with `--network host`:
 
 ```bash
-# Linux/macOS
+# 1. Start Ollama on host
+ollama serve
+
+# 2. Pull a model on host
+ollama pull qwen2.5
+
+# 3. Run translator in Docker (Linux/macOS)
 docker run --rm \
   -v $(pwd):/data \
   --network host \
-  python-gpt-po:latest \
+  ghcr.io/pescheckit/python-gpt-po:latest \
   --provider ollama \
-  --folder /data --bulk
-
-# The --network host allows container to access host's localhost:11434
-```
+  --folder /data
 
-**For macOS/Windows Docker Desktop:**
-```bash
-# Use host.docker.internal to reach host machine
+# macOS/Windows Docker Desktop: use host.docker.internal
 docker run --rm \
   -v $(pwd):/data \
-  python-gpt-po:latest \
+  ghcr.io/pescheckit/python-gpt-po:latest \
   --provider ollama \
   --ollama-base-url http://host.docker.internal:11434 \
-  --folder /data --bulk
-```
-
-**Option 2: Both in Docker Compose**
-
-```yaml
-version: '3.8'
-services:
-  ollama:
-    image: ollama/ollama:latest
-    ports:
-      - "11434:11434"
-    volumes:
-      - ollama_data:/root/.ollama
-    # Optional: GPU support
-    # deploy:
-    #   resources:
-    #     reservations:
-    #       devices:
-    #         - driver: nvidia
-    #           count: 1
-    #           capabilities: [gpu]
-
-  translator:
-    image: python-gpt-po:latest
-    depends_on:
-      - ollama
-    environment:
-      - OLLAMA_BASE_URL=http://ollama:11434
-    volumes:
-      - ./locales:/data
-    command: --provider ollama --folder /data --bulk
-    # Or use pyproject.toml config
-    # volumes:
-    #   - ./locales:/data
-    #   - ./pyproject.toml:/data/pyproject.toml
-
-volumes:
-  ollama_data:
-```
-
-**To use:**
-```bash
-# Pull Ollama model (one-time setup)
-docker compose run ollama ollama pull llama3.2
-
-# Run translation
-docker compose run translator
-
-# Or run both services
-docker compose up
-```
-
-**Option 3: Config File Approach**
-
-Add to your `pyproject.toml`:
-```toml
-[tool.gpt-po-translator.provider.ollama]
-base_url = "http://ollama:11434"  # Service name in docker-compose
-model = "llama3.2"
-timeout = 180
+  --folder /data
 ```
 
-Then mount it:
+**With config file:**
 ```bash
+# Add Ollama config to pyproject.toml in your project
 docker run --rm \
   -v $(pwd):/data \
   -v $(pwd)/pyproject.toml:/data/pyproject.toml \
   --network host \
-  python-gpt-po:latest \
+  ghcr.io/pescheckit/python-gpt-po:latest \
   --provider ollama \
   --folder /data
 ```
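Not part of the commit, but useful alongside it: a quick check that Ollama is actually reachable before running the translator. This sketch assumes Ollama's default port (11434) and its `/api/tags` endpoint, which lists pulled models; `curlimages/curl` is just one convenient image for testing from inside a container.

```bash
# Assumes Ollama's default port and the /api/tags endpoint (lists pulled models).
# From the host:
curl http://localhost:11434/api/tags

# From a container using host networking (Linux):
docker run --rm --network host curlimages/curl http://localhost:11434/api/tags

# From a container on macOS/Windows Docker Desktop:
docker run --rm curlimages/curl http://host.docker.internal:11434/api/tags
```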
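The new text tells readers to add the Ollama settings to `pyproject.toml` without showing them. The keys below are the ones this commit deletes from the old Option 3 (`base_url`, `model`, `timeout` under `[tool.gpt-po-translator.provider.ollama]`), re-pointed at a host-local Ollama to match the `--network host` examples; the values are illustrative.

```toml
# Keys taken from the section removed above; values are examples only.
[tool.gpt-po-translator.provider.ollama]
base_url = "http://localhost:11434"  # host Ollama, reachable with --network host
model = "qwen2.5"
timeout = 180
```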
