This document provides solutions to common issues encountered while setting up or running SaralPolicy.
Symptoms: Hindi TTS generation takes 5-10 minutes. Cause: Neural TTS model (0.9B parameters) running on CPU. This is expected behavior. Neural TTS on CPU is slow but produces high-quality audio.
Solutions:
- Use GPU (if available): `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`
- Use gTTS fallback: Set `TTS_ENGINE=gtts` in `.env` for instant (lower quality) TTS
- Pre-generate audio: For demos, generate audio in advance
- Accept the trade-off: High-quality Hindi TTS on CPU is slow - this is normal
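The gTTS fallback above is driven by an environment variable. A minimal sketch of how that switch might be read, assuming `indic_parler` as the default engine name (the actual default in the codebase may differ):

```python
import os

def pick_tts_engine() -> str:
    """Choose the TTS engine from the TTS_ENGINE environment variable.

    "gtts" is the documented fast fallback; the "indic_parler" default
    name here is an assumption for illustration.
    """
    engine = os.getenv("TTS_ENGINE", "indic_parler").strip().lower()
    if engine not in ("indic_parler", "gtts"):
        engine = "gtts"  # unknown value: fall back to the fast engine
    return engine
```

With `TTS_ENGINE=gtts` set in the environment, `pick_tts_engine()` returns `"gtts"`; unset, it returns the neural default.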
Expected Times:
| Hardware | ~100 chars | ~500 chars |
|---|---|---|
| CPU | 2-5 min | 5-10 min |
| GPU (CUDA) | 5-15 sec | 15-45 sec |
Symptoms: Warning message during TTS generation. Cause: Informational message from the model. Solution: This is safe to ignore. The model handles it automatically.
Symptoms: TTS falls back to gTTS instead of Indic Parler-TTS. Solution:
- Get a token from https://huggingface.co/settings/tokens
- Add to `backend/.env`: `HF_TOKEN=hf_your_token_here`
- Restart the application
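A quick sanity check for this situation is to verify the token is actually visible to the process before the model download is attempted. A minimal sketch; the `hf_` prefix check is only a heuristic (real validation happens when Hugging Face is contacted):

```python
import os

def hf_token_present() -> bool:
    """Heuristic check that HF_TOKEN is set and looks like a HF token."""
    token = os.getenv("HF_TOKEN", "")
    return token.startswith("hf_") and len(token) > 3
```

If this returns `False` after you have edited `backend/.env`, the application was likely not restarted or the `.env` file is not being loaded.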
Symptoms: MP3 conversion fails, falls back to WAV. Cause: ffmpeg not installed on system. Solution:
- Windows: `winget install ffmpeg` or download from https://ffmpeg.org
- Linux: `sudo apt install ffmpeg`
- Mac: `brew install ffmpeg`
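The WAV fallback behavior described above can be sketched as a simple PATH check before conversion is attempted:

```python
import shutil

def can_convert_to_mp3() -> bool:
    """Return True if ffmpeg is on PATH; otherwise keep WAV output."""
    return shutil.which("ffmpeg") is not None
```

`shutil.which` searches the same PATH the application sees, so this also catches the case where ffmpeg is installed but a terminal restart is still needed for PATH changes to take effect.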
Symptoms: Application fails to start or crashes when analyzing; logs show connection errors. Cause: Ollama background service is not running. Solution: Open a separate terminal and run:
`ollama serve`

Keep this terminal open.
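To turn the connection error into a clear message, the backend can probe Ollama before analyzing. A minimal sketch assuming Ollama's default local port (11434); the URL and timeout values are illustrative:

```python
import urllib.request
import urllib.error

def ollama_running(url: str = "http://localhost:11434",
                   timeout: float = 2.0) -> bool:
    """Return True if something answers on Ollama's default endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timeout: the service is not reachable.
        return False
```

If this returns `False`, start `ollama serve` in a separate terminal and retry.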
Symptoms: Error logs indicate model 404 or not found. Solution: Pull the model with `ollama pull gemma2:2b`, then verify installation with `ollama list`.

Symptoms: Document analysis takes >30 seconds. Cause:
- Running on CPU with limited RAM.
- Large PDF size.

Solutions:
- Ensure you have at least 8GB RAM.
- Close other memory-intensive applications (browser tabs, IDEs).
- Advanced: Switch to a smaller model (e.g., `gemma:2b`) in `app/services/ollama_llm_service.py` if hardware is very limited.
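Before switching models, it can help to confirm programmatically which models are actually installed. A minimal sketch that parses the JSON shape returned by Ollama's `/api/tags` endpoint; the hardcoded sample payload below is illustrative, not a real response:

```python
import json

def model_installed(tags_json: str, model: str) -> bool:
    """Check a /api/tags-style JSON payload for an installed model name."""
    data = json.loads(tags_json)
    return any(m.get("name") == model for m in data.get("models", []))

# Illustrative payload mirroring the endpoint's documented shape.
sample = '{"models": [{"name": "gemma2:2b"}]}'
```

`model_installed(sample, "gemma2:2b")` returns `True` for the sample; a missing model means `ollama pull` is still needed.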
Symptoms: Upload fails immediately.
Solution: Only .pdf, .docx, and .txt files are supported. Ensure the file has the correct extension.
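The supported-type rule above amounts to a case-insensitive extension allow-list. A minimal sketch of that check, using the three extensions the document lists:

```python
from pathlib import Path

ALLOWED_EXTENSIONS = {".pdf", ".docx", ".txt"}

def is_supported_upload(filename: str) -> bool:
    """Reject unsupported uploads by extension before any parsing runs."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS
```

Note that `policy.PDF` passes (extensions are lowercased first), while `scan.jpeg` is rejected immediately.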
Symptoms: You see this in strict logs or older traces.
Solution: This is a legacy artifact message. The system now uses `app/services/document_service.py`. Ensure you are running the latest `main.py`.
Symptoms: Running scripts directly fails.
Cause: Python path not set correctly.
Solution:
Run scripts as modules from the backend directory:
```powershell
# Correct
python -m scripts.index_irdai_knowledge

# Or ensure PYTHONPATH includes backend
$env:PYTHONPATH="C:\path\to\backend"
python scripts/index_irdai_knowledge.py
```

Symptoms: Cleanup scripts fail.
Cause: Files are locked by a running python process or OS.
Solution: Stop all python.exe processes and try again.
Symptoms: Frontend loads but API calls fail.
Cause: Mismatch between frontend origin and backend allowed origins.
Solution: Check `app/dependencies.py` and ensure your localhost port (usually 8000) is listed in `allowed_origins`.
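The CORS failure comes down to a membership test: the frontend's origin string must appear in the backend's allow-list. A minimal sketch; the variable name and origin values here are assumptions mirroring what app/dependencies.py is described as holding:

```python
# Illustrative allow-list; the real one lives in app/dependencies.py.
allowed_origins = [
    "http://localhost:8000",
    "http://127.0.0.1:8000",
]

def origin_allowed(origin: str) -> bool:
    """True if the browser's Origin header exactly matches an entry."""
    return origin in allowed_origins
```

The match is exact, so `http://localhost:8000` and `http://127.0.0.1:8000` are distinct origins: add both if the frontend may be opened either way.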
Symptoms: Progress bar hangs indefinitely.
Cause: Backend error during RAG or LLM step.
Solution: Check the terminal running `python main.py` for traceback errors. Common causes include Ollama being down or memory overflows.
- Check Logs: The application uses structured logging. Check the console output.
- Run Tests: `python -m pytest tests/` to isolate failing components.
- Open Issue: Report on GitHub with your log output.