
Commit dc35345

fix: improve Ollama model detection in LocalRAGAgent

- Changed the model detection logic to treat a model as an Ollama model only when its name starts with 'ollama:' or contains 'Ollama - '.
- Previously, any model name containing 'ollama' (case-insensitive) was incorrectly treated as an Ollama model.
- This fixes the issue where model names like 'deepseek-r1' were being incorrectly identified as Ollama models.
1 parent 0c43536 commit dc35345
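The before/after detection predicates can be sketched as follows. The function names here are hypothetical (the real check lives inline in `LocalRAGAgent.__init__`), and the `'Ollama - '` label form is assumed from the new check:

```python
def is_ollama_model(model_name: str) -> bool:
    """New check: only an explicit 'ollama:' prefix or an 'Ollama - ' label counts."""
    return model_name.startswith("ollama:") or "Ollama - " in model_name


def is_ollama_model_old(model_name: str) -> bool:
    """Old check: any name containing 'ollama', case-insensitively, was flagged."""
    return model_name.startswith("ollama:") or "ollama" in model_name.lower()


# Explicitly labeled models are still detected under the new check:
assert is_ollama_model("ollama:mistral")
assert is_ollama_model("Ollama - deepseek-r1")

# A name that merely contains 'ollama' is no longer flagged:
assert is_ollama_model_old("my-ollama-finetune")
assert not is_ollama_model("my-ollama-finetune")
```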

File tree

2 files changed: +28 −28 lines


agentic_rag/gradio_app.py

Lines changed: 27 additions & 27 deletions
@@ -430,33 +430,33 @@ def create_interface():
 | Model | Parameters | Size | Download Command |
 |-------|------------|------|-----------------|
-| Gemma 3 | 1B | 815MB | ollama run gemma3:1b |
-| Gemma 3 | 4B | 3.3GB | ollama run gemma3 |
-| Gemma 3 | 12B | 8.1GB | ollama run gemma3:12b |
-| Gemma 3 | 27B | 17GB | ollama run gemma3:27b |
-| QwQ | 32B | 20GB | ollama run qwq |
-| DeepSeek-R1 | 7B | 4.7GB | ollama run deepseek-r1 |
-| DeepSeek-R1 | 671B | 404GB | ollama run deepseek-r1:671b |
-| Llama 3.3 | 70B | 43GB | ollama run llama3.3 |
-| Llama 3.2 | 3B | 2.0GB | ollama run llama3.2 |
-| Llama 3.2 | 1B | 1.3GB | ollama run llama3.2:1b |
-| Llama 3.2 Vision | 11B | 7.9GB | ollama run llama3.2-vision |
-| Llama 3.2 Vision | 90B | 55GB | ollama run llama3.2-vision:90b |
-| Llama 3.1 | 8B | 4.7GB | ollama run llama3.1 |
-| Llama 3.1 | 405B | 231GB | ollama run llama3.1:405b |
-| Phi 4 | 14B | 9.1GB | ollama run phi4 |
-| Phi 4 Mini | 3.8B | 2.5GB | ollama run phi4-mini |
-| Mistral | 7B | 4.1GB | ollama run mistral |
-| Moondream 2 | 1.4B | 829MB | ollama run moondream |
-| Neural Chat | 7B | 4.1GB | ollama run neural-chat |
-| Starling | 7B | 4.1GB | ollama run starling-lm |
-| Code Llama | 7B | 3.8GB | ollama run codellama |
-| Llama 2 Uncensored | 7B | 3.8GB | ollama run llama2-uncensored |
-| LLaVA | 7B | 4.5GB | ollama run llava |
-| Granite-3.2 | 8B | 4.9GB | ollama run granite3.2 |
-| Llama 3 | 8B | 4.7GB | ollama run llama3 |
-| Phi 3 | 4B | 4.0GB | ollama run phi3 |
-| Qwen 2 | 7B | 4.1GB | ollama run qwen2 |
+| Gemma 3 | 1B | 815MB | gemma3:1b |
+| Gemma 3 | 4B | 3.3GB | gemma3 |
+| Gemma 3 | 12B | 8.1GB | gemma3:12b |
+| Gemma 3 | 27B | 17GB | gemma3:27b |
+| QwQ | 32B | 20GB | qwq |
+| DeepSeek-R1 | 7B | 4.7GB | deepseek-r1 |
+| DeepSeek-R1 | 671B | 404GB | deepseek-r1:671b |
+| Llama 3.3 | 70B | 43GB | llama3.3 |
+| Llama 3.2 | 3B | 2.0GB | llama3.2 |
+| Llama 3.2 | 1B | 1.3GB | llama3.2:1b |
+| Llama 3.2 Vision | 11B | 7.9GB | llama3.2-vision |
+| Llama 3.2 Vision | 90B | 55GB | llama3.2-vision:90b |
+| Llama 3.1 | 8B | 4.7GB | llama3.1 |
+| Llama 3.1 | 405B | 231GB | llama3.1:405b |
+| Phi 4 | 14B | 9.1GB | phi4 |
+| Phi 4 Mini | 3.8B | 2.5GB | phi4-mini |
+| Mistral | 7B | 4.1GB | mistral |
+| Moondream 2 | 1.4B | 829MB | moondream |
+| Neural Chat | 7B | 4.1GB | neural-chat |
+| Starling | 7B | 4.1GB | starling-lm |
+| Code Llama | 7B | 3.8GB | codellama |
+| Llama 2 Uncensored | 7B | 3.8GB | llama2-uncensored |
+| LLaVA | 7B | 4.5GB | llava |
+| Granite-3.2 | 8B | 4.9GB | granite3.2 |
+| Llama 3 | 8B | 4.7GB | llama3 |
+| Phi 3 | 4B | 4.0GB | phi3 |
+| Qwen 2 | 7B | 4.1GB | qwen2 |

 ### HuggingFace Models

agentic_rag/local_rag_agent.py

Lines changed: 1 addition & 1 deletion
@@ -165,7 +165,7 @@ def __init__(self, vector_store: VectorStore = None, model_name: str = "mistrala
         # skip_analysis parameter kept for backward compatibility but no longer used

         # Check if this is an Ollama model
-        self.is_ollama = model_name.startswith("ollama:") or "ollama" in model_name.lower()
+        self.is_ollama = model_name.startswith("ollama:") or "Ollama - " in model_name

         if self.is_ollama:
             # Extract the actual model name from the prefix