
Commit ebf45d1 (1 parent: db686f0)

add qwq as default ollama model (#133)

Signed-off-by: ChengZi <chen.zhang@zilliz.com>

4 files changed: 5 additions & 5 deletions


README.md

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@
 <a href="https://discord.gg/mKc3R95yE5"><img height="20" src="https://img.shields.io/badge/Discord-%235865F2.svg?style=for-the-badge&logo=discord&logoColor=white" alt="discord"/></a>
 
 
-DeepSearcher combines reasoning LLMs (OpenAI o1, o3-mini, DeepSeek, Grok 3, Claude 3.7 Sonnet, etc.) and Vector Databases (Milvus, Zilliz Cloud etc.) to perform search, evaluation, and reasoning based on private data, providing highly accurate answer and comprehensive report. This project is suitable for enterprise knowledge management, intelligent Q&A systems, and information retrieval scenarios.
+DeepSearcher combines reasoning LLMs (OpenAI o1, o3-mini, DeepSeek, Grok 3, Claude 3.7 Sonnet, QwQ, etc.) and Vector Databases (Milvus, Zilliz Cloud etc.) to perform search, evaluation, and reasoning based on private data, providing highly accurate answer and comprehensive report. This project is suitable for enterprise knowledge management, intelligent Q&A systems, and information retrieval scenarios.
 
 ![Architecture](./assets/pic/deep-searcher-arch.png)
@@ -135,7 +135,7 @@ result = query("Write a report about xxx.") # Your question here
 <p> <a href="https://ollama.ai/download">Download</a> and install Ollama onto the available supported platforms (including Windows Subsystem for Linux).</p>
 <p> View a list of available models via the <a href="https://ollama.ai/library">model library</a>.</p>
 <p> Fetch available LLM models via <code>ollama pull &lt;name-of-model&gt;</code></p>
-<p> Example: <code>ollama pull qwen2.5:3b</code></p>
+<p> Example: <code>ollama pull qwq</code></p>
 <p> To chat directly with a model from the command line, use <code>ollama run &lt;name-of-model&gt;</code>.</p>
 <p> By default, Ollama has a REST API for running and managing models on <a href="http://localhost:11434">http://localhost:11434</a>.</p>
 </details>

config.yaml

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ provide_settings:
 
 # provider: "Ollama"
 # config:
-#   model: "qwen2.5:3b"
+#   model: "qwq"
 ## base_url: ""
 
 embedding:
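Uncommented, the Ollama block in config.yaml would read roughly as follows. This is a sketch: the exact key nesting under provide_settings is assumed from the hunk context, not shown in the diff.

```yaml
provide_settings:
  llm:
    provider: "Ollama"
    config:
      model: "qwq"       # new default after this commit
      # base_url: ""     # optional; Ollama serves http://localhost:11434 by default
```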

deepsearcher/llm/ollama.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 
 
 class Ollama(BaseLLM):
-    def __init__(self, model: str = "qwen2.5:3b", **kwargs):
+    def __init__(self, model: str = "qwq", **kwargs):
         from ollama import Client
 
         self.model = model
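The effect of the one-line change above can be sketched with a stripped-down stand-in for the wrapper. The `BaseLLM` base class, the `ollama.Client` import, and the `base_url` parameter are omitted or assumed here; only the constructor defaulting is shown.

```python
class Ollama:
    """Stand-in sketch of deepsearcher's Ollama wrapper (the real class extends BaseLLM)."""

    def __init__(self, model: str = "qwq", **kwargs):
        # After this commit, omitting `model` selects Ollama's `qwq` reasoning model
        # instead of the previous `qwen2.5:3b` default.
        self.model = model
        self.extra = kwargs


default_llm = Ollama()
print(default_llm.model)   # "qwq"

pinned_llm = Ollama(model="qwen2.5:3b")
print(pinned_llm.model)    # "qwen2.5:3b"
```

Callers that previously relied on the implicit default now get `qwq`; pinning the old model explicitly, as shown, preserves the prior behavior.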

evaluation/eval_config.yaml

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ provide_settings:
 
 # provider: "Ollama"
 # config:
-#   model: "qwen2.5:3b"
+#   model: "qwq"
 ## base_url: ""
 
 embedding:

0 commit comments