README.md (13 additions, 0 deletions)
@@ -121,6 +121,19 @@ pip install -e ".[emb]"
> If you need fully offline installs, pre-bundle the Sentence-Transformers model in your image/host and point to it via `--emb-model` (local path).
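
For example, here is a minimal sketch of pre-bundling at build time. The model name `sentence-transformers/all-MiniLM-L6-v2` and the target path `/opt/models/all-MiniLM-L6-v2` are assumptions for illustration; substitute whatever embedding model and location your deployment actually uses.

```bash
# Assumed example: download and save a Sentence-Transformers model into the image/host
pip install -U sentence-transformers

python - <<'EOF'
from sentence_transformers import SentenceTransformer

# Model name and output path are placeholders; replace with your own.
SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2").save("/opt/models/all-MiniLM-L6-v2")
EOF

# Later, point the app at the bundled copy (path is illustrative):
#   --emb-model /opt/models/all-MiniLM-L6-v2
```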
### 3) LLM Re-ranking (Ollama, optional)
If you intend to use the LLM re-ranking feature (available in the demo's "Advanced options"), you need to have [Ollama](https://ollama.com/) installed and the `llama3.1:8b` model pulled locally.
```bash
# Install Ollama (if you haven't already)
# Refer to https://ollama.com/download for installation instructions
# Pull the required LLM model
ollama pull llama3.1:8b
```
After installing Ollama and pulling the model, ensure your Ollama server is running (it usually starts automatically after installation).
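
A quick way to confirm everything is in place is sketched below; it relies only on Ollama's standard CLI and its default local HTTP endpoint on port 11434.

```bash
# List locally pulled models; llama3.1:8b should appear in the output
ollama list

# The API should respond on Ollama's default port
curl http://localhost:11434/api/tags

# If the server isn't running, start it in a separate terminal
ollama serve
```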