Commit c0cb613

Update README.md
1 parent 193ab3c commit c0cb613

File tree

1 file changed: 0 additions & 1 deletion


README.md

Lines changed: 0 additions & 1 deletion
```diff
@@ -48,7 +48,6 @@ python optillm.py
 > [!WARNING]
 > Note that llama-server currently does not support sampling multiple responses from a model, which limits the available approaches to the following:
 > `cot_reflection`, `leap`, `plansearch`, `rstar`, `rto`, `self_consistency`, `re2`, and `z3`.
-> In order to use other approaches, consider using an alternative compatible server such as [ollama](https://github.com/ollama/ollama).
 
 > [!NOTE]
 > You'll later need to specify a model name in the OpenAI client configuration. Since llama-server was started with a single model, you can choose any name you want.
```
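The note about the model name can be sketched as follows. This is a minimal illustration, not part of the commit: the proxy address `http://localhost:8000/v1` and the model name `any-name-works` are assumptions, and the snippet only builds the JSON body an OpenAI-compatible client would POST to `{base_url}/chat/completions`.

```python
import json

# Assumed address of the OpenAI-compatible endpoint (illustrative).
base_url = "http://localhost:8000/v1"

# llama-server was started with a single model, so any model name is accepted.
payload = {
    "model": "any-name-works",
    "messages": [{"role": "user", "content": "Hello"}],
}

# The request body an OpenAI client would send to {base_url}/chat/completions.
body = json.dumps(payload)
print(body)
```

In practice you would pass `base_url` (and any placeholder API key) to your OpenAI client and send this payload; the server ignores the model name since only one model is loaded.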
