Commit 60a98b2

[Docs] Mention model_impl arg when explaining Transformers fallback (vllm-project#14552)
Signed-off-by: Harry Mellor <[email protected]>
1 parent: 460f553

File tree

1 file changed: +4, -0 lines changed


docs/source/models/supported_models.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -59,6 +59,10 @@ llm.apply_model(lambda model: print(type(model)))
 
 If it is `TransformersModel` then it means it's based on Transformers!
 
+:::{tip}
+You can force the use of `TransformersModel` by setting `model_impl="transformers"` for <project:#offline-inference> or `--model-impl transformers` for the <project:#openai-compatible-server>.
+:::
+
 :::{note}
 vLLM may not fully optimise the Transformers implementation so you may see degraded performance if comparing a native model to a Transformers model in vLLM.
 :::
```
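The two forms named in the added tip can be sketched as the following invocations. This is an illustrative fragment, not taken from the diff: the model name `facebook/opt-125m` is a placeholder, and running either command requires a working vLLM installation with suitable hardware.

```shell
# Offline inference: pass model_impl as a keyword argument to vllm.LLM
# (model name below is a placeholder chosen for illustration).
python -c 'from vllm import LLM; llm = LLM(model="facebook/opt-125m", model_impl="transformers")'

# OpenAI-compatible server: pass the equivalent CLI flag.
vllm serve facebook/opt-125m --model-impl transformers
```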
