`docs/source/models/supported_models.md` — 11 additions, 0 deletions
```diff
@@ -846,6 +846,13 @@ See [this page](#generative-models) for more information on how to use generative models.
   * ✅︎
   * ✅︎
   * ✅︎
+- * `Qwen2_5_VLForConditionalGeneration`
+  * Qwen2.5-VL
+  * T + I<sup>E+</sup> + V<sup>E+</sup>
+  * `Qwen/Qwen2.5-VL-3B-Instruct`, `Qwen/Qwen2.5-VL-72B-Instruct`, etc.
+  *
+  * ✅︎
+  * ✅︎
 - * `UltravoxModel`
   * Ultravox
   * T + A<sup>E+</sup>
```
```diff
@@ -880,6 +887,10 @@ The chat template for Pixtral-HF is incorrect (see [discussion](https://huggingf
 A corrected version is available at <gh-file:examples/template_pixtral_hf.jinja>.
 :::
 
+:::{note}
+To use the Qwen2.5-VL series models, you have to install the Hugging Face `transformers` library from source via `pip install git+https://github.com/huggingface/transformers`.
+:::
+
 ### Pooling Models
 
 See [this page](pooling-models) for more information on how to use pooling models.
```
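The added note requires a source install of `transformers` because, when this change landed, no released version shipped `Qwen2_5_VLForConditionalGeneration` yet. As an illustration only, a hypothetical guard like the one below could decide whether a source install is still needed; the helper names and the cutoff version `4.49.0` are assumptions for this sketch, not anything stated in the diff.

```python
# Hypothetical helper (not part of the PR): decide whether the installed
# transformers version predates Qwen2.5-VL support. The cutoff "4.49.0" is
# an assumption for illustration; the diff itself only says to install
# from source.

def _parse(version: str) -> tuple:
    """Parse a dotted release string like '4.48.2' into an int tuple."""
    return tuple(int(part) for part in version.split("."))

def needs_source_install(installed: str, cutoff: str = "4.49.0") -> bool:
    """Return True if `installed` is older than the first release assumed
    to ship Qwen2_5_VLForConditionalGeneration."""
    return _parse(installed) < _parse(cutoff)

print(needs_source_install("4.48.2"))  # older release: source install needed
print(needs_source_install("4.49.0"))  # assumed cutoff: a pip release would suffice
```

Once a release containing the model class is published, a check like this would let the note be relaxed to a plain `pip install transformers` for most users.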