Commit 4351f99 (parent: 4b56667)

[Bugfix] Fix Dense module loading for sentence-transformers embedding models v12

Signed-off-by: FFFfff1FFFfff <[email protected]>
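For context on the commit title: sentence-transformers checkpoints describe an embedding pipeline in modules.json, and a Dense projection head typically lives in a numbered subfolder such as 2_Dense/ with its own config.json and weight file. The sketch below only illustrates that layout; the folder name, weight-file name, and state-dict keys are conventional assumptions, not code from this commit.

    import json
    from pathlib import Path

    import torch


    def load_dense_head(model_dir: str) -> torch.nn.Linear:
        """Rebuild the Dense projection described by <model_dir>/2_Dense/config.json."""
        dense_dir = Path(model_dir) / "2_Dense"  # conventional subfolder name
        cfg = json.loads((dense_dir / "config.json").read_text())
        # config.json also records an activation_function; ignored here for brevity.
        layer = torch.nn.Linear(cfg["in_features"], cfg["out_features"],
                                bias=cfg.get("bias", True))
        # Weights sit next to the config; newer checkpoints may ship
        # model.safetensors instead of pytorch_model.bin.
        state = torch.load(dense_dir / "pytorch_model.bin", map_location="cpu")
        # Key names follow the usual sentence-transformers Dense convention.
        layer.load_state_dict({"weight": state["linear.weight"],
                               "bias": state["linear.bias"]})
        return layer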

2 files changed: +1 -2 lines changed

requirements/test.txt (0 additions, 1 deletion)

@@ -968,7 +968,6 @@ setuptools==77.0.3
     # via
     #   lightning-utilities
     #   pytablewriter
-    #   torch
     #   triton
 shapely==2.1.1
     # via

vllm/transformers_utils/config.py (1 addition, 1 deletion)

@@ -38,7 +38,6 @@
                                              RWConfig, SpeculatorsConfig,
                                              Step3TextConfig, Step3VLConfig,
                                              UltravoxConfig)
-
 # yapf: enable
 from vllm.transformers_utils.configs.mistral import adapt_config_dict
 from vllm.transformers_utils.utils import check_gguf_file
@@ -67,6 +66,7 @@ def _get_hf_token() -> Optional[str]:
         return token
     return None
 
+
 _CONFIG_REGISTRY: dict[str, type[PretrainedConfig]] = {
     "chatglm": ChatGLMConfig,
     "deepseek_vl_v2": DeepseekVLV2Config,
