Commit 7fd97b4

Update vllm/model_executor/model_loader/utils.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Nandan Vallamdasu <[email protected]>
Signed-off-by: nandan2003 <[email protected]>
1 parent b1538f3 commit 7fd97b4

File tree

1 file changed: +1 −1 lines changed
  • vllm/model_executor/model_loader


vllm/model_executor/model_loader/utils.py

Lines changed: 1 addition & 1 deletion
@@ -166,7 +166,7 @@ def device_loading_context(module: torch.nn.Module, target_device: torch.device)
 """Caches the outputs of `_get_model_architecture`."""
 
 
-def _get_model_architecture(model_config: ModelConfig) -> tuple[type[nn.Module], str, bool]:
+def _get_model_architecture(model_config: ModelConfig) -> tuple[type[nn.Module], str]:
     from vllm.model_executor.models.adapters import (
         as_embedding_model,
         as_reward_model,
