Commit 0e3bb54

[Bugfix] Support compile for Transformers multimodal (vllm-project#23095)
Signed-off-by: raushan <[email protected]>
1 parent 569aefd commit 0e3bb54

File tree: 1 file changed, +7 −0 lines changed

vllm/model_executor/models/transformers.py

Lines changed: 7 additions & 0 deletions

@@ -709,6 +709,13 @@ def _can_concat(x: list[torch.Tensor]):
     MultiModalProcessor,
     info=MultiModalProcessingInfo,
     dummy_inputs=MultiModalDummyInputsBuilder)
+@support_torch_compile(
+    dynamic_arg_dims={
+        "input_ids": 0,
+        "positions": -1,
+        "intermediate_tensors": 0,
+        "inputs_embeds": 0,
+    })  # set `positions` to last dim to support Qwen-mrope
 class TransformersForMultimodalLM(TransformersForCausalLM, SupportsMultiModal):
     # Backwards compatibility for prev released models. State dicts back then
     # had different formats and cannot be loaded with `AutoModel` mapping as is
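The key detail in this patch is `"positions": -1` in `dynamic_arg_dims`: unlike the other arguments, whose batch/sequence dimension is dim 0, the positions tensor is marked dynamic along its *last* dimension so the same annotation works for both a 1-D `(seq_len,)` positions tensor and the 2-D `(3, seq_len)` tensor used by Qwen's mrope. A minimal sketch of that idea, using a hypothetical stand-in decorator (not vLLM's actual `support_torch_compile` implementation):

```python
# Sketch of how a compile-support decorator might record which dimension
# of each forward() argument is dynamic. The names `support_torch_compile_sketch`
# and `resolve_dim` are illustrative helpers, not part of vLLM's API.

def support_torch_compile_sketch(dynamic_arg_dims):
    """Attach the dynamic-dimension mapping to the class for a compile wrapper."""
    def wrap(cls):
        cls._dynamic_arg_dims = dict(dynamic_arg_dims)
        return cls
    return wrap


def resolve_dim(dim, ndim):
    """Negative dims index from the end, like Python sequence indexing."""
    return dim if dim >= 0 else ndim + dim


@support_torch_compile_sketch(dynamic_arg_dims={
    "input_ids": 0,      # dynamic along the token dimension
    "positions": -1,     # last dim: covers both (seq,) and (3, seq) mrope layouts
})
class ModelSketch:
    pass


# For a 1-D positions tensor, -1 resolves to dim 0;
# for a 2-D (3, seq_len) mrope tensor, it resolves to dim 1.
print(resolve_dim(ModelSketch._dynamic_arg_dims["positions"], ndim=1))  # → 0
print(resolve_dim(ModelSketch._dynamic_arg_dims["positions"], ndim=2))  # → 1
```

This is why the patch comments "set `positions` to last dim": a fixed `0` would point at the rope-section axis of the 2-D mrope tensor rather than the sequence axis that actually varies in length.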
