Commit 906abe3
Fix Llama4 shape mismatch for 32k+ context window (#842)
For `max_model_len > 32k`, Llama4 enables temperature adjustment:
https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama4.py#L719.
With the adjustment enabled, tensor `q` changes shape from 2D to 3D:
https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama4.py#L307.
This tensor is passed to `UnquantizedFusedMoEMethod -> forward`:
https://github.com/vllm-project/vllm-gaudi/blob/main/vllm_gaudi/ops/hpu_fused_moe.py#L163
causing invalid reshaping: we try to return a 3D `output.view` based
on a 2D output tensor.
Found that the following PRs introduced the bug: #680 and #684
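A minimal sketch of the shape problem and the flatten-then-restore pattern that resolves it. This uses NumPy arrays as a stand-in for the HPU tensors, and `fused_moe_forward` plus the placeholder expert computation are hypothetical, not the actual vllm-gaudi code:

```python
import numpy as np

def fused_moe_forward(x):
    # The fused-MoE path assumes a 2D (num_tokens, hidden_size) input.
    # Flatten any leading dimensions first and restore the original
    # shape at the end, so a 3D (batch, seq, hidden) input coming from
    # the temperature-adjusted attention path no longer breaks the
    # final reshape back to the caller's shape.
    orig_shape = x.shape
    hidden_size = orig_shape[-1]
    x2d = x.reshape(-1, hidden_size)   # (num_tokens, hidden_size)
    out2d = x2d * 2.0                  # placeholder for the expert computation
    return out2d.reshape(orig_shape)   # valid for both 2D and 3D inputs

q2d = np.ones((8, 16))      # adjustment disabled: 2D input
q3d = np.ones((2, 4, 16))   # adjustment enabled: 3D input
print(fused_moe_forward(q2d).shape)  # (8, 16)
print(fused_moe_forward(q3d).shape)  # (2, 4, 16)
```

Without the flatten/restore step, viewing the 2D kernel output with a 3D target shape is exactly the mismatch described above.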
---------
Signed-off-by: Artur Fierka <artur.fierka@intel.com>

1 parent 6e2d045
1 file changed: +4 −1 lines