Commit 68c5401
[Eagle] Fix attn_mask index out of range in high concurrency situations (#3187)
### What this PR does / why we need it?

Fixes a bug where repeated requests (often more than 100) to eagle3-qwen3-8b raise an "attn_mask index out of range" error under high concurrency.

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

```
python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --served-model-name Eagle3 --port 8000 --model Qwen/Qwen3-8B --seed 42 -tp 1 --speculative_config '{"model": "Tengyunw/qwen3_8b_eagle3", "draft_tensor_parallel_size": 1, "num_speculative_tokens": 5, "method": "eagle3"}'
```

Co-authored-by: liuruijin17 [email protected]

- vLLM version: v0.10.2
- vLLM main: vllm-project/vllm@52d0cb8

Signed-off-by: Icey <[email protected]>
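The failure mode the PR describes can be sketched as follows. Before the fix, the attention mask was pre-allocated with length `min(max_model_len, PAGED_ATTENTION_MASK_LEN)` (the env var defaulted to 10000), so any sequence position past that cap indexed off the end of the mask; the fix allocates the mask at the full `max_model_len`. The names and constants below (`build_causal_mask`, the small lengths) are illustrative stand-ins, not the actual vllm-ascend API:

```python
import numpy as np

# Illustrative constants; the real values come from model_config.max_model_len
# and the old PAGED_ATTENTION_MASK_LEN environment variable (default 10000).
MAX_MODEL_LEN = 2048   # stand-in for model_config.max_model_len
MASK_CAP = 1024        # stand-in for the old env-var cap

def build_causal_mask(mask_len: int) -> np.ndarray:
    # Pre-allocate a lower-triangular causal mask once, as a mask builder would.
    return np.tril(np.ones((mask_len, mask_len), dtype=np.int8))

old_mask = build_causal_mask(min(MAX_MODEL_LEN, MASK_CAP))  # capped: 1024 rows
new_mask = build_causal_mask(MAX_MODEL_LEN)                 # fixed: 2048 rows

# Under high concurrency a sequence can legally reach a position past the cap.
position = 1500
try:
    old_mask[position]  # raises IndexError: the capped mask is too short
except IndexError:
    print("capped mask: index out of range")

assert new_mask[position].sum() == position + 1  # full-length mask covers it
```

The trade-off is memory: the full-length mask is larger, but it can never be shorter than a position the scheduler is allowed to produce.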
1 parent: 1705501

File tree: 1 file changed (+1, -3)


vllm_ascend/spec_decode/eagle_proposer.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -1,5 +1,4 @@
 # SPDX-License-Identifier: Apache-2.0
-import os
 from typing import Optional

 import numpy as np
@@ -72,8 +71,7 @@ def __init__(self,
                                      1,
                                      device=device,
                                      dtype=torch.int32)
-        attn_mask_len = min(self.vllm_config.model_config.max_model_len,
-                            int(os.getenv("PAGED_ATTENTION_MASK_LEN", 10000)))
+        attn_mask_len = self.vllm_config.model_config.max_model_len
         self.attn_mask_builder = AttentionMaskBuilder(
             attn_mask_len, self.vllm_config.model_config.dtype)
```