
Commit 7b6cb72

Authored by K11OntheBoat and xiegegege

Fix wrong batch size of thinking_mask (#4296)

Co-authored-by: K11OntheBoat <[email protected]>
Co-authored-by: xiegegege <[email protected]>
1 parent 3cef851 commit 7b6cb72

File tree

1 file changed: +1 / -1 lines changed


fastdeploy/model_executor/pre_and_post_process.py

Lines changed: 1 addition & 1 deletion

@@ -196,7 +196,7 @@ def post_process_normal(
     """Post-processing steps after completing a single token generation."""
     # handle vl:
     if model_output.think_end_id != -1:
-        thinking_mask = model_output.enable_thinking
+        thinking_mask = model_output.enable_thinking[: sampler_output.sampled_token_ids.shape[0]]
         exists_think_end = (sampler_output.sampled_token_ids == model_output.think_end_id) & thinking_mask
         paddle.assign(
             paddle.where(
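The one-line fix slices `enable_thinking` down to the number of rows in `sampler_output.sampled_token_ids`, so the element-wise `&` between the two tensors operates on matching batch sizes. A minimal sketch of the idea, in plain Python rather than Paddle, under the assumption that `enable_thinking` is preallocated for the maximum batch size while the sampled tokens only cover the currently running batch (the token id and helper name below are hypothetical, not from FastDeploy):

```python
THINK_END_ID = 7  # hypothetical end-of-thinking token id


def exists_think_end(sampled_token_ids, enable_thinking):
    """Flag which requests just emitted the end-of-thinking token.

    Mirrors the commit's fix: slice the preallocated mask down to the
    sampled batch size before combining it element-wise, i.e.
    thinking_mask = model_output.enable_thinking[: sampled.shape[0]].
    """
    thinking_mask = enable_thinking[: len(sampled_token_ids)]  # the fix
    # After slicing, both sequences have the same length, so the
    # element-wise combination below is well-defined.
    assert len(thinking_mask) == len(sampled_token_ids)
    return [
        tok == THINK_END_ID and mask
        for tok, mask in zip(sampled_token_ids, thinking_mask)
    ]


# Current batch has 2 running requests; the mask was allocated for 4 slots.
sampled = [7, 3]
mask = [True, True, False, False]
print(exists_think_end(sampled, mask))  # [True, False]
```

In the real tensor code, combining a length-4 mask with a length-2 token tensor would either raise a shape error or broadcast incorrectly; truncating the mask to the live batch keeps the per-request semantics intact.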

0 commit comments
