
Commit a36e3da

[Misc] Drop 0102 related lines (#3323)
### What this PR does / why we need it?

Since #3284 has been merged, we should drop the extra code that was previously kept only for version compatibility.

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.11.0

Signed-off-by: wangli <[email protected]>
1 parent 1c5b302 commit a36e3da

File tree

1 file changed (+0, -13 lines)


vllm_ascend/ops/vocab_parallel_embedding.py

Lines changed: 0 additions & 13 deletions
```diff
@@ -253,16 +253,3 @@ def _get_logits_normal(
         logits = logits[..., :self.org_vocab_size]

         return logits
-
-    def forward(
-        self,
-        lm_head: VocabParallelEmbedding,
-        hidden_states: torch.Tensor,
-        # keep this for version compatibility
-        sampling_metadata=None,  # type: ignore
-        embedding_bias: Optional[torch.Tensor] = None,
-    ) -> Optional[torch.Tensor]:
-        return LogitsProcessor.forward(self,
-                                       lm_head,
-                                       hidden_states,
-                                       embedding_bias=embedding_bias)
```
