
Commit d8a9cb8

[Bugfix] fix bug when tp=1 (#3193)

### What this PR does / why we need it?
Addresses a bug in DenseOptimRowParallelOp that occurs when tensor parallelism is not used (tp=1): `quant_method.apply` was called with the op wrapper (`self`) instead of the underlying layer (`self.layer`).

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- vLLM version: v0.10.2
- vLLM main: vllm-project/vllm@52d0cb8

1 parent b72e332 · commit d8a9cb8

File tree

1 file changed: +3 −1 lines

vllm_ascend/ops/linear_op.py (3 additions, 1 deletion)

```diff
@@ -390,7 +390,9 @@ def apply_impl(
         bias_ = None if (self.tp_rank > 0 or self.skip_bias_add) else self.bias

         if self.tp_size == 1 or not self.reduce_results:
-            output = self.quant_method.apply(self, input_parallel, bias=bias_)
+            output = self.quant_method.apply(self.layer,
+                                             input_parallel,
+                                             bias=bias_)
         else:
             output_parallel = self.quant_method.apply(self.layer,
                                                       input_parallel,
```
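To illustrate why the first argument matters, here is a minimal, self-contained sketch (all class and attribute names are hypothetical simplifications, not the actual vllm-ascend API): the quantization method reads its weights from the layer object it is handed, while the custom op wrapper (DenseOptimRowParallelOp in this commit) holds no weights itself, so passing the wrapper instead of the wrapped layer fails on the tp=1 path.

```python
# Hypothetical minimal model of the bug; not the real vllm-ascend classes.

class QuantMethod:
    def apply(self, layer, x, bias=None):
        # Reads weights off the object it is given -- the wrapped layer.
        out = [sum(w * xi for w, xi in zip(row, x)) for row in layer.weight]
        if bias is not None:
            out = [o + b for o, b in zip(out, bias)]
        return out

class Layer:
    # The real vLLM layer: owns the weight tensor.
    def __init__(self, weight):
        self.weight = weight

class RowParallelOpWrapper:
    # Custom op wrapper delegating to the layer; has no .weight of its own,
    # so quant_method.apply(self, ...) would raise AttributeError.
    def __init__(self, layer, quant_method):
        self.layer = layer
        self.quant_method = quant_method

    def apply_impl(self, x):
        # The fix: pass the wrapped layer, not the wrapper itself.
        return self.quant_method.apply(self.layer, x)

op = RowParallelOpWrapper(Layer([[1.0, 2.0], [3.0, 4.0]]), QuantMethod())
print(op.apply_impl([1.0, 1.0]))  # -> [3.0, 7.0]
```

On the tp>1 path the code already passed `self.layer`; the patch makes the tp=1 branch consistent with it.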
