Commit d91c6da

[improve] Remove redundant parentheses in pangu_moe.py (#2081)
### What this PR does / why we need it?

Remove redundant parentheses in pangu_moe.py.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Local.

- vLLM version: v0.10.0
- vLLM main: vllm-project/vllm@099c046

Signed-off-by: xleoken <[email protected]>
1 parent 6335fe3 commit d91c6da
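The change is mechanical but easy to illustrate: in Python, wrapping a single expression in parentheses (without a trailing comma) is a no-op, so `weight_loader=(self.weight_loader)` and `weight_loader=self.weight_loader` bind the exact same object. A minimal sketch of this equivalence (the `loader` function and the dicts below are hypothetical stand-ins, not code from the repository):

```python
def loader(weights):
    # Hypothetical stand-in for a weight_loader callback.
    return weights

# Parentheses around a lone expression do not change its value or identity;
# only a trailing comma would turn it into a tuple.
with_parens = dict(weight_loader=(loader))
without_parens = dict(weight_loader=loader)

assert with_parens["weight_loader"] is without_parens["weight_loader"]
assert (loader) is loader    # redundant parentheses: same object
assert (loader,) != loader   # a trailing comma makes a 1-tuple instead
```

Because the two spellings are semantically identical, the commit is a pure readability cleanup, consistent with the "No user-facing change" note above.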

File tree

2 files changed: +3 −3 lines changed
setup.py

Lines changed: 1 addition & 1 deletion

@@ -364,7 +364,7 @@ def _read_requirements(filename: str) -> List[str]:
     version=VERSION,
     author="vLLM-Ascend team",
     license="Apache 2.0",
-    description=("vLLM Ascend backend plugin"),
+    description="vLLM Ascend backend plugin",
     long_description=read_readme(),
     long_description_content_type="text/markdown",
     url="https://github.com/vllm-project/vllm-ascend",

vllm_ascend/models/pangu_moe.py

Lines changed: 2 additions & 2 deletions

@@ -122,7 +122,7 @@ def __init__(
             input_size=self.input_size,
             output_size=self.output_size,
             params_dtype=self.params_dtype,
-            weight_loader=(self.weight_loader))
+            weight_loader=self.weight_loader)
         if bias:
             self.bias = Parameter(
                 torch.empty(self.output_size_per_partition,
@@ -227,7 +227,7 @@ def __init__(
             input_size=self.input_size,
             output_size=self.output_size,
             params_dtype=self.params_dtype,
-            weight_loader=(self.weight_loader))
+            weight_loader=self.weight_loader)
         if not reduce_results and (bias and not skip_bias_add):
             raise ValueError("When not reduce the results, adding bias to the "
                              "results can lead to incorrect results")
