This repository was archived by the owner on Sep 10, 2025. It is now read-only.

Commit fcadb14

Fix docstring args names (#1045)
* Fix docstring args names
* Update docstring for tasks
1 parent 925b7bd commit fcadb14

6 files changed: +10 −10 lines changed

build/builder.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -400,7 +400,7 @@ def _maybe_parellelize_model(
     if the user specifies using distributed inference. If not, this is a no-op.
 
     Args:
-        module (:class:`nn.Module`):
+        model (:class:`nn.Module`):
             Module to be parallelized.
         builder_args (:class:`BuilderArgs`):
             Command args for model building.
```

distributed/checkpoint.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -108,7 +108,7 @@ def load_checkpoints_to_model(
     We parallelize the module and load the distributed checkpoint to the model.
 
     Args:
-        module (:class:`nn.Module`):
+        model (:class:`nn.Module`):
             Module to be parallelized.
         builder_args (:class:`BuilderArgs`):
             Command args for model building.
```
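For context on the pattern `load_checkpoints_to_model` documents, here is a minimal, runnable sketch of loading a distributed checkpoint in place with stock `torch.distributed.checkpoint` (DCP). It is not torchchat's actual loader; the `ckpt_dir` path and the toy `nn.Linear` model are assumptions for illustration:

```python
# Minimal DCP load-in-place sketch; not torchchat's actual loader.
import torch.nn as nn
import torch.distributed.checkpoint as dcp

model = nn.Linear(8, 8)

# Produce a checkpoint so the load below has something to read; in
# practice this would come from a distributed training or conversion run.
dcp.save({"model": model.state_dict()}, checkpoint_id="ckpt_dir")

# Load in place: DCP reshards the stored tensors onto whatever layout
# the current (possibly parallelized) model uses.
state_dict = {"model": model.state_dict()}
dcp.load(state_dict, checkpoint_id="ckpt_dir")
model.load_state_dict(state_dict["model"])
```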

distributed/parallelize_llama.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -28,7 +28,7 @@ def apply_tp(
 
 
     Args:
-        module (:class:`nn.Module`):
+        model (:class:`nn.Module`):
             Module to be parallelized.
         world_mesh (:class:`DeviceMesh`):
             Object which describes the mesh topology
@@ -104,7 +104,7 @@ def parallelize_llama(
     the model must fit on GPU or CPU memory.
 
     Args:
-        module (:class:`nn.Module`):
+        model (:class:`nn.Module`):
             Module to be parallelized.
         world_mesh (:class:`DeviceMesh`):
             Object which describes the mesh topology
```
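As background for what `apply_tp` and `parallelize_llama` do with a `DeviceMesh`, here is a minimal sketch of tensor parallelism using stock PyTorch APIs rather than torchchat's own parallelization plan. The toy `FeedForward` module and the 2-rank CPU mesh are assumptions; it is meant to be launched with `torchrun --nproc-per-node 2`:

```python
# Minimal tensor-parallel sketch with stock PyTorch; not torchchat's apply_tp.
import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

class FeedForward(nn.Module):
    def __init__(self, dim: int = 16, hidden: int = 32):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)
        self.w2 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.w2(torch.relu(self.w1(x)))

# world_mesh describes the mesh topology; a single "tp" dimension here.
world_mesh = init_device_mesh("cpu", (2,), mesh_dim_names=("tp",))

model = FeedForward()
# Classic Megatron-style split: shard w1 column-wise and w2 row-wise, so the
# intermediate activation stays sharded and only w2's output is all-reduced.
parallelize_module(
    model,
    world_mesh["tp"],
    {"w1": ColwiseParallel(), "w2": RowwiseParallel()},
)

out = model(torch.randn(4, 16))  # each rank computes on its own shard
```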

distributed/world_maker.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ def launch_distributed(
     using distributed inference. If not, this is a no-op.
 
     Args:
-        config: str:
+        toml_config: str:
             toml file for the inference config.
     Returns:
         Tuple[Optional[DeviceMesh], Optional[ParallelDims]]:
```

eval.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -167,7 +167,7 @@ def eval(
     Args:
         model (Transformer): The pre-trained language model to evaluate.
         tokenizer: The tokenizer to use for encoding/decoding text.
-        task (str): The name of the evaluation task to perform.
+        tasks (Optional[list]): The names of the evaluation tasks to perform.
         limit (Optional[int]): The maximum number of samples to evaluate (None for all available).
         max_seq_length (Optional[int]): The maximum sequence length allowed for input text.
 
@@ -210,7 +210,7 @@ def main(args) -> None:
     Args:
         checkpoint_path (Path): The path to the model checkpoint file to load.
         compile (bool): Whether or not to compile the model for optimization.
-        task (Optional[str]): The name of the evaluation task or a list of tasks to perform.
+        tasks (Optional[list]): The names of the evaluation tasks to perform.
         limit (Optional[int]): The maximum number of samples to evaluate (None for all available).
         max_seq_length (Optional[int]): The maximum sequence length allowed for input text.
 
```
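The rename matters at call sites: `tasks` takes a list of task names rather than a single string. A hypothetical invocation based solely on the documented signature; the task names, the surrounding `model`/`tokenizer` objects, and the argument values shown are assumptions, and the real `eval()` may take additional parameters:

```python
# Hypothetical call shape inferred from the docstring above, for
# illustration only; model and tokenizer are assumed to exist already.
results = eval(
    model,                            # pre-trained Transformer to evaluate
    tokenizer,                        # tokenizer for encoding/decoding text
    tasks=["wikitext", "hellaswag"],  # a list of task names, not a single str
    limit=100,                        # at most 100 samples (None = all)
    max_seq_length=2048,              # cap on input length
)
```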

tokenizer/tiktoken.py

Lines changed: 3 additions & 3 deletions
```diff
@@ -116,16 +116,16 @@ def encode(
         s (str): The input string to be encoded.
         bos (bool): Whether to prepend the beginning-of-sequence token.
         eos (bool): Whether to append the end-of-sequence token.
-        allowed_tokens ("all"|set[str]): allowed special tokens in string
-        disallowed_tokens ("all"|set[str]): special tokens that raise an error when in string
+        allowed_special ("all"|set[str]): allowed special tokens in string
+        disallowed_special ("all"|set[str]): special tokens that raise an error when in string
 
     Returns:
         list[int]: A list of token IDs.
 
     By default, setting disallowed_special=() encodes a string by ignoring
     special tokens. Specifically:
     - Setting `disallowed_special` to () will cause all text corresponding
-      to special tokens to be encoded as natural text (insteading of raising
+      to special tokens to be encoded as natural text (instead of raising
       an error).
     - Setting `allowed_special` to "all" will treat all text corresponding
       to special tokens to be encoded as special tokens.
```
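The behavior described above mirrors the upstream `tiktoken` package, which can be used to see the two parameters in action. A short, runnable illustration against the upstream API rather than this repo's wrapper; the `cl100k_base` encoding and the sample string are arbitrary choices:

```python
# Runnable with `pip install tiktoken`; demonstrates allowed_special /
# disallowed_special on the upstream tiktoken API, not this repo's tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "hello <|endoftext|> world"

# disallowed_special=() : special-token text is encoded as natural text
# instead of raising an error.
ids_plain = enc.encode(text, disallowed_special=())

# allowed_special="all" : special-token text becomes real special tokens.
ids_special = enc.encode(text, allowed_special="all")

print(ids_plain)    # <|endoftext|> split into ordinary subword tokens
print(ids_special)  # contains the single <|endoftext|> token id (100257)
```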
