4 changes: 4 additions & 0 deletions src/MaxText/input_pipeline/_hf_data_processing.py
@@ -192,6 +192,7 @@ def preprocessing_pipeline(
use_sft=None,
sft_train_on_completion_only=True,
grain_worker_count=1, # only support 0 or 1
max_segments_per_seq=1,
):
"""pipeline for preprocessing HF dataset"""

@@ -301,6 +302,7 @@ def lists2array(x):
grain.experimental.PackAndBatchOperation(
batch_size=global_batch_size // jax.process_count(),
length_struct=length_struct,
max_sequences_per_bin=max_segments_per_seq,
)
)
operations.append(_input_pipeline_utils.ReformatPacking(data_column_names))
@@ -386,6 +388,7 @@ def make_hf_train_iterator(
use_sft=config.use_sft,
sft_train_on_completion_only=config.sft_train_on_completion_only,
chat_template_path=config.chat_template_path,
max_segments_per_seq=config.max_segments_per_seq,
Collaborator commented:
According to this PR, max_segments_per_seq is only relevant for GPU packed attention, but this change applies it to TPU workloads as well.
To keep things clean, it's better to align the behavior across hardware and across pipelines (the grain pipeline's FirstFitPackIterDataset also has this parameter). We could set the default to -1, meaning no limit (i.e. pass None to PackAndBatchOperation).
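
A minimal sketch of that suggestion, assuming a -1 sentinel in the config; the sentinel handling is illustrative and not code from this PR, while PackAndBatchOperation and its arguments are taken from the diff above:

```python
# Illustrative sketch: treat -1 as "no limit" by passing None to grain's packer.
max_sequences_per_bin = None if max_segments_per_seq == -1 else max_segments_per_seq

operations.append(
    grain.experimental.PackAndBatchOperation(
        batch_size=global_batch_size // jax.process_count(),
        length_struct=length_struct,
        max_sequences_per_bin=max_sequences_per_bin,
    )
)
```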

Author replied:

I don't quite follow what you have in mind.
On GPUs, we need the same value for both TransformerEngine attention and grain; otherwise we get data corruption.
That is why we use the same config value for both.

Are you suggesting that we set the default for this config value to -1 here, and update the comment to say that on GPUs it should be changed to something like 32?
I can make that change (and switch to -1 here) if that is what is desired. If we do that, I can also add a warning when TE attention is used and max_segments_per_seq has not been updated.
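
A rough sketch of such a guard, assuming a hypothetical attention flag value ("cudnn_flash_te") for TE attention and the -1 "no limit" default discussed above; neither the flag check nor the message is from this PR:

```python
import warnings

# Hypothetical guard: TE packed attention needs a finite max_segments_per_seq
# that matches the grain packing limit; warn if it was left unlimited.
if config.attention == "cudnn_flash_te" and config.max_segments_per_seq == -1:
    warnings.warn(
        "TransformerEngine attention is enabled but max_segments_per_seq is -1 "
        "(unlimited); set it to a finite value (e.g. 32) so grain packing matches "
        "the attention kernel and packed batches are not corrupted."
    )
```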

)
return train_iter

@@ -437,5 +440,6 @@ def make_hf_eval_iterator(
use_sft=config.use_sft,
sft_train_on_completion_only=config.sft_train_on_completion_only,
chat_template_path=config.chat_template_path,
max_segments_per_seq=config.max_segments_per_seq,
)
return eval_iter