Enable NullTokenizer for pretraining to reduce I/O access #4057

Open
asolergi-nv wants to merge 4 commits into NVIDIA:main from asolergi-nv:offline-tokenizer

Conversation

@asolergi-nv (Contributor)

What does this PR do ?

Problem

During pretraining, Megatron-LM requires a tokenizer config even though the data is already pretokenized. The tokenizer is only used for three integer values (vocab_size, eod, pad), yet every rank loads tokenizer files from the filesystem. At scale, this causes expensive redundant I/O on shared filesystems like Lustre. This PR is an improvement over #2865.

Changes

Update NullTokenizer so it can serve as a zero-I/O tokenizer for real pretokenized data (a sketch of the resulting behavior follows this list):

  • vocab_size returns the exact value passed (previously added +1 for an implicit eod token)
  • eod = vocab_size - 1 (last token in the vocabulary)
  • Added pad_id property (defaults to 0)
  • Added vocab_size to unique_identifiers for proper dataset cache invalidation
  • Replaced GPT2BPETokenizer with NullTokenizer in the gpt3_mcore_te_tp2_pp2_cp2 functional test, eliminating the need for --vocab-file and --merge-file. This test leverages HF tokenizers, so it showcases that they can be safely replaced with NullTokenizer
  • Updated --vocab-size in all existing NullTokenizer tests to preserve the same effective vocab size
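
For concreteness, here is an illustrative Python sketch of the post-PR NullTokenizer contract. The class and attribute names are simplified for illustration; this is not the actual Megatron-LM diff:

```python
# Illustrative sketch only: a zero-I/O stand-in tokenizer for pretokenized
# data, reflecting the semantics described in the list above.
class NullTokenizerSketch:
    def __init__(self, vocab_size: int, pad_id: int = 0):
        self._vocab_size = int(vocab_size)  # exact value, no implicit +1 for eod
        self._pad_id = pad_id

    @property
    def vocab_size(self) -> int:
        return self._vocab_size

    @property
    def eod(self) -> int:
        # eod is defined as the last token in the vocabulary
        return self._vocab_size - 1

    @property
    def pad(self) -> int:
        return self._pad_id  # defaults to 0

    @property
    def unique_identifiers(self) -> dict:
        # vocab_size participates in dataset cache invalidation
        return {"class": type(self).__name__, "vocab_size": self._vocab_size}
```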

Usage

Before (tokenizer files required):

```
--tokenizer-type TikTokenizer --tokenizer-model /path/to/vocab.json
```

After (zero file I/O):

```
--tokenizer-type NullTokenizer --vocab-size 128256
```
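
The --vocab-size value must match the tokenizer that produced the pretokenized data. One quick way to derive it, assuming the data was pretokenized with a Hugging Face tokenizer (the model name below is only an example):

```python
# Hedged helper: print the NullTokenizer flags matching a reference tokenizer.
from transformers import AutoTokenizer

ref = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # example only
print(f"--tokenizer-type NullTokenizer --vocab-size {len(ref)}")  # len() includes added tokens
```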

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@asolergi-nv asolergi-nv requested review from a team as code owners March 30, 2026 14:45
@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft March 30, 2026 14:45
@github-actions (Contributor)

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.


copy-pr-bot bot commented Mar 30, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 30, 2026
@asolergi-nv asolergi-nv marked this pull request as ready for review March 30, 2026 16:55
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 30, 2026 16:55
@dimapihtar (Contributor) left a comment:

LGTM. Thank you!

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Mar 31, 2026
@janEbert (Contributor)

When using this for training, I think it would be really important to have a small check whether the automatically assigned tokens actually match what the true tokenizer uses. Maybe as a script in tools? Then people could add it to job submission scripts to error out early in case things don't match.
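
A minimal sketch of such a check (the file name tools/check_null_tokenizer.py, the CLI flags, and the assumption of a Hugging Face reference tokenizer are all hypothetical, not part of this PR):

```python
# Hypothetical tools/check_null_tokenizer.py: fail fast if the NullTokenizer
# settings diverge from the tokenizer that pretokenized the data.
import argparse
import sys

from transformers import AutoTokenizer  # assumes an HF reference tokenizer


def main() -> int:
    parser = argparse.ArgumentParser()
    parser.add_argument("--reference-tokenizer", required=True)
    parser.add_argument("--vocab-size", type=int, required=True)
    parser.add_argument("--pad-id", type=int, default=0)
    args = parser.parse_args()

    ref = AutoTokenizer.from_pretrained(args.reference_tokenizer)
    errors = []
    if len(ref) != args.vocab_size:
        errors.append(f"vocab_size mismatch: {len(ref)} != {args.vocab_size}")
    # NullTokenizer defines eod as vocab_size - 1
    if ref.eos_token_id is not None and ref.eos_token_id != args.vocab_size - 1:
        errors.append(f"eod mismatch: {ref.eos_token_id} != {args.vocab_size - 1}")
    if ref.pad_token_id is not None and ref.pad_token_id != args.pad_id:
        errors.append(f"pad mismatch: {ref.pad_token_id} != {args.pad_id}")

    if errors:
        print("\n".join(errors), file=sys.stderr)
        return 1
    print("NullTokenizer settings match the reference tokenizer.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run as part of a job submission script, a check like this would surface a mismatch before any training step consumes compute.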

