fix: parallel state initialization error in Megatron to HF model conversion#1120

Merged
terrykong merged 10 commits into NVIDIA-NeMo:main from skirdey-inflection:main
Oct 7, 2025
Conversation

@skirdey-inflection
Contributor

@skirdey-inflection skirdey-inflection commented Sep 13, 2025

Error:

[rank0]: pp_rank = parallel_state.get_pipeline_model_parallel_rank()
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "RL/3rdparty/Megatron-LM-workspace/Megatron-LM/megatron/core/parallel_state.py", line 1474, in get_pipeline_model_parallel_rank
[rank0]: return torch.distributed.get_rank(group=get_pipeline_model_parallel_group())
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "RL/3rdparty/Megatron-LM-workspace/Megatron-LM/megatron/core/parallel_state.py", line 1288, in get_pipeline_model_parallel_group
[rank0]: _PIPELINE_MODEL_PARALLEL_GROUP is not None
[rank0]: AssertionError: pipeline_model parallel group is not initialized

Fix parallel state initialization error in Megatron to HF model conversion
Wraps both model loading and HF saving operations within the same temporary distributed context to ensure the pipeline parallel group remains initialized throughout the conversion process, preventing the "pipeline_model parallel group is not initialized" AssertionError.

Tested on a pipeline-parallel (PP) Megatron checkpoint to HF conversion.

Summary by CodeRabbit

  • Bug Fixes
    • Improved reliability of Megatron model export by ensuring a safe CPU-based distributed context during the process.
    • Prevents export failures in environments where certain Megatron training components are unavailable, with clearer error messaging.
    • Enhances compatibility for CPU-only setups and reduces intermittent export errors.

Signed-off-by: Stan Kirdey <stan@inflection.ai>
…text

fix: temporary distributed context to handle pipeline-parallel megatron checkpoint
@coderabbitai
Contributor

coderabbitai bot commented Sep 13, 2025

Walkthrough

Introduces an import-time fallback for temporary_distributed_context from megatron.bridge.training. Wraps model export in a CPU "gloo" distributed context, calling bridge.load_megatron_model with skip_temp_dist_context=True, then bridge.save_hf_pretrained. Existing output path checks and mcore state reset remain unchanged.

Changes

Cohort / File(s) Summary
Megatron export context management
nemo_rl/models/megatron/community_import.py
Adds guarded import of temporary_distributed_context; wraps export in with temporary_distributed_context(backend="gloo"); uses bridge.load_megatron_model(..., skip_temp_dist_context=True) and bridge.save_hf_pretrained(...); retains path validation and mcore reset logic.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Exporter as community_import.py
  participant Bridge as megatron.bridge
  participant Dist as temporary_distributed_context (gloo)

  User->>Exporter: export(input_path, output_path)
  Exporter->>Exporter: validate output path
  alt training bridge available
    Exporter->>Dist: enter context (backend="gloo")
    activate Dist
    Note right of Dist: CPU-based distributed context
    Exporter->>Bridge: load_megatron_model(input_path, skip_temp_dist_context=True)
    Bridge-->>Exporter: MegatronModel
    Exporter->>Bridge: save_hf_pretrained(model, output_path)
    Bridge-->>Exporter: saved
    Exporter->>Dist: exit context
    deactivate Dist
  else missing training bridge
    Exporter-->>User: ImportError("megatron.bridge.training is not available.")
  end
  Exporter->>Exporter: reset mcore state
  Exporter-->>User: done

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

I hop through gloo-lit fields of code,
A gentle context eases load,
With bridges crossed and models saved,
The export path is neatly paved.
Thump-thump—reset, and off I go,
A bunny shipping HF flow! 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title concisely and accurately summarizes the primary change: fixing a parallel state initialization error encountered when converting Megatron checkpoints to Hugging Face format. It directly reflects the PR objectives and the code changes (wrapping model load and HF save in a temporary distributed context to keep the pipeline model parallel group initialized). The phrasing is specific, developer-focused, and free of noise.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.

Comment @coderabbitai help to get the list of available commands and usage tips.

@skirdey-inflection skirdey-inflection changed the title Fix: parallel state initialization error in Megatron to HF model conversion fix: parallel state initialization error in Megatron to HF model conversion Sep 13, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
nemo_rl/models/megatron/community_import.py (1)

15-16: Make overwrite=True actually overwrite to avoid partial writes.

Currently, overwrite=True doesn’t clear the target dir; save may fail or mix old/new files.

Apply:

-import os
+import os
+import shutil
@@
-    if os.path.exists(output_path) and not overwrite:
-        raise FileExistsError(
-            f"HF checkpoint already exists at {output_path}. Delete it to run or set overwrite=True."
-        )
+    if os.path.exists(output_path):
+        if overwrite:
+            shutil.rmtree(output_path)
+        else:
+            raise FileExistsError(
+                f"HF checkpoint already exists at {output_path}. Delete it to run or set overwrite=True."
+            )

Also applies to: 102-106, 115-120
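The suggested overwrite semantics can be exercised with the standard library alone. The helper name `prepare_output_dir` is hypothetical, used here only to isolate the path check from the rest of the export:

```python
# Illustration of the suggested overwrite behavior: with overwrite=True the
# target directory is cleared first, so old and new files can never mix.
import os
import shutil
import tempfile

def prepare_output_dir(output_path, overwrite=False):
    if os.path.exists(output_path):
        if overwrite:
            shutil.rmtree(output_path)  # clear stale contents before saving
        else:
            raise FileExistsError(
                f"HF checkpoint already exists at {output_path}. "
                "Delete it to run or set overwrite=True."
            )
    os.makedirs(output_path)

root = tempfile.mkdtemp()
target = os.path.join(root, "hf_ckpt")
prepare_output_dir(target)                              # fresh directory
open(os.path.join(target, "stale.bin"), "w").close()    # simulate an old export
prepare_output_dir(target, overwrite=True)              # wipes stale.bin
leftover = os.listdir(target)                           # empty: no mixing
shutil.rmtree(root)
```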

🧹 Nitpick comments (2)
nemo_rl/models/megatron/community_import.py (2)

107-111: Chain ImportError to preserve the root cause (Ruff B904).

Attach the original ImportError so debugging isn’t opaque.

Apply:

-    try:
-        from megatron.bridge.training.model_load_save import temporary_distributed_context
-    except ImportError:
-        raise ImportError("megatron.bridge.training is not available.")
+    try:
+        from megatron.bridge.training.model_load_save import temporary_distributed_context
+    except ImportError as err:
+        raise ImportError("megatron.bridge.training is not available.") from err
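Why the `from err` matters: the original failure survives as `__cause__` on the re-raised exception, so tracebacks show both errors. A minimal sketch (the inner error message is made up for illustration):

```python
# Exception chaining per Ruff B904: re-raise with `from err` so the root
# cause is attached rather than swallowed.
def require_training_bridge(available=False):
    try:
        if not available:
            # Stand-in for the real failing import of megatron.bridge.training.
            raise ImportError("No module named 'megatron.bridge.training'")
    except ImportError as err:
        raise ImportError("megatron.bridge.training is not available.") from err

try:
    require_training_bridge()
except ImportError as exc:
    cause = exc.__cause__  # the original ImportError, preserved for debugging
```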

114-121: Always reset Megatron state even if export fails.

If load/save throws, rerun_state_machine.destroy_rerun_state_machine() won’t run. Use finally.

Apply:

-    # Export performs on CPU with proper distributed context
-    with temporary_distributed_context(backend="gloo"):
-        # Load the Megatron model
-        megatron_model = bridge.load_megatron_model(input_path, skip_temp_dist_context=True)
-        
-        # Save in HuggingFace format
-        bridge.save_hf_pretrained(megatron_model, output_path)
-
-    # resetting mcore state
-    import megatron.core.rerun_state_machine
-
-    megatron.core.rerun_state_machine.destroy_rerun_state_machine()
+    try:
+        # Export performs on CPU with proper distributed context
+        with temporary_distributed_context(backend="gloo"):
+            # Load the Megatron model
+            megatron_model = bridge.load_megatron_model(
+                input_path, skip_temp_dist_context=True
+            )
+            # Save in HuggingFace format
+            bridge.save_hf_pretrained(megatron_model, output_path)
+    finally:
+        # resetting mcore state
+        import megatron.core.rerun_state_machine
+        megatron.core.rerun_state_machine.destroy_rerun_state_machine()

Also applies to: 122-126
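The value of the `try`/`finally` here is that teardown runs on every path, including when the load or save raises. A stripped-down sketch of that guarantee, with toy stand-ins rather than the real mcore reset:

```python
# try/finally guarantees state teardown even when the export body raises.
cleanup_ran = []

def destroy_rerun_state_machine():
    # Stand-in for megatron.core.rerun_state_machine.destroy_rerun_state_machine().
    cleanup_ran.append(True)

def export(fail=False):
    try:
        if fail:
            raise RuntimeError("load failed")
        return "ok"
    finally:
        destroy_rerun_state_machine()  # runs on success and on failure

results = []
try:
    export(fail=True)
except RuntimeError:
    results.append("caught")
```

Even though the export body raised, the cleanup still executed before the exception propagated.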

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 94a3d49 and 930c5c8.

📒 Files selected for processing (1)
  • nemo_rl/models/megatron/community_import.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
nemo_rl/models/megatron/community_import.py (1)
nemo_rl/models/policy/megatron_policy_worker.py (1)
  • prepare_for_lp_inference (1694-1697)
🪛 Ruff (0.12.2)
nemo_rl/models/megatron/community_import.py

110-110: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


110-110: Avoid specifying long messages outside the exception class

(TRY003)

🔇 Additional comments (2)
nemo_rl/models/megatron/community_import.py (2)

115-120: Good fix: wrapping load+save in one temp distributed context.

This keeps the PP group initialized across both steps and addresses the original AssertionError. The explicit gloo backend and skip_temp_dist_context=True look correct.


95-101: hf_tokenizer_path is unused — wire it through or drop it.

In nemo_rl/models/megatron/community_import.py the hf_tokenizer_path parameter is unused (call at line ~120: bridge.save_hf_pretrained(megatron_model, output_path)). Repo search did not find save_hf_pretrained/load_megatron_model definitions — confirm whether bridge.save_hf_pretrained accepts a tokenizer_path.

Option A (preferred if supported by bridge API): pass tokenizer path.

-        bridge.save_hf_pretrained(megatron_model, output_path)
+        bridge.save_hf_pretrained(megatron_model, output_path, tokenizer_path=hf_tokenizer_path)

Option B: if unsupported, remove the parameter from the function signature and all call sites.

-def export_model_from_megatron(
-    hf_model_name: str,
-    input_path: str,
-    output_path: str,
-    hf_tokenizer_path: str,
-    overwrite: bool = False,
-):
+def export_model_from_megatron(
+    hf_model_name: str,
+    input_path: str,
+    output_path: str,
+    overwrite: bool = False,
+):

@euronymous-aithal
Contributor

@yaoyu-33 can you please review

@skirdey-inflection skirdey-inflection requested a review from a team as a code owner September 30, 2025 17:13
ZhiyuLi-Nvidia
ZhiyuLi-Nvidia previously approved these changes Oct 3, 2025
Contributor

@ZhiyuLi-Nvidia ZhiyuLi-Nvidia left a comment


LGTM! Thank you @skirdey-inflection for your contribution.

@terrykong
Collaborator

Hi @skirdey-inflection ! Thanks for the contribution. Can you update your branch and apply the pre-commit hooks to lint?

lint

Signed-off-by: Stan Kirdey <stan@inflection.ai>
@terrykong terrykong added the CI:L1 Run doctests, unit tests, and functional tests label Oct 7, 2025
@terrykong terrykong enabled auto-merge (squash) October 7, 2025 17:58
@terrykong terrykong added r0.4.0 CI:L1 Run doctests, unit tests, and functional tests and removed CI:L1 Run doctests, unit tests, and functional tests labels Oct 7, 2025
@terrykong terrykong merged commit 00cb570 into NVIDIA-NeMo:main Oct 7, 2025
61 of 69 checks passed
chtruong814 pushed a commit that referenced this pull request Oct 7, 2025
…ersion (#1120)

Signed-off-by: Stan Kirdey <stan@inflection.ai>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
odelalleau pushed a commit to odelalleau/NeMo-RL that referenced this pull request Oct 21, 2025
…ersion (NVIDIA-NeMo#1120)

Signed-off-by: Stan Kirdey <stan@inflection.ai>
PrinsYin pushed a commit to PrinsYin/RL that referenced this pull request Nov 30, 2025
…ersion (NVIDIA-NeMo#1120)

Signed-off-by: Stan Kirdey <stan@inflection.ai>
yuanhangsu1986 pushed a commit to yuanhangsu1986/RL-Nemontron-Edge-Omni that referenced this pull request Feb 21, 2026
…ersion (NVIDIA-NeMo#1120)

Signed-off-by: Stan Kirdey <stan@inflection.ai>
Signed-off-by: yuanhangs <yuanhangs@nvidia.com>

Labels

CI:L1 Run doctests, unit tests, and functional tests community-request external r0.4.0 x-inflection

6 participants