[PyTorch Debug] NVFP4 debug stats support #2296
Open
pggPL wants to merge 16 commits into NVIDIA:main from pggPL:debug_nvfp4.
Changes from 13 commits
Commits (16):
- b463cd7 init (pggPL)
- 96622bc [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- 4b48026 fixes (pggPL)
- 55ed4ee [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- c5f1849 fix (pggPL)
- 160b77b fix (pggPL)
- 9907081 fix (pggPL)
- 23b2d1d fix (pggPL)
- cfa4af2 Merge remote-tracking branch 'upstream/main' into debug_nvfp4 (pggPL)
- c80533e [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
- b120f05 Merge branch 'main' into debug_nvfp4 (pggPL)
- 1abcb0b year update in license (pggPL)
- 857d495 fix (pggPL)
- e0c4027 Merge remote-tracking branch 'upstream/main' into debug_nvfp4 (pggPL)
- 780c75f fix (pggPL)
- df077bb [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
Diff hunk `@@ -2,54 +2,47 @@`: the DisableFP8Layer feature module is rewritten as a deprecated alias of DisableQuantizationLayer. The standalone `fp8_gemm_enabled` and `parse_config_and_api` implementations are removed; behavior is now inherited from DisableQuantizationLayer. New content of the module (the unchanged copyright line is outside the hunk):

```python
#
# See LICENSE for license information.

"""DisableFP8Layer Feature support for nvidia-dlframework-inspect

DEPRECATED: This is a backward compatibility alias for DisableQuantizationLayer.
New code should use DisableQuantizationLayer instead, which works with all quantization formats.
"""

import warnings

from nvdlfw_inspect.registry import Registry
from transformer_engine.debug.features.disable_quantization_layer import DisableQuantizationLayer


@Registry.register_feature(namespace="transformer_engine")
class DisableFP8Layer(DisableQuantizationLayer):
    """
    Disables all FP8 GEMMs in the layer.

    .. deprecated::
        Use :class:`DisableQuantizationLayer` instead. This class is maintained for
        backward compatibility only. DisableQuantizationLayer works with all quantization
        formats (FP8, NVFP4, etc.), not just FP8.

    Example
    -------
    .. code-block:: yaml

        example_disable_fp8_layer:
          enabled: True
          layers:
            layer_types: [fc1]
          transformer_engine:
            DisableFP8Layer:  # Deprecated: use DisableQuantizationLayer
              enabled: True
    """

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "DisableFP8Layer is deprecated. "
            "Use DisableQuantizationLayer instead, which works with all quantization "
            "formats (FP8, NVFP4, etc.).",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```

Collaborator comment on the line `class DisableFP8Layer(DisableQuantizationLayer):`: It may be worth raising a deprecation warning in the constructor or something.
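The deprecation-alias pattern used here can be exercised in isolation. A minimal standalone sketch (the `DisableQuantizationLayer` stand-in below is a placeholder for the real Transformer Engine class, not its actual implementation):

```python
import warnings


class DisableQuantizationLayer:
    """Placeholder for the real feature class from transformer_engine.debug.features."""

    def __init__(self):
        self.enabled = True


class DisableFP8Layer(DisableQuantizationLayer):
    """Deprecated alias: warns at construction time, then defers to the parent."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "DisableFP8Layer is deprecated; use DisableQuantizationLayer.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)


# Instantiating the alias emits a DeprecationWarning but otherwise behaves
# exactly like the parent class.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    layer = DisableFP8Layer()

assert isinstance(layer, DisableQuantizationLayer)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

Because the alias adds only the warning, existing YAML configs that reference `DisableFP8Layer` keep working while new configs can migrate to `DisableQuantizationLayer`.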
New file: transformer_engine/debug/features/disable_quantization_gemm.py (59 additions, 0 deletions)
```python
# Copyright (c) 2022-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# See LICENSE for license information.

"""DisableQuantizationGEMM Feature support for nvidia-dlframework-inspect"""

from nvdlfw_inspect.registry import Registry, api_method
from transformer_engine.debug.features.api import TEConfigAPIMapper


@Registry.register_feature(namespace="transformer_engine")
class DisableQuantizationGEMM(TEConfigAPIMapper):
    """
    Disables specific GEMM operations from using quantization, forcing high-precision execution.

    Works with any quantization format (FP8, NVFP4, etc.).

    Parameters
    ----------
    gemms: List[str]
        list of gemms to disable quantization for

        - fprop
        - dgrad
        - wgrad

    Example
    -------
    .. code-block:: yaml

        example_disable_quantization_gemm:
          enabled: True
          layers:
            layer_types: [fc1]
          transformer_engine:
            DisableQuantizationGEMM:
              enabled: True
              gemms: [dgrad, wgrad]
    """

    @api_method
    def fp8_gemm_enabled(
        self, config, layer_name: str, gemm: str, iteration: int
    ):  # pylint: disable=unused-argument
        """API call responsible for choice between high-precision and quantized GEMM execution.

        Note: Method name kept as 'fp8_gemm_enabled' for backward compatibility with the
        debug API, but it applies to all quantization formats (FP8, NVFP4, etc.).
        """
        for key in config:
            if key != "gemm":
                raise ValueError(f'[NVTORCH INSPECT ERROR] Unexpected key in config: "{key}".')

        # If this feature is invoked, then quantized GEMM is disabled (returns to high precision).
        # If not, then default behavior in TransformerEngineAPI
        # is that fp8_gemm() API call returns True.
        return False, iteration + 1
```
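The contract of `fp8_gemm_enabled` (strict config-key validation, and a `(False, iteration + 1)` return whenever the feature fires) can be sketched as a standalone function; this mirrors the logic of the method above outside the `TEConfigAPIMapper` machinery:

```python
def fp8_gemm_enabled(config, layer_name, gemm, iteration):
    """Standalone mirror of DisableQuantizationGEMM.fp8_gemm_enabled's logic."""
    # Reject any config key other than "gemm", matching the PR's validation.
    for key in config:
        if key != "gemm":
            raise ValueError(f'[NVTORCH INSPECT ERROR] Unexpected key in config: "{key}".')
    # If the feature is invoked, quantized GEMM is disabled for this call,
    # and the iteration counter is advanced.
    return False, iteration + 1


# Feature invoked for a matching layer/GEMM: quantization is turned off.
enabled, next_iter = fp8_gemm_enabled({"gemm": ["dgrad", "wgrad"]}, "fc1", "dgrad", 10)
assert enabled is False and next_iter == 11

# Any unexpected config key is rejected loudly rather than silently ignored.
try:
    fp8_gemm_enabled({"enabled": True, "gemm": ["dgrad"]}, "fc1", "dgrad", 0)
except ValueError as err:
    print(err)  # [NVTORCH INSPECT ERROR] Unexpected key in config: "enabled".
```

The strict validation is the notable design choice: a typo in a feature's YAML block fails fast instead of quietly leaving quantization enabled.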
Reviewer comment: Are we also able to get more information about the scaling factors, such as scale_inv_std, scale_inv_mse, scale_inv_max, and scale_inv_min?
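The statistics the reviewer asks about reduce to simple reductions over a tensor of inverse scaling factors. A minimal sketch, with made-up `scale_inv` values purely for illustration (the stat names come from the reviewer's question, not from the PR's implementation; an mse stat would additionally need a reference tensor to compare against):

```python
import statistics

# Hypothetical per-block inverse scaling factors from a quantized tensor.
# These values are invented for illustration only.
scale_inv = [0.5, 0.25, 0.125, 0.5, 1.0]

# min/max/std are plain reductions over the scale_inv values.
stats = {
    "scale_inv_min": min(scale_inv),
    "scale_inv_max": max(scale_inv),
    "scale_inv_std": statistics.pstdev(scale_inv),
}
print(stats)
```

In a real debug-stats pipeline these reductions would run on the quantizer's `scale_inv` tensor on device, but the aggregation itself is no more than this.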