Commit 9907081

Commit message: fix
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
Parent: 160b77b

File tree

2 files changed: +2 −2 lines changed

transformer_engine/debug/pytorch/debug_quantization.py

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@
 }
 
 API_CALL_MODIFY = "modify_tensor()"
-STANDARD_QUANTIZE = "Quantize"  # Generalized: works with FP8, NVFP4, etc.
+STANDARD_QUANTIZE = "Quantize"
 HIGH_PRECISION = "High Precision"
 
 

transformer_engine/pytorch/tensor/mxfp8_tensor.py

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ def update_quantized(
 
     def quantize_impl(self, tensor: torch.Tensor) -> QuantizedTensor:
         """Quantize tensor implementation"""
-        return tex.quantize(tensor, self, None)
+        return tex.quantize(tensor, self)
 
     def is_quantizable(self, inp: torch.Tensor) -> bool:
         """Returns whether or not given inp can be quantized"""
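For context, the change drops a third `None` argument so `quantize_impl` calls the backend with only the tensor and the quantizer itself. The call pattern can be sketched as follows; this is a minimal pure-Python stand-in, not the real Transformer Engine API (`quantize` and `MXFP8Quantizer` here are toy stubs, and the rounding scheme is purely illustrative):

```python
def quantize(values, quantizer):
    """Toy stand-in for the backend quantize call: round each value to
    the nearest multiple of the quantizer's scale. Illustrative only --
    the real tex.quantize operates on torch tensors."""
    return [round(v / quantizer.scale) * quantizer.scale for v in values]

class MXFP8Quantizer:
    """Toy quantizer mirroring the method touched by this commit."""

    def __init__(self, scale=0.5):
        self.scale = scale

    def quantize_impl(self, values):
        """Two-argument call form, matching the fix (the trailing
        None argument was removed from the backend call)."""
        return quantize(values, self)

q = MXFP8Quantizer(scale=0.5)
print(q.quantize_impl([0.26, 1.1]))  # [0.5, 1.0]
```

The quantizer passes itself to the backend so all quantization parameters travel with one object, which is why no extra positional argument is needed.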
