
Commit b8433ef

Fix fp8_per_tensor_gemm doc build error
Signed-off-by: Keval Morabia <[email protected]>
1 parent 4b7cdd6 · commit b8433ef

File tree

2 files changed: 2 additions, 1 deletion


docs/source/_templates/autosummary/module.rst (1 addition, 1 deletion)

@@ -11,7 +11,7 @@
 :recursive:
 {% for item in modules %}
 {% set full_item = fullname + '.' + item.split('.')[-1] %}
-{% if '.plugins.' not in full_item or full_item == 'modelopt.torch.opt.plugins.huggingface' %}
+{% if ('.plugins.' not in full_item or full_item == 'modelopt.torch.opt.plugins.huggingface') and full_item != 'modelopt.torch.quantization.backends.fp8_per_tensor_gemm' %}
 {{ full_item }}
 {% endif %}
 {%- endfor %}
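This condition controls which submodules the Sphinx autosummary template lists. A minimal standalone sketch (not part of the commit; the module list and rendering context are illustrative, since Sphinx normally supplies them during the doc build) evaluates the updated expression with Jinja2 to confirm that fp8_per_tensor_gemm is skipped while the other backends remain listed:

from jinja2 import Template

# Same expression as the template above, rendered outside Sphinx for illustration.
template = Template(
    "{% for item in modules %}"
    "{% set full_item = fullname + '.' + item.split('.')[-1] %}"
    "{% if ('.plugins.' not in full_item"
    " or full_item == 'modelopt.torch.opt.plugins.huggingface')"
    " and full_item != 'modelopt.torch.quantization.backends.fp8_per_tensor_gemm' %}"
    "{{ full_item }}\n"
    "{% endif %}"
    "{%- endfor %}"
)

# Hypothetical context; autosummary passes the real fullname/modules values.
print(template.render(
    fullname="modelopt.torch.quantization.backends",
    modules=["gemm_registry", "fp8_per_tensor_gemm", "nvfp4_gemm"],
))
# Prints gemm_registry and nvfp4_gemm; fp8_per_tensor_gemm is left out of the
# listing, so autosummary no longer tries to document it (which, per the commit
# message, is what caused the doc build error).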

modelopt/torch/quantization/backends/__init__.py (1 addition, 0 deletions)

@@ -16,3 +16,4 @@
 """Quantization backends."""
 
 from .gemm_registry import *
+from .nvfp4_gemm import *
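For reference, the end of backends/__init__.py after this hunk, reconstructed from the context lines above (lines 1–15 of the file are not shown in the hunk and are omitted here):

"""Quantization backends."""

from .gemm_registry import *
from .nvfp4_gemm import *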
