Commit 115e283

committed
update
Signed-off-by: He, Xin3 <xin3.he@intel.com>
1 parent: f1fe2a6

File tree

1 file changed (+18, -4 lines)


docs/source/3x/PT_NVFP4Quant.md

Lines changed: 18 additions & 4 deletions
@@ -21,22 +21,36 @@ The following table summarizes the NVFP4 quantization format:
     <th>Scaling Block Size</th>
     <th>Scale Data Type</th>
     <th>Scale Bits</th>
-    <th>Global Scale Data Type</th>
-    <th>Global Scale Bits</th>
+    <th>Global Tensor-Wise Scale Data Type</th>
+    <th>Global Tensor-Wise Scale Bits</th>
   </tr>
   <tr>
     <td>NVFP4</td>
     <td>E2M1</td>
     <td>4</td>
     <td>16</td>
-    <td>E4M3</td>
+    <td>UE4M3</td>
     <td>8</td>
     <td>FP32</td>
     <td>32</td>
   </tr>
 </table>
 
-At similar accuracy levels, NVFP4 can deliver lower memory usage and improved compute efficiency for multiply-accumulate operations compared to higher-precision formats. Neural Compressor supports post-training quantization to NVFP4, providing recipes and APIs for users to quantize LLMs easily. To provide the best performance, the global scale for activation is static.
+### Understanding the Scaling Mechanism
+
+NVFP4 uses a two-level scaling approach to maintain accuracy while reducing precision:
+
+- **Block-wise Scale**: The quantized tensor is divided into blocks of size 16 (the Scaling Block Size). Each block has its own scale factor stored in UE4M3 format (8 bits), which is used to convert the 4-bit E2M1 quantized values back to a higher-precision representation. This fine-grained scaling helps preserve local variations in the data.
+
+- **Global Tensor-Wise Scale**: In addition to the block-wise scales, a single FP32 (32-bit) scale factor is applied to the entire tensor. This global scale provides an additional level of normalization for the whole weight or activation tensor. For activations, this global scale is static (computed during calibration and fixed during inference) to optimize performance.
+
+The dequantization formula can be expressed as:
+
+$$\text{dequantized\_value} = \text{quantized\_value} \times \text{block\_scale} \times \text{global\_scale}$$
+
+This hierarchical scaling strategy balances compression efficiency with numerical accuracy, enabling NVFP4 to maintain model performance while significantly reducing memory footprint.
 
 ## Get Started with NVFP4 Quantization API
 
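The two-level dequantization described in the added section can be sketched in NumPy. This is a minimal illustration of the formula, not Neural Compressor's implementation: the function name and inputs are hypothetical, and it assumes the E2M1 values and UE4M3 block scales have already been decoded to floats (block size 16, per the table).

```python
import numpy as np

BLOCK_SIZE = 16  # NVFP4 scaling block size from the table above


def dequantize_nvfp4(q_values, block_scales, global_scale):
    """Hypothetical sketch of two-level NVFP4 dequantization.

    q_values:     1-D array of quantized values, already decoded from
                  4-bit E2M1 to float (one entry per element)
    block_scales: one scale per 16-element block, already decoded
                  from 8-bit UE4M3 to float
    global_scale: single FP32 tensor-wise scale
    """
    # Broadcast each block's scale over its 16 elements, then apply the
    # global tensor-wise scale, matching the document's formula:
    #   dequantized = quantized * block_scale * global_scale
    per_element_scale = np.repeat(block_scales, BLOCK_SIZE)
    return q_values * per_element_scale * global_scale


# Tiny example: two blocks of 16 elements, all quantized values equal to 1
q = np.ones(32, dtype=np.float32)
scales = np.array([0.5, 2.0], dtype=np.float32)
out = dequantize_nvfp4(q, scales, global_scale=4.0)
# First block: 1 * 0.5 * 4 = 2.0; second block: 1 * 2.0 * 4 = 8.0
```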
