
Commit b00287f

minor updates on README
Signed-off-by: cliu-us <[email protected]>
1 parent 026a3ea commit b00287f

File tree

1 file changed: +2 -2 lines changed

examples/MX/README.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -27,7 +27,7 @@ First example is based on a toy model with only a few Linear layers, in which on
 >>> python simple_mx_example.py
 ```
 
-Expected output:
+A comparison between the different formats, including the first 3 elements of the output tensors and the norm of the difference from the FP32 reference, is shown below.
 
 | dtype | output[0, 0] | output[0, 1] | output[0, 2] | \|\|ref - out_dtype\|\|<sub>2</sub> |
 |:-----------|---------------:|---------------:|---------------:|------------------------:|
````
```diff
@@ -42,7 +42,7 @@ Expected output:
 
 
 ### Example 2
-The second example is the same as in the [DQ](../DQ_SQ/README.md) folder, except using [microxcaling](https://arxiv.org/abs/2310.10537) format. We demonstrate the effect of MXINT8, MXFP8, MXFP6, MXFP4 for weights, activations, and/or KV-cache.
+The second example is the same as the [DQ example](../DQ_SQ/README.md), except that it uses the [microxcaling](https://arxiv.org/abs/2310.10537) format. We only demonstrate `mxfp8` and `mxfp4` here, but MXINT8, MXFP8, MXFP6, and MXFP4 are all available for weights, activations, and/or the KV-cache.
 
 **1. Prepare Data** for the calibration process by converting it into tokenized form. An example of tokenization using `LLAMA-3-8B`'s tokenizer is below.
 
```
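For context on the microxcaling (MX) formats referenced in the diff above: their common idea is that each small block of elements shares one power-of-two scale, with the elements themselves stored in a narrow type. A minimal MXINT8-style sketch in NumPy is below; it is an illustration of the shared-scale scheme only, not this repository's implementation, and the function names and scale-selection rule are invented for the sketch.

```python
import numpy as np

def mx_quantize_int8(x, block_size=32):
    # MX-style block quantization sketch: each block of `block_size`
    # elements shares one power-of-two scale; elements are stored as int8.
    x = np.asarray(x, dtype=np.float32).reshape(-1, block_size)
    amax = np.abs(x).max(axis=1, keepdims=True)
    amax = np.where(amax == 0.0, 1.0, amax)  # avoid log2(0) for all-zero blocks
    # Power-of-two scale chosen so the largest element maps into [-127, 127].
    scale = 2.0 ** np.ceil(np.log2(amax / 127.0))
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def mx_dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(256).astype(np.float32)
q, scale = mx_quantize_int8(x)
x_hat = mx_dequantize(q, scale).reshape(-1)
# Same style of metric as the README's table: L2 norm vs the FP32 reference.
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative L2 error vs FP32 reference: {rel_err:.4f}")
```

The power-of-two constraint on the per-block scale is what distinguishes MX-style formats from ordinary per-tensor int8 quantization: the scale can be stored as a tiny shared exponent, and dequantization is an exponent shift rather than a full multiply.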
