[5455919] Fix Q/DQ/Cast placement in 'FP32 required' custom ops (NVIDIA#554)
## What does this PR do?
**Type of change:** Bug fix
**Overview:** Fix incorrect quantization of custom ops when some input
tensors are required to be in INT8 and some in FP32.
| Before fix | After fix |
|----------------|-------------|
| <img width="841" height="623" alt="snap_custom_op_quant_incorrect" src="https://github.com/user-attachments/assets/88e4d460-fbae-4bcb-86c8-139d23ce04c8" /> | <img width="786" height="286" alt="snap_custom_op_quant_correct" src="https://github.com/user-attachments/assets/475079c2-a565-4f0d-b167-6d801ab83dfc" /> |
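To spot-check the resulting graph, here is a minimal inspection sketch (not part of this PR; the model path and `MyCustomOp` op type below are placeholders) that prints the producer op of every custom-op input. After the fix, INT8 inputs should be fed by `DequantizeLinear`, while "FP32 required" inputs should not:

```python
# Minimal sketch: list the producer op type of each input of a custom op in a
# quantized ONNX graph. INT8 inputs are expected to come from DequantizeLinear;
# 'FP32 required' inputs should come from a Cast/FP32 path instead.
# The model path and op type are placeholders.
import onnx

model = onnx.load("model.quant.onnx")
graph = model.graph

# Map each tensor name to the node that produces it.
producers = {out: node for node in graph.node for out in node.output}

CUSTOM_OP_TYPE = "MyCustomOp"  # placeholder: replace with your plugin's op type

for node in graph.node:
    if node.op_type != CUSTOM_OP_TYPE:
        continue
    print(f"{node.op_type} ({node.name})")
    for idx, inp in enumerate(node.input):
        producer = producers.get(inp)
        producer_type = producer.op_type if producer else "graph input / initializer"
        print(f"  input[{idx}] <- {producer_type} ({inp})")
```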
## Usage
```sh
$ python -m modelopt.onnx.quantization --onnx_path=$MODEL_PATH.onnx \
--trt_plugins $PLUGIN_PATH.so \
--trt_plugins_precision $CUSTOM_OP_NAME:$PRECISION
```
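The same invocation can also be driven from a script, for example when quantizing several models in a row. A minimal sketch (the paths, plugin library, and the `MyPluginOp:fp32` precision string are placeholders, not values from this PR):

```python
# Minimal sketch: drive the quantization CLI from Python.
# All paths and the op/precision string are placeholders.
import subprocess

subprocess.run(
    [
        "python", "-m", "modelopt.onnx.quantization",
        "--onnx_path=model.onnx",
        "--trt_plugins", "libcustom_plugin.so",
        "--trt_plugins_precision", "MyPluginOp:fp32",
    ],
    check=True,
)
```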
## Testing
### 1. BEVFormer model
- Follow step 1 in
[README](https://github.com/NVIDIA/DL4AGX/tree/master/AV-Solutions/bevformer-int8-eq#1-export-model-to-onnx-and-compile-plugins).
- In the quantization step, do:
```sh
$ python -m modelopt.onnx.quantization --onnx_path=/mnt/models/bevformer_tiny_epoch_24_cp2_op13.onnx \
--trt_plugins=$PLUGIN_PATH \
--trt_plugins_precision MultiScaleDeformableAttnTRT:[int8,int32,fp32,int8,int8]:[int8] \
--high_precision_dtype fp16
```
> See the table in the "Overview" section above for the expected graph structure.
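Along the same lines as the inspection sketch above, a quick check one could run on the quantized output (the model path is a placeholder; the expected pattern follows the `[int8,int32,fp32,int8,int8]` input list requested in the command above):

```python
# Minimal sketch: check that only the INT8 inputs of MultiScaleDeformableAttnTRT
# are fed by DequantizeLinear, per the requested [int8,int32,fp32,int8,int8] list.
# The quantized model path is a placeholder.
import onnx

model = onnx.load("bevformer_tiny_epoch_24_cp2_op13.quant.onnx")
producers = {out: n for n in model.graph.node for out in n.output}
requested = ["int8", "int32", "fp32", "int8", "int8"]

for node in model.graph.node:
    if node.op_type != "MultiScaleDeformableAttnTRT":
        continue
    for idx, (inp, prec) in enumerate(zip(node.input, requested)):
        producer = producers.get(inp)
        is_dq = producer is not None and producer.op_type == "DequantizeLinear"
        status = "OK" if is_dq == (prec == "int8") else "UNEXPECTED"
        print(f"{node.name} input[{idx}] ({prec}): fed by "
              f"{producer.op_type if producer else 'graph input'} -> {status}")
```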
### 2. 5455919 model
Validated the model from bug 5455919.
## Before your PR is "*Ready for review*"
- **Make sure you read and follow [Contributor
guidelines](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CONTRIBUTING.md)**
and your commits are signed.
- **Is this change backward compatible?**: Yes
- **Did you write any new necessary tests?**: No
- **Did you add or update any necessary documentation?**: No
- **Did you update
[Changelog](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CHANGELOG.rst)?**:
Yes
## Additional Information
- NVIDIA/pull/363: Feature expansion.
- NVIDIA/pull/524: the graph cleanup is actually needed after the Q/DQ trimming around custom ops, so the cleanup lines were moved inside that function.
---------
Signed-off-by: gcunhase <[email protected]>
CHANGELOG.rst (+2 lines changed: 2 additions & 0 deletions):
```diff
@@ -20,12 +20,14 @@ Model Optimizer Changelog (Linux)
 **Bug Fixes**
 
 - Fix a bug in FastNAS pruning (computer vision models) where the model parameters were sorted twice messing up the ordering.
+- Fix Q/DQ/Cast node placements in 'FP32 required' tensors in custom ops in the ONNX quantization workflow.
 
 **New Features**
 
 - Add MoE (e.g. Qwen3-30B-A3B, gpt-oss-20b) pruning support for ``num_moe_experts``, ``moe_ffn_hidden_size`` and ``moe_shared_expert_intermediate_size`` parameters in Minitron pruning (``mcore_minitron``).
 - Add ``specdec_bench`` example to benchmark speculative decoding performance. See `examples/specdec_bench/README.md <https://github.com/NVIDIA/TensorRT-Model-Optimizer/tree/main/examples/specdec_bench#speculative-decoding-benchmark>`_ for more details.
 - Add FP8/NVFP4 KV cache quantization support for Megatron Core models.
+- Add flag ``trt_plugins_precision`` in ONNX autocast to indicate custom ops precision. This is similar to the flag already existing in the quantization workflow.
```