Conversation

@gcunhase (Contributor) commented Nov 13, 2025

What does this PR do?

Type of change: Bug fix

Overview: Fix incorrect quantization of custom ops when some input tensors are required to be in INT8 and some in FP32.

Before fix: [screenshot: snap_custom_op_quant_incorrect]
After fix: [screenshot: snap_custom_op_quant_correct]
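
Per the Codecov report below, the fix lands in modelopt/onnx/quantization/qdq_utils.py and modelopt/onnx/autocast/precisionconverter.py. As a rough sketch only (not the actual ModelOpt implementation; cast_fp32_inputs is a hypothetical helper), the intended placement can be pictured like this: INT8-required inputs receive Q/DQ pairs, while FP32-required inputs get an explicit Cast so they stay out of the quantized path:

import numpy as np
import onnx
import onnx_graphsurgeon as gs

def cast_fp32_inputs(graph: gs.Graph, op_type: str, input_precisions: list[str]) -> None:
    """Insert Cast(to=FLOAT) in front of every input of op_type marked 'fp32'.

    INT8-marked inputs are assumed to receive Q/DQ pairs elsewhere; other
    precisions (e.g. an INT32 index tensor) are left untouched.
    """
    for node in graph.nodes:
        if node.op != op_type:
            continue
        for idx, precision in enumerate(input_precisions):
            if precision != "fp32":
                continue
            original = node.inputs[idx]
            casted = gs.Variable(f"{original.name}_fp32", dtype=np.float32)
            cast = gs.Node(
                op="Cast",
                attrs={"to": onnx.TensorProto.FLOAT},
                inputs=[original],
                outputs=[casted],
            )
            graph.nodes.append(cast)
            node.inputs[idx] = casted  # rewire only this consumer
    graph.cleanup().toposort()

# Example: mirrors the BEVFormer invocation in "Testing" below.
graph = gs.import_onnx(onnx.load("model.onnx"))
cast_fp32_inputs(graph, "MultiScaleDeformableAttnTRT",
                 ["int8", "int32", "fp32", "int8", "int8"])
onnx.save(gs.export_onnx(graph), "model_fixed.onnx")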

Usage

$ python -m modelopt.onnx.quantization --onnx_path=$MODEL_PATH.onnx \
    --trt_plugins $PLUGIN_PATH.so \
    --trt_plugins_precision $CUSTOM_OP_NAME:$PRECISION
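
As illustrated in the Testing section below, $PRECISION can be a per-tensor list of the form [in_0,...,in_n]:[out_0,...,out_m], with one entry per input and one per output tensor of the custom op. This list format is what allows mixed requirements, e.g. INT8 for some inputs of an op and FP32 for others.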

Testing

1. BEVFormer model

  • Follow step 1 in the README.
  • In the quantization step, run:
$ python -m modelopt.onnx.quantization --onnx_path=/mnt/models/bevformer_tiny_epoch_24_cp2_op13.onnx \
      --trt_plugins=$PLUGIN_PATH \
      --trt_plugins_precision MultiScaleDeformableAttnTRT:[int8,int32,fp32,int8,int8]:[int8] \
      --high_precision_dtype fp16

See the before/after screenshots in "Overview" for the expected graph structure; a quick programmatic check of the quantized graph is sketched below.
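
To sanity-check the result, a minimal sketch using plain onnx (the .quant.onnx output filename is an assumption; adjust to your actual output path):

import onnx

model = onnx.load("/mnt/models/bevformer_tiny_epoch_24_cp2_op13.quant.onnx")  # assumed output name
producers = {out: node for node in model.graph.node for out in node.output}

for node in model.graph.node:
    if node.op_type != "MultiScaleDeformableAttnTRT":
        continue
    for idx, tensor_name in enumerate(node.input):
        producer = producers.get(tensor_name)
        op = producer.op_type if producer else "graph input / initializer"
        print(f"input {idx}: fed by {op}")
    # Expected for the requested [int8,int32,fp32,int8,int8] inputs:
    # DequantizeLinear on inputs 0, 3, 4 and a Cast (to FP32) on input 2.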

2. 5455919 model

Validated the model from bug 5455919.

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: No
  • Did you add or update any necessary documentation?: No
  • Did you update Changelog?: Yes

Additional Information

@gcunhase requested review from a team as code owners on November 13, 2025 19:38
@gcunhase changed the title from "Dev/gcunhasergio/fix custom op quant convert fp16" to "[5455919] Fix Q/DQ/Cast placement in 'FP32 required' custom ops" on Nov 13, 2025
@ajrasane (Contributor) left a comment


LGTM. Can we add a test case for this?

codecov bot commented Nov 13, 2025

Codecov Report

❌ Patch coverage is 32.25806% with 21 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.43%. Comparing base (9cd0824) to head (49a4513).
⚠️ Report is 2 commits behind head on main.

Files with missing lines                       Patch %   Lines
modelopt/onnx/quantization/qdq_utils.py         0.00%   17 Missing ⚠️
modelopt/onnx/autocast/precisionconverter.py   71.42%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #554      +/-   ##
==========================================
- Coverage   74.47%   74.43%   -0.04%     
==========================================
  Files         182      182              
  Lines       18225    18238      +13     
==========================================
+ Hits        13573    13576       +3     
- Misses       4652     4662      +10     

☔ View full report in Codecov by Sentry.

@gcunhase force-pushed the dev/gcunhasergio/fix_custom_op_quant_convert_fp16 branch from 7c854c7 to 49a4513 on November 17, 2025 15:15
@gcunhase enabled auto-merge (squash) on November 17, 2025 15:15
@gcunhase merged commit 6abded4 into NVIDIA:main on Nov 17, 2025
26 checks passed