Conversation

@gcunhase
Contributor

@gcunhase gcunhase commented Nov 7, 2025

What does this PR do?

Type of change: Bug fix

Overview: Loading the model with ONNX GraphSurgeon after quantization and FP16 conversion produced an ONNX model with an FP16 output instead of FP32, even though the Cast_to_fp32 node was correctly placed before the graph output. This PR fixes that issue.

Usage

$ python -m modelopt.onnx.quantization --onnx_path=$MODEL_NAME.onnx --high_precision_dtype=fp16

Testing

See bug 5620660.

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: No
  • Did you add or update any necessary documentation?: No
  • Did you update Changelog?: No

@gcunhase gcunhase requested a review from a team as a code owner November 7, 2025 01:41
@gcunhase gcunhase requested a review from i-riyad November 7, 2025 01:41
@codecov

codecov bot commented Nov 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.52%. Comparing base (5adb9ba) to head (bf42d85).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #524      +/-   ##
==========================================
- Coverage   73.52%   73.52%   -0.01%     
==========================================
  Files         181      181              
  Lines       18207    18204       -3     
==========================================
- Hits        13387    13384       -3     
  Misses       4820     4820              
