PyTorch geometric quantization support #494
Conversation
Force-pushed ca267f1 to 5386681
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main     #494   +/-   ##
=======================================
  Coverage   73.43%   73.43%
=======================================
  Files         180      180
  Lines       18146    18146
=======================================
  Hits        13326    13326
  Misses       4820     4820
=======================================
```
tests/unit/torch/quantization/plugins/test_pytorch_geometric_plugin.py (outdated review thread, resolved)
Force-pushed 5386681 to 0bab18e
Force-pushed 0bab18e to 17f5636
CHANGELOG.rst (outdated)

```rst
Model Optimizer Changelog (Linux)
=================================

0.40 (2025-12-09)
```

Review comment: Exact date might get pushed.

Suggested change:

```diff
-0.40 (2025-12-09)
+0.40 (2025-12-xx)
```
Force-pushed 17f5636 to 243af33
Review comment: LGTM on a high level. Approving as codeowner.
Force-pushed 243af33 to 2f94190
Signed-off-by: Riyad Islam <[email protected]>
Force-pushed 2f94190 to b65e0b3
```python
def forward(self, input, *args, **kwargs):
    """Forward pass with quantization.

    Args:
        input: Input tensor to the linear layer
        *args: Additional positional arguments
        **kwargs: Additional keyword arguments

    Returns:
        Quantized output tensor
    """
    # Quantize input activations
    input_q = self.input_quantizer(input)

    # Quantize weights
    weight_q = self.weight_quantizer(self.weight)

    # Perform linear operation
    output = torch.nn.functional.linear(
        input_q,
        weight_q,
        self.bias if hasattr(self, "bias") and self.bias is not None else None,
    )

    # Quantize output (typically disabled by default)
    return self.output_quantizer(output)
```
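What each quantizer call in the forward method conceptually performs is fake quantization: values are rounded to a low-bit integer grid, then scaled back to floats, so the model sees quantization error while still computing in floating point. A minimal pure-Python sketch of symmetric per-tensor fake quantization (this is an illustrative stand-in, not the project's actual quantizer implementation):

```python
def fake_quantize(values, num_bits=8):
    """Symmetric per-tensor fake quantization on a list of floats.

    Quantize to signed integers in [-qmax, qmax], then dequantize back
    to floats. Illustrative only; real quantizers operate on tensors and
    support calibrated/per-channel scales.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    amax = max(abs(v) for v in values) or 1.0  # avoid divide-by-zero
    scale = amax / qmax
    # Round-to-nearest, clamp to the representable range, rescale.
    return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]


weights = [0.5, -1.0, 0.25, 0.875]
weights_q = fake_quantize(weights)
```

Every dequantized value stays within one quantization step (`scale`) of the original, which is the behavior the `input_quantizer`/`weight_quantizer` calls above rely on.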
Review comment (nit): This might not be needed.
Suggested change: remove the entire `forward` override (the quoted method above), since the inherited forward path already handles quantization.
Review comment: For example, see the definition for ConvNd layers:

```python
@QuantModuleRegistry.register({nn.Conv1d: "nn.Conv1d"})
```

The forward path should work (inherited from QuantLinearConvBase).
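The registration decorator the reviewer points at follows a common pattern: a class-level registry maps original module classes to their quantized replacements, so the conversion pass can swap modules without each plugin overriding `forward`. A minimal self-contained sketch of that pattern (all names here are hypothetical stand-ins, not the actual `QuantModuleRegistry` API):

```python
class ModuleRegistry:
    """Maps original module classes to (name, quantized class) pairs."""

    _table = {}

    @classmethod
    def register(cls, mapping):
        # mapping: {OriginalClass: "display name"}
        def decorator(quant_cls):
            for original, name in mapping.items():
                cls._table[original] = (name, quant_cls)
            return quant_cls
        return decorator

    @classmethod
    def lookup(cls, original):
        # Returns (name, quantized class) or None if unregistered.
        return cls._table.get(original)


class Conv1d:  # stand-in for nn.Conv1d
    pass


@ModuleRegistry.register({Conv1d: "nn.Conv1d"})
class QuantConv1d(Conv1d):
    pass
```

With this shape, a conversion pass only needs `ModuleRegistry.lookup(type(module))` to find the replacement class, which is why the reviewer suggests the PyG plugin can drop its explicit `forward` override.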
```python
if __name__ == "__main__":
    pytest.main([__file__])
```
Review comment (nit): For debugging, I directly run the unit tests using the VSCode/Cursor test explorer.

Suggested change: remove the `if __name__ == "__main__":` block.
What does this PR do?

Type of change: New feature

Overview: Support quantization of PyTorch Geometric

Usage: # Add a code snippet demonstrating how to use this (template placeholder, not filled in)

Testing:

```shell
python -m pytest tests/unit/torch/quantization/plugins/test_pytorch_geometric_plugin.py -v
```

Before your PR is "Ready for review"

Additional Information