Fix: Update torch.cuda.amp to torch.amp to Resolve Deprecation Warning #13483
Conversation
All Contributors have signed the CLA. ✅
👋 Hello @Bala-Vignesh-Reddy, thank you for submitting an Ultralytics YOLOv5 🚀 PR!

🛠️ Notes and Feedback: Your PR replaces all instances of `torch.cuda.amp` with `torch.amp`. If possible, please provide a minimum reproducible example (MRE) for broader validation, including any specific configurations or edge cases you've tested this with. This will help other contributors and engineers verify the fixes more effectively.

🐛 For additional guidance, you can refer to our Contributing Guide and CI Documentation.
I have read the CLA Document and I sign the CLA

May resolve #13226

@Bala-Vignesh-Reddy please review and resolve failing CI tests. Thank you!
@glenn-jocher please have a look at this PR and review it as soon as possible.
@Bala-Vignesh-Reddy this is a good PR but there is too much duplication of code. The check should be done once per file at the top after import. Can you please make this change? Thank you! |
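For reference, the kind of once-per-file pattern being requested might look like this; a minimal sketch, assuming a `packaging`-based version compare and an illustrative `smart_autocast` helper, neither of which is the PR's actual code:

```python
import torch
from packaging.version import parse as parse_version

# Do the version check once per file, right after the imports.
# torch.cuda.amp is deprecated in favor of torch.amp as of PyTorch 2.4.
TORCH_2_4 = parse_version(torch.__version__) >= parse_version("2.4.0")


def smart_autocast(enabled=True):
    """Return a CUDA autocast context using the non-deprecated API when available."""
    if TORCH_2_4:
        return torch.amp.autocast("cuda", enabled=enabled)
    return torch.cuda.amp.autocast(enabled=enabled)
```

Every call site in the file can then use the single helper (or the single module-level flag), instead of repeating the version check inline.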
@glenn-jocher I refactored the code and the version check is now done once at the top. Have a look at it and tell me if any changes are needed. Thank you!!
@glenn-jocher any further updates?
👋 Hello there! We wanted to let you know that we've decided to close this pull request due to inactivity. We appreciate the effort you put into contributing to our project, but unfortunately, not all contributions are suitable or aligned with our product roadmap. We hope you understand our decision, and please don't let it discourage you from contributing to open source projects in the future. We value all of our community members and their contributions, and we encourage you to keep exploring new projects and ways to get involved.

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Description
This PR addresses issue #13226, which raises a FutureWarning due to the deprecation of `torch.cuda.amp` as of PyTorch 2.4. It replaces all instances of `torch.cuda.amp` with `torch.amp` to resolve the warning.

Key Changes

- Replaced `torch.cuda.amp.autocast` with `torch.amp.autocast('cuda', ...)`.
- Replaced `torch.cuda.amp.GradScaler` with `torch.amp.GradScaler('cuda')`.
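Concretely, a call site changes roughly as follows; an illustrative sketch with a toy model, not the PR's actual diff, and it requires a CUDA device:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1).cuda()            # toy model for illustration
inputs = torch.randn(4, 8, device="cuda")

# Before (deprecated as of PyTorch 2.4, emits a FutureWarning):
#   scaler = torch.cuda.amp.GradScaler()
#   with torch.cuda.amp.autocast():
#       out = model(inputs)

# After: the device type is passed explicitly to the generic torch.amp API.
scaler = torch.amp.GradScaler("cuda")
with torch.amp.autocast("cuda"):
    out = model(inputs)
```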
Steps to Reproduce and Testing

To verify the fix, I used a custom test script that was previously showing the deprecation warning. The following tests were performed:
Test Script:
The test script used for verification:
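The script itself is not reproduced above; a minimal sketch of this kind of verification, assuming a CUDA device and a hypothetical toy model, could look like:

```python
import warnings

import torch
import torch.nn as nn

# Escalate FutureWarning to an error so any deprecated AMP call fails loudly.
warnings.simplefilter("error", FutureWarning)

model = nn.Linear(8, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.amp.GradScaler("cuda")  # the old torch.cuda.amp.GradScaler() would warn here

x = torch.randn(4, 8, device="cuda")
with torch.amp.autocast("cuda"):
    loss = model(x).sum()

# One scaled optimizer step to exercise the full AMP path.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print("No FutureWarning raised: AMP calls use the torch.amp API.")
```

Run once on the old code to see the script fail on the warning, and once on the patched code to see it complete cleanly.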
Screenshots
Warning Before the Change:
This screenshot shows the deprecation warning before the fix.
Warning After the Change:
This screenshot shows that the warning is no longer present after applying the fix.
Purpose & Impact
Additional Comments
Since the original PR #13244 has been inactive, I have submitted this new PR with the updated changes to resolve issue #13226.
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Updated the PyTorch mixed precision (AMP) method usage to align with the latest `torch.amp` standards for better compatibility and future-proofing.

📊 Key Changes

- Replaced `torch.cuda.amp` usages with `torch.amp` across various files: `val.py`, `common.py`, `train.py`, `segment/train.py`, and `utils/autobatch.py`.
- Updated `autocast` and `GradScaler` methods to specify `"cuda"` explicitly.

🎯 Purpose & Impact

- `torch.amp` is more generic and future-focused compared to `torch.cuda.amp`.
- Specifying `"cuda"` explicitly helps clarify intent and avoids potential confusion, particularly for non-CPU environments.
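To illustrate the "more generic" point: because the device type is now an argument rather than part of the module path, the same code can target CUDA or CPU. A small sketch, not taken from the PR:

```python
import torch
import torch.nn as nn

# Pick the backend at runtime; the AMP call itself no longer hard-codes CUDA.
device_type = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device_type == "cuda" else torch.bfloat16

model = nn.Linear(8, 1).to(device_type)
x = torch.randn(4, 8, device=device_type)

with torch.amp.autocast(device_type, dtype=amp_dtype):
    y = model(x)
print(y.dtype)  # torch.float16 on CUDA, torch.bfloat16 on CPU
```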