Add targeted device selection for CUDA and 128GB M3 Max MPS #5
Conversation
Pull Request Overview
This PR adds intelligent device selection for PyTorch operations, prioritizing CUDA while restricting MPS (Metal Performance Shaders) usage to Apple M3 Max machines with 128GB memory to ensure optimal performance and compatibility.
- Implements device detection utilities for Apple Silicon Macs including memory and chip identification
- Adds a centralized `select_device()` function with CUDA priority and restricted MPS support
- Updates model loading to be compatible across different device types
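The selection policy described above can be sketched as a small pure function. This is a minimal illustration of the priority logic only; the availability flags are parameters here for clarity, whereas the PR would derive them from `torch.cuda.is_available()`, `torch.backends.mps.is_available()`, and its sysctl-based chip and memory checks:

```python
def select_device(cuda_available: bool,
                  mps_available: bool,
                  is_m3_max_128gb: bool) -> str:
    """Return the device name the PR's policy would pick.

    Availability checks are passed in as booleans for testability; in the
    actual PR they come from PyTorch backend queries and the Apple Silicon
    detection helpers.
    """
    if cuda_available:
        return "cuda"
    if mps_available and is_m3_max_128gb:
        return "mps"
    # MPS present but the machine does not qualify: fall back to CPU.
    return "cpu"
```

The key design point is that MPS is only ever chosen when both the backend is available and the hardware check passes; every other path degrades to CPU rather than risking an unsupported configuration.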
```python
if platform.system() != "Darwin":
    return None
try:
    output = subprocess.check_output(["sysctl", "-n", name])
```
Copilot AI · Sep 16, 2025
Using subprocess.check_output with user-controlled input could be vulnerable to command injection. Consider validating the name parameter against an allowlist of known sysctl keys before passing it to the subprocess call.
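One way to address this review comment is to validate `name` against a fixed allowlist before it ever reaches the subprocess call. The sketch below is illustrative; the allowlisted keys shown (`hw.memsize`, `machdep.cpu.brand_string`) are real macOS sysctl keys relevant to memory and chip detection, but the helper name and exact set of keys are assumptions, not the PR's code:

```python
import platform
import subprocess
from typing import Optional

# Only sysctl keys the device-detection code actually needs; anything
# else is rejected before a subprocess is spawned.
ALLOWED_SYSCTL_KEYS = {"hw.memsize", "machdep.cpu.brand_string"}


def read_sysctl(name: str) -> Optional[str]:
    """Return the value of an allowlisted sysctl key on macOS, else None."""
    if name not in ALLOWED_SYSCTL_KEYS:
        raise ValueError(f"sysctl key not allowlisted: {name!r}")
    if platform.system() != "Darwin":
        return None
    try:
        return subprocess.check_output(
            ["sysctl", "-n", name], text=True
        ).strip()
    except (subprocess.CalledProcessError, OSError):
        return None
```

Because the argument list form of `check_output` is used (no `shell=True`), the injection risk is already limited, but the allowlist also guards against probing arbitrary sysctl keys.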
```python
print(
    "Warning: MPS backend detected but restricted to Apple M3 Max systems "
    "with 128GB memory. "
    f"Detected chip '{chip_name}' with {mem_gb:.1f}GB. Falling back to CPU."
)
```
Copilot AI · Sep 16, 2025
[nitpick] Consider using the logging module instead of print statements for warnings. This would allow better control over log levels and output formatting in production environments.
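Applying this suggestion might look like the sketch below. The helper name is hypothetical; the point is that `logger.warning` defers string formatting and respects log-level configuration, unlike a bare `print`:

```python
import logging

logger = logging.getLogger("device_select")


def warn_mps_fallback(chip_name: str, mem_gb: float) -> None:
    # %-style arguments are only interpolated if the record is emitted,
    # and the message can be filtered or redirected via logging config.
    logger.warning(
        "MPS backend detected but restricted to Apple M3 Max systems "
        "with 128GB memory. Detected chip '%s' with %.1fGB. "
        "Falling back to CPU.",
        chip_name,
        mem_gb,
    )
```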
```python
print(
    "Warning: MPS backend detected but unable to verify Apple M3 Max 128GB "
    "requirement. Falling back to CPU."
)
```
Copilot AI · Sep 16, 2025
[nitpick] Consider using the logging module instead of print statements for warnings. This would allow better control over log levels and output formatting in production environments.
Summary
- `select_device` helper, logging the device choice
- `device_map="auto"`, keeping compatibility across CUDA, MPS, and CPU

Testing
https://chatgpt.com/codex/tasks/task_e_68c879619398832c81d48af0ee412241