@AKharytonchyk commented Jul 16, 2025

Pull Request: Fix PyTorch 2.7+ and ROCm/AMD GPU Compatibility

Summary

This PR fixes critical compatibility issues with PyTorch 2.7+ and improves ROCm/AMD GPU support in ComfyUI Impact Subpack.

Problem Statement

When using PyTorch 2.7.0+rocm6.3 with AMD GPUs, the UltralyticsDetectorProvider node fails to load YOLO models with the following error:

WeightsUnpickler error: Unsupported global: GLOBAL getattr was not an allowed global by default. 
Please use `torch.serialization.add_safe_globals([getattr])` or the `torch.serialization.safe_globals([getattr])` 
context manager to allowlist this global if you trust this class/function.

Root Cause

Starting with PyTorch 2.6, torch.load() defaults to weights_only=True, which restricts unpickling to an allowlist of safe globals. Ultralytics YOLO checkpoints reference the getattr builtin during deserialization, and getattr is not on that default allowlist, so loading fails.
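
A minimal sketch of the behavior change (the checkpoint path is only a placeholder):

import torch

# Pre-2.6 behavior: the entire pickle is trusted.
# state = torch.load("model.pt", weights_only=False)

# PyTorch 2.6+ default: equivalent to weights_only=True. Unpickling a global that
# is not on the allowlist (here, the getattr builtin) aborts with the error above.
state = torch.load("model.pt")  # fails on affected YOLO checkpoints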

Solution

1. PyTorch 2.7+ Compatibility Fix

  • Added getattr to PyTorch safe globals during module initialization
  • Maintains PyTorch's security features while allowing trusted YOLO models to load
  • Uses torch.serialization.add_safe_globals([getattr]) as recommended by the PyTorch documentation; a scoped alternative is sketched below
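
A sketch comparing the process-wide registration used here with the scoped alternative named in the error message (the checkpoint path is a placeholder):

import torch

# Process-wide: done once at import time in modules/subcore.py (guarded by hasattr there)
torch.serialization.add_safe_globals([getattr])

# Scoped alternative: allowlist getattr only for the duration of this block
with torch.serialization.safe_globals([getattr]):
    state = torch.load("face_yolov8m.pt")

Registering globally at module initialization means individual torch.load call sites, including those inside Ultralytics, do not need to be wrapped.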

2. ROCm/AMD GPU Auto-Detection

  • Enhanced inference_bbox() and inference_segm() functions with automatic device detection
  • Automatically uses CUDA/ROCm when available and no device is specified (see the note below)
  • Improves performance on AMD GPUs with ROCm support
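
For context (not part of the diff): ROCm builds of PyTorch expose the AMD GPU through the CUDA device API, which is why the "cuda" device string and torch.cuda.is_available() also cover ROCm. A quick check on a machine with a ROCm build installed:

import torch

print(torch.__version__)           # e.g. 2.7.0+rocm6.3
print(torch.version.hip)           # HIP/ROCm version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())   # True when the AMD GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. AMD Radeon RX 7900 XTX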

3. Comprehensive Documentation and Testing

  • Added ROCM_FIXES.md with detailed fix documentation
  • Included test_rocm_compatibility.py for validation
  • Maintains backward compatibility with existing installations

Code Changes

modules/subcore.py

# Add getattr to safe globals for PyTorch 2.6+ compatibility
if hasattr(torch.serialization, 'add_safe_globals'):
    torch.serialization.add_safe_globals([getattr])
    logging.info("[Impact Pack/Subpack] Added getattr to PyTorch safe globals for YOLO model compatibility")

# Enhanced device auto-detection in inference functions
def inference_bbox(..., device: str = ""):
    # If device is empty and CUDA/ROCm is available, use it
    if not device and torch.cuda.is_available():
        device = "cuda"
    ...

def inference_segm(..., device: str = ""):
    # If device is empty and CUDA/ROCm is available, use it
    if not device and torch.cuda.is_available():
        device = "cuda"
    ...
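
Hypothetical call shapes for the functions above (the arguments before device are elided in the diff, so model and image here are only illustrative names):

# No device given: picks "cuda" automatically when an NVIDIA or ROCm GPU is visible
results = inference_bbox(model, image)

# An explicit device still overrides the auto-detection, exactly as before
results = inference_segm(model, image, device="cpu")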

Testing Environment

  • OS: Linux
  • Python: 3.12.3
  • PyTorch: 2.7.0+rocm6.3
  • GPU: AMD Radeon RX 7900 XTX
  • ComfyUI: 0.3.44

Test Results

✅ All YOLO models (bbox and segm) load successfully
✅ ROCm device detection working correctly
✅ No security warnings or errors
✅ Backward compatibility maintained
✅ Performance improved on AMD GPUs

Impact

  • Fixes: Critical model loading failures on PyTorch 2.7+
  • Enhances: AMD GPU support with ROCm
  • Maintains: Full backward compatibility
  • Improves: Performance on systems with GPU acceleration

Related Issues

This addresses the PyTorch 2.6+ security changes and ROCm compatibility issues reported by users upgrading to newer PyTorch versions with AMD GPUs.

Breaking Changes

None. All changes are backward compatible and only activate when:

  1. torch.serialization.add_safe_globals is available (for the safe-globals registration)
  2. CUDA/ROCm is available (for device auto-detection)

Files Modified

  • modules/subcore.py - Core compatibility fixes
  • ROCM_FIXES.md - Documentation (new)
  • test_rocm_compatibility.py - Test script (new)

- Add getattr to PyTorch safe globals for PyTorch 2.6+ compatibility
  Fixes 'GLOBAL getattr was not an allowed global' error when loading YOLO models
- Add automatic CUDA/ROCm device detection in inference functions
  Improves performance on AMD GPUs with ROCm support
- Add comprehensive documentation and test script

Resolves model loading issues on PyTorch 2.7.0+rocm6.3 with AMD Radeon GPUs
Tested on AMD RX 7900 XTX with Python 3.12.3