
Feat: Enhance device compatibility with Lightning Trainer API#490

Open
yishutu wants to merge 6 commits into jwohlwend:main from yishutu:devices_arg

Conversation


yishutu commented Jul 17, 2025

This pull request improves device compatibility within the project by aligning with the devices argument specification of the PyTorch Lightning Trainer API.

Previously, device handling was less flexible. With these updates, the project supports the full range of inputs accepted for the devices parameter in the Lightning Trainer Class API, including:

  • Integer (e.g., devices=1 for a single GPU)
  • List of integers (e.g., devices=[0, 1] for specific GPUs)
  • String "auto" for automatic device selection
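The accepted forms above can be sketched as a small validation helper. This is a minimal illustration, not code from this PR; the helper name `normalize_devices` is hypothetical, but the three forms it accepts mirror what the Lightning Trainer's devices argument takes.

```python
from typing import List, Union

def normalize_devices(devices: Union[int, List[int], str]) -> Union[int, List[int], str]:
    """Validate a `devices` value against the three forms the
    Lightning Trainer accepts: a positive int (device count),
    a list of device indices, or the string "auto".
    Hypothetical helper for illustration only."""
    if isinstance(devices, bool):  # bool is a subclass of int; reject it explicitly
        raise ValueError(f"unsupported devices value: {devices!r}")
    if isinstance(devices, int):
        if devices < 1:
            raise ValueError("devices must be a positive integer")
        return devices
    if isinstance(devices, (list, tuple)):
        if not devices or not all(isinstance(d, int) and d >= 0 for d in devices):
            raise ValueError("device indices must be non-negative integers")
        return list(devices)
    if devices == "auto":
        return "auto"
    raise ValueError(f"unsupported devices value: {devices!r}")
```

A value that passes this check can be forwarded unchanged to `Trainer(devices=...)`; anything else fails fast with a clear error instead of surfacing deep inside Lightning.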

Note on TPU Usage:
When using TPUs with the Lightning Trainer, the devices argument is typically limited to specific options. Valid configurations for TPUs often include:

  • devices=1 (for a single TPU core)
  • devices='auto' (for automatic detection of available TPU cores)
  • devices=8 (to use all 8 cores of a TPU device)
  • A specified list of TPU core indices (e.g., devices=[0, 1] if applicable for your setup).
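The TPU-specific forms listed above can likewise be sketched as a predicate. Again, this is an illustrative helper and not part of the PR; the name `is_valid_tpu_devices` is hypothetical, and the accepted shapes follow the configurations noted above.

```python
from typing import List, Union

def is_valid_tpu_devices(devices: Union[int, List[int], str]) -> bool:
    """Return True if `devices` matches one of the TPU-friendly forms
    noted above: "auto", a positive core count (e.g. 1 or 8), or a
    non-empty list of core indices. Hypothetical helper for illustration."""
    if devices == "auto":
        return True
    if isinstance(devices, bool):  # guard against bool sneaking in as int
        return False
    if isinstance(devices, int):
        return devices >= 1  # e.g. 1 for a single core, 8 for all cores
    if isinstance(devices, list):
        return bool(devices) and all(isinstance(d, int) and d >= 0 for d in devices)
    return False
```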

These changes aim to make device configuration more robust and user-friendly, ensuring seamless integration with PyTorch Lightning's device management.

For reference, see the Lightning Trainer Class API - Devices

