
Commit f14a47a

Authored by Chris Chow (ckchow) and thomas chaton (tchaton)
guard against None in pytorch get_xla_supported_devices (#9572)
Co-authored-by: Chris Chow <[email protected]>
Co-authored-by: thomas chaton <[email protected]>
1 parent b530b7a commit f14a47a

File tree: 2 files changed, +4 −3 lines


CHANGELOG.md
Lines changed: 3 additions & 0 deletions

@@ -469,6 +469,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed `BasePredictionWriter` not returning the batch_indices in a non-distributed setting ([#9432](https://github.com/PyTorchLightning/pytorch-lightning/pull/9432))


+- Fixed an error when running in XLA environments with no TPU attached ([#9572](https://github.com/PyTorchLightning/pytorch-lightning/pull/9572))
+
+
 - Fixed check on torchmetrics logged whose `compute()` output is a multielement tensor ([#9582](https://github.com/PyTorchLightning/pytorch-lightning/pull/9582))


pytorch_lightning/utilities/xla_device.py
Lines changed: 1 addition & 3 deletions

@@ -70,9 +70,7 @@ def _is_device_tpu() -> bool:
         # we would have to use `torch_xla.distributed.xla_dist` for
         # multiple VMs and TPU_CONFIG won't be available, running
         # `xm.get_xla_supported_devices("TPU")` won't be possible.
-        if xm.xrt_world_size() > 1:
-            return True
-        return len(xm.get_xla_supported_devices("TPU")) > 0
+        return (xm.xrt_world_size() > 1) or bool(xm.get_xla_supported_devices("TPU"))

     @staticmethod
     def xla_available() -> bool:
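A minimal sketch of why the one-line change matters, not part of the commit itself. The helper `fake_get_xla_supported_devices` below is a hypothetical stand-in for `xm.get_xla_supported_devices`, which, per the commit title, is assumed to return None rather than an empty list when no TPU is attached:

```python
# Hypothetical stand-in for xm.get_xla_supported_devices: assumed (per the
# commit title) to return None instead of an empty list when the XLA
# environment has no TPU attached.
def fake_get_xla_supported_devices(devkind):
    return None

devices = fake_get_xla_supported_devices("TPU")

# Old check: len(None) raises TypeError, so _is_device_tpu() would crash.
try:
    has_tpu = len(devices) > 0
except TypeError as err:
    print(f"old check fails: {err}")

# New check: bool(None) is False, so the probe degrades gracefully.
print(f"new check: {bool(devices)}")  # -> new check: False
```

Because `or` short-circuits, `xm.get_xla_supported_devices("TPU")` is still only called when `xm.xrt_world_size()` is not greater than 1, preserving the early-return behavior of the original code.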
