### Environment

- **TPU hardware:** v4-32 (TPU VM)
- **OS image:** `tpu-ubuntu2204-base`

### Steps to reproduce

On a fresh TPU VM:

```bash
sudo apt-get update
sudo apt-get install libopenblas-dev -y
pip install numpy
pip install torch "torch_xla[tpu]" -f https://storage.googleapis.com/libtpu-releases/index.html
```

Then run the verification command:

```bash
PJRT_DEVICE=TPU python3 -c "import torch_xla.core.xla_model as xm; print(xm.get_xla_supported_devices(\"TPU\"))"
```

### Expected behavior

The command should print the list of available TPU devices, e.g.:

```
['TPU:0', 'TPU:1', ...]
```

### Actual behavior

The command hangs indefinitely and never prints any output, which makes it impossible to confirm TPU accessibility from PyTorch/XLA on the TPU VM.

### Additional context

- This was run directly on a TPU VM (not Colab).
- No error messages are shown; it simply hangs after the import.
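For anyone trying to reproduce this without blocking a shell forever, the verification command can be wrapped in a subprocess with a timeout so the hang is detected and reported instead of stalling indefinitely. This is a minimal sketch, not part of the original report: the `run_with_timeout` helper and the 120-second limit are arbitrary choices for illustration.

```python
import os
import subprocess

def run_with_timeout(args, extra_env=None, timeout=120):
    """Run a command, returning (returncode, stdout).

    Returns (None, "") if the command exceeds the timeout,
    which is how the hang described above would manifest.
    """
    env = {**os.environ, **(extra_env or {})}
    try:
        proc = subprocess.run(
            args, env=env, capture_output=True, text=True, timeout=timeout
        )
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        return None, ""

# Usage on the TPU VM (expected to time out while the bug reproduces):
# code, out = run_with_timeout(
#     ["python3", "-c",
#      "import torch_xla.core.xla_model as xm; "
#      "print(xm.get_xla_supported_devices('TPU'))"],
#     extra_env={"PJRT_DEVICE": "TPU"},
# )
# code is None  -> the device query hung past the timeout
```

A `None` return code distinguishes "hung" from "crashed with an error", which matters here since no error message is ever printed.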