This repository was archived by the owner on Sep 4, 2025. It is now read-only.

Commit eeee1c3

[TPU] Avoid initializing TPU runtime in is_tpu (vllm-project#7763)
1 parent aae74ef commit eeee1c3

File tree

1 file changed (+4 lines, −2 lines)

vllm/platforms/__init__.py

Lines changed: 4 additions & 2 deletions

```diff
@@ -8,8 +8,10 @@
 is_tpu = False
 try:
-    import torch_xla.core.xla_model as xm
-    xm.xla_device(devkind="TPU")
+    # While it's technically possible to install libtpu on a non-TPU machine,
+    # this is a very uncommon scenario. Therefore, we assume that libtpu is
+    # installed if and only if the machine has TPUs.
+    import libtpu  # noqa: F401
     is_tpu = True
 except Exception:
     pass
```
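The change above replaces a probe that initializes the TPU runtime (`xm.xla_device`) with a plain package check: if `libtpu` is installed, the machine is assumed to have TPUs. A minimal sketch of the same detect-by-import idea, using `importlib.util.find_spec` so that not even the package's import-time side effects run (the `has_package` helper is illustrative, not part of vLLM):

```python
import importlib.util


def has_package(name: str) -> bool:
    """Return True if a top-level package is installed, without importing it.

    find_spec() only consults the import machinery's finders, so it is
    cheap and has no side effects such as initializing a device runtime.
    """
    return importlib.util.find_spec(name) is not None


# Mirrors the commit's assumption: libtpu is installed
# if and only if the machine has TPUs.
is_tpu = has_package("libtpu")
```

The commit itself uses `try: import libtpu` / `except Exception`, which also executes `libtpu`'s module-level code; `find_spec` is a slightly stricter variant when even that is unwanted.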

0 commit comments
