@@ -11,31 +11,29 @@ experience and enables users to adhere to the familiar PyTorch device
programming model without needing to explicitly load or import device-specific
extensions. On the other hand, it facilitates effortless
adoption of existing PyTorch applications with zero-code changes on
- out-of-tree devices. For more information,
- see `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.
+ out-of-tree devices. For further details, refer to the
+ `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.

.. note::

    This feature is enabled by default and can be disabled using
    ``export TORCH_DEVICE_BACKEND_AUTOLOAD=0``.
    If you get an error like this: "Failed to load the backend extension",
-    this error has nothing to do with PyTorch, you should disable this feature
+    this error is unrelated to PyTorch itself; you should disable this feature
    and ask the out-of-tree extension maintainer for help.

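As a quick illustration of the opt-out described in the note above, the feature can be disabled per shell session; the variable name comes from the source, the rest is a minimal sketch:

```shell
# Opt out of the autoload mechanism for this shell session
export TORCH_DEVICE_BACKEND_AUTOLOAD=0
```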
How to apply this mechanism to out-of-tree extensions?
------------------------------------------------------

- For example, if you have a backend named ``foo`` and a package named
- ``torch_foo``. Make sure your package is based on PyTorch 2.5+ and includes
- the following in its ``__init__.py``:
+ For instance, suppose you have a backend named ``foo`` and a corresponding package named ``torch_foo``. Ensure that
+ your package is compatible with PyTorch 2.5+ and includes the following snippet in its ``__init__.py`` file:

.. code-block:: python

    def _autoload():
        print("No need to import torch_foo anymore! Check things are working with `torch.foo.is_available()`.")

- Then the only thing you need to do is add an entry point to your Python
- package:
+ Then the only thing you need to do is define an entry point within your Python package:

.. code-block:: python
@@ -62,8 +60,7 @@ Examples
^^^^^^^^

Here we take Intel Gaudi HPU and Huawei Ascend NPU as examples to show how to
- integrate your out-of-tree extension with PyTorch based on the autoloading
- mechanism.
+ integrate your out-of-tree extension with PyTorch using the autoloading mechanism.

`habana_frameworks.torch`_ is a Python package that enables users to run
PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key.
@@ -72,24 +69,58 @@ is applied.

.. _habana_frameworks.torch: https://docs.habana.ai/en/latest/PyTorch/Getting_Started_with_PyTorch_and_Gaudi/Getting_Started_with_PyTorch.html

+ Since ``habana_frameworks.torch`` is a submodule of ``habana_frameworks``, we add an entry point for
+ ``__autoload()`` in ``habana_frameworks/setup.py``:
+
.. code-block:: diff

- import torch
- import torchvision.models as models
- - import habana_frameworks.torch # <-- extra import
- model = models.resnet50().eval().to("hpu")
- input = torch.rand(128, 3, 224, 224).to("hpu")
- output = model(input)
+ setup(
+     name="habana_frameworks",
+     version="2.5",
+ +   entry_points={
+ +       'torch.backends': [
+ +           "device_backend = habana_frameworks:__autoload",
+ +       ],
+ +   }
+ )
+
+ In ``habana_frameworks/__init__.py``, we use a global variable to track whether our module has been loaded:
+
+ .. code-block:: python
+
+     import os
+
+     is_loaded = False  # A member variable of the habana_frameworks module that tracks whether our module has been imported
+
+     def __autoload():
+         # This is an entry point for the PyTorch autoload mechanism.
+         # If the following condition is true, our backend has already been loaded, either explicitly
+         # or by the autoload mechanism, and importing it again should be skipped to avoid circular imports.
+         global is_loaded
+         if is_loaded:
+             return
+         import habana_frameworks.torch
+
+ In ``habana_frameworks/torch/__init__.py``, we prevent circular imports by updating the state of the global variable:
+
+ .. code-block:: python
+
+     import os
+
+     # This is to prevent the torch autoload mechanism from causing circular imports
+     import habana_frameworks
+
+     habana_frameworks.is_loaded = True
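The two-file guard above can be condensed into a single runnable sketch; the module name here is illustrative, not part of the real package:

```python
import types

# Illustrative stand-in for the habana_frameworks package and its flag
fake_backend = types.ModuleType("fake_backend")
fake_backend.is_loaded = False

def __autoload():
    # Skip the import if the backend was already loaded, explicitly
    # or via a previous autoload call, to avoid circular imports.
    if fake_backend.is_loaded:
        return
    # In the real package, importing habana_frameworks.torch sets the
    # flag from inside its __init__.py; we emulate that side effect.
    fake_backend.is_loaded = True

__autoload()
__autoload()  # second call is a no-op thanks to the guard
print(fake_backend.is_loaded)  # → True
```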

`torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPU; it
leverages the ``PrivateUse1`` device key and exposes the device name
as ``npu`` to the end users.

.. _torch_npu: https://github.com/Ascend/pytorch

- Define an entry point in `torch_npu/setup.py`_:
+ We define an entry point in `torch_npu/setup.py`_:

- .. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/c164fbd5bb74790191ff8496b77d620fddf806d8/setup.py#L618
+ .. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/master/setup.py#L618

.. code-block:: diff
@@ -103,16 +134,14 @@ Define an entry point in `torch_npu/setup.py`_:
+ }
  )

- ``import torch_npu`` is also no longer needed after applying this mechanism:
+ Unlike ``habana_frameworks``, ``torch_npu`` uses the environment variable ``TORCH_DEVICE_BACKEND_AUTOLOAD``
+ to control the autoloading process. For example, we set it to ``0`` to disable autoloading and prevent circular imports:

- .. code-block:: diff
+ .. code-block:: python
+
+     # Disable autoloading before running 'import torch'
+     os.environ['TORCH_DEVICE_BACKEND_AUTOLOAD'] = '0'

  import torch
- import torchvision.models as models
- - import torch_npu # <-- extra import
- model = models.resnet50().eval().to("npu")
- input = torch.rand(128, 3, 224, 224).to("npu")
- output = model(input)
How it works
------------