Commit 84879c6 ("update"), parent e726565

1 file changed: advanced_source/python_extension_autoload.rst (+54, -25 lines)

@@ -11,31 +11,29 @@ experience and enables users to adhere to the familiar PyTorch device
 programming model without needing to explicitly load or import device-specific
 extensions. On the other hand, it facilitates effortless
 adoption of existing PyTorch applications with zero-code changes on
-out-of-tree devices. For more information,
-see `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.
+out-of-tree devices. For further details, refer to the
+`[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.
 
 .. note::
 
     This feature is enabled by default and can be disabled using
     ``export TORCH_DEVICE_BACKEND_AUTOLOAD=0``.
     If you get an error like this: "Failed to load the backend extension",
-    this error has nothing to do with PyTorch, you should disable this feature
+    the error is unrelated to PyTorch itself; disable this feature
     and ask the out-of-tree extension maintainer for help.
 
 How to apply this mechanism to out-of-tree extensions?
 ------------------------------------------------------
 
-For example, if you have a backend named ``foo`` and a package named
-``torch_foo``. Make sure your package is based on PyTorch 2.5+ and includes
-the following in its ``__init__.py``:
+For instance, suppose you have a backend named ``foo`` and a corresponding package named ``torch_foo``. Ensure that
+your package is compatible with PyTorch 2.5+ and includes the following snippet in its ``__init__.py`` file:
 
 .. code-block:: python
 
     def _autoload():
         print("No need to import torch_foo anymore! Check things are working with `torch.foo.is_available()`.")
 
-Then the only thing you need to do is add an entry point to your Python
-package:
+Then the only thing you need to do is define an entry point within your Python package:
 
 .. code-block:: python
@@ -62,8 +60,7 @@ Examples
 ^^^^^^^^
 
 Here we take Intel Gaudi HPU and Huawei Ascend NPU as examples to show how to
-integrate your out-of-tree extension with PyTorch based on the autoloading
-mechanism.
+integrate your out-of-tree extension with PyTorch using the autoloading mechanism.
 
 `habana_frameworks.torch`_ is a Python package that enables users to run
 PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key.
@@ -72,24 +69,58 @@ is applied.
 
 .. _habana_frameworks.torch: https://docs.habana.ai/en/latest/PyTorch/Getting_Started_with_PyTorch_and_Gaudi/Getting_Started_with_PyTorch.html
 
+``habana_frameworks.torch`` is a submodule of ``habana_frameworks``; we add an entry point for
+``__autoload()`` in ``habana_frameworks/setup.py``:
+
 .. code-block:: diff
 
-    import torch
-    import torchvision.models as models
-    - import habana_frameworks.torch # <-- extra import
-    model = models.resnet50().eval().to("hpu")
-    input = torch.rand(128, 3, 224, 224).to("hpu")
-    output = model(input)
+    setup(
+        name="habana_frameworks",
+        version="2.5",
+  +     entry_points={
+  +         'torch.backends': [
+  +             "device_backend = habana_frameworks:__autoload",
+  +         ],
+  +     }
+    )
+
+In ``habana_frameworks/__init__.py``, we use a global variable to track whether our module has been loaded:
+
+.. code-block:: python
+
+    import os
+
+    is_loaded = False  # A module-level flag in habana_frameworks that tracks whether the module has been imported
+
+    def __autoload():
+        # This is an entry point for the PyTorch autoload mechanism.
+        # If the condition below is true, our backend has already been loaded, either explicitly
+        # or by the autoload mechanism, and importing it again must be skipped to avoid circular imports.
+        global is_loaded
+        if is_loaded:
+            return
+        import habana_frameworks.torch
+
+In ``habana_frameworks/torch/__init__.py``, we prevent circular imports by updating the state of the global variable:
+
+.. code-block:: python
+
+    import os
+
+    # This prevents the torch autoload mechanism from causing circular imports
+    import habana_frameworks
+
+    habana_frameworks.is_loaded = True
 
 `torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPU; it
 leverages the ``PrivateUse1`` device key and exposes the device name
 as ``npu`` to the end users.
 
 .. _torch_npu: https://github.com/Ascend/pytorch
 
-Define an entry point in `torch_npu/setup.py`_:
+We define an entry point in `torch_npu/setup.py`_:
 
-.. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/c164fbd5bb74790191ff8496b77d620fddf806d8/setup.py#L618
+.. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/master/setup.py#L618
 
 .. code-block:: diff
@@ -103,16 +134,14 @@ Define an entry point in `torch_npu/setup.py`_:
 +     }
      )
 
-``import torch_npu`` is also no longer needed after applying this mechanism:
+Unlike ``habana_frameworks``, ``torch_npu`` uses the environment variable ``TORCH_DEVICE_BACKEND_AUTOLOAD``
+to control the autoloading process. For example, it can be set to ``0`` to disable autoloading and prevent circular imports:
 
-.. code-block:: diff
+.. code-block:: python
 
+    import os
+
+    # Disable autoloading before running 'import torch'
+    os.environ['TORCH_DEVICE_BACKEND_AUTOLOAD'] = '0'
 
     import torch
-    import torchvision.models as models
-    - import torch_npu # <-- extra import
-    model = models.resnet50().eval().to("npu")
-    input = torch.rand(128, 3, 224, 224).to("npu")
-    output = model(input)
 
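The environment-variable switch used by ``torch_npu`` can be sketched in isolation. Setting the variable only has an effect before ``import torch`` runs, since the entry points fire during that import. The ``autoload_enabled`` helper below is a hypothetical stand-in for the check PyTorch performs internally, and its exact parsing may differ:

```python
import os

# The switch must be set before `import torch`; afterwards the autoload
# entry points have already run and the variable is no longer consulted.
os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = "0"


def autoload_enabled() -> bool:
    # Hypothetical stand-in for PyTorch's internal check: the feature is on
    # by default, and "0" switches it off (PyTorch's exact parsing may differ).
    return os.environ.get("TORCH_DEVICE_BACKEND_AUTOLOAD", "1") != "0"


print(autoload_enabled())  # False

os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = "1"
print(autoload_enabled())  # True
```

This is why ``torch_npu`` toggles the variable in its own setup code rather than after the fact: once ``import torch`` has completed, disabling autoload has no effect on the current process.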
 How it works
 ------------
