Add autoloading tutorial #3037

Out-of-tree extension autoloading in Python
=============================================

What is it?
-----------

The extension autoloading mechanism enables PyTorch to automatically load
out-of-tree backend extensions without explicit import statements. It benefits
users in two ways: it lets them follow the familiar PyTorch device programming
model without needing to explicitly load or import device-specific extensions,
and it allows existing PyTorch applications to run on out-of-tree devices with
zero code changes. For more information,
see `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.

.. note::

    This feature is enabled by default and can be disabled by setting
    ``export TORCH_DEVICE_BACKEND_AUTOLOAD=0``.
    If you get an error such as "Failed to load the backend extension",
    the error is unrelated to PyTorch itself; disable this feature and ask
    the maintainer of the out-of-tree extension for help.
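
For example, to opt out for a single Python process, you can set the variable
before ``torch`` is imported. This is a minimal sketch; the feature is checked
when ``torch`` is imported, so setting the variable any later has no effect:

.. code-block:: python

    import os

    # Opt out of out-of-tree backend autoloading for this process only.
    # The variable must be set before ``import torch``.
    os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = "0"

    import torch  # backend entry points are not loaded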

How to apply this mechanism to out-of-tree extensions?
-------------------------------------------------------

Suppose you have a backend named ``foo`` and a package named ``torch_foo``.
Make sure your package requires PyTorch 2.5 or later and includes the
following in its ``__init__.py``:

.. code-block:: python

    def _autoload():
        print("No need to import torch_foo anymore! You can run torch.foo.is_available() directly.")
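
In a real extension, the ``_autoload`` hook usually does whatever an explicit
``import torch_foo`` used to do, such as registering the backend with PyTorch.
The sketch below shows one plausible shape for a backend built on the
``PrivateUse1`` dispatch key; ``torch_foo._C`` is a hypothetical compiled
module, and your package may register itself differently:

.. code-block:: python

    import torch


    def _autoload():
        # Hypothetical sketch: bring up the device runtime and expose the
        # backend under the user-facing name "foo". ``torch_foo._C`` stands in
        # for your compiled extension module.
        from torch_foo import _C  # noqa: F401

        # Map the PrivateUse1 dispatch key to the name "foo", so that tensors
        # can be created with device="foo".
        torch.utils.rename_privateuse1_backend("foo")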

Then the only thing you need to do is add an entry point to your Python
package:

.. code-block:: python

    from setuptools import setup

    setup(
        name="torch_foo",
        version="1.0",
        entry_points={
            "torch.backends": [
                "torch_foo = torch_foo:_autoload",
            ],
        },
    )
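
After installing the package (for example with ``pip install .``), you can
check that the entry point is discoverable. This sanity check is not required
by the mechanism; the snippet assumes Python 3.10 or newer for the ``group``
keyword:

.. code-block:: python

    from importlib.metadata import entry_points

    # List the hooks registered in the "torch.backends" group. Once the
    # package above is installed, the output should include "torch_foo".
    backend_eps = entry_points(group="torch.backends")  # Python 3.10+
    print(sorted(ep.name for ep in backend_eps))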

Now the ``torch_foo`` module is imported automatically when you run
``import torch``:

.. code-block:: python

    >>> import torch
    No need to import torch_foo anymore! You can run torch.foo.is_available() directly.
    >>> torch.foo.is_available()
    True

Examples
^^^^^^^^

Here we take Intel Gaudi HPU and Huawei Ascend NPU as examples.

`habana_frameworks.torch`_ is a Python package that enables users to run
PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key. After this
mechanism is applied, ``import habana_frameworks.torch`` is no longer
necessary.

.. _habana_frameworks.torch: https://docs.habana.ai/en/latest/PyTorch/Getting_Started_with_PyTorch_and_Gaudi/Getting_Started_with_PyTorch.html

.. code-block:: diff

      import torch
      import torchvision.models as models
    - import habana_frameworks.torch  # <-- extra import
      model = models.resnet50().eval().to("hpu")
      input = torch.rand(128, 3, 224, 224).to("hpu")
      output = model(input)

`torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPUs. It
leverages the ``PrivateUse1`` device key and exposes the device name ``npu``
to end users. After applying this mechanism, ``import torch_npu`` is no longer
needed either.

.. _torch_npu: https://github.com/Ascend/pytorch

.. code-block:: diff

      import torch
      import torchvision.models as models
    - import torch_npu  # <-- extra import
      model = models.resnet50().eval().to("npu")
      input = torch.rand(128, 3, 224, 224).to("npu")
      output = model(input)

How it works
------------

.. image:: ../_static/img/python_extension_autoload_impl.png
   :alt: Autoloading implementation
   :align: center

This mechanism is built on Python's `entry points
<https://packaging.python.org/en/latest/specifications/entry-points/>`_.
During ``import torch``, PyTorch discovers and loads all entry points in the
``torch.backends`` group that out-of-tree extensions have defined; this
happens in ``torch/__init__.py``. The implementation is in
`[RFC] Add support for device extension autoloading
<https://github.com/pytorch/pytorch/pull/127074>`_.
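
The loading step can be approximated with the standard library alone. The
following is a simplified sketch of the idea, not the exact PyTorch source: it
iterates over the ``torch.backends`` entry point group and calls each
registered hook. In the real implementation this step is skipped when
``TORCH_DEVICE_BACKEND_AUTOLOAD=0`` is set, as described in the note above.

.. code-block:: python

    import sys
    from importlib.metadata import entry_points


    def _autoload_backend_extensions():
        # Simplified sketch of what happens during ``import torch``: find
        # every entry point in the "torch.backends" group and call it.
        if sys.version_info >= (3, 10):
            backend_eps = entry_points(group="torch.backends")
        else:
            backend_eps = entry_points().get("torch.backends", [])
        for ep in backend_eps:
            hook = ep.load()  # imports the extension package
            hook()            # runs its registered ``_autoload`` function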

Conclusion
----------

This tutorial has guided you through the out-of-tree extension autoloading
mechanism, including its usage and implementation.

The diagram seems to be missing a pointer from `import torch` to the entry
points to load - the loading of entry points is triggered by `import torch`,
right?

Yes, I will make some changes to this diagram.

@jgong5 updated, please have a look.