@@ -23,7 +23,7 @@ see `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/
and ask the out-of-tree extension maintainer for help.

How to apply this mechanism to out-of-tree extensions?
- --------------------------------------------
+ ------------------------------------------------------

For example, if you have a backend named ``foo`` and a package named
``torch_foo``. Make sure your package is based on PyTorch 2.5+ and includes
@@ -61,7 +61,9 @@ Now the ``torch_foo`` module can be imported when running import torch:
Examples
^^^^^^^^

- TODO: take HPU and NPU as examples
+ Here we take Intel Gaudi HPU and Huawei Ascend NPU as examples to show how to
+ integrate your out-of-tree extension with PyTorch through the autoloading
+ mechanism.

`habana_frameworks.torch`_ is a Python package that enables users to run
PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key.
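For context, the discovery that makes this work happens at the end of ``import torch``: PyTorch looks up every entry point registered under the ``torch.backends`` group and invokes it. A minimal sketch of that discovery step using the standard library's ``importlib.metadata`` (the helper name ``autoload_device_extensions`` is illustrative, not PyTorch's internal API):

```python
import sys
from importlib.metadata import entry_points

def autoload_device_extensions(group="torch.backends"):
    """Discover and invoke every entry point registered under *group*."""
    if sys.version_info >= (3, 10):
        eps = entry_points(group=group)      # 3.10+ supports the group keyword
    else:
        eps = entry_points().get(group, [])  # older API returns a dict of groups
    loaded = []
    for ep in eps:
        hook = ep.load()  # imports the extension package and returns its hook
        hook()            # the extension registers its device backend here
        loaded.append(ep.name)
    return loaded

# On a machine with no out-of-tree extension installed, nothing is discovered.
print(autoload_device_extensions())
```

An extension therefore only needs to ship the entry point; users never have to import it explicitly.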
@@ -79,13 +81,30 @@ is applied.
    input = torch.rand(128, 3, 224, 224).to("hpu")
    output = model(input)

- `torch_npu`_ enables users to run PyTorch program on Huawei Ascend NPU, it
+ `torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPU. It
leverages the ``PrivateUse1`` device key and exposes the device name
as ``npu`` to the end users.
- ``import torch_npu`` is also no longer needed after applying this mechanism.

.. _torch_npu: https://github.com/Ascend/pytorch

+ Define an entry point in `torch_npu/setup.py`_:
+
+ .. _torch_npu/setup.py: https://github.com/Ascend/pytorch/blob/c164fbd5bb74790191ff8496b77d620fddf806d8/setup.py#L618
+
+ .. code-block:: diff
+
+      setup(
+          name="torch_npu",
+          version="2.5",
+    +     entry_points={
+    +         'torch.backends': [
+    +             'torch_npu = torch_npu:_autoload',
+    +         ],
+    +     }
+      )
+
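If the package is built from ``pyproject.toml`` rather than ``setup.py``, the same entry point can be declared declaratively; a sketch using the standard PEP 621 syntax, with the ``torch_npu`` names taken from the example above:

```toml
[project.entry-points."torch.backends"]
torch_npu = "torch_npu:_autoload"
```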
+ ``import torch_npu`` is also no longer needed after applying this mechanism:
+
.. code-block:: diff

    import torch