
Commit 9f23f16 ("update"), 1 parent 6215e6e

File tree: 1 file changed (+13, −17 lines)


advanced_source/python_backend_autoload.rst

Lines changed: 13 additions & 17 deletions
@@ -12,9 +12,7 @@ programming model without needing to explicitly load or import device-specific
 extensions. On the other hand, it facilitates effortless
 adoption of existing PyTorch applications with zero-code changes on
 out-of-tree devices. For more information,
-see `[RFC] Autoload Device Extension <rfc>`_.
-
-.. _rfc: https://github.com/pytorch/pytorch/issues/122468
+see `[RFC] Autoload Device Extension <https://github.com/pytorch/pytorch/issues/122468>`_.

 Examples
 ^^^^^^^^
@@ -31,8 +29,8 @@ is applied.
    import torch
    import torchvision.models as models
    import habana_frameworks.torch # <-- extra import
-   model = models.resnet50().eval().to(hpu)
-   input = torch.rand(128, 3, 224, 224).to(hpu)
+   model = models.resnet50().eval().to("hpu")
+   input = torch.rand(128, 3, 224, 224).to("hpu")
    output = model(input)

 `torch_npu`_ enables users to run PyTorch program on Huawei Ascend NPU, it
@@ -47,8 +45,8 @@ as ``npu`` to the end users.
    import torch
    import torchvision.models as models
    import torch_npu # <-- extra import
-   model = models.resnet50().eval().to(npu)
-   input = torch.rand(128, 3, 224, 224).to(npu)
+   model = models.resnet50().eval().to("npu")
+   input = torch.rand(128, 3, 224, 224).to("npu")
    output = model(input)

 How it works
@@ -58,14 +56,12 @@ How it works
    :alt: Autoloading implementation
    :align: center

-This mechanism is implemented based on Python's `entry_points`_ mechanism.
-We discover and load all of the specific entry points in ``torch/__init__.py``
-that are defined by out-of-tree extensions.
-Its implementation is in `[RFC] Add support for device extension autoloading <impl>`_
-
-.. _entry_points: https://packaging.python.org/en/latest/specifications/entry-points/
-
-.. _impl: https://github.com/pytorch/pytorch/pull/127074
+This mechanism is implemented based on Python's `Entry point
+<https://packaging.python.org/en/latest/specifications/entry-points/>`_
+mechanism. We discover and load all of the specific entry points
+in ``torch/__init__.py`` that are defined by out-of-tree extensions.
+Its implementation is in `[RFC] Add support for device extension autoloading
+<https://github.com/pytorch/pytorch/pull/127074>`_

 How to apply this to out-of-tree extensions?
 --------------------------------------------
@@ -87,8 +83,8 @@ package.
        name="torch_foo",
        version="1.0",
        entry_points={
-           'torch.backends': [
-               'torch_foo = torch_foo:_autoload',
+           "torch.backends": [
+               "torch_foo = torch_foo:_autoload",
            ],
        }
    )
