Out-of-tree extension autoloading in Python
===========================================

What is it?
-----------

The extension autoloading mechanism enables PyTorch to automatically
load out-of-tree backend extensions without explicit import statements.
This benefits users in two ways: it preserves the familiar PyTorch device
programming model, so no device-specific imports are needed, and it lets
existing PyTorch applications run on out-of-tree devices with zero code
changes.
Examples
^^^^^^^^

`habana_frameworks.torch`_ is a Python package that enables users to run
PyTorch programs on Intel Gaudi via the PyTorch ``HPU`` device key.
``import habana_frameworks.torch`` is no longer necessary after this
mechanism is applied.

.. _habana_frameworks.torch: https://docs.habana.ai/en/latest/PyTorch/Getting_Started_with_PyTorch_and_Gaudi/Getting_Started_with_PyTorch.html

.. code-block:: diff

     import torch
     import torchvision.models as models
   - import habana_frameworks.torch  # <-- extra import
     model = models.resnet50().eval().to("hpu")
     input = torch.rand(128, 3, 224, 224).to("hpu")
     output = model(input)

`torch_npu`_ enables users to run PyTorch programs on Huawei Ascend NPU. It
leverages the ``PrivateUse1`` device key and exposes the device name
as ``npu`` to end users.
``import torch_npu`` is likewise no longer needed after applying this
mechanism.

.. _torch_npu: https://github.com/Ascend/pytorch

.. code-block:: diff

     import torch
     import torchvision.models as models
   - import torch_npu  # <-- extra import
     model = models.resnet50().eval().to("npu")
     input = torch.rand(128, 3, 224, 224).to("npu")
     output = model(input)

How it works
------------

.. image:: ../_static/img/python_backend_autoload_impl.png
   :alt: Autoloading implementation
   :align: center

This mechanism is built on Python's `entry_points`_ mechanism. During the
execution of ``torch/__init__.py``, PyTorch discovers and loads all entry
points that out-of-tree extensions have registered under the dedicated
``torch.backends`` group.

.. _entry_points: https://packaging.python.org/en/latest/specifications/entry-points/

How to apply this to out-of-tree extensions?
--------------------------------------------

For example, if you have a package named ``torch_foo`` and it includes the
following in its ``__init__.py``:

.. code-block:: python

    def _autoload():
        print("No need to import torch_fooanymore! You can run torch.foo.is_available() directly.")

Then the only thing you need to do is add an entry point to your Python
package:

.. code-block:: python

    from setuptools import setup

    setup(
        name="torch_foo",
        version="1.0",
        entry_points={
            "torch.backends": [
                "torch_foo = torch_foo:_autoload",
            ],
        },
    )

Now the ``torch_foo`` module is loaded automatically when ``import torch`` runs.

Conclusion
----------

This tutorial has guided you through the out-of-tree extension autoloading
mechanism, including its usage and implementation.