Commit 02dd60f

docs: update privateuse1 doc

1 parent ec867b2 commit 02dd60f
2 files changed: +15 -14 lines changed

advanced_source/extend_dispatcher.rst

Lines changed: 2 additions & 1 deletion

@@ -5,7 +5,8 @@ In this tutorial we will walk through all necessary steps to extend the dispatch
 add a new device living outside ``pytorch/pytorch`` repo and maintain it to keep in
 sync with native PyTorch devices. Here we'll assume that you're familiar with how
 to `register a dispatched operator in C++ <dispatcher>`_ and how to write a
-`custom autograd function <cpp_autograd>`_.
+`custom autograd function <cpp_autograd>`_. For more details about PrivateUse1 backend registration,
+you can check out `Facilitating New Backend Integration by PrivateUse1 <privateuseone>`_.
 
 
 .. note::

advanced_source/privateuseone.rst

Lines changed: 13 additions & 13 deletions

@@ -16,7 +16,7 @@ you are an advanced user of PyTorch.
 What is PrivateUse1?
 --------------------
 
-Prior to Pytorch 2.0, PyTorch provided three reserved dispatch keys (and their corresponding Autograd keys)
+Prior to PyTorch 2.0, PyTorch provided three reserved dispatch keys (and their corresponding Autograd keys)
 for prototyping out-of-tree backend extensions, the three dispatch keys are as follows:
 
 * ``PrivateUse1/AutogradPrivateUse1``
@@ -43,14 +43,14 @@ into the PyTorch via ``PrivateUse1``.
 However, the previous ``PrivateUse1`` mechanism is not fully capable of integrating with the new backend, because it
 lacks some related support in certain modules, such as Storage, AMP, Distributed, and so on.
 
-With the arrival of Pytorch 2.1.0, a series of optimizations and enhancements have been made
+With the arrival of PyTorch 2.1.0, a series of optimizations and enhancements have been made
 for ``PrivateUse1`` in terms of new backend integration, and it is now possible to support the integration
 of new devices rapidly and efficiently.
 
 How to integrate new backend via PrivateUse1
 --------------------------------------------
 
-In this section, we will discuss the details of integrating the new backend into Pytorch via ``PrivateUse1``,
+In this section, we will discuss the details of integrating the new backend into PyTorch via ``PrivateUse1``,
 which mainly consists of the following parts:
 
 1. Register kernels for the new backend.
@@ -98,12 +98,12 @@ several situations:
 
 .. code-block:: cpp
 
-  class CumtomSeluFunction : public torch::autograd::Function<CumtomSeluFunction> {
+  class CustomSeluFunction : public torch::autograd::Function<CustomSeluFunction> {
     // Implementation of selu kernel in new backend
   }
 
-  at::Tensor wrapper_AutogradCumstom__selu(const at::Tensor & self) {
-    return CumtomSeluFunction::apply(self);
+  at::Tensor wrapper_AutogradCustom__selu(const at::Tensor & self) {
+    return CustomSeluFunction::apply(self);
   }
 
   TORCH_LIBRARY_IMPL(aten, AutogradPrivateUse1, m) {
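Kernels can also be attached to the ``PrivateUse1`` dispatch key from Python via ``torch.library``. A minimal sketch (not part of this commit; the kernel body is a placeholder):

.. code-block:: python

    import torch

    # Attach a Python kernel to aten::selu for the PrivateUse1 key, mirroring
    # the C++ TORCH_LIBRARY_IMPL registration shown in the diff above.
    my_lib = torch.library.Library("aten", "IMPL")

    def custom_selu(self: torch.Tensor) -> torch.Tensor:
        # Placeholder body; a real backend would launch its own device kernel.
        raise NotImplementedError("selu kernel for the custom backend")

    my_lib.impl("selu", custom_selu, "PrivateUse1")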
@@ -219,17 +219,17 @@ such as ``distributed collective communication``, ``benchmark timer``, and other
 One example about ``PrivateUse1`` integration is `Ascend NPU <https://github.com/ascend/pytorch>`_.
 
 
-How to Improve User Experience with Privateuse1
+How to Improve User Experience with PrivateUse1
 -----------------------------------------------
 
 The primary goal of integrating new devices through ``PrivateUse1`` is to meet the basic functional requirements,
 and the next thing to do is to improve usability, which mainly involves the following aspects.
 
-1. Register new backend module to Pytorch.
+1. Register new backend module to PyTorch.
 2. Rename PrivateUse1 to a custom name for the new backend.
 3. Generate methods and properties related to the new backend.
 
-Register new backend module to Pytorch
+Register new backend module to PyTorch
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Some CUDA-related interfaces in PyTorch can be called through the following form: ``torch.cuda.xxx``. Therefore, in order to
@@ -239,7 +239,7 @@ For example, using ``Ascend NPU``:
 
 .. code-block:: python
 
-    torch._register_device_module('npu', torch_npu.npu)
+    torch._register_device_module("npu", torch_npu.npu)
 
 After doing the above operations, users can call some exclusive APIs of ``Ascend NPU`` through ``torch.npu.xxx``
 
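For a self-contained illustration of the mechanics (the ``foo`` name and module below are hypothetical stand-ins for a real extension module such as ``torch_npu.npu``):

.. code-block:: python

    import types
    import torch

    # Renaming PrivateUse1 first makes "foo" a device type that
    # torch._register_device_module will accept.
    torch.utils.rename_privateuse1_backend("foo")

    foo_module = types.ModuleType("foo")
    foo_module.is_available = lambda: True

    # The module is now reachable as torch.foo, mirroring torch.cuda.xxx.
    torch._register_device_module("foo", foo_module)
    print(torch.foo.is_available())  # True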
@@ -253,8 +253,8 @@ Taking the ``Ascend NPU`` as an example, the first usage will be more user-frien
 
 .. code-block:: python
 
-    torch.rand((2,2),device='npu:0')
-    torch.rand((2,2),device='privateuse1:0')
+    torch.rand((2, 2), device="npu:0")
+    torch.rand((2, 2), device="privateuseone:0")
 
 Now, PyTorch provides a new C++/Python API for the self-named ``PrivateUse1`` backend, which is very simple to use.
 
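On the Python side the rename is a one-liner; a sketch using ``torch.utils.rename_privateuse1_backend`` (the rename can only be performed once per process):

.. code-block:: python

    import torch

    # Rename the PrivateUse1 dispatch key to "npu"; device strings such as
    # "npu:0" then resolve to the renamed backend.
    torch.utils.rename_privateuse1_backend("npu")
    print(torch.device("npu:0"))  # device(type='npu', index=0)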
@@ -271,7 +271,7 @@ Now, PyTorch provides a new C++/Python API for the self-named ``PrivateUse1`` ba
 Generate methods and properties related to the new backend
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-After renaming ``PrivateUse1`` to a custome name, automatically generate properties and methods related to the new backend name
+After renaming ``PrivateUse1`` to a custom name, automatically generate properties and methods related to the new backend name
 in the ``Tensor, nn, Storage`` modules for the new backend.
 
 Here is an example for ``Ascend NPU``:
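The hunk ends at that lead-in, so the example itself sits outside the changed lines. For orientation, a sketch of such generation, assuming ``torch.utils.generate_methods_for_privateuse1_backend`` with its default arguments:

.. code-block:: python

    import torch

    # After renaming PrivateUse1 (e.g. to "npu"), generate backend-named
    # helpers such as Tensor.is_npu and Tensor.npu() automatically.
    torch.utils.rename_privateuse1_backend("npu")
    torch.utils.generate_methods_for_privateuse1_backend()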
