@@ -16,7 +16,7 @@ you are an advanced user of PyTorch.
What is PrivateUse1?
--------------------

- Prior to Pytorch 2.0, PyTorch provided three reserved dispatch keys (and their corresponding Autograd keys)
+ Prior to PyTorch 2.0, PyTorch provided three reserved dispatch keys (and their corresponding Autograd keys)
for prototyping out-of-tree backend extensions; the three dispatch keys are as follows:

* ``PrivateUse1/AutogradPrivateUse1``
* ``PrivateUse2/AutogradPrivateUse2``
* ``PrivateUse3/AutogradPrivateUse3``
@@ -43,14 +43,14 @@ into the PyTorch via ``PrivateUse1``.
However, the previous ``PrivateUse1`` mechanism is not fully capable of integrating with the new backend, because it
lacks some related support in certain modules, such as Storage, AMP, Distributed, and so on.

- With the arrival of Pytorch 2.1.0, a series of optimizations and enhancements have been made
+ With the arrival of PyTorch 2.1.0, a series of optimizations and enhancements have been made
for ``PrivateUse1`` in terms of new backend integration, and it is now possible to support the integration
of new devices rapidly and efficiently.

How to integrate new backend via PrivateUse1
--------------------------------------------

- In this section, we will discuss the details of integrating the new backend into Pytorch via ``PrivateUse1``,
+ In this section, we will discuss the details of integrating the new backend into PyTorch via ``PrivateUse1``,
which mainly consists of the following parts:

1. Register kernels for the new backend.
@@ -98,12 +98,12 @@ several situations:
.. code-block:: cpp

- class CumtomSeluFunction : public torch::autograd::Function<CumtomSeluFunction> {
+ class CustomSeluFunction : public torch::autograd::Function<CustomSeluFunction> {
  // Implementation of selu kernel in new backend
}

- at::Tensor wrapper_AutogradCumstom__selu(const at::Tensor & self) {
-   return CumtomSeluFunction::apply(self);
+ at::Tensor wrapper_AutogradCustom__selu(const at::Tensor & self) {
+   return CustomSeluFunction::apply(self);
}

TORCH_LIBRARY_IMPL(aten, AutogradPrivateUse1, m) {
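  // Assumed continuation (the diff hunk ends here): a minimal sketch of the
  // registration body, binding aten::selu for AutogradPrivateUse1 to the
  // autograd wrapper defined above.
  m.impl("selu", &wrapper_AutogradCustom__selu);
}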
@@ -219,17 +219,17 @@ such as ``distributed collective communication``, ``benchmark timer``, and other
One example of ``PrivateUse1`` integration is `Ascend NPU <https://github.com/ascend/pytorch>`_.


- How to Improve User Experience with Privateuse1
+ How to Improve User Experience with PrivateUse1
-----------------------------------------------

The primary goal of integrating new devices through ``PrivateUse1`` is to meet the basic functional requirements,
and the next step is to improve usability, which mainly involves the following aspects.

- 1. Register new backend module to Pytorch.
+ 1. Register new backend module to PyTorch.
2. Rename PrivateUse1 to a custom name for the new backend.
3. Generate methods and properties related to the new backend.

- Register new backend module to Pytorch
+ Register new backend module to PyTorch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Some CUDA-related interfaces in PyTorch can be called through the following form: ``torch.cuda.xxx``. Therefore, in order to
@@ -239,7 +239,7 @@ For example, using ``Ascend NPU``:
.. code-block:: python

- torch._register_device_module('npu', torch_npu.npu)
+ torch._register_device_module("npu", torch_npu.npu)

After doing the above operations, users can call some exclusive APIs of ``Ascend NPU`` through ``torch.npu.xxx``.
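As a rough, self-contained sketch of what that registration does (the stub module below is purely hypothetical; a real backend would register something like ``torch_npu.npu``):

.. code-block:: python

    import types

    import torch

    # Hypothetical stand-in for a real backend module such as torch_npu.npu.
    npu_stub = types.ModuleType("npu")
    npu_stub.device_count = lambda: 1  # assumed stub API

    torch._register_device_module("npu", npu_stub)

    # The registered module is now reachable as torch.npu.
    print(torch.npu.device_count())  # prints 1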
@@ -253,8 +253,8 @@ Taking the ``Ascend NPU`` as an example, the first usage will be more user-frien
.. code-block:: python

- torch.rand((2,2), device='npu:0')
- torch.rand((2,2), device='privateuse1:0')
+ torch.rand((2, 2), device="npu:0")
+ torch.rand((2, 2), device="privateuseone:0")

Now, PyTorch provides a new C++/Python API for the self-named ``PrivateUse1`` backend, which is very simple to use.
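A minimal sketch of the Python side of that API (assuming ``torch.utils.rename_privateuse1_backend``, the rename entry point in recent PyTorch releases):

.. code-block:: python

    import torch

    # Rename the PrivateUse1 dispatch key to the backend's own name, so that
    # device strings such as "npu:0" work instead of "privateuseone:0".
    torch.utils.rename_privateuse1_backend("npu")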
@@ -271,7 +271,7 @@ Now, PyTorch provides a new C++/Python API for the self-named ``PrivateUse1`` ba
Generate methods and properties related to the new backend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- After renaming ``PrivateUse1`` to a custome name, automatically generate properties and methods related to the new backend name
+ After renaming ``PrivateUse1`` to a custom name, automatically generate properties and methods related to the new backend name
in the ``Tensor, nn, Storage`` modules for the new backend.

Here is an example for ``Ascend NPU``:
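A plausible sketch of such an example, assuming the backend has been renamed to ``npu`` and that the generated helpers follow the ``Tensor.npu()`` / ``Tensor.is_npu`` pattern:

.. code-block:: python

    import torch

    torch.utils.rename_privateuse1_backend("npu")

    # Auto-generate npu-flavored methods and properties on Tensor, nn.Module,
    # and Storage (for example Tensor.npu() and Tensor.is_npu).
    torch.utils.generate_methods_for_privateuse1_backend(
        for_tensor=True, for_module=True, for_storage=True
    )

    x = torch.empty(2, 2)
    y = x.npu()      # move the tensor to the npu backend (device assumed present)
    print(y.is_npu)  # True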