Commit 09ce1af
Redirecting (Prototype) Use iOS GPU in PyTorch to ExecuTorch.
1 parent 0156929
File tree: 1 file changed, +4 −136 lines

prototype_source/ios_gpu_workflow.rst

@@ -1,142 +1,10 @@
(Prototype) Use iOS GPU in PyTorch
==================================

**Author**: `Tao Xu <https://github.com/xta0>`_

PyTorch Mobile is no longer actively supported. Please check out ExecuTorch.

Introduction
------------

Redirecting in 3 seconds...

This tutorial introduces the steps to run your models on the iOS GPU. We'll use the mobilenetv2 model as an example. Since the mobile GPU features are currently in the prototype stage, you'll need to build a custom PyTorch binary from source. For the time being, only a limited number of operators are supported, and certain client-side APIs are subject to change in future versions.

Model Preparation
-----------------

Since GPUs consume weights in a different order, the first step we need to do is to convert our TorchScript model to a GPU compatible model. This step is also known as "prepacking".
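To make the idea of "prepacking" concrete, here is a toy sketch: the weights are rewritten ahead of time into the layout the GPU kernels expect. The block-of-columns scheme below is purely illustrative; the actual Metal packing is internal to PyTorch.

```python
# Toy illustration of "prepacking": reorder a row-major weight matrix
# into 2-column blocks, the kind of ahead-of-time layout change a GPU
# backend performs so its kernels can read weights contiguously.
def prepack(weights, block=2):
    rows, cols = len(weights), len(weights[0])
    packed = []
    for c0 in range(0, cols, block):   # walk the column blocks
        for r in range(rows):          # emit each row's slice of the block
            packed.extend(weights[r][c0:c0 + block])
    return packed

w = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
print(prepack(w))  # [1, 2, 5, 6, 3, 4, 7, 8]
```

The point is only that the bytes on disk are reordered once, up front, so no per-inference shuffling is needed on device.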

PyTorch with Metal
^^^^^^^^^^^^^^^^^^

To do that, we'll install a PyTorch nightly binary that includes the Metal backend. Go ahead and run one of the commands below:

.. code:: shell

   conda install pytorch -c pytorch-nightly
   # or
   pip3 install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

Alternatively, you can build a custom PyTorch binary from source that includes the Metal backend. Just check out the PyTorch source code from GitHub and run the command below:

.. code:: shell

   cd PYTORCH_ROOT
   USE_PYTORCH_METAL_EXPORT=ON python setup.py install --cmake

The command above will build a custom PyTorch binary from master. The ``install`` argument simply tells ``setup.py`` to override the existing PyTorch on your desktop. Once the build has finished, open another terminal and check the PyTorch version to see whether the installation was successful. At the time of writing this recipe, the version was ``1.8.0a0+41237a4``. You might see different numbers depending on when you check out the code from master, but the version should be greater than 1.7.0.

.. code:: python

   import torch
   torch.__version__  # 1.8.0a0+41237a4

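Since the nightly version string carries pre-release and local-build suffixes, comparing it against 1.7.0 by eye is error-prone. A minimal sketch of that comparison (for real code you would use ``packaging.version.parse``; the helper name here is ours):

```python
# Compare a PyTorch version string such as "1.8.0a0+41237a4" against a
# required release by extracting the leading numeric segments.
def base_version(v):
    core = v.split("+")[0]        # drop the local "+41237a4" part
    nums = []
    for part in core.split("."):
        digits = ""
        for ch in part:
            if ch.isdigit():
                digits += ch
            else:
                break             # stop at "a0"-style pre-release suffixes
        if not digits:
            break
        nums.append(int(digits))
    return tuple(nums)

print(base_version("1.8.0a0+41237a4") > base_version("1.7.0"))  # True
```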
Metal Compatible Model
^^^^^^^^^^^^^^^^^^^^^^

The next step is to convert the mobilenetv2 TorchScript model to a Metal compatible model. We'll leverage the ``optimize_for_mobile`` API from the ``torch.utils`` module, as shown below:

.. code:: python

   import torch
   import torchvision
   from torch.utils.mobile_optimizer import optimize_for_mobile

   model = torchvision.models.mobilenet_v2(pretrained=True)
   scripted_model = torch.jit.script(model)
   optimized_model = optimize_for_mobile(scripted_model, backend='metal')
   print(torch.jit.export_opnames(optimized_model))
   optimized_model._save_for_lite_interpreter('./mobilenetv2_metal.pt')

Note that ``torch.jit.export_opnames(optimized_model)`` dumps all the optimized operators from ``optimized_model``. If everything works well, you should see the following ops printed out in the console:

.. code:: shell

   ['aten::adaptive_avg_pool2d',
   'aten::add.Tensor',
   'aten::addmm',
   'aten::reshape',
   'aten::size.int',
   'metal::copy_to_host',
   'metal_prepack::conv2d_run']

Those are all the ops we need to run the mobilenetv2 model on the iOS GPU. Cool! Now that you have ``mobilenetv2_metal.pt`` saved on your disk, let's move on to the iOS part.
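If the printed list doesn't match, it helps to diff it against the expected set programmatically. A small hypothetical helper (the ``REQUIRED_OPS`` set is copied from the output above):

```python
# Check whether an export_opnames result covers the ops the tutorial
# expects for mobilenetv2 on Metal; returns the ops that are missing.
REQUIRED_OPS = {
    'aten::adaptive_avg_pool2d',
    'aten::add.Tensor',
    'aten::addmm',
    'aten::reshape',
    'aten::size.int',
    'metal::copy_to_host',
    'metal_prepack::conv2d_run',
}

def missing_ops(exported):
    return sorted(REQUIRED_OPS - set(exported))

# With a complete export, nothing is missing:
print(missing_ops(REQUIRED_OPS))  # []
```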

Use PyTorch iOS library with Metal
----------------------------------

The PyTorch iOS library with Metal support, ``LibTorch-Lite-Nightly``, is available in CocoaPods. You can read the `Using the Nightly PyTorch iOS Libraries in CocoaPods <https://pytorch.org/mobile/ios/#using-the-nightly-pytorch-ios-libraries-in-cocoapods>`_ section of the iOS tutorial for more detail about its usage.

We also have the `HelloWorld-Metal example <https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld-Metal>`_ that shows how to connect all the pieces together.

Note that if you run the HelloWorld-Metal example, you may notice that the results are slightly different from the `results <https://pytorch.org/mobile/ios/#install-libtorch-via-cocoapods>`_ we got from the CPU model, as shown in the iOS tutorial.

.. code:: shell

   - timber wolf, grey wolf, gray wolf, Canis lupus
   - malamute, malemute, Alaskan malamute
   - Eskimo dog, husky

This is because, by default, Metal uses fp16 rather than fp32 for computation, so some precision loss is expected.
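You can reproduce this kind of precision loss on the desktop: Python's ``struct`` module can round-trip a float through IEEE 754 half precision (format code ``'e'``), the same 16-bit storage the Metal path computes in.

```python
import struct

# Round-trip a value through IEEE 754 half precision (format 'e') to
# see the rounding that fp16 compute introduces.
def to_fp16_and_back(x):
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16_and_back(0.5))        # 0.5 is exactly representable
print(to_fp16_and_back(0.1234567))  # slightly off: fp16 keeps ~3 decimal digits
```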

Use LibTorch-Lite Built from Source
-----------------------------------

You can also build a custom LibTorch-Lite from source and use it to run GPU models on iOS Metal. In this section, we'll be using the `HelloWorld example <https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld>`_ to demonstrate this process.

First, make sure you have deleted the **build** folder from the "Model Preparation" step in the PyTorch root directory. Then run the command below:

.. code:: shell

   IOS_ARCH=arm64 USE_PYTORCH_METAL=1 ./scripts/build_ios.sh

Note that ``IOS_ARCH`` tells the script to build an arm64 version of LibTorch-Lite. This is because, in PyTorch, Metal is only available for iOS devices with an Apple A9 chip or above. Once the build has finished, follow the `Build PyTorch iOS libraries from source <https://pytorch.org/mobile/ios/#build-pytorch-ios-libraries-from-source>`_ section of the iOS tutorial to set up the Xcode settings properly. Don't forget to copy ``./mobilenetv2_metal.pt`` to your Xcode project and modify the model file path accordingly.

Next, we need to make some changes in ``TorchModule.mm``:

.. code:: objective-c

   ...
   // #import <Libtorch-Lite/Libtorch-Lite.h>
   // If it's built from source with Xcode, comment out the line above
   // and use the following headers
   #include <torch/csrc/jit/mobile/import.h>
   #include <torch/csrc/jit/mobile/module.h>
   #include <torch/script.h>
   ...

   - (NSArray<NSNumber*>*)predictImage:(void*)imageBuffer {
     c10::InferenceMode mode;
     at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat).metal();
     auto outputTensor = _impl.forward({tensor}).toTensor().cpu();
     ...
   }
   ...

As you can see, we simply call ``.metal()`` to move our input tensor from the CPU to the GPU, and then call ``.cpu()`` to move the result back. Internally, ``.metal()`` copies the input data from the CPU buffer to a GPU buffer with a GPU compatible memory format. When ``.cpu()`` is invoked, the GPU command buffer is flushed and synced, and after ``forward`` has finished, the final result is copied from the GPU buffer back to a CPU buffer.
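For intuition about what ``torch::from_blob(imageBuffer, {1, 3, 224, 224}, at::kFloat)`` assumes: the image buffer is a flat float array in NCHW order. A pure-Python sketch of that indexing (the helper name is ours, for illustration):

```python
# Offset of element (n, c, h, w) in a flat NCHW buffer, matching the
# {1, 3, 224, 224} shape passed to torch::from_blob above.
def nchw_index(n, c, h, w, C=3, H=224, W=224):
    return ((n * C + c) * H + h) * W + w

# The second channel (c=1) starts one full 224x224 plane into the buffer:
print(nchw_index(0, 1, 0, 0))  # 50176 == 224 * 224
```

This is why the buffer handed to ``predictImage:`` must already be channel-planar; an interleaved RGB buffer would be read as the wrong pixels.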

The last step is to add ``Accelerate.framework`` and ``MetalPerformanceShaders.framework`` to your Xcode project (open your project in Xcode, go to your project target's "General" tab, locate the "Frameworks, Libraries and Embedded Content" section, and click the "+" button).

If everything works fine, you should be able to see the inference results on your phone.

Conclusion
----------

In this tutorial, we demonstrated how to convert a mobilenetv2 model to a GPU compatible model. We walked through a HelloWorld example to show how to use the C++ APIs to run models on the iOS GPU. Please be aware that the GPU feature is still under development; new operators will continue to be added, and APIs are subject to change in future versions.

Thanks for reading! As always, we welcome any feedback, so please create an issue `here <https://github.com/pytorch/pytorch/issues>`_ if you have any.

Learn More
----------

- The `Mobilenetv2 <https://pytorch.org/hub/pytorch_vision_mobilenet_v2/>`_ from Torchvision
- To learn more about how to use ``optimize_for_mobile``, please refer to the `Mobile Perf Recipe <https://pytorch.org/tutorials/recipes/mobile_perf.html>`_

.. raw:: html

   <meta http-equiv="Refresh" content="3; url='https://pytorch.org/executorch/stable/index.html'" />
